Surge in Fake News Cases: Increase in Misinformation Incidents and Circulation of Fake Images
News Mania Desk/Agnibeena Ghosh/21st August 2024
The issue of fake news is not confined to politics but has seeped into various aspects of daily life, with misleading content impacting everything from public health to electoral perceptions. The rise of fake news underscores the urgent need for more effective measures to combat misinformation and ensure that the public receives accurate and reliable information. The substantial increase in cases during the pandemic reflects the growing challenge of addressing misinformation in an increasingly digital and interconnected world.
The spread of fake news and misinformation has seen an alarming rise, with a 214% increase in recorded cases during the pandemic year of 2020. According to the latest data from the National Crime Records Bureau (NCRB), there were 1,527 instances of fake news reported in 2020, a significant jump from 486 cases in 2019 and 280 cases in 2018, the year this category was first included in crime statistics.
The “Crime In India – 2020” report highlights that incidents of circulating false or misleading news and rumors, which constitute criminal offenses under the Indian Penal Code, more than tripled in 2020 compared to the previous year. Telangana led the statistics with 273 cases, followed by Tamil Nadu with 188 cases. Uttar Pradesh recorded 166 cases, while Bihar and Maharashtra registered 144 and 132 cases respectively.
Interestingly, some Union Territories, such as Lakshadweep and Chandigarh, reported no cases at all. Additionally, 10 out of the 28 states, including Kerala and Punjab, saw fewer than 10 cases each. Among the total cases reported for 2020, seven involved juveniles.
The surge in fake news cases coincides with the outbreak of the COVID-19 pandemic, which significantly contributed to the proliferation of misinformation. During 2020, numerous false claims circulated regarding unproven treatments for the coronavirus, including dubious remedies like cow urine and cow dung, as well as misleading advice such as placing lemon drops in the nose to prevent infection. The Quint’s WebQoof team and Quint FIT worked to debunk many such false claims, highlighting the dangers of misinformation. Experts warned that even seemingly harmless misinformation could foster a false sense of security, potentially leading individuals to neglect essential health precautions.
The issue of manipulated images is far from new; indeed, doctored photographs have been around since the early days of photography. For instance, a well-known portrait of Abraham Lincoln is widely believed to be a composite, combining Lincoln’s head with the body of another individual. With the advent of digital cameras and sophisticated photo-editing software, identifying such fakes has become increasingly complex.
Governments have also been known to release altered images. A notable example is from Iran in 2008, where a missile test image was doctored to make a failed launch appear successful by adding an extra projectile. This incident underscores the importance of verifying the authenticity of images used in critical security decisions, particularly from regions with restricted access like North Korea, Iraq, and Syria.
To address this issue, the Defense Advanced Research Projects Agency (DARPA) is developing technology aimed at automatically detecting image and video manipulations. Researchers like Kevin Connor, co-founder of the image analysis firm Fourandsix, are working on tools such as izitru, which analyzes how a file is structured to determine its authenticity. Despite these advancements, the technology is still not at a point where it can definitively validate every image. As Connor notes, achieving a foolproof system remains a challenging goal.
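izitru's exact checks are proprietary and the article does not detail them, but the general idea of file-structure analysis can be sketched in a few lines of Python. The example below is a minimal illustration under that assumption, not the tool's actual method: it uses the Pillow library to inspect two clues that forensic analysts commonly examine in JPEG files, the EXIF "Software" tag that editing programs often write and the quantization tables, which typically differ between camera firmware and photo editors. The file name is hypothetical.

```python
from PIL import Image


def inspect_jpeg_structure(path):
    """Print file-structure clues commonly examined in image forensics.

    A minimal illustration only: tools such as izitru rely on many more
    signals, and none of these clues alone proves manipulation.
    """
    img = Image.open(path)

    # EXIF "Software" tag (0x0131): editing programs frequently record
    # themselves here, whereas untouched camera files usually name the
    # camera firmware or leave the tag empty.
    software = img.getexif().get(0x0131)
    print(f"Software tag: {software!r}")

    # JPEG quantization tables: cameras and editors embed different
    # tables, so a mismatch with the claimed source camera is a warning
    # sign that the file has been re-saved by other software.
    if img.format == "JPEG":
        for table_id, table in img.quantization.items():
            print(f"Quantization table {table_id}: first values {list(table)[:8]}")


if __name__ == "__main__":
    inspect_jpeg_structure("photo.jpg")  # hypothetical input file
```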
Human ability to detect fake images is notably poor. Research from Stanford University revealed that students, ranging from middle school to college, often struggle to evaluate the credibility of online content. For instance, when shown an image purportedly of “Fukushima Nuclear Flowers” posted without any source attribution, fewer than 20% of high school students questioned its validity.
Further research conducted at the Federal University of Rio Grande do Sul in Brazil found that even when participants were explicitly asked to identify manipulated images, they accurately detected fakes only about 47% of the time. Many images that looked untouched were, in reality, composites or contained edited areas, making them difficult to identify as fake.
Victor Schetinger, a doctoral candidate involved in the study, highlights the difficulty of assessing image authenticity by eye. He notes that many visual clues that might suggest an image is fake can also result from factors such as camera saturation or dust on the lens. As a result, even experienced observers can struggle to distinguish genuine images from altered ones.
The solution to this problem lies in advanced computer algorithms designed for photographic forensics. These techniques test whether an image obeys physical laws, such as consistent lighting and shadows, and flag inconsistencies that may indicate manipulation. While achieving absolute certainty in image authenticity may remain elusive, forensic methods provide valuable tools for improving our ability to detect fake images and maintain the integrity of visual information.
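One widely documented forensic technique of this kind is error level analysis, which re-compresses a photograph and looks for regions that respond differently from the rest of the frame, a possible sign of local editing. The article does not say which algorithms researchers favor, so the sketch below is only an illustration of that one technique using the Pillow library; the quality setting and the file names are assumptions.

```python
import io

from PIL import Image, ImageChops


def error_level_analysis(path, quality=90):
    """Return an image showing how strongly each pixel changes when
    the photo is re-compressed as JPEG.

    Regions edited after the original compression often recompress
    differently, showing up as brighter patches. Illustrative only:
    real forensic pipelines combine many such signals.
    """
    original = Image.open(path).convert("RGB")

    # Re-compress in memory at a known quality and reload the result.
    buffer = io.BytesIO()
    original.save(buffer, "JPEG", quality=quality)
    buffer.seek(0)
    resaved = Image.open(buffer).convert("RGB")

    # Per-pixel absolute difference between the two versions.
    diff = ImageChops.difference(original, resaved)

    # Stretch the differences so subtle discrepancies become visible.
    max_diff = max(high for _, high in diff.getextrema()) or 1
    scale = 255.0 / max_diff
    return diff.point(lambda value: value * scale)


if __name__ == "__main__":
    # Hypothetical file names for the suspect photo and the output map.
    error_level_analysis("suspect.jpg").save("suspect_ela.png")
```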