Breeding misinformation in virtual space
The phenomenon of fake news has received significant scholarly and media attention over the last few years. In March, Sir Tim Berners-Lee, inventor of the World Wide Web, called for a crackdown on fake news, stating in an open letter that “misinformation, or fake news, which is surprising, shocking, or designed to appeal to our biases, can spread like wildfire.”
Gartner, which annually predicts what the next year in technology will look like, highlighted ‘increased fake news’ as one of its predictions.
The report states that by 2022, the “majority of individuals in mature economies will consume more false information than true information.” Due to its wide popularity and reach, social media has come to play a central role in the fake news debate.
Researchers have suggested that rumours penetrate deeper within a social network than outside it, indicating the susceptibility of this medium. Social networks such as Facebook and communities on messaging services, such as WhatsApp groups, provide the perfect environment for spreading rumours. Information received via friends tends to be trusted, and online networks allow individuals to transmit information to many friends at once.
In order to understand the recent phenomenon of fake news, it is important to recognise that the problem of misinformation and propaganda has existed for a long time. Historical examples of fake news go back centuries: before he became Roman Emperor, Octavian ran a disinformation campaign against Marcus Antonius to turn the Roman populace against him.
The advent of the printing press in the 15th century led to widespread publication; however, there were no standards of verification or journalistic ethics. Andrew Pettegree writes in The Invention of News that news reporting in the 16th and 17th centuries was full of portents about “comets, celestial apparitions, freaks of nature and natural disasters.”
In India, the immediate cause of the 1857 War of Independence was the rumour that the fat of cows and pigs was used to grease the cartridges issued to the sepoys.
Leading up to the Second World War, radio emerged as a powerful medium for the dissemination of disinformation, used by the Nazis and other Axis powers. More recently, the “milk miracle” of the mid-1990s, with its stories of idols of Ganesha drinking milk, was a popular fake news phenomenon. In 2008, rumours that the popular snack Kurkure was made of plastic became so widespread that PepsiCo, its parent company, had to publicly rebut them.
A quick survey by us at the Centre for Internet and Society, conducted for a forthcoming report, of the different kinds of misinformation being circulated in India suggested four distinct kinds of fake news.
The first is a case of manufactured primary content. This includes instances where the entire premise on which an argument is based is patently false. In August 2017, a leading TV channel reported that electricity had been cut to the Jama Masjid in New Delhi for non-payment of bills. This was based on a false report carried by a news portal.
The second kind of fake news involves manipulation or editing of primary content so as to misrepresent it as something else. This form of fake news is often seen with multimedia content such as images, audio and video. These two forms of fake news tend to originate outside traditional media such as newspapers and television channels, and can often be traced back to social media and WhatsApp forwards.
However, we also see such unverified stories being picked up by traditional media. Further, there are instances where genuine content, such as text and pictures, is shared with fallacious contexts and descriptions. Earlier this year, several dailies pointed out that an image shared by the Ministry of Home Affairs, purportedly of the floodlit India-Pakistan border, was actually an image of the Spain-Morocco border. In this case, the image was not doctored, but the accompanying information was false.
Third, more complicated cases of misinformation involve primary content that is neither false nor manipulated, but whose facts are quoted out of context when reported. Most examples of misinformation spread by mainstream media, which has more evolved systems of fact-checking, verification and editorial control, tend to fall into this category.
Finally, there are instances of a lack of diligence in fully understanding an issue before reporting on it. Such misrepresentations are often encountered in fields that require specialised knowledge, such as science and technology, law and finance. These forms of misinformation, while not suggestive of mala fide intent, can still prove quite dangerous in shaping erroneous opinions.
While the widespread dissemination of fake news contributes greatly to its effectiveness, its success also owes much to the manner in which it is designed to pander to our cognitive biases. Directionally motivated reasoning prompts people confronted with political information to process it with the intention of reaching a pre-decided conclusion, rather than assessing it dispassionately. This in turn results in greater susceptibility to confirmation bias, disconfirmation bias and the prior attitude effect.
Fake news is also linked to the idea of “naïve realism,” the belief people have that their perception of reality is the only accurate view, and those in disagreement are necessarily uninformed, irrational, or biased. This also explains why so much fake news simply does not engage with alternative points of view.
A well-informed citizenry and institutions that provide good information are fundamental to a functional democracy. The use of the digital medium for the fast, unhindered and unchecked spread of information presents fertile ground for those seeking to spread misinformation. How we respond to this issue will be vital for democratic societies in the immediate future. Fake news presents a complex regulatory challenge that requires the participation of different stakeholders: content disseminators, platforms, norm guardians (including institutional fact-checkers, trade organisations and “name-and-shame” watchdogs), regulators and consumers.
(The author works at The Centre for Internet and Society. He works on issues surrounding privacy, big data, and cyber security)