The big fight against fake news
Technology companies like Google, Twitter, Facebook and Microsoft are also gearing up to fight the menace of fake news.
The rise of Donald Trump to the US presidency has led to global interest in the phenomenon of “fake news”, though the dissemination of false and malicious information has been around for thousands of years. Rameses the Great, the second-longest-reigning pharaoh of ancient Egypt, propagated a compelling story, complete with detailed accounts of battle scenes, of a stunning Egyptian victory in the Battle of Kadesh, though it is now established that the battle was in fact a stalemate.
The Oxford Learner’s Dictionary defines fake news as “false reports of events, written and read on websites”. While the problem of intentionally malicious news stories in print media, or “yellow journalism”, has been around for centuries, the Internet and web-based technologies have enabled the spread of false information at lightning speed and at negligible cost to propagators.
Under freemium business models, which generate advertising revenue merely by attracting users to a website, unscrupulous media publishers find it profitable to publish sensational and even false stories. These fake stories, in turn, spread to millions of users on social media. (A freemium business model, especially on the Internet, is one in which basic services are provided free of charge while more advanced features must be paid for.)
A Buzzfeed analysis found that Facebook’s News Feed acted as a key catalyst in the spread of the top 20 fake news stories relating to the 2016 US presidential election. Platforms like Facebook and Google and cross-platform messaging apps like WhatsApp have thus also democratised misinformation, indoctrination and the spread of rumours. A BBC News story on December 5, 2016, said: “Many of the fake news websites that sprang up during the US election campaign have been traced to a small city in Macedonia, where teenagers are pumping out sensationalist stories to earn cash from advertising.” Fake news in a world of online communities and platforms with global reach gives power to malicious elements without any commensurate accountability, and has the potential to disrupt the social order and harm open, democratic societies. In July 2018, a British parliamentary committee recommended that the UK government hold technology companies responsible and liable for “harmful and illegal content on their platforms” because “fake news” threatens democracy.
While it is acknowledged that the originating source of “fake news” may have malicious intent, its lightning spread is propelled by millions of users of social media platforms run by technology companies like Google, Facebook, which owns WhatsApp, and Microsoft, which owns the search engine Bing.com. Because of First Amendment rights in the US, which provide a constitutional guarantee of free speech, and various legal provisions protecting freedom of expression in other democratic societies, it is often hard to prevent a person from writing an article devoid of facts. The key to combating the menace of fake news therefore lies with users of social media platforms, technology companies and regulatory authorities.
Increasingly, governments are rising to meet this challenge. A Singapore parliamentary committee last week, as reported by Reuters, recommended that governments enact laws to ensure that technology companies implement measures to fight “deliberate online falsehoods”. This followed efforts in the UK, Germany, France and the EU to pressure technology companies into reining in fake news. Recognising fake news is the first step users of social media must take if they intend to be “part of the solution” rather than “the problem”.
The International Federation of Library Associations and Institutions (https://www.ifla.org) provides a guide to help users like us recognise fake news. Most importantly, it suggests that readers consider the reliability of the source, assess the supporting sources, check the identity of the authors and, most critically, review their own biases before forwarding or broadcasting content to others.
The International Fact-Checking Network (https://www.poynter.org/channels/fact-checking), operating since 2015, is an international collaborative effort that not only provides fact-checking services and training but has also published a code of principles. Governments are also taking the lead: in a first, Taiwan introduced a new school curriculum in 2017 that teaches students to identify propaganda and evaluate sources of online content through a course called “media literacy”.
Technology companies like Google, Twitter, Facebook and Microsoft are also gearing up, willingly or unwillingly, to implement appropriate algorithms to fight the menace of fake news. After being in a state of denial for almost a year, Mark Zuckerberg admitted in September 2017: “After the election, I made a comment that I thought the idea that misinformation on Facebook changed the outcome of the election was a crazy idea. Calling that crazy was dismissive and I regret it. This is too important an issue to be dismissive.”
Twitter is creating an Ads Transparency Centre for political ads to ensure full disclosure, improving its algorithms to “stamp out bot accounts targeting election related content”, and proposes to “monitor trending topics and conversations” for fake news. Google is working on a four-step plan that requires advertisers to identify their location and provide credentials, “provide disclosures on political ads”, including the funding source, “release a transparency report” on political ads and “publish a creative library where all the purchased ads are made public”. Facebook has been working on “increasing its political ads transparency” and has added tools to “fight fake news, with the use of machine learning and adding a ‘Related Article’ section in articles for context”.
While the developments of the last two years, including the Cambridge Analytica scandal involving Facebook, have highlighted the lurking dangers of online technologies that facilitate content sharing, users, technology companies and regulators have increasingly been taking proactive action to nip the menace of “fake news” in the bud.
A recent study by Oxford University, released in July 2018, highlighted the danger of fake news and warned that “the weaponisation of social media platforms like WhatsApp to spread fake news will gather momentum as India enters an election year”.
The key to fighting this menace lies in vigilance by technology companies and regulators, and in proactive action by users like us to identify fake news and refuse to become tools in the hands of unscrupulous elements by spreading it.
(The writer is Associate Professor of Information Systems at the University of South Florida. With additional inputs from Mridula Sinha, an independent policy analyst. Both were in the Indian Administrative Service before transitioning to their current careers.)