What you see on FB, Twitter is twisted, and how
Several psychologists and media experts have highlighted the dangers of the Echo Chamber Effect on social media.
The ongoing controversy over Cambridge Analytica influencing elections in several countries, including India, has drawn attention to the high-technology tools and techniques used by social media platforms (such as Facebook, Twitter, YouTube, Google, Instagram and Snapchat), as well as by political parties and advertisers, to psychologically manipulate consumer choices and political, religious, social and economic opinions.
For several years, email providers and social media platforms have used advanced technologies such as Artificial Intelligence, Machine Learning, Deep Learning, Neural Networks and Big Data Analytics to deliver targeted advertising to their users.
It has long been a truism in the cyber world that “if an organisation is offering you services for free, then you, the individual, are their product”. By compiling detailed information about you, they are able to sell that information to advertisers and marketers. Another adage in the advertising world is: “Fifty per cent of my advertising budget is wasted. The problem is that I don’t know which fifty per cent it is.”
Everyone who uses free email services like Gmail or Yahoo Mail will have noticed that when they compose an email mentioning certain topics, they immediately see advertisements for related products and services.
The same happens on social media such as Twitter and Facebook: when you “like”, retweet or share certain posts, you immediately see advertisements for related products and services. All this is done in real time using the same technologies.
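The underlying idea can be caricatured in a few lines. Here is a minimal sketch, with an entirely invented ad inventory and trigger words (the real platforms use far more sophisticated machine-learning models than simple keyword matching): scan the text a user has just composed or engaged with, and serve the matching advertisements.

```python
import re

# Hypothetical ad inventory mapping trigger words to ad copy.
ADS = {
    "holiday": "Cheap flights to Goa!",
    "camera": "50% off DSLR lenses",
    "loan": "Personal loans, instant approval",
}

def match_ads(text: str) -> list[str]:
    """Return every ad whose trigger word appears in the text."""
    words = set(re.findall(r"[a-z]+", text.lower()))
    return [ad for trigger, ad in ADS.items() if trigger in words]

print(match_ads("Planning a holiday, need a new camera"))
# -> ['Cheap flights to Goa!', '50% off DSLR lenses']
```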
By using such technologies, social media platforms can quickly develop detailed psychographic profiles of billions of individuals, using data provided by the consumers themselves under the user agreements.
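In outline, such a profile is simply an aggregation of behavioural signals. A toy sketch, with invented post IDs and topic labels, of how “likes” can be rolled up into an interest profile:

```python
from collections import Counter

# Hypothetical mapping of post IDs to the topics each post is about.
POST_TOPICS = {
    "post_1": ["cricket", "sports"],
    "post_2": ["elections", "politics"],
    "post_3": ["politics", "economy"],
}

def build_profile(liked_posts: list[str]) -> Counter:
    """Count how often each topic appears among the posts a user liked."""
    profile = Counter()
    for post_id in liked_posts:
        profile.update(POST_TOPICS.get(post_id, []))
    return profile

print(build_profile(["post_2", "post_3"]))
# -> Counter({'politics': 2, 'elections': 1, 'economy': 1})
```

Scaled up to billions of users and thousands of topics, and fed through machine-learning models rather than simple counts, this is the kind of profile the platforms build.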
For instance, to find out what Twitter predicted my interests and choices were likely to be, based on my tweets and retweets over the years, I clicked “Settings”, then “Your Twitter Data”, then “Interests from Twitter”. I found that Twitter had listed about a hundred topics of interest to me, which were uncannily accurate. Twitter also knew what topics I was not likely to be interested in at all. Twitter certainly knew more about my likes and dislikes than most of my decades-long friends did.
The explanation was “Interests from partners — Twitter’s partners build audiences around shopping decisions, lifestyle, and other online and offline behaviours” and “Tailored audiences — Tailored audiences are often built from email lists or browsing behaviours. They help advertisers reach prospective customers or people who have already expressed interest in their business.”
To find out what Facebook predicted my interests and choices were likely to be, based on my posts, shares and “likes” over the years, I clicked “Settings”, then “Ads”. Again, Facebook had thousands of detailed predictions about my tastes, which were uncannily accurate, ranging from my business and professional interests to the goods and services I was likely to purchase, the news sources I was likely to read and watch, the magazines and books I was likely to read, and the places I would like to visit.
But what is insidious is that a line has subtly been crossed: from merely predicting which refrigerator or breakfast cereal I would buy, to spoon-feeding me news and opinions designed to shape my political, social, economic and religious views, and to influence me either to vote for a political party or to cast a negative vote against another.
This detailed information about me on Twitter and Facebook could be used by political parties to accurately predict which approach would work on me. Was I more likely to vote for them because of my dislike of their opponents? Or was I more likely to respond to their positive policies? Should they try to “demonise” their opponents? Or should they emphasise their economic policies, or push their religious or caste agenda with me? Would I vote for them even if I disagreed vehemently with their caste politics, provided I agreed with their economic policies? They could then send me customised messages, tailored exactly for me, on WhatsApp, Facebook and Twitter.
For years, social scientists claimed that social media platforms would lead to greater social and political awareness and understanding by providing easy access to multiple points of view, not just the news and opinions dished out by mainstream television channels, magazines and newspapers. In fact, the platforms have led to much greater polarisation between extreme viewpoints and to deeper chasms between political, religious and economic opinions.
Psychologically, this is because of the “Echo Chamber Effect”. The way the algorithms of Twitter and Facebook are written, their newsfeeds show me more and more of the news they think I would like to see and hear. My newsfeeds do not show me other points of view, or opinions that differ significantly from mine. Since I am exposed only to news and views that reinforce my existing beliefs, I become more and more convinced that I am always right, and that anyone who does not share my beliefs is a menace to society. This is the source of political, religious and racial hatred.
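The mechanism is easy to illustrate. A minimal sketch, with an invented interest profile and invented posts, of why ranking a feed purely by affinity to what a user already likes buries dissenting viewpoints:

```python
def rank_feed(posts: list[dict], profile: dict[str, int]) -> list[dict]:
    """Order posts by how well their topics match the user's interests."""
    def score(post):
        return sum(profile.get(topic, 0) for topic in post["topics"])
    return sorted(posts, key=score, reverse=True)

profile = {"party_a": 5, "cricket": 3}           # what the user already likes
posts = [
    {"id": 1, "topics": ["party_a"]},            # reinforcing view
    {"id": 2, "topics": ["party_b"]},            # opposing view
    {"id": 3, "topics": ["cricket", "party_a"]},
]
print([p["id"] for p in rank_feed(posts, profile)])
# -> [3, 1, 2]  (the opposing view sinks to the bottom every time)
```

Repeat this ranking on every visit, feed each new “like” back into the profile, and the opposing view soon disappears from the feed altogether.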
Several psychologists and media experts have highlighted the dangers of the Echo Chamber Effect on social media. The MIT Media Lab’s Centre for Civic Media states: “Facebook, Twitter, and other sites use complicated computerised rules to decide which posts you see at the top of your feed and which you don’t. These algorithms have reinforced our echo chamber — showing us content like what we share — and made it hard to burst our filter bubbles — hiding content that is different than what we believe. We think escaping these echo chambers and seeing a wider picture of the news is a critical piece of democratic society.”

Its director, Ethan Zuckerman, stated: “Being able to escape echo chambers and encounter a wide picture of news may be a necessary precursor towards a functioning democracy. The early United States featured a highly partisan press — historian Paul Starr argues that political parties emerged from newspapers, rather than vice versa — but also had a strong cultural norm of republishing a wide range of stories from different parts of the early nation and different political leanings. Examining early American newspapers raises the uncomfortable possibility that our forebears, more than two centuries ago, may have encountered a wider range of views than we elect to encounter today.”

Zuckerman added: “An idea for those seeking a technical solution to our polarisation and isolation: public social media. Private platforms like Facebook are under no obligation to provide us a diverse worldview. If it is more profitable to bring us baby pictures from our friends than political stories, or to isolate us in a bubble of ideologically comfortable information, they will. A public social media platform would have the civic mission of providing us a diverse and global view of the world. Instead of focusing resources on reporting, it would focus on aggregating and curating, pushing unfamiliar perspectives into our feeds and nudging us to diversify away from the ideologically comfortable material we all gravitate towards.”
Another danger is that Artificial Intelligence and deep neural network techniques have now advanced to the point where it is possible to fabricate entirely fake video and audio of a person from just a few minutes of their actual speech: the video and audio equivalents of Photoshopped fake images.
A few weeks ago, scientists at the University of Washington demonstrated a programme that turned audio clips into a realistic, lip-synced video of the person speaking those words. Professors at Stanford University have developed programmes that combine recorded video footage with real-time face tracking to create manipulated videos that make world leaders appear to say things they never actually said.
Ian Goodfellow, the scientist at Google Brain who developed the first Generative Adversarial Network (GAN), has cautioned that Artificial Intelligence could set news consumption back a hundred years. In June 2018, an event called the “Fake News Horror Show”, showcasing “terrifying propaganda tools”, is being held in New York City.
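At its core, a GAN is a contest between two neural networks: a generator that fabricates samples, and a discriminator that tries to tell the fakes from real data. Every round of the contest makes the fakes harder to detect. Below is a minimal sketch of that adversarial loop, on toy one-dimensional numbers rather than video, assuming the PyTorch library; real deepfake systems are vastly larger but play the same game.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Generator: maps random noise to a fake "sample".
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
# Discriminator: scores how likely an input is to be real.
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 1.5 + 4.0   # "real" data drawn from N(4, 1.5)
    noise = torch.randn(64, 8)
    fake = G(noise)

    # Train the discriminator to separate real samples from fakes.
    opt_d.zero_grad()
    loss_d = bce(D(real), torch.ones(64, 1)) + \
             bce(D(fake.detach()), torch.zeros(64, 1))
    loss_d.backward()
    opt_d.step()

    # Train the generator to fool the discriminator.
    opt_g.zero_grad()
    loss_g = bce(D(G(noise)), torch.ones(64, 1))
    loss_g.backward()
    opt_g.step()

# The generator's output distribution drifts towards the real data's mean (4.0).
print(G(torch.randn(1000, 8)).mean().item())
```

Trained on someone’s face and voice instead of toy numbers, the same adversarial game produces the fabricated videos described above.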
We are fast reaching a stage where you can never trust anything which you see on a screen — whether a television screen, or a computer or smartphone screen.