Clarity in a fishbowl: Confirmation bias beats the Web's purpose
When the results of the 2016 US elections rolled out, there was utter dismay (and silence) across the liberal world. Everyone (and their friends) took a while to recover from the shock (and by the time they did, we were already into the inaugural attendance controversy).
If I were to look at my newsfeeds across channels, I would never have believed that Trump would win. Nor for that matter did the machine inform me that the BJP would win India with such ease in 2014. Nor did I expect Brexit to happen. Now, where did I go so horribly wrong in my analysis of trends?
From all that I saw, the pundits broadcasting to me got it horribly wrong as well. The truth is that I was isolated from divergent views in my filter bubble, and frankly, so were the pundits.
The Internet was designed to bring all knowledge and opinion together in one place, accessible to all. But while that material remains accessible, algorithms have ensured that divergent views are not visible unless we expressly search for them. According to Eli Pariser, who coined the term 'filter bubble', today's Internet is not designed to bring differently minded people together to engage in debate. In these elections, all of us foresaw only what we hoped would happen, what we would have liked to happen.
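The feedback loop behind this is simple to sketch. The following toy model is invented purely for illustration (no real platform works this way, and the numbers are arbitrary): a feed shows slightly more of whatever the user already leans toward, and the lean drifts toward what is shown, so even a mild initial preference compounds into a one-sided feed.

```python
def polarization(p0=0.55, lr=0.05, steps=60):
    """Toy model of a personalization feedback loop.

    p is the share of viewpoint A in the user's feed. Each round, an
    engagement-optimized feed amplifies whatever lean already exists:
    the deviation from a balanced 0.5 grows by a small learning rate.
    All parameters are hypothetical.
    """
    p = p0
    trajectory = [p]
    for _ in range(steps):
        p += lr * (p - 0.5)          # feedback: the lean amplifies itself
        p = min(max(p, 0.0), 1.0)    # clamp to a valid proportion
        trajectory.append(p)
    return trajectory

traj = polarization()
# A 55/45 starting mix ends the run fully one-sided (p reaches 1.0),
# while a perfectly balanced start (p0=0.5) would never move at all.
```

The point of the sketch is the asymmetry: balance is an unstable equilibrium, so any small lean, however it arose, is the only thing the loop needs.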
This, by the way, is also what went wrong with the Hillary campaign. The entire backbone of the campaign was an algorithm called Ada (named after Ada Lovelace, the British mathematician who worked with Charles Babbage on his mechanical general-purpose computer), which played a vital role in every campaign decision. The problem was that Ada retained the biases of her creators. Instead of recommending the collection of new data that might have contradicted and falsified her premises, Ada kept showing her creators what they wanted to see. Ada's creators were convinced that Pennsylvania, historically and otherwise, was safe, and Ada mirrored that conviction. As it turned out, Pennsylvania wasn't safe.
What is worrying is that our media is also trapped in a filter bubble of their own making. Instead of driving down and talking to people in the teashops and to garage mechanics, they fly around and stay in fancy hotels and publish stories from within their respective filter bubbles. Or worse, they believe that Twitter is a good microsample of a macroworld.
In a process now called cyberbalkanization, the Internet is dividing people into like-minded subgroups, each cocooned in the warmth of their own virtual communities. Remember John Donne? 'No man is an island, entire of itself; every man is a piece of the continent, a part of the main. If a clod be washed away by the sea, Europe is the less.' This should make us cry today (and not just because of Brexit). Cyberbalkanization will make us islands — will make us opinionated, absolutist islands and perfect, dangerous arseholes.
I Believe I Am Right
The effects of the filter bubble are beginning to show. We see it ever more often on social media: the unwavering belief that what we say is right, and not just a subjective correctness but an informed, objective veracity. But we also know that 'pride goeth before destruction, and a haughty spirit, before a fall'. Overconfidence is one of the primary signs of what is known as confirmation bias. Confirmation bias is an error of cognitive reasoning marked by a tendency to seek out, or remember only, information that confirms your hypotheses, or to interpret information in a way that suits your opinions. It is time to watch out when we start believing our personal truths to be universal truths.
Very soon, you start quoting ambiguous evidence to bolster your position, and with time the evidence you are willing to accept grows ever more ambiguous. The more emotionally charged you are about an issue, the stronger the bias and its effects. The problem is that these biases eventually lead to extremely poor decision-making in political and organizational contexts.
Excerpted from Atul Jalan's Where Will Man Take Us? with permission from Penguin Portfolio.