
Data dilemma: AI and a sense of beauty

We’re teaching machines to recognise people and objects. But why is artificial intelligence becoming racist?

Machine learning. It’s when you trust a microchip to make human decisions. Examples of machine learning are all around us. Your favourite streaming service uses algorithms to determine what TV show you might like next. That photos app on your phone is slowly learning to identify friends and family.

The machines are using algorithms prepared by their human masters to learn about the world around them. But those lines of pure code are starting to reveal some of our ugliest prejudices.

Early this year, the futurists at Beauty.AI started off with an ambitious idea: a beauty pageant judged entirely by an artificial intelligence. About 6,000 humans, from over 100 countries, applied by sending their selfies to the company, which then presented the images to the AI. It picked 44 winners and, almost immediately, triggered worldwide controversy.

Nearly all the pageant winners were white-skinned. Just one of the final 44 was dark-skinned, and data from the competition revealed that large numbers of Indians had sent their photos to Beauty.AI, only to be ignored by the machine.

Which makes you wonder what that AI was taught. And the problem is not limited to small firms such as Beauty.AI. Last year, Google’s photos app tagged an African-American duo as “gorillas”. Years before that, some of HP’s laptops refused to track the faces of black people. These “issues” are caused by engineers who put machines on just one data diet. If all the machine sees is white faces, it will have trouble learning the nuances of, say, Asian or African skin.
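To make the “one data diet” idea concrete, here is a minimal sketch in Python (assuming numpy and scikit-learn; the groups, features and numbers are entirely hypothetical, and this is not Beauty.AI’s or Google’s actual system). A classifier trained on 1,000 examples from one group and a token 20 from another scores well on the group it gorged on and stumbles on the one it barely saw:

```python
# A toy illustration of a skewed "data diet" (hypothetical data only).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, centre):
    # Each sample is a 2-D stand-in for an image feature vector;
    # label 1 means "face", label 0 means "not a face".
    faces = rng.normal(centre, 1.0, size=(n, 2))
    non_faces = rng.normal(centre + 5.0, 1.0, size=(n, 2))
    return np.vstack([faces, non_faces]), np.hstack([np.ones(n), np.zeros(n)])

# Training diet: 1,000 samples from group A, only 20 from group B.
Xa, ya = make_group(1000, centre=np.array([0.0, 0.0]))
Xb, yb = make_group(20, centre=np.array([8.0, 8.0]))
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.hstack([ya, yb]))

# Test on balanced, unseen samples from each group.
for name, centre in [("group A", np.array([0.0, 0.0])),
                     ("group B", np.array([8.0, 8.0]))]:
    X_test, y_test = make_group(500, centre)
    print(name, "accuracy:", round(model.score(X_test, y_test), 2))
```

Nothing in the algorithm itself is prejudiced; the lopsided results come entirely from the lopsided training diet.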

This blatant glorification of “fair skin” begins in our maternity wards. If the baby is fair, everyone gushes and congratulates the couple. If not, well, everyone hides a certain disappointment. Some make it worse by trying to “console” the parents. Later, right through school, the fair one is pushed to the front or is made to hand mementos over to the “chief guests”. Many, many years later, the dark-skinned one will have grown into an adult only to face discrimination at work. That HR manager, dosed on movies that push “fair”, will be no different from the bot that judged the beauty pageant.

Can we rectify the problem? Well, it is going to be a long, hard battle. Fair complexion is a matter of skin-deep pigmentation, but the prejudices associated with it are embedded far deeper.

If all these algorithms are being taught our own prejudices, then we’re just bad parents to machines and artificial intelligence. We are writing into them the very flaws in thought and belief we have picked up over some 80 years of targeted advertising.

It’s important, then, to pay attention to this AI crisis, because machines are increasingly making choices and decisions for us. Within the confines of your home, they are trying to identify your friends. Outside, for law enforcement agencies, machines are identifying “threats”. A ProPublica report earlier this year discovered that policing software, used to predict future crimes, was biased against black people. If machine code fed a diet of prejudice can shape an individual’s life, we must treat this as a real problem, because equating fair complexion not just with beauty but also with virtue is deeply troubling.
