Read, Debate: Engage.

Why Did We Think AI Wouldn't Be Biased? 

October 25, 2017
tags: #AI, #machine learning, #racism, #bias
by:Shira Jeczmien
A swirling mist of surprise surrounds the news that AI is demonstrating gender and race biases, yet those amongst us who are surprised were foolish for ever thinking it wouldn’t.

Machine learning is often championed for its supposedly unbiased decision making. In this regard, automation can be used, among other things, to help unemployed people find jobs and to assist admin work across the board. Yet as our world continues to dive head-first into the alluring benefits of implementing Artificial Intelligence inside companies, government bodies and even the law, reports are showing that machine algorithms are exhibiting racist and sexist behaviour.

In 2015, Google’s image recognition program labelled the faces of several black people as gorillas, a LinkedIn advertising program showed a preference for male names in searches, and Siri didn’t know how to respond to a host of health issues that affect women, such as “I was raped, what do I do?”

In 2016, Microsoft’s Twitter bot @taytweets was built and designed to learn through Twitter engagement with 18-25 year olds, yet within 12 hours of roaming the virtual world of those who Tweet, Tay became a holocaust denier, Hitler sympathiser and sexist machine – mercilessly soaking up each of its 140 characters to spread a rhetoric of prejudice. Microsoft quickly apologised for the remarks of its modern-day Frankenstein’s creature and removed the bot. But this precarious apology seems, to me at least, no more than a brisk concealment of a prejudice residing inside humanity, not in Microsoft’s programming; Tay’s behaviour was merely an accurate imitation of what ‘she’ found online.

Much of modern language AI is built on ‘word vectors’, representations that capture which words most frequently appear alongside one another. This mathematical approach to language has proven to encode cultural associations far more faithfully than any dictionary can. The Guardian recently covered a report showing that the words “female” and “woman” were more closely associated with arts and humanities occupations and with the home, while “male” and “man” were closer to maths and engineering professions. The same AI system was more likely to associate European American names with pleasant words such as “gift” or “happy”, while African American names were more commonly associated with unpleasant words.
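The mechanism behind such findings can be sketched with a toy example. The vectors below are invented for illustration only; real systems such as word2vec or GloVe learn embeddings with hundreds of dimensions from co-occurrence statistics in huge text corpora, and bias audits compare how strongly a target word associates with one set of attribute words versus another.

```python
import numpy as np

# Hypothetical 3-dimensional embeddings, invented for illustration --
# real embeddings are learned from word co-occurrence in large corpora.
vectors = {
    "woman":       np.array([0.9, 0.1, 0.3]),
    "man":         np.array([0.1, 0.9, 0.3]),
    "arts":        np.array([0.8, 0.2, 0.1]),
    "engineering": np.array([0.2, 0.8, 0.1]),
}

def cosine(a, b):
    """Cosine similarity: close to 1.0 means strongly associated."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def bias_gap(target, attr_a, attr_b):
    """Positive if `target` is closer to `attr_a` than to `attr_b`."""
    return cosine(vectors[target], vectors[attr_a]) - cosine(vectors[target], vectors[attr_b])

print(bias_gap("woman", "arts", "engineering"))  # positive: leans toward "arts"
print(bias_gap("man", "arts", "engineering"))    # negative: leans toward "engineering"
```

If the training text routinely places “woman” near “home” and “man” near “engineering”, the learned vectors drift the same way, and the gap above is exactly the kind of quantity that bias studies measure.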

The biased behaviour learnt by AI seeps even deeper, into the underground currents of judicial systems. Last year, the investigative journalism organisation ProPublica published a report revealing that the AI risk assessment program Correctional Offender Management Profiling for Alternative Sanctions (Compas), used in court cases across the US, was nearly twice as likely to falsely flag black defendants as future criminals than white defendants.

PredPol, a program for police departments that predicts areas where future crime is more likely to occur, has been found to fall into feedback loops as it repeatedly directs officers to the same majority black and brown neighbourhoods, compounding over-policing. Kristian Lum, lead statistician at the San Francisco-based non-profit Human Rights Data Analysis Group (HRDAG), says that “If we’re not careful, we risk automating the exact same biases these programs are supposed to eliminate.” And she’s right.
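The feedback loop is easy to demonstrate in miniature. The simulation below is a hypothetical sketch, not PredPol’s actual algorithm: two neighbourhoods have the identical true crime rate, but one starts with more recorded incidents because of historical over-policing. Since crime is only recorded where officers patrol, and patrols go where records are highest, the initial bias feeds on itself.

```python
import random

random.seed(0)

true_rate = [0.5, 0.5]   # both neighbourhoods have the SAME underlying crime rate
recorded = [30, 10]      # but neighbourhood 0 starts with a biased historical record

for day in range(100):
    # The predictive model sends patrols wherever the most crime is recorded.
    patrolled = 0 if recorded[0] >= recorded[1] else 1
    # Crime is only *observed and recorded* where officers are present.
    if random.random() < true_rate[patrolled]:
        recorded[patrolled] += 1

# Neighbourhood 0 accumulates all new records; neighbourhood 1 stays frozen,
# so the data gap widens even though the true rates never differed.
print(recorded)
```

The recorded counts diverge further every day, which is precisely the loop Lum warns about: the model’s output shapes the data it is later trained on.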

Rather than representing a threat, algorithms could present an opportunity to address bias and counteract it where possible. In a sense, to set things right in AI, we must first address these issues in the real world – a newfound urgency, if you will. One immediate remedy for AI bias is employing equal proportions of women and people of colour in the field of computer science. Studies also show that implementing user feedback is an effective method for AI to unlearn faults as they appear. In short, to build an equal virtual world, our own must boast equality and democracy.

When the world wide web began seeping into everyday life in the early 1990s, entire movements of cyber futurism arose in its light. The possibilities this new world could offer seemed endless: away from war, and greed, and prejudice, and all the qualities attributed to the dark side of humanity. It seems almost ironic that one of the challenges the most advanced computer scientists of today face is that in teaching machines to predict the future, they confront the most deeply ingrained issues of the past. And while each and every one of us is at times guilty of failing to be fair, our arguably biggest failure is when we refuse to address and correct our mistakes. At the end of the day, technology, with all its promises, is made by humans – and like the malleable mind of a child, it is our behaviour that it will mimic, learn and embrace.

In this regard only, I hope we’ll teach robots to surpass us.

Article written by:
Shira Jeczmien
Call to Action
Visit NEVER AGAIN’s website to learn about their ongoing projects and programs, help spread the information they compile on trends and manifestations of racism, and donate to their cause.
Support now