Human biases could be biggest challenge to future of AI

Web Blog: Programs predominantly built by men are likely to discriminate against women

Many of the AI-powered devices we give orders to, such as Siri and Alexa, are given female voices.

In a recent TED talk, AI technologist Kriti Sharma argues that the biggest worry around the future of AI is not that it will take our jobs but that it will reinforce human biases already present in the sector. Sharma, who has several degrees in computer science and has been building robots since the age of 15, says that, based on her appearance and gender, it is regularly assumed she doesn’t know much about artificial intelligence.

In her talk she gives examples of how AI absorbs and reinforces these biases: men are more likely to be programmers and have historically outnumbered women in the field, so an algorithm trained on that hiring data to screen job applicants can learn that male candidates are preferable and filter out female ones, she explains.
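As a rough illustration of the mechanism Sharma describes (this is not her example; the data and feature names below are invented for the sketch), a toy screening model trained on synthetic “historical” hiring data, in which equally skilled women were rejected more often, ends up learning a negative weight on the gender feature and carries that bias forward.

```python
# Hypothetical sketch: a screening model trained on historically biased
# hiring data learns to penalise female candidates, even though skill is
# distributed identically across genders.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Synthetic "historical" records: skill is independent of gender,
# but half of the qualified women were rejected anyway.
skill = rng.normal(size=n)
is_female = rng.integers(0, 2, size=n)
qualified = (skill + rng.normal(scale=0.5, size=n)) > 0
hired = (qualified & ~((is_female == 1) & (rng.random(n) < 0.5))).astype(int)

X = np.column_stack([skill, is_female])
model = LogisticRegression().fit(X, hired)

# The coefficient on the gender feature comes out strongly negative: the
# model has encoded the past bias and would filter out otherwise
# identical female applicants.
print(dict(zip(["skill", "is_female"], model.coef_[0].round(2))))
```

Auditing such weights, or comparing outcomes across groups for more opaque models, is one way teams try to catch this kind of learned bias before a system is deployed.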

“This is not about the talent; this is about an elitism in AI that says a programmer needs to look like a certain person,” says Sharma.

Sharma says these biases are reinforced by the genders we assign to AI-powered devices. She gives the examples of Siri and Alexa, AI technologies with female voices which we are used to giving orders to. On the other hand, more powerful AI programmes such as Watson are designated “male”.

https://www.ted.com/talks/kriti_sharma_how_to_keep_human_biases_out_of_ai/discussion