Who is more racist, sexist and biased: You or your computer?

Unthinkable: Algorithms embed prejudice with ‘no accountability’, says Abeba Birhane

Humans find it hard to shake off prejudice. So can artificial intelligence (AI) do a better job of making decisions that are blind to gender, colour and creed?

Initial indications are far from encouraging. Search online for a stock image of a “CEO” and you’ll be presented with rows of headshots of mainly white men. If data sets are biased to start with, outcomes will also be tainted, reinforcing old perceptions.

Now think of all the areas we are starting to cede to algorithms – in job recruitment (filtering applications), policing (suspect identification), banking (loan approvals); the list goes on. And it’s happening at a time when the technology is still in its infancy.

Tests have shown that even top-performing facial recognition systems misidentify black people at rates five to 10 times higher than white people. Nonetheless, such technology is already being rolled out – including in London, where the Metropolitan Police began using it earlier this year.

Some data scientists – many of them, notably, women – have been sounding the alarm about algorithmic prejudice for some time, but the gallop towards an AI-controlled future continues.

Highlighting both the lack of transparency surrounding algorithms and their capacity to heighten racism and sexism, American mathematician Cathy O’Neil coined the phrase “algorithms are opinions embedded in code”. In her research, Harvard scholar Shoshana Zuboff highlights the commercial drive underpinning AI, and how algorithmic systems are being used to supercharge “surveillance capitalism”.

Another vocal critic is Abeba Birhane, an Ethiopian-born cognitive scientist based at University College Dublin, who recently helped to uncover racist and misogynistic terms in a Massachusetts Institute of Technology (MIT) image library that was used to train AI. MIT has since withdrawn the database – a small victory in what’s set to be a long war.

Birhane, who is a PhD candidate at the Complex Software Lab at UCD’s School of Computer Science and Lero, the Science Foundation Ireland Research Centre for Software, is today’s Unthinkable guest.

Why not hand over decision-making to algorithms if they can be programmed to be more objective than us?

Abeba Birhane: “That is one of the most persistent thoughts – the idea that if we work hard enough, and if we have good enough data or good enough algorithms, we will end up with better, less biased outcomes than humans. But I think this is one of the biggest misconceptions on so many levels.

“Philosophically speaking, when you are talking about social issues – whether it’s about designing an algorithm to find the best hire, or in the criminal justice sphere to identify people likely to commit crime – any of these applications assumes that we can somehow formalise dynamic, ambiguous, continually moving social activities and finalise them or find a solution for them . . . That rests on very reductionist thinking.

“But leaving the philosophy aside, it is also very problematic ethically speaking. AI models are very good at finding patterns and similarities, and most predictive models are built on historical data where these historical patterns are taken as the ground truth. And we know the past is full of injustice and discriminatory practices.

“Moreover, the integration of algorithmic systems into the social sphere often emerges from the need to make life easier for those already in positions of power, not from the desire to protect and benefit the end user.”
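To see the mechanism Birhane describes in action, consider a minimal, hypothetical sketch in Python. Everything here is synthetic and invented for illustration – none of it comes from her research: a toy hiring model is trained on fabricated “historical” decisions that penalised one group, and it reproduces that penalty for equally qualified candidates.

```python
# A toy illustration (synthetic data, hypothetical scenario): a model trained
# on biased "historical" hiring decisions reproduces the bias it was fed.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic applicants: a skill score and a group label (0 or 1).
# Both groups are drawn from the same skill distribution.
group = rng.integers(0, 2, size=n)
skill = rng.normal(0.0, 1.0, size=n)

# Historical labels: past decisions rewarded skill but penalised group 1 -
# the injustice baked into what the model will treat as ground truth.
hired = (skill - 0.8 * group + rng.normal(0.0, 0.5, size=n)) > 0

# Train a predictive model on that history, group membership included.
X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# Equally skilled candidates, different groups: the learned "pattern" is
# simply the old discrimination, restated as a probability.
for g in (0, 1):
    p = model.predict_proba([[0.0, g]])[0, 1]
    print(f"group {g}, identical skill -> predicted hire probability {p:.2f}")
```

Nothing in the training step flags the disparity; the model simply treats the biased history as the target to imitate.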

Is there a case to be made for using algorithms for limited tasks?

“I don’t want to be completely dismissive and say there is no use for building predictive systems at all. But it comes down to what you are trying to do with it, what your objective is, and who your model ends up empowering.

“My motto is: unless an algorithmic system has been scrutinised and gone through critical assessment, you are safe to assume that it perpetuates the status quo, repeats the past and harms minoritised and underserved individuals and communities.”

Tech companies refuse, on commercial grounds, to disclose how their algorithms work. Do we need to take these mechanisms into public ownership or at least make them transparent?

“Yes, sometimes it feels really absurd that we have private firms making moral and ethical decisions about things that people have debated for so long in the public sphere, and where there is no single right answer: using facial recognition systems to monitor ‘criminal activities’, for example, or using a tool to decide who is deserving of social welfare.

“These are ongoing challenges and open questions, yet private firms, coming with their algorithms, treat them as mathematical or technical matters that can be ‘solved’ once and for all.

“Their algorithms increasingly replace, or rather remove, that ongoing debate and conversation, while they are held to no accountability or scrutiny. Why? Because they hide behind proprietary rights – they have the rights to keep their code and data hidden.

“Judges or government bodies making similar decisions would be subjected to so much scrutiny. Now we have these companies making decisions hidden under an algorithm – as though they are just providing a technical solution rather than deciding on social and moral matters.”

There is growing awareness of racial and gender discrimination in science. Are things starting to change?

“I have oscillating views on this. Sometimes I feel really positive. I see so much encouraging work, especially from black women across various fields.

“People who are most negatively impacted seem to do the most work highlighting the harms of racism and the problems surrounding algorithmic decision-making, because if something doesn’t impact you, it’s difficult to notice it in the first place, never mind develop a solution for it. So I see a lot of communities producing brilliant work that is changing the attitude and discourse. That gives me hope.

“On the other hand, I see the problem is so deeply ingrained that we are only scratching the surface. And there’s so much reductive and simplistic thinking, such as applying de-biasing to the data set of an algorithm that shouldn’t exist in the first place, or assembling a diversity and inclusion board made up predominantly of white women to tackle deeply ingrained racial issues.

“These might be good first steps in some cases – although they might do more harm than good in others – but we need to look beyond that. We need to interrogate historical injustices. We need to ask difficult questions, such as how current structures in academia, tech and society in general allow certain people – those who satisfy the status quo – an easy pass while creating obstacles for those outside it.

“How do we move beyond thinking in terms of individualistic solutions, such as asking people to take implicit bias tests, to thinking in broader terms, like creating an environment that welcomes and keeps minoritised scholars?”
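As a concrete illustration of how mechanical “de-biasing your data set” can be, here is a hypothetical Python sketch in the spirit of Kamiran and Calders’ well-known reweighing scheme – an illustrative choice, not a method discussed in the interview. It rebalances sample weights across group-label combinations and leaves every structural question untouched.

```python
# A hypothetical sketch of one common "de-biasing" step, in the spirit of
# Kamiran and Calders' reweighing: rebalance sample weights so each
# (group, label) combination carries its "expected" influence.
import numpy as np

def reweigh(groups: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """Return per-sample weights weight(g, y) = P(g) * P(y) / P(g, y).

    Assumes every (group, label) cell is non-empty."""
    weights = np.empty(len(labels), dtype=float)
    for g in np.unique(groups):
        for y in np.unique(labels):
            cell = (groups == g) & (labels == y)
            expected = (groups == g).mean() * (labels == y).mean()
            weights[cell] = expected / cell.mean()  # up-weight rare cells
    return weights

# Usage: pass the weights to a classifier, e.g.
#   model.fit(X, labels, sample_weight=reweigh(groups, labels))
# Note what is *not* touched: the labels themselves, and the institution
# that produced them - the limitation Birhane points to.
```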

Do you still encounter a certain amount of defensiveness from scientists who, professionally speaking, like to think they are above prejudice or bias?

“Speaking from my own experience, yes, you do find a lot of people adhering to this illusion of objectivity – this illusion that they are doing science from ‘the view from nowhere’, when often that is just the view of the status quo masquerading as a view from nowhere.

“Unfortunately, much of Western science is built on the foundations of this illusion that we can dissociate from our subject of enquiry and measure and analyse things as disinterested observers from afar. But you also find more and more people realising that this idea of neutrality is an illusion.

“Culturally speaking, society has a stereotypical image of what a scientist looks like: a white male in some kind of white lab coat. This means that those who don’t conform to this image face a challenge when it comes to being taken seriously as a scientist or a professor.

“But I think there’s more and more realisation and more nuanced understandings and conversations, at least in my corner of academia, which again gives me hope and encourages me.”