
Una Mullally: AI comes hardwired with human bias

Unconscious prejudice pervades even the most conscientiously constructed algorithms


On Friday, I was reading about one ethical policing conundrum, the ongoing saga of Garda whistleblower Maurice McCabe, while pondering another: the growth of predictive policing. I was at a conference on data and privacy in Rotterdam, organised by the Goethe Institute, where I was giving a talk on algorithms in daily life.

One of the speakers was Prof Charles Raab, co-chair of the UK Independent Digital Ethics Panel for Policing, “a formal mechanism by which law-enforcement agencies can test ethical boundaries for policing an increasingly complex and digitised Britain”. There is something reassuring about the fact that such a group exists in Britain, and something daunting about how far many other nations are from that level of awareness of the impact predictive policing and algorithmic decision-making in the criminal justice system will have on our lives.

Thoughtcrime

Predictive policing takes many forms. For some of us, it evokes the world of "precrime" Philip K Dick wrote about in his short story The Minority Report, in which a criminal justice agency punishes crimes before they happen, a concept that built on George Orwell's "thoughtcrime" of 1984. We might also think of recent high-profile incidents involving vigilante groups targeting alleged paedophiles – so-called paedophile hunters – who pose online as children, engage men who believe the decoys are real, and broadcast the subsequent confrontations on Facebook.


We might also revisit recent history: the obsessive monitoring of people in East Germany by the Stasi, or indeed surveillance by MI5 in Northern Ireland. At what point does “intelligence” tip over into “predictive”? There is a difference. We know that terrorist attacks, for example, are often carried out by people whom intelligence agencies knew about but could not prevent from committing a particular crime.


Like the rest of society, policing is not immune to the enticements of technology, nor to the gimmickry of new toys or new tools. One of the major issues with tech, across everything from the impact of social media on identity to the impact of automation on factory floors, is that we tend to start examining the potentially profound impact of new toys or tools only after they have been utilised. The horse bolts and society is left discussing the sound of the swinging gate.

Already in the United States, algorithms are determining how likely people are to commit more crimes. The software used in some cases, Compas (correctional offender-management profiling for alternative sanctions), was found by ProPublica to be more likely to incorrectly judge black defendants to be at a higher risk of reoffending. ProPublica also found that white defendants “were more likely than black defendants to be incorrectly flagged as low risk”.
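To make the two kinds of error concrete, here is a minimal sketch in Python using entirely hypothetical numbers, not ProPublica's data and not Compas itself: being “incorrectly flagged as high risk” is a false positive, being “incorrectly flagged as low risk” is a false negative, and those rates can differ sharply between groups even when a tool looks accurate overall.

# Illustrative sketch only: hypothetical toy data, not ProPublica's dataset or methodology.
def error_rates(records):
    """records: list of (predicted_high_risk, actually_reoffended) booleans."""
    false_pos = sum(1 for pred, actual in records if pred and not actual)
    false_neg = sum(1 for pred, actual in records if not pred and actual)
    non_reoffenders = sum(1 for _, actual in records if not actual)
    reoffenders = sum(1 for _, actual in records if actual)
    return false_pos / non_reoffenders, false_neg / reoffenders

# Hypothetical groups: each pair is (flagged high risk, actually reoffended).
group_a = [(True, False), (True, False), (True, True), (False, False), (False, True)]
group_b = [(False, False), (False, False), (True, True), (False, True), (False, True)]

fpr_a, fnr_a = error_rates(group_a)
fpr_b, fnr_b = error_rates(group_b)
print(f"Group A: false positive rate {fpr_a:.0%}, false negative rate {fnr_a:.0%}")
print(f"Group B: false positive rate {fpr_b:.0%}, false negative rate {fnr_b:.0%}")

In this toy example, group A is far more likely to be wrongly flagged as high risk, while group B is more likely to be wrongly flagged as low risk; that asymmetry, rather than any single accuracy figure, is the shape of the disparity ProPublica described.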

We think that by handing over tasks and decisions prone to bias to technology, we will erase that bias. But what is frequently overlooked is how bias is – perhaps almost unconsciously – engineered into these technologies by humans who, like all of us, have biases.

The US criminal justice system is already structurally racist, so it is hardly surprising that this racist context continues to be built into the technology the system uses as that technology advances. The issue, though, is that we are told that technology and algorithms are somehow exempt from human flaws, that they are clinically unbiased and coldly fair. But that is not so.

Artificial intelligence is not divine, even though tech evangelists frequently speak about it that way. It is hard to trust any form of evangelism, as it denies the possibility of faults or flaws. We may not understand algorithms and their deep-learning capabilities (and, ironically, even their designers often lose track of what is really going on inside them as the algorithms learn more and become ever more complex), but they were designed by humans, not magicked out of thin air.

Facial-recognition databases

Take facial-recognition software and skin tone. This is particularly relevant in a US context, where half of all adult Americans’ photographs are stored in facial-recognition databases accessible to the FBI without the subjects’ consent, where 80 per cent of those photographs are “non-criminal entries” gleaned from identity documents, and where the algorithms used to match identities are wrong a remarkable 15 per cent of the time and are more likely to misidentify black people.

Photographic technology discriminating against non-white people is not a creation of the digital age. Racial bias was built into analogue photography in a different way: the rendering of skin on colour film was calibrated to a Caucasian reference, using cards known as “Shirley cards” after the first woman who posed for them.

Shirley, white with auburn hair, became the standard, skewing the quality with which everyone bar white people turned out on film. What is also pretty grim is how non-white skin eventually came to be portrayed better on colour film: Kodak altered its film stock in the 1970s and 1980s after complaints from furniture companies that the different varieties of wood could not be clearly distinguished in advertisements, such was the bias of colour film stock against darker tones.

Ultimately, these biases are not just about technology. They’re about us. Wonky algorithms and prejudiced technology reflect human fault, failure and favouritism. While technology may be “less” biased in some respects, it can also reinforce existing discrimination. As we continue to work on ourselves, let’s not presume that algorithms, software and systems won’t be at least somewhat human after all.