Facial recognition technology is touted as one of the most powerful surveillance tools ever invented. It is hailed by proponents as a justified means of catching criminals and terrorists, and of identifying victims and the vulnerable.
But, critics say, without stringent safeguards, it is an Orwellian nightmare in the making.
How does it work?
Facial recognition technology can trace its roots back to the 1960s, when the American mathematician Woody Bledsoe developed a system using a RAND Tablet - a primitive precursor to the iPad. It classified faces using co-ordinates such as the width of the eyes, mouth and nose, as well as the distances between these features. These measurements were then matched against a database of pictures and the closest matches returned. The same basics underpin today’s technology, although it is vastly more advanced, using algorithms to derive a “faceprint” and matching it against a variety of image banks, with candidate matches generally ranked by confidence. It can also exploit driving-licence databases, the near-ubiquity of cameras in modern society and tens of billions of images scraped from social media.
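That early geometric approach is simple enough to sketch in code. The short Python example below is illustrative only - a toy reconstruction of the Bledsoe-era idea, not any vendor’s actual system. It reduces a handful of invented landmark co-ordinates to a vector of inter-feature distances (a crude “faceprint”) and ranks a small database by closeness; every name and number in it is hypothetical.

```python
import math

def faceprint(landmarks):
    """Reduce a face to a vector of distances between key features
    (the Bledsoe-era idea; all landmark names here are invented)."""
    pairs = [("left_eye", "right_eye"),
             ("left_eye", "nose"),
             ("right_eye", "nose"),
             ("nose", "mouth")]
    return [math.dist(landmarks[a], landmarks[b]) for a, b in pairs]

def closest_matches(probe, database, k=3):
    """Rank enrolled faceprints by Euclidean distance to the probe
    (a smaller distance means a closer match)."""
    scored = [(name, math.dist(probe, fp)) for name, fp in database.items()]
    return sorted(scored, key=lambda pair: pair[1])[:k]

# Toy database of pre-computed faceprints from invented co-ordinates.
database = {
    "record_a": faceprint({"left_eye": (30, 40), "right_eye": (70, 40),
                           "nose": (50, 60), "mouth": (50, 82)}),
    "record_b": faceprint({"left_eye": (27, 43), "right_eye": (73, 42),
                           "nose": (50, 64), "mouth": (50, 88)}),
}

# Landmarks from a new image, almost identical to record_a's.
probe = faceprint({"left_eye": (30, 41), "right_eye": (70, 40),
                   "nose": (50, 61), "mouth": (50, 82)})
print(closest_matches(probe, database))  # record_a ranks first
```

Modern systems replace the four hand-picked distances with embeddings of hundreds of values learned by neural networks from millions of images, but the final step - ranking database entries by closeness to a probe - is essentially the same.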
Don’t we use this technology all the time?
Yes, many of us do. From unlocking smartphones to clearing airport security or entering a sports stadium, facial recognition is employed by organisations as a fast, effective security measure. It can also be used to pay for goods and to sort photos by the identity of individuals on everyday devices or platforms such as Facebook. Casinos use it to track the movements of identified cheats, and five-star hotels use it as an early-warning system to alert staff to the arrival of a VIP.
What’s the big deal?
Its use becomes more controversial when it steps into the realm of state authority, and policing in particular. Human rights and civil liberties campaigners warn of its limitations, its potential for abuse and the danger of reinforcing the cultural prejudices of institutions - police forces included - in areas such as racial profiling or the identification of protesters.
Trenchant concerns have also been voiced about the accuracy of the technology, which depends heavily on the diversity of the images fed into the system. If those images are mostly of white men, say - as has been the case in many systems - the results are less accurate when matching women or members of ethnic minorities.
According to a report by the National Institute of Standards and Technology in the US, Asian and African Americans were up to 100 times more likely than white men to be misidentified by existing facial recognition systems, and Native Americans were the most likely of all to be misidentified. A separate report by the University of Essex into the system used by the Metropolitan Police in London found that four out of five people it identified as possible suspects were innocent.
The independent study, which concluded it was “highly possible” the Met’s use of the system would be found unlawful if challenged in court, prompted calls for the technology to be shelved.
Has it been shelved?
The Met defends its continued use of facial recognition technology. The force has used live facial recognition (LFR) to monitor crowds since first deploying it at the Notting Hill Carnival in August 2016. In January this year, the Met deployed LFR in Westminster the day after mask-wearing requirements were relaxed. Four people were arrested - one on an extradition warrant related to alleged drug offences and serious assault, the others separately for drug, traffic and alleged death-threat offences.
The Met and Nottinghamshire Police are also using retrospective software, which matches police mugshots against images from the likes of CCTV and social media.
Other UK police forces in Hampshire, Humberside, North Wales and South Yorkshire are piloting similar technology.
Has the technology been legally challenged?
Britain’s court of appeal ruled in 2020 that the use of the technology by South Wales Police breached both privacy rights and equality laws. In the wake of the finding, fresh guidance was issued by the UK’s College of Policing this year, defending its use for finding missing people potentially at risk of harm, locating people who “may pose a threat to themselves or others” and arresting suspects “wanted by police or courts”.
As far back as 2001, use of the technology was challenged in the US as an alleged violation of Fourth Amendment rights against unreasonable search after it was deployed at the Super Bowl.
Since then it has been used to help identify Osama bin Laden in 2011, and Taylor Swift deployed facial recognition at a gig in 2018 to cross-reference images of concertgoers with a database of the pop star’s known stalkers.
While some cities in the US have banned its use by government agencies, others hire the technology from private companies. The FBI has its own database of more than 400 million photos, including images from driving licences.
Who are the private companies behind the technology?
There are many. Three of the biggest - Amazon, Microsoft and IBM - have all moved to restrict police use of their systems in recent years. Amid pressure over police brutality and the Black Lives Matter demonstrations, IBM said it would temporarily suspend selling facial recognition technology to law enforcement agencies in the US. Its chief executive, Arvind Krishna, said the company “firmly opposes” use of the technology for “mass surveillance, racial profiling, violations of basic human rights and freedoms”, and he urged a “national dialogue” on its use by police.
Amazon followed suit, suspending sales of its Rekognition software to police, while Microsoft said it did not sell its system to police and would not do so until regulations “grounded in human rights” were brought in.
Just this week, the UK’s data watchdog, the Information Commissioner’s Office, fined a facial recognition company £7.5 million for using images of people from social media for its database.
The US-based Clearview AI, which scraped more than 20 billion images from the likes of Facebook, was ordered to delete the data of all UK residents from its system. The Metropolitan Police and the UK’s National Crime Agency are among the company’s previous clients.
How will the technology be used by gardaí in Ireland?
Minister for Justice Helen McEntee is to ask Cabinet to approve an amendment to the Garda Síochána (Recording Devices) Bill allowing its use by gardaí. Sources close to the Minister insist it will not be used for indiscriminate surveillance, mass data-gathering or racial profiling, and say it will help in child-exploitation cases involving thousands of hours of video footage, currently analysed by the human eye.
The proposed legislation could be enacted by the end of the year. Damien McCarthy, of the grassroots Garda Representative Association, said it would give gardaí a “very positive” advantage when tackling serious crime and could speed up the process, saving “thousands of hours”.
But Fianna Fáil TD James Lawless warned it could create “a dystopian nightmare” in which a computer would effectively say “go arrest that person”, only for it to turn out to be the wrong person. Elizabeth Farries, assistant professor at the UCD Centre for Digital Policy, said there was evidence the technology actually made society less safe. “It doesn’t accomplish the goal that guards are seeking. It’s not accurate, it can be discriminatory - and it moves us further towards a surveillance society that is somewhat dystopian in character.”