
Europe Letter: A rogue artificial intelligence is no longer science fiction

Draft legislation proposed by the European Commission is an attempt to control AI before it can control us

From the curvature of cucumbers to the size of fish fingers, no aspect of life is too small or insignificant to escape the watchful gaze of Brussels. So it should come as no surprise that the EU is now turning its attention to the world of artificial intelligence. That’s right, folks – the EU has released a set of draft regulations on AI.

The above sentences were not written by me, but are in fact the work of an artificial intelligence. They were written by ChatGPT, a language model built by the research lab and corporation OpenAI. The AI was fed large amounts of text and trained to produce plausible original responses to prompts. In this case, I instructed ChatGPT to “write a funny article about EU plans to regulate AI”.

The sentences illustrate how AIs are only as good as the data used to train them. ChatGPT appears to have learned from the Boris Johnson style of English-language reporting on the EU. In line with this tradition, which tends to exaggerate or fabricate supposed plans for the petty regulation of foodstuffs by a nebulous “Brussels”, ChatGPT hallucinated a non-existent EU regulation on the size of fish fingers.

The advance of artificial intelligence in daily life, from autonomous crop irrigation systems to messaging apps that anticipate what you want to type, has raised a number of ethical and regulatory dilemmas.


Bias in AIs can be far more serious than built-in euroscepticism. The systems are increasingly used in hiring to filter out unsuitable job applications. If they learn from past decisions, they can bake in racial and gender discrimination for the future.

Training AIs on medical data could improve the speed and accuracy of diagnosis. But it comes with privacy concerns, and ethical questions too. Medical insurance companies could use an AI to exclude people with diagnostic warning signs or certain family medical histories from coverage, leading to unjust outcomes.

Then there is the question of liability. AI systems can act autonomously, so if they make an error, who is responsible? The risks are significant, whether it is a misdiagnosis or a crash caused by a self-driving vehicle.

It is no longer the realm of science fiction to question whether artificial intelligence could ultimately work against the interests of humanity.

AIs can behave with some autonomy, so an AI with access to the internet could train itself to become the most effective scam network ever created, running countless simultaneous phishing or romance scams and learning from each attempt to be more effective in the next.

An AI could even build a new artificial intelligence system of its own, to serve some further purpose. Repressive governments are already using AIs fed on mass data collected about their populations to decide who is granted benefits and who is denied them.

The EU’s attempt to grapple with this emerging brave new world is called the AI Act. As an early-mover regulation that would apply to 450 million of the world’s wealthier people, it is likely to be globally influential.

The draft law proposed by the European Commission takes a risk-based approach. It bans outright the kind of AIs that are deemed to carry “unacceptable risk”, such as the kind of social scoring system associated with the Chinese government.

“High risk” systems, such as those used in transport, education, law enforcement, or recruitment, are obliged to reduce risk and build in human oversight. Systems with “minimal” or “limited” risk, such as chatbots, spam filters, or video games, have looser rules.

Artificial intelligence systems have the potential to become hugely powerful. They are being developed by the world’s wealthiest tech companies. We can therefore expect them to be designed in line with these already-powerful interests, to be culturally slanted towards the United States, and potentially to entrench economic inequality.

A key consideration of the EU’s AI legislation has always been to avoid overly stringent rules that might stifle innovation, in the hope of encouraging the development within Europe of an industry expected to deliver technological breakthroughs and economic growth.

In a recent compromise text reached by the 27 member states in advance of negotiations with the European Parliament, other interests are evident too.

The draft exempts military, defence, and national security applications of AI from the scope of the regulation. It would also allow police in exceptional circumstances to use remote biometric surveillance in public spaces, such as using facial scanning to find suspects.

The tweaks reveal how European states view AI – with a keen eye to how the technology could serve their own power.