When science fiction writer Isaac Asimov in 1942 devised a set of ethical rules to be hardwired into robots, he opened a rich vein of speculation for the writers who followed, but also helped shape the field of artificial intelligence (AI). His prescience has been given new relevance by the launch of powerful chatbots like ChatGPT and Google’s Bard, neural networks capable of analysing vast amounts of data to generate humanlike conversation, write essays and perform complex tasks like writing computer code.
Liberating as these tools may be, they have been criticised for getting things wrong and for presenting fabrications as fact. Experts worry that they could become instruments for spreading disinformation, or encourage dangerous behaviour.
Should their developers be seen as bearing any responsibility for their machines, or be expected to build ethical constraints into the algorithms that steer their work? An earlier insistence by the social media giants that they were mere hosts for others’ content, not to be held responsible for users’ actions, has been decisively discredited.
Now more than 1,000 leaders in the AI field have signed an open letter demanding an immediate six-month pause on the creation of “giant” AIs, so that the capabilities and dangers of the technology can be assessed. The signatories include Elon Musk, the Twitter owner and co-founder of OpenAI, the research lab responsible for ChatGPT; Emad Mostaque, founder of London-based Stability AI; Steve Wozniak, co-founder of Apple; and many engineers from the leading tech companies.
If researchers will not voluntarily pause their work, then “governments should step in”, the authors say, calling for “new and capable regulatory authorities dedicated to AI”. Powerful AI systems, they argue, should be developed only once we are confident that their effects will be positive and their risks manageable.
And they are right. The industry, driven solely by commercial imperatives, cannot be trusted to self-regulate.