Some of the biggest names in the development of artificial intelligence (AI) have called for global leaders to work towards mitigating the risk of “extinction” from the technology.
In a short statement, which did not specify what it believes is at risk of extinction, business and academic leaders said the risks from AI should be treated with the same urgency as pandemics or nuclear war.
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” they said.
The statement was organised by the Centre for AI Safety, a San Francisco-based non-profit which aims “to reduce societal-scale risks from AI”.
It said the use of AI in warfare could be “extremely harmful” as it could be used to develop new chemical weapons and enhance aerial combat.
The letter was signed by some of the biggest names in the field, including Geoffrey Hinton, who is sometimes nicknamed the “Godfather of AI”.
The signatories also include Sam Altman and Ilya Sutskever, the chief executive and co-founder respectively of ChatGPT developer OpenAI.
The list also included dozens of academics, senior executives at companies such as Google DeepMind, the co-founder of Skype, and the founders of AI company Anthropic.
AI has entered the global consciousness after several firms released new tools that allow users to generate text, images and even computer code simply by describing what they want.
Experts say the technology could take over jobs from humans – but this statement warns of an even deeper concern. – PA