In the film Her, a lonely writer called Theodore Twombly falls in love with the disembodied voice of Samantha, a digital assistant played by the actor Scarlett Johansson. “I can’t believe I’m having this conversation with my computer,” Twombly tells Samantha. “You’re not. You’re having this conversation with me,” Samantha coos.
The genius of Spike Jonze’s script is its exploration of the borderlands between the artificial and the real. But the science fiction film, released in 2013, has acquired ironic resonance today after OpenAI launched its latest multimodal artificial intelligence chatbot, GPT-4o, with a voice that seemed to mimic Johansson’s.
Johansson said she had declined OpenAI’s requests to use her voice, adding that she was “shocked and angered” to discover the company had deployed one “eerily similar” to her own.
She called for greater transparency and appropriate legislation to ensure that individual rights were protected. OpenAI paused the use of the voice, which it later explained belonged to another, unnamed actor.
The incident might have struck officials attending the latest AI Safety Summit in Seoul last week as a diverting celebrity tantrum. But the dispute chimes with three more general concerns about generative AI: the theft of identity, the corrosion of intellectual property and the erosion of trust.
Can AI companies responsibly deploy the technology? Unnervingly, even some of those previously in charge of ensuring safety are asking that question.
In recent weeks, Jan Leike resigned as head of a safety team at OpenAI following the departure of Ilya Sutskever, one of the company’s co-founders and chief scientist. On X, Leike claimed that safety at the company had taken a back seat to “shiny products”. He argued that OpenAI should devote much more bandwidth to security, confidentiality, human alignment and societal impact. “These problems are quite hard to get right, and I am concerned we aren’t on a trajectory to get there,” he posted.
In his own parting remarks, Sutskever said he was confident OpenAI would build AI that was “both safe and beneficial”. However, Sutskever was one of the board members who last year tried to oust the company’s chief executive, Sam Altman. After Altman was reinstated following a staff revolt, Sutskever said he regretted his participation in the coup. But his departure will remove another counterweight to Altman.
It is not just OpenAI, though, that has stumbled in deploying AI technology. Google has had its own problems with generative AI when its Gemini chatbot generated ahistorical images of Black and Asian Nazi stormtroopers.
Both companies say missteps are inevitable when releasing new technologies and they respond quickly to their mistakes. Still, it would instil greater confidence if the leading AI companies were more transparent.
They have a long way to go, as shown by the Foundation Model Transparency Index, published last week by Stanford University. The index, which analyses 10 leading model developers across 100 indicators including data access, model trustworthiness, usage policies and downstream impacts, finds that the big companies have taken steps to improve transparency over the past six months, but that some models remain “extremely opaque”.
“What these models allow and disallow will define our culture. It is important to scrutinise them,” Percy Liang, the director of Stanford’s Center for Research on Foundation Models, tells me. What worries him most is the concentration of corporate power. “What happens when you have a few organisations controlling the content and behaviour of future AI systems?”
Such concerns may fuel demands for further regulatory intervention, such as the European Union’s AI Act, which received approval from the European Council this month. More than a quarter of US state legislatures are also considering bills to regulate AI. But some in the industry fear regulation may only strengthen the grip of the big AI companies.
“The voices in the room are Big Tech. They can entrench their power through regulation,” Martin Casado, an investment partner at the VC firm Andreessen Horowitz, tells me. Policymakers need to pay far more attention to Little Tech, the scores of start-ups using open-source AI models to compete against the bigger players.
At the Seoul summit last week, 10 nations and the EU agreed to establish an international network of safety institutes to monitor the performance of frontier AI models, which is welcome. But they should now listen to Johansson and dig much deeper into the powerful corporate structures that deploy these models. – Copyright The Financial Times Limited 2024