One of our shoutiest moral panics these days is the fear that artificial intelligence-enabled deepfakes will degrade democracy. Half of the world’s population is voting in 70 countries this year. Some 1,500 experts polled by the World Economic Forum in late 2023 ranked misinformation and disinformation as the most severe global risk over the next two years. Even extreme weather risks and interstate armed conflict were seen as less threatening.
But, type it gently, their concerns appear overblown. Not for the first time, the Davos consensus might be wrong.
Deception has been a feature of human nature since the Greeks dumped a wooden horse outside Troy’s walls. The Daily Mail’s publication of the Zinoviev letter – a forged document purportedly from the Soviet head of Comintern – had a big impact on the British general election of 1924.
Of course, that was before the internet age. The concern now is that the power of AI might industrialise such disinformation. The internet has cut the cost of content distribution to zero. Generative AI is slashing the cost of content generation to zero. The result may be an overwhelming volume of information that can, as the US political strategist Steve Bannon memorably put it, “flood the zone with sh*t”.
Deepfakes – realistic, AI-generated audio, image or video impersonations – pose a particular threat. The latest avatars generated by leading AI companies are so good that they are all but indistinguishable from the real thing. In such a world of “counterfeit people”, as the late philosopher Daniel Dennett called them, who can you trust online? The danger is not so much that voters will trust the untrustworthy, but that they will distrust the trustworthy.
Yet, so far at least, deepfakes are not wreaking as much political damage as feared. Some generative AI start-ups argue that the problem is more about distribution than generation, passing the buck to the giant platform companies. At the Munich Security Conference in February, 20 of those big tech companies, including Google, Meta and TikTok, pledged to stifle deepfakes designed to mislead. How far the companies are living up to their promises is, as yet, hard to tell, but the relative lack of scandals is encouraging.
The open-source intelligence movement, which includes legions of cyber sleuths, has also been effective at debunking disinformation. US academics have created a Political Deepfakes Incidents Database to track and expose the phenomenon, recording 114 cases up to this January. And it could well be that the increasing use of AI tools by millions of users is itself deepening public understanding of the technology, inoculating people against deepfakes.
Tech-savvy India, which has just held the world’s biggest democratic election with 642 million people casting a vote, was an interesting test case. There was extensive use of AI tools to impersonate candidates and celebrities, generate endorsements from dead politicians and throw mud at opponents in the political maelstrom of Indian democracy. Yet the election did not appear to be disfigured by digital manipulation.
Two Harvard Kennedy School experts, Vandinika Shukla and Bruce Schneier, who studied the use of AI in the campaign, concluded that the technology was mostly used constructively.
For example, some politicians used the official Bhashini platform and AI apps to dub their speeches into India’s 22 official languages, deepening connections with voters. “The technology’s ability to produce non-consensual deepfakes of anyone can make it harder to tell truth from fiction, but its consensual uses are likely to make democracy more accessible,” Shukla and Schneier wrote.
This does not mean the use of deepfakes is always benign. They have already been used to cause criminal damage and personal distress. Earlier this year, the British engineering company Arup was scammed out of $25 million (€23 million) in Hong Kong after fraudsters used a digitally cloned video of a senior manager to order a financial transfer. This month, explicit deepfake images of 50 girls from Bacchus Marsh Grammar school in Australia were circulated online. It appeared that the girls’ photos had been lifted from social media posts and manipulated to create the images.
Criminals are often among the earliest adopters of any new technology. It is their sinister use of deepfakes to target private individuals that should concern us most. Public uses of the technology for nefarious means are more likely to be rapidly exposed and countered. We should worry more about politicians spouting authentic nonsense than fake AI avatars generating inauthentic gibberish. – Copyright the Financial Times Limited 2024