Investors must beware deepfake market manipulation

Proliferation of artificial intelligence tools could exacerbate issues with misinformation

The most immediate danger we should fret about is not that machines will independently run amok, but that humans will misuse them.

Last month an incident unfolded online that should make any investor wince. A deepfake image of a purported explosion near the Pentagon went viral after it was retweeted by outlets such as Russia Today, causing US stock markets to wobble.

Thankfully, the American authorities quickly flooded social media with statements declaring the video to be fake – and RT issued a sheepish statement admitting that “it’s just an AI-generated image”. Markets then rebounded.

However, the episode has created a sobering backdrop to this week’s visit by Rishi Sunak, British prime minister, to Washington – and his bid for a joint US-UK initiative to tackle the risks of artificial intelligence (AI).

There has recently been a rising chorus of alarm both inside and outside the tech sector about the dangers of hyper-intelligent, self-directed AI. Last week, more than 350 scientists issued a joint letter warning that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war”.

These long-term “extinction” threats are headline-grabbing. But experts such as Geoffrey Hinton – an academic and former Google employee viewed as one of the “godfathers of AI” – think that the most immediate danger we should fret about is not that machines will independently run amok, but that humans will misuse them.

Most notably, as Hinton recently told a meeting at Cambridge university, the proliferation of AI tools could dramatically exacerbate existing cyber problems such as crime, hacking and misinformation.

There is already deep concern in Washington that deepfakes will poison the 2024 US presidential race. This spring it emerged that they have already had an impact on Venezuelan politics. And this week Ukrainian hackers broadcast a deepfake video of Vladimir Putin on some Russian television channels.

But the financial sphere is now emerging as another focus of concern. Last month the cyber security firm Kaspersky released an ethnographic study of the dark web, which noted “a significant demand for deepfakes”, with “prices-per-minute of deepfake video [ranging] from $300 [€272] to $20,000”. So far they have mostly been used for cryptocurrency scams, it says. But the deepfake Pentagon image shows how they could affect mainstream asset markets too. “We may see criminals using this for deliberate [market] manipulation,” as one US security official tells me.

So is there anything that Sunak and US president Joe Biden can do? Not easily. The White House recently held formal discussions about transatlantic AI policies with the European Union (which Britain, as a non-EU member, was excluded from). But this initiative has not yet produced any tangible pact. Both sides acknowledge the desperate need for cross-border AI policies, but the EU authorities are keener on top-down regulatory controls than Washington is – and determined to keep the US tech groups at a distance.

So some American officials suspect that it might be easier to begin international co-ordination through a bilateral AI initiative with the UK, given the UK government’s recent release of a more business-friendly AI policy paper. There are pre-existing close intelligence bonds, via the so-called Five Eyes security pact, and the two countries hold a big slice of the western AI ecosystem (as well as the financial markets).

Several ideas have been floated. One, pushed by Sunak, is to create a publicly funded international AI research institute akin to Cern, the particle physics centre. The hope is that this could develop AI safely, as well as create AI-enabled tools to combat misuse such as misinformation.

There is also a proposal to establish a global AI monitoring body similar to the International Atomic Energy Agency (IAEA); Sunak is keen for this to be based in London. A third idea is to create a global licensing framework for the development and deployment of AI tools. This could include measures to establish “watermarks” that show the provenance of online content and identify deepfakes.

These are all highly sensible ideas that could – and should – be deployed. But that is unlikely to happen swiftly or easily. Creating a Cern-style AI institute could be very costly, and it will be hard to win rapid international backing for an IAEA-style monitoring body.

And the big problem that haunts any licensing system is how to bring the wider ecosystem into the net. The tech groups that dominate cutting-edge AI research in the west – such as Microsoft, Google and OpenAI – have indicated to the White House they would co-operate with licensing ideas. Their corporate users would almost certainly fall in line too.

However, pulling corporate tiddlers – and criminal groups – into a licensing net would be much harder. And there is already plenty of open-source AI material out there that can be abused. The deepfake Pentagon image, for example, appears to have been made with rudimentary tools.

So the unpalatable truth is that, in the short term, the only realistic way to fight back against the risk of market manipulation is for financiers (and journalists) to exercise more due diligence – and for government sleuths to chase cyber criminals. If this week’s rhetoric from Sunak and Biden helps to raise public awareness of this, that would be a good thing. But nobody should be fooled into thinking that knowledge alone will fix the threat. Caveat emptor. – Copyright The Financial Times Limited 2023