Deepfake threat gives Twitter a chance to prove itself

The company is taking action against manipulated media but scepticism abounds

The threat of deepfakes – synthetic or manipulated media – is becoming increasingly alarming. While tech companies have been striving to create more powerful editing technology and increasingly realistic artificial intelligence, they may also have inadvertently created a monster.

In the worst-case scenario, experts say, deepfakes could be used to create audio and visuals of world leaders ordering military action, to justify starting wars and costing lives. The best-case scenario is that they make us distrust our own eyes and ears, something that has already begun.

So the news that Twitter plans to tackle the problem by introducing new rules for faked videos online should be greeted warmly. The company signalled some time ago that it would take action, and on Monday it revealed what that action would be.

Tweets sharing faked media could have a notice placed next to them, warning people that they are sharing or liking tweets containing synthetic media. Another option is to add a link to an article or Twitter moment explaining why the media may be synthetic or manipulated. The worst tweets – those using synthetic or manipulated media that could lead to serious harm – could also be removed from the platform.

Proposed rules

Twitter users will get the opportunity to have their say on the proposed rules, with a survey running online until November 27th. It is not the only measure under way: Adobe has teamed up with Twitter and the New York Times to create a new industry initiative that would make the origin of a photo or video clear, along with any changes made to it.

While fears that deepfakes could be used for propaganda or disinformation campaigns are heightened in the run-up to next year's US election, the truth is that most deepfakes are currently used for pornography, either celebrity or revenge porn, rather than politics. But that doesn't mean it won't happen, should an opportunity to exploit the technology be spotted.

You would be forgiven for being slightly sceptical that Twitter's new rules will help stamp out fake videos online; the platform is already a frequent channel for fake news articles and misinformation, something that has proved increasingly difficult to deal with. Given the company's at times patchy response to abuse on its own platform, confidence that it will be able to handle this new threat may not be high.

If it wants to be taken seriously, Twitter needs to ensure that it sticks to its own rules rather than, as has been the perception in the past, merely paying lip service to them.