Opinion

We don’t need death threats against politicians to see how social media is harming democracy

Online platforms are leading us into polarised spaces, but now we have a tool to deal with it

A screengrab from a video posted online in which a man allegedly makes threats against Sinn Féin's Mary Lou McDonald and others. Photograph: The Irish Times

Last week, a TikTok video containing alleged death threats against Sinn Féin leader Mary Lou McDonald went viral, the latest incident in a disturbing trend of political intimidation and violence in Ireland. A man suspected of making the threats against McDonald was subsequently charged under Section 5 of the Non-Fatal Offences Against the Person Act 1997 and denied bail.

This incident underscores the systemic issues within our online world.

Threats of political violence against our public representatives are becoming a dangerous fixture of public life. In the past week, Leo Varadkar spoke publicly about how he could not sleep in his own home after the Dublin riots, due to fears for his safety. The recent local and European elections saw assaults on canvassers, and street harassment filmed and posted online in order to humiliate and intimidate candidates.

It is right that individual threats of violence are treated as a criminal matter, but we also need to find tools for dealing with the underlying issues at play. The newly enacted Digital Services Act aims to address some of these issues.


There are three stories we tell about how these threats are intertwined with the online world. One is that they are the result of coordinated online campaigns, designed to harass officials, shape public opinion, and demonise individuals and groups. McDonald spoke about how there has been “an escalation of targeted online abuse directed at me over the last year” by a “very organised section of people online to have a run at Sinn Féin and at me in particular”.

The second perspective is to see this as the extreme end of the bubbling up online of genuine frustration at our elected representatives, or even as authentic online expressions of resentment rooted in public sentiment. McDonald was at pains to differentiate between genuine online discourse and what she experienced, insisting she is “not somebody who is touchy or sensitive about criticism and commentary that comes with being a public figure, but I do not accept, and I will not accept threats against my life”.

And thirdly, we talk about how online platforms are leading us into polarised “echo chambers” stoking our darkest emotions and hurting our ability to recognise humanity in those we disagree with. Whether we talk about Facebook “filter bubbles”, YouTube “rabbit holes” or TikTok’s “alt right pipeline”, we now have a full decade of research into, and narratives about, social media algorithms pushing users towards increasingly polarising and emotive content as they scroll. A recent European Commission report summarised concerns about a process where “individuals engage with content on the internet and eventually become radicalised to either adopt extreme beliefs or commit violence”.

Unpicking where each of these three stories begins and ends, and how each influences the others, is incredibly difficult and becoming more so. Take, for example, coordinated online campaigns: often well-planned and well-resourced efforts to manufacture a sense of momentum around an idea, the most famous example being Russian efforts in the 2016 US election. Vast influence networks can be deployed by hostile states, political operatives, extremist groups, and anyone else seeking to undermine, delegitimise, and even dehumanise political figures.

The lines between these operations, genuine public sentiment, and algorithmic curation by the platforms can be blurry: coordinated campaigns often seize on public frustration that already exists, deepening and amplifying it, and they also manipulate social media recommender systems to bombard people with particular messaging.

These campaigns take planning and organising, and that work is increasingly happening in closed spaces and forums, making them harder to identify and dissect. This makes understanding the social media systems that underpin our online information environment all the more important, and a new piece of European Union regulation may help us understand the role online platforms are playing in shaping our increasingly hostile and potentially violent political world.


Efforts to deal with the role social media plays in polarisation have been stymied by a combination of limited legal instruments and a lack of political will to enforce those we have. This has been especially acute in Ireland, which has an outsized role in enforcing EU rules, yet whose Government has faced repeated accusations of under-enforcing rules on the tech companies it relies on economically.

The Digital Services Act (DSA), which became operational in February and whose enforcement is shared between national regulators and the European Commission, may address these issues.

It contains a unique mechanism that allows anyone to submit a complaint against a social media platform they believe is causing harm to them or to others through systemic failures by the company concerned. This includes a failure by a platform to identify and mitigate “actual or foreseeable negative effects on civic discourse and electoral processes, and public security”.

I tested this new mechanism in June when I submitted a complaint to the Irish regulator, Coimisiún na Meán, about TikTok and what I saw as its failure to implement adequate measures to protect Irish democracy during the recent elections. Last week, I received an update indicating progress on the complaint. The regulator has given TikTok two weeks to respond to one part of the complaint, after which an investigation may be initiated. Additionally, a further part of the complaint has been escalated to the European Commission.

The instrument has teeth. Fines of up to 6 per cent of annual turnover can be issued, which for TikTok could amount to hundreds of millions of dollars. The Act also empowers regulators to investigate and scrutinise platforms, including their recommender systems and other secretive processes. However, the regulator can only act if complaints are received, and so far the office has not been inundated with them.

It remains to be seen where this action will go, and whether this new regulator will buck the trend of under-enforcement in tech regulation that has become an unfortunate part of our reputation in Europe. We will only know if those affected by the escalating harassment lodge complaints and put this new tool, and this new regulator, to the test.

Liz Carolan works on democracy and technology issues, and writes at TheBriefing.ie