
Focus on toxic algorithms vital to regulate Big Tech in meaningful way

EU and other regulators must realise that fines by themselves are no deterrent to X and other Big Tech operators

In the category of “least surprising regulatory development of 2023”, I nominate the recently announced end-of-year convergence of Elon Musk, Twitter/X, the EU and Ireland. All have been brought together by the newly enacted Digital Services Act (DSA), a sprawling piece of EU regulation intended to place controls on some aspects of Big Tech.

While the Act applies to smaller organisations too, it places far more significant oversight on the big tech companies with enormous numbers of users. It mandates that large operators, such as the huge social media platforms, manage user risks. Big platforms such as Meta, TikTok, Alphabet and X are required to have effective controls in place to limit the spread of misinformation and disinformation. They must also constrain the use of so-called “dark patterns” (user interfaces that push users towards certain actions), offer transparency in advertising and provide access to platform data for researchers.

After reviewing X’s official transparency report released in November, its September risk assessment report and its response to an EU request for further information – in part linked to content on the platform related to the Hamas/Israel attacks – the EU has announced it will launch formal infringement proceedings against X under the DSA.

It’s the first such investigation under the Act, but the most predictable. In the year since billionaire Elon Musk bought the platform, he’s made changes in policy, intent and staffing levels that set X on an obvious collision course with EU regulation. Musk slashed the company’s workforce, severely pruning its content-moderation teams, many of which were based in its Dublin EMEA headquarters.


Just about every controversial decision taken by Musk since buying the platform features in the EU’s concerns. That swift Musk move to make the blue-tick verification on accounts available to anyone who took up a cheapish Twitter/X subscription? The EU thinks that might be a deceptive practice under the terms of the DSA, because blue ticks were an early identity verification service for accounts belonging to a person or organisation with a public profile. Now, as any regular Twitter/X user knows, blue ticks mean virtually nothing but still imply trust in identity.

The EU also feels X isn’t giving researchers adequate access to platform data. And that it doesn’t actually mitigate risk effectively after dumping professional moderation teams in favour of its “Community Notes” approach, which outsources moderation (for free!) by letting anyone on X place a contextualising comment on a tweet.

Then there’s the related problem of X disseminating content considered illegal in the EU, such as some Hamas/Israel tweets, which EU commissioner Thierry Breton swiftly flagged as problematic (also sending letters of concern about DSA compliance to TikTok, Alphabet and Meta). On Monday, Breton tweeted the EU’s intent to take infringement proceedings against X.

Ireland is already waist-deep in this regulatory current. While the DSA gives the EU Commission oversight of the big platforms and operators, most of those key players have EU headquarters in Ireland, placing a very heavy operational burden – perhaps an overwhelming one – on Ireland’s new Coimisiún na Meán and its Digital Services Commissioner, John Evans. The country has already offered support for the EU’s investigation.

Meanwhile the Irish Government and the coimisiún have had a particularly brutal awakening regarding Ireland’s personal stakes in this role, following the recent horrific stabbings in Dublin and the consequent street riot, and the suspected role that messaging apps and social media platforms played in these events. Until then, events of this scale might have seemed like other countries’ problems with an Irish regulatory element, not direct Irish concerns. That has all changed, and will undoubtedly sharpen Irish sensibilities.


Perhaps they’ll be honed enough to take truly meaningful steps. While the DSA (and Irish legislation) comes with the big stick of significant fines of up to 6 per cent of global annual turnover, the big platforms and operators have astonishing wealth, with valuations not in millions but in billions to trillions of euro. Even billion-euro-plus fines – such as the one issued this year by the Irish Data Protection Commission to Meta – barely dented the company or investor confidence.

One alternative step was suggested this week by the Irish Council for Civil Liberties (ICCL), which recommended the EU go further with the DSA and turn off the big platforms’ “toxic algorithms” that secretly determine what users see and share. Coimisiún na Meán will require this for some content by disallowing the profiling of children, which means switching off algorithms that track their online behaviour and decide what they see next.

This is the deep structural element behind digital services that must be addressed to mitigate their damage to individuals and society. Tracking and profiling users in order to sell user data, or to give advertisers access to specific, defined sets of users, is also what makes platforms highly manipulable, with the consequent problems the DSA hopes to minimise.

Huge fines in the US and EU haven’t effected any meaningful change so far. To move beyond this kind of regulatory theatre, regulators instead must tackle the algorithms and deep design of the platforms.