
Is the Republic’s online safety code up to the task?

Reconciling the GDPR with the AI Act poses issues, but I think they are minor compared with the problem of making the new safety code work in any realistic way


Earlier this month, I attended a Law Society seminar on Artificial Intelligence and General Data Protection Regulation (GDPR), part of the European Law Institute’s annual conference. The panel’s deliberations that evening cast some light and offered some useful context for a consideration of Ireland’s new, much debated online safety code, released this week by regulator Coimisiún na Meán.

I certainly don’t use “light” in the sense of clarity, but rather in the sense of taking a spotlight to examine the intersecting foundations of a couple of enormous buildings that share some critical and potentially wobbly support structures.

This wide-ranging, sometimes exasperating discussion about how the GDPR will interact with, and potentially constrain, the European Union’s (EU) new Artificial Intelligence (AI) Act winkled out many of the possible – or rather, likely – contradictions and incompatibilities between the rights and protections in the EU’s massive data protection and privacy regulation and the apparent regulatory intent and protections in the AI Act.

The panel was chaired by Mr Justice Gerard Hogan of the Supreme Court, a former advocate general of the European Court of Justice, who was involved in several of Ireland’s most significant data protection judgments (and the EU’s, given that his referrals from Ireland resulted in landmark EU Court of Justice decisions on GDPR protections).

The line-up included Emma Redmond, associate general counsel and head of privacy and data protection at OpenAI; Jeremy Godfrey, chairman of Coimisiún na Meán; Irene Nicolaidou, deputy chairwoman of the European Data Protection Board; and Sir Geoffrey Vos, master of the Rolls and head of civil justice in England and Wales, and vice-president of the European Law Institute. Prof Pascal Pichonnaz, president of the European Law Institute, introduced the topic.

“As technology advances, it is extremely important to remember … that we should not impede its beneficial adoption by premature regulation before the dangers posed by those technologies are clear and understood,” Vos said.

He noted the recent EU competitiveness report by Mario Draghi arguing that the EU’s regulatory environment was holding back innovation.

Redmond offered a smooth corporate talk on the beneficial wonders of AI, enumerating the ways in which ChatGPT’s parent company, OpenAI, was working hard towards ethical and safety controls.

While she stressed the need for regulation, she didn’t touch on the fact that several of those who know exactly what the company is doing, including some of OpenAI’s key technologists and board members, resigned in recent months because they fear OpenAI is not addressing those ethics and safety challenges.

Coimisiún na Meán’s Godfrey (as well as Nicolaidou) took a centrist line, noting that balance was extremely important and regulation shouldn’t be a blindly constructed roadblock to innovation.

Nonetheless, Godfrey was rightly dismissive of the argument that supposed EU over-regulation was the reason Europe hasn’t had similar tech company development and growth as the US, a tiresome industry lobbying point and one seen by critics as a major reason some protections were removed from the AI Act.

Today’s US tech giants developed at a time when there was little regulation in Europe, he said, “so you can’t point to [EU] regulation as being the reason why some of these grew up in the US”. However, he also cautioned that there are limits to what regulation can do, even as ethical protections remain important.

Many of these GDPR/AI Act points also apply to the embattled genesis of his organisation’s new online safety code. Its introduction boldly states that “the era of self-regulation in the tech sector is over.” But as with the AI Act, it’s not clear what this means in practice.

Both the EU AI Act and the Irish Online Safety and Media Regulation Act 2022, which gives rise to the online safety code, use a risk-based assessment to determine which companies and technologies have the greatest regulatory obligations.

Both point to harms that would trigger punishments. But they also both kick the most substantial and tricky issues – how to manage or prevent those harms, and define them in the first place – back to companies.

Weirdly, the online safety code is both too specific and too vague. It offers a long list of harms, but almost all of them fall into extremely complex speech and privacy rights areas.

Most people would agree in broad terms with the harms and dangers, but they might define the specifics in opposite ways. Just look to the US and the polarisation around so many of these terms and harms. Even within the EU there are significant cultural differences in, for example, what is seen as adult content.

How, then, are companies to define and identify problematic, offending material in breach of the code? Then there are enormous technical challenges. To date, no country or company anywhere has produced, say, age verification systems that are widely accepted or successful.

Handily for government, while the code comes into (vague) effect now, it won’t require answers to most of these specifics until next year, long after a pesky election.

But none of these difficulties and problems of definition and implementation will have gone away. Reconciling the GDPR with the AI Act poses issues, but I think they are minor compared with the problem of making the new safety code work in any realistic way.