Children’s committee: App stores should be responsible for effective age verification - Meta

Oireachtas meeting will focus on protection of children in the use of AI

Committee meeting will focus on the protection of children in the use of artificial intelligence (AI). Photograph: iStock

App stores should be responsible for effective age verification, Meta, the parent company of Facebook, Instagram and WhatsApp, will tell an Oireachtas committee on Tuesday.

The committee meeting, which will focus on the protection of children in the use of artificial intelligence (AI), will also hear from representatives of TikTok and X, formerly Twitter.

It comes as Tánaiste Micheál Martin issued a warning on Saturday to social media giants to “get underage children off your apps”, while Minister for Education Norma Foley has consistently highlighted concerns over children accessing inappropriate content.

Noting that the average US teenager uses 44 apps each day, Dualta Ó Broin, Head of Public Policy in Ireland for Meta, will tell committee members that a significant step forward in age verification can be taken at a European level.


Mr Ó Broin will argue that this would ensure parents only need to verify the age of their child once, placing their child into an age-appropriate experience on every app.

“The most efficient and effective way in which this would work would be at the operating system or app store level,” he will say, adding that the move would not remove responsibility from the app to manage age effectively.

“The question of age verification is complicated; however, we believe that the time has come to move forward with an effective solution that addresses the concerns of all stakeholders, including parents,” he will tell TDs and senators.


Mr Ó Broin will say that it is “simply untrue” that Meta is financially motivated to promote harmful or hateful content on its platforms to increase engagement, adding that AI plays a “central role” in reducing the volume of harmful online content.

Although AI is extremely effective in detecting fake accounts, Mr Ó Broin will say, it is more difficult to identify bullying and harassment, as such content can be “quite contextual” and not as immediately apparent as a fake account.

Meanwhile, Susan Moss, Head of Public Policy at TikTok, will tell committee members that AI plays an “integral role” in the safety of the more than two million people in Ireland who use TikTok every month.

She will tell the committee that all content undergoes moderation to swiftly identify and address potential instances of harmful content while automated systems work to prevent violative content “from ever appearing on TikTok in the first place”.

“The adoption and evolution of AI in our processes has made it quick to spot and stop threats, allows us to better understand online behaviour, and improves the efficacy, speed, and consistency of our enforcement. Nowhere is this more important than the protection of teenagers,” she will say.

Ms Moss will say the introduction of new technologies “inevitably triggers unease,” prompting legitimate concerns around the legal system, privacy and bias.

“And so it is incumbent on all of us to play our part in ensuring that AI reduces inequity and does not contribute to it,” she will tell committee members.
