Social media platforms using AI to direct harmful content to children, committee told

Experts suggest a binding rule turning off algorithmic recommendations for young people by default

Online social media platforms are using AI to direct a steady stream of content that promotes “hurt, hate, self-loathing and suicide” to what are often very young and vulnerable children, an Oireachtas committee has been told.

Members of the Joint Committee on Children, Equality, Disability, Integration and Youth heard that while AI in its many guises has huge potential to benefit children, it also has the capacity to cause them enormous harm, and that social media companies need to be made to do much more to curb that negative impact.

The increasingly sophisticated recommender systems used by the companies, the committee was told, routinely seize on initial signs of interest in topics like weight loss or military history to promote content containing self-harm or white supremacy.

The lack of effective age controls applied to young users by the likes of TikTok, Facebook, YouTube, Instagram and X is contributing to a situation in which children are constantly exposed to hugely inappropriate material, the politicians heard, with few sanctions to deter the firms from practices intended to extend usage and so enhance profits.

The consequences can be tragic, they were told.

Clare Daly of online safety and monitoring organisation CyberSafeKids told the committee of a mother who had recently come to them. Her 13-year-old daughter had been bullied in school and posted a video on TikTok expressing her sadness about the issue.

“The app started flooding her feed with images of other sad teenage girls referencing suicide, eating disorders and self-harm,” the mother had said.

“The damage and sadness that this caused my family had been immense as we discovered that my daughter saw self-harm as a release from the pain she was suffering from the bullying through the information this app is openly allowing. Anti-bullying efforts by schools are of no use unless the social media platforms are held responsible for openly sharing all this hugely damaging content with children,” she said.

Ms Daly cited other examples of the ways in which severe damage had been done to individuals or entire groups of children, and spoke of the potential for matters to get far worse as access becomes far easier to technology that allows the generation of deepfake images, often involving nudity.

She told the committee members the issue “leaves you as policymakers with an enormous and urgent challenge that is growing at pace. There are no quick fixes but a meaningful solution will involve legislation, regulation, education and innovative approaches.”

The experts generally agreed that regulators currently have a variety of useful tools available to them in relation to online safety, but that enforcement needs to be far stronger.

Dr Johnny Ryan of the Irish Council for Civil Liberties said one key thing politicians could and should do is ensure that a rule being considered by the regulator, Coimisiún na Meán, requiring recommender systems to be set to “off” by default where children are concerned, is formally adopted and implemented.

“Ireland can and should lead the world on this,” he said. “That rule has to be binding.”

Professor Barry O’Sullivan of UCC said online age verification, or the lack of it, is a key challenge.

“If the online world was a nightclub, then someone just rocks up and says ‘I’m all right boss,’ and they just say: ‘In you go.’ The problem is that there is no real way for the companies to verify that people are the age they say they are. The other problem is that there is no technique for ensuring the content that children get is age appropriate. I think we really do need to try to look at that.

“So of course, the technology companies are a problem. But we also need to reflect on ourselves and ask the question: Where does the content come from? I agree with everything that’s been said about the recommender system challenges, but society as a whole in some senses is complicit in this kind of thing. Unfortunately, there are young people and older people generating content that’s really, really poisonous.”

Emmet Malone is Work Correspondent at The Irish Times