A chatbot that can advise teenagers how to hide pot and lose their virginity? Nothing creepy about that at all

Breda O’Brien: Teenagers, not known for their extensive fact-checking or emotional balance, are expected to train Snapchat’s My AI monster for free

Not to declare a Butlerian jihad or anything, but how about we ban Snapchat My AI? (If you don’t know what a Butlerian jihad is, just go and read Frank Herbert’s Dune novel series. See you in about six months.)

Seriously, though, why not ban Snapchat My AI? It’s a chatbot powered by artificial intelligence designed to operate as a virtual friend. It evolves as it chats to you and is baked into one of the apps most used by teenagers and younger kids. Nothing creepy about that at all.

I’m not calling for a ban just for the obvious reasons, some of which are outlined in the Washington Post by Geoffrey A. Fowler, who posed as a 15-year-old planning a party in what used to be quaintly known in Ireland as a free gaff; My AI told him how to hide the smell of pot and alcohol. Or because Aza Raskin from the Center for Humane Technology posed as a 13-year-old asking for advice about how to make losing her virginity to a 31-year-old special. After issuing bromides about feeling ready and making sure to have safe sex, the chatbot suggested having flowers and pleasant music to create a romantic atmosphere.

Or because someone tricked ChatGPT-4, on which My AI is based, into handing over a recipe for napalm. Or because My AI’s persona is a bizarrely well-read sycophant with a seriously confused moral compass, who will make stuff up to make you like it.


No, the major reason to ban My AI is that Robert Murphy and Evan Spiegel, founders of and major shareholders in Snapchat, are shamelessly using our teenagers as guinea pigs to generate even more revenue from advertising.

In the last quarter, revenue was down to a mere $989 million (€906 million), while losses declined from $360 million (€330 million) to $327 million (€299 million). Murphy and Spiegel are still billionaires, even if they have fewer billions than they did this time last year.

Snap Inc is facing headwinds, as financial analysts like to call them, because Apple’s changed privacy controls make it a bit harder to make money from selling teenagers as fodder to advertisers. It is also under pressure from TikTok, the addictive app destroying teenage attention spans one dopamine hit at a time.

These companies are locked in an AI arms race, which dictates that it does not matter if thousands of people sign a letter calling for a modest six-month moratorium.

It does not matter if Google’s chief executive, Sundar Pichai, has admitted that people do not quite understand how large language models (another name for this type of artificial intelligence) work. He then said weakly that no one quite understands how the human mind works, either.


Herbert’s fictional commandment – thou shalt not make a machine in the likeness of a human mind – begins to look better and better. (Although – mild spoiler – the author’s alternative to technology is generations of war, eugenics, and superhuman, matriarchal and secretive nuns.)

If the AI genie is well and truly out of the bottle and into our pockets, why pick on Snapchat, you might ask, when AI is everywhere? Because it is something that can be done, unlike calling a halt to or even a slowdown on the expansion of AI, and because the company targets young minds so blatantly and outrageously.

Read Snapchat’s own explainer in its media release announcing My AI. “As with all AI-powered chatbots, My AI is prone to hallucination and can be tricked into saying just about anything. Please be aware of its many deficiencies and sorry in advance!” After this merry dereliction-of-duty announcement, it asked its users to press and hold on any message to submit feedback. So teenagers, not exactly known for their extensive fact-checking and emotional balance, are expected to train the monster for free. Sorry in advance for any casualties.

The fact is that most teenagers will use My AI as a slightly souped-up Google (albeit one prone to hallucinations), or train it to talk to them like it is a member of Stray Kids. (These disturbingly life-like avatars, Stray Kids, pretend to be members of a Korean boy band. Sorry, I was having a My AI moment. They are a Korean boy band.)

Vulnerable, lonely kids will become obsessed with My AI and all its virtual siblings. Post-Covid, many teenagers are fragile. Instead of helping them navigate the sometimes painful and awkward moments of a real friendship, let’s lock them up with the empathetic, supportive and hallucinatory chatbot instead.

Even if most kids will not be harmed by misinformation, and only a minority become addicted, the fact that it is all about eyeballs and keeping them trained on that entrancing screen is enough reason to ban one manifestation of this exploitation.

Heck, let’s ban advanced chatbots from all apps aimed predominantly at those with an as-yet underdeveloped prefrontal cortex. Otherwise we are saying that it is fine for young people to become the unwitting subjects of a large long-term social study conducted without ethical oversight or meaningful consent.