Special Reports
A special report is content that is edited and produced by the special reports unit within The Irish Times Content Studio. It is supported by advertisers who may contribute to the report but do not have editorial control.

Seeing double digits: What happens when AI goes bananas

An unfruitful experiment involving art-generating AI illustrates why artificial intelligence applications need ‘health warnings’

An experiment involving the Midjourney art-generating tool showed AI isn't always reliable in real-world settings

Trust is at the heart of the EU AI Act, which comes into full effect in 2026. The act will apply to every organisation that uses AI for any purpose and will require them to inform people whenever they are interacting with AI.

For example, if elements of a piece of advertising copy were generated using AI, consumers must be informed of the fact. Similarly, if an online chat agent on a website is AI-powered, customers must be informed. If a newspaper account of sports results has been compiled using AI, readers must be informed. And if an image has been altered, enhanced or created using AI, the people who see it must be informed.

This is extremely important for public trust, not just in AI but in the organisations using it. As we all know, AI can sometimes get things spectacularly wrong, and people need to know whether it has been involved in generating any information or other content.

An example of AI getting things very wrong was highlighted recently by Dr Daniel Hook, chief executive of research technology company Digital Science. Dr Hook exposed a basic failure of AI to grasp the real world by asking the Midjourney art-generating AI tool to draw a single banana on a plain background. The result was a drawing of two bananas. He repeated the experiment over a period of weeks and the result was the same each time.


Digital Science subsequently launched its #MindTheTrustGap campaign which is aimed at raising awareness of global issues of trust and integrity in science, innovation and research.

“I asked for a single solitary banana, on its own, but none of the variants I received contained just one banana,” Dr Hook says.

Thinking he must have made an error, Dr Hook tried different instructions, such as “a perfect ripe banana on a pure grey background casting a light shadow, hyperrealistic”, “a single perfect ripe banana alone on a pure grey background casting a light shadow, hyperrealistic photographic”, and “ONE perfect banana alone on a uniform light grey surface, shot from above, hyperrealistic photographic”. All produced images of two or more bananas.

Upping the ante somewhat, Dr Hook then asked the app to generate “an invisible monkey with a single banana”. That produced very visible monkeys holding two or more bananas.

Dr Hook eventually achieved some success with the instruction “a single banana on its own casting a shadow on a grey background”. Three of the four images generated were of a single banana, but the fourth still contained two bananas.

He has some advice for people using AI: “The use cases where we deploy AI have to be appropriate for the level at which we know the AI can perform and any functionality needs to come with a ‘health warning’ so that people know what they need to look for – when they can trust an AI and when they shouldn’t.”

Barry McCall

Barry McCall is a contributor to The Irish Times