Special Reports
A special report is content that is edited and produced by the special reports unit within The Irish Times Content Studio. It is supported by advertisers who may contribute to the report but do not have editorial control.

Before and after AI

Artificial intelligence brings many benefits, but also risks that must be managed

Artificial intelligence should be treated as an assistant rather than a teacher. Photograph: iStock

When ChatGPT launched in November 2022, it saw the fastest uptake of any consumer software application in history. A host of competing products were launched in its wake, and it is widely credited with spawning the AI boom that continues to this day.

Ken Anderson, CTO of Tashi Gaming, stresses the importance of how we approach and use AI.

“AI as an assistant is fine, but some people are treating AI as a source of truth, or an educator or a teacher. AI is more like an assistant who has access to a lot of information quickly but isn’t necessarily as smart as a human would be with that same information,” he says.

There are a number of concerns about AI, and one of them is hallucinations, where an AI trained on bad data is convinced that it is correct and will give you the wrong answer. If people start to depend on those answers, they can very quickly be led astray.


Anderson is aware of how plausible AI can sound. “AI can give you a very reasonable answer and can also articulate how it came to that answer in a way that would convince a human. We call those hallucinations. There was one recently that was a forced hallucination, where Reddit basically played a prank on AI, specifically on Google’s AI search.”

In this example, Reddit users seeded conversations with bad data, so that if you searched for “How long can you stay in the air when running off a cliff?”, the response was: “As long as you don’t look down.”

The other issue is the feedback loop of AI, as Anderson explains. “People are using AI to build content. Now, at what point is AI consuming its own output to train itself on the next generation of outputs? So, if AI is generating the content that we’re seeing today, and then AI is also listening to all of the social posts and consuming and processing, then at some point it’s just AI talking to itself and humans aren’t even involved.”

And that is not necessarily a good thing, he says. “Because AI needs more human input than AI input, to get better.”
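
Anderson’s feedback loop can be made concrete with a small thought experiment. The Python sketch below is purely illustrative and not from the article: a toy “model” that only estimates the mean and spread of its training data is refitted, generation after generation, on samples of its own output, with no fresh human data; every name and number in it is an assumption chosen for the demonstration.

    # Illustrative sketch only: a toy model repeatedly trained on its own output.
    import random
    import statistics

    random.seed(0)

    # Generation 0: "human" data with plenty of variety.
    data = [random.gauss(0.0, 1.0) for _ in range(25)]

    for generation in range(1, 31):
        mu = statistics.fmean(data)      # "train": estimate the mean
        sigma = statistics.stdev(data)   # and the spread of the data
        # The next generation's training data is the model's own output.
        data = [random.gauss(mu, sigma) for _ in range(25)]
        if generation % 5 == 0:
            print(f"generation {generation}: spread = {sigma:.3f}")

Because each generation re-estimates its parameters from a small sample of its own output, the estimated spread drifts and, over many generations, tends towards zero: the toy model slowly loses the variety of the original human data, which is the kind of collapse Anderson warns about when AI is left talking to itself.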

Anderson gives the example of experienced coders versus newer ones. Experienced coders can use AI to write code and specs, but they have the experience of having written code by hand, which allows them to spot mistakes and tell good code from bad.

“New coders don’t have that experience to be able to judge good or bad code.”

Martin Duffy, head of generative AI (GenAI) at PwC Ireland, believes the pace of AI and GenAI adoption is set to increase.

“According to our most recent GenAI Irish Business Leaders survey, there is significant innovation and activity afoot to enable a surge in AI adoption in the years ahead: for example, 86 per cent of survey respondents confirmed that they are at the early exploration, testing or partial implementation stages of AI adoption, up from 54 per cent just six months ago. Many organisations are realising the opportunities that AI and GenAI can bring and are looking to embed the technologies into their business operations, but are also realising it takes time and can be a complex process.

“According to the survey, key uses for GenAI in the next 12 months will be cyberdefence, IT development, improving collaboration, sales and marketing, and enhancing supply chains,” says Duffy.

PwC’s latest GenAI Business Leaders survey also highlights the threats of AI and GenAI: an overwhelming majority (91 per cent) of Irish business leaders believe that GenAI will increase cybersecurity risks in the year ahead; nearly three-quarters (74 per cent) of survey respondents are of the view that GenAI will not enhance their organisation’s ability to build trust with shareholders in the next 12 months; and fewer than three in 10 (28 per cent) stated that they are confident that the processes and controls over GenAI in their organisation lend themselves to safe and secure outcomes.

GenAI also has a role to play as part of organisations’ cyber defences, says David Lee, CTO of PwC Ireland. “Cybersecurity software providers are increasingly deploying GenAI-based capabilities in their products to help organisations protect themselves from external threats. Indeed, the use of GenAI as part of cyber defences was one of the most popular use cases highlighted in our recent GenAI Business Leaders survey.”

Another issue facing AI is the centralised nature of the software. Richard Blythman, co-founder of NapthaAI, sees decentralisation as the key to reducing potential manipulation. He had worked for a number of large AI companies and was concerned about issues such as data surveillance, which led him to co-found the decentralised AI company, which has recently raised more than US$6 million (€5.5 million) in pre-seed funding internationally.

“I use ChatGPT all the time and it definitely improves my productivity in terms of writing emails, code and even ideation and brainstorming. But on the downside, every time you use OpenAI your data becomes part of the system and the AI will eventually be able to do your job better than you can. The importance of decentralising AI lies in keeping your data and intellectual property private.

“Decentralised AI has its benefits, but it’s also not without its own risks. For example, some people fear that AI could become very powerful and act against human interests. If such an AI were decentralised, it would be much harder to shut down, similar to how Bitcoin operates. There’s no kill switch for Bitcoin, and a decentralised AI could be equally resilient. It’s crucial to consider both the positive and negative implications and to develop this technology thoughtfully,” says Blythman.