AI threat: Stay alert and strategically adapt to outmatch attackers

GenAI makes things easier for cybercriminals, but it also allows organisations to speed up detection and bolster their cyberdefences

Deploying AI to enhance cyberdefences is just part of protecting against AI-led attacks – the human element remains critically important. Photograph: iStock

Advances in AI – and generative AI in particular – have placed new capabilities in the hands of cybercriminals. We look at this emerging trend and at how the technology can also be deployed for cyberdefence.

A new Gen(AI)

Generative AI (GenAI) – artificial intelligence that is capable of generating text, images, videos or other data using generative models, often in response to prompts – has the potential to transform industries by automating tasks, revolutionising problem solving and creating new opportunities, says Leonard McAuliffe, partner, cybersecurity practice, PwC Ireland.

“Companies that harness its power gain a competitive edge and are better equipped to navigate the challenges of the modern business landscape. A very powerful technology, GenAI can accelerate the discovery of IT vulnerabilities,” he says.

A force for good or evil?

The use of AI can greatly change many areas of global society and the economy, says Vaibhav Malik, cybersecurity and resilience partner at Deloitte.

“Many people see AI as a force that drives productivity and progress by making it easier to make decisions, creating new products and industries, and making things run more smoothly. It is also seen as a driver for economic and financial sector growth,” he adds.

“Despite this, the implementation of AI applications gives rise to several concerns regarding the inherent risks of the technology. Privacy lapses, lack of transparency regarding the generation of outcomes, robustness concerns, cybersecurity vulnerabilities and the influence of AI on overall financial stability are among these issues.”

However, Malik says GenAI could also enhance cybersecurity defences through the implementation of predictive models that identify threats and facilitate incident response. Irish companies will need to be capable of embracing new technologies while ensuring dependable security in an ever-evolving threat environment.

Progressing in leaps and bounds

As AI is getting more sophisticated, so too are the fraudsters, says McAuliffe: “GenAI can lower the barrier to entry, making it easier and faster to commit fraud. For example, with GenAI it can take seconds to generate a tool that facilitates fraud, something that previously took hours. Now the information to create, say, fraud bots is at fraudsters’ fingertips, without their having to learn how to code.

“They can much more quickly generate code and create names and fake identities, making it much easier than before to gain unauthorised access to a firm’s IT system. For example, ChatGPT can easily generate lists of fake identities that fraudsters can use for misrepresentation. Fraudsters can generate fake content far more easily than before.”

Aiding criminals

We are seeing criminals using GenAI to become more effective and quicker, says Dani Michaux, EMA cyber leader, KPMG in Ireland.

“The ransomware threat is increasing and becoming faster,” she says. “This presents fundamental challenges for security teams and organisations, which must deal with ransomware attacks that unfold faster than before. The risk is heightened but it is not new.”

Open for cyberattack

The opportunity for attack narrows each time a patch is released or installed, so cybercriminals will try to launch their attacks as soon as possible, says McAuliffe.

“Just as GenAI offers huge potential for businesses in terms of ways of working, enhanced productivity and revenues, so too does it offer cybercriminals the potential to invade a company’s IT system and steal valuable information, while reducing the time it takes them to attack.”

In a recent survey, 53 per cent of Irish respondents said they expect GenAI to lead to catastrophic cyberattacks in the next 12 months, says McAuliffe.

“Concerningly, less than half (45 per cent) of Irish respondents reported that they understand the cyber risks related to GenAI and have included them in their formal risk management plans – significantly fewer than their global counterparts, at 58 per cent.”

Learning the language of AI

Highly capable hacking groups are experimenting with AI but researchers have seen little evidence that they are generating major benefits yet, says Malik.

“There is growing interest amongst bad actors in using large language models (LLMs) for generating fake content, running social-engineering campaigns, assisting malware development, creating more sophisticated phishing emails and allowing criminal actors to assume the identities of individuals or organisations, raising the risk of identity theft,” he says.

“However, AI-based tools are not fully autonomous and require some kind of human intervention.”

As LLMs become increasingly interconnected with other digital and physical assets within the Irish financial sector, there is a possibility that hackers could devise novel methods to compromise them, Malik adds.

The key to reducing risk is to bring human critical thinking and scepticism to bear. There is no substitute for keeping humans in the loop

—  Dani Michaux, KPMG

“These include a technique known as ‘prompt injection attack’, which can get around generative AI’s rules and filters or even add harmful data or instructions.”

Protecting businesses against AI attacks

Part of the solution to protecting businesses from AI-led attacks is to deploy AI to bolster cyberdefences, but the human element of the equation remains critically important, says Michaux.

“GenAI is also being used to create more convincing phishing emails and produce new ones at an extremely rapid pace. This is not a new threat but it is potentially more potent,” she adds.

“It is not just a technology play. Humans are the ones who are best able to detect phishing emails and prevent ransomware attacks. Ultimately, the key to reducing risk is to bring human critical thinking and scepticism to bear. There is no substitute for keeping humans in the loop.”

As threat actors have started to use AI-powered tactics, cyberdefenders must stay aware of the changing threat landscape, strategically adapt to outmatch attackers and continue investing in the right technology and stealthier detection methods to increase cyber resilience, says Malik.