Out-Law Analysis

How AI use in cyber attacks affects insurers’ risk exposure

Lloyd’s of London building. Photo by Vuk Valcic/SOPA Images/LightRocket via Getty Images.


Lloyd’s of London, the insurance marketplace, is the latest institution to highlight how AI is changing the cyber risk landscape.

Its report chimes with a warning published earlier this year by the UK’s National Cyber Security Centre (NCSC), as well as public disclosures by two prominent AI developers about how their AI models have been targeted for misuse by cyber criminals.

For insurers, unpicking how the evolving AI-based cyber threat impacts their own exposure to risk is challenging. In this article, we have collaborated with Dan Caplin of cybersecurity consultancy and corporate intelligence business S-RM to consider how AI is being used by cyber criminals now, how it might be used in future, and what it means for insurers and AI developers alike.

How AI is enabling cyber crime now

It is clear that cyber criminals are already using AI to enable cyber attacks. According to Caplin, “we know this is happening because the threat actors are disclosing as much on underground forums where they communicate – we have seen a massive spike in this type of discussion in recent times. We also know that there have been people who have developed their own malicious large language models (LLMs). Broadly, these have not been very good so far, but that might change in future”.

Caplin said much of the discussion around the use of AI in cyber attacks has focused on the use of generative AI systems (gen-AI) to improve social engineering – the techniques cyber criminals deploy to influence people’s behaviour.

In a blog published in February, Microsoft acknowledged the potential of the technology in this regard. It said: “Language support is a natural feature of LLMs and is attractive for threat actors with continuous focus on social engineering and other techniques relying on false, deceptive communications tailored to their targets’ jobs, professional networks, and other relationships.”

Caplin added: “We are seeing this with phishing emails – the language and grammar used is better; the email is better constructed; and it is less obvious that the sender is a non-native speaker. This increases the likelihood that recipients will take an action they want them to take, like disclosing data or clicking on a link to malware.”

The risk has also been recognised by UK authorities.

In January, the NCSC said gen-AI, in particular, “can already be used to enable convincing interaction with victims, including the creation of lure documents, without the translation, spelling and grammatical mistakes that often reveal phishing”.

Caplin said that he had seen gen-AI used to enhance targeted phishing attacks, known as ‘spear phishing’, in the corporate world.

“AI is also likely enabling more sophisticated spear phishing attacks on senior company directors,” Caplin said. “For example, cyber criminals can scour social media platforms for information about executives’ friends or business associates and prompt gen-AI tools to craft an email that reflects those people’s backgrounds for the attention of the executives – such as to make it look like it is a former colleague that is reaching out, for example. This is again something that previously required greater human intervention and creativity.”

The NCSC said AI also “lowers the barrier for novice cyber criminals, hackers-for-hire and hacktivists to carry out effective access and information gathering operations”. Caplin agreed.

“While the major AI developers have guardrails in place to prevent their AI models being used for obvious malicious purposes such as creating malware, cybercriminals are constantly trying to find new ways to ‘jailbreak’ these systems, with varying degrees of success. They are also using gen-AI systems to help write legitimate code they might use in attacks, as well as research new techniques much more efficiently,” Caplin said. “Putting in place the building blocks for such attacks previously took a level of expertise and a lot of work, but gen-AI makes this task easier for new entrants to get involved.”

The risk of AI-enhanced cyber attacks is very real for organisations – both Microsoft and OpenAI, the business behind gen-AI tool ChatGPT, have said they have taken action to disrupt the activities of cyber criminals using their AI systems, including by terminating accounts associated with state-affiliated threat actors.

However, Lloyd’s of London said that industry safety mechanisms and AI model governance – including the controls AI developers place on the training of, and access to, LLMs – had so far “prevented widespread misuse [of AI systems] by threat actors”.

In a report published earlier this month, it said: “If users do not have full access to the models and all internal components, it is impossible to circumvent [access and use] restrictions in any meaningful way; while some ‘jailbreak prompts’ may allow a soft bypass, it is ineffective for very harmful requests. Likewise, users cannot bypass screening mechanisms if forced to interact with the models through online portals. The cloud-hosted solutions ChatGPT, Bing, and Google’s PaLM are examples of LLMs with extensive safety controls on them preventing misuse, and much of the public’s concern about the potential to use these models for harmful or illegal purposes has been mitigated by virtue of these controls.”
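
The “screening mechanisms” the report refers to sit between the user and the model where access is only available through a hosted portal or API. The sketch below is purely illustrative – the function names, the deny-list patterns and the refusal message are hypothetical, and no named provider implements its controls in this way – but it shows, in outline, how a portal can pre-filter prompts before they ever reach the underlying LLM.

```python
import re

# Hypothetical deny-list a hosted portal might screen prompts against before
# forwarding them to the underlying model. Real providers use far more
# sophisticated classifiers and policy models; this is purely illustrative.
BLOCKED_PATTERNS = [
    r"\b(write|generate|create)\b.*\b(malware|ransomware|keylogger)\b",
    r"\bphishing (email|page|template)\b",
    r"\bbypass\b.*\b(antivirus|edr|authentication)\b",
]

def call_model(prompt: str) -> str:
    # Stand-in for the provider's own model invocation.
    return f"[model response to: {prompt!r}]"

def screen_prompt(prompt: str) -> bool:
    """Return True if the prompt may be forwarded to the model."""
    lowered = prompt.lower()
    return not any(re.search(pattern, lowered) for pattern in BLOCKED_PATTERNS)

def handle_request(prompt: str) -> str:
    if not screen_prompt(prompt):
        # The request is refused at the portal; the raw model is never reached.
        return "This request has been blocked by the platform's usage policy."
    return call_model(prompt)

if __name__ == "__main__":
    print(handle_request("Summarise this quarterly report for me."))
    print(handle_request("Write ransomware that encrypts a file share."))
```

Because users of cloud-hosted models can only interact through an intermediary layer of this kind, they cannot remove or retrain the filter itself, which is why the report treats “jailbreak prompts” as allowing, at most, a soft bypass.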

How AI might enable cyber crime in future

There is widespread expectation that the risk posed by AI-enabled cyber attacks will grow as the technology improves.

Lloyd’s of London predicted that “the frequency, severity, and diversity of smaller scale cyber attacks” will grow over the next one to two years as a result of how gen-AI might be used by cyber criminals – before “plateauing as security and defensive technologies catch up to counterbalance their impacts”.

The NCSC has said that AI will “heighten the global ransomware threat”. Among other things, it predicted that cyber criminals’ social engineering capabilities would likely be further enhanced as gen-AI models evolve and their uptake increases, and it said AI “will almost certainly make cyber attacks against the UK more impactful because threat actors will be able to analyse exfiltrated data faster and more effectively, and use it to train AI models”. It also identified the potential for “the value and impact of cyber attacks” to be enhanced if cyber criminals are able to use AI to “identify high-value assets for examination and exfiltration”.

Caplin said there are increasing concerns within his client base about the potential for AI tools to generate ‘deep fakes’ of senior executives for use in sophisticated social engineering-based cyber attacks.

“Currently, if there is sufficient audio of the target individual’s voice online, it is relatively easy for cyber criminals to generate a fake new recording that mimics the victim. However, this is much more difficult with video, and only more sophisticated groups will be able to make convincing ‘deep fake’ videos. In respect of both audio and video, AI is not yet at the stage where it can realistically support deep fake interactions in real time, though this will likely change as the technology evolves,” Caplin said.

The NCSC said “human expertise” is likely to continue to be needed to carry out certain activities that cyber criminals perform, in the short term at least. It cited “malware and exploit development, vulnerability research and lateral movement” as examples, though it conceded AI could make those activities “more efficient”.

The NCSC said: “More sophisticated uses of AI in cyber operations are highly likely to be restricted to threat actors with access to quality training data, significant expertise (in both AI and cyber), and resources. More advanced uses are unlikely to be realised before 2025.”

“Moving towards 2025 and beyond, commoditisation of AI-enabled capability in criminal and commercial markets will almost certainly make improved capability available to cyber crime and state actors,” it added.

Caplin said: “AI has some way to go before it can be used for the complete automation of cyber attacks – by that I mean a ‘button click’ deployment of an attack. This is because hacking into a network requires various different steps to be taken and flexibility – something that currently requires human intervention. That is not to say this is not something we could see as the technology evolves.”

Both Microsoft and OpenAI have pledged to learn from the way their systems are used – and misused – so as to, as Microsoft described, “shape the guardrails and safety mechanisms around our models”, and to publicly disclose “the nature and extent of malicious state-affiliated actors’ use of AI detected within our systems and the measures taken against them, when warranted”, as OpenAI put it.

How does AI impact cyber insurance?

When a company is hit by a cyber attack, it can cause significant disruption to operations and have knock-on consequences – it can impact manufacturers’ production, causing a shortfall in stock promised to wholesalers and retailers under order contracts, or result in technology providers being unable to deliver promised service levels under their contracts with customers, for example.

In ransomware cases specifically, victims of cyber attacks can feel under pressure to pay a ransom to restore their access to systems and data.

As cyber risk has grown in recent years, we have seen an increasing number of companies seeking to transfer this risk to insurers by taking out specific cyber insurance policies. Where businesses fall victim to a cyber attack, they will turn to these policies and consider the extent to which they can be indemnified by their insurers for losses flowing from the incident.

Increased claims?

Given the use of AI tools to improve social engineering and to shortcut the process of writing code, as set out above, an uptick in the volume of attacks – and in the number that succeed – seems likely, if it is not already being seen. On the face of it, then, the use of AI tools in cyber attacks has the potential to create greater exposure for insurers, since the volume of claims notifications can be expected to rise in turn.

Changes to liability?

Whether the use of AI tools in cyber attacks changes the liability position – by potentially allowing victims of cyber crime to seek to recover losses from the platform provider, opening up a new avenue for subrogated recovery in the context of cyber claims – is not yet settled. A number of hurdles would need to be overcome, starting with the ability to accurately identify whether an AI tool has been used to facilitate the cyber incident that the claim relates to.

At the moment, it is very difficult to identify whether an AI tool has been used to enable a cyber attack. Even if it could be identified that an AI tool had been employed, it would be hard to pinpoint the specific tool.

Caplin said: “With phishing emails that are in plain text, you may be able to infer the use of AI from suspiciously good language and sentence structure, which contrasts with the typos, poor grammar and broken English that we see from emails written by cyber criminals based in countries where English is not the first language, but there is almost no way to prove that an AI tool has been used to enable those attacks, never mind pinpoint the specific platform used. Evidence of this kind might only emerge through the recovery of a cyber criminal’s browsing history or other digital artefacts in criminal investigations run by law enforcement agencies”.

“The only way the specific AI tooling might be identified currently would be through metadata left behind on files created by the AI platform, for example in a fake voicemail attached to a phishing email. However, it is extremely easy for cyber criminals to remove this metadata before they send such files, so identifying whether and what AI tools have been used will largely rely on their sloppiness.”
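
The kind of file metadata Caplin describes can be inspected with standard tooling. As a minimal sketch – assuming, hypothetically, a Word document attachment rather than the voicemail in his example, and using a placeholder file name – a .docx file is simply a zip archive whose docProps/core.xml and docProps/app.xml parts record fields such as the document’s creator and the application that generated it:

```python
import zipfile
import xml.etree.ElementTree as ET

# A .docx file is a zip archive; the docProps/core.xml and docProps/app.xml
# parts hold authoring metadata such as "creator", "lastModifiedBy",
# "Application" and creation timestamps.
def dump_docx_metadata(path: str) -> dict:
    fields = {}
    with zipfile.ZipFile(path) as archive:
        for part in ("docProps/core.xml", "docProps/app.xml"):
            if part not in archive.namelist():
                continue  # some generators omit one or both parts
            root = ET.fromstring(archive.read(part))
            for element in root:
                tag = element.tag.split("}")[-1]  # strip the XML namespace
                if element.text and element.text.strip():
                    fields[tag] = element.text.strip()
    return fields

if __name__ == "__main__":
    # "suspicious_attachment.docx" is a placeholder name for illustration only.
    for name, value in dump_docx_metadata("suspicious_attachment.docx").items():
        print(f"{name}: {value}")
```

As the quote makes clear, though, these fields are trivial for an attacker to strip or rewrite before a file is sent, so an empty result proves nothing and a populated one is, at best, an investigative lead.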

If the hurdle of identifying the precise AI tool could be surmounted, there remains an issue of causation. If a company that had been attacked sought to recover losses from the platform provider, the cause of the incident would still seem to be the threat actor’s actions, even where gen-AI tools were used, since the tools themselves cannot carry out attacks without human input from the threat actor. There are likely to be material differences between this type of scenario and ones where, for instance, sellers of firearms have been held liable for subsequent illegal gun use: AI tools are not, arguably, designed to cause harm in the way firearms are. However, if an AI platform provider became aware of dangerous use cases and failed to implement appropriate safeguards, it is reasonable to conclude that the situations would become more comparable.

Coverage issues

For now, the use of AI tools does not seem to raise specific coverage issues. That may change if insurers begin to seek to exclude AI-related cyber losses from their policies; if they do, it will become critical to be able to identify the use of AI tools in cyber incidents, which, as set out above, is currently challenging. AI exclusions may not be considered commercial by insurers in any event, particularly if the use of AI tools in cyber attacks becomes ever more prevalent, as that would mean many claims were excluded. Questions for the insurance market will include how to define AI and how claims arising from the use of one particular AI tool might be aggregated.

Insurers may also wish to consider how the use of AI to enable cyber attacks impacts more broadly across other insurance products they offer. For example, many technology companies take out technology errors and omissions (tech E&O) policies, which protect technology suppliers against the risk of third-party claims being raised in respect of alleged failures in the operation of their technology. If tech providers are sued for the use of their AI platforms in cyber attacks, it is reasonable to conclude that they may seek to recover any associated losses from their tech E&O insurers.

The practical effect of all this is that AI developers could find it more difficult to obtain tech E&O insurance if their technologies are being used by third parties in a way that gives rise to claims against them.
