
Is AI negligence the next legal fault line for PI insurers?


David Pryce, senior partner at Fenchurch Law, explains why unchecked reliance on artificial intelligence is already exposing legal professionals to negligence risks and potential gaps in insurance cover.

As generative artificial intelligence continues to revolutionise how businesses operate, the insurance industry is navigating a fast-changing landscape. 

AI’s potential to increase efficiency, while undeniable, also raises serious questions about risk, responsibility, and the value provided by legal professionals.

A year ago, if you’d asked me about AI, I would’ve said: “We’re a small firm, we’ll wait and see what the bigger players do.” But within about eight months, that completely changed.

Why we changed

What drove the shift? The realisation that clients are unlikely to continue paying for tasks that AI can perform quickly, cheaply, or even for free.

At Fenchurch Law, we began a firm-wide review of every task we perform, categorising our work into five broad types of data interaction: capturing, retrieving, processing, analysing, and creating. 

This framework helps us identify where AI can take on routine functions and where human expertise adds the greatest value.

The focus is simple: use AI to free up more time for meaningful, high-value client engagement. 

If a task isn’t central to what clients truly value, it should be automated wherever possible. 

For the work that is core, AI should be used to enhance how we operate, not to replace us. The ultimate aim is to spend more time applying judgment, creativity, and specialist knowledge: the qualities only humans can bring.


The profession has already moved beyond the question of whether AI will have an impact; it already has. The real focus now is how to integrate AI into processes in ways that strengthen service and build deeper trust with clients.

But here lies the blind spot: unchecked reliance on AI-generated work by legal professionals is already being treated by the courts as amounting to at least negligence.

Legal blind spot

A recent Divisional Court decision concerning two separate cases brought the issue into sharp relief. 

In the Ayinde case, a barrister submitted AI-generated submissions riddled with fabricated citations. In another, the Al-Haroun case, a solicitor failed to detect AI-related errors in a witness statement. 

The consequences were serious, ranging from regulatory sanctions to possible contempt proceedings. 

The court’s language, describing the barrister’s conduct in the Ayinde case as “improper and unreasonable and negligent”, arguably understates the gravity.

Where professionals know that AI can hallucinate, sending out unchecked work arguably crosses the line from mere negligence into recklessness.

That distinction matters enormously. Negligence can still fall within the protective scope of professional indemnity cover. 

Recklessness, by contrast, risks leaving professionals uninsured. As I often say, when you delegate a task to a junior colleague, you don’t sign it off without reviewing it. 

It’s the same with AI. You must check the output; you’re still accountable.

Coverage under strain

A professional indemnity policy may cover claims arising out of AI-related errors, provided the professional has taken reasonable care in using AI, but the line is a fine one.

Sending AI-generated work to a client without proper review could be seen as reckless. If so, exclusions such as “reasonable precautions” clauses (which apply only to conduct that is reckless rather than negligent) come into play, potentially voiding cover.

Furthermore, recently developed cyber exclusions, which tend to be broader in scope due to the need to exclude silent cyber risks, may unintentionally exclude cover for professional indemnity claims arising from the use of AI.

Policyholders grappling with AI-related risk must be alert to these coverage gaps, because the risks are not hypothetical: they are already being tested in court.

Crucially, the Divisional Court also observed that responsibility does not rest solely with individual professionals. Both the Ayinde and Al-Haroun cases raised concerns about whether those responsible for supervising the lawyers involved had done enough to ensure that professional and ethical obligations were complied with when using AI.

It follows that managing directors/partners of law firms and heads of chambers themselves must ensure that teams are trained to use AI safely, processes are in place to catch errors, and ethical standards are upheld. Simply blaming the tool will not suffice.

That is why responsible AI use is not just about risk management. It is about preserving trust with clients, regulators, and insurers.

If AI is used merely as a shortcut to reduce time and cost, mistakes will proliferate and confidence will erode. But if AI is harnessed to raise standards, reduce human error, and deliver better outcomes, it has the potential to strengthen trust across the sector.

Regulation by litigation

Regulatory frameworks may struggle to keep pace with the exponential speed of AI development. 

In my view, it is more likely that the boundaries surrounding the use of AI by legal professionals will be developed on a dispute-by-dispute basis through case law. 

And the trajectory is clear: unchecked reliance on AI will not be excused as a technical glitch. It will be judged as a professional failing, with all the liability and insurance implications that entails.

For insurers, this moment echoes the early days of cyber risk. The technology is here, and its risks are unavoidable. The question now is not whether to adapt, but how quickly (and how responsibly) we do so.
