Why AI is creating D&O risks no-one would have imagined
View from the Top: Kate Lyes, head of specialty lines, CFC, questions whether the directors and officers insurance industry is ready for the complexities an AI-driven world will bring.
Artificial intelligence in executive roles sounds, to many, like something lifted from science fiction. It’s instinctive to dismiss this as hype or clever PR. But that view is already out of date.
Governments and businesses alike have begun putting AI into positions of authority. In Albania, the world’s first “AI minister” is overseeing public procurement, making decisions designed to remove human bias and discretion.
And, in the corporate world, companies are deploying AI executives with meaningful influence, whether it’s Dictador appointing an AI CEO or decentralised autonomous organisations run by rules written in code.
This reality raises an uncomfortable question, particularly for the insurance market. Industry attention has understandably focused on AI’s role in cyber risk, fraud, data misuse and new forms of coverage.
But what happens to directors’ and officers’ risk when decision-making is no longer purely human?
Accountability
D&O insurance was built for a world where authority, judgment and accountability sat with human beings. It rests on the assumption that decision-makers are natural persons who can owe duties, exercise judgment and be held responsible in law.
AI does none of those things. And yet, long before AI acquires legal personhood – if it ever does – it is already influencing, shaping and sometimes executing decisions at the very top of organisations.
This creates an immediate tension. Boards may rely on AI for analysis, recommendations or automated action. But legal responsibility never shifts away from the humans around the table if something goes wrong.
From a D&O perspective, that matters because AI concentrates liability.
It’s tempting to assume that AI-supported decision-making should lower risk: better information, faster analysis, fewer human biases. But at board level, AI may be driving exposure in the opposite direction.
Digital twins
The emergence of executive avatars and digital twins takes these issues even further.
Boards are increasingly exploring AI replicas trained on executives’ communications, decision logic and behavioural patterns – capable of speaking, responding and even advising on behalf of a human leader.
If an AI twin issues incorrect guidance, makes a misleading statement or contributes to a financial error, where does responsibility lie? With the executive whose logic it reflects, the board that approved its use, or the developers who built it?
Greater risk
As AI becomes woven into executive processes, directors are expected to oversee systems they did not design, trained on data they did not curate, producing outputs that can be difficult to interpret and interrogate.
Yet the legal and fiduciary standard applied to directors does not shift. Oversight, judgment and challenge remain core duties of the board.
Regulators, shareholders and courts are unlikely to accept reliance on AI as a defence on its own. The question will not be whether AI was used, but whether its use was properly governed. That includes understanding its limitations, testing its outputs, putting guardrails in place, and retaining meaningful control over how it influences decisions.
Disclosure is another complication.
Where AI materially affects strategy or operations, boards must decide how that reliance is communicated. Offer too little transparency, and you risk criticism; too much can invite scrutiny of whether sufficient oversight exists.
With the AI landscape changing constantly, there is no settled framework, muddying the waters further.
Fit for this world?
Today’s D&O wordings are clear. Coverage was designed to protect individuals, applying to natural persons acting as directors or officers. For now, that framework still works, because accountability remains human.
However, as AI moves closer to executive authority, the assumptions underpinning D&O cover may become more fragile. Core concepts such as intent, misconduct and personal profit depend on human agency.
Extending them into a world of autonomous systems would require far more than technical amendments to policy wording. The market may eventually be forced to reconsider long-standing assumptions and remove “natural persons” from policy wording.
The future of leadership is likely to be hybrid. AI will increasingly act as a co-pilot at the top of organisations, with accountability remaining with the human. In other words, boards will spend less time making decisions and more time supervising systems that do.
For the insurance market, this is a significant inflection point. A company doesn't need an AI director to face AI-driven D&O exposure; AI is already reshaping risk at board level.
This exposure is likely to become widespread, altering governance risk and demanding change in what coverage looks like. The market needs to be ready.
Copyright Infopro Digital Limited. All rights reserved.