This article was paid for by a contributing third party.
Giving AI arms and legs to move insurance forward
At a recent webinar hosted by Insurance Post in association with Hyland, insurers explored how agentic AI could reshape underwriting, claims and fraud – and the challenges of keeping humans in the loop.
Panellists:
Paul Hollands, chief data and AI officer, Axa UK
Tom Clay, chief data scientist, Covéa Insurance
Jon Whitear, speciality sales EMEA and APAC, Hyland
As insurers grow comfortable experimenting with generative AI, attention is already shifting toward the next evolutionary step: agentic AI. Where today’s tools create text, code, and images on command, agentic systems promise to act: to reason, plan, and execute tasks autonomously across underwriting, claims, and fraud. It is a tantalising prospect, but one that raises as many questions as it answers.
In a recent webinar, hosted by Insurance Post in association with Hyland, leading industry voices explored both the promise and the complexity of handing more agency to artificial intelligence. This article explores several key themes that emerged from the discussion.
Anatomy of an AI agent
Hyland’s Jon Whitear offered a concise definition of an AI agent: “An AI agent is something that has autonomous capabilities within it. It’s not there to create new content, but it is able to take decisions that don’t necessarily rely on a human prompt.”
Tom Clay at Covéa paints a vivid picture: “It’s AI that’s got arms and legs, metaphorically – it can do things for you.”
Paul Hollands of Axa UK sees agentic AI as part of a continuum stretching from classic machine learning to automation, generative models and finally orchestration.
“We’re probably all at various stages applying AI and automation capabilities. The evolution from that is moving away from discrete tasks toward things that can take place autonomously,” he explained. “That collection of capabilities underneath is what makes up the agentic capability.”
In Hollands’ view, the journey is about shifting from processes “designed for humans with computers” to processes “designed for computers augmented by humans.” Designing around the machine rather than the human is what makes agentic AI so radical.
First steps
Axa is treating agentic AI not as a side project but as a pillar of its long-term transformation plan. “We’re setting out our AI vision through to 2029,” Hollands revealed.
“Agentic is one of the major plays within that. The big assumption behind the vision needs to be tangible, believable and something we can prove quickly.”
That proof of concept is emerging in claims. Axa is rebuilding elements of its bodily injury settlement process through an agentic lens. “We’re looking at whether you can reinvent the process so that agents run the tasks and orchestrate over the top of them,” Hollands said.
Integration with Guidewire and scalability testing will determine whether this model can hold at enterprise scale.
Covéa’s Tom Clay sees parallels. His team has treated generative AI as a “foundation,” moving from tools that “inform and assist” to those that can “automate and decide.” Yet the cultural dimension remains as critical as the technical one. “We’re broker-driven, so we take very prudent steps,” he explained.
“We focus where we see the most friction – onboarding clients, customer interactions, places where there’s the most energy in the process. We started with generative assistants, and now we’re asking: if we gave those assistants arms and legs, what would they do?”
For Hyland, which works across markets and sectors, the picture is uneven. “The examples we’ve heard today are closer to the point of the spear,” Whitear said.
“Predominantly, customers we’re engaging with are still utilising generative AI to get more detail at the surface level. They see the agentic journey as steps three, four and five – something to move on to once they’re comfortable with the early stages.”
That caution is understandable. In financial services, the faster one moves, the more one stands to gain – but also the greater the risks one must manage.
Finding its stride
The industry’s focus on efficiency is shifting to a broader conversation about augmentation. Clay admits Covéa’s first instinct was “heavily on the efficiency side – how can we tidy up friction-filled processes?”
But the company’s perspective is evolving: “What can we do that’s more value-add? AI shouldn’t just be seen as something to cut time or increase capacity – we want it to be understood as a real enhancer.”
He points to a growing appreciation of augmentation: “Tools like Copilot are good at most things but not necessarily great at all of them. It’s causing us as an AI team to rethink how we build products – I foresee more success in smaller models trained on specific data.”
Hollands argued that after the initial hype, generative AI has “come crashing down the other side of the hype cycle. It lives in splendid isolation” – impressive in proofs of concept, but largely disconnected from the enterprise workflows of underwriting, claims and customer service.
“Copilot is a great augmentation tool,” he added. “It makes you more effective. But is that an efficiency play you can bank? Absolutely not.”
Instead, he frames agentic AI as a route to “transform service, transform process and give a significantly more effective customer and colleague experience.” The opportunity is as much about better outcomes as cost reduction – and about reshaping the insurer’s cost base to reinvest in growth and service.
Both insurers emphasise the cultural journey. Clay calls it “a really heavy cultural development curve,” while Hollands talks of an emerging skills frontier. “Our workforce in two, three, four years is going to look spectacularly different if agentic becomes what we believe it could be,” he said.
“What are the new roles? People who sit over these agentic processes and assure the output – that’s a skill set no one’s yet written down.”
Treading carefully
For an industry defined by its aversion to uncontrolled risk, the prospect of autonomous AI decision-making demands rigorous oversight.
Hollands acknowledges the irony and draws a distinction between the “informative” nature of generative AI and the “instructive” character of machine learning and agentic systems. His goal at Axa is consistency and explainability.
“If each panellist today got a medical report, how each of us would read it could give four different answers,” he explained. “What we hope for – and are starting to prove – is that the consistency you get is actually much higher.”
Axa keeps humans in the loop, halting the process at key points to verify recommendations, learning and iterating as confidence grows. Accuracy scores are already “in the high 70s and 80s,” he added, “which is good enough when you still have a human in the loop.”
Clay, meanwhile, starts with what he calls the first guardrail: “Teaching people how to prompt properly.” Frustration with tools like Copilot, he observed, often stems from poor prompting.
“We really emphasise that training, which needs to be turbo-charged when you start thinking about agentic AI because it can potentially do so much more damage.”
Beyond that comes explainability. “If you get really complicated orchestration services, it’s hard to work out why it did what it did,” he said. “So building in as much explainability as possible – we’ve started by customising the agent to be verbose about why it’s done something.”
Whitear reached back to the dot-com boom for an analogy. “Then, if you created a bad web page, it didn’t matter. Now, if you create a bad initial agentic AI, it could be catastrophic.” He sees prompt engineering as both the key skill and the new governance challenge.
“It takes a set of skills that can be learned. But whose responsibility is it? The business expert who understands the process, or the IT function who knows how to write the prompts? Matching those two skills in one person is almost a unicorn.”
Turning to regulation, Hollands notes that Axa maintains “an ongoing conversation with the regulator” focused primarily on consumer duty and data protection. “That’s absolutely the right thing to do,” he said. “It gives real clarity. But as capabilities evolve at pace, there will undoubtedly be a shift.”
Clay agrees: “Transparency is the cornerstone. Consumer duty is already a strong mechanism, as is data protection. It’s not the Wild West.”
Whitear expects regulation to tighten only after something goes wrong, which is a familiar pattern. “Because we haven’t yet seen real-world examples of agentic AI failing, it’s being left relatively open.”
Staying on your feet
We asked the panel a hypothetical question: what happens when agentic systems fail? For Hollands, the answer is already tangible. Axa has rolled out generative summarisation tools to 800 staff, a figure soon to double.
“When that goes down, frankly you’re back to pen and paper,” he admitted. “The contact-centre teams are up in arms when it isn’t working. They love it. It enhances their roles. They want more.”
That enthusiasm underscores how quickly AI tools can become embedded – and how fragile reliance can be. “Operational resilience is one,” Hollands warned, “and making sure the quality of the output is trustworthy is another.”
His teams mitigate both through close collaboration between business experts and prompt engineers, iterating constantly to improve quality while cutting token consumption by up to 50%.
Clay takes a similar stance: “Agentic shouldn’t be there to replace, because if it replaces people and it fails, you really are in trouble,” he cautioned. “Double down on really good operational resiliency plans. Have people understand what to do when it goes wrong on a very practical basis.”
Whitear agrees that while the technology may be revolutionary, regulation and resilience frameworks already exist. “It’s an iteration – a powerful one, but still an iteration – of what’s been regulated before,” he said.
If autonomy is rising, should insurers audit AI outputs as they once audited human processes? Hollands believes so. “If you’re pushing a model live, there’s a lifecycle around that: monitoring outputs, checking they do what they’re meant to as the business context changes,” he explained.
“We’re strengthening our MLOps and starting to think about AIOps and LLMOps – evolving what we’ve got rather than inventing something entirely new.”
Clay echoes that sentiment but starts with culture. “Audit for us begins with healthy scepticism about anything an agent produces,” he said.
Covéa has introduced an “AI champion” model, giving employees a clear route for escalation if something feels off. “Healthy scepticism has always been a phrase we push in our training,” he added.
Moving forward in step with AI
The discussion reflected the industry’s cautious optimism about agentic AI. As insurers move from generative tools that “inform and assist” to systems that can “reason, plan and execute tasks”, the challenge is one of balance: move too fast, and the risks multiply; move too slowly, and competitive advantage can slip away. Agentic AI may soon have the “arms and legs” to run claims, underwrite risks and chase fraud, but its adoption must be carefully managed, with human oversight, robust governance and workforce readiness.
Both culture and operational resilience remain critical. Insurers must foster “healthy scepticism” among staff, provide training in prompting and oversight, and establish clear escalation paths. Explainability, auditing, and monitoring outputs will be key to ensuring autonomous systems remain reliable and aligned with business objectives.
For Axa’s Paul Hollands, the aim is to “transform service, transform process and give a significantly more effective customer and colleague experience.” For Covéa’s Tom Clay, it is about removing friction, not replacing people, and ensuring adoption is gradual and value-driven. If implemented thoughtfully, agentic AI can deliver not just efficiency, but improved outcomes, stronger customer experiences and a reshaped workforce prepared for the next phase of AI in insurance.