Blog: Responsible robots?


  • It is easier to automate low-value personal injury claims, where liability is clear cut, than complex financial lines claims
  • If robo-risk software gives flawed advice, that becomes a product or professional liability for its developer
  • If the software is scrutinised in court, the developer's intellectual property could be exposed

Automated processes are touted as the future of claims handling, but taking the process out of human hands is far from straightforward, as Ed Lewis, partner at Weightmans, explains.

Barely a week goes by without a news story about a groundbreaking advance in artificial intelligence. As a result, it’s easy to get the impression that every area of our lives and economy is on the verge of being transformed by automated decision making.

The same is true in the sphere of insurance claims handling, and the industry is investing a great deal of time and money in research into both machine learning and artificial intelligence.

As a result, some commentators have started to suggest that the death knell has sounded for claims-handling experts.

Yet, while automation clearly has exciting potential in terms of increased efficiency and speed of processing, a reality check is needed when we talk about the potential for AI to supplant human oversight of claims, especially at the more complex end of the spectrum.

First, it’s important to distinguish between machine learning, which we are already beginning to see introduced in claims processing, and true AI, which remains an unproven technology in this arena.

The former uses technology to improve processes by automating their more formula-driven elements, while the latter looks to remove human beings from decision making altogether.

To date, the automation we have seen has focused heavily on applying machine learning to lower-value personal injury claims, where liability is clear cut and the cost of resolving the claim represents a significant proportion of the insurer's total outlay.

However, the Holy Grail for claims automation is software that will automate more complex financial lines claims, and this is a challenge on an entirely different scale.

In these cases, the nature of liability and causation is much harder to unpick, and the processing cost is insignificant next to the potential cost of claims.

Consider professional negligence cases. Establishing liability here currently requires input not only from legal professionals but also industry experts in the relevant field, be it architecture, engineering, medicine, accountancy or any other profession.

It’s hard to imagine a program that could accurately model this level of complexity.

Even if software development does progress that far, there are three equally fundamental issues that will need to be resolved if AI is to begin to replace expert legal oversight: liability, intellectual property and jurisprudence.

Currently, if a legal adviser advocates a strategy that is later found to be flawed, it’s the solicitor’s professional liability policy that is at stake. If those decisions start to be taken by a piece of robo-risk software, who is liable? Is it the person running the program, the program itself or the designer who developed it?

Arguably it becomes a product liability or a professional liability for the software manufacturer. Of course, there is a potential opportunity for insurers here, as hybrid liability cover for developers of automation platforms could represent a whole new revenue stream. But, for those developing the systems, it is a significant challenge.

This brings me to the second point. If the process that delivered the decision in question is scrutinised in a legal action, it will become a matter of public record, threatening the intellectual property of the manufacturer. For tech companies, the viability of their business often depends on the integrity of their IP, so this is a significant stumbling block.

Finally, there is the question of jurisprudence, or judicial oversight. Currently, those developing automation software are guided by a need to replicate the results delivered by the current system, which is based on expert oversight and a web of checks and balances, including public scrutiny.

If legal decisions begin to be made by AI programs instead, the common law precedents used to decide cases will drift further and further from the judicial benchmark and, rightly, questions will be asked as to whether the software is still delivering the right results.

Ultimately, no matter how advanced the technology gets, these constraints mean that some form of human judgement will always need to remain a big factor in claims resolution. The robots won’t be taking over any time soon.

 

Insurance Post Claims Club

Join the next Claims Club meeting on Friday 24 November at etc venues Monument, London.

Attendance is exclusively reserved for Post Claims Club members.

Find out more at www.postclaimsclub.co.uk
