The question of liability for autonomous vehicle claims

[Image: An autonomous vehicle dial with the needle pointing more towards self-driving than driver assist]

  • A team of Chinese technicians from Keen Security Lab was able to hack into a moving Tesla S
  • Central to the debate over who is responsible is a need for clarity and an understanding of the technology involved
  • By knowing what the limitations of the technology are and how it works, insurers can better assess risk

The recent Tesla hack has moved the liability spotlight onto software providers

Autonomous cars are no longer a futuristic idea and, as technology races forward, insurers need to ensure that they aren’t left stuck in the slow lane.

Companies like Ford, BMW and Tesla have already released, or begun testing, vehicles with automated driving features on the road, and many cars already feature advanced driver assistance systems such as lane-keeping, traffic assistance and self-parking.

While these technologies are often portrayed as a disruptive force heralding a new era of transport and social inclusion, the coming automotive revolution presents new challenges for the insurance industry.

In recent months there have been several reports that Tesla automated vehicles have been involved in incidents in the US relating to the use, or misuse, of automated driving features, raising concerns as to the difficulties in establishing liability. If a partly automated vehicle crashes into another car, who is to blame: the ‘driver’ or the manufacturer?

The transfer of risk and liability
As control of these vehicles shifts from human to computer, it is possible that liability will follow that shift. There is, therefore, the potential for the vehicle manufacturer or even the software provider to become liable for an accident, as opposed to the driver, if the driver is unable to override a faulty system.

This debate became very real in September, when a team of Chinese technicians from Keen Security Lab was able to hack into a moving Tesla S, gaining control of the car’s brakes, door locks, dashboard computer screen and several other electronically controlled features, effectively turning the vehicle into a moving ‘weapon’.

Kurt Rowe, associate solicitor for insurance market affairs, technology and emerging risks at Weightmans, said: “What we see in this particular case is an individual being able to gain access to a vehicle’s safety-critical systems. In theory, once a hacker can access these systems, they can take full control of the vehicle.

“It raises the question for insurers as to what would have happened if the hacker had wanted to cause real harm and damage.”

Rowe added: “It will all depend on how the hacker was able to gain access to the vehicle. The hacker is obviously responsible, but if the hack was in any way preventable, then it is either the manufacturer or the software developer who is at fault for not adequately protecting and updating the affected systems.”

Who is in the driver’s seat?
Central to this debate is a need for clarity and an understanding of the technology involved, said Dan Freedman, director of motor development at Direct Line Group.

“It has to be clear exactly what we’re talking about,” Freedman said. “It’s going to be a gradual change but this technology is coming. The industry has a huge role to play and as insurers we should all be promoting this technology.

“If insurers are to properly understand the risks involved with driverless cars then we need an understanding of exactly what technology is on board the vehicle.

“As technology moves away from ADAS and towards automated driving, we see systems that have the capability to do the driving. By knowing what the limitations of this technology are and how it works, we can better assess risk, something that is particularly crucial from an underwriting point of view.”

Freedman believes that at a very fundamental level, insurers need to know who or what is in control of the vehicle.

“We need an understanding of who is driving the car. Is it a computer and robot or is it a human?” said Freedman. “It is vital that insurers understand what technology is on board the vehicle. After all, we’re dealing with a question of liability and the extent to which the technology is at fault.”

For insurers to establish the extent to which the technology was at fault, vehicle manufacturers would need to share data with them. Just as with conventional traffic incidents today, information will be the deciding factor in determining liability. In the case of automated driving systems, that information will be collected by the car itself.

Manufacturers already receive data through automated systems and connected devices installed into their cars. It is this data that Freedman believes insurers need access to and which will increasingly be key in understanding the cause of an accident, and in determining liability.

When knowledge is power
Freedman commented that it is essential that manufacturers and insurers work together to introduce cost-effective access to the data, to allow insurers to accurately and quickly determine liability.

Taking this view further, Freedman said that action is needed at a European, if not global, level to create a legislative framework to support data sharing.

“Very simply, insurers are looking for the basics, such as whether a robot or a human is driving the car. It’s a very simple request, but a crucial one when determining liability,” said Freedman.

“As insurers, we would obviously like to know everything and have access to all of the data available. However, in reality, this is both unrealistic and unnecessary.”

Freedman believes the information received through the car will be the most valuable asset in determining liability. This information would effectively place the vehicle in a position where it could act as a ‘witness’, detailing the events of an incident using the data stored in the car.

“The car should be treated as a witness and be expected to provide a kind of ‘statement’. We shouldn’t lose sight of the fact that a car’s data has more to give. But at the very bottom level, insurers just need the basics.”

David Williams, technical director at Axa, agrees insurers need to have access to the basic levels of information to determine liability. However, Williams foresees potential issues regarding data access and data ownership.

“Some of the German manufacturers say they own the data and that’s that. As far as we are concerned, the data belongs to the customer,” said Williams.

“This data would not only be needed by insurers but the police and other emergency services will need access to the data so that they can determine what happened in the event of an incident.

“We’re dealing with an extremely subjective subject. But it is one that needs clarity and answers.”

Without access to the data collected by automotive systems, insurers have little hope of being able to understand the cause of an accident, and determining liability could be almost impossible, Williams said.

He acknowledged that some manufacturers have said they will accept liability for accidents where their autonomous systems are at fault, but said that others have not.

“At the moment, we’re getting a differing message from the motor manufacturers,” said Williams.

“Volvo is one manufacturer that has come out and said it will take responsibility for any incidents caused by technical failures in its cars. On the other side, we have companies like Tesla that have said that all it will do is provide technical systems and its responsibility ends there.”

Insurers, government and the RTA
Williams’s call for clarity is echoed by the contents of a briefing paper that the British Insurance Brokers’ Association presented to government last month.

The paper was issued as a response to a major consultation into the use of autonomous cars by the Department for Transport, which stated that motor insurance will remain compulsory but will be extended to cover product liability for automated vehicles.

“When a motorist has handed control to their vehicle, they can be reassured that their insurance will be there if anything goes wrong,” the DfT said when the consultation was published.

In the briefing paper, Biba asked government for insurers to be granted “immediate and unrestricted access to all data from the autonomous vehicle manufacturer and that the data should be provided in a standard, clear, accessible format.”

The document also set out proposals for “a single seamless motor insurance policy” that would cover failure of an autonomous vehicle, including failure of the autonomous driving mode or failure caused by a cyber attack.

Biba also said that it agreed with the consultation recommendation for amendments to the Road Traffic Act, which would allow a driver to claim on their insurance policy for compensation should they sustain injuries caused by a vehicle defect while it is in fully autonomous mode.

Williams said that this revision would not only benefit the drivers and passengers of autonomous cars, but would also ensure that insurers remain part of the picture.

“The revision to the RTA would place the responsibility on the insurer to pay out the claim in the first instance,” he added.

“This keeps insurers included,” Williams said. “Insurers will need to make adjustments but shouldn’t fear the changes. Autonomous vehicles have the potential to make the roads safer, cars safer and may bring customer premiums down in the long term.”

Commenting on the DfT consultation, Williams said he believes government will still require insurers to pay out a claim if an incident is caused by hacking or a cyber attack on an autonomous car, and that this needs to be specified in the RTA.

“When it comes to hacking, where government is concerned, it will still want people protected by the RTA, no matter what the cause of the issue. Cover against hacking or other cyber crime will be required as standard.”

While this may be the case for an isolated hack, such as the Keen Security Lab incident involving the Tesla, Williams said that insurers would not be required to pay out in the case of a mass hack.

“Nobody will expect insurers to pay out for a huge ‘Armageddon style’ mass hacking incident. Over the next few years we will see a debate on how best to cover against this kind of cyber risk.”

It is evident that there is currently no clear answer as to what shape the relationship between insurers and driverless cars will take. Williams concluded that there is still much to be decided and that government, insurers and motor manufacturers need to continue the current debate if there is to be a definitive answer to the question on who would be liable in the event of an incident involving a driverless vehicle.

“As insurers, we often overestimate what will happen in the next two years and then underestimate the next 10 years,” said Williams. “This is likely to be the case with driverless cars.”
