Sponsored by: Marklogic

This article was paid for by a contributing third party.

Roundtable: Advanced underwriting in the digital age


Pictured, standing, left to right: David Greaves, head of commercial SME, QBE; Stuart Jeffery, head of pricing, Markerstudy; Neil Clutterbuck, chief underwriting officer, Allianz; Martin Richards, head of underwriting, Regis Mutual Management; Gavin Dollings, head of commercial underwriting, Covéa; Hugh Kenyon, pricing director, LV; and Joe Brown, deputy chief underwriting officer, Hiscox. Pictured, seated, left to right: Frederic Valluet, solutions director, Marklogic; Roberto Nard, chief data officer, AIG Europe, Middle East and Africa; David McKenzie, IT plan director, CNA Hardy; Stephen Smyth, head of UK regional marine, Beazley; James Garratt, head of digital underwriting strategy, Talbot Underwriting; and Rodrigo DeCossio, UK insurance lead, Marklogic


Underwriters are increasingly drawing on the growing volume of available data to improve risk segmentation and enhance pricing models. However, underwriting can still be time-consuming, complex and manual.

Against this backdrop, Post, in association with Marklogic, sat down with some of the industry’s data leaders to discuss how underwriters can get the most out of the data available to them.

The group began by discussing what tools underwriters need to be equipped with in order to use data to its maximum potential.

“It is important that the organisation has the tools and the resources. Coherent, structured data is an important part of the appropriate tooling for underwriters,” said Neil Clutterbuck, chief underwriting officer at Allianz.

“It’s also important to understand how to approach and tackle unstructured data, and to have the appropriate tooling that can read across that unstructured data and provide insight.

“There are analytical, modelling and data capture tools that feed both of those aspects, the unstructured and the structured data areas, and it’s important that we provide underwriters with access to all three.”

Hugh Kenyon, LV pricing director, highlighted the need for underwriters to embrace the changes occurring around them.

“This is as much around training as the tools and it’s really important that underwriters don’t run away from the changes that are happening and get engaged with data science and the new methodologies that are being developed,” he said.

QBE head of commercial SME David Greaves agreed: “Looking at traditional underwriters who have many years on their journey, what they did five or 10 years ago is now not necessarily what we’re looking for in the future.

“So the elements of education and training are important, and also simplification, so the access to self-help tools is there as opposed to constantly going back and forwards to data sites.”

The right questions

AIG Europe, Middle East and Africa chief data officer Roberto Nard said: “To me, the first thing is what is the right question to ask? If you don’t know what the question is, it’s difficult to answer.

“In the past, many companies just tried to get to an answer without maybe knowing what the question was, so they invested in Big Data or analytics without knowing the fundamental question they wanted to ask.

“In terms of underwriting, if you want to understand the quality of the risk, you need to understand what the data means and understand the risk. If it’s about pricing, it might be a different risk.”

Clutterbuck answered: “Part of that goes to the question of whether you’ve got the right tools that enable the analysis to be undertaken quickly and if you know the question you’re trying to answer in the first instance.

“Then it becomes an efficiency enabler and actually at that point you can then invest more time in the decision-making process.

“There’s always going to be this balance between expert judgement and what the data is telling you, and it varies across the business in terms of how much data you have available, how reliable that can be in terms of giving you insight, whether it be enriched data externally or local data internally.

“But it’s finding that balance to get the right result between the expert judgements; what the data is implying and what you think is actually going on behind the data.

“It’s really important that organisations have clarity around their data strategy. What they call structured, what they call unstructured, where they house it, the governance case that’s already been written, it’s massively important.

“They need to encompass the environment where you hold it and the culture that you want to establish within your organisation around the management of the data and then what you actually gather up – the content and how you then choose to deploy that content.”

Commercial v personal lines

The conversation moved on to customer expectations and whether there was a difference between commercial and personal lines.

Beazley’s head of UK regional marine Stephen Smyth said: “Everybody can go onto Compare the Market and get a quote for their motor insurance from a dozen different insurers in the space of five or 10 minutes.

“In the commercial world it’s not as easy as that because there are variable risk factors. There’s a wealth of data there, and it all depends on how much data you want to capture.”

Covéa’s head of commercial underwriting Gavin Dollings agreed: “The future of the insurance space is very exciting in terms of what we can do to get more data, and more sophisticated data.

“But we need to recognise as well that we are in a commercial space at the moment, and it’s about making sure that, in a seamless fashion, we get the data we’ve got to underwriters quickly and efficiently so that they can make good decisions with the level of information they’ve got, on time.”

Martin Richards, head of underwriting at Regis Mutual Management, considered the differences between direct and intermediated business.

“If you’re in the intermediary world, you’re not always in control of the data that’s coming to you,” he said. “In the other world, there is a risk of data overload.

“If you get a whole lot of data and you haven’t asked for the right information then you’re not applying the right wisdom to the answer; you spend a lot of money and enrich somebody but you haven’t actually put yourself in an informed place to make an underwriting judgement.

“And you see it across the market, and all the folks are making whizz bang entries into the new field and they think they’ve got all the data.

“They may have all the data, but if you’ve got the data for the wrong set of questions and you apply the wrong wisdom to the answers, you may undercut the market somewhat foolishly.”

Joe Brown, deputy chief underwriting officer at Hiscox, said: “The problem is, it changes very quickly as you move up the spectrum. So when it becomes more than a one-man band, then absolutely you can use the data to sort out the policy in terms of you needing to know complex risks.”

“That’s the fundamental difference between the commercial and personal lines markets,” said Clutterbuck. “In an SME environment we’re talking about 3.5 million small businesses in the UK, compared with a personal lines market that’s many times that number.

“It becomes more about precise data and questions; you’re fishing for insight from a smaller pool, and your reliance on it, therefore, becomes slightly different.”

Rodrigo DeCossio, UK insurance lead at Marklogic, said the problem was not exclusive to the insurance industry.

“That problem of having too much data is something we see across multiple industries, not insurance only,” he said. “A lot of companies have Big Data and they find that they become data swamped, and there is a lot of information but they can’t really make sense of it.”

Working in silos

In addition to the potential dangers of “too much data”, the group discussed whether or not there was a risk of creating silos when integrating data from different sources.

David McKenzie, IT plan director at CNA Hardy, said: “What we’re seeing is the data itself being more compartmentalised than when you traditionally had all this stuff in-house, be it on bits of paper or whatever.

“Now you can pull in all the sources but when you look at the Internet of Things there are no standards necessarily around those data streams. So you could potentially consume it but you’ve got a myriad of things to bring in and you may not actually own that data, though you may actually have access to it.”

Kenyon added: “There is an issue with silos, isn’t there? There are so many different options, so many different things you can be doing that you can’t really control, across the organisation.

“You’re going to have people going off and doing this over there, and people going off and doing that over here. I don’t know quite what the solution is.”

James Garratt, head of digital underwriting strategy at Talbot Underwriting, added: “You can both develop partnerships and develop in-house.

“You need that skill set for people internally to really engage with data, but where there are opportunities to find someone quicker and better than you as an organisation, then a partnership is a good option.”

GDPR challenges

The discussion moved on to the General Data Protection Regulation, due to come into force in May next year, and whether it will hinder or help underwriters in their approach to data.

In May, YouGov found that fewer than a third of businesses had started preparing for the incoming GDPR despite the fast-approaching deadline.

For businesses that rely heavily on large amounts of personal data, the risks could be significantly higher.

Stuart Jeffery, head of pricing at Markerstudy, said: “We’re considering it all the time. The issue for us is around data capture so we can get a profile of how companies vary their details and how they quote over the course of a year, and if they are continually changing their minds.

“Then it’s how you can use that, how you can give that information to the underwriters and actually price by it.”

McKenzie said: “That’s obviously a big issue at the moment: what do we do about our existing data? There are also territorial issues. A lot of our companies have been through a big period of change with consolidation and mergers. From a data, regulatory and territorial restrictions standpoint, you may have to take a non-optimal model in order to satisfy these things.”

Smyth added: “On data that’s forgotten, yes you might lose a particular person’s name or something, but you don’t need to lose the data itself. You can anonymise the data – why wait for them to say that they want to be forgotten to anonymise that data?

“If you do a statistical analysis across the book that doesn’t need to report down to an individual customer, why not give the analyst anonymised data to start with?”

Frederic Valluet, solutions director at Marklogic, believed companies could reap potential commercial benefits from GDPR that they may not have had under the old regime.

“We could get some good insights from doing it,” he said. “It depends on the way you are seeing things.”

McKenzie agreed: “GDPR is a bit of a pain, and it’s causing us a lot of effort to do all this but when you think about what we were talking about, about making use of the data, in a way it’s forcing us to look at all of our data and now we’ll know all our data and start using it.”

Kenyon concluded, however: “The principles of GDPR are fundamentally correct, about protecting people, about making sure that people use data in a positive way and it almost feels that the timing is right.”
