The final blow

Significant discrepancies between predicted and actual losses suffered by insurers after Hurricane Katrina hit in 2005 were a key driver behind subsequent improvements to catastrophe models. Ana Paula Nacif details developments, explains inevitable limitations and outlines new areas of demand

Hurricane Katrina was the costliest and one of the five deadliest hurricanes in US history. With economic losses estimated at $125bn (£64.1bn) - of which $45bn was insured in the private market, according to Munich Re - Katrina raised questions about the accuracy and effectiveness of catastrophe modelling systems.

In fact, a white paper published by Readvisory, Carvill Group's analytical team, said in 2005: "No other natural catastrophe had been underestimated by the industry by such a wide margin."

Since then a lot has changed. Not only has technology improved but large-scale natural events have also provided the insurance industry and modelling firms with more accurate data. This, in turn, has translated into better-calibrated models.

Although Katrina was a major influence, it was not the sole driver behind the recognition that catastrophe models needed further development. "The drive originated from the previous 2004 storms of Charley, Frances and Jeanne," explains Cassy Brand, senior catastrophe modeller at Aon Re. "After the US hurricanes of 2004 and 2005, pressure was put on the modelling vendors to make changes to reflect the difference in actual versus modelled loss. This, plus the scientific consensus that frequency of Atlantic hurricanes was increasing, resulted in the model changes."

Learning curve

According to Andrew Castaldi, Swiss Re senior vice-president and head of catastrophe perils for the Americas, the events during 2004 and 2005 allowed the industry and model developers to capture enough information to evaluate existing models and, more importantly, recognise the strengths and weaknesses of how these are used.

As an example of weakness, he cites construction: "Some modellers learned construction is not always the best way to determine the vulnerability of a building and in many instances the occupancy of the buildings is a better indication of potential loss," he says. "Two concrete block buildings, one being a bank with limited openings and the other being an auto repair shop with many openings, will react differently despite both being concrete constructions."

Poor data was, in fact, part of the reason why catastrophe model performance was less than satisfactory when Katrina hit, according to Claire Souch, senior director of model management at Risk Management Solutions.

In 2005, AIR Worldwide conducted an analysis of exposure data from companies representing more than 50% of the total US property market to find out why some companies' modelled losses for Katrina differed from actual losses. The analysis revealed significant problems with data quality and completeness, especially for commercial properties.

Jayanta Guin, senior vice-president of research and modelling at AIR, explains: "Property replacement values - a model input with considerable impact on catastrophe loss estimates - in particular were of questionable accuracy for about 90% of the companies analysed."

After the massive losses from Katrina, the industry started to pay more attention to data exposure and to how to use catastrophe models. "What Katrina and Wilma did was to provide a more expanded view of the damage that can be caused by a hurricane," says Dr Steve Smith, president of property solutions at Carvill's Readvisory. "These changes also highlight the need for a better view of exposure and how people use exposure in the model."

He adds: "Before 2005 people were not inputting accurate exposure data so models were not performing to their best ability. On top of that, models were not used in the right way as exposure values were not being updated as often as they should."

Data may well be paramount, but it cannot be forgotten that models are built to tackle specific scenarios. In the case of Katrina, they were designed to predict wind-related damage and direct storm surge from the sea - whereas most of the damage came from flooding and rainfall.

"We need to think about what they are designed to do, what they can capture and also their limitations," Ms Souch emphasises.

She also points out that some structures' performance in reality can be quite different from the predictions of engineering studies. "For example, a roof that was pulled off in a category two to three wind was, according to officials, able to withstand a category five," says Ms Souch. "So you always learn a lot. We use the wealth of knowledge from the engineering community to build our models but sometimes reality turns out to be different. After Katrina, we calibrated our model based on $25bn worth of claims data that was analysed."

Apart from more accurate data, such calibration has also resulted in RMS rethinking its hurricane model, which is now being upgraded to include flood-related information.

In terms of business interruption (BI) losses - another surprise to the insurance industry following Katrina - AIR last year incorporated an enhanced methodology for estimating such losses, one that accounts for both building and business characteristics when estimating total BI downtime. This includes indirect losses from sources other than physical damage to the insured building.

But August Probstl, head of accumulation risks in the corporate underwriting unit at Munich Re, explains that while some issues, such as the failure probability of protection systems and direct BI, can and have been addressed by the probabilistic catastrophe risk models, other problems remain. "Contingent BI and coverage expansion, for example, cannot be explicitly addressed in such models and have to be dealt with separately in the underwriting and risk management process."

It seems that although many model weaknesses have been identified and addressed, there is still room for improvement. And according to Mr Castaldi, it remains debatable as to whether catastrophe models are reflecting the expected heightened hurricane activity.

Tropical Storm Risk, the consortium of experts on insurance, risk management and seasonal climate forecasting led by the Benfield Hazard Research Centre at University College London, has predicted that Atlantic basin and US landfalling hurricane activity in 2008 will be 20% above the long-term (1950 to 2007) norm. The first hurricane of the season, Dolly, which made landfall in Texas last month, is expected to cause losses between $350m and $700m in the US and Mexico, according to AIR.

"Many insurers and reinsurers are taking into account that we are in a period of increased hurricane activity due to the convergence of a number of hurricane-friendly North Atlantic oscillations, including higher sea surface temperature," explains Mr Castaldi. "This more active period suggests to many that the current catastrophe models should reflect the increased activity over the short term. But opinions differ as to the extent of this increase."

Mr Probstl says all major model vendors now give users the opportunity either to use the long-term average activity rates in their models for Atlantic hurricanes - thus ignoring fluctuations in sea surface temperatures - or to use short-term activity rates, intended to account for the current warm phase in the Atlantic (since 1995), which may also reflect an impact of climate change. "We strongly recommend our clients use the short-term view because we believe it better represents the current risk environment reflected in higher-than-average sea surface temperatures," he says.

Limiting loss

Accurate prediction of aggregated losses is important, but the industry also recognises that risk management - both mitigation measures and earlier warning capabilities - is key.

"Advanced warnings are always welcome but equally welcome is the truly advanced warning of not to build in vulnerable areas and teaching others about the added risk associated with building in these areas," says Mr Castaldi.

Francesco Nazzini, managing consultant in the modelling analysis and design team at Marsh, agrees: "Risk management and business continuity plans are important; using catastrophe models can help companies to spread their exposure in different sites and facilities. It helps them to identify more exposed locations and, therefore, prioritise their risk mitigation effort."

That is perhaps why catastrophe models are no longer solely used by the insurance industry. According to Mr Guin there has been an increase in demand for them in new regions and for new perils.

For example, in July, AIR released its US hurricane model for offshore assets. The model has a single catalogue of simulated events that enables companies to assess potential losses from individual hurricanes impacting onshore and offshore properties in the US, the Gulf of Mexico and the Caribbean. "The energy industry can directly take advantage of this model by estimating the likelihood of damage to and loss of oil and gas production in the Gulf of Mexico," Mr Guin explains.

But it is not only corporations that are interested in catastrophe models. Ms Souch says she has seen heightened interest from the investment community, particularly those with a portfolio connected with property, such as hedge funds and investment banks.

"Also some governments and states are increasingly using them," she adds. "We are currently working with China to help it structure its insurance industry, but this is by no means the same in other countries."

Powerful technology has been an important driver behind this trend. "Clients are able to run the models much faster than three or five years ago," explains Ms Souch. "We release model upgrades every year and we are able to provide far greater resolution and granularity in overall risks. Although the risk hasn't changed dramatically, we are able to differentiate risks to a far greater degree."

She adds that having quicker and more refined models also means that this tool can now be used at the point of underwriting, whereas before it was only used for reinsurance purposes.

Escalating exposure

Exposure and losses are expected to continue their upward trend. There has been a significant increase in the number and value of exposed US properties over the last decade, which will continue to contribute to increasing hurricane losses for insurers, according to AIR. A recent report by the catastrophe risk modelling firm estimates that over the past three years the insured value of properties in US coastal areas has risen at a compound annual growth rate of just over 7%.

"Despite the recent weakening of the real estate market in many areas, the insured value - or the cost to rebuild properties - has maintained an annual growth rate that will lead to a doubling of the total value every decade," explains Mr Guin. "These and other factors are incorporated into the annual update to AIR's industry exposure database."
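The doubling claim follows directly from compound growth arithmetic. As a minimal illustrative check (the figures below assume the roughly 7% annual rate quoted by AIR, not any additional data): a value growing at a constant rate g doubles after log(2)/log(1+g) years, which at 7% works out to close to a decade.

```python
import math

# Assumed rate: "just over 7%" compound annual growth, per the AIR figure above.
g = 0.07

# Growth factor over ten years of compounding.
growth_over_decade = (1 + g) ** 10

# Years needed for the value to double at this rate.
doubling_time = math.log(2) / math.log(1 + g)

print(f"Value after 10 years: x{growth_over_decade:.2f}")  # ~x1.97
print(f"Doubling time: {doubling_time:.1f} years")         # ~10.2 years
```

At just over 7% the doubling time dips slightly under ten years, consistent with Mr Guin's "doubling of the total value every decade".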

It seems that regardless of how much catastrophe models improve, the fact remains they are not infallible. "Models are a very good tool but they are just that - a tool," concludes Dr Smith. "They are subject to uncertainty. For example, a loss of say $1bn has a range that could go to minus or plus $500m. However, we are seeing the mean value inside that range change as science develops. And although more people are using models, they are also more sceptical and understand these tools can give you an answer but not the absolute truth."

FLORIDA BILL

The Florida Property Insurance Reform Bill, which became effective on 1 July, has changed the regulation of hurricane models and their use in ratemaking.

The central change to the rating law is that now insurers may include "net costs of reinsurance" for coverage up to a 250-year event (0.4% occurrence exceedance probability) in rate filings and such costs may not be the sole basis for disapproval by regulators.
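The figures in parentheses are two ways of stating the same threshold: a return period and its occurrence exceedance probability (OEP) are simply reciprocals of one another. A minimal sketch of the conversion:

```python
# A 250-year event has a 1-in-250 chance of being equalled or exceeded
# in any given year, i.e. an occurrence exceedance probability of 0.4%.
return_period_years = 250
oep = 1 / return_period_years  # 0.004

print(f"{return_period_years}-year event = {oep:.1%} annual exceedance probability")
```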

However, the reinsurance amount, and all other provisions determined by hurricane models in the rate filing, must be based on models certified for use in ratemaking by the Florida Commission. The current rating law also clarifies that the amount for the net cost of reinsurance must be tested against the return-period standard using the accepted model.

The law also directs the Florida Office of Insurance Regulation to allow open access to the "public" hurricane model when it is most needed by insurer actuaries, when rate indications are set and prior to rate filing submission.

Source: AIR Worldwide.
