The Loss Forecasting Business

12 July 2018

To help inform the debate around the reliability of real-time loss forecasts, JLT Re has undertaken an exercise to explore the precision of modelled market loss estimates for significant hurricanes since 2004 by comparing them to fully incurred losses for each respective event1. The purpose of this study is to gauge the performance of loss estimates during the lifespan of hurricane events and assess whether any trends or lessons can be gleaned for future reference.

The parameters of the analysis have been restricted to North Atlantic hurricanes, given they represent the world’s most comprehensively analysed peril and region combination. Ultimately, if US hurricane estimates do not stand up to scrutiny, they will not do so anywhere else. Loss estimates provided by AIR and RMS have been used in the exercise2.

METHODS AND TIMELINES

Figure 4 shows the timeline that catastrophe modelling firms typically work towards when releasing market loss estimates for significant hurricane events. Whilst this, of course, only applies to storms that develop out at sea with sufficient lead time before US landfall (i.e. three to four days), it illustrates the rigorous steps these companies undertake when compiling market loss estimates.

In the lead up to, and immediately after, landfall, meticulous work goes into modelling unique scenarios for each event by selecting tracks from hundreds of thousands of stochastic events that closely resemble the forecast path and intensity. The number of matching tracks dwindles quickly as landfall nears because factors such as location, forward speed and windfield size are almost impossible to replicate in combination. After all, every storm is unique and, whilst the imminent release of high-definition models will help provide more clarity going forward, this will continue to be a major source of uncertainty.
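As a rough illustration of this selection step, the short Python sketch below filters a stochastic catalogue down to tracks whose parameters sit within set tolerances of the forecast. The column names, forecast values and tolerances are hypothetical placeholders for illustration only, not anything used by AIR or RMS.

```python
import pandas as pd

def select_analog_tracks(catalogue: pd.DataFrame, forecast: dict,
                         tolerances: dict) -> pd.DataFrame:
    """Filter a stochastic event catalogue down to tracks that broadly
    resemble the forecast storm. Column names and tolerances are
    illustrative placeholders, not vendor specifications."""
    mask = pd.Series(True, index=catalogue.index)
    for param, tol in tolerances.items():
        # Keep events whose parameter lies within +/- tol of the forecast value
        mask &= (catalogue[param] - forecast[param]).abs() <= tol
    return catalogue[mask]

# Hypothetical forecast parameters at landfall and loose matching tolerances
forecast = {"landfall_lat": 26.9, "landfall_lon": -82.1,
            "vmax_kt": 130, "forward_speed_kt": 12, "rmax_nm": 15}
tolerances = {"landfall_lat": 0.5, "landfall_lon": 0.5,
              "vmax_kt": 10, "forward_speed_kt": 3, "rmax_nm": 5}
# analog_events = select_analog_tracks(stochastic_catalogue, forecast, tolerances)
```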

Once a range of simulated events has been selected, these are applied to industry exposure databases (IEDs) to calculate market loss estimates. This is a crucial step in the process, as recent events have highlighted how differing exposure assumptions can lead to hugely divergent views.
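A minimal sketch of this step is shown below, under the simplifying assumption that each selected event can be priced against the IED with a single loss function; the percentile choices used to summarise the spread are illustrative, not vendor methodology.

```python
import numpy as np

def industry_loss_range(analog_events, ied, loss_fn, low_pct=10, high_pct=90):
    """For each selected analog event, compute a modelled industry loss
    against the industry exposure database (IED), then summarise the spread
    of outcomes as a low/high range. Both loss_fn and the percentile choices
    are illustrative assumptions, not vendor methodology."""
    losses = np.array([loss_fn(event, ied) for event in analog_events])
    return float(np.percentile(losses, low_pct)), float(np.percentile(losses, high_pct))
```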

For major events, loss estimates are further refined once post-landfall hazard data are made available from the National Hurricane Center (NHC) and incorporated into the models to create bespoke wind and storm surge footprints. During this entire process, crucial judgements are made by catastrophe modelling experts when examining the models’ statistical outcomes and determining where in the distribution losses are likely to occur.

Figure 4: Timelines for US Hurricane Loss Estimation (Source: JLT Re)


MODELLED LOSS (IN)ACCURACY

The culmination of these efforts is shown in Figure 5, which provides a helicopter view of how the modelled industry loss estimates collected in our study evolved during the loss estimation period. The line in the middle of both graphics represents the final, total insured loss for each respective hurricane as per Munich Re (including flood losses), and the bars show how AIR’s (dark blue) and RMS’s (light blue) estimates compared as a percentage of this total. Pre-landfall industry loss estimates, which AIR provides to its clients up to 48 hours or 24 hours before US mainland landfall, and which will be shown in the individual case studies that follow, have not been included in Figure 5 as the huge permutations around track trajectories and storm parameters at landfall typically result in ranges that deviate massively from the actual insured loss. RMS, meanwhile, does not provide any predictions before or immediately after landfall, focusing instead on post-landfall industry loss estimates.

Whilst the uncertainty in vendor modelled loss estimates decreases significantly after landfall, this has not always translated into increased accuracy relative to the ultimate loss. It is important to note here that the ultimate loss data used in our study will occasionally include loss components – such as flood, loss adjustment expenses (LAE) and contingent business interruption (CBI) – that are un-modelled by AIR and RMS and are therefore not included in their loss estimates.
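The comparison underlying Figure 5 amounts to expressing each vendor range as a percentage of the Munich Re ultimate loss, as in the simple sketch below. The example figures are hypothetical and not taken from the study.

```python
def range_vs_ultimate(estimate_low: float, estimate_high: float,
                      ultimate_loss: float) -> tuple[float, float]:
    """Express a vendor's low/high industry loss estimate as a percentage
    of the final insured loss (the ultimate figure used in the comparison)."""
    return (100 * estimate_low / ultimate_loss,
            100 * estimate_high / ultimate_loss)

# Hypothetical example: a USD 8-12 billion range against a USD 15 billion ultimate loss
low_pct, high_pct = range_vs_ultimate(8e9, 12e9, 15e9)  # roughly 53% to 80%
```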

 

Figure 5: Evolution of Modelled Loss Estimates for Select US Hurricanes - 2004 to 2017* (Source: JLT Re, AIR, RMS, Munich Re)


*Please note that for Harvey, AIR's estimates (both 'first post-landfall' and 'final') do not include National Flood Insurance Program (NFIP) losses, whilst RMS's and Munich Re's figures do.

 

Nevertheless, it is equally important to acknowledge that market participants look to the catastrophe modelling firms to provide comprehensive loss estimates and any restrictions in what they capture are often viewed as significant limitations that need to be addressed.

Ultimately, Figure 5 shows a trend towards loss underestimation for the majority of initial post-landfall estimates, with HIM (hurricanes Harvey, Irma and Maria in 2017) the clear exception (left of chart). Additionally, subsequent revisions made to both AIR’s and RMS’s ranges, which often attempted to account for significant un-modelled losses, frequently continued to miss on the downside. Overall, the majority of final estimates settled on a range that fell outside the ultimate insured loss window (right of chart).
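The under- and over-estimation tally behind this observation can be expressed as a simple classification of each final range against the ultimate loss, as sketched below; the category labels are ours, not AIR's or RMS's.

```python
def classify_final_range(low: float, high: float, ultimate: float) -> str:
    """Classify a final estimate range against the ultimate insured loss:
    'under' if the whole range sits below it, 'over' if it sits wholly above,
    and 'within' if the range brackets the ultimate figure."""
    if high < ultimate:
        return "under"
    if low > ultimate:
        return "over"
    return "within"

# Tallying the 'under' results across the storms in the sample is what produces
# the underestimation trend described above; inputs would be the published ranges.
```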

This snapshot goes some way to explaining why the industry loss estimates provided by catastrophe modelling firms have led to general scepticism within the (re)insurance market over the last 15 years or so.

There is an overriding trend towards significant loss underestimation, and it is not even immediately apparent that the range of loss estimates narrows over the lifespan of a storm, or that the estimates become more accurate as they are revised.

As alluded to earlier, this has important consequences in today’s data-hungry world. With real-time loss information playing an increasingly crucial role in setting initial loss guidance, several carriers have been forced to make significant revisions to their own loss estimates as claims develop unexpectedly. Investor confidence and carriers’ share prices can suffer in instances where expected losses develop adversely (as they did for most of the devastating storms in the recent past), fuelling unfavourable perceptions of modelling companies and other loss forecasters within the market. This, over time, has encouraged a general predisposition within the market to discount the bottom-end of ranges and expect losses to settle at, or above, the top.

HISTORICAL COMPARISONS

But are these perceptions justified? Breaking down industry loss estimates into groups of storms with similar characteristics reveals some interesting insights about model performance prior to 2017. Figure 6 shows that vendor models have performed relatively well for wind events that incurred moderate losses, regardless of landfall location. Results for hurricanes Charley (FL, 2004), Gustav (LA, 2008) and Irene (NC & NJ, 2011) suggest credible levels of accuracy (post-landfall) when loss components are anticipated and contained.

Or, in other words, conventional hurricane events that do not assume super-cat characteristics are captured adequately by vendor catastrophe models, and this is reflected in the loss estimates provided for such events.

 

Figure 6: Favourable Performance of Modelled Losses (Source: JLT Re, AIR, RMS, Munich Re)


 

All three hurricanes had different intensities and landfall regions. Whilst Charley was a major hurricane when it came ashore along Florida’s western coastline, Gustav was a category 2 storm when it made landfall in Louisiana (having moved through the Gulf of Mexico and caused damage to offshore oil assets) and Irene was a category 1 hurricane when it hit North Carolina. Each of these events generated insured losses of less than USD 10 billion, demonstrating that the market can expect modelled post-landfall estimates to be within a reasonable range of the fully developed figure for losses that are both wind driven and moderate in magnitude.

The models, however, have not performed as well for hurricane events where losses extend beyond wind into areas that are not modelled or well understood. Katrina, Ike and Sandy are three examples of such storms, and the evolutions of AIR’s and RMS’s modelled loss estimates for each are shown in Figure 7.

Despite a number of revisions being made in the days and weeks after Katrina’s landfall, both AIR and RMS consistently underestimated the magnitude of the ultimate insured loss. This can mostly be explained by the flooding of New Orleans, a secondary consequence that virtually eclipsed the original catastrophe. Indeed, the models’ limitations were laid bare by the extent of the flood damage, as well as other non-modelled factors such as loss amplification (which includes demand surge and claims inflation) and wind versus flood disputes.

Such unique super-cat effects are extremely challenging to model and go a long way to explaining the huge divergence between AIR’s and RMS’s final Katrina estimates (which, unsurprisingly, is the largest of the entire sample in our study). AIR’s much narrower range was in line with previous estimates but ultimately proved to be less than half of the ultimate insured loss. RMS, meanwhile, significantly increased its final projection, albeit with a wide margin for error (i.e. a USD 20 billion difference between the high and low end), and even this proved insufficient.

Ike and Sandy provide other, albeit less exaggerated, examples of loss underestimation around the time of landfall. Both storms had unforeseen attributes, which again helps to account for the sub-par accuracy of the modelled loss estimates. Although neither event was classified as a major hurricane, they still packed a punch as Ike caused more damage inland than modellers expected and Sandy was largely a surge and flood event after it made landfall in New Jersey on an unusual trajectory.

All this highlights the inherent difficulties modelling companies face in predicting losses when tropical cyclones strike highly populated urban areas. These types of events often bring unforeseen (and often un-modelled) consequences that cause losses to spiral. The results for Katrina, Ike and Sandy show that catastrophe models have struggled to generate accurate loss ranges in such circumstances. Beyond these three storms, there have been other significant hurricanes, including Ivan and Wilma, where both AIR and RMS significantly underestimated the cost to the sector.

HIM: A NEW BENCHMARK?

On the face of it, the estimates released by the modelling companies in the days and weeks after HIM made landfall in 2017 seemed to reinforce market perceptions that catastrophe models cannot be relied upon to predict industry losses accurately. After all, the loss ranges were both vast and diverse. But whilst there is no denying that the accuracy of the modelled losses released in 2017 was mixed, closer analysis reveals that important differences emerged last year.

Figure 8 shows the ranges released by AIR and RMS for Hurricane Irma. Despite the magnitude of the catastrophe (insured losses are currently expected to exceed USD 30 billion), neither AIR nor RMS underestimated the total and their post-landfall estimates remained largely consistent. In addition, whilst the loss estimates released by both modelling companies were initially deemed high, there is still significant uncertainty associated with Irma’s loss, and there is some evidence that claims development in the US may yet move Irma’s ultimate insured cost into the lower end of AIR’s and RMS’s final estimates. Notwithstanding criticisms over the width of the ranges, this is a reasonable performance given the complexities associated with the event.

Figure 8: Evolution of Modelling Companies' Market Loss Estimates for Hurricane Irma (Source: JLT Re, AIR, RMS, Munich Re)


This initial consensus was short-lived, however, as Maria split opinion as never before (see Figure 9). AIR’s original top-end Maria estimate was nearly three times that of RMS, and there was no overlap between AIR’s lower end and RMS’s upper end. The gulf stemmed in large part from differing judgements made over Maria’s windfield size at landfall, ground-up exposures, repair costs and insurance coverages and terms (for business interruption especially) in Puerto Rico. Whilst RMS maintained its view, AIR substantially revised its estimate downwards as it altered assumptions around modelled wind speeds, insurance take-up rates in Puerto Rico and loss distributions across each line of business (industrial lines in particular).
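That lack of overlap can be checked with elementary interval arithmetic, as in the short sketch below; the function is generic and the closing comment merely restates the relationship described above.

```python
def ranges_overlap(low_a: float, high_a: float,
                   low_b: float, high_b: float) -> bool:
    """Return True if two loss estimate ranges [low_a, high_a] and
    [low_b, high_b] share any common ground."""
    return max(low_a, low_b) <= min(high_a, high_b)

# For Maria, AIR's original lower bound sat above RMS's upper bound,
# so this check returns False for the two initial ranges.
```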

Figure 9: Evolution of Modelling Companies' Market Loss Estimates for Hurricane Maria (Source: JLT Re, AIR, RMS, Munich Re)


Post-Maria, it is evident that the increasing sophistication of windfield generation during and after the event, along with timely event reconnaissance trips to the most heavily impacted areas, influenced both the evolution of loss estimates and the range of uncertainty. Although the range of AIR’s final estimate was subsequently narrowed to USD 21 billion (from USD 45 billion originally), it remains the largest in the entirety of this study and raises questions about how large modelled market estimates can be, given a mid-point range of expectations, before they lose utility and credibility.

Two important points should not be lost in all of this, however. The first is that RMS deserves credit for the precision of its one and only loss estimate, especially given the high amount of uncertainty that was associated with Maria (see Figure 10).

Figure 10: Complex Loss Profile of Hurricane Maria (Source: JLT Re, RMS)

The second point, frequently overlooked by industry participants and the media in particular, is the need for catastrophe modelling firms to balance any incentive to be first to market against accuracy. The requirements and expectations of real-time information will only increase and it is important that catastrophe modelling companies strengthen their authority in this area: accuracy needs to be the focus so that decision-makers can be confident in the numbers. Reducing core components of uncertainty in the real-time loss estimation process, particularly for hazard and exposure assessments, will augment accuracy and reduce the need for large ranges.

Unfortunately for catastrophe modelling firms, failures endure far longer in the memory than successes and a significant credibility gap remains, justified or not. Progress is being made (as supported by the results of our study) but perhaps the market can further assist the catastrophe modelling firms by refraining from the call for immediate estimates and waiting for a more considered view.


1 The sample of hurricanes used in the study included: Charley, Frances, Ivan, Jeanne, Katrina, Rita, Wilma, Gustav, Ike, Irene, Sandy, Harvey, Irma and Maria.

2 Catastrophe modelling firms’ loss data points have been compiled from a variety of sources, including firms’ websites, press releases and media reports.

 
