Catastrophe modelling emerged in the late 1980s to help insurers and reinsurers analyse, price and underwrite natural catastrophe risk. Until then, risk carriers had typically relied on actuarial models to estimate losses, with the focus on hurricanes in the United States given the peril's loss potential. Whilst these statistical models enabled carriers to make loss projections based on historical event frequency and claims data, they did not account for changes in the underlying exposure, such as new building stock and building codes, or for shifting meteorological conditions.
Lulled into a false sense of security by relatively quiet hurricane activity in the United States during the two preceding decades (with the exception of Hurricane Hugo in 1989), most (re)insurers were grossly underestimating the full loss potential of hurricane risk in the country.
A MODEL BREAKTHROUGH
This became painfully clear when Hurricane Andrew made landfall as a Category 5 storm in southern Florida in August 1992. The storm's sustained winds of more than 150 miles per hour destroyed over 63,000 houses and damaged another 125,000. Andrew's intensity and landfall location meant that the magnitude of the loss was well beyond market expectations, exposing the limitations of using past experience alone as a basis for estimating future losses.
Aside from the devastating costs Andrew caused, the storm was also instrumental in bringing about a sea change in the (re)insurance market as the industry moved quickly to embrace scientifically derived models. Prior to 1992, start-up modelling companies such as AIR and RMS had struggled to persuade sceptical (re)insurers of the value catastrophe models could bring in informing risk management decisions. Indeed, the sector responded with incredulity when AIR estimated shortly after Andrew’s landfall that total insured losses would reach approximately USD 13 billion.
Attitudes quickly changed, however, as claims mounted. Andrew ultimately cost the (re)insurance market USD 17 billion (at original values), discrediting figures projected by actuarial models at the time, which typically pointed to a loss in the mid-single-digit billions of US dollars. Given this vast disparity, a number of carriers were unable to pay claims, leading to several bankruptcies and a Florida property market in dire need of reconstruction. It also brought about a widespread recognition in the post-Andrew world that a more scientific approach was needed for natural catastrophe risk, particularly low-frequency, high-severity hurricane events. As a result, catastrophe models soon became fundamental to carriers’ underwriting and capital management processes. Both AIR and RMS benefitted as they quickly established themselves as major players in the catastrophe modelling market.
Figure 1: Top 10 Most Costly North Atlantic Tropical Cyclones (USD million) (Source: JLT Re, Munich Re)
In the 25 years since Andrew came ashore, catastrophe models’ theoretical framework has remained essentially the same. Models still consist of three basic components: hazard, vulnerability and loss. They also still simulate the impacts hazards have on built environments in order to estimate costs to insurable assets.
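The three-component framework described above can be illustrated with a deliberately simplified sketch. Everything below — the function names, the wind-speed distribution, the shape of the vulnerability curve and every parameter — is a hypothetical assumption for illustration, not any vendor's actual methodology; only the hazard → vulnerability → loss structure reflects the framework itself.

```python
import random

def hazard_sample(rng):
    """Toy hazard module: draw a peak wind speed (mph) for one simulated event.
    Real models use physically based stochastic event sets, not a uniform draw."""
    return rng.uniform(75, 180)

def vulnerability(wind_mph):
    """Toy vulnerability curve: mean damage ratio (0 to 1) rising with wind speed.
    The quadratic form and thresholds here are purely illustrative."""
    return min(1.0, max(0.0, (wind_mph - 74) / 106) ** 2)

def simulate_losses(insured_value, n_events=10_000, seed=42):
    """Loss module: convert each event's damage ratio into a monetary loss."""
    rng = random.Random(seed)
    return [insured_value * vulnerability(hazard_sample(rng))
            for _ in range(n_events)]

losses = simulate_losses(insured_value=250_000)
# Exceedance metrics (e.g. a 1-in-100-year loss) follow from the sorted event losses.
losses.sort(reverse=True)
pml_1_in_100 = losses[len(losses) // 100]
```

In practice, vendors run hundreds of thousands of stochastic events against detailed exposure databases and layer financial terms on top; the three-stage structure, not the numbers, is the point of this sketch.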
The sophistication and range of modelling products have nevertheless changed during this time, due in large part to increasing computer processing power and the growing availability of high-resolution hazard data. And new generations of models have been created on the back of lessons learned from recent events. The wealth of claims data post-event has enabled modelling companies to significantly refine the damageability curves for specific aspects of exposure such as occupancy, construction, year of construction, number of stories, as well as a host of secondary characteristics. This is especially true for North Atlantic hurricane models after successive storms have caused significant insured losses this century (see Figure 1 for the top 10 most costly hurricanes on record).
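The way post-event claims data feeds the refinement of damageability curves can be sketched as follows. The claims records, construction classes and wind bands below are entirely hypothetical; the sketch only shows the mechanics of binning observed damage ratios by exposure characteristic, the raw material from which recalibrated curves are fitted.

```python
from collections import defaultdict

# Hypothetical post-event claims records:
# (construction class, wind band in mph, paid claim as a fraction of insured value).
claims = [
    ("wood_frame", "120-140", 0.42), ("wood_frame", "120-140", 0.35),
    ("wood_frame", "100-120", 0.18), ("masonry", "120-140", 0.22),
    ("masonry", "120-140", 0.25), ("masonry", "100-120", 0.09),
]

def mean_damage_ratios(records):
    """Average observed damage ratio per (construction, wind band) cell."""
    totals = defaultdict(lambda: [0.0, 0])
    for construction, band, ratio in records:
        cell = totals[(construction, band)]
        cell[0] += ratio
        cell[1] += 1
    return {key: total / count for key, (total, count) in totals.items()}

curve_points = mean_damage_ratios(claims)
# e.g. curve_points[("wood_frame", "120-140")] ≈ 0.385
```

With enough claims, each exposure characteristic mentioned above (occupancy, construction, year built, number of stories) becomes a dimension of this binning, and smooth curves are fitted through the resulting cell averages.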
Katrina, Ike and Sandy in particular brought about significant revisions to hurricane models as each storm’s distinct characteristics exposed their limitations and weaknesses. The unexpected levee failure in New Orleans after Katrina made landfall, for example, showed that the models did not adequately capture the impacts of flooding and storm surge. And the costs associated with loss amplification, event clustering (after Rita and Wilma quickly followed) and other ‘super-cat’ characteristics (such as civil unrest, evacuations and National Guard deployment) were likewise not anticipated. Both AIR and RMS responded to these developments by recalibrating their models. Similar updates followed Ike and Sandy as new lessons were learned about inland damage and building code adherence (Ike), as well as storm surge along the US Northeast coast (Sandy).
After experiencing disruption from some of these revisions, the industry will be closely monitoring how the modelling companies respond to last year’s successive landfalls of hurricanes Harvey, Irma and Maria (HIM), all three of which rank in the top five most expensive hurricanes on record (in terms of inflation-adjusted insured losses). Both AIR and RMS have already indicated that insights obtained from HIM will be important factors in future hurricane model releases.
Using history as a guide, these recalibrations could have an important bearing on the property-catastrophe market. Figure 2 illustrates how the evolution of catastrophe modelling has been crucial to the development of the property market over the last 25 years. During this time, catastrophe models have become integral to the property underwriting process by assisting decision-making on exposure management, risk aggregation, pricing and reinsurance buying.
Hurricane Katrina was a watershed moment for the market for two key reasons. First, catastrophe modelling became embedded in carriers’ risk management strategies as metrics made readily available by the probabilistic vendor models were used to satisfy new rating agency requirements around capital allocation for catastrophe risks. Second, catastrophe models facilitated the rapid expansion of the insurance-linked securities (ILS) market as institutional investors utilised recalibrated models (post-Katrina and Ike) to price catastrophe risks. It is no exaggeration to say that the ILS market in its current form would not exist today without catastrophe modelling.
The impact alternative capital has had on the reinsurance sector is difficult to overstate. It has brought about a structural change in how capital is provided to the market and in how much capital can enter (and exit) the sector.
And, as JLT Re’s Risk-Adjusted Global Property-Catastrophe Reinsurance Rate-on-Line (ROL) Index in Figure 2 shows, it has played a leading role in driving pricing down to levels last seen in the early 2000s. Whilst traditional capital levels have essentially remained flat since 2012, alternative capital (which is overwhelmingly focused on US wind risks) has doubled. Investor confidence in the current suite of catastrophe modelling applications has underpinned this growth.
Figure 2: Key Developments in the Property-Catastrophe Market (Source: JLT Re)
Of course, these new capital inflows coincided with an unusual lull in hurricane activity, meaning recent model recalibrations had gone largely untested. Indeed, the period of no major US hurricane landfalls between Wilma in 2005 and the end of 2016 was historically unprecedented. But then the 2017 hurricane season happened, bringing three massive hurricane strikes to US territories and causing widespread devastation across the Caribbean as hurricanes Harvey, Irma and Maria formed in quick succession.
UNDER THE SPOTLIGHT
After every large-loss year, it seems questions are asked about the accuracy of vendor market loss estimates and the value they bring. This was the case in 2005, 2008, 2011 and 2012. And 2017 was no different. Figure 3 shows the high and low post-landfall estimates provided by different modelling firms for HIM. Subsequent (and significant) revisions made to estimates are also captured in the chart: the combined patterned-and-filled bars of each colour represent initial estimates, while the filled bars alone show the most recent updates. The fact that such wide ranges were generated for HIM has raised questions over whether modelling tools can be relied upon to produce credible information for catastrophes in real time.
Figure 3: Loss Estimates for HIM by Catastrophe Modelling Company (Source: JLT Re, AIR, CoreLogic, KCC, RMS)
But are these charges fair? After all, catastrophe models are built to provide probabilistic outcomes for a wide range of scenarios rather than to predict the monetary cost of any single event in real time. And, in producing real-time estimates, catastrophe modelling firms are responding to intense pressure from various market participants, including carriers, brokers, investors and the media, to release market loss figures as quickly as possible. In fact, HIM reinforced the need for real-time loss information as modelled estimates were used to inform traditional carriers’ loss guidance and post-event capital deployment strategies. Additionally, pressing reporting requirements saw many ILS funds rely on modelled loss estimates to provide initial loss evaluations to investors.
But with greater reliance comes greater scrutiny. Given the large divergences of loss estimates for HIM, catastrophe models are once again under the spotlight. And as the peak months of this year’s hurricane season approach, several questions remain unanswered. Are unfavourable perceptions of historical modelled loss estimates justified? Did 2017 mark a deterioration in accuracy compared to previous large-loss years? How did the modelling companies perform last year when compared to other significant hurricane events? And can the market expect modelled loss estimates to become more accurate, with narrower ranges, as techniques and technologies mature? All these points will be explored in the following pages.