It may not be perfect, but applied modelling remains the key to risk management.
Modelling has become a mainstay of the 21st century (re)insurance industry.
Chances are that your company consults various models to influence important decisions, such as which risks to underwrite, or what investment strategy to pursue.
Many basic insurance processes are, in fact, mathematical models. Your annual plan is one example. The pro forma financial projections involved are in effect an annual modelling exercise – one that’s done by companies across the insurance industry.
While the best models are usually the simplest, the sophistication of mathematical abstraction available to insurers has increased significantly. The more advanced expertise may be beyond the resources of some companies, and external specialists may be needed to build specific models.
So given the time and costs involved, it is worth stopping to consider the main reasons why we model risks in the insurance industry.
The growing prevalence of applied modelling extends far beyond the insurance industry. The frequency of use of the term ‘financial model’ in English-language publications has increased 50-fold since 1960.
But despite the increased awareness of modelling as a concept, its nature and purpose are less clearly understood.
There is often a perception bias against the accuracy of models, as severe, low-probability events seem to occur more often than models predict – witness the one-in-200-year mega-flood in California.
But ‘unlikely’ events will occur somewhere in the world every year. It’s not necessarily that probabilities have been underestimated. The number of independent threats is so large that some will probably occur in any given time period.
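This point can be made concrete with a quick back-of-the-envelope calculation. The figures below are purely hypothetical for illustration: suppose the world faces 200 independent perils, each a ‘1-in-200-year’ event.

```python
# Hypothetical illustration: 200 independent perils, each with a
# 'one-in-200-year' return period (annual probability of 0.5%).
n_threats = 200
p_annual = 1 / 200

# Probability that NONE of them occurs in a given year,
# assuming the threats are independent of one another.
p_none = (1 - p_annual) ** n_threats

# Probability that at least one 'unlikely' event occurs somewhere.
p_at_least_one = 1 - p_none
print(f"P(at least one event in a year) = {p_at_least_one:.1%}")  # about 63%
```

Even though each individual event is rare, the chance that at least one of them happens somewhere in a given year is roughly two in three – without any probability having been underestimated.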
In addition, there is now much more reporting of what’s happening in the world, which makes the public more aware of distant catastrophes than in the past.
Public opinion rarely accounts for the way small probabilities compound across a slew of individual risks. So it’s unsurprising if model-based decision-making and strategy formulation is a tough sell for some organisations. If we can expose the perception bias, the cost-benefit analysis when it comes to modelling may be slightly more objective.
Another good reason for modelling has to do with society in general.
Some risks are so large that government and private enterprise both play a role in planning and response. Society will not allocate resources to this without a strong common belief in the potential for a disaster to occur. But how well informed is society’s common belief in the likelihood of it happening?
Modelling can play a key role here, as it condenses scientific inquiry to create realistic ‘what-ifs’.
For example, the New Madrid fault line in the central United States has enormous destructive power – yet there hasn’t been a significant earthquake in the area in recent history. Modelling the potential severity of this risk can support the allocation of available capital to the reinsurance that property insurers purchase against their exposure.
The response to extreme disasters is also improved when visual intelligence is supplemented with modelling, to provide more accurate damage estimates.
Clearly, more accurate estimates offer many benefits to insurers, from more efficient claims processing to greater financial flexibility. A higher-resolution understanding of possible and actual destructive events would lead to the creation of financial backstops and better response procedures. But models aren’t always enough to motivate robust preparedness.
For this reason, it’s important for modelling practitioners to promote what they know, as the aftermath of Hurricane Katrina illustrates. It wasn’t until after Katrina had exposed weaknesses in the Federal Emergency Management Agency’s response capabilities that FEMA was reorganised, and granted more autonomy within the Department of Homeland Security.
We know models have blind spots. For example, the 2011 Halloween snowstorm in the north-eastern US demonstrated that snowfall on trees that have not yet shed their leaves can pose a serious concern.
Had this type of event been foreseen, the surge in demand for response crews may have been anticipated, and managed more efficiently.
The tremendous variety of potential disastrous events requires holistic risk management, which should involve a learning process. Models continually get better at identifying gaps, and can feature prominently in this process.
When reality reveals a blind spot, existing models can provide context. While forever imperfect, modelling will only continue to become more relevant in understanding both intrinsic and emerging risk.
Following an event, back-testing using a model can help explain what actually occurred, and why a portfolio proved more or less vulnerable than predicted. And modelling historical natural catastrophes against new and current insured exposure can shed light on a company’s risk profile in a uniquely tangible way.
Assumption vs. reality
We cannot imagine the world without the power grid, airline schedules or traffic management systems. These basic engines of modern life are built on mathematical models.
These began as relatively simple prototypes, and have since continuously incorporated new science in a changing world. Expansion should keep pace with need, and models are a hands-on experiment in testing our understanding of the world in which we operate.
The insurance industry has its own evolving needs, and significant efforts have been made to develop distinct modelling disciplines, including natural catastrophe, economic capital and predictive modelling.
These mathematical abstractions are not just theoretical fancy. They bridge the gap between assumption and reality. In a very practical way, models provide and promote robustness in our risk management systems.
Please contact Micah Woolstenhulme, Head of Risk and Economic Advisory on +1 215 309 4637 or email firstname.lastname@example.org