The objective of this report is to encourage insurers to think about risk in a different way by providing a systematic and in-depth framework for including downward counterfactual analysis in risk assessment. (page 6)
What is counterfactual analysis?
Counterfactual analysis: what would the result have been if “x” had happened instead of “y”?
Whenever an event takes the insurance market by surprise, questions are asked about how the loss might have been averted or what additional risk mitigation measures might have reduced it. It is also useful for insurers and other interested parties to ask how the loss might have been worse. This is known as downward counterfactual analysis.
The objective of this report is to encourage more systematic and profound downward counterfactual thinking in all lines of insurance. This encouragement is needed because this kind of thinking goes against the grain of human nature. Counterfactual disaster risk analysis is rooted in concepts as fundamental as claims analysis, yet the subject is absent from professional insurance education and training. (page 8)
Downward counterfactual analysis lets insurers search for and analyse information that research into historical real-world events does not capture, and can therefore help identify unlikely but possible events (known as Black Swans).
Learning more from history
We have a limited history of data available to estimate the likelihood and cost of low frequency events (e.g. volcanic eruptions). Downward counterfactual analysis allows us to get more out of the historical data by building “what if” scenarios of events that could have happened if things had turned out slightly differently.
The analysis of near misses
Near misses - events that almost happened - can be used to understand risk better. Hurricane Irene in 2011 was a near miss that should have alerted people to the risk of the NYC subway system flooding during a hurricane, but it didn't; Superstorm Sandy came through the next year and did flood the subway system. Hurricane Ivan in 2004 was a similar near miss for New Orleans before Hurricane Katrina struck in 2005.
If things had turned for the worse
There is an outcome bias in reviewing losses: we should be analysing near misses in the same manner as we review actual losses. Additionally, for actual events, we can ask what could have made the outcome worse, and what impact that would have had. What if Hurricane Irma had tracked right through Miami, as it appeared it would a few days before landfall?
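The "what if it had been worse" question can be explored numerically by perturbing a single parameter of an actual event and recomputing the loss. The sketch below is a deliberately toy illustration: the exposure value, damage ratio, footprint width, and Gaussian fall-off are all invented for this example, not taken from any catastrophe model.

```python
import numpy as np

# Toy downward-counterfactual sweep (illustrative assumptions only): shift a
# hurricane's landfall point along the coast and recompute the loss against a
# single exposure concentration (e.g. a city) modelled with a Gaussian fall-off.
CITY_KM = 0.0        # exposure centre, km along the coast (assumed)
EXPOSURE = 500e9     # total insured value near the city, USD (assumed)
FOOTPRINT_KM = 80.0  # damage footprint half-width in km (assumed)
DAMAGE_RATIO = 0.05  # damage ratio for a direct hit (assumed)

def loss(landfall_km):
    """Loss decays with distance between landfall and the exposure centre."""
    distance = landfall_km - CITY_KM
    return EXPOSURE * DAMAGE_RATIO * np.exp(-0.5 * (distance / FOOTPRINT_KM) ** 2)

actual = loss(120.0)              # the historical track: a near miss
for landfall in (120.0, 60.0, 0.0):  # progressively worse counterfactual tracks
    print(f"landfall {landfall:6.1f} km from city: "
          f"loss {loss(landfall):.2e} USD ({loss(landfall) / actual:.1f}x actual)")
```

Sweeping one parameter like this turns a single historical data point into a loss curve over plausible counterfactual outcomes, which is the basic move behind downward counterfactual analysis.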
Bias induced by historical calibration
If everyone calibrates models to the same historical data, we will all be biased in the same direction, creating shared blind spots and a false sense of completeness. For example, seismologists had enough information prior to the 2011 Tohoku earthquake to know that a magnitude 9 quake could occur on that fault, but the historical data were not interpreted that way.
Stochastic modelling of the past
We can use stochastic modelling to estimate what could have happened if history had turned out differently. For example, repeatedly simulating 110-year histories of hurricane losses and calculating the average annual loss of each simulation provides a view of the uncertainty that the small historical sample introduces into hurricane modelling.
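A minimal sketch of this re-simulation idea, assuming an invented frequency-severity model (Poisson annual hurricane counts, lognormal per-event losses; both parameter sets are illustrative, not calibrated to any real catalogue):

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative, uncalibrated assumptions:
LAMBDA = 1.7            # mean landfalling hurricanes per year (assumed)
MU, SIGMA = 21.0, 1.5   # lognormal parameters for per-event loss, USD (assumed)
YEARS = 110             # length of each simulated "alternative history"
N_HISTORIES = 10_000    # number of alternative histories to simulate

def history_aal(rng):
    """Average annual loss over one simulated 110-year history."""
    counts = rng.poisson(LAMBDA, size=YEARS)           # events per year
    total = sum(rng.lognormal(MU, SIGMA, size=n).sum() for n in counts)
    return total / YEARS

aals = np.array([history_aal(rng) for _ in range(N_HISTORIES)])

# The spread across histories shows how much a 110-year sample can mislead us.
print(f"mean AAL across histories:  {aals.mean():.3e} USD")
print(f"5th-95th percentile range:  {np.percentile(aals, 5):.3e} "
      f"to {np.percentile(aals, 95):.3e} USD")
```

Each simulated history stands in for "the 110 years we might have observed instead"; the width of the resulting AAL distribution is the sampling uncertainty that a single historical record hides.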
Counterfactual disaster scenarios
Counterfactual disaster scenarios (CDS) can be used alongside Lloyd’s Realistic Disaster Scenarios (RDS), especially for risks with little historical data.
- New Zealand earthquake
- UK flooding
- Caribbean / US windstorm clash
- Terrorism accumulations other than Manhattan
Practical applications in modelling activities for P&C (re)insurance
- Pricing non-modelled catastrophe risk
- Pricing catastrophe risk
- Capacity management
- Capital calibration