A surge of investment and competition has transformed catastrophe modeling for secondary perils, pushing the industry toward sharper risk insights and broader insurance solutions. “There’s been great demand in the market to have solutions for these perils from the catastrophe modeling world,” said Adam Miron, head of catastrophe analytics at Juniper Re. The focus has shifted to perils like hail, wildfire, and flood, long considered difficult to model.
Major players Verisk and Moody’s have released new severe convective storm and wildfire models, while new vendors have entered the space. “Catastrophe models have gotten significantly better in recent years, and I think they will only continue to get better because of the investment and the competition in that space,” Miron said.
The push for better models is not just about technical progress; it is about meeting the market’s demand for actionable, quantitative risk assessments in areas that have historically been underserved. “Whenever an insurer is looking at insuring in an area or a peril, they want to have some sort of quantitative sense about what the risk is,” Miron said. The ability to price risk accurately opens the door for insurers to enter new markets, particularly those outside traditional high-risk zones. “The more competition that you have in a certain area or peril, the more likely it is to drive down the price,” he said.
This evolution in modeling has also fueled innovation in insurance products. Parametric solutions, which pay out based on a predefined triggering event rather than an assessment of actual losses, have gained traction.
“Catastrophe models could also be used heavily for parametric products,” Miron said. “To the extent a community or area wants to purchase insurance through a very transparent and quick-paying method, catastrophe models could help facilitate a parametric type of insurance product.” The transparency and speed of these products offer a compelling value proposition for communities seeking rapid recovery after disaster.
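To make the mechanics concrete, here is a minimal sketch of how a parametric trigger might work, in Python. The contract terms, the hail-size index, and all figures are illustrative assumptions, not any real product: the payout depends only on a measured index crossing a threshold, with no loss-adjustment step.

```python
from dataclasses import dataclass

@dataclass
class ParametricContract:
    """A simple parametric cover: payout depends only on a measured index."""
    trigger: float      # index value at which the contract attaches
    exhaustion: float   # index value at which the full limit is paid
    limit: float        # maximum payout

    def payout(self, index: float) -> float:
        """Linear payout between trigger and exhaustion; no claims adjusting."""
        if index < self.trigger:
            return 0.0
        if index >= self.exhaustion:
            return self.limit
        # interpolate between attachment and exhaustion
        frac = (index - self.trigger) / (self.exhaustion - self.trigger)
        return frac * self.limit

# Hypothetical hail cover: pays on reported peak hail size (inches),
# not on surveyed damage. All numbers are invented for illustration.
cover = ParametricContract(trigger=2.0, exhaustion=4.0, limit=10_000_000)
print(cover.payout(1.5))  # 0.0          -- below trigger, no payout
print(cover.payout(3.0))  # 5000000.0    -- halfway between trigger and exhaustion
print(cover.payout(4.5))  # 10000000.0   -- capped at the limit
```

Because the payout is a pure function of the index, settlement can be computed the moment the measurement is published, which is where the speed and transparency of these products come from.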
Climate change and the limits of prediction

As models become more sophisticated, the challenge of uncertainty grows. Climate change is altering baselines, making historical data less reliable as a predictor of future risk. “The models were generally built on historical data to help inform the current view of risk. It depends on the peril, but for hurricanes, there are views that are designed to try and capture the current climate state,” Miron said.
Some models offer users the ability to select views based on projected climate scenarios for 2030, 2050, and beyond. But the limits of forecasting remain stark. “We don’t know what the climate will be like in the future, necessarily. And so I think we just need to be careful about how much we try to factor in future climate states, just because there is a lot we don’t know,” he said.
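One common way such forward-looking views are expressed is by scaling the occurrence rates in a stochastic event set. The sketch below, with an invented three-event catalog and an invented 2050 frequency uplift, shows how a conditioned view shifts the average annual loss (AAL); actual vendor adjustments are peril-specific and far more granular.

```python
# Toy stochastic event set: each event has an annual occurrence rate and a loss.
# The events and the +25% frequency uplift are illustrative assumptions only.
events = [
    {"id": 1, "rate": 0.020, "loss": 50_000_000},   # rate = events per year
    {"id": 2, "rate": 0.005, "loss": 200_000_000},
    {"id": 3, "rate": 0.001, "loss": 800_000_000},
]

def aal(event_set):
    """Average annual loss: sum of rate * loss over the event set."""
    return sum(e["rate"] * e["loss"] for e in event_set)

def conditioned(event_set, frequency_uplift):
    """Return a copy of the event set with occurrence rates scaled."""
    return [{**e, "rate": e["rate"] * frequency_uplift} for e in event_set]

baseline = aal(events)                       # current-climate view
view_2050 = aal(conditioned(events, 1.25))   # hypothetical 2050 view
print(f"baseline AAL:  {baseline:,.0f}")     # 2,800,000
print(f"2050-view AAL: {view_2050:,.0f}")    # 3,500,000
```

The uplift factor is exactly the kind of estimate Miron cautions about: it carries the full uncertainty of the climate projection behind it.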
The uncertainty is not just academic. It has real-world consequences for how insurers and reinsurers assess risk, price products, and manage portfolios. “Could there be more impact in the future? Yes, potentially. And the models do try and capture some of that with the views that they could provide the user. But there is still a tremendous amount of uncertainty around those estimates,” Miron said. The industry is grappling with how to balance the need for forward-looking models with the reality that future climate conditions may diverge sharply from the past.
This tension is mirrored in the debate over transparency in catastrophe modeling. Historically, catastrophe models operated as black boxes, with proprietary algorithms and limited visibility for users. That is changing, but not fast enough for many in the market. “I think transparency and openness should be a demand that we have for catastrophe models,” Miron said. “Historically, it’s been a black box operation, where it’s hard to know the underpinnings and the assumptions of the model. I do think that is changing. People are demanding more transparency and access to adjust the model, potentially to better reflect their custom view of risk.”
The ability to adjust models is especially critical for insurers with unique portfolios or niche exposures. “If they have a specific line of business that they are writing, or a very niche type of risk, there might be certain components of the model that don’t represent that insurer well. So the ability for the user to be able to adjust and make tweaks is very important,” he said. But with greater transparency comes greater responsibility. “Of course, greater flexibility demands time and energy from the user to understand the model, and not just rely purely on the output. So I think that there’s a little bit of a trade-off: you want more flexibility and transparency, but you have to be able to understand the consequences. If you did change the model, understand what you were changing and what sort of impact that change has on the results,” Miron said.
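As a rough illustration of that trade-off, the sketch below applies a user adjustment to one peril in a toy event loss table and shows how a tail metric moves. The table, the +50% hail-severity modifier, and the single-threshold occurrence exceedance probability (OEP) calculation are all simplifying assumptions; the point is that a seemingly local tweak propagates into portfolio-level results.

```python
import math

# Toy event loss table (ELT). Under a Poisson occurrence assumption,
# OEP(x) = 1 - exp(-rate(x)), where rate(x) is the combined annual rate
# of events whose loss exceeds x. All figures are invented.
elt = [
    {"peril": "hail",     "rate": 0.050, "loss": 20_000_000},
    {"peril": "hail",     "rate": 0.010, "loss": 120_000_000},
    {"peril": "wildfire", "rate": 0.004, "loss": 400_000_000},
]

def oep(event_loss_table, threshold):
    """Annual probability that some event loss exceeds the threshold."""
    rate = sum(e["rate"] for e in event_loss_table if e["loss"] > threshold)
    return 1.0 - math.exp(-rate)

def adjust(event_loss_table, peril, loss_factor):
    """Scale losses for one peril, e.g. to reflect a custom view of hail risk."""
    return [
        {**e, "loss": e["loss"] * loss_factor} if e["peril"] == peril else e
        for e in event_loss_table
    ]

threshold = 25_000_000
before = oep(elt, threshold)
after = oep(adjust(elt, "hail", 1.5), threshold)  # illustrative +50% hail severity
print(f"OEP at 25M before: {before:.4f}, after: {after:.4f}")
# OEP at 25M before: 0.0139, after: 0.0620
```

A single modifier more than quadruples the exceedance probability at this threshold, which is exactly why Miron stresses understanding what a change does to the results before relying on them.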
Model vendors do seek feedback and claims data from the market, aiming to refine their products with real-world experience. “I think the model vendors do a good job of trying to elicit feedback from the market, asking for claims data, and trying to incorporate lessons from recent events,” Miron said. “So I think that’s a good aspect, and I think the industry continues to get better in terms of the openness and transparency of the models, and I hope that continues.”