The National Association of Insurance Commissioners (NAIC) is targeting a March launch for a pilot program that will evaluate an AI assessment tool developed to give regulators clearer insight into insurers’ AI governance practices. At least nine states are expected to take part in the pilot, which will be overseen by the NAIC’s Big Data and AI Working Group.
According to the working group, the pilot is intended to test whether the tool can effectively explain how insurers manage AI systems and apply governance standards in practice. The results are expected to inform future approaches to market conduct reviews and financial risk assessments, as well as highlight any areas where regulators may require additional training.
The evaluation tool was first released in draft form by the working group in July 2025 and then underwent a 60-day public comment period that closed on September 5, 2025. Feedback from that period provided the basis for revisions ahead of the planned pilot.
Industry trade groups signaled general support for testing the tool during a February 9 working group meeting, but raised several operational concerns. Among them were how insurers would be selected for participation, whether the tool could trigger compliance actions, and how sensitive company data would be protected. Organizations representing both life insurers and property and casualty carriers emphasized confidentiality as a primary issue.
Participation in the pilot is not expected to be voluntary for companies selected by regulators. Nathan Houdek said the participating states will determine which insurers are included, adding that coordination among those states is planned to avoid duplicative requests.
“Essentially, the pilot states that are participating will determine which companies they want to focus on. We do intend to coordinate among the pilot states to ensure that companies are not receiving a lot of different inquiries and correspondence from different states,” Houdek said.
Separately, the National Association of Mutual Insurance Companies (NAMIC) questioned how the evaluation tool defines and categorizes predictive models and generalized linear models (GLMs). An earlier draft of the tool included a definition of GLMs, but that language was removed because those models fell outside the tool's scoring scope, said Lindsey Stephani, NAMIC's policy vice president for data science, AI, machine learning and cybersecurity.
Stephani said predictive models remain within scope, which has raised concerns among insurers.
“From our perspective, predictive models can be simple code- and rules-based models that have been used for years and would greatly expand the scope of this tool beyond AI,” Stephani said, adding that the same concern applies to GLMs.
NAMIC is urging regulators to explicitly state that GLMs and predictive models are not considered AI and to restore the GLM definition in the tool’s framework.