A widening "AI proof gap" is emerging between board-level enthusiasm for artificial intelligence and the governance needed to prove it is safe, effective and delivering value, according to a new survey from Grant Thornton.
"Companies are making tremendous investments into AI and yet, we're not seeing that correlate with an increase in AI accountability," said Tom Puthiyamadam, managing partner of Advisory Services for Grant Thornton Advisors LLC. "Our report found that while most organizations have implemented AI solutions, many teams cannot measure its impact or respond effectively when initiatives fail. That is a critical gap: in our view, the companies that win tomorrow will leverage AI effectively and adapt in real time to ever-evolving business trends."
The AI Impact Survey, based on responses from nearly 1,000 senior US business leaders across multiple industries in early 2026, found that more than three-quarters (78%) lack full confidence their organization could pass an independent AI governance audit within 90 days. Half (50%) of operations leaders said they need a formal AI strategy or governance plan in place within the next six months to improve performance.
“AI deployment is simply outpacing the infrastructure that supports it,” Puthiyamadam said. “We see this pattern repeatedly with new technology: guardrails come after an incident occurs - not before - and by then there may be significant organizational and operational consequences.”
The survey suggests the main drag on AI performance is governance rather than the underlying tools. Nearly half of leaders (46%) said AI underperforms because controls and compliance are not working, yet only 11% said organizations should focus primarily on risk and compliance to enable AI success.
The report argued that scaling AI before proving it is safe or effective is less innovation than unmanaged risk, particularly as regulators increase their scrutiny. In the US insurance sector, for example, the National Association of Insurance Commissioners has adopted guidance spelling out expectations for AI systems programs, while in Europe the EU's AI Act and separate supervisory opinions are pushing insurers towards more structured AI governance.
Grant Thornton's findings also pointed to weak ownership and strategy at board level.
While three in four boards have approved major AI investments, only 52% have set clear AI governance expectations, and just 54% have integrated AI risk and opportunity into ongoing board or committee oversight.
“Most governance models weren’t designed for AI,” Puthiyamadam said. “Centralized review bodies become overwhelmed, creating bottlenecks that slow execution without reducing risk. The fix is to set policy and risk criteria centrally, then delegate assessments to trained reviewers at the division or regional level - aligning the depth of review to the level of risk.”
Strategy is another fault line. More than half of executives (51%) said strategy is the biggest driver of AI return on investment, yet only 22% of operations leaders reported having a fully developed and implemented AI strategy.
“Organizations are expanding AI across more pilots, use cases and functions, but without consistent measurement, feedback loops or clarity on where value is created,” said Sumeet Mahajan, lead partner, AI and Data for Advisory Services at Grant Thornton Advisors LLC. “You have to apply discipline - set measurement targets, build governance infrastructure and curtail initiatives that do not deliver results.”
As organizations grant AI systems more autonomy, many are doing so without tested safeguards. Nearly three in four organizations are piloting, scaling or running autonomous AI, yet only one in five has tested a response plan for AI failures, the survey found.
While most organizations (95%) do not permit agents to make fully autonomous, high‑stakes decisions without human review, exposure at moderate risk levels remains significant. More than four in 10 (43%) list regulatory and compliance uncertainty among their top concerns about implementing “agentic” AI.
The report argued that the real risk as AI gains autonomy is not failure itself, but being unprepared when it fails. Many companies have incident playbooks, but have not adapted them for AI‑specific issues such as model drift, hallucinations or biased outputs, making failures harder to detect, explain and remediate.
The survey pointed to a clear divide between organizations experimenting with AI and those capturing measurable value.
Companies with fully integrated AI are nearly four times as likely to report AI‑driven revenue growth as those still piloting (58% versus 15%), and are far more likely to say they could pass an independent AI governance audit.
“The organizations pulling ahead in AI are the ones with governance in place,” Puthiyamadam concluded. “They train their people, measure results and focus on scaling what works. Governance isn’t slowing AI leaders down - it’s correlated with stronger and more sustainable AI outcomes.”