As artificial intelligence moves from pilot projects into day-to-day operations, most organisations are investing in AI skills. However, many are still deploying the technology without robust risk controls.
Gallagher’s third annual AI Adoption and Risk Survey, based on responses from more than 1,200 global businesses, found that nearly two‑thirds (62%) have delivered AI training to employees in the last year, and more than half (55%) have hired for AI‑focused roles.
Governance is climbing the agenda, with 56% of organisations already communicating an AI strategy to employees. However, 43% are yet to introduce a formal AI risk management framework, and only 44% have conducted AI impact assessments. That leaves a significant proportion of firms exposed to operational, legal and reputational risks as AI use accelerates.
This gap between adoption and oversight is likely to feed into more complex technology E&O, cyber, D&O and employment practices exposures as AI-driven tools are embedded in underwriting, pricing, claims, HR and customer interactions.
Gallagher’s research points to a shift from experimentation to enablement.
Almost half of businesses (47%) now offer training to help employees use AI tools, up seven percentage points from 2024. Four in 10 (40%) have created new roles where AI is a core part of the remit, reflecting growing demand for data, automation and AI governance skills.
The vast majority of respondents (86%) said AI has improved employee productivity, supporting the view that AI is mainly automating repetitive tasks and augmenting, rather than replacing, knowledge workers. This is consistent with current usage patterns, in which AI is applied to document handling, triage, underwriting workbenches and customer service, rather than fully automating complex underwriting or claims decisions.
“For many global companies, AI is no longer in the test phase. It’s in the workplace, shaping strategy and powering productivity,” said Ben Warren, managing director of People Data, AI and Innovation at Gallagher. “Training programs are on the rise, equipping employees for a future where human ingenuity and AI agents will work hand in hand. We know what AI can do, and the potential is undeniable. It can handle repetitive and manual tasks, freeing employees to spend less time on menial work and more on what really matters: creative ideation and meeting clients.”
Despite rapid adoption, respondents continue to see human capability as central. The most frequently cited reason for protecting employee roles was the need to retain and promote creativity within the business.
A desire to preserve the human touch in client interactions was cited by 34% of respondents, while 31% highlighted the need to keep people in place to solve complex problems that technology cannot yet address independently.
This reinforces the expectation that underwriting judgment, relationship management, claims negotiation and complex risk engineering will remain human-led, even as AI handles more routine analysis and workflow. It also underlines that talent attraction and development will continue to be a differentiator, especially in specialty, complex commercial and large-corporate lines.
Gallagher’s analysis also found that many organisations see the most effective path forward as one where human creativity and judgment work alongside AI‑driven efficiency.
The survey findings come as regulators and courts move from principles to more concrete expectations around AI use.
In the EU, the AI Act is being phased in from 2024 to 2026, with stricter obligations for “high‑risk” applications, including some financial and insurance uses. In North America, supervisors such as the NAIC in the US and OSFI in Canada have published guidance on model risk management, data governance and the use of AI and big data in underwriting and claims. That trend points to closer scrutiny of how insurers and their clients document AI models, test for bias and explain automated decisions.
On the product side, AI‑driven issues are already emerging across multiple lines. Technology E&O and cyber policies are being tested by disputes over algorithmic failures, data leakage from training sets and AI‑assisted social engineering. D&O underwriters are beginning to ask how boards oversee AI strategy and risk, particularly where models influence pricing, lending or HR outcomes that could attract regulatory or class‑action attention. Employment practices and professional liability carriers are monitoring how AI is used in hiring, performance management, credit decisions and medical triage.
Meanwhile, the governance gap highlighted by Gallagher – with 43% of firms lacking formal AI risk frameworks and fewer than half conducting AI impact assessments – suggests many insureds are not yet at the maturity level underwriters would prefer. Carriers and brokers that can pair coverage with practical guidance on AI policies, impact assessments and incident response may therefore be better placed to distinguish stronger risks from weaker ones.
Gallagher's findings also indicate a growing advisory role for intermediaries.
As clients expand AI training and hiring without equivalent investment in governance, brokers and risk consultants are well positioned to help design AI risk frameworks, map existing controls to new use cases and prepare for emerging disclosure expectations.
Over time, that is likely to translate into more granular AI-specific questions at placement and renewal, including which processes are algorithm-driven, how models are validated, how vendors are managed and how complaints or errors are handled.
“As organisations scale their use of AI, risk oversight and clear policies will become increasingly important,” Warren said. “Overall, the long‑term value of AI will depend on combining technological efficiency with human creativity, judgment and trust.”