Artificial intelligence adoption across the Lloyd’s market has surged over the past year, with firms increasingly pairing deployment with formal governance structures as the technology moves beyond experimentation.
New survey findings published by the Lloyd’s Market Association in collaboration with Barnett Waddingham show that 93% of responding firms now have, or are developing, formal AI governance frameworks, underlining how quickly the market is moving to institutionalise oversight of emerging AI use cases.
The survey was based on 39 responses from firms representing more than 60% of Lloyd’s market stamp capacity.
According to the report, around half of firms reported limited or no AI implementation in 2025. Twelve months on, adoption has accelerated sharply, driven primarily by generative AI tools such as ChatGPT and Microsoft Copilot, as well as internal productivity applications including summarisation, reporting and data processing.
However, deployment remains focused largely on operational efficiency rather than frontline underwriting or claims decision-making, suggesting the market is still taking a cautious approach to more material insurance applications.
The findings indicate that governance has become a central priority as adoption expands. Of surveyed firms, 72% already have AI frameworks in place and a further 21% are developing them, while more than 60% said human oversight of AI-generated outputs is mandatory.
Responsibility for AI governance remains split across the market, with 44% of firms assigning oversight to the chief technology officer and 33% establishing dedicated AI governance committees.
Sanjiv Sharma, head of actuarial and exposure management at the LMA, said the pace of adoption is being matched by a growing emphasis on controls.
“AI adoption across the Lloyd’s market has accelerated quickly over the past 12 months, but what’s encouraging is that governance is being built alongside it, rather than after the fact,” Sharma said.
The survey also points to a shift in perceived AI-related risks. Data privacy, cybersecurity and third-party risk have emerged as the leading concerns among respondents, overtaking the broader regulatory uncertainty that dominated market discussion a year earlier.
That evolution reflects a growing recognition that as AI deployment scales, operational and data governance risks may become more immediate than strategic or regulatory concerns.
At the same time, the survey found that around one in four firms still relies on general third-party risk frameworks rather than AI-specific controls, suggesting governance maturity remains uneven across the market.
Wan Heah, partner and head of general insurance at Barnett Waddingham, said the market’s challenge now is ensuring governance frameworks evolve in step with more advanced use cases.
“The market is moving past experimentation and towards a more disciplined use of AI, with governance, data protection and validation now firmly in focus,” Heah said.
The findings align with a broader trend across the insurance sector, where carriers and brokers are increasingly shifting from pilot AI projects to enterprise-wide implementation while regulators and boards demand stronger governance, accountability and model validation frameworks.