For Manulife’s Matt Gabriel, the future of AI in insurance is not just about smarter models – it’s about autonomous collaboration between humans and intelligent systems.
The company’s head of group functions AI and global AI model validation said the next big leap in the industry will come from agentic AI, an emerging class of artificial intelligence that can make decisions and execute tasks on its own within a set of guardrails.
“I think the answer to all three – what’s next for AI in insurance, what’s next for Manulife, and what excites me personally – is going to be agentic AI,” Gabriel told Insurance Business. “It’s this concept of building AI agents that can take in data, identify a next action, and potentially even execute that action through a set of tools.”
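The loop Gabriel describes – take in data, identify a next action, execute it through a set of tools, all within guardrails – can be illustrated with a minimal sketch. Everything here (the `ClaimAgent` class, the tool functions, the confidence rule) is hypothetical for illustration, not Manulife’s implementation:

```python
# Sketch of an agentic loop: ingest data, pick a next action, execute it
# via a tool. A confidence guardrail routes uncertain cases to a human.
# All names and rules below are hypothetical illustrations.

def lookup_policy(claim):
    # Hypothetical tool: fetch policy details for a claim.
    return {"policy_id": claim["policy_id"], "active": True}

def flag_for_review(claim):
    # Hypothetical tool: route the claim to a human reviewer.
    return {"status": "needs_human_review", "claim_id": claim["id"]}

def approve_claim(claim):
    # Hypothetical tool: autonomously approve the claim.
    return {"status": "approved", "claim_id": claim["id"]}

class ClaimAgent:
    """Takes in data, identifies a next action, and executes it through
    a set of tools -- within a guardrail on model confidence."""

    def __init__(self, guardrail_threshold=0.9):
        self.guardrail_threshold = guardrail_threshold
        self.tools = {"review": flag_for_review, "approve": approve_claim}

    def decide(self, claim, policy):
        # Stand-in for a model's decision: a real system would get this
        # action and confidence score from an AI model.
        if not policy["active"]:
            return "review", 1.0
        confidence = 0.95 if claim["amount"] < 10_000 else 0.6
        return "approve", confidence

    def run(self, claim):
        policy = lookup_policy(claim)            # take in data
        action, confidence = self.decide(claim, policy)  # next action
        if confidence < self.guardrail_threshold:
            action = "review"                    # guardrail kicks in
        return self.tools[action](claim)         # execute via a tool

agent = ClaimAgent()
print(agent.run({"id": "C1", "policy_id": "P1", "amount": 2_500}))
print(agent.run({"id": "C2", "policy_id": "P2", "amount": 50_000}))
```

In this toy version the small claim is approved autonomously, while the large claim falls below the confidence bar and is handed to a human – the “set of guardrails” that distinguishes agentic autonomy from unchecked automation.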
Gabriel said agentic AI will redefine how insurers handle both customer engagement and back-office operations.
He added that it will let companies get information and insights to some of their teams much more quickly, advance their research capabilities, and streamline operational processes. It will also support personalization of marketing and sales experiences in a way that generative AI hasn’t been able to so far, he said.
He added that agentic systems could also help solve long-standing pain points – such as complex onboarding, underwriting, and claims processes – by linking data, automating reasoning, and surfacing insights in real time.
But the shift will also demand new skills across the workforce. “It requires us to really think through how we empower and upskill our employees to work hand in hand with these agentic capabilities,” Gabriel said. “That’s where a lot of our investment focus is right now – making sure people are resilient through this time of rapid change and innovation.”
Manulife’s approach, he added, is to make AI solutions “reusable and scalable,” so that tools developed in one market can be adapted to others. “It’s not just sharing code,” he said.
Gabriel said the industry is at what he calls an inflection point for AI adoption – a moment where the technology’s capabilities are finally meeting long-standing needs in the life insurance business.
AI is creating new ways to tackle some of the biggest challenges facing the life insurance industry, he said. “The life insurance purchase process has been ripe for innovation for years… collecting and processing all that information efficiently has always been difficult.”
Newer AI capabilities, including generative and agentic systems, are helping to address those inefficiencies, Gabriel said. Combined with behavioral insurance models and early diagnostic tools, the technology is beginning to reshape how policies are designed, sold, and managed, he added.
“I think we’re really at an inflection point within the industry,” he said.
Gabriel stopped short of commenting on competitors but said Canada offers a uniquely strong foundation for AI-driven innovation.
“We’re finding that Canada offers a very deep talent pool in the AI space,” he said. “We have strong and engaged partners – from technology providers to consulting firms – and the ability to engage directly with regulators and legislators.”
He praised Canada’s “principle-based” regulatory approach, which focuses on maintaining dialogue rather than imposing rigid rules. “It’s a strong, talented, engaged regulator that is focused on principle-based oversight,” Gabriel said. In a time of rapid innovation, that kind of two-way dialogue is far better than prescriptive rules that might just put a stop to things, he added.
He also credited Canadian universities for nurturing a steady pipeline of AI talent and research partnerships. “It’s a terrific space for innovation,” he said.
When asked how Manulife balances AI’s rapid growth with responsible governance, Gabriel described a risk-based and cultural approach.
“We’ve taken a very pragmatic approach to responsible AI,” he said. “We have strong frameworks in place to ensure proper oversight, risk assessment, and validation of our AI use cases. But it’s a risk-based framework – we’re not applying the same rigor to a low-risk solution as we are to a high-risk one.”
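The risk-based idea – matching the rigor of oversight to the risk of the use case – can be sketched as a simple tiering rule. The tiers, risk factors, and required checks below are hypothetical illustrations, not Manulife’s actual framework:

```python
# Toy risk-based oversight tiering: higher-risk AI use cases draw more
# validation steps. Tier names and checks are hypothetical.

OVERSIGHT_BY_TIER = {
    "low":    ["peer_review"],
    "medium": ["peer_review", "bias_testing"],
    "high":   ["peer_review", "bias_testing",
               "independent_validation", "ongoing_monitoring"],
}

def required_oversight(customer_facing, affects_decisions):
    # Crude scoring: customer-facing systems that drive decisions
    # (e.g. underwriting or claims) draw the most scrutiny.
    if customer_facing and affects_decisions:
        tier = "high"
    elif customer_facing or affects_decisions:
        tier = "medium"
    else:
        tier = "low"
    return tier, OVERSIGHT_BY_TIER[tier]
```

Under this scheme an internal research assistant would clear a light review, while an underwriting model would trigger the full validation battery – the same checks are not applied uniformly everywhere.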
He emphasized that responsible AI is not just about compliance, but culture.
“We spend a lot of time and effort on educating our colleagues around the responsible use of AI – helping them understand what can go wrong, so they can bring that awareness into their day-to-day work.”
Gabriel also noted that Manulife has publicly shared its responsible AI principles, including detailed reporting in its ESG disclosures.
“The more we can bring responsible AI into an a priori process – embedding it in the way you develop AI rather than after-the-fact validation – the better off we are,” he said.
The company is now exploring how to design development systems that include that governance “from the start.”