Hollard Insurance Australia is piloting an artificial intelligence tool in its claims operations to change how claim file information is reviewed and summarised for consultants.
The privately owned insurer – which distributes through partners including Commonwealth Bank, Woolworths, and Australian Seniors – is trialling an AI-based summarisation system that condenses claim notes into short, structured outputs for staff. The pilot was outlined by claims value owner Daniel Dearsley at a Guidewire event in Sydney. Dearsley said the tool is aimed at consultants who handle large amounts of documentation on open claims, particularly where multiple contacts, assessments, and internal decisions are recorded over time. He said consultants can currently spend up to 15 minutes reviewing historical notes or relying on the most recent entry before speaking with a policyholder. “[It’s] a bit of a conundrum to work through, both operationally and certainly from a customer experience point of view,” Dearsley said, as reported by IT News.
The AI system processes material from the claim file and produces a concise narrative that can be used as a briefing. “To address this, we now have the ability to synthesise an entire claim into an easily digestible paragraph with meaningful information. In a practical setting, what would have taken a consultant through dozens of pages of notes is now available in seconds,” Dearsley said. According to early pilot results, the largest time reductions are occurring on more complex files with higher documentation volume. “We’re seeing 70% reduction in the time it takes for an individual to review these claims,” Dearsley said, with some cases showing “up to 25 to 35 minutes saved in terms of the capability.”
Hollard is still “refining” the summariser but is considering whether the same approach could be applied to customer correspondence and other unstructured content linked to claims. Dearsley said a key area of interest is how the tool could operate during natural catastrophe events, when claim portfolios can increase rapidly and place pressure on turnaround times. “If we deployed this tomorrow, even in the current format it would provide a tremendous upside, especially during catastrophe times. The reality of a fast return when you’re dealing with a portfolio of claims that is exploded based on a catastrophe that has come through is seriously viable,” Dearsley said.
The insurer has been monitoring the pilot for issues such as hallucinated content or misinterpretation of claim facts. So far, Dearsley said Hollard has not observed hallucination problems and, in some situations, the tool has flagged potential leakage. “In a few cases we’ve identified leakage where an excess wasn’t applied or potentially should have and vice versa – a customer had an excess applied that shouldn’t have,” Dearsley said, describing these as “almost unintended benefits.” He said the system’s limits in reading tone and context remain a concern. “With no hallucinated content or incorrect information, the accuracy is there, but on the usefulness side, there’s still some opportunity [for improvement]. The inability to pick up on sentiment is probably the biggest piece of feedback we have identified leading to, in some cases, missing potential vulnerabilities,” he said. The pilot points to a model where AI-generated summaries sit alongside existing case management processes, supporting file review and leakage detection while human staff continue to handle judgments about vulnerability and complex customer interactions.
Hollard’s trial is taking place as AI use spreads across global financial services, influencing how Australian insurers may plan technology, risk, and security programs. Finastra’s Financial Services State of the Nation 2026 report found AI is now present in almost all surveyed financial institutions. Only 2% of respondents said they had not deployed AI in any form, and 43% identified AI as their main route for introducing new or improved products and processes.
The report indicates that AI is being used widely in risk and control functions. Risk management, fraud detection, data analytics, and reporting were each cited by 71% of institutions as current AI applications, suggesting that models and automation are integrated into decision-support and oversight activities. Customer-facing work is another major area of deployment. According to Finastra, 69% of institutions use AI to support customer service teams or manage documents and related workflows. In Australia, the Commonwealth Bank’s use of AI tools in its customer service operations is one example of how large banks are incorporating automation into frontline channels, which insurers are watching as they review their own service models.
Survey respondents said customer expectations are influencing AI priorities. About 38% reported that customers mainly want more personalised interactions and improved service, and only 4% said they do not provide any personalisation. Over the past 12 months, six in 10 firms said they had upgraded or extended their AI capabilities while working to manage security, compliance, and profitability implications across lending, payments, and customer engagement. Security spending is increasing in line with this expansion. Institutions surveyed said they plan to raise security budgets by an average of around 40% this year as AI is integrated more deeply into critical processes, reflecting concerns about cyber risk, data protection, and operational resilience.
In insurance, separate research from Accenture’s Pulse of Change survey suggests that carriers intend to increase AI spending in 2026 while managing capability and organisational constraints. The survey, conducted between November and December 2025 across 20 industries and 20 countries, included 218 senior insurance executives within a broader group of 3,650 C-suite leaders. Ninety percent of insurance respondents said they plan to increase AI investment this year. Eighty-five percent said they expect more benefit from AI in generating revenue than in reducing costs. This emphasis points to use cases in product development, pricing, distribution, and customer engagement, as well as efficiency in underwriting and claims.
At the same time, the survey highlighted several barriers. A quarter of insurance executives cited shortages of skilled talent as the main factor limiting AI value, and 24% pointed to weak alignment between AI initiatives and core business strategy. Only 24% said their organisations have continuous learning programs focused on AI, and 5% reported redesigning job roles to reflect new ways of working with AI tools, indicating that structural responses are still limited.
Despite these gaps, operational adoption appears to be growing. Accenture reported that 34% of insurance respondents are deploying AI agents across multiple business functions. Nearly one-third of insurance C-suite leaders said they use generative AI tools daily, and 57% said they use such tools at least once a week. In addition, 29% of organisations surveyed said they are redesigning end-to-end processes with AI embedded in key steps. This includes underwriting workflows, claims triage and assessment, fraud checks, back-office processing, and internal support.
The survey also looked at how executives might respond to a correction in AI-related valuations. If an AI “bubble” were to burst, 47% of insurance respondents said they would increase AI investment and 37% said they would increase hiring, indicating an expectation that AI-related work would continue regardless of market cycles. Two-thirds of executives said they are prioritising AI and other digital technologies in response to ongoing change. While 67% reported feeling prepared for technology disruption, fewer said they felt prepared for environmental disruption (39%) or geopolitical disruption (44%).

Hollard’s claims pilot and the findings from Finastra and Accenture describe a market in which AI is moving into core operations. The results also highlight practical considerations for local insurers, including integrating tools into claims and catastrophe response, aligning AI projects with strategy and risk appetite, and developing talent, learning programs, and security controls to support continued adoption.