Across Australia and NZ's insurance sector, artificial intelligence is rapidly becoming part of the corporate script.
Boards are asking for AI strategies. IT teams are piloting tools. Staff are being pushed through “prompting” workshops and e‑learning on responsible use.
On paper, it looks like progress.
But a new report on AI proficiency by Section AI, based on a survey of 5,000 knowledge workers at companies with more than 1,000 employees across the US, UK and Canada, should give insurance employers pause. It suggests that while AI is now widely available and frequently used, it is rarely embedded in the kind of work that actually drives productivity, loss ratios or customer experience.
The short version: most workers are playing with AI, not putting it to work.
For insurers, whose business models depend on data‑heavy processes, scale and tight margins, that is a strategic problem.
As the report’s authors put it, in 2025 “AI proficiency” meant something fairly simple: do your people know how to use AI safely and write a decent prompt?
Many organisations, including insurers, have spent the past year attacking this basics problem. Staff now broadly understand what AI is. They know not to paste sensitive customer data into public chatbots. They can ask a model to rewrite an email or summarise meeting notes.
That effort has delivered predictable results: most employees now know what AI is and how to use it “responsibly”.
But in 2026, the bar has shifted. The report argues that AI proficiency will mean something much tougher: incorporating AI into meaningful, value‑adding work tasks every week – the day‑in, day‑out work where premiums are priced, claims are assessed, fraud is detected, policies are serviced and regulatory obligations are met.
This is the “gap” the authors say organisations must cross to realise enterprise‑level return on investment from AI.
Right now, that gap is wide.
Viewed from a distance, AI’s uptake looks like a success story. ChatGPT reports nearly 900 million monthly users. Fifty‑six per cent of Americans say they use AI. Yet, according to the report, 85% of the workforce does not have a value‑driving AI use case and 25% do not use AI for work at all.
Even in places you would expect to be ahead – technology companies and language‑intensive functions like marketing – most AI use remains surface‑level.
Insurance is not explicitly broken out in the report, but the industry sits in the same risk‑regulated, process‑heavy bucket as finance, healthcare and other lagging sectors. Those sectors are described as less likely to have a robust AI strategy, clear policy and good access to tools – and more likely to be missing them altogether.
For insurers, who trade in probabilities and risk models, this mismatch between availability and value creation should feel deeply uncomfortable.
Three years after ChatGPT’s launch, the report finds that most people are still beginners.
Seventy per cent of the workforce are what the authors call “AI experimenters”: people who use AI for very basic tasks – summarising meeting notes, rewriting emails, getting quick answers. The second‑largest group are “AI novices” at 28%: those who don’t use AI, or have tried it a few times before giving up.
Only a sliver of the workforce sits where insurers actually need them: in total, less than 3% of workers are putting AI to use in their workflows and seeing significant productivity gains.
The report’s “at a glance” summary is stark: while 55% of surveyed workers say they use AI at least weekly, the overwhelming majority are doing so in ways that barely graze their organisation’s cost base or risk profile.
The single biggest barrier, according to the report, is not a lack of knowledge about how to prompt. It is a lack of clarity about what to use AI for.
Employees, the authors write, are in a “use case desert”.
Across thousands of workers, 85% have beginner or no AI use cases. A quarter say they never use AI for work.
For insurers, the implications are obvious. The most promising AI opportunities – triaging claims, automating document intake, supporting underwriters with pattern detection, flagging potential fraud, generating first‑draft advice and disclosure for complex products – sit in messy, multi‑step workflows. They are not easily discovered by employees told to “go experiment” with generic tools.
When staff can’t see how AI maps to their specific role – a case manager in life insurance, a motor claims assessor, a reinsurance analyst, a broker support officer – they tend to fall back to the same safe ground: summarise a document, tidy an email, ask a quick question.
That doesn’t change the economics of an insurance operation.
The report’s analysis of 4,500 work‑related AI use cases paints a clear picture: the overwhelming majority of current use is unlikely to move key business metrics.
Among the top 10 work use cases, ranked by proportion of knowledge workers, the statistics are sobering.
When use cases are grouped by category, research (19.6%) and writing (18.1%) are by far the most popular – but both are being used at beginner level, generating one‑off copy suggestions and basic informational searches.
For insurance executives, this should sound alarm bells. AI is being treated as a productivity “booster” around the edges rather than as an engine for redesigning core value chains: underwriting, pricing, claims, distribution, customer service and compliance.
Because most AI use is trivial, the impact on productivity is minimal.
The report’s breakdown of time saved by using AI is telling: less than a third of knowledge workers report saving four or more hours a week, when – in the authors’ view – most organisations should be targeting at least ten hours per employee to generate meaningful ROI.
Where workers are more proficient, the picture improves. AI practitioners are 1.8 times more likely than experimenters to save over four hours a week, and 20 times more likely than novices to do so.
But with practitioners and experts together making up less than 3% of the workforce, that upside is limited.
Training and tools are up. Proficiency is not.
Insurers might reasonably object that they have not been idle. And the report backs that up.
According to the latest survey, those investments do make a difference – and companies have been accelerating their support for AI adoption since March 2025.
The problem is what comes next. Even in these “higher proficiency” groups, workers are still mostly “AI experimenters” – people who understand how LLMs work and have a few basic use cases, but haven’t moved into intermediate or advanced applications.
On average, employees who have undergone AI training score 40 out of 100 in AI proficiency.
The most logical reason, the report suggests, is that most companies remain focused on access, safety and prompting. In other words: give people an LLM, tell them the guardrails and give them a framework to write a good prompt.
That is necessary, but it doesn’t close the gap between usage and value.
Execs think AI is going brilliantly. Staff don’t.
Perhaps the most politically fraught part of the report is the gap between what leaders think is happening with AI and what the rest of the company experiences.
C‑suite respondents overwhelmingly believe their AI deployments are going well. Across several dimensions, the numbers reveal big perception gaps between executives and individual contributors (ICs), who don’t manage a team and do much of the day‑to‑day work.
The divergence shows up even on basic questions, such as whether there is a clear, actionable policy that effectively guides AI use.
Executives also tend to feel overwhelmingly positive about AI: 75% are excited about its implications, and 94% say they trust its contributions. The majority – 57% – use AI for work daily; only 2% do not use it for work at all.
For insurance leaders getting upbeat briefings on AI from vendors, consultants and their own innovation teams, this optimism will be familiar. But if the people handling claims, underwriting referrals, customer complaints and regulatory reporting experience AI very differently, there is more than a communications problem. There is a blind spot in how success is being measured.
Individual contributors: the forgotten majority
The report is explicit that individual contributors – the knowledge workers who don’t manage others – are being left behind.
They benefit the least from their company’s AI resources.
As a result, ICs are more likely to be anxious or overwhelmed by AI, less likely to trust it, and least likely to say it is having a transformative impact on their work.
Manager support is heading in the wrong direction. Support for AI use among ICs is down 11% since May 2025. Only 7% of ICs say their managers expect daily AI use, and only around one‑third receive encouragement to use it.
In an insurance context, those ICs are often the very people buried under repetitive, rule‑based work: claims handlers, policy administration staff, contact‑centre agents, back‑office operations teams. They are also where AI could most obviously relieve pressure.
The report ranks industries by AI proficiency out of 100.
Technology leads with a score of 42. Finance sits at 36. Consulting follows at 35. Manufacturing scores 34, media 33 and real estate 32. At the bottom end, food and beverage and education both score 29, healthcare 28 and retail 27.
The pattern is clear. Leading sectors – tech, finance, consulting – are more likely to have a company AI strategy, policy and access to tools. Lagging sectors – healthcare, education, retail – are more likely to be missing them.
Insurance is usually grouped alongside finance, but frequently behaves more like healthcare: heavily regulated, conservative, document‑heavy and slow to change.
Within functions, engineering or tech comes out on top with a proficiency score of 41, followed by strategy (39), business development or sales (37), human resources (37), marketing (36), finance or legal (35), product (34), operations (32) and customer service/support last at 27.
The most startling findings are about obvious missed opportunities. If, as the report finds, workers are not using AI for even their most obvious, high‑value use cases, it is a safe bet that many underwriters, actuaries, claims leaders and distribution managers are not either.
The report ends with six “leadership imperatives” for 2026 that read like a to‑do list for insurance executives.
For an industry built on modelling risk, the biggest AI risk in 2026 may be more mundane than the headlines suggest.
It is not that AI will suddenly misprice entire books of business or hallucinate policies into existence. It is that insurers will spend years investing in tools, licences and training – and still find that most of their people are using AI like a slightly smarter spell‑checker.
In a world where claims costs are under pressure, weather events are intensifying, fraud is evolving and customers expect faster, clearer service, standing still is its own form of risk.
The technology is already on insurance desktops. The question now is whether the sector can do the harder, slower work of changing how work is designed, measured and led – so AI moves from novelty to necessity in the parts of the business that matter most.