Are your staff actually doing anything useful with AI?

In the race to adopt AI, the insurance industry is still jogging on the spot

Transformation

By Matthew Sellers

Across Australia and NZ's insurance sector, artificial intelligence is rapidly becoming part of the corporate script.

Boards are asking for AI strategies. IT teams are piloting tools. Staff are being pushed through “prompting” workshops and e‑learning on responsible use.

On paper, it looks like progress.

But a new report on AI proficiency by Section AI, based on a survey of 5,000 knowledge workers at companies with more than 1,000 employees across the US, UK and Canada, should give insurance employers pause. It suggests that while AI is now widely available and frequently used, it is rarely embedded in the kind of work that actually drives productivity, loss ratios or customer experience.

The short version: most workers are playing with AI, not putting it to work.

For insurers, whose business models depend on data‑heavy processes, scale and tight margins, that is a strategic problem.

The new definition of “AI proficiency”

As the report’s authors put it, in 2025 “AI proficiency” meant something fairly simple: do your people know how to use AI safely and write a decent prompt?

Many organisations, including insurers, have spent the past year attacking this basics problem. Staff now broadly understand what AI is. They know not to paste sensitive customer data into public chatbots. They can ask a model to rewrite an email or summarise meeting notes.

That effort has delivered predictable results: safe, “responsible” usage is now the norm.

But in 2026, the bar has shifted. The report argues that AI proficiency will mean something much tougher: incorporating AI into meaningful, value‑adding work tasks every week – the day‑in, day‑out work where premiums are priced, claims are assessed, fraud is detected, policies are serviced and regulatory obligations are met.

This is the “gap” the authors say organisations must cross to realise enterprise‑level return on investment from AI.

Right now, that gap is wide.

The paradox of high usage, low value

Viewed from a distance, AI’s uptake looks like a success story. ChatGPT reports nearly 900 million monthly users, and 56% of Americans say they use AI. Yet, according to the report, 85% of the workforce does not have a value‑driving AI use case and 25% do not use AI for work at all.

Even in places you would expect to be ahead – technology companies and language‑intensive functions like marketing – most AI use remains surface‑level.

Insurance is not explicitly broken out in the report, but the industry sits in the same risk‑regulated, process‑heavy bucket as finance, healthcare and other lagging sectors. Those sectors are described as less likely to have a robust AI strategy, clear policy and good access to tools – and more likely to be missing them altogether.

For insurers, who trade in probabilities and risk models, this mismatch between availability and value creation should feel deeply uncomfortable.

A workforce of “experimenters”

Three years after ChatGPT’s launch, the report finds that most people are still beginners.

70% of the workforce are what the authors call “AI experimenters”: people who use AI for very basic tasks – summarising meeting notes, rewriting emails, getting quick answers. The second‑largest group are “AI novices” at 28%: those who don’t use AI, or have tried it a few times before giving up.

Only a sliver of the workforce sits where insurers actually need them:

  • AI practitioners: 2.7%
  • AI experts: 0.08%

In total, less than 3% of workers are putting AI to use in their workflows and seeing significant productivity gains.

The report’s “at a glance” summary is stark:

  • 97% of the workforce are using AI poorly or not at all
  • 25% say they save no time with AI
  • 40% say they would be fine never using AI again

So while 55% of surveyed workers say they use AI at least weekly, the overwhelming majority are doing so in ways that barely graze their organisation’s cost base or risk profile.

The “use case desert” hits insurance hard

The single biggest barrier, according to the report, is not a lack of knowledge about how to prompt. It is a lack of clarity about what to use AI for.

Employees, the authors write, are in a “use case desert”.

Across thousands of workers:

  • 26% say they don’t have a work‑related AI use case
  • 60% say their use cases are beginner‑level
  • Only 15% of reported use cases are judged likely to generate ROI for the business

In total, 85% of knowledge workers have beginner or no AI use cases. A quarter say they never use AI for work.

For insurers, the implications are obvious. The most promising AI opportunities – triaging claims, automating document intake, supporting underwriters with pattern detection, flagging potential fraud, generating first‑draft advice and disclosure for complex products – sit in messy, multi‑step workflows. They are not easily discovered by employees told to “go experiment” with generic tools.

When staff can’t see how AI maps to their specific role – a case manager in life insurance, a motor claims assessor, a reinsurance analyst, a broker support officer – they tend to fall back to the same safe ground: summarise a document, tidy an email, ask a quick question.

That doesn’t change the economics of an insurance operation.

Most AI use cases will never show up in the P&L

The report’s analysis of 4,500 work‑related AI use cases paints a clear picture: the overwhelming majority of current use is unlikely to move key business metrics.

Among the top 10 work use cases, by proportion of knowledge workers:

  1. Google search replacement – 14.1%
  2. Draft generation – 9.6%
  3. Grammar and tone editing – 5.7%
  4. Basic data analysis – 3.8%
  5. Code generation – 3.3%
  6. Ideation and brainstorming – 3.2%
  7. Meeting support (such as notes) – 2.7%
  8. Document summarisation – 2.0%
  9. Learning and skill development – 1.6%
  10. Task and process automation – 1.6%

Roll that up and you get some sobering statistics:

  • 59% of reported AI use cases are basic task assistance
  • More than 25% have no relevance to larger processes or workflows
  • Only 2% are judged to be advanced use cases

When use cases are grouped by category, research (19.6%) and writing (18.1%) are by far the most popular – but both are being used at beginner level, generating one‑off copy suggestions and basic informational searches.

For insurance executives, this should sound alarm bells. AI is being treated as a productivity “booster” around the edges rather than as an engine for redesigning core value chains: underwriting, pricing, claims, distribution, customer service and compliance.

Time saved: too little, for too many

Because most AI use is trivial, the impact on productivity is minimal.

The report’s breakdown of time saved by using AI looks like this:

  • 24% of workers say they save no time
  • 21% save less than two hours a week
  • 23% save two to four hours
  • 18% save four to eight hours
  • 8% save eight to twelve hours
  • 6% save more than twelve hours
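The headline claim that follows – fewer than a third of workers save four or more hours a week – can be checked directly from this distribution. A minimal sketch (bucket labels paraphrased from the list above):

```python
# Share of workers in each time-saved bucket, as reported above (percent).
time_saved = {
    "none": 24,
    "under 2h": 21,
    "2-4h": 23,
    "4-8h": 18,
    "8-12h": 8,
    "over 12h": 6,
}

# Workers saving four or more hours per week.
four_plus = time_saved["4-8h"] + time_saved["8-12h"] + time_saved["over 12h"]
print(f"{four_plus}% save 4+ hours/week")  # → 32% save 4+ hours/week
```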

Less than a third of knowledge workers report saving four or more hours a week with AI, when – in the authors’ view – most organisations should be targeting at least ten hours per employee to generate meaningful ROI.

Where workers are more proficient, the picture improves. AI practitioners are 1.8 times more likely than experimenters to save over four hours a week, and 20 times more likely than novices to do so.

But with practitioners and experts together making up less than 3% of the workforce, that upside is limited.

Training and tools are up. Proficiency is not.

Insurers might reasonably object that they have not been idle. And the report backs that up.

According to the latest survey:

  • 63% of respondents say their company has an AI policy
  • 50% have access to an AI tool
  • 44% receive AI training from their company

Those investments do make a difference:

  • Employees with a company AI strategy are 1.6 times more proficient than those without one
  • Employees with access to AI tools are 1.5 times more proficient than those with no access
  • Employees who have been trained on AI are 1.5 times more proficient than those who haven’t
  • Employees whose managers expect AI usage are 2.6 times more proficient than those whose managers discourage it

And companies are accelerating their support for AI adoption. Since March 2025:

  • Access to a formal AI policy is up 17%
  • Clear guidelines for AI usage are up 16%
  • Investment in AI tools and platforms is up 2%

The problem is what comes next. Even in these “higher proficiency” groups, workers are still mostly “AI experimenters” – people who understand how LLMs work and have a few basic use cases, but haven’t moved into intermediate or advanced applications.

On average, employees who have undergone AI training score 40 out of 100 in AI proficiency.

The most logical reason, the report suggests, is that most companies remain focused on access, safety and prompting. In other words: give people an LLM, tell them the guardrails and give them a framework to write a good prompt.

That is necessary, but it doesn’t close the gap between usage and value.

Execs think AI is going brilliantly. Staff don’t.

Perhaps the most politically fraught part of the report is the gap between what leaders think is happening with AI and what the rest of the company experiences.

C‑suite respondents overwhelmingly believe their AI deployments are going well. Across several dimensions, the numbers reveal big perception gaps between executives and individual contributors (ICs), who don’t manage a team and do much of the day‑to‑day work.

On whether there is a clear, actionable policy that effectively guides AI use:

  • 81% of C‑suite agree
  • 28% of ICs agree

Executives also tend to feel overwhelmingly positive about AI: 75% are excited about its implications, 94% say they trust its contributions, and a majority (57%) use AI for work daily; only 2% do not use it for work at all.

For insurance leaders getting upbeat briefings on AI from vendors, consultants and their own innovation teams, this optimism will be familiar. But if the people handling claims, underwriting referrals, customer complaints and regulatory reporting experience AI very differently, there is more than a communications problem. There is a blind spot in how success is being measured.

Individual contributors: the forgotten majority

The report is explicit that individual contributors – the knowledge workers who don’t manage others – are being left behind.

They benefit the least from their company’s AI resources. They are:

  • The least likely to have clear access to an AI tool
  • The least likely to receive company AI training
  • The least likely to be reimbursed for AI tools

As a result, ICs are more likely to be anxious or overwhelmed by AI, less likely to trust it, and least likely to say it is having a transformative impact on their work.

Manager support is heading in the wrong direction. Support for AI use among ICs is down 11% since May 2025. Only 7% of ICs say their managers expect daily AI use, and only around one‑third receive encouragement to use it.

In an insurance context, those ICs are often the very people buried under repetitive, rule‑based work: claims handlers, policy administration staff, contact‑centre agents, back‑office operations teams. They are also where AI could most obviously relieve pressure.

Leading and lagging sectors – and where insurance sits

The report ranks industries by AI proficiency out of 100.

Technology leads with a score of 42. Finance sits at 36. Consulting follows at 35. Manufacturing scores 34, media 33 and real estate 32. At the bottom end, food and beverage and education both score 29, healthcare 28 and retail 27.

The pattern is clear. Leading sectors – tech, finance, consulting – are more likely to have a company AI strategy, policy and access to tools. Lagging sectors – healthcare, education, retail – are more likely to be missing them.

Insurance is usually grouped alongside finance, but frequently behaves more like healthcare: heavily regulated, conservative, document‑heavy and slow to change.

Within functions, engineering or tech comes out on top with a proficiency score of 41, followed by strategy (39), business development or sales (37), human resources (37), marketing (36), finance or legal (35), product (34), operations (32) and customer service/support last at 27.

The most startling findings are about obvious missed opportunities. According to the report:

  • 54% of engineers don’t use AI for writing or debugging code, scripts or formulas
  • 56% of marketers don’t use AI for creating first drafts of content
  • 87% of product managers don’t use AI for creating prototypes

If people in those roles are not using AI for their most obvious, high‑value use cases, it is a safe bet that many underwriters, actuaries, claims leaders and distribution managers are not either.

So what should insurance employers do?

The report ends with six “leadership imperatives” for 2026 that read like a to‑do list for insurance executives.

  1. Stop measuring AI success by access and adoption rates.
    If 55% of your workforce uses AI weekly but only 15% have value‑driving use cases, your adoption metrics are lying to you. For insurers, the real scorecard should focus on time saved per claim, per policy or per quote; reduction in manual checks; faster resolution times; and improvements in accuracy, loss ratios or customer satisfaction.
  2. Treat use case development as a core competency, not a personal responsibility.
    The workforce isn’t stuck because people can’t prompt. They are stuck because they don’t know what problems AI can solve in their specific role. Insurers need to build function‑specific use case libraries – for underwriting, claims, pricing, fraud, customer service, operations – create role‑based playbooks and make use case development a measured responsibility for team leads.
  3. Bridge the individual contributor gap immediately.
    Your ICs – the people doing the most repetitive, automatable work – have the least access to tools, training and manager support. This is backwards. Prioritise IC enablement, standardise access to approved tools, and mandate that every manager identify and track at least three AI use cases for each direct report.
  4. Recognise that training got you to the starting line, not the finish line.
    A 40/100 proficiency score after training means your current programs are teaching the wrong things. Shift the focus from “how to use AI safely” to “how to identify workflow bottlenecks AI can eliminate” – for example, where in a claims journey AI could take over first‑line triage or document classification.
  5. Close the executive awareness gap.
    If C‑suite members believe deployments are succeeding while ICs report minimal impact, you have a data visibility problem – and likely a morale issue. Implement regular skip‑level conversations focused specifically on AI adoption barriers and require executives to shadow employees as they use (or try to use) AI in their daily work.
  6. Accept that the proficiency bar will keep rising.
    The gap between experimenter and practitioner will only widen as AI capabilities advance. Build continuous learning infrastructure now – not one‑off training – and create clear progression paths from basic to intermediate to advanced use cases within each function.

The real risk for insurance

For an industry built on modelling risk, the biggest AI risk in 2026 may be more mundane than the headlines suggest.

It is not that AI will suddenly misprice entire books of business or hallucinate policies into existence. It is that insurers will spend years investing in tools, licenses and training – and still find that most of their people are using AI like a slightly smarter spell‑checker.

In a world where claims costs are under pressure, weather events are intensifying, fraud is evolving and customers expect faster, clearer service, standing still is its own form of risk.

The technology is already on insurance desktops. The question now is whether the sector can do the harder, slower work of changing how work is designed, measured and led – so AI moves from novelty to necessity in the parts of the business that matter most.
