Are your staff any good ‘at’ AI?

Corporate AI ‘success’ story unravels as report finds 97% of the workforce using it poorly or not at all


By Matthew Sellers

Corporate leaders convinced they are winning the race to adopt artificial intelligence are being misled by their own metrics, according to a new report that should trouble boards across the insurance sector.

A survey of 5,000 knowledge workers in large firms in the US, UK and Canada by sectionai.com has found that, three years after the launch of ChatGPT, only 2.7% of the workforce can be classed as “AI practitioners” – people who have embedded AI into their workflows and are seeing meaningful productivity gains. A vanishingly small 0.08% qualify as “AI experts”.

In total, 97% of workers are either using AI poorly or not at all. A quarter (25%) say they save no time with AI, and 40% say they would be perfectly happy never to use it again.

The study, produced by AI transformation firm Section, concludes that most organizations are stuck on what it calls the wrong side of the “proficiency gap”: employees have learnt to prompt chatbots, but have not learnt to turn them into engines of real operational value.

For insurance carriers and intermediaries now investing heavily in underwriting copilots, claims triage tools and AI‑enabled customer service, the findings pose an awkward question. If this is what AI proficiency looks like after a year of intense corporate effort, how much of the industry’s AI narrative is currently confined to PowerPoint slides and proof‑of‑concepts?

From prompt literacy to real work

The report argues that the definition of “AI proficiency” shifted under employers’ feet during 2025.

Last year’s rush was about basic literacy: do staff know what generative AI is, the risks of data leakage, and how to write a reasonable prompt? By those measures, many organizations can now pat themselves on the back. Employees can ask AI to summarise emails, tweak the tone of a client note, or extract bullet points from a report.

But in 2026, the authors say, proficiency means something more demanding: incorporating AI into meaningful, value‑adding work tasks every week. Not an occasional experiment, but routine use inside core workflows – underwriting, pricing, fraud detection, reserving, claims, compliance and distribution.

This is where the gap opens.

The survey finds that 70% of workers are “AI experimenters”: they use AI for simple, low‑stakes tasks such as summarising meeting notes, rewriting emails and getting quick answers. A further 28% are “AI novices”: they do not use AI at all, or tried it briefly and abandoned it.

Almost nobody is moving beyond that. Since May 2025, more people have migrated from novice to experimenter as they start to “play around” with AI – a shift supported by ChatGPT adding more than 100 million weekly users over that period, and by the fact that 55% of respondents now say they use AI at least weekly. Yet in the past six months, hardly anyone has progressed beyond basic prompting.

The result is an adoption paradox: lots of activity, little impact. Across the workforce, 24% report saving no time with AI; another 44% save less than four hours a week. Only 6% say they save more than 12 hours a week.

For an industry such as insurance, facing rising claims severity, climate‑driven catastrophe losses and intensifying regulatory scrutiny, that gap between the promise of AI and the reality of a couple of hours shaved off email work should give leaders pause.

The “use case desert”

The report identifies a central bottleneck it calls the “use case desert”.

Contrary to a common managerial assumption, the main barrier is not that people cannot prompt. Rather, it is that they do not know what to use AI for in their specific role.

Across thousands of respondents:

– 26% say they do not have a single work‑related AI use case.
– 60% say the use cases they do have are beginner‑level.
– When researchers analysed 4,500 reported work use cases, only 15% were judged likely to generate a return on investment for the business.

In total, 85% of knowledge workers have beginner or no AI use cases, and 25% never use AI for work at all.

The most common “most valuable” use case is using AI as a replacement for Google search, cited by 14.1% of workers. Draft generation comes next at 9.6%, followed by grammar and tone editing at 5.7%. Basic data analysis is reported by 3.8%, code generation by 3.3%, and task and process automation – the holy grail for many executives – by just 1.6%.

Overall, 59% of reported use cases are basic task assistance, more than a quarter have no meaningful role in broader processes or workflows, and only 2% are judged to be advanced.

The pattern should be uncomfortably familiar to insurers. Staff know how to ask AI to “make this customer email clearer”, but not how to redesign a claims journey so that AI handles document ingestion, triages severity, flags potential fraud and drafts first‑pass settlement recommendations under human oversight.

Executives in the dark

The most politically sensitive conclusion is that leadership has little idea how shallow AI use actually is.

The survey finds a yawning perception gap between C‑suite respondents and individual contributors (ICs) – those without direct reports, who do most of the day‑to‑day work.

– 81% of C‑suite members say their company has “a clear, actionable policy that effectively guides AI use”; only 28% of ICs agree – a 53‑point gap.
– 80% of executives say “tools exist with clear access process”, versus 39% of ICs.
– 71% of the C‑suite say there is a formal AI strategy, compared with 32% of ICs.
– 66% of executives feel “encouraged to experiment and create my own AI solution”, versus 25% of ICs.
– 48% of leaders believe there is “widespread adoption with open sharing of use cases and best practices”; only 8% of ICs concur.

Senior leaders themselves are overwhelmingly positive: 75% say they are excited about AI’s implications, 94% say they trust its contributions, and 57% use AI for work daily. Only 2% of C‑suite respondents do not use AI at work at all.

Individual contributors tell a different story. Only 32% say they have clear access to AI tools (versus 80% of executives), 27% have received company AI training (versus 81% of the C‑suite), and just 7% are reimbursed for AI tools (compared with 63% of executives). Manager support is declining: only 7% of ICs say their managers expect daily AI use, and encouragement to use AI has fallen 11 percentage points since May 2025.

For insurance firms, where many of the most automatable tasks sit in claims centres, back‑office operations and call centres staffed by ICs, the skew is stark. The people doing the most repetitive, AI‑suitable work are the last to get tools, training and expectations.

Industry and function league tables

The report assigns AI proficiency scores out of 100 across industries. Technology leads at 42, followed by finance at 36 and consulting at 35. Manufacturing (34), media (33), real estate (32), food and beverage (29), education (29), healthcare (28) and retail (27) follow.

Finance’s position near the top is hardly a reason for complacency; a score of 36 suggests that even the “leaders” are still at an early stage.

By function, engineering or tech roles top the table with a score of 41, followed by strategy (39), business development and sales (37), and human resources (37). Marketing scores 36, finance and legal 35, product 34, operations 32 and customer service/support 27.

Even here, obvious use cases are being missed. The report notes that 54% of engineers do not use AI to write or debug code, scripts or formulas; 56% of marketers do not use it to create first drafts of content; and 87% of product managers do not use AI for prototyping.

In insurance terms, few underwriters are using AI to generate first‑pass narratives or referral notes at scale; many actuaries and analysts are not using it for scenario exploration; and a large share of claims handlers are keeping AI firmly at arm’s length when deciding liability and quantum.

Why training is not fixing it

The study does not deny that companies are investing. According to respondents:

– 63% say their company has an AI policy.
– 50% have access to an AI tool.
– 44% receive AI training from their employer.

These investments do have measurable effects. Employees at firms with a company AI strategy are 1.6 times more proficient than those without one. Those with access to AI tools are 1.5 times more proficient than those without access. Those who have been trained are 1.5 times more proficient than the untrained. Employees whose managers expect AI usage are 2.6 times more proficient than those whose managers discourage it.

Since March 2025, access to a formal AI policy is up 17 percentage points, clear guidelines for AI usage up 16 points and investment in tools and platforms up 2 points.

Yet the result of all this effort is underwhelming. Employees who have undergone AI training score, on average, 40 out of 100 in AI proficiency. Most remain squarely in the “experimenter” category – they know what an LLM is and have a handful of low‑stakes use cases, but have not begun to explore intermediate and advanced applications.

The authors’ conclusion is that most corporate programmes have been aimed at the wrong target. They have taught access, safety and prompting – how to use AI – but not how to identify and redesign workflows where AI can actually remove work.

Implications for insurance

For boards and executive teams in insurance, the report reads less like a technology briefing and more like a management audit.

It suggests that:

– AI “success” measures based on access and adoption are deeply misleading. If 55% of staff use AI weekly but only 15% have value‑driving use cases, the KPIs are lying.
– Use case development cannot be left to enthusiasts. It needs to be treated as a core competency, with function‑specific libraries and role‑based playbooks, and made a formal responsibility for team leaders.
– Individual contributors, who do much of the industry’s repetitive, rules‑based work, need to be prioritized rather than left behind.
– Training must evolve from “how to prompt” to “how to map and redesign claims, underwriting and servicing workflows with AI”.
– The executive awareness gap must be closed with more honest reporting and direct exposure to frontline barriers.

Above all, the report implies that the hard work of AI in insurance is no longer about tools. It is about operating model, skills, incentives and governance. Generative models can now write, summarize and classify with ease. The question is whether insurers can redesign their organizations quickly enough to turn that raw capability into lower loss‑adjustment expenses, better risk selection and genuinely improved customer outcomes – rather than into yet another layer of digital gloss on unchanged processes.
