Brokers are embracing AI – but confidentiality risks are quietly growing

As AI becomes routine in broker workflows, experts warn that client data, privacy obligations and governance controls are struggling to keep pace

Survey data showing that most brokers now use AI in some capacity confirms the technology has moved into everyday workflows. But behind the enthusiasm for tools like ChatGPT sits a harder question: what happens to the client information that brokers feed into them?

For Jonathan Weekes, president, Canada, at BOXX Insurance, that is where the real risk lies.

“General-purpose models weren’t designed with insurance confidentiality, privilege or data residency in mind,” he said. “Feeding identifiable client details into public models can introduce unintended data exposure and regulatory risk.”

The issue isn’t just whether the output is accurate. It’s what brokers may be giving up to get it.

Where brokers are with AI

Weekes admits that, during his years as a broker, he likely underused AI compared to many peers, which made recent survey results striking.

“Almost everyone is using it,” he said.

Most use cases are practical and low risk: drafting emails, refining summaries, structuring renewal reports or internal memos. The danger lies in how easily those uses drift into sensitive territory. A few extra keystrokes can turn a generic prompt into pasted loss runs, financials or claim narratives.

“The key questions are where the data goes, how it’s retained, and whether it can be used to retrain models outside your organization,” Weekes said. “Those questions matter just as much as the response you get back.”

The hidden risk in ‘free’ AI

Public AI tools are attractive because they’re powerful, accessible and often free. The trade-off is loss of control.

Client data fed into public models may be processed or stored outside Canada, retained for monitoring or improvement purposes, or exposed to organizations clients never agreed to share it with. That creates privacy and regulatory risk, as brokers are expected to safeguard sensitive information and be transparent about how it’s processed. It also creates contractual risk, because confidentiality or data-handling obligations in client agreements may be quietly breached. Even if no formal claim arises, there is reputational risk if clients discover their information has been fed into consumer AI tools.

“We may not be more at risk than other industries,” Weekes said, “but the volume and sensitivity of the data we handle puts us at risk in a unique way.”

What responsible AI use looks like for brokers

Rather than avoiding AI, the challenge for brokerages is governance.

At a minimum, firms should understand which AI tools are in use – public, paid, or embedded – and how those tools handle data. Informal or “shadow” AI use by individuals is itself a risk. Clear boundaries also matter: brokers should explicitly prohibit feeding identifiable client information, financial data, legally privileged communications, or incident details into external models. Where AI is used, stripping out names and unique identifiers should be standard practice.
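
What that stripping step looks like in practice will vary by firm. Purely as a minimal sketch, assuming a simple in-house Python helper (the redact function and the patterns below are illustrative, not any vendor’s tool), automated redaction before a prompt ever leaves the firm could start with something like:

```python
import re

# Illustrative patterns only -- a real brokerage would tune these to its
# own data (policy number formats, client naming conventions, and so on).
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b(?:\+?1[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
    "POLICY_NO": re.compile(r"\b[A-Z]{2,4}-?\d{6,10}\b"),  # hypothetical format
}

def redact(text: str) -> str:
    """Replace identifiable details with placeholders before any text
    leaves the firm for an external AI tool."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Renewal for policy ABC-1234567: contact jane@client.ca or 416-555-0199."))
# -> Renewal for policy [POLICY_NO]: contact [EMAIL] or [PHONE].
```

Reliably removing personal names usually needs more than pattern matching, such as a named-entity recognition pass or human review, so a helper like this is a floor, not a guarantee.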

Validation is equally critical. AI-generated summaries or descriptions should never be accepted at face value, particularly in submissions or client-facing material. As Weekes notes, AI-written content often shows tell-tale patterns, and its use tends to attract closer scrutiny from insurers and counterparties.

Finally, brokers should be prepared to explain their AI use. Regulators, courts, and clients are increasingly interested not just in whether AI was used, but how. Firms should be able to show what tools were used, for which tasks, under what internal policies, and with what human oversight. Documentation does not need to be perfect, but it does need to exist.
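
What that documentation looks like is up to each firm. As an illustrative sketch only (the log_ai_use helper and its field names below are assumptions, not a regulatory standard), a lightweight audit entry could capture the essentials in one place:

```python
import json
from datetime import datetime, timezone

def log_ai_use(tool: str, task: str, policy_ref: str, reviewer: str) -> str:
    """Record one AI-assisted task: which tool, for what purpose, under
    which internal policy, and who validated the output."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool,                   # e.g. an approved, paid-tier model
        "task": task,                   # e.g. "draft renewal summary"
        "internal_policy": policy_ref,  # the policy version in force
        "human_reviewer": reviewer,     # who checked the output before use
    }
    return json.dumps(entry)

print(log_ai_use("approved-llm", "draft renewal summary", "AI-USE-POLICY v1.2", "j.smith"))
```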

How insurers are responding

On the carrier side, Weekes says some insurers and MGAs are already adapting their own processes to the new reality.

“I think one of the great things about BOXX is that we’re just as much a tech company as we are an insurance provider,” he said. “We educate our brokers around best practices when it comes to AI, where appropriate.”

That guidance often focuses on the basics: understanding “validation requirements, especially if they’re using it in placements or to build their submissions to submit insurance,” and reminding intermediaries to be cautious about what they paste into tools. “We do advise our brokers to be very careful about it,” Weekes added.

Underwriters are also learning to recognize when AI has been doing too much of the talking.

AI is a very useful technology, Weekes said, but it is still fairly easy to identify when someone has used it to form a response or build a report.

“If we pick up on any indication that AI was used as part of the submission, our underwriters are trained to actually ask to confirm it,” he said.

At the same time, Weekes is clear that there are limits to how far insurers should go.

“We give folks the baseline understanding, or at least some thinking about the potential risks of using the tools and give little nuggets of information to support them in that,” he said.
