Brokers are being pulled in two directions on AI. On one side, there’s pressure to adopt new tools that promise speed, efficiency and sharper insights. On the other, there’s a growing sense that every AI prompt is another potential E&O, privacy or reputational landmine.
Somewhere between the hype and the anxiety, Jonathan Weekes (pictured), president, Canada at BOXX Insurance, offers a simple test that cuts through a lot of the noise.
“If you wouldn't stand behind the output of what the AI tool recommends in front of a client, a regulator or a judge,” he said, “don't accept that initial output from that AI as your final response.”
It’s a rule of thumb that sounds almost old-fashioned in an industry obsessed with the new. But precisely because it is grounded in professional accountability, it may be the most useful filter brokers can apply in 2026.
For Weekes, the problem with AI isn’t the technology itself, but the way it can quietly slide from assistant to decision‑maker.
“AI becomes a professional risk the moment it starts to substitute for judgment, rather than supporting judgment or a recommendation,” he said. Using AI to summarise information or surface considerations is fine. Relying on it to “actually generate advice without human validation” is not.
His distinction is subtle but important. If AI helps a broker structure a renewal submission, pull out key themes from claims notes or tidy up a client email, human judgment remains in the loop. The professional is still the one deciding what matters, what’s missing and what should go in front of a client.
The risk escalates when that loop breaks: when a model’s draft of a coverage explanation is pasted into a proposal without being checked against the policy form, or when a chatbot’s confident answer on “what cover should my client buy?” is passed on without being interrogated.
In those moments, the real shift is psychological, not technical. The tool stops being a second reader and starts being a silent co‑author.
Weekes’ rule of thumb brings the focus back to where it belongs: accountability stays with the broker, no matter how many tools are in the stack.
“AI should accelerate thinking,” he said. “It shouldn't outsource accountability of thought. So if you plan to use it, use it to bring efficiency into your life. Don't rely on it to think for you… It should be a tool to enhance your thought process, not a tool to replace it.”
Framed that way, the “client, regulator, judge” test becomes less about technology and more about professional comfort: would you stand behind this output in front of any of them?
If the honest answer is no – or even “I’m not sure” – then the problem isn’t the model. It’s the way it’s being used.
Weekes also expects AI to shape the legal and regulatory baseline over time. As tools become embedded in risk analysis, he believes “regulators and courts will begin to not just ask whether AI was used, but whether or not it was used responsibly.” The definition of a “responsible broker,” in other words, will evolve to reflect the tools available.
That doesn’t mean every broker needs a technical manual on large language models. It does mean they will increasingly be judged on how they govern, validate and document their use of AI – in much the same way the market once adjusted to word processing, internal automation and electronic records.
Against that backdrop, the appeal of a single, stringent rule is obvious. Weekes’ test doesn’t require a policy document or a risk committee. It asks one uncomfortable question at the moment that matters most:
If this goes wrong, am I prepared to own it?
If the answer is yes – if a broker has read, understood, challenged and genuinely stands behind the AI‑touched work – then the tool has done what it should: accelerate thinking, not replace it. If the answer is no, then no amount of speed or convenience is worth the exposure.
In an environment where AI capabilities will only grow and expectations will only rise, that may be the clearest line brokers have.