Artificial intelligence is remaking insurance faster than any other technological wave in decades – from claims automation and telematics-based pricing to data-driven underwriting. But amid the rush to digitize, SPG Canada's Nathan Tjandrawinata, executive vice president of personal lines, says the industry risks forgetting something fundamental: empathy.
For him, the question isn’t whether AI will transform insurance – it already has – but whether insurers will balance efficiency with humanity. “As long as we’re insuring people, not entities, personal insurance has to stay personal,” he said.
Tjandrawinata doesn’t see AI as an existential threat to jobs. In fact, he views it as an opportunity to free humans from repetitive, low-value work so they can focus on tasks that require judgment and empathy.
“Claims handling tools that turn speech to text, read documents, and sort claims by severity are already being used widely,” he said. “It saves us a lot of time.”
He also points to the growing sophistication of fraud detection algorithms and usage-based pricing models. “AI can spot unusual patterns or altered images, helping insurers catch more fraud,” he said. “And telematics data is creating fairer, behavior-based pricing models.”
During a recent conversation with Insurance Business Canada, he described the transformation in simple terms: “AI is best for tasks that are admin-based and repetitive – where there’s no opinion involved. That’s where it shines. You can re-role people into work that requires humans.”
To him, the misconception that AI will replace people misses the point entirely. “Humans are always needed,” he said. “You still need people to tell AI what to do, to program it, to give it a blueprint. You can’t just open an AI and say, ‘Fix this.’ You still need the human brain to define the problem.”
But even as he embraces the benefits of automation, Tjandrawinata warns that the industry is already seeing the dark side of overreliance on AI.
“People underestimate the power of AI and the negative impact,” he said. “We see the positive real quick – people want to be done faster. I do too.”
That temptation to cut corners, he said, can easily lead to blind spots and errors. “When you know your field enough, you know if something sounds wrong,” he said. “You review it and catch it. But not everyone does.”
He pointed to recent high-profile incidents where professionals relied too heavily on AI-generated output, from law firms citing fabricated court cases to consultants using unverified data in client work. For insurance, where trust and precision are everything, that kind of misstep could have serious consequences.
“It’s about discipline,” he said. “If you don’t double-check, you risk losing credibility. AI is a tool, not a shortcut.”
Fraud remains his biggest external concern. “You can see pictures that look exactly like the real thing, voices, documents,” he said. “You’ll see an increase in fraud because of AI – it’s already happening.”
Beyond fraud, Tjandrawinata believes one of the biggest emerging risks is how companies manage their own use of AI.
At SPG Canada, he says, the focus is on strict data governance. “We have an agreement with our vendor,” he explained. “Whatever data we dump in there disappears. If there’s a breach, it’s a breach of contract, and we can seek legal action.”
SPG Canada was named one of the Top MGAs in Canada.
The challenge, he said, is cultural as much as technical. “We have to monitor what data goes in and what staff use AI for,” he said. “That’s how you protect both your business and your clients.”
In his view, every insurer and brokerage should now have clear, enforceable policies for AI use – not just to protect client data, but to preserve public trust. “People forget that uploading sensitive documents into open AI systems is basically publishing them,” he warned. “We can’t take that risk.”
If automation strips too much human contact out of the process, Tjandrawinata said, the industry could lose its moral compass.
“Do you want a robot to come to your house after it’s been totalled and say, ‘This is your policy, you’re not covered’? Or a person who says, ‘I’m sorry, here’s what we can do’?” he asked.
For him, claims empathy is non-negotiable – especially after catastrophic losses. “Your house just burned down,” he said. “A human is required for that. A human will sit with you, maybe cry with you, hand you a coffee, write you a handwritten note. That’s what makes this business human.”
He extended that logic to underwriting. Some risks simply can’t be assessed through data alone – like older rural properties or unusual homes far from fire protection.
“AI might decline because the fire hall is too far,” he said. “But a human underwriter knows that some communities, even remote ones, have priority protection or unique circumstances that reduce the true level of risk. It’s not the same.”
That kind of insight, he explained, comes from understanding geography, local infrastructure, and government response patterns – factors that rarely show up cleanly in the data. “Sometimes you have to apply local knowledge and context,” he said. “Machines can’t do that.”
Another area where human judgment remains irreplaceable is ethical decision-making. “Machines can’t contextualize fairness,” Tjandrawinata said. “They’ll calculate what’s efficient. Humans decide what’s right.”
He believes empathy must even factor into rating and pricing. “You don’t just price because that’s the right way to do it,” he said. “Sometimes you have to use empathy – understand the family situation, their background, what they can afford – and figure out what you can do to make sure they’re insured.”
That balance between risk and compassion, he said, is what differentiates responsible underwriting from pure automation. “We have to price risk with genuine affordability,” he said. “That’s human work.”
As AI grows more capable, the biggest competitive advantage may not be the technology itself, but how people use it to deepen human connections.
“Everybody talks about portals,” he said. “But when there’s an issue, people still want to talk to brokers. The human connection is still there.”
Tjandrawinata said that the right approach is to focus on relationship-based growth – visiting brokers in person, hosting job fairs, and creating spaces for genuine dialogue. “We have to start doing training in person, hire in person, visit our brokers that give us business,” he said.
That philosophy also drives how SPG targets market gaps. “We service people that can’t be serviced because nobody else wants to go in,” he said. “We understand the peril and rate it based on risk, not fear.”