Canada’s insurance industry may be waiting for comprehensive AI legislation, but legal exposure is already mounting across existing frameworks, warned Nathalie David, partner at Clyde & Co, during a panel at the National Insurance Conference of Canada (NICC).
“The legal landscape for AI is rapidly growing, with significant implications for the insurance industry,” David said.
Of particular note for insurers, she warned, is that they must not only be diligent when incorporating AI systems or tools into their own business operations; these advances in technology also heighten the potential exposures linked to the activities they insure.
While Canada has yet to pass a comprehensive AI framework, David said the Artificial Intelligence and Data Act (AIDA) – introduced as part of Bill C-27 – has already “set the stage” for responsible AI regulation.
“Even though it died … with the prorogation of Parliament earlier this year, after four years in the works… it still remains a priority for Parliament. We’ll have to see if we are lucky enough to see legislation reintroduced this year,” she said.
In the meantime, insurers remain bound by federal and provincial privacy laws that already affect how companies collect and use data for AI training and automated decision-making. Quebec’s privacy legislation, she noted, adds even stricter rules.
“For the insurance industry, this could include approving or denying a claim, adjusting premiums based on risk profiling, or fighting potential fraud – all areas where AI is already being used,” David said.
Ontario’s Bill 194, which applies to the public sector, also signals “a growing trend toward accountability frameworks and risk management that could – or probably will – eventually extend to the private sector,” she added.
Internationally, the EU’s Artificial Intelligence Act has set the benchmark for AI regulation, David said. The law classifies AI systems by risk level – from prohibited uses like real-time biometric identification by law enforcement, to “high-risk” systems in critical infrastructure, employment, or education.
“These are subject to higher compliance obligations,” she said, noting that Canada’s signature on the Council of Europe’s first binding AI treaty underscores its own commitment to “human rights, transparency, and accountability.”
For insurers, that means preparing early for the same expectations that regulators are prioritizing globally, she said.
“Transparency involves clarity and openness about AI models – how they function, how they make decisions,” David said. “We can’t be in the situation where there’s information being put in a black box, there’s information coming out, and no one can understand what happened between A and B.”
She also highlighted accountability as a growing legal requirement, referencing the now-famous Air Canada chatbot case.
“Air Canada was basically saying, ‘Our chatbot is not our responsibility,’” she said. “The judge actually says this is a remarkable submission (with a little bit of irony)… and it’s still difficult to understand how Air Canada presented this case before the court.”
Asked which legal frameworks already pose the biggest compliance risks, David said insurers must look across the entire spectrum of existing law – contract, tort, product liability, professional liability, employment, consumer protection, human rights, property, and copyright.
“These all continue to apply to AI unless legislation emerges to say otherwise,” she said. “They are relevant to those who develop AI, who make it available, but also to those who are licensees or who purchase or use AI products.”
In practical terms, that means insurers and their clients need to carefully consider how these frameworks intersect when deploying or underwriting AI.
“You and your insureds need to be mindful of the warranties, the disclaimers, the wordings of these contracts to ensure that proper risks and responsibilities are allocated.”
Professional liability, she added, intertwines with contract and tort regimes – covering not only the errors and omissions of AI developers, but also the professional liability of lawyers, accountants, and doctors who use these tools.
AI’s use of public data is quickly becoming the next battleground. Publicly accessible information on the internet is not automatically free to use, David warned.
“Using these for AI training without authorization or consent may amount to copyright infringement.”
She cited the ongoing lawsuit by Canadian news publishers against OpenAI for using news content without permission, and the Clearview AI decision in Alberta, which “somewhat opened the door” to using public data for model training – though, she cautioned, “we’re still in the very early stages.”
In the United States, she noted, the generative AI firm Anthropic has agreed to pay $1.5 billion to settle a class action lawsuit by authors who allege their works were copied to train chatbots. “It could be a landmark settlement if it’s approved by a judge,” David said, calling it “a turning point in legal battles between AI companies and the writers, visual artists, and other creative professionals accusing them of copyright infringement.”