Ontario’s AI Law is an ‘empty shell’, privacy watchdog warns

Ontario’s AI law lacks enforceable safeguards, leaving gaps as AI risks accelerate and organizations navigate challenges

Ontario’s AI Law is an ‘empty shell’, privacy watchdog warns

Cyber

By Branislav Urosevic

Ontario’s flagship artificial intelligence law is “no more than an empty shell,” the province’s privacy watchdog has warned, saying the real safeguards on how public‑sector bodies use AI will depend on regulations that don’t yet exist.

Speaking at a NetDiligence cyber conference, Christopher Parsons (pictured), director of research and technology at the Office of the Information and Privacy Commissioner (IPC) of Ontario, said the Enhancing Digital Security and Trust Act (EDSTA) – passed in November 2024 as part of Bill 194, the Strengthening Cybersecurity and Building Trust in the Public Sector Act – remains largely a framework on paper.

“This is a highly laudable effort by the government, and it’s timely,” Parsons told delegates. “However, the IPC’s ongoing concern is that EDSTA is no more than an empty shell.”

EDSTA is designed to let the province regulate mandatory cybersecurity programs, the “responsible” use of AI systems, and digital systems affecting children and youth across public‑sector entities. But Parsons stressed that the substance of those obligations has not yet been defined.

“The key protections must come from standards or regulation emerging from the legislation,” he said. “The legislation itself doesn’t have those protections built in.”

That leaves a gap between political signalling and operational reality for ministries, agencies and broader public‑sector institutions already experimenting with tools such as generative AI and agentic systems.

“For the time being, until those regulations are drafted, it’s unclear how the law will be specifically applied regarding potential misuses of AI systems,” Parsons said.

He also warned that, even with the legal framework still hollow, AI is already reshaping cyber risk in ways many organizations – and insurers – are not prepared for. He highlighted three specific AI‑driven threats.

First is prompt injection – malicious or hidden instructions embedded into inputs to manipulate an AI assistant. In practice, that means attackers can trick systems into bypassing safeguards or disclosing information they were never meant to reveal. “Such injections involve embedding malicious or hidden instructions into inputs to manipulate an AI system into ignoring safeguards or revealing unintended information,” Parsons said. From a privacy perspective, that can trigger unauthorized disclosure of personal information even when no traditional “hack” has occurred, raising difficult questions about how incidents are classified and covered.

The second is data or model poisoning, where training data or inputs are manipulated to corrupt how models behave. “This refers to the manipulation in training data or inputs to corrupt model behaviour and outputs,” he said, noting it is an area “where [NATO] are increasingly attuned and concerned for how all levels could be affected by various threat actors.” For insurers, poisoned models can mean bad automated decisions, corrupted records and cascading operational failures – but without the clear perimeter breach that typical cyber wordings are built around.

The third is excessive agency – giving AI systems too much autonomy without proper oversight. “This is a real challenge with agentic systems in particular,” Parsons said, warning it can lead to uncontrolled collection, sharing or use of personal information, and even AI agents modifying or updating data “in contravention of organizational restrictions.” Once AI tools are wired into email, databases or benefits systems, an overly powerful agent can create privacy violations and data integrity issues at scale.

Parsons said that, in the absence of detailed rules under EDSTA, organizations should look to the principles already issued by privacy commissioners across Canada and by the IPC with the Ontario Human Rights Commission, which call for AI systems that are valid and reliable, safe, privacy protective, transparent and accountable.

At a practical level, that means building in guardrails before AI tools are unleashed on real‑world data and workflows. The IPC is urging organizations to de‑identify structured data or use synthetic data where possible when training or fine‑tuning models, so they can preserve statistical value without exposing real individuals. It also expects clear contracts with AI vendors on whether personal information can be used for training, and privacy and algorithmic impact assessments before systems are put into production – positions the office has already enforced in a decision on biometric proctoring at McMaster University.

Crucially, Parsons said, AI governance cannot be left to technologists alone. At the IPC, technologists, policy advisers, lawyers and communications staff assess AI together, and he encouraged public bodies to mirror that cross‑functional approach so they do not miss key risks or blind spots when deploying powerful tools.

While he described this as “one of the coolest moments in technology for the past 20 years,” Parsons warned that the lack of concrete safeguards under Ontario’s new law makes early decisions critical.

“The choices we make now will shape how these technologies evolve and whom they ultimately serve,” he said. “It is urgently important that we act deliberately and proactively to ensure that these systems work in the interest of all of us, and not for just a few.”

Related Stories

Keep up with the latest news and events

Join our mailing list, it’s free!