Ontario's new AI framework puts insurers on notice: if your systems make decisions about people, regulators now have the tools to assess how.
In January 2026, the Office of the Information and Privacy Commissioner of Ontario (IPC) and the Ontario Human Rights Commission (OHRC) released a joint document, Principles for the Responsible Use of Artificial Intelligence. The two bodies are clear about its role: the principles will ground their assessment of whether organizations are adopting AI systems in a manner consistent with their privacy and human rights obligations.
The framework applies to any organization that develops, acquires, uses, or decommissions an AI system. It draws on Ontario's Enhancing Digital Security and Trust Act to define an AI system as: (a) a machine-based system that, for explicit or implicit objectives, infers from the input it receives in order to generate outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments, and (b) such other systems as may be prescribed. The document says this broad conception includes automated decision-making systems, generative AI systems, foundational large language models and their applications, and traditional AI technologies such as spam filters and computer vision systems.
The six principles are described as interconnected and of equal importance.
First is valid and reliable. AI systems must produce valid, reliable, and accurate outputs for the purpose(s) for which they are designed, used, or implemented. They must meet independent testing standards, perform consistently as required over a specified duration and in their intended environments, and be assessed before deployment and regularly throughout the life cycle. The document cautions that even a highly valid and reliable tool can yield poor outcomes if it is fed inaccurate, biased, or incomplete data.
Second is safe. AI must be developed, acquired, adopted, and governed to prevent harm or unintended harmful outcomes that infringe on human rights, including the rights to privacy and non-discrimination. Any new use should undergo a comprehensive assessment, unsafe systems should be temporarily or permanently turned off or decommissioned, and any negative impacts should be reviewed accordingly.
Third is privacy protective. AI should be developed using a privacy-by-design approach, with proactive measures to protect the privacy and security of personal information and support the right of access to information from the outset. This must be supported by clear lawful authority to collect, process, retain, and use data, and by compliance with applicable federal or provincial privacy laws, directives, regulations, or other legal instruments. The document acknowledges the tensions AI creates with established privacy principles: the need for large, diverse volumes of data challenges the principle of limiting collection; re-use of data for training tests the principle of purpose limitation; knowledge retained from training may persist after the training data is deleted, challenging the principle of limiting retention; and even anonymized information may sometimes be re-identified by AI systems. The notification and opt-out provisions will be felt directly by any organization whose AI systems make consequential decisions about individuals. People should be informed whether and when their personal information is used in the development, refinement, or operation of an AI system, as well as the purpose and intended use of the system. The document also distinguishes between a right of review for non-high-risk automated decision processes and the right to opt out of high-risk automated decision processes that can materially affect an individual's well-being, in favour of a human decision maker.
Fourth is human rights affirming. Human rights protections must be built into AI design and procedures, with institutions effectively preventing and remedying discrimination and ensuring the benefits of AI are universal and free from discrimination. The document calls for proactively identifying and addressing systemic discrimination on grounds protected under the Ontario Human Rights Code, including adjusting training data when monitoring detects inherent biases, and warns that uniform use of a system across diverse groups may still result in adverse-impact discrimination.
Fifth is transparent. Institutions must ensure AI systems are visible, understandable, traceable, and explainable to others, including notifying individuals when they are interacting with an AI system and when information presented to them has been generated by AI.
Sixth is accountable. Institutions should implement a robust internal governance structure with clearly defined roles, responsibilities, and oversight procedures, including a human-in-the-loop approach; up-front risk assessments, such as privacy and human rights impact assessments and algorithmic impact assessments; and designated responsibility to pause or decommission unsafe, invalid, or unreliable systems. The document also calls for whistleblowing protections that allow members of an organization to report non-compliance with legal, technical, or policy requirements to an independent oversight body without fear of reprisal, and for independent oversight with the authority to enforce the principles and require remedial or corrective actions.
The IPC and OHRC describe the principles as complementing other initiatives, including the EU Ethics Guidelines for Trustworthy AI, the G7 Hiroshima Process International Guiding Principles for Organizations Developing Advanced AI Systems, and the OECD AI Principles, while placing particular emphasis on the protection of human rights and compliance with privacy laws.