AI Governance Reimagined: Ontario’s New Joint Principles for the Responsible Use of Artificial Intelligence

Jan 29, 2026



January 2026 marks a significant milestone in Ontario’s approach to artificial intelligence governance. The Information and Privacy Commissioner of Ontario (IPC) and the Ontario Human Rights Commission (OHRC) have released a comprehensive framework: the Principles for the Responsible Use of Artificial Intelligence. These principles establish clear expectations for how public‑sector institutions should design, deploy, govern, and ultimately retire AI systems.

Why Does This Matter? 

Back in 2024, the Ontario government enacted the Enhancing Digital Security and Trust Act (EDSTA), which contains provisions addressing the use of AI. The Act creates a framework for transparency, accountability, and risk management, with the specific requirements to be detailed in future regulations. To date, there are no such regulations in place.   

This new guidance from the IPC and OHRC helps fill the gap. Although these are recommendations rather than enforceable legal requirements, the IPC and OHRC have stated that these principles will “ground our assessment of organizations’ adoption of AI systems.” At the IPC’s Privacy Day event on January 28, 2026, the guidance was presented as a framework that gives organizations the certainty they need to innovate confidently while maintaining public trust. In other words, if your organization’s AI tools face privacy or human rights scrutiny, these are the standards regulators will use to evaluate you. The document also aligns with broader international efforts, including the EU’s Ethics Guidelines for Trustworthy AI, the OECD’s AI Principles, and Ontario’s own Responsible Use of Artificial Intelligence Directive.

What Systems Are Covered? 

The document adopts a broad legal definition of AI from the EDSTA, which covers: 

  • Automated decision-making systems 

  • Generative AI systems 

  • Large language models (LLMs) and their applications 

  • Traditional AI tools like spam filters 

  • Any emerging AI technologies 

Notably, the guidance applies throughout the entire AI lifecycle, from design to retirement. This means institutions must conduct assessments at all relevant stages, with the nature of the assessment depending on their role as a developer, provider, or user. These assessments should be based on the principles outlined below.

The Six Core Principles 

The guidance outlines six interconnected principles that public sector institutions should follow, all of which are treated as equally important. 

1. Valid and Reliable 

AI systems must be tested to confirm they work as intended and produce consistent, accurate results. This requires independent testing before deployment and regular checks throughout the system’s life to ensure it continues to perform reliably across different conditions and for diverse communities. 

2. Safe 

AI systems must be monitored and used in ways that prevent harm to individuals and their rights. Institutions should: 

  • implement strong cybersecurity safeguards, 

  • identify and assess potential harms, and 

  • disable or retire unsafe systems promptly. 

Any new use of an existing AI system should go through a fresh safety assessment.  

3. Privacy Protective 

Institutions should use a “privacy-by-design” approach, building privacy safeguards into AI systems from the start rather than as an afterthought. This includes: 

  • limiting the collection of personal information to what’s actually necessary, 

  • using privacy‑enhancing technologies (e.g., de‑identification, synthetic data), 

  • meeting all federal and provincial legal requirements, 

  • informing individuals when their data is used in AI systems, and 

  • adjusting training data to mitigate biases. 

Notably, the IPC and OHRC expect institutions to provide opportunities for individuals to access and correct their personal data used in and generated by AI systems. Depending on the level of risk, individuals should also be able to request a review of, or opt out of, automated decision-making processes.

4. Human Rights Affirming 

This guidance underscores that privacy and equity issues are inseparable. AI systems must not create or reinforce discrimination. Organizations are expected to actively identify and address potential discrimination in AI design and deployment, including by adjusting training data to correct biases. Using the same AI system uniformly for diverse groups can lead to discriminatory outcomes. The Commissioners also caution against systems that might “unduly target participants in public or social movements, or subject marginalized communities to excessive surveillance that impedes their ability to freely associate with one another.”

5. Transparent 

Institutions must be open about their use of AI, which requires them to make AI systems: 

  • Visible – with clear public documentation 

  • Understandable – with explanations accessible to non-experts 

  • Explainable – with clear justification for outputs and their impacts 

  • Traceable – with records of training data, model logic, and monitoring results 

Organizations must notify individuals when they interact with AI systems or AI‑generated information. 

6. Accountable 

Institutions need proper governance structures with clear roles, responsibilities, and human oversight. Someone should be designated as responsible for AI systems and be empowered to pause or shut them down if needed. Institutions must also have processes for receiving and responding to questions or complaints about their AI use. 

Employees and other members of an institution should be empowered through whistleblowing protections to report instances where an AI system fails to comply with legal, technical, or policy requirements. 

Practical Tips for Institutions 

If your institution uses or is considering AI, here are some practical steps to align with these principles. 

Identify what AI you are already using. The definition is broad and may include systems you have not considered “AI”, such as spam filters, chatbots, or any tool that makes predictions or recommendations. 

Conduct impact assessments. The IPC's Privacy Impact Assessment Guide, the related analysis Privacy Impact Assessments, Rebooted, and the OHRC's Human Rights AI Impact Assessment can help you identify and address risks before deployment.

Assign clear accountability. Designate specific individuals who are responsible for overseeing your AI systems and who have the authority to intervene or shut things down when necessary. 

Create a transparency process. Establish a mechanism to receive and respond to questions or concerns about privacy, transparency, or human rights related to your AI systems. 

Document everything. Keep records of how your systems work, the decisions made about their design and deployment, and ongoing performance monitoring. This documentation should be written in accessible, non-technical language.  

Protect whistleblowers. Ensure staff members can safely report non-compliance with AI policies without fear of reprisal. 

Looking Ahead 

While this guidance does not create new legal obligations on its own, it provides a clear roadmap for what Ontario’s regulators expect from public-sector organizations using AI. Adhering to these principles will help ensure that AI systems effectively serve the public interest.

For more information or to discuss the steps your organization can take to align with this guidance, reach out to any member of our Privacy & Cybersecurity team. 


Disclaimer

This article shares general information and insights. It is not legal advice, and reading it does not create a solicitor–client relationship.
