October 29, 2025
The Ontario Human Rights Commission (OHRC) welcomes the opportunity to inform the development of Canada’s renewed Artificial Intelligence (AI) Strategy.
The OHRC is a statutory body established under the Ontario Human Rights Code (Code). Its primary mandate is to promote and advance human rights while addressing systemic discrimination in Ontario. The OHRC accomplishes this by developing policies, conducting public inquiries, and engaging in strategic litigation.
The OHRC is pleased to note the vision statement for Canada’s AI Strategy, which describes “a future where AI empowers industries, protects human rights, serves the public good and inspires trust—ensuring that every Canadian community shares in its benefits while risks are addressed and managed responsibly.”[1] This recognition that AI empowerment must go hand in hand with human rights, public trust and shared benefits is a strong starting point for the development of an AI strategy.
Minister Evan Solomon has also consistently emphasized the importance of safety as a pillar of Canada’s approach to AI, while ensuring that Canada remains an attractive place for industry.[2] The OHRC firmly shares these views and believes that centring human rights and other core Canadian values will be a catalyst for opportunity, innovation and excellence.
Canada has already recognized that ensuring safe and trustworthy technologies that serve the best interests of all Canadians is essential for the strongest and most beneficial AI innovation. Building this recognition into the foundation of Canada’s AI strategy is critical. Without foundational human rights safeguards, AI development and use can lead to serious harms to Canadians.
As discussed below, these harms are not theoretical or hypothetical. There are, unfortunately, far too many examples of serious harms to individuals and communities, including discrimination based on race, gender and other personal attributes. Repeated examples of both predictable and unforeseen harms demonstrate the need to establish standards that safeguard society’s most basic principles in our use of AI, rather than treating these issues as “fringe cases”.
It is from this shared perspective on the safe and beneficial use of AI that the OHRC provides recommendations to centre human rights in Canada’s renewed AI strategy and ensure that the opportunities, benefits and protections associated with AI are available without discrimination.
Discriminatory impacts from the use of AI systems
AI systems, when left unchecked, have replicated and amplified existing systemic discrimination. In some cases, they have caused harm at a scale not previously seen.
- Tax investigations: In the Netherlands, 20,000 families were wrongly investigated by the country’s tax authority for fraudulently claiming child benefit allowances, requiring approximately 10,000 families to repay tens of thousands of euros. In some cases, this led to unemployment and bankruptcy. The government had deployed an algorithmic system that subjected people to special scrutiny based on their ethnicity or dual nationality, profiling individuals by race. The Dutch government resigned in 2021 in response to public pressure over the incident.[3]
An AI system used in the United Kingdom to identify possible welfare fraud was also recently found to have incorrectly recommended investigations based on people’s age, disability, marital status and nationality.[4] Systems in the United Kingdom and France have also been alleged to discriminate against single mothers.[5]
- Domestic violence investigations: Spain’s risk assessment system for gender-based violence frequently underestimated the risk level for women, resulting in inadequate police protection in hundreds of cases, including cases in which women were subsequently killed by their partners.[6]
- Child welfare investigations: Agencies in the United States of America (US) use predictive systems to generate risk scores from data on factors such as disability, poverty rates and family size, as well as proxies such as geographic indicators that can be used to infer other personal attributes. These systems raise human rights concerns because they assign risk based on arbitrary weights and population-level data related to protected grounds, rather than the specific circumstances of each child and family.[7]
- Mortgage approvals: In the US, Black applicants were 80% more likely to be rejected than White applicants by mortgage approval algorithms used by lending companies, despite claims that developing such AI systems using comprehensive financial data would eliminate disparities.[8]
- Commercial uses of facial recognition: The US Federal Trade Commission found that a drug store company failed to implement reasonable procedures for its use of an automated AI facial recognition system that incorrectly flagged thousands of customers as security risks across hundreds of locations.[9] Acting on these false matches, store staff would follow and remove customers or contact the police on suspicion of wrongdoing. The system was more likely to generate false positives based on race and sex.
- Policing services: The well-known investigation of Clearview AI’s facial recognition technology by privacy authorities in Canada,[10] and the Information and Privacy Commissioner of Ontario’s (IPC) guidance for police on facial recognition and mugshot databases,[11] highlight concerns about bias in facial recognition technology and in the procedures governing its use. These include concerns from marginalized communities that they might be disproportionately represented in biometric databases.
Some police services have begun to employ predictive policing technologies that use crime data, drawn from historically discriminatory policing, to predict the probability of future criminal occurrences.[12] Using biased data to predict crime and inform deployment decisions leads to further police action in over-policed areas. Police departments in the US have cancelled contracts for predictive technologies due to flawed methodologies[13] and unreliable results.[14]
AI-related human rights concerns at a significant scale have occurred in Canada as well. In 2019, the OHRC and the Canadian Human Rights Commission (CHRC) jointly raised concerns that Facebook Inc. (now Meta Platforms Inc.) and its algorithmic advertising platform were facilitating discriminatory advertising contrary to Canada’s federal and provincial human rights laws. Reports at the time detailed how the platform allowed advertisers to intentionally misuse advertising tools to exclude people from receiving ads for housing[15] and employment[16] based on personal characteristics such as age, sex and race. The Netherlands Institute for Human Rights issued a decision earlier this year finding that the company’s advertising platform continues to discriminate based on gender and age.[17]
Earlier this year, Canada’s Competition Bureau initiated an investigation into landlords’ use of the software YieldStar to algorithmically collude in raising rents.[18] Landlords misused the technology to increase rents by as much as 40% in the Toronto, Ontario neighbourhoods where rents were lowest.[19] Low-income, racialized and newcomer populations were overrepresented in these neighbourhoods and were consequently the most affected by the resulting rent increases. In the US, New York State recently passed a law prohibiting the use of such technologies.[20]
In many of these cases, the organizations involved should have been aware of the limitations of the technologies and their potential to inflict or contribute to harm. These examples raise significant concerns, drawing attention to the risks of technologies that have not been effectively vetted for safety. They also highlight the serious harms that can result from failures or misuse, including human rights violations up to and including the loss of life, liberty and security of the person. These failures and potential harms are not exclusive to private entities; they can also originate from public institutions.
Following the pattern of past human rights incidents, emerging technologies, particularly Large Language Models (LLMs), continue to replicate and amplify existing patterns of bias and discrimination in their outputs. This strongly suggests that developers and operators of AI systems do not presently give sufficient priority to human rights:
- Applications such as OpenAI’s ChatGPT and Anthropic’s Claude consistently produced outputs that encouraged people to negotiate for significantly lower salaries based on information provided about their sex, ethnicity and refugee status.[21]
- LLM-based text screening tools, used for purposes such as automated job application screening, continue to show negative bias against people based on race and other personal characteristics, with those disadvantages amplified when individuals identify with multiple attributes subject to such bias.[22]
- LLMs generate outputs that recommend less desirable positions in job matching and harsher sentences for criminal convictions based on the dialect of English a person speaks.[23]
AI techniques that have shown promising results in areas such as cancer screening and other diagnostics have also been found to reproduce existing patterns in which racialized and other groups are more likely to be misdiagnosed or to receive less reliable test results.[24]
These persistent issues of bias and discrimination demonstrate that merely acknowledging the importance of human rights will not move developers and operators of AI to sufficiently consider the societal impact of their work or to support the responsible adoption of AI in Canada. Public policy, including Canada’s AI Strategy, must clearly prioritize human rights in the design and use of these technologies.
OHRC Recommendations
The OHRC makes the following recommendations, in support of AI’s potential for growth, innovation and public good, while recognizing its potential for harm without human rights safeguards.
Recommendation 1: Canada’s human rights obligations must be written into Canada’s AI strategy
Human rights are protected by a framework of federal, provincial and territorial laws, including the Canadian Charter of Rights and Freedoms (the Charter), the Canadian Human Rights Act (the Act) and provincial and territorial human rights laws such as the Ontario Human Rights Code (the Code). Together, they provide human rights protections across numerous areas of society.
Canada’s human rights laws apply to AI systems. Developers, operators and owners of AI systems are expected to prevent and address discrimination involving the use of AI. Recognizing the national human rights framework in Canada’s AI Strategy would build trust and confidence among the public and Canadian industries, encouraging the public to adopt AI-enabled approaches while attracting developers and innovators. It would also support developers, operators and owners of AI systems in understanding their human rights obligations. This approach aligns with Canada’s commitment to leading international standards, including those of the Organisation for Economic Co-operation and Development (OECD). Notably, the OECD has identified human rights as one of its five AI Principles, stating that:
AI actors should respect the rule of law, human rights, democratic and human-centred values throughout the AI system lifecycle. These include non-discrimination and equality, freedom, dignity, autonomy of individuals, privacy and data protection, diversity, fairness, social justice, and internationally recognised labour rights. This also includes addressing misinformation and disinformation amplified by AI, while respecting freedom of expression and other rights and freedoms protected by applicable international law.
To this end, AI actors should implement mechanisms and safeguards, such as capacity for human agency and oversight, including to address risks arising from uses outside of intended purpose, intentional misuse, or unintentional misuse in a manner appropriate to the context and consistent with the state of the art.[25]
The stated goal of the OECD AI Principles is to “promote the use of AI that is innovative and trustworthy and that respects human rights and democratic values.”[26] Currently, 47 countries, including Canada, have committed to the OECD’s AI Principles.
Moreover, Canada is a signatory to the Council of Europe’s Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law,[27] and in doing so, has committed to the provisions in Article 10 concerning equality and non-discrimination:
Each Party shall adopt or maintain measures with a view to ensuring that activities within the lifecycle of artificial intelligence systems respect equality, including gender equality, and the prohibition of discrimination, as provided under applicable international and domestic law.
Each Party undertakes to adopt or maintain measures aimed at overcoming inequalities to achieve fair, just and equitable outcomes, in line with its applicable domestic and international human rights obligations, in relation to activities within the lifecycle of artificial intelligence systems.[28]
The AI Strategy’s focus on “building safe AI systems and public trust in AI” must consider Canada’s human rights commitments.
The OHRC recommends that Canada explicitly recognize the legal framework for human rights, including the Canadian Charter of Rights and Freedoms, the Canadian Human Rights Act, and provincial and territorial human rights laws, in its AI Strategy; and apply its international commitments to the development of the AI Strategy, including those expressed in the OECD AI Principles and the Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law.
Recommendation 2: Embed human rights principles for the responsible use of AI
As recognized in the AI Strategy vision statement, the use of AI should have a positive impact on all Canadians. To realize this potential, these technologies must be used in a manner consistent with the importance Canadian society places on fundamental values and rights. That is why, in 2023, the Information and Privacy Commissioner of Ontario (IPC) and the OHRC issued a joint statement urging the Ontario government to implement guardrails for the public sector’s responsible, safe and trustworthy use of AI.[29] The OHRC and IPC called for these guardrails to be based on the following principles:
- Valid and reliable: AI systems must produce valid, reliable and accurate outputs for the purpose(s) for which they are designed, used or implemented.
- Safe: AI must be developed, acquired, adopted and governed to prevent harm or unintended harmful outcomes that infringe upon human rights, including the rights to privacy and non-discrimination.
- Privacy protective: AI should be developed using a privacy-by-design approach. Developers, providers or users of AI systems should take proactive measures to protect privacy and support the right of access to information from the very outset.
- Human rights affirming: Human rights are inalienable and protections must be built into the design of AI systems and procedures. Institutions using AI systems must prevent and remedy discrimination effectively, and ensure that the benefits from the use of AI are universal and free from discrimination.
- Transparent: Institutions that develop, provide and use AI must ensure that these AI systems are visible, understandable, traceable and explainable to others.
- Accountable: Institutions should implement a robust internal governance structure with clearly defined roles and responsibilities and oversight procedures, including a human-in-the-loop approach, to ensure accountability throughout the entire lifecycle of their AI systems.
The OHRC urges Canada to adopt these principles into its AI Strategy to help ensure that AI systems are developed, acquired, used and decommissioned in a manner that safeguards human rights.
The OHRC recommends that Canada embed these principles in its approach to AI to ensure that innovation is firmly grounded in Canada’s core values and does not come at the expense of Canadians’ rights.
Recommendation 3: Encourage the use of human rights impact assessments
Impact assessments are among the leading strategies for identifying and assessing risks associated with the use of AI. They are required under the Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law, to which Canada is a signatory and which states that each party is to “adopt or maintain measures for the identification, assessment, prevention and mitigation of risks posed by artificial intelligence systems by considering actual and potential impacts to human rights, democracy and the rule of law.”[30] The Government of Canada’s Algorithmic Impact Assessment tool[31] has been among the leading global standards for assessing and mitigating the risks of automated decision-making systems in the federal government.
Impact assessments are also an important piece of AI governance, used to evaluate and address human rights issues in AI systems before they contribute to discrimination and other violations of law. That is why, last year, the OHRC and the Law Commission of Ontario, with contributions from the Canadian Human Rights Commission, released a Human Rights AI Impact Assessment (HRIA) to assist designers, developers, operators and owners of AI systems in creating and using AI systems that align with human rights protections in Canada, and specifically in Ontario.[32]
Bias and discrimination are complex issues. The OHRC has heard from groups that assess and audit AI systems that human rights concerns are often overlooked or ignored because organizations do not possess the knowledge and expertise needed to review their uses of AI for such concerns. The HRIA is designed to support organizations through these challenges.
The OHRC recommends that the AI Strategy encourage the use of human rights impact assessments to help private and public actors assess and mitigate the human rights impact of AI systems.
Conclusion
The innovative potential of AI offers opportunities to positively reshape key aspects of our lives by driving efficiency, economic growth and societal progress. However, this focus often overshadows critical human rights concerns. The early adoption of unvetted AI continues to result in harmful incidents that demonstrate that human rights remain peripheral to the development, governance and use of these technologies.
Canada’s strategy should make human rights a foundational pillar of our country’s approach to AI. It must recognize Canada’s international commitments to human rights and make clear to developers, operators and owners of AI systems that they have human rights obligations. Canada’s strategy should make clear that AI systems and their use must be consistent with human rights: systems must be valid and reliable, and their use must be safe, privacy protective, human rights affirming, transparent and accountable. It should also encourage the use of human rights impact assessments to help actors assess and mitigate the human rights impact of their AI systems.
Canada is a leader in AI. The OHRC offers this submission to support the government in fulfilling its vision to ensure that everyone in Canada shares in the benefits of AI free from discrimination.
[1] Innovation, Science and Economic Development Canada, Government of Canada launches AI Strategy Task Force and public engagement on the development of the next AI strategy (2025), online: https://ised-isde.canada.ca/site/ised/en/public-consultations/help-define-next-chapter-canadas-ai-leadership.
[2] Canadian Parliamentary Channel, Minister Evan Solomon speaks at ‘ALL IN’ conference on AI – September 24, 2025 (September 2025) at 10:42, online: https://www.youtube.com/live/HtwKNHOxyLc?si=D1cMrizk8OlkB-5N&t=642.
[3] The Guardian, Dutch government resigns over child benefits scandal (January 2021), online: https://www.theguardian.com/world/2021/jan/15/dutch-government-resigns-over-child-benefits-scandal.
[4] The Guardian, Revealed: bias found in AI system used to detect UK benefits fraud (December 2024), online: https://www.theguardian.com/society/2024/dec/06/revealed-bias-found-in-ai-system-used-to-detect-uk-benefits.
[5] Wired, Algorithms Policed Welfare Systems For Years. Now They're Under Fire for Bias (October 2024), online: https://www.wired.com/story/algorithms-policed-welfare-systems-for-years-now-theyre-under-fire-for-bias.
[6] New York Times, An Algorithm Told Police She Was Safe. Then Her Husband Killed Her (July 2024), online: https://www.nytimes.com/interactive/2024/07/18/technology/spain-domestic-violence-viogen-algorithm.html.
[7] American Civil Liberties Union, The Devil is in the Details: Interrogating Values Embedded in the Allegheny Family Screening Tool (2023), online: https://www.aclu.org/the-devil-is-in-the-details-interrogating-values-embedded-in-the-allegheny-family-screening-tool.
[8] The Markup, The Secret Bias Hidden in Mortgage-Approval Algorithms (August 2021), online: https://themarkup.org/denied/2021/08/25/the-secret-bias-hidden-in-mortgage-approval-algorithms.
[9] Federal Trade Commission, Rite Aid Banned from Using AI Facial Recognition After FTC Says Retailer Deployed Technology without Reasonable Safeguards (December 2023), online: https://www.ftc.gov/news-events/news/press-releases/2023/12/rite-aid-banned-using-ai-facial-recognition-after-ftc-says-retailer-deployed-technology-without.
[10] Office of the Privacy Commissioner of Canada, RCMP’s use of Clearview AI’s facial recognition technology violated Privacy Act, investigation concludes (2021), online: https://www.priv.gc.ca/en/opc-news/news-and-announcements/2021/nr-c_210610.
[11] Information and Privacy Commissioner of Ontario, Facial Recognition and Mugshot Databases: Guidance for Police in Ontario (January 2024), online: https://www.ipc.on.ca/en/resources-and-decisions/facial-recognition-and-mugshot-databases-guidance-police-ontario-0.
[12] See “Chapter 6 - Arrests, charges, and artificial intelligence: gaps in policies, procedures and practices” of OHRC, From Impact to Action: Final report into anti-Black racism by the Toronto Police Service (December 2023), online: https://www.ohrc.on.ca/en/impact-action-final-report-anti-black-racism-toronto-police-service.
[13] The Guardian, LAPD ended predictive policing programs amid public outcry. A new effort shares many of their flaws (November 2021), online: https://www.theguardian.com/us-news/2021/nov/07/lapd-predictive-policing-surveillance-reform.
[14] Wired, Predictive Policing Software Terrible at Predicting Crimes (October 2023), online: https://www.wired.com/story/plainfield-geolitica-crime-predictions.
[15] The Guardian, Facebook accused of failing to tackle discrimination in housing ads (March 2018), online: https://www.theguardian.com/technology/2018/mar/28/facebook-accused-discrimination-failure-housing-lawsuit.
[16] CBC News, Use of Facebook targeting on job ads could violate Canadian human rights law, experts warn (April 2019), online: https://www.cbc.ca/news/politics/facebook-employment-job-ads-discrimination-1.5086491.
[17] Netherlands Institute for Human Rights, Meta Platforms Ireland Ltd. discriminates on the ground of gender when displaying job advertisements to users of Facebook in the Netherlands (February 2025), online: https://oordelen.mensenrechten.nl/oordeel/2025-17/4a575c22-d4b0-499f-8811-6b5e6720344d.
[18] CTV News, Competition Bureau says it's probing whether landlords are using AI to set rents (February 2025), online: https://www.ctvnews.ca/business/article/competition-bureau-says-its-probing-whether-landlords-are-using-ai-to-set-rents.
[19] Martine August and Cloé St-Hilaire, Financialization, housing rents and affordability in Toronto (April 2025), online: https://journals.sagepub.com/doi/10.1177/0308518X251328129.
[20] New York State Senate, Assembly Bill A1417 (October 2025), online: https://www.nysenate.gov/legislation/bills/2025/A1417/amendment/original.
[21] Aleksandra Sorokovikova, Pavel Chizhov, Iuliia Eremenko, Ivan P. Yamshchikov, Surface Fairness, Deep Bias: A Comparative Study of Bias in Language Models (September 2025), online: https://www.thws.de/en/research/institutes/cairo/releases/thema/artificial-intelligence-gives-women-lower-salary-advice.
[22] Kyra Wilson and Aylin Caliskan, Gender, Race, and Intersectional Bias in Resume Screening via Large Language Model Retrieval (October 2024), online: https://ojs.aaai.org/index.php/AIES/article/view/31748.
[23] Valentin Hofmann, Pratyusha Ria Kalluri, Dan Jurafsky & Sharese King, AI generates covertly racist decisions about people based on their dialect (August 2024), online: https://www.nature.com/articles/s41586-024-07856-5.
[24] Ariana Mihan, Ambarish Pandey and Harriette G. C. Van Spall, Artificial intelligence bias in the prediction and detection of cardiovascular disease (November 2024), online: https://www.nature.com/articles/s44325-024-00031-9.
[25] Organisation for Economic Co-operation and Development, Respect for the rule of law, human rights and democratic values, including fairness and privacy (Principle 1.2), online: https://oecd.ai/en/dashboards/ai-principles/P6.
[26] Ibid.
[27] Council of Europe, The Framework Convention on Artificial Intelligence, online: https://www.coe.int/en/web/artificial-intelligence/the-framework-convention-on-artificial-intelligence.
[28] Council of Europe, Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law (September 2024), online: https://rm.coe.int/1680afae3c.
[29] Ontario Human Rights Commission, Joint statement by the Information and Privacy Commissioner of Ontario and the Ontario Human Rights Commission on the use of AI technologies (May 2023), online: https://www.ohrc.on.ca/en/news_centre/joint-statement-information-and-privacy-commissioner-ontario-and-ontario-human-rights-commission-use.
[30] Council of Europe, supra note 28.
[31] Government of Canada, Algorithmic Impact Assessment tool, online: https://www.canada.ca/en/government/system/digital-government/digital-government-innovations/responsible-use-ai/algorithmic-impact-assessment.html.
[32] Law Commission of Ontario and Ontario Human Rights Commission, Human Rights AI Impact Assessment (November 2024), online: https://www.ohrc.on.ca/en/human-rights-ai-impact-assessment.
