Bill 194, Strengthening Cyber Security and Building Trust in the Public Sector Act, 2024

June 11, 2024

Hon. Todd McCarthy
Minister of Public and Business Service Delivery and Procurement
777 Bay Street, 5th Floor 
Toronto, ON M5B 2H7

 

Dear Minister McCarthy,

Re: Bill 194, Strengthening Cyber Security and Building Trust in the Public Sector Act, 2024

 

The Ontario Human Rights Code (the “Code”) states that it is public policy in Ontario to recognize every person’s dignity and worth and provide equal rights and opportunities without discrimination. The assurance of non-discrimination is central to the Code and guarantees to all Ontarians that they will derive equal benefit from all laws and policies. Reflective of the foundational nature of its objective, the Code has primacy over all other provincial acts.[1] This means that all other Ontario laws must be aligned with the Code. In affirming this status, the Supreme Court of Canada held that the Code “must not only be given expansive meaning but also offered expansive application.”[2] These principles require that legislation, regulations, policies, procedures and programs be developed in accordance with the Code.

Under the Code, the Ontario Human Rights Commission (OHRC) has a broad statutory mandate to promote, protect and advance respect for human rights, and to identify and promote the elimination of discriminatory practices. In addition, the Memorandum of Understanding between the Attorney General of Ontario – “the guardian of public interest in respect of the rule of law…” – and the OHRC states that the OHRC plays a meaningful role in the development and delivery of the government’s policies and programs (MOU, article 5, clauses c and f). It is under those mandates that the OHRC makes this submission on the proposed provisions relating to the public sector’s use of artificial intelligence (AI) in Bill 194, Strengthening Cyber Security and Building Trust in the Public Sector Act, 2024.

The OHRC commends the Ontario government for recognizing that AI already impacts Ontarians today, and that a framework is required for the responsible and safe use of technology by Ontario’s public sector. While many are eager to harness the opportunities and benefits AI presents, its use by public sector entities around the world has already resulted in serious harms to individuals and communities, including algorithmic discrimination based on race, gender and other personal attributes, as well as violations of privacy. Human rights on these and other grounds are fundamental to Canadian values.

The present and emerging issues arising from the development and use of AI are the reason the OHRC and Information and Privacy Commissioner of Ontario (IPC) issued a joint statement calling for strong guardrails that uphold a broader human rights-based approach to protect Ontarians from unsafe digital technologies.[3] Since then, our organizations have been meeting with the Ministry of Public and Business Service Delivery and Procurement to advise on the development of the Ontario Public Service’s guidance for the use of generative AI and other components of its Trustworthy AI Framework. The OHRC has also made a submission on the use of AI to the Standing Committee on Social Policy for the review of Bill 149, Working for Workers Four Act, 2023.[4]

In the current submission, the OHRC provides recommendations on Bill 194 to ensure that the opportunities, benefits and protections associated with the use of AI are available to all Ontarians without discrimination. These recommendations are based on internationally established guardrails, which have already recognized the importance of human rights due diligence. As AI technologies become increasingly pervasive, the Ontario government must build human rights protections into the foundation of any governing framework and at each subsequent level of regulation.

Discriminatory impacts from public sector use of artificial intelligence systems

Bill 194 recognizes that digital information and technology related to children warrants special protection and identifies schools, children’s aid societies and hospitals as vulnerable sectors for cyber threats and technology disruptions. The OHRC supports an approach that prioritizes these sectors. The OHRC has brought attention to the need for innovation and the adoption of human rights protections in Ontario’s child welfare system[5] and public education system,[6] and “health and wellbeing” is one of the OHRC’s areas of priority under its Strategic Plan.[7] The three sectors identified are among those of particular concern to the OHRC because documented misuse of AI technologies has already led to systemic discrimination within them. Incidents involving public sector use of AI in other jurisdictions have demonstrated the nature and extent of the harm that may occur, and it would be remiss to assume that similar events cannot happen here. However, two significant sectors are missing from the legislation’s specified areas of focus: policing services and social assistance services, discussed in greater detail below.

The OHRC has publicly highlighted the following uses of AI technologies for their significant harmful impacts or their potential to impact human rights. In many of these cases, the organizations involved should have been aware of the limitations of the technologies and their potential to inflict or contribute to harm. These examples bring attention to the risks associated with technologies that have yet to be effectively vetted for safety, and to the serious harms that can result from failures or misuse, including human rights violations. Such failures and harms are not exclusive to private entities; they can also originate from public institutions.

 

Child welfare

Child welfare agencies across the United States of America (US) are deploying AI systems to predict the risk of neglect and harm to children.[8] The predictive systems generate risk scores using data on factors such as disability, poverty rates and family size, as well as proxies such as geographic indicators that could be used to infer other personal attributes. The use of these systems raises human rights concerns by assigning risk based on arbitrary weights and population data related to protected grounds, rather than the specific circumstances of each child and family. Such algorithmic bias can reproduce historic patterns of systemic discrimination in child welfare investigations, apprehensions, foster care placements and reunification determinations. The US Department of Justice is currently investigating the use of these systems for algorithmic discrimination.[9]

The OHRC’s report on the overrepresentation of Indigenous and racialized children in Ontario’s child welfare system found that tools and standards, coupled with workers’ biases, could lead to incorrect assessments of the level of risk that children are exposed to and affect decisions to intervene.[10] The OHRC remains concerned that adopting AI technologies like those found in the US could increase the prevalence of systemic discrimination in Ontario’s child welfare system.

 

Education

Several US states use AI systems to predict school attendance and performance using data on personal attributes such as race, gender, disability and household income.[11] Although human rights-based data collection and use are important for identifying systemic discrimination, the school boards in question have been accused of using the data to influence educators negatively based on students’ personal attributes. The technologies were also found to be unreliable and opaque, as officials would not disclose or could not explain how factors such as race were weighted in risk scores.[12]

The OHRC’s current initiative examining anti-Black racism in Ontario’s publicly funded education systems has confirmed the existence of anti-Black racism across school boards, leading to reduced educational attainment, increased child welfare and police involvement, and reduced health and well-being among Black students. Furthermore, insufficient socio-demographic data is collected, and what is collected is inconsistent across schools and school boards. These factors aggravate the risk of harm in the deployment of AI technologies in this context.

In Ontario, the government’s independent review of the Peel District School Board found that the board’s reliance on an AI system to screen prospective teaching candidates may have limited diversity in its hiring. The reviewers noted that the system “selected candidates that mirrored previous successful hires, thereby indirectly reproducing historical preferences in hiring” that may have inappropriately screened out qualified racialized candidates and inadvertently perpetuated discrimination.[13]

 

Social Assistance Services

In Denmark, the government deployed an AI system to proactively detect fraudulent use of the country’s social welfare system. As a result, vulnerable individuals and families were investigated repeatedly and had benefits suspended until the investigations were resolved. Only eight percent of the approximately 50,000 investigations resulted in some form of punishment. The country’s human rights and privacy authorities found that the system relied on attributes such as race, family status, age, gender and disability to increase a person’s risk assessment.[14] The system also ethnically profiled people based on the languages they spoke, including whether they spoke Danish.[15]

In the Netherlands, 20,000 families were wrongly investigated by the country’s tax authority for fraudulently claiming child benefit allowances, requiring approximately 10,000 families to repay tens of thousands of euros, and in some cases, leading to unemployment and bankruptcies. The government had deployed an algorithmic system that placed special scrutiny on people for their ethnicity or dual nationality, profiling individuals based on their race. The Dutch government resigned in 2021 in response to public pressure over the incident.[16]

Human Rights Watch and other organizations have also raised concerns regarding the way AI technologies have been used in social assistance systems in the United Kingdom, Spain and other European countries.[17]

 

Health

AI systems are used by healthcare networks across the US to assign risk scores to patients, aiding in healthcare decision-making. However, it was discovered that these systems were trained on data reflecting that hospitals allocated fewer resources to “Black patients who have the same level of need, and the algorithm thus falsely concludes that Black patients are healthier than equally sick White patients.”[18] As a result, hospitals reproduced existing life-threatening discriminatory practices in which Black patients were required to be much more ill to receive the same care as White patients.

AI technologies used for medical imaging and diagnostics are known to struggle with accuracy for underrepresented communities. For example, AI techniques underperform in distinguishing between malignant and benign moles for racialized people, and in detecting cardiovascular diseases in women.[19]

In Europe, concerns have been raised that AI systems have recommended incorrect medical prescriptions to immigrants, seniors and people with disabilities because they reproduced prejudices among health care professionals to suspect that people within those populations exaggerate their health issues.[20]

The Truth and Reconciliation Commission of Canada’s (TRC) Calls to Action urge all levels of government to acknowledge that the current state of Indigenous health in Canada is a direct result of discriminatory policies and practices. Health Quality Ontario has noted the significant health inequities faced by Indigenous peoples in Ontario, many of which are rooted in systemic racism.[21] It would not be difficult for this discrimination to translate into biased data used to train AI technologies in health care settings, leading to even more devastating health outcomes for Indigenous communities.

 

Policing Services

The use of facial recognition technologies by law enforcement agencies has drawn scrutiny in Canada because it raises clear privacy and human rights concerns about the collection, handling, use and disclosure of personal information, and about the procedures within and between public sector entities and other parties.

The well-known investigation of Clearview AI’s facial recognition technology by privacy authorities in Canada,[22] and the IPC’s guidance for police on facial recognition and mugshot databases,[23] highlight concerns about bias in the technology and the procedures governing its use. This includes concerns from marginalized communities that they might be disproportionately represented in biometric databases.

Examples in other jurisdictions have also raised human rights and privacy concerns about law enforcement’s use of AI technologies and their protocols, including:

  • The use of generative AI to produce a face from DNA samples, and running facial recognition on the image;[24]
  • Unauthorized facial recognition requests to other law enforcement agencies in another jurisdiction to circumvent prohibitions;[25] and
  • The use of firearm[26] and gunshot detection[27] technologies, which were unreliable and disproportionately deployed in marginalized communities. Consequently, the technology was sending police officers on high alert into those neighbourhoods.

The OHRC is particularly concerned with predictive technologies used in police deployment decisions. Some police services have begun to employ techniques that use crime data, which comes from historically discriminatory policing, to determine future probabilities of criminal occurrences.[28] Using biased data to predict crime and inform deployment decisions leads to further police action in over-policed areas. Police departments in the US have cancelled contracts for predictive technologies due to flawed methodologies[29] and poor reliability.[30]

Regulatory efforts in Europe and at the state and municipal level in the US have focused on the use of AI technologies for public safety. Regulating the use of AI by police services and other public safety organizations should be considered a priority for the Ontario government as well.

OHRC Recommendations

 

Recommendation 1: Recognize the overarching importance of upholding human rights in Bill 194

The Preamble to Bill 194 states that the Government of Ontario “believes that artificial intelligence systems in the public sector should be used in a responsible, transparent, accountable and secure manner that benefits the people of Ontario while protecting privacy.”

The OHRC supports the recognition of the fundamental right to privacy in the use of AI. Privacy is recognized in the Canadian Charter of Rights and Freedoms and protected under federal and provincial legislation.

However, it is also essential that the opportunities, benefits and protections associated with the use of AI be available to all Ontarians without discrimination. This is why the OHRC and the Information and Privacy Commissioner of Ontario issued a joint statement on the importance of understanding privacy rights concerning AI technologies through a broader human rights approach, and called on the Ontario government to develop and implement effective guardrails on the public sector’s use of AI technologies. It is important to recognize that human rights are inalienable, indivisible and interdependent.[31] One set of rights cannot be fully enjoyed without the other. Legislation should therefore aim to enhance human rights under the Code, as well as privacy.

Recognizing human rights protections is also important for Ontario’s economic prosperity. The Organisation for Economic Co-operation and Development (OECD) has made human rights one of its five AI Principles, and states that the goal of those principles is to “promote the use of AI that is innovative and trustworthy and that respects human rights and democratic values.”[32] Currently, 47 countries including Canada have committed to the OECD’s AI Principles. Recognizing the importance of human rights in Bill 194 and requiring non-discrimination in the use of AI would align Ontario with international standards for effective AI policies and maintain Ontario’s competitive advantage.

The Ontario government has already recognized the importance of upholding fundamental rights guaranteed by the Code and the Charter in other critical areas of public services, including the Comprehensive Ontario Police Services Act (COPS Act) passed by the legislature in 2019, which recognizes as one of its principles:

The importance of safeguarding the fundamental rights and freedoms guaranteed by the Canadian Charter of Rights and Freedoms and the Human Rights Code.

The Code sets out Ontarians’ most fundamental rights and responsibilities and holds quasi-constitutional status.[33] With the potential for AI to impact every area of the public sector, it is imperative that the organizations deploying these systems have regard for the Code and uphold all human rights as an overarching principle in their use of AI.

Impact assessments are an important tool to evaluate and address human rights issues in AI systems before they contribute to discrimination and other violations of law. The OHRC, in its joint initiative on AI with the Law Commission of Ontario and the Canadian Human Rights Commission,[34] will release a human rights impact assessment tool for AI technologies later this year to assist developers, operators and administrators in aligning with their human rights responsibilities under the Code and the Canadian Human Rights Act.

 

The OHRC recommends that the legislation be amended to:

  • Identify the importance of human rights in the Preamble as follows: “Believes that artificial intelligence systems in the public sector should be used in a manner that is valid and reliable, safe, privacy protective, transparent, accountable and human rights affirming. It should benefit the people of Ontario while protecting fundamental rights and freedoms guaranteed by the Canadian Charter of Rights and Freedoms and the Human Rights Code.”

  • Enshrine human rights protections in the legislation itself, in the form of a human rights affirming principle as set out in Recommendation 2.

  • Mandate that risk management prescribed under Subsection 5(4) include impact assessments on fundamental rights and freedoms guaranteed by the Canadian Charter of Rights and Freedoms and the Human Rights Code.

Recommendation 2: Embed the principles recommended by the IPC and the OHRC in Bill 194

The OHRC supports a principle-based approach to AI regulation. The OHRC recommends that the government embed the principles from the May 2023 joint statement of the IPC and OHRC in the legislation itself. This could be achieved in a form similar to that found in the COPS Act and would include the following principles:[35]

 

Principle 1: Valid and Reliable
The technology should demonstrate, through continuous testing and monitoring, that it consistently works accurately and as expected, fulfilling its stated uses throughout its lifetime of use. With respect to human rights, the technology should perform as required to provide non-discriminatory service to the diverse communities of Ontario.

 

Principle 2: Safe
Robust measures should be in place to ensure that a technology is considered safe to use before it is deployed. These measures should anticipate and account for the possibility that a technology can be misused or have unintended outcomes. The technology and the outcomes from using the technology should not result in discrimination.

 

Principle 3: Privacy Protective
Organizations should use information and data in the development and operation of the technology in a manner that complies with privacy law, and only after individuals and communities understand and consent to the use of their information.[36] Personal data collection and use should be restricted to legally compliant purposes and conducted with methods that protect privacy to the greatest extent possible.

 

Principle 4: Transparent
Organizations should provide a public account of how the technology is supposed to operate, including the steps taken to develop the technology and indicators of its reliability and safety (e.g., the results of impact assessments, incident logs and complaints filed by service recipients). Organizations should be capable of understanding how the technology operates in all situations and why errors occur, including errors that result in non-compliance with legal obligations under the Code.

 

Principle 5: Accountable
Organizations should have internal governance structures, including policies and procedures for the development, use and review of a technology, with clear roles and responsibilities for ensuring that the use of the technology is reliable and safe, including compliance with the obligation not to discriminate as set out in Part I of the Code. Service users should have clearly defined channels to request information and report incidents.

 

Principle 6: Human Rights Affirming
The benefits from the use of AI should be universal and free from discrimination. Human rights are inalienable and should not be dismissed or taken away by the use of AI. Legal compliance with human rights protections should be built into the design of AI technologies and procedures, and organizations should prevent and remedy discrimination effectively.

Embedding these principles in the legislation will ensure that the regulations protect the fundamental rights and freedoms of all Ontarians and follow a human rights approach. Please see the IPC’s submission on Bill 194 for a more comprehensive overview of the principles.

 

The OHRC recommends that:

The legislation be amended to include a “Declaration of Principles” at Section 1 similar to the COPS Act. As set out above, these principles should assert that AI should be used in a manner that is:

 

  1. Valid and reliable
     
  2. Safe
     
  3. Privacy protective
     
  4. Transparent
     
  5. Accountable, and
     
  6. Human rights affirming.
     

To the greatest extent possible, the legislation require the disclosure of information relating to governance, the technical characteristics of AI systems, and practices.

Recommendation 3: Require strong explainability requirements for individual decisions and the overall logic of AI systems to protect the public and government from unreliable and unsafe use of AI by the public sector

Transparency and accountability are critical to the protection of human rights. All parties, including the government and the public sector, will need to rely on strong explainability requirements to ensure transparency and accountability in the use of AI. The backgrounder that accompanies the government’s announcement states that Ontario is considering requirements for “organizations to inform the public of when they are interacting with AI, or mandating that decisions made by AI always have a channel for human review — recognizing AI’s capacity for bias.”[37]

The OHRC recommends that the government set strong explainability requirements for both individual decisions and the overall logic of AI systems so that users can understand and have confidence in the decisions, outputs and results of AI systems. Explainability enables individuals to understand how technologies have impacted them (including whether they have experienced discrimination), to raise concerns, and to file complaints if needed. It creates accountability for developers and service providers, requiring them to ensure that their technologies work reliably and as intended, and to understand and address the negative consequences of their technologies.

Explainability also gives the government and the public sector the information necessary to procure, test and monitor systems effectively, and to take enforcement action if necessary. For example, vendors may present AI systems as designed to accomplish an intended objective, but the systems may do so in a manner that would be unethical or otherwise impermissible if a person were completing the task. Many of the examples listed earlier in this submission involve AI systems that assess individuals and groups based on race, gender and other personal attributes. Vendors may also claim that limitations of their technologies can be addressed by techniques such as retrieval-augmented generation, whereby the accuracy and reliability of their products are enhanced by building on the organization’s data.

Such requirements would be positive first steps that could be strengthened by a requirement that an AI system be able to provide the reviewer with the information necessary to act. For example, a child welfare worker requires knowledge of the variables involved in an automated risk assessment and how the system applied those variables to the situation. Similarly, a person attempting to access public services cannot understand how they were impacted by an automated assessment technology if they are provided with broad descriptions of the types of information used for the assessment but not how their data were factored into the decision.

AI safety and risk management cannot be accomplished if explanations and explainability requirements are shallow.

 

The OHRC recommends that:

The legislation be amended to provide for regulations that set out explainability requirements before AI systems can be used. In particular, the legislation should require the regulations to include:

  1. Internal procedures to retrieve the necessary information from an AI system for human review, or to retrieve the information that would allow a human to manually perform the task completed by the system to verify reliability;
     
  2. An obligation on organizations to adopt mechanisms that allow service users to request and receive information on how their personal information, including data on their personal attributes, was used by AI technology in processing and decision-making; and
     
  3. An obligation on organizations to retain data long enough for a technical accounting and for individuals to exercise their right to access (for example, applications to the Human Rights Tribunal of Ontario must be filed within one year of the alleged discrimination).

Recommendation 4: Identify the responsibilities and the powers of the government where public sector entities cannot prevent violations of legal rights and protections

The OHRC believes that the protection of human rights necessitates a full prohibition on AI systems in certain circumstances. Such prohibitions are contemplated by s. 5(6). The OHRC and IPC have also recommended such prohibitions to safeguard against harmful impacts of AI technologies on fundamental rights.[38] Similarly, the European Union recognizes the potential for certain uses of AI technologies to violate its Charter of Fundamental Rights (including the right to non-discrimination) and has established human rights-based “no-go” zones for AI technologies under Article 5 of its Artificial Intelligence Act.[39] Even from a general rights-based perspective, there should be no tolerance for activities that violate fundamental rights and freedoms.

With any advancement, there is risk that some technologies might not meet societal values and expectations. While the government has indicated that it may prohibit AI systems, it should also be clear about its role and how else it might act in the public interest if a public sector entity continues to use a technology that proves unreliable, unsafe or unlawful. This might mean ordering a pause in its use or imposing stricter requirements for the testing, use or governance of the technology.

 

The OHRC recommends that:

  • Section 5 and the associated provisions in Section 7 be clarified regarding the government’s obligations and powers where public sector entities continue to use technologies that do not comply with legal requirements, including requirements under the Code.
     
  • Public sector entities be obligated to disclose their use of AI and, to the greatest extent possible, the results of impact assessments and any other compliance tests required by regulation.

Conclusion: The lack of governance in the use of AI presents challenges and risks to human rights

The consequences of algorithmic discrimination can be severe and have devastating impacts. As described above, families have been wrongfully separated, employment and housing have been taken away, and individuals and groups in vulnerable situations have been surveilled and denied access to critical social and health services because public sector entities relied on faulty AI systems that produced erroneous decisions and eroded public trust. It cannot be assumed that the conduct of public sector entities will always be consistent with the fundamental rights and freedoms of the people of Ontario. Incorporating human rights principles in the legislation as recommended will ensure that similar principles are included in the regulations.

The OHRC supports an approach that prioritizes areas of the public sector working with the most vulnerable members of our society, including child welfare, education and health. As identified above, experiences in other jurisdictions have revealed that the use of AI in these areas, as well as in the policing and social services sectors, has created serious incidents of harm to vulnerable populations.

The government must ensure meaningful engagement with Ontario’s diverse communities in the development of this AI governance, particularly with groups likely to be negatively affected by AI technologies, to mitigate discrimination risks and improve human rights outcomes. Such engagement should take place before the development of legislation, regulations, governance, technical standards, methodologies and practices for the use of AI technologies. Public sector entities must also be obligated to disclose common practices, guidelines and specifications for their use of AI technologies to allow for the scrutiny and analysis needed to identify issues and make improvements. The OHRC’s Human Rights Based Approach Framework sets out a clear series of steps that policymakers should undertake in the development of legislation, regulations, policies and guidelines, including a rigorous engagement process that serves to mitigate risks of discrimination and improve human rights outcomes.

The OHRC’s recommendations for Bill 194, if adopted, would ensure that there is a foundation that protects human rights and addresses potential bias in the use of AI. It is the foundational nature of these recommendations that requires their inclusion in the legislation itself, as opposed to the regulations.

Ontario is a leader in human rights and should maintain that leadership by embedding human rights principles in the use of AI. The OHRC continues to advise the Ontario Public Service towards the completion of its AI framework and stands ready to support the development of a broader framework for public sector use of AI in Ontario in a manner compliant with the government’s human rights obligations. In keeping with the OHRC’s positive obligations under the Code and its Memorandum of Understanding with the Attorney General, the OHRC looks forward to making submissions concerning the pending regulations.

Consistent with the OHRC’s commitment to public accountability and service to Ontarians, this submission will be made public.

 

Sincerely,
Patricia DeGuire
Chief Commissioner

[1] Unless legislation specifically stipulates it applies despite the Code.

[2] Tranchemontagne v. Ontario (Dir. Disability Support Program), [2006] 1 S.C.R. 513, 2006 SCC 14.

[3] OHRC and IPC, Joint statement by the Information and Privacy Commissioner of Ontario and the Ontario Human Rights Commission on the use of AI technologies (May 2023), online: https://www.ohrc.on.ca/en/news_centre/joint-statement-information-and-privacy-commissioner-ontario-and-ontario-human-rights-commission-use.

[4] OHRC, Submission to the Standing Committee on Social Policy regarding Bill 149, Working for Workers Four Act, 2023 (February 2024), online: https://www.ohrc.on.ca/en/news_centre/ontario-human-rights-commission-submission-standing-committee-social-policy-regarding-bill-149.

[5] OHRC, Interrupted childhoods: Over-representation of Indigenous and Black children in Ontario child welfare (2018), online: https://www.ohrc.on.ca/en/interrupted-childhoods.

[6] OHRC, Right to Read: Public inquiry into human rights issues affecting students with reading disabilities (2022), online: https://www.ohrc.on.ca/en/right-to-read-inquiry-report.

[7] OHRC, Human Rights First: A plan for belonging in Ontario (2023), online: https://www.ohrc.on.ca/en/human-rights-first-plan-belonging-ontario.

[8] American Civil Liberties Union, Family Surveillance by Algorithm: The Rapidly Spreading Tools Few Have Heard Of (2021), online: https://www.aclu.org/documents/family-surveillance-algorithm.

[9] Associated Press, Child welfare algorithm faces Justice Department scrutiny (31 January 2023), online: https://apnews.com/article/justice-scrutinizes-pittsburgh-child-welfare-ai-tool-4f61f45bfc3245fd2556e886c2da988b.

[10] OHRC, supra note 5.

[11] The Markup, False Alarm: How Wisconsin Uses Race and Income to Label Students “High Risk” (2023), online: https://themarkup.org/machine-learning/2023/04/27/false-alarm-how-wisconsin-uses-race-and-income-to-label-students-high-risk.

[12] New America, When Students Get Lost in the Algorithm: The Problems with Nevada's AI School Funding Experiment (2024), online: https://www.newamerica.org/education-policy/edcentral/when-students-get-lost-in-the-algorithm-the-problems-with-nevadas-ai-school-funding-experiment.

[13] Ena Chadha, Suzanne Herbert and Shawn Richard, Review of the Peel District School Board (February 2020), online: https://files.ontario.ca/edu-review-peel-dsb-school-board-report-en-2023-01-12.pdf.

[14] Wired, How Denmark’s Welfare State Became a Surveillance Nightmare (2023), online: https://www.wired.com/story/algorithms-welfare-state-politics.

[15] Ibid.

[16] The Guardian, Dutch government resigns over child benefits scandal (2021), online: https://www.theguardian.com/world/2021/jan/15/dutch-government-resigns-over-child-benefits-scandal.

[17] Lawfare Institute, The Algorithms Too Few People Are Talking About (January 2024), online: https://www.lawfaremedia.org/article/the-algorithms-too-few-people-are-talking-about.

[18] American Association for the Advancement of Science, Dissecting racial bias in an algorithm used to manage the health of populations (October 2019), online: https://www.science.org/doi/10.1126/science.aax2342.

[19] National Centre for Biotechnology Information, Algorithmic Discrimination in Health Care: An EU Law Perspective (June 2022), online: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9212826.

[20] Ibid.

[21] Health Quality Ontario, Northern Ontario Equity Strategy (2018), online: https://www.hqontario.ca/Portals/0/documents/health-quality/health-equity-strategy-report-en.pdf.

[22] Office of the Privacy Commissioner of Canada, RCMP’s use of Clearview AI’s facial recognition technology violated Privacy Act, investigation concludes (2021), online: https://www.priv.gc.ca/en/opc-news/news-and-announcements/2021/nr-c_210610.

[23] Information and Privacy Commissioner of Ontario, Facial Recognition and Mugshot Databases: Guidance for Police in Ontario (January 2024), online: https://www.ipc.on.ca/en/resources-and-decisions/facial-recognition-and-mugshot-databases-guidance-police-ontario-0.

[24] Wired, Cops Used DNA to Predict a Suspect’s Face—and Tried to Run Facial Recognition on It (January 2024), online: https://www.wired.com/story/parabon-nanolabs-dna-face-models-police-facial-recognition.

[25] Washington Post, These cities bar facial recognition tech. Police still found ways to access it (May 2024), online: https://www.washingtonpost.com/business/2024/05/18/facial-recognition-law-enforcement-austin-san-francisco.

[26] The Verge, NYC’s AI gun detectors hardly work (April 2024), online: https://www.theverge.com/2024/4/2/24119275/evolv-technologies-ai-gun-scanners-nyc-subway.

[27] American Civil Liberties Union, Four Problems with the ShotSpotter Gunshot Detection System (August 2021), online: https://www.aclu.org/news/privacy-technology/four-problems-with-the-shotspotter-gunshot-detection-system.

[28] See “Chapter 6 - Arrests, charges, and artificial intelligence: gaps in policies, procedures and practices” of OHRC, From Impact to Action: Final report into anti-Black racism by the Toronto Police Service (December 2023), online: https://www.ohrc.on.ca/en/impact-action-final-report-anti-black-racism-toronto-police-service.

[29] The Guardian, LAPD ended predictive policing programs amid public outcry. A new effort shares many of their flaws (November 2021), online: https://www.theguardian.com/us-news/2021/nov/07/lapd-predictive-policing-surveillance-reform.

[30] Wired, Predictive Policing Software Terrible at Predicting Crimes (October 2023), online: https://www.wired.com/story/plainfield-geolitica-crime-predictions.

[31] United Nations Human Rights Office of the High Commissioner, What are human rights?, online: https://www.ohchr.org/en/what-are-human-rights.

[32] OECD, OECD AI Principles overview (updated May 2024), online: https://oecd.ai/en/ai-principles.

[33] Snow v. Honda of Canada Manufacturing, 2007 HRTO 45 (CanLII) at para. 19 [Snow].

[34] Law Commission of Ontario, Ontario Human Rights Commission and Canadian Human Rights Commission working together on human rights and the use of artificial intelligence (December 2021), online: https://www.ohrc.on.ca/en/news_centre/law-commission-ontario-ontario-human-rights-commission-and-canadian-human-rights-commission-working.

[35] OHRC and IPC, supra note 3.

[36] For example, Indigenous peoples have stated that government collection and misuse of data has not always been beneficial or respectful, and have infringed on Indigenous data sovereignty. See: First Nations Information Governance Centre, Ownership, Control, Access and Possession (OCAP): The path to First Nations Information Governance (May 2014), online: https://fnigc.ca/wp-content/uploads/2020/09/5776c4ee9387f966e6771aa93a04f389_ocap_path_to_fn_information_governance_en_final.pdf.

[37] Government of Ontario, Strengthening Cyber Security and Building Digital Trust (May 2024), online: https://news.ontario.ca/en/backgrounder/1004581/strengthening-cyber-security-and-building-digital-trust.

[38] IPC, Submission for Bill 149, the Working for Workers Four Act, 2023, which would amend the Employment Standards Act, 2000 (ESA) (February 2024), online: https://www.ipc.on.ca/sites/default/files/legacy/2024/02/2024-02-07-bill-149-committee-submission.pdf.

[39] See “Article 5: Prohibited Artificial Intelligence Practices” of European Parliament, Artificial Intelligence Act (March 2024), online: https://www.europarl.europa.eu/doceo/document/TA-9-2024-0138_EN.html.