Submission on TPSB Use of Artificial Intelligence Technologies Policy

September 22, 2021

Dubi Kanengisser, PhD
Senior Advisor, Strategic Analysis and Governance
Toronto Police Services Board
40 College Street
Toronto, ON M5G 2J3

Dear Dr. Kanengisser:

Re: Consultation on the Toronto Police Services Board’s Use of Artificial Intelligence Technologies Policy

The Ontario Human Rights Commission (OHRC) welcomes the opportunity to provide a submission on the Toronto Police Services Board’s (TPSB) Use of Artificial Intelligence Technologies Policy (AI Policy). The OHRC recognizes that transparency and accountability are critical factors in the successful implementation of new Artificial Intelligence (AI) systems and commends the TPSB’s commitment to public consultation on the AI Policy. In addition, given the significant and continually evolving nature of AI, the OHRC urges the TPSB and Toronto Police Service (TPS) to ensure that the implementation of the AI Policy is adequately resourced.

AI regulation is an area of serious concern globally. At the release of a recent report on AI and human rights, the United Nations High Commissioner for Human Rights “stressed the urgent need for a moratorium on the sale and use of artificial intelligence (AI) systems that pose a serious risk to human rights until adequate safeguards are put in place.”[1]

In the United States, California implemented a three-year ban on police use of facial recognition technology on body-worn cameras beginning in 2020.[2] Oakland,[3] San Francisco[4] and Somerville[5] followed suit with even broader bans on municipal use. More recently, in April 2021, the European Commission published a proposal to introduce harmonized regulations for AI in the European Union. The proposal “lays down a solid risk methodology to define ‘high-risk’ AI systems that pose significant risks to the health and safety or fundamental rights of persons. Those AI systems will have to comply with a set of horizontal mandatory requirements for trustworthy AI and follow conformity assessment procedures before those systems can be placed on the Union market.”[6]

The OHRC is concerned about the unique implications that artificial intelligence presents for human rights in Ontario, particularly for marginalized and vulnerable communities. The federal government has defined AI as any technology that performs tasks that would ordinarily require biological brainpower to accomplish, such as making sense of spoken language, learning behaviours or solving problems.[7] As you may know, some early applications of AI systems in policing have been found to perpetuate historic patterns of discrimination by incorporating biases into the systems or by relying on biased data.

When police services use AI systems that are flawed in their development, they can compound existing disparities and/or create new discriminatory conditions. Such conditions could have a profound and ongoing impact on marginalized and vulnerable communities and erode public trust in the police. As police increasingly rely on automated decision-making and AI systems, it is critical that these systems are not biased and do not create or perpetuate systemic discrimination.

Human rights are legally enshrined in international conventions and Canada’s human rights laws, including the Canadian Human Rights Act, the Canadian Charter of Rights and Freedoms, and provincial human rights codes, including Ontario’s Human Rights Code (Code). In addition, “[t]he importance of safeguarding the fundamental rights guaranteed by the Canadian Charter of Rights and Freedoms and the Human Rights Code” is a foundational principle of the Police Services Act. The TPSB must adopt an AI policy that is consistent with its legal obligations under human rights law.

Some specific areas of concern identified by the OHRC

In our Policy on eliminating racial profiling in law enforcement, the OHRC identified serious human rights concerns about the potential discriminatory impact of police data collection through facial recognition and predictive policing algorithms, which may adversely and disproportionately affect Code-protected groups.

As law enforcement organizations increasingly use AI to identify individuals, collect and analyze data, and help make decisions, tools and approaches developed to predict whether people will pose a risk to others should be designed and applied in a way that relies on transparent, accurate, valid and reliable information about risk. Organizations using AI are liable for any adverse impacts based on Code grounds, even if the tool, data or analysis is designed, housed or performed by a third party.

There is a risk that using artificial intelligence tools will result in approaches that are not accurate, are based on racially biased data, or unintentionally integrate developer biases. These tools or approaches may inaccurately assess the risk posed by racialized or Indigenous peoples, and compound existing disparities in criminal justice outcomes. For example, determining a person’s risk level based on the number of times they have been stopped by police, and have therefore become “known to police,” can have a profound and ongoing impact on groups who are most likely to be stopped due to racial profiling.
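
To make this feedback loop concrete, the following is a minimal Python sketch with invented numbers, not real data: two groups reoffend at the same underlying rate, but because one group is stopped far more often, a naive “known to police” rule flags only that group as high risk.

```python
# Minimal illustrative sketch (invented numbers, not TPS data): a risk score
# built on "number of prior police stops" measures police attention, not risk.

RECORDS = [
    # (group, prior_stops, reoffended) -- hypothetical values
    ("A", 1, False), ("A", 0, False), ("A", 2, True), ("A", 1, False),
    ("B", 4, False), ("B", 3, False), ("B", 6, True), ("B", 5, False),
]
# Both groups reoffend at the same rate (1 in 4), but group B is stopped
# far more often -- the pattern that racial profiling produces.

STOP_THRESHOLD = 3  # naive rule: "known to police" => high risk

def naive_risk(prior_stops: int) -> str:
    return "high" if prior_stops >= STOP_THRESHOLD else "low"

for group in ("A", "B"):
    rows = [r for r in RECORDS if r[0] == group]
    flagged = sum(naive_risk(stops) == "high" for _, stops, _ in rows)
    reoffended = sum(outcome for _, _, outcome in rows)
    print(f"group {group}: {flagged}/{len(rows)} flagged high risk, "
          f"{reoffended}/{len(rows)} reoffended")

# Prints 0/4 flagged for group A and 4/4 flagged for group B despite
# identical reoffence rates: the score encodes profiling, not risk.
```

The point is not the arithmetic but the feedback: once flagged, a person or community attracts more stops, which inflates the very feature the score relies on.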

Racial profiling can happen at any stage of the decision-making process engaged in by law enforcement authorities. It may result from an individual’s explicit or implicit bias based on conscious or unconscious stereotypes, personal prejudice, or hostility toward Indigenous or racialized people.

A human rights-based approach to governing AI use by the Toronto Police Service

The OHRC recommends several actions for the TPSB to take in developing its AI Policy. Consistent with a human rights-based approach, these actions aim to protect vulnerable and marginalized groups that may be disproportionately affected by AI technology used by the TPS, and to guard against consequences that would undermine the desired benefits of efficient and effective police services and public trust in policing.

The OHRC recently provided submissions on Ontario’s Trustworthy Artificial Intelligence (AI) Framework, and the Ministry of Government and Consumer Services’ proposed legislative reform on privacy and AI technology. The OHRC is also working closely with the Law Commission of Ontario (LCO), which is a leading voice on matters related to AI, and has several publications in the field. The actions listed below are informed by our collective work in this area and should be adopted as current best practices.

  1. Specifically commit to engaging in meaningful consultations with the public; human rights experts, including the OHRC; experts in privacy law, criminal law, data science, and legal aid; and representatives of Code-protected groups, prior to and while developing, using and implementing the AI Policy.

  2. Set out a requirement in the AI Policy to engage in meaningful consultations with the public; human rights experts, including the OHRC; experts in privacy law, criminal law, data science, and legal aid; and representatives of Code-protected groups when developing, using and implementing any AI technology that could cause potential harm or have an impact on an individual’s rights.

  3. Set out in the AI Policy a recognition of human rights values and principles, and a commitment to address systemic bias in AI that negatively affects or fails to appropriately account for the unique needs of Code-protected groups, including but not limited to vulnerable populations such as people with disabilities, children and older persons, Indigenous, Black and racialized communities, as well as low-income and LGBTQ2+ communities.
    • Recognize in the AI Policy that the use of AI has the potential to facilitate discrimination on both a systemic and individual level.

  4. Adopt clear and plain-language definitions in the AI Policy for all technical terms and systems, including but not limited to Artificial Intelligence, Automated Decision-Making and Data, based on industry standards such as the Government of Canada’s Directive on Automated Decision-Making.
    • The AI Policy should also adopt plain-language definitions for the contextual use of significant terms such as public, decision and bias.

  5. Adopt clear and plain-language definitions in the AI Policy for Extreme Risk Technologies, High Risk Technologies, Moderate Risk Technologies and Low Risk Technologies. The definitions should establish the scope and limits of each technology’s capabilities and the required range of mitigation measures associated with each level of risk (an illustrative tier sketch follows this list).
    • While the definitions should not be limited to lists, it would be helpful to note current examples alongside each definition.

  6. Recognize in the AI Policy’s definitions that an Extreme Risk Technology includes any AI system known to carry bias that, despite the use of mitigation techniques, can cause potential harm or have an impact on an individual’s rights, and that such a system therefore should not be used.

  7. Recognize in the AI Policy’s definitions that any AI system whose behaviour cannot be fully explained is an Extreme Risk Technology and therefore should not be used. The AI Policy’s assessment of an Extreme Risk Technology should include the system’s potential to cause harm or to impact an individual’s rights.

  8. Recognize in the AI Policy’s definitions that any AI system that would result in a significant change in operations or allocation of resources in a way that could cause potential harm or have an impact on an individual’s rights is a High Risk Technology.

  9. Provide a requirement in the AI Policy to seek TPSB approval, following this consultation process, for the continued use or deployment of existing technologies that have not previously been approved under the AI Policy. The AI Policy should apply retroactively, with the approval process requiring that existing AI technologies be tested and their outcomes monitored.

  10. Provide a requirement in the AI Policy that all documents related to any AI technologies considered under the AI Policy be made publicly available and, to the extent possible, be provided in plain language. This includes any relevant documents used in the risk-assessment process, as well as the risk-assessment reports themselves.

  11. Provide a requirement in the AI Policy for creating and maintaining a mandatory public catalog disclosing AI systems under the AI Policy and used by the TPS, incorporating a plain-language explanation of each system: its purpose, how and when it is used, what information is collected, how that information is used, when that information is disposed of, and what actions are taken to minimize discriminatory effects and outcomes (an illustrative catalog-entry sketch follows this list).

  12. Provide a requirement in the AI Policy to collect and make publicly available human rights data, and any other data recommended by experts, on the TPS’s use of AI systems and automated decision-making, disaggregated by sociodemographic variables including Code-protected groups.

  13. Provide a requirement in the AI Policy for creating key performance indicators (KPIs) to monitor whether new technologies introduce or perpetuate biases into policing decisions and have a discriminatory impact. KPIs should also monitor whether the operational goal of the AI system is fulfilled during its deployment and whether it meets the expected outcomes (a minimal disparity-KPI sketch follows this list).

  14. Provide a mechanism in the AI Policy for an accessible, effective public process for hearing, adjudicating and remedying public complaints regarding the consequences of using AI. The public complaints process must adopt principles of due process and procedural fairness in its design, including an appeals process for complaints.

  15. Establish a mechanism in the AI Policy for monitoring by an independent oversight body with a mandate to:
    • Conduct risk assessments and audits of all existing AI systems, including “low risk” applications, for bias and discrimination every 12 months
    • Report on and address systemic issues, and hold TPS use of AI accountable
    • Hold public reviews of the AI Policy, as well as the related KPIs, every 12 months.

  16. Advocate for provincial legislation and regulations to govern the development, use, implementation and oversight of AI in policing. Any further actions or steps related to AI, especially in high-risk applications such as facial recognition or predictive policing, should be taken with extreme caution until such legislation and regulations are enacted.
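
To illustrate recommendation 5, below is a minimal, non-authoritative sketch of how tier definitions could pair each risk level with a required range of mitigation measures. The tier names follow the risk categories referenced above; the specific mitigation measures and the Python structure are assumptions for illustration only.

```python
# Illustrative sketch only: one possible way to pair each risk tier with
# minimum mitigation measures, per recommendation 5. The mitigation lists
# are assumptions, not prescribed TPSB requirements.
from enum import Enum

class RiskTier(Enum):
    EXTREME = "Extreme Risk Technology"    # prohibited outright (recs. 6-7)
    HIGH = "High Risk Technology"
    MODERATE = "Moderate Risk Technology"
    LOW = "Low Risk Technology"

# Hypothetical mapping of tiers to minimum mitigation measures.
REQUIRED_MITIGATIONS = {
    RiskTier.EXTREME: ["Use prohibited"],
    RiskTier.HIGH: ["TPSB approval", "Pre-deployment bias testing",
                    "Independent audit every 12 months", "Public disclosure"],
    RiskTier.MODERATE: ["Bias testing", "Annual audit", "Public disclosure"],
    RiskTier.LOW: ["Public disclosure", "Periodic review"],
}

def mitigations_for(tier: RiskTier) -> list[str]:
    """Return the minimum mitigation measures required at a given tier."""
    return REQUIRED_MITIGATIONS[tier]

print(mitigations_for(RiskTier.HIGH))
```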
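
For recommendation 11, the following sketch shows one possible plain-language shape for a public catalog entry. Every field name is an assumption, and the example system and values are hypothetical, not drawn from any TPS inventory.

```python
# Illustrative sketch only: a possible schema for a plain-language public
# catalog entry under recommendation 11. Field names are assumptions.
from dataclasses import dataclass, field

@dataclass
class AICatalogEntry:
    system_name: str             # plain-language name of the AI system
    purpose: str                 # what the system is for
    risk_level: str              # e.g. "low", "moderate", "high"
    how_and_when_used: str       # operational context of use
    information_collected: str   # what personal information is gathered
    information_use: str         # how that information is used
    retention_and_disposal: str  # when that information is disposed of
    bias_mitigations: list[str] = field(default_factory=list)  # steps taken
                                 # to minimize discriminatory effects

# Hypothetical example entry.
example_entry = AICatalogEntry(
    system_name="Example licence-plate reader",
    purpose="Matches plates against stolen-vehicle lists",
    risk_level="moderate",
    how_and_when_used="Mounted on patrol vehicles during regular duty",
    information_collected="Plate numbers, time, location",
    information_use="Real-time matching only",
    retention_and_disposal="Non-matches deleted within 48 hours",
    bias_mitigations=["Annual disparity audit", "No secondary uses"],
)
print(example_entry.system_name, "-", example_entry.risk_level)
```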
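
For recommendations 12 and 13, this sketch shows the kind of disparity KPI that disaggregated data makes possible: compare an AI system’s flag rates across groups and trigger a review when the gap exceeds a threshold. The records, groups and threshold are invented for demonstration.

```python
# Illustrative sketch only: a disparity KPI of the kind recommendations
# 12-13 contemplate. Data and alert threshold are hypothetical.
from collections import Counter

# (group, was_flagged_by_ai_system) -- hypothetical disaggregated records
outcomes = [("A", True), ("A", False), ("A", False), ("A", False),
            ("B", True), ("B", True), ("B", True), ("B", False)]

flags, totals = Counter(), Counter()
for group, flagged in outcomes:
    totals[group] += 1
    flags[group] += flagged

rates = {g: flags[g] / totals[g] for g in totals}
baseline = min(rates.values())
for group, rate in sorted(rates.items()):
    print(f"group {group}: flag rate {rate:.0%}, "
          f"disparity {rate / baseline:.1f}x vs lowest group")

DISPARITY_ALERT = 1.5  # hypothetical review threshold
if max(rates.values()) / baseline > DISPARITY_ALERT:
    print("KPI breach: disparity exceeds threshold; trigger bias review")
```

The same pattern extends to any disaggregated outcome the TPS publishes under recommendation 12; the KPI itself is only as trustworthy as the disaggregated data behind it.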

Human rights, data, and AI

The OHRC has long called on the criminal justice system to collect and analyze data to monitor the negative impact of services on vulnerable groups protected under the Code. This includes supporting the Anti-Racism Directorate’s development of consistent data standards for specific public sector organizations, and providing guidance to organizations on how to collect human rights-based data. For example, the OHRC has called for human rights data collection to monitor racial profiling in policing, race and mental health disparities in segregation in provincial jails, and the over-representation of Indigenous and racialized children and youth in the child welfare system.

In our calls for data collection, the OHRC routinely stresses the importance of the human rights principles of equity, privacy, transparency and accountability. As the TPSB moves forward with developing the AI Policy, it has an opportunity to develop principles for AI systems that advance positive human rights changes, rather than creating or perpetuating systemic discrimination. The OHRC supports a thoughtful examination of the opportunities and risks in implementing AI, and would be pleased to assist in ensuring that these important principles form a part of the TPSB’s human rights-based approach.

Sincerely,

Patricia DeGuire
Chief Commissioner
Ontario Human Rights Commission