Regulating Face Recognition to Address Racial and Discriminatory Logics in Policing | EPIC AI Symposium

Sam Andrey, Sonja Solomun, & Yuan Stevens
September 21, 2021

This recap originally appeared on epic.org/ai.

On September 21, 2021, Director of Research Sonja Solomun, together with research partners at Ryerson’s CyberSecure Policy Exchange, Sam Andrey and affiliate Yuan Stevens, presented work on facial recognition regulation in Canada at the Electronic Privacy Information Center’s (EPIC) AI Symposium. The event explored ways to put artificial intelligence regulation into practice so that it best protects individuals. EPIC works to ensure regulation of emerging technologies and enforcement of existing civil rights and civil liberties protections.


The Canadian government’s use of Clearview AI was possible because of a gap in the law: federal police took advantage of under-regulation in federal privacy law and used third-party technology regardless of whether the underlying data had been lawfully obtained. Stevens outlined three concerns:

  1. inaccuracy and the exacerbation of discriminatory decisions by police,

  2. the impact on constitutional rights, and

  3. the legitimacy of erroneous biometric databases.

A system’s accuracy depends on its training data. Work by Joy Buolamwini and Timnit Gebru shows that automated face recognition is far less accurate for Black and East Asian persons. Stevens also questioned whether the government should be able to build databases of our faces in the first place. In Canada, the Privacy Act applies only to certain federal government bodies and generally provides only access and correction rights. Furthermore, the Office of the Privacy Commissioner is an ombudsperson and cannot render enforceable decisions.

There are three jurisdictions that have begun to address the harms related to automated face recognition.

First, the European Union’s approach is instructive. Two data protection oversight bodies in the EU issued a call in June 2021 for a ban on the use of automated recognition of human features. The GDPR contains a general prohibition on the processing of biometric data, with certain limited exceptions. It also grants people the right not to be subject to automated profiling or to decisions based solely on automated processing when the impacts are legal or similarly significant in nature. The Data Protection Law Enforcement Directive also generally provides that no police decisions based solely on automated decision-making (ADM) tools can rely on biometric data unless rights-protecting safeguards are in place. Data Protection Impact Assessments (DPIAs) are also a required form of monitored self-regulation and risk assessment for acceptable systems, requiring:

  • A description of the processing operations (e.g., algorithms in question) and the purpose of processing;

  • Assessment of the necessity of processing in relation to purpose;

  • Assessment of the risks to people’s rights and freedoms;

  • The measures an entity will use to address these risks and demonstrate GDPR compliance, including security measures.

Still, DPIAs under the GDPR are not required to be released to the public. Over 100 human rights groups in Europe are also demanding significant improvements to the proposed EU AI Act in order to make it compliant with human rights law.

Second, Illinois passed what is being heralded as one of the strongest biometric privacy laws for the private sector. The Biometric Information Privacy Act applies only to private actors. Companies that possess biometric identifiers or information must develop public retention policies, must not profit from biometric information, must not disclose the information without consent or a legal requirement to do so, and must store, transmit, and protect the information with a reasonable standard of care. An example of the law’s effectiveness is Facebook’s $550 million (later raised to $650 million) class action settlement for violating the law’s disclosure requirements. Similar claims have been filed against Microsoft and Amazon.

Third, Massachusetts enacted a law in 2021 governing police use of face and other remote biometric recognition systems. Police must obtain a warrant before conducting any face recognition search, except where there is a reasonable belief of an emergency involving substantial risk of harm to any individual or group of people. Police are also prohibited from using automated face recognition provided by a third party; only the state police, FBI, or DMV can perform a search. Police must also submit detailed documentation of each search to the public safety office, which shares aggregated information with the public.


Given these legal developments elsewhere in the world, Stevens, Solomun, and Andrey urged lawmakers in Canada to consider the following recommendations for more effective data protection regulation related to face recognition: 

  1. Prohibit the collection, use, and disclosure of facial information for the purpose of uniquely identifying an individual through automated decision-making systems.

    At a minimum, prohibit uniquely identifying a person in real time. This may require a cohesive privacy approach that spans the public and private sectors. If regulation is ultimately enacted that allows exceptions to this general prohibition, permission must be sought before any face recognition search is conducted (and then only in life-threatening situations), and human rights safeguards must be imposed. These safeguards may include a warrant requirement, a prohibition on using third-party services, and a requirement that searches be performed only with written permission and through government systems (for example, through the registry of motor vehicles).

  2. Provide recourse for violations of privacy and related rights.

    These should include: the right to meaningful explanation, the right to contest decisions, the right to meaningful human intervention, and the right to freedom from discrimination. 

  3. Require all automated decision-making systems in use to undergo impact assessments on a continual basis.

    This would also apply to third-party systems, if such systems are permitted, which should be evaluated against baseline standards (e.g., the Gender Shades study by Buolamwini and Gebru).

  4. Maintain a public register for all automated decision-making systems deployed by law enforcement in Canada.

    The register should include publicly shared results of automated decision-making impact assessments, including transparency and accountability requirements in certain cases.  

There is reason to remain deeply skeptical that algorithmic policing technologies, including automated face recognition systems, can be used in a manner that does not discriminate against or otherwise unjustly impact individuals under equality law.
— Citizen Lab authors of “To Surveil and Protect” (Robertson, Khoo, and Song)