AI Oversight, Accountability and Protecting Human Rights: Comments on Canada’s Proposed Artificial Intelligence and Data Act

Christelle Tessono, Yuan Stevens, Momin M. Malik, Sonja Solomun, Supriya Dwivedi & Sam Andrey
November 2022

 
 

Executive Summary

This report is a collaboration of interdisciplinary researchers from the Cybersecure Policy Exchange at Toronto Metropolitan University, McGill University’s Centre for Media, Technology and Democracy, and the Center for Information Technology Policy at Princeton University.

Canada’s investment in developing AI systems has not been matched by a comparable effort to regulate the technology. While we are encouraged by these initial efforts to regulate AI systems in Canada, we share several key concerns, with corresponding recommendations to improve the proposed framework, particularly under the Artificial Intelligence and Data Act (AIDA) of the newly tabled Bill C-27: Digital Charter Implementation Act, 2022.


Recommendations

  1. The Need for Adequate Public Consultation
    Innovation, Science and Economic Development Canada should formally consult on AIDA with community advocates, researchers, lawyers, and groups representing the interests of BIPOC, 2SLGBTQIA+, economically disadvantaged, disabled and other equity-deserving populations in the country.

  2. The Need for Proper Oversight of AIDA
    To effectively regulate the AI market in Canada, the AIDA Commissioner needs to be an independent agent of Parliament and we need to empower an independent tribunal to administer penalties in the event of contravention, outline best practices for auditing, and enforce the law as required.

  3. AIDA Must Apply to Government Institutions
    Given that AIDA only currently applies to the federal private sector — as government institutions are explicitly exempt from AIDA — it is imperative that AIDA’s framework be broadened to include government institutions.

  4. Bill C-27 Needs Consistent, Technologically Neutral and Future-Proof Definitions
    Both the Consumer Privacy Protection Act (CPPA) and AIDA within Bill C-27 should provide for a definition of AI or algorithmic systems that is cohesive across both laws. The definition of AI ought to be technologically neutral and future-proof. A potential pathway for regulation is to define algorithmic systems based on their applications instead of focusing on the various techniques associated with machine learning and AI.

  5. Bill C-27 Must Address the Human Rights Implications of Algorithmic Systems
    Bill C-27 needs to address the human rights risks of algorithmic systems in a comprehensive manner:

    • This should include, but not be limited to, prohibitions on the processing of biometric data such as facial images through automated means for the unique identification of individuals, especially in public settings and potentially subject to a very limited set of exceptions.

    • Bill C-27 and particularly AIDA need to provide people with recourse in order to protect fundamental rights when AI systems are used — such as the right to object to the automated processing of personal data, as well as the right to appeal decisions that are made when algorithmic systems are used.

    • Certain uses of algorithmic systems (e.g., ones that exploit vulnerable groups based on age, physical or mental disabilities, or systems used by the state for social scoring purposes) must also not be allowed because they pose unacceptable risks to people’s safety, livelihoods, and rights, which requires more than AIDA’s current approach to identifying and managing risk.

    • Bill C-27 and AIDA more specifically should also include high levels of protection by default for children.


  1. The Need for Adequate Public Consultation

AIDA came as a surprise to the public: there was no public notice that it would be included in the new iteration of the Digital Charter Implementation Act. As noted by Professor Teresa Scassa, Canada Research Chair in Information Law and Policy, the creation of a new Data Commissioner was mentioned only in Budget 2021 and in the mandate letter of the Minister of Innovation, Science and Industry; neither articulated the regulatory framework the Commissioner would be part of.

But more significantly, Bill C-27 was not preceded by any public consultations. For AIDA, we are aware that a consultation with the government’s AI advisory council took place. However, there are no publicly accessible records accounting for how these meetings were conducted, nor which points were raised by the council. Innovation, Science and Economic Development Canada (ISED) has recently hosted consultations on the Innovation Canada Act, on a National Quantum Strategy, and on the regulatory modernization of the Bankruptcy and Insolvency Act. The absence of a thorough public consultation process for Bill C-27, and more specifically for AIDA, demonstrates that effective mechanisms for holding consultations exist but are not being used to draft critical legislation. Because AIDA’s regulations are set to be crafted by the Minister of Innovation, Science and Industry, consultations will be held at the Minister’s discretion. As a result, there is a significant risk that future regulations will not take into account the public, and in particular the voices of marginalized communities.

Public consultations are important because they allow a variety of stakeholders to exchange views and develop innovative policy that reflects the needs and concerns of affected communities. As raised by economist and journalist Erica Ifill, the lack of meaningful public consultation — specifically with BIPOC, trans and non-binary, economically disadvantaged, disabled and other equity-deserving populations — is echoed by AIDA’s failure “to acknowledge AI’s characteristic of systemic bias including racism, sexism, and heteronormativity.” Moving forward, meaningful consultations related to this Act, and to privacy and AI more broadly, should be spearheaded with the values of equity, diversity, and inclusion in mind. This would enable greater interaction between technical experts, marginalized communities, and regulators.

Moreover, to hold effective public consultations, it is key to assess the different roles the Canadian government performs in the AI ecosystem. If certain departments serve as intermediaries for funding innovation within industry, which departments can the public turn to in order to influence, and hold to account, potentially harmful innovation? As raised by Mardi Witzel, associate at NuEnergy.ai, ISED is a government department that plays a key role in supporting technological innovation within industry. In light of this, it is “reasonable to ask how a ministry that collaborates with and funds digital and AI-enabled industries can serve as an impartial enforcer of the design, development and use of this same technology by its constituent clients.”

Recommendations

  • Moving forward, ISED should facilitate a series of formal consultations on AIDA with community advocates, privacy and AI ethics experts, technologists, lawyers, and groups representing the interests of BIPOC, 2SLGBTQIA+, economically disadvantaged, disabled and other equity-deserving populations in the country.

  • As suggested by Erica Ifill, ISED should release its GBA+ analysis of AIDA so that the public can see how the department considered the impact of the legislation on marginalized populations.


2. The Need for Proper Oversight of AIDA

The AI Commissioner and Advisory Committee Are Not Enough

The proposed Artificial Intelligence and Data Commissioner is set to be a senior public servant designated by the Minister of Innovation, Science and Industry, and is therefore not independent of the Minister. They will “assist the Minister in the administration and enforcement” of AIDA. Moreover, at the discretion of the Minister, the Commissioner may be delegated the power, duty, and function to administer and enforce AIDA. In other words, the Commissioner is not afforded the power to enforce AIDA in an independent manner, as their powers depend on the Minister’s discretion. The Commissioner thus responds directly to the interests and needs of the Minister, which makes it difficult for them to be critical in their policy interventions. This stands in stark contrast with Competition Bureau Canada, an “independent law enforcement agency that protects and promotes competition for the benefit of Canadian consumers and businesses.” It is led by the Commissioner of Competition, who does not report directly to the Minister but is appointed by the Governor in Council, and who is in charge of administering and enforcing the Competition Act, the Consumer Packaging and Labelling Act, the Precious Metals Marking Act, and the Textile Labelling Act.

Furthermore, AIDA sets out provisions for the creation of an advisory committee — again at the discretion of the Minister. There are several problems with this:

  1. Unclear and unaccountable convenings. The terms of reference for this committee have not been made available in the proposed law. As currently drafted, it seems as though the committee will meet only when the Minister wishes to discuss issues of interest to their office, rather than matters of immediate public concern. The absence of details related to the committee’s composition, report publication guidelines, and chair and member term appointments concentrates power in the hands of the Minister and obscures the regulatory process.

  2. Limited capacity. The advisory committee is not an office staffed with personnel actively investigating issues. Additionally, the advisory committee is provided with neither the financial resources nor the power to enforce AIDA itself. And although experts may be enlisted in the committee, the technical expertise necessary to oversee the AI market in the Canadian context far exceeds the capacity of a committee whose members are not full-time employees.

  3. Potential overlap with the existing Advisory Council. In 2019, the government launched the Government of Canada Advisory Council on Artificial Intelligence. This Advisory Council was created to “ensure Canadians benefit from the growth of the AI sector” and “inform government policy in AI-related fields”. It is unclear how AIDA’s proposed advisory committee would relate to, or avoid duplicating, this existing body.

  4. Lack of enforcement power. Advisory committees cannot craft or enforce regulation. They are not meant to serve as the first reference point for regulatory oversight, yet that is what is needed.

Notable Examples of Robust Accountability through Independent Enforcement Bodies

There are numerous examples of regulatory bodies that the federal government could emulate for the enforcement of AIDA. For example, the European Commission’s proposed regulatory framework for artificial intelligence includes the creation of a European Artificial Intelligence Board (EAIB). The Board will be composed of the European Data Protection Supervisor, the Commission, and national supervisory authorities. The role of the Board will be to facilitate the implementation of rules and regulations between the national supervisory authorities and the Commission itself.

In the United States, the proposed Algorithmic Accountability Act of 2022 establishes the creation of the Bureau of Technology within the Federal Trade Commission (FTC). The FTC is an independent agency of the United States government whose mandate is to enforce American antitrust laws and promote consumer protection. The Bureau would be headed by a Chief Technologist and would require hiring a minimum of 50 new staffers within two years of the law’s enactment. Moreover, the legislation would also require the appointment of 25 additional personnel to the enforcement division of the FTC’s Bureau of Consumer Protection. The proposed legislation specifies that bureau personnel should include people with “experience in fields such as management, technology, digital and product design, user experience, information security, civil rights, technology policy, privacy policy, humanities and social sciences, product management, software engineering, machine learning, statistics, or other related fields.”

What Should Adequate AI Oversight Look Like in Canada?

As it currently stands, AIDA provides the Minister with the ability to require that an audit be conducted if there are “reasonable grounds to believe” that contraventions to certain sections of the law have occurred. The audit can be done internally by the company or by hiring the services of an independent auditor — at the expense and discretion of the company. Furthermore, the proposed legislation requires companies to establish mitigating measures according to regulations — which have yet to be written — and also notify the Minister in the event that “the use of the system results or is likely to result in material harm”, but only for systems that a provider deems to be “high-impact”.

As a result, oversight of algorithmic systems in this proposed legislation is primarily administered by the companies themselves in the form of audits. This is deeply concerning because research has demonstrated that the quality of audits is very poor when the auditee selects and compensates the auditor. For example, a two-year study on environmental third-party audits found that audits were more accurate when auditors were paid not by the firm being audited but from a common pool of government-distributed funding. This trend is unfortunately found in other areas as well, notably among credit rating agencies and in supply chain and accounting audits.

Moreover, allowing companies to choose their auditors opens the door to conflicts of interest, cronyism and corruption. In the absence of regulations addressing auditor selection, there are no legally binding provisions preventing companies from using firms led by family members, friends or others with close personal ties who could economically benefit from providing laxer audits.

As such, in order to improve the Canadian AI ecosystem, it is key that oversight be conducted independently from companies. This means that auditor selection, funding, and audit scope would not be established by the audited company. Instead, audit practices should be determined by standards crafted by an independent regulatory tribunal or body. This entity would outline how audits ought to be conducted and how the certification and accreditation of auditors should be administered.

Independent oversight should extend beyond audits. Auditing of algorithmic systems is not yet a professionally codified process, nor is it clear what a professional approach should contain or even which discipline(s) should oversee it (e.g., computer science, engineering, statistics, actuarial science). Until such time as there is a meaningful professional code, we believe that a multi-stakeholder independent oversight body should be empowered to contribute to the legislative process by developing or proposing laws and regulations, as well as enforcing legislation by having the ability to prohibit, restrict, withdraw and recall AI systems that do not comply with comprehensive legal requirements. This body should include technical as well as non-technical expertise — and especially civil society representation.

Who Should be the Independent Oversight Body?

There are multiple pathways towards independent oversight. We outline below three potential avenues.

1. Office of the Privacy Commissioner of Canada (OPC)

Instead of its current ombudsperson role, the OPC could act as a regulator that verifies compliance on demand — including when the OPC wishes to initiate an inquiry on its own due to the risks posed by an algorithmic system. As the OPC argues in its annual report, such regulatory powers are common in Canada, notably in health and safety, food and restaurants, and in the tobacco industry. For example, in the event of a salmonella outbreak in a given product, it is the role of the Canadian Food Inspection Agency (CFIA) to conduct investigations, publish advisory notices and coordinate recalls. Furthermore, the CFIA is responsible for the administration and enforcement of several acts, such as the Safe Food for Canadians Act, the Agriculture and Agri-Food Administrative Monetary Penalties Act, and the Fertilizers Act.

An empowered OPC is not a novel idea, but rather one in line with demands made by the OPC itself and by privacy scholars over the past few years. In its 2016-2017 annual report, the OPC stated that “the time has come for Canada to change its regulatory model [...] through privacy regulators who, like those of its trading partners in the U.S., the EU and elsewhere, have strong enforcement powers commensurate with the increasing risks that new disruptive technologies pose for privacy.” The OPC believes that such a “proactive enforcement model would be most effective in ensuring that organizations are demonstrably accountable for their protection of consumer privacy.”

2. Creation of an Independent Tribunal to administer and enforce AIDA

Bill C-27 includes An Act to establish the Personal Information and Data Protection Tribunal, which creates an administrative tribunal that will hear appeals of decisions made by the OPC under the proposed CPPA. The tribunal is also allowed to issue penalties for certain contraventions of the CPPA. A similar tribunal to administer and enforce AIDA would therefore be an interesting avenue for the government to follow. To be effective in its administration of AIDA, the independent regulatory body could be in charge of setting out standards for auditing practices and enforcing the law’s requirements by withdrawing and recalling AI products that are in contravention of AIDA.

3. Expanding the Tribunal set out in the CPPA to include AIDA

However, seeing as issues related to privacy heavily overlap with those related to artificial intelligence, an expanded tribunal that covers both the CPPA and AIDA would avoid duplicating these tasks.

Recommendations

  • The AI and Data Commissioner is neither independent nor powerful enough to administer AIDA. To effectively regulate the AI market in Canada, the government needs to establish or empower an independent body to properly oversee and enforce AIDA.


3. AIDA Must Apply to Government Institutions

In its current iteration, AIDA will only apply to the federal private sector. The law would not apply to federal departments and Crown corporations, nor to any system used by the Department of National Defence, the Canadian Security Intelligence Service (CSIS), the Communications Security Establishment (CSE), or any other person responsible for a federal or provincial department or agency “who is prescribed by regulation”. This is a major problem given the known human rights risks of state-deployed AI systems as described in section 5 below — and as illustrated by the Royal Canadian Mounted Police’s unlawful use of facial recognition services provided by Clearview AI and the use of two AI-driven hiring services by the Department of National Defence.

There is no legal or constitutional reason to exempt federal government institutions from AIDA, given their potential to pose serious harm to individuals through the use of AI or algorithmic systems. Such exemptions contradict AIDA’s own stated purpose to “prohibit certain conduct” regarding AI systems that “may result in serious harm” to individuals. The Directive on Automated Decision-Making that applies to certain federal government institutions’ use of AI systems also contains numerous gaps and fails to provide individuals with recourse; it is therefore not currently equipped to address the public safety and human rights risks related to the technology. Moreover, such government exemptions create problematic double standards within Canadian AI regulation and further distance Canada from global accountability measures.

Recommendations

  • To properly address the public safety and human rights implications of AI systems, AIDA needs to apply to all government institutions.


4. Bill C-27 Needs Consistent, Technologically Neutral and Future-Proof Definitions

Bill C-27 proposes definitions of artificial intelligence that lack cohesion across the CPPA and AIDA. Any definition of AI within these laws ought to be technologically neutral and future-proof.

A Cohesive Definition of AI is Needed in the CPPA and AIDA

The CPPA and AIDA overlap but use different terms (“automated decision system” versus “artificial intelligence system”) and define them differently, which ultimately leads to a lack of cohesion for one of the laws’ central topics. Defining algorithmic or AI systems in different ways could lead to the uneven and unpredictable application of the two laws.

For example, the CPPA captures only “automated decision systems” that “assist” or “replace” human judgment in decision-making contexts, whereas AIDA refers to the “autonomous” or partially autonomous processing of data that generates content or “makes decisions, recommendations or predictions”. This is problematic given that the very same systems may be used in ways that infringe rights under the CPPA and AIDA. These disparate and inconsistent definitions could very well mean that a person has recourse under only one of these laws.

There are concrete examples that highlight the problems that could arise from the two different definitions of algorithmic systems currently found within the CPPA and AIDA. For example, if an “automated decision system” as defined by the CPPA is used for a decision where human judgment was never required before, then the CPPA would not capture this use of an algorithmic system. Use of an algorithmic system that involves the processing of data related to non-human activities — such as activities related to institutions, technology, or other broader systems — would also mean that the CPPA could apply, but AIDA would not. AIDA in its current state would likely apply to the intentional generation of content or to the making of decisions, recommendations, or predictions, whereas the CPPA would apply only if an algorithmic system is used to assist or replace human decision-making. The lists of techniques enumerated in the two laws’ definitions also differ, which means an entity could attempt to avoid liability under one of the laws by claiming that it did not use one of the enumerated techniques, or a technique similar to one that has been enumerated.

A cohesive definition of AI is also important across the CPPA and AIDA because these proposed laws balance the same interests. On the one hand, these laws balance the need for informational self-determination and the protection of human rights related to the collection, use and disclosure of personal information when these actions are done with the assistance of algorithmic systems. On the other hand, the CPPA and AIDA also balance those interests with the desire of organizations to collect, make use of, and disclose personal information for their private interests. These two laws need to define AI or algorithmic systems in such a way that they regulate the same technology so that there is cohesion across both laws, both of which seek to hold organizations accountable for their use of the same technology in respect of privacy and human rights.

Figure 1: The Different Definitions of AI or Algorithmic Systems Found Within Bill C-27

 

The Definition of AI or Algorithmic Systems Should Be Technologically Neutral, Future-Proof, and Address the Logic of these Systems

As the CPPA definition of “automated decision systems” and the AIDA definition of “artificial intelligence systems” stand, the proposed legislation could be circumvented by renaming, by disputes over categorization (e.g., statistics versus AI), or by unforeseen future developments (e.g., a new area using the same logic as AI but claiming to be a separate approach and branded differently).

Instead, we believe that the CPPA and AIDA both need a definition of AI that is technologically neutral and future-proof. By technologically neutral, we mean that the laws should address the source of concerns around a technology, rather than specific technologies. And by future-proof, we refer to how many of the issues around AI are not actually about AI itself, but underlying issues that the use of AI will exacerbate (e.g., sexism, racism, etc.). There will inevitably be future technological developments that fall outside of the label of “AI” but will raise the same concerns, and it is possible to have a definition that covers these.

In general, definitions of AI and machine learning (ML) from the field of computer science are frequently aspirational, describing what AI attempts to do (mimic human intelligence), rather than how it has succeeded in doing so. Similarly, outward-facing definitions of AI and ML again describe what these fields seem to be or seem to achieve, and not how they achieve those things.

However, the concern regarding definitions should not be whether things are labeled as AI or ML, or if they employ one of the named techniques that are currently associated with AI or ML. Rather, both the CPPA and AIDA should address how decision-making rules learned from data can fail due to unrepresentative data, bad or manipulatable proxies, weak signals, and changes in context — to name a few examples. Thus, to be future-proof, technologically neutral, and to address the core issues of concern, the federal government should consider working with technologists to define algorithmic systems based on their broad applications, instead of focusing on their techniques.

Both the CPPA and AIDA potentially fall short because their definitions focus on a limited number of techniques such as deep learning, predictive analytics, genetic algorithms, and machine learning. A potential pathway for legislation is to focus on how end-users interact with these technologies. As a result, both the CPPA and AIDA should cohesively define AI or algorithmic systems in ways that focus on, but are not limited to, their ability to generate outputs such as predictions, recommendations, and other types of decisions.

Recommendations

  • The CPPA and AIDA should provide for a definition of AI or algorithmic systems that is cohesive across both laws.

  • The definition of AI needs to be technologically neutral and future-proof.


5. Bill C-27 Must Address the Human Rights Implications of Algorithmic Systems

The current draft of Bill C-27 fails to address the broad human rights implications of AI or algorithmic systems. The CPPA and AIDA need to ensure that the wide range of public safety and human rights issues engaged by the use of algorithmic systems are accounted for in a way that is precise yet comprehensive. The regulatory framework facilitated by AIDA should include provisions for prohibitions on unacceptable uses of algorithmic systems. Furthermore, this should translate into the adoption of obligations and rights needed when data is collected and processed through automated means particularly as it relates to vulnerable populations such as children.

Recordkeeping and Transparency Provisions Are Not Enough to Prevent Human Rights Violations

Obligations and rights in Bill C-27 that concern algorithmic systems focus primarily on recordkeeping and transparency, which do not prevent harms that may arise from the use of these systems. For example, section 62 of the CPPA requires organizations to make available to the public a general account regarding any use of any “automated decision system” that “could have a significant impact” on individuals. Section 63(3) of the CPPA also provides individuals with the right to receive “an explanation” for any “prediction, recommendation or decision” that has a “significant impact” on them. Similarly, requirements for organizations and people under AIDA are focused almost solely on keeping records, mitigating risks, and disclosing information related to those risks. Moreover, AIDA addresses the impact and not the risk of systems, which, as Scassa demonstrates, is problematic because the level of risk associated with the use of a system is not taken into account.

There are prohibitions in AIDA that are useful first steps for addressing the privacy and human rights impacts of algorithmic systems. For example, the law prohibits the possession or use of personal information for the purpose of designing, developing, using, or making available for use an AI system if the information has been directly or indirectly obtained or derived from an illegal act in Canada. AIDA also prohibits making an AI system available for use if the system (i) causes “serious physical or psychological harm to an individual or substantial damage to an individual’s property” and if the person who made the system available did so “without lawful excuse and knowing that or being reckless” regarding this use, or (ii) causes “substantial economic loss to an individual” and if the person who made the system available did so “with intent to defraud the public and to cause substantial economic loss to an individual”.

However, under AIDA, there are a wide array of algorithmic systems — for example, face recognition technology for the unique identification of a person or AI systems used in the criminal justice system — that are not captured by these prohibitions. These systems may still be available for sale, provision, or use, despite the fact that such systems pose serious concerns from a human rights perspective.

Bill C-27 Fails to Address AI’s Wide-Ranging Human Rights Implications

While AIDA addresses the possibility of various types of “harm” and “biased output” and contains some limited prohibitions, there are numerous human rights risks related to algorithmic systems that go unaddressed in Bill C-27. From a privacy and human rights perspective, algorithmic systems may facilitate the violation of a wide range of rights at incredible speed and scale, and in ways that may be hidden or discreet as well as discriminatory and arbitrary. As a result, it is not enough to legislate AI or algorithmic systems based on optimism about the technology’s potential benefits or what it would hypothetically do in a perfect scenario. Instead, what is needed is an approach to the regulation of algorithmic systems in Canada that both (i) identifies the technology’s human rights risks and (ii) provides obligations and rights to properly address those critical risks. In the following section, we discuss and provide examples of how human rights are implicated by algorithmic systems.

From a privacy perspective, algorithmic systems raise issues because constructing them requires the collection and processing of vast amounts of personal information, which can be highly invasive. The mere collection of this information can be privacy-intrusive particularly when it is done without consent, such as when information is scraped from the web or when information is used in a context that differs from the context in which the information was originally given. Algorithmic systems can also be used to identify people where they may have the right to remain anonymous, and can be used to monitor people’s political or sexual preferences, relationships, or travel patterns. The re-identification of anonymized information, which can occur through the triangulation of data points collected or processed by algorithmic systems, is another prominent privacy risk. The limited and weak requirements set out in the CPPA in tandem with the obligations of AIDA would appear to mean that it would be possible to lawfully collect, process, and disclose personal data simply because it is anonymized, which is not on its own enough to address the privacy risks related to the mass surveillance that is enabled by AI systems.

From a broader human rights perspective, algorithmic systems can be used to facilitate violations of a wide range of rights on small and at times large scales. This is the case when the technology is used by entities that include but are not limited to law enforcement, public safety actors, judges, social service decision-makers, military actors, other third party service providers that are relied on by government actors, as well as online intermediaries such as social media platforms. There is a rich body of scholarly and interdisciplinary work that examines the human rights impacts of algorithmic systems, and, as such, this commentary will merely touch the surface on this topic.

The following is a non-exhaustive overview of the human rights risks related to the use of AI systems. These risks are not adequately addressed within AIDA nor in the CPPA.

Figure 2: A Non-exhaustive Overview of the Human Rights Implicated by AI or Algorithmic Systems

 

Certain Uses of Algorithmic Systems Are Unacceptable and Must Be Prohibited

AIDA’s current approach to addressing risk is ill-equipped to respond to the wide range of human rights risks posed by algorithmic systems. The use of certain algorithmic systems, or their use in certain contexts, may pose such significant risks to people’s safety, livelihoods, and rights that such uses cannot be allowed in a free and democratic society. In the following section, we discuss examples of algorithmic system applications that should be prohibited, or at the very least accounted for, within AIDA and within the CPPA where needed.

AI Systems that Impact the Health and Financial Outcomes for Individuals and Communities

There is a rich body of scholarly and policy work on the human rights impacts of algorithmic systems, such as the possibility that the use of the technology will affect people’s well-being and financial situations. In these contexts, algorithmic systems may not be used in ways that outright violate rights. Instead, the deleterious impacts or risks stem from uses of the technology that concern people’s financial situations or physical and/or psychological well-being. The primary issue here is that significant amounts and types of personal information can be gathered and used to surveil and “socially sort” or profile individuals and communities, as well as to forecast and influence their behaviour.

We urge Canadian policymakers to consult on and adequately address these issues in Bill C-27 prior to its potential enactment. By comparison with laws that exist or are emerging in the EU, the processing of health data is of such concern that the GDPR prohibits automated processing, including profiling, based on this type of data, subject to certain exceptions and only where suitable measures are in place to safeguard rights, freedoms and legitimate interests. The draft EU AI Act also prohibits the use of algorithmic systems by public authorities, or on their behalf, for social scoring purposes if the score leads to (a) detrimental or unfavourable treatment of people in social contexts that are unrelated to the contexts in which the data was originally generated or collected and/or (b) detrimental or unfavourable treatment of people that is unjustified or disproportionate to the social behaviour or its gravity. By failing to address the ability of algorithmic systems to detrimentally impact people’s health and financial outcomes, the CPPA and AIDA both fall far behind data protection and human rights standards.

The Use of Biometric or Health-Related Bodily Information for AI Systems

The collection and processing of biometric information (such as facial images) for the purposes of uniquely identifying people through automated means for public safety purposes poses significant concerns from a human rights perspective. Both politicians as well as civil society organizations around the globe are calling for a prohibition on the use of such biometric recognition practices given the related myriad human rights risks.

Experts note that the use of algorithmic systems to analyze and categorize people on the basis of their biometric and other health-related bodily information can be seen as an extension of physiognomy, a debunked and discredited junk science that examines biological and facial features with a view to ascertaining one’s propensity for certain (including criminal) behaviours. The use of biometric recognition and categorization systems can further facilitate systemic discrimination and historical inequities related to various bases of discrimination — what Stark and Hutson have aptly referred to as the use of technology to “infer or create hierarchies of an individual’s body composition, protected class status, perceived character, capabilities, and future social outcomes based on their physical or behavioral characteristics”.

In contrast to AIDA, the proposed EU AI Act would prohibit the use of real-time remote biometric identification systems in publicly accessible spaces for law enforcement purposes except in specific circumstances, and a wide range of civil society organizations in the EU are urging policymakers there to require a blanket prohibition on the use of remote biometric identification in any circumstances and by all actors. Other civil society actors are also calling for a prohibition on the use of remote biometric recognition systems in publicly accessible spaces for the purposes of categorizing people (based on those biometric characteristics) given the risks of discriminatory impacts.

The need for a prohibition on the use of facial recognition technology for the unique identification of individuals by public safety actors in Canada is a message that we have also repeatedly urged policymakers to take note of and establish within Canadian law.

The Use of AI Systems for Access to Social Services or Humanitarian Aid

Algorithmic systems can also be used in public sector contexts to assess a person’s eligibility to receive social services such as welfare or humanitarian aid, which can result in discriminatory impacts on the basis of socio-economic status and geographic location, amongst other data points analyzed.

As raised earlier, recall that the proposed EU AI Act would prohibit the use of algorithmic systems by public authorities (or on their behalf) for social scoring purposes and if the score leads to (a) detrimental or unfavourable treatment of people in social contexts that are unrelated to the contexts in which the data was originally generated or collected and/or (b) detrimental or unfavourable treatment of people that is unjustified or disproportionate to the social behaviour or its gravity.

Canada is falling behind this emerging legal standard set out in the EU AI Act. Instead, the collection and use of sensitive biometric data for analysis through algorithmic systems and for life-altering, health- and financial-related recommendations or decisions remains inadequately addressed within the CPPA and AIDA, which is a major oversight in Bill C-27 as a whole.

AI Systems That Profile People and Influence Their Behaviour

Algorithmic systems can also be used to profile and influence people’s behaviour, with particularly deleterious impacts on vulnerable populations such as children or disabled people. The primary issue here also concerns the use of algorithmic systems to surveil and profile people, often with a view to forecasting or influencing their behaviour. For example, consider the Facebook-Cambridge Analytica data scandal, in which the public learned in 2018 that Cambridge Analytica had harvested the personal data of up to 87 million Facebook users for analysis and profiling through machine learning — for the purpose of providing targeted political ads to those voters in hopes of swaying their vote in various elections.

However, such data mining and targeted messaging can cause or enable harm to vulnerable groups, such as those who lack legal capacity, including children and disabled people. For instance, algorithmic systems are used to recommend content to children on websites and social media platforms based on their activity, despite the risk that children will be shown content that is inappropriate from a child development perspective or that may serve to radicalize them politically. Companies such as Facebook (now Meta) have also allowed teens as young as 13 to be targeted with ads for alcohol, drugs and extreme weight loss. Even more troubling is the fact that recent internal Facebook documents reveal the company was aware of how its products and services harm children and youth, further underscoring the need for robust accountability and oversight measures.

These outcomes are of such significant concern from a human rights perspective that the proposed EU AI Act would prohibit “practices that have a significant potential to manipulate persons through subliminal techniques beyond their consciousness or exploit vulnerabilities of specific vulnerable groups such as children or persons with disabilities in order to materially distort their behaviour in a manner that is likely to cause them or another person psychological or physical harm.” The EU AI Act also provides a specific prohibition on such practices when they are used to exploit vulnerable groups based on their age or physical or mental disability. These risks go unaddressed in Bill C-27, which is a major oversight that must be rectified for future versions of the CPPA and AIDA to properly account for the human rights risks related to algorithmic systems.

Children and Young People’s Rights Related to AI Systems Need to be Addressed

The stakes of neglecting to mitigate risks to children and young people before they materialize into harm are significant, particularly given children’s developmental vulnerabilities and their status as ‘early adopters’ of emerging technologies. In fact, these very characteristics afford children special considerations in internationally recognized human rights law, including the United Nations Convention on the Rights of the Child (UNCRC), to which Canada is a signatory. Without affording children’s rights extra protections, AIDA risks flattening the impact of known privacy risks across groups that are differentially and disproportionately affected by AI systems.

Canadian and international organizations alike have emphasized the need to incorporate proactive measures, such as clear instructions for the design and testing of AI products and services before they are deployed (or modified). This is reflected in a growing policy trend to embed robust children’s rights impact assessments, auditing measures and accountability mechanisms into legislative efforts around the world. Such systemic approaches are already seeing success in California, the European Union, and notably the UK, which recently adopted the Age Appropriate Design Code, which governs how children’s data are collected, used, and sold by setting high levels of data privacy and protection for children under 18. The Code also sets out specific design obligations for developers of services “likely to be accessed by children” by requiring that the best interests of the child be a primary consideration during design and development.

The federal government ought to learn from these emerging best practices related to children’s rights that are implicated when AI systems are used, and children should be given special category status and protection within Bill C-27 and in AIDA more specifically.

Recommendations

  • The CPPA and AIDA need to address the human rights risks of algorithmic systems in a comprehensive manner. Bill C-27 and AIDA need amendments to protect fundamental rights when AI systems are used — such as the right to object to the automated processing of personal data, as well as the right to appeal decisions that are made when algorithmic systems are used.

  • In line with our previous findings on the topic and with relevant global human rights efforts, Bill C-27 needs to have a framework addressing the collection and processing of biometric data. This framework should include, but not be limited to, prohibitions on the processing of biometric data such as facial images through automated means for the unique identification of individuals, especially in public settings and potentially subject to a very limited set of exceptions.

  • Certain other uses of algorithmic systems must also not be allowed because they pose unacceptable risks to people’s safety, livelihoods, and rights, which requires more than AIDA’s current two-tiered system. These prohibitions should include but are not limited to the use of algorithmic systems that exploit vulnerable groups based on their age (such as children) or physical and mental disabilities, as well as systems that are used by public authorities for social scoring purposes that lead to detrimental or unfavourable treatment that is unjustified. The drafters of Bill C-27 need to collaborate with and draw on the work of human rights experts and experts on algorithmic systems to properly craft the changes needed to the proposed laws.

  • Bill C-27 and AIDA more specifically should include special category status for children (under 18) given international human rights law precedent, and should require high levels of protection by default, especially against commercial use of children’s data.


APPENDIX A

Machine learning models are built on statistical techniques, but in contrast to how traditional statistical modeling has been used to try and understand the world, machine learning models search only for optimal correlations. A correlation, in statistics and machine learning, is when some attributes (like height and weight), measured over multiple entities (the height and weight of multiple people), have values that tend to vary in the same ways (taller people tend to be heavier, and vice versa). A model is a way to measure correlations between sets of attributes and then use those measured correlations to try to anticipate the value of some target attribute (e.g., using past purchasing behaviour to anticipate future purchasing behaviour).
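
To make this concrete, the following is a minimal sketch in Python using made-up height and weight values (purely illustrative, not data or analysis from this report): it measures the correlation between two attributes and then fits a simple model that uses that correlation to anticipate the target attribute for a new entity.

```python
# A minimal sketch of correlation-based prediction, using invented height/weight
# values purely for illustration.
import numpy as np
from sklearn.linear_model import LinearRegression

# Heights (cm) and weights (kg) for a handful of hypothetical people.
heights = np.array([150, 160, 165, 170, 175, 180, 185, 190])
weights = np.array([52, 58, 61, 66, 70, 75, 80, 86])

# Measure the correlation between the two attributes.
correlation = np.corrcoef(heights, weights)[0, 1]
print(f"correlation between height and weight: {correlation:.2f}")

# A model uses that measured association to anticipate the target attribute
# (weight) for a new entity from its known attribute (height).
model = LinearRegression().fit(heights.reshape(-1, 1), weights)
predicted_weight = model.predict(np.array([[172]]))[0]
print(f"predicted weight for a 172 cm person: {predicted_weight:.1f} kg")
```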

Machine learning has developed powerful ways of automatically finding correlations, for example deep learning models (“deep” refers to the model having many layers, and not to the model being profound) can find correlations between groupings of pixels and human-given labels for images, which drives automatic image labeling and the ability to recognize specific people in pictures.
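
As an illustration of what “many layers” means in practice, the following sketch (assuming the PyTorch library is available, and using a random image purely for illustration rather than any system discussed in this report) stacks several layers that successively transform raw pixel values into scores for a set of labels.

```python
# A minimal sketch of a "deep" model: several stacked layers mapping raw pixels
# to label scores. Illustrative architecture only.
import torch
import torch.nn as nn

# Each layer transforms the previous layer's output; stacking layers is what
# lets the model relate groupings of pixels to human-given labels.
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1),   # layer 1: local pixel patterns
    nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # layer 2: combinations of patterns
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(32 * 28 * 28, 10),                  # final layer: scores for 10 labels
)

# A batch containing one 28x28 grayscale image of random pixels (illustration only).
fake_image = torch.randn(1, 1, 28, 28)
label_scores = model(fake_image)
print(label_scores.shape)  # torch.Size([1, 10]): one score per possible label
```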

However, the caveat that “correlation is not causation” applies: while machine learning can find correlations, it cannot determine whether they are causal, which makes machine learning models potentially very fragile. For example, the way deep learning models find patterns of pixels does not correspond to human visual processing, meaning that deep learning models for image recognition can be fooled by manipulating images in ways that are not noticeable to human viewers. Short of experimentation such as randomized controlled trials, statistics cannot reliably identify causation either; but traditionally, statistics has attempted to use theoretical reasoning to identify which correlations might be causal, and this reasoning is usually absent in machine learning.
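
This fragility can be illustrated with a small synthetic experiment (invented data, not an analysis from this report): a model trained where a spurious attribute happens to track the outcome performs well in that context, then drops sharply toward chance when the context changes and the correlation disappears.

```python
# A minimal sketch of how a learned correlation can be fragile: the model leans
# on a spurious feature that tracks the label in training data, then fails when
# the context changes. Entirely synthetic, for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

def make_data(spurious_agreement):
    """Labels depend weakly on a 'true' signal; a second feature is spuriously
    correlated with the label at the given rate and plays no causal role."""
    y = rng.integers(0, 2, n)
    true_signal = y + rng.normal(0, 1.5, n)           # weak causal signal
    agree = rng.random(n) < spurious_agreement
    spurious = np.where(agree, y, 1 - y) + rng.normal(0, 0.1, n)
    return np.column_stack([true_signal, spurious]), y

# In training, the spurious feature agrees with the label 95% of the time.
X_train, y_train = make_data(0.95)
model = LogisticRegression().fit(X_train, y_train)

# When the context changes and the spurious correlation vanishes (50/50),
# accuracy drops sharply even though nothing "causal" changed.
X_shift, y_shift = make_data(0.50)
print("accuracy in training context:", model.score(X_train, y_train))
print("accuracy after context change:", model.score(X_shift, y_shift))
```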

APPENDIX B

Note that finding optimal correlations is not unique to machine learning; actuarial science, which uses statistical models for making credit and insurance decisions and for making criminal justice decisions (bond, sentencing, parole) related to potential recidivism, has been employing the same logic since long before machine learning (sometimes even developing techniques in parallel with machine learning).

Horror stories of actuarial science, including how rates can seem arbitrary (because correlations are not always obvious from the perspective of an individual) or unfair (because a correlation can stem from injustice, not from anything within people’s control), and how people are often effectively punished or rewarded for things outside of their control (which, at its worst, effectively “pil[es] on” further punishment to those who are already “victims of injustice and cruelty”), are a direct precedent for horror stories of AI.

This is not to say that existing regulation of these practices could serve as a model for regulating AI, because there are strong arguments that those regulations are systematically insufficient. For example, not using race as an explicit input for credit decisions and insurance rates may not formally (directly) perpetuate racial bias, but it does so substantively (indirectly): legacies of redlining and segregation make postal code an effective proxy both for race and for living in communities suffering from systematic deprivation, which tends to produce worse insurance-related outcomes for individuals. Allowing the approach of individualizing risk based on correlates thus results in additional insurance burdens on communities of colour; bans on the use of race in credit decisions and insurance rates are insufficient for preventing substantive discrimination on the basis of race.
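
The proxy problem can likewise be illustrated with a small synthetic sketch (the population, postal codes, and numbers below are invented for illustration): even when the protected attribute is never given to the model, a correlated proxy such as postal code reproduces the disparity in the model’s predictions.

```python
# A minimal sketch of proxy discrimination: the protected attribute is never
# shown to the model, but a correlated proxy (postal code) carries it anyway.
# Entirely synthetic data, for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000

# Group membership (a stand-in for race) is hidden from the model, but it
# strongly determines postal code because of segregation.
group = rng.integers(0, 2, n)
postal_code = np.where(rng.random(n) < 0.9, group, 1 - group)

# "Bad outcomes" are driven by deprivation associated with the segregated
# postal code, not by group membership itself.
bad_outcome = (rng.random(n) < np.where(postal_code == 1, 0.4, 0.1)).astype(int)

# The model is trained only on postal code; the protected attribute is formally excluded.
model = LogisticRegression().fit(postal_code.reshape(-1, 1), bad_outcome)
predicted_risk = model.predict_proba(postal_code.reshape(-1, 1))[:, 1]

# Yet predicted risk still differs substantially by group: formal exclusion
# does not prevent substantive discrimination via the proxy.
print("mean predicted risk, group 0:", round(float(predicted_risk[group == 0].mean()), 3))
print("mean predicted risk, group 1:", round(float(predicted_risk[group == 1].mean()), 3))
```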

This also shows that AIDA’s reference to “biased output” is subjective: is output biased if it substantively perpetuates inequality, or — if it “merely” empirically, and accurately, reflects the status quo of a biased society — should it be considered unbiased? If we recognize that we need to make normative, moral decisions about what in the empirical status quo should count as “biased” and by how much, who makes these decisions?

Regulation needs to consider when it is appropriate and morally justified to characterize “risk” (as an abstraction of negative outcomes) purely empirically in order to individualize it for formal equality, and when the pursuit of equity and justice through substantive equality demands that risk (or other targets of machine learning) instead be collectivized, such as in nationalized insurance systems of the type that exist in Canada for health insurance (but not for loans, or for car or life insurance).

 
 