Roundtable on the Artificial Intelligence and Data Act

Summary by Helen A. Hayes

As part of its newly re-introduced privacy legislation, Bill C-27, the federal government has outlined a novel regime to regulate AI in the Artificial Intelligence and Data Act (AIDA). Its central objectives include:  

  1. regulating international and interprovincial trade and commerce in artificial intelligence systems; and

  2. prohibiting certain conduct in relation to artificial intelligence systems that may result in serious harm to individuals or harm to their interests.  

On Thursday, July 7th, the Centre for Media, Technology, and Democracy hosted a roundtable webinar to discuss the proposed legislation with Dr. Ignacio Cofone, Phil Dawson, Erica Ifill, Dr. Emily Laidlaw, Dr. Fenwick McKelvey, and Christelle Tessono. Despite cautious optimism about the AIDA’s trajectory, especially as a first step for Canadian AI regulation, our panellists and attendees raised important questions about the AIDA’s omissions and potential impacts.

Definitional Issues and Industry Power 

The AIDA does not substantively define which technologies count as “high-impact,” leaving individual actors (or “persons responsible for artificial intelligence systems”) to assess whether an artificial intelligence system falls within the “high-impact” category. This speaks to larger issues of accountability and recourse, and to whether the AIDA gives too much leverage to industry actors.

The AIDA also takes a harm-based approach to artificial intelligence regulation, yet “material harm” is currently undefined in the Bill’s text and will likely be defined only in forthcoming regulations. While the legislation clearly aims to prompt better industry standards and oversight, some profound harms related to AI require a much stronger legislative framework for accountability. The current draft, however, delegates responsibility for the conduct and use of AI technologies to industry actors. Roundtable participants cautioned that this could further marginalize racialized people and other groups, such as the LGBTQ+ community, in light of increasing evidence that industry systems, standards, and structures can be deeply inequitable. Because the AIDA neither calls for nor creates the scaffolding to implement independent regulatory oversight, it may offer little accountability or meaningful recourse to Canadians negatively affected by the use of AI technologies.

It is also worth noting that the AIDA establishes a new position, the Artificial Intelligence and Data Commissioner, responsible for its administration and enforcement. However, in stark contrast to the Privacy Commissioner, this new position is not an independent agent of Parliament; it would instead fall under the purview of the Ministry of Innovation, Science and Industry, with its powers delegated by the Minister.

Glaring Omissions 

Artificial intelligence systems used by Canadian federal ministries, including those related to defence and national security, are currently excluded from the regulations outlined in the AIDA. These government exemptions create a problematic double standard within Canadian AI regulation, while ignoring the privacy and human rights risks of state-deployed facial recognition and other AI systems.

In fact, in its entirety, but especially within the AIDA, Bill C-27 “refuses to acknowledge AI’s characteristic of systemic bias including racism, sexism, and heteronormativity.” This stems, in part, from a lack of meaningful public consultation on the AIDA, especially with BIPOC, trans, and non-binary people. Notably, during the roundtable, concerns were raised about the transparency of the consultation process. Although we have since learned that the government’s AI Advisory Council was consulted on the proposed legislation, this type of consultation does not meet the standard of a truly public consultation. Simply put, in proposing a regulatory regime that fails to address longstanding and well-documented concerns about the use and deployment of AI technologies, the government has failed to protect marginalized peoples and their privacy and security, in part because its drafting process did not substantively seek input from those most affected by the harms of AI.

The AIDA will likely require either substantive revision or a robust accompanying regulatory framework before it can adequately address the concerns expressed about the regulation of AI technologies in Canada. Where it goes next will be crucial in determining how seriously the government is willing to consider the feedback it has received, and whether it will live up to its claims that Canada is a country promoting and protecting the use of “AI for good.”
