Scoping AI Chatbots into a Revised Online Harms Act: The Case for Immediate Action

February 24, 2026 - Months before Jesse Van Rootselaar was identified as the mass shooting suspect in Tumbler Ridge, a rural community in British Columbia, her interactions with OpenAI’s ChatGPT raised internal concerns at the company. OpenAI said it considered notifying law enforcement but ultimately determined the case did not meet the threshold required to make a referral. The episode raises pressing questions about digital governance, data privacy, and reporting mechanisms.

In response to this news, the Centre’s Founding Director, Taylor Owen, and Helen Hayes, Associate Director of Policy, are calling for immediate action to scope AI chatbots into a revised Online Harms Act. You can read the memo they sent to the Hon. Evan Solomon, Minister of Artificial Intelligence and Digital Innovation and the Hon. Marc Miller, Minister of Canadian Identity and Culture below.


February 27, 2026 - The Hon. Evan Solomon, Minister of Artificial Intelligence and Digital Innovation, summoned senior OpenAI executives to Ottawa on February 24th to address the Tumbler Ridge mass shooting. After the meeting, OpenAI’s Vice President of Global Policy wrote to Minister Solomon, outlining the company’s updated safety commitments. The letter disclosed that the Tumbler Ridge shooter created a second ChatGPT account after the first was banned, and that OpenAI’s detection systems failed to identify it. OpenAI acknowledged that under its updated referral protocol, it would now refer the first banned account to law enforcement.

In response to OpenAI’s letter, the Centre’s Founding Director, Taylor Owen, and Associate Director of Policy, Helen Hayes, argue that the disclosure strengthens the case for updating the Online Harms Act to include chatbots in its scope. They assert that while OpenAI’s voluntary commitments are a good starting point, they are no substitute for a legislative framework establishing an independent regulator with the authority to require risk assessments, set age-appropriate design standards, ensure compliance, and enforce consequences when systems fail.


Media Contact:

Isabelle Corriveau

Associate Director, Public Engagement

media@mediatechdemocracy.com
