AI News Audit: AI, Canadian Journalism, and Paths for Policy Action

March 16, 2026 - AI companies built their products using Canadian journalism without permission and without compensation, and are now delivering that journalism to consumers as their own product. Existing copyright and media policy frameworks were not designed to address this.

In February and March 2026, we conducted the first large-scale empirical audit of how AI models use and distribute Canadian journalism. We ran two studies. First, we tested four major AI models on 2,267 real Canadian news stories in both English and French (18,134 queries in total) to measure what models have absorbed from their training data and whether they attribute it. Second, we enabled web search and asked the same models about 140 specific recent articles from seven Canadian outlets across 3,360 experimental conditions, to measure whether AI models produce viable substitutes for current journalism and whether they credit the source.

When asked about Canadian news events drawn from their training data, ChatGPT, Gemini, Claude, and Grok provide no source attribution 82% of the time. When given web access and asked about specific recent articles, the same models covered enough of the original reporting to substitute for the source in 54 to 81% of cases. Models linked to Canadian news sites in 29 to 69% of responses, but named the originating outlet in the response text in only 1 to 16% of cases. When we named the outlet and asked the same models for citations, attribution rates reached 74 to 97%. The rules governing how these companies use journalism (who gets credited, who gets compensated, and what obligations attach to those who profit) are being set right now, by default, through inaction. Canada has the tools and precedent to act responsibly.

This memo accompanies the technical brief AI News Audit: How AI Models Use and Distribute Canadian Journalism, which contains the full methodology, data, and analysis. What follows are the implications of that evidence for Canadian policy.

A note on AI in this research. This project was an experiment in developing an AI-assisted research methodology. Two senior researchers, Aengus Bridgman and Taylor Owen, designed a pipeline in which AI tools were embedded at every stage of the process, from study design through data collection, response coding, statistical analysis, and prose drafting to graphic design, to test what this methodology could produce on a compressed timeline. Claude (Anthropic) was the primary AI tool used throughout. All code and content were reviewed, tested, and verified by Bridgman and/or Owen. The methodology itself is part of what this project set out to test.



Media Contact:

Isabelle Corriveau

Associate Director, Public Engagement

media@mediatechdemocracy.com
