February 2026

Gen(Z)AI | Forum Report: AI & Information Integrity

Full Report

“With the advent of AI, information integrity is being challenged, and there is still little regulation in place. Young people are feeling this the hardest, especially as trust in the information they see online continues to decline.

Our second forum on AI and information integrity tackled these critical issues directly. This generation is approaching the topic from lived experience and thinking about the kind of future they want to shape. The second forum builds on that momentum and continues the important youth-led policy work of Gen(Z)AI.”

—Alexander Martin, Report Author and Lead Youth Fellow, Gen(Z)AI Forum on AI & Information Integrity

Executive Summary

Gen(Z)AI is a first-of-its-kind Youth Assembly on AI, bringing together 100 young Canadians aged 17 to 23 to help shape the future of AI in Canada. This project is jointly led by the Centre for Media, Technology and Democracy and the Dialogue on Technology Project, in partnership with Mila, Quebec’s AI Institute.

This report outlines the purpose, methodology, and key findings of the second Gen(Z)AI forum, held in Montreal and focused on AI and information integrity. The Montreal forum was one of four regional forums held across Canada, alongside sessions on chatbots, privacy, and age assurance in AI. Over three days, participants engaged with expert speakers, took part in workshops, and deliberated on policy challenges and opportunities related to AI and information integrity. A consolidated final report will be released in May 2026, following the completion of all four forums.

Gen(Z)AI Forum #2: AI & Information Integrity

Information integrity refers to the accuracy, reliability, and trustworthiness of information as it is produced, circulated, and consumed. Generative AI has intensified existing challenges for information integrity by enabling the large-scale creation of convincing synthetic text, images, and audio, while algorithmic recommendation systems shape which content is amplified and seen. Together, these dynamics influence how information is interpreted and trusted, with direct implications for public discourse and democratic processes.

Public concern about information reliability in Canada is high. National survey data show that 59 percent of Canadians are very or extremely concerned about online misinformation, while 64 percent report low trust in social media content that is not affiliated with recognized government, scientific, or news organizations. Many Canadians also report frequent exposure to misleading or false information and increasing difficulty distinguishing between true and false content online.

Participants in Gen(Z)AI emphasized that these challenges disproportionately affect vulnerable populations, including children and youth. Risks discussed included exposure to misleading or harmful content, algorithmic recommendation systems that reinforce echo chambers, declining trust in institutions, and heightened polarization driven by engagement-based platform incentives. While other jurisdictions have begun to respond through policy and regulation, participants noted that Canada has been slower to adopt a comprehensive approach.

The second Gen(Z)AI forum on AI and information integrity combined expert briefings, preparatory resources, interactive workshops, and deliberative policymaking sessions. Participants engaged with perspectives on platform design, independent media, mis- and disinformation practices, and AI-enabled manipulation before working collaboratively to identify priority concerns and policy directions. Across discussions, participants expressed dissatisfaction with the status quo and agreed that current approaches to information integrity and AI governance do not adequately address emerging risks.

Key Issues

  • AI-generated content, including mis- and disinformation, overwhelms users and undermines confidence in reliable and accurate information, with distinct and disproportionate effects on vulnerable populations.

  • AI recommender systems push ideologically extreme content and reinforce information echo chambers, driving social and political polarization in both online and offline spaces.

  • An information environment driven by engagement-based incentives and flooded with mis- and disinformation contributes to mistrust in news, government, and other traditional institutions.

Preliminary Recommendations

  • Establish clear reporting pathways, remedies, and institutional responses for cases where AI-enabled mis- and disinformation or deceptive content causes harm.

  • Address risks within the design of information systems and AI technologies, embedding safeguards and limits on deceptive or manipulative practices before harms occur.

  • Strengthen user agency and resilience through media and AI literacy, alongside greater transparency and participation from governments and technology companies.


Report Author: Alexander Martin

Project Leads: Helen Hayes, Fergus Linley-Mota

Contributors: Julian Lam, Madeleine Case, Nonso Morah

Operations Lead: Sequoia Kim

Designers: Ibrahim Rayintakath (Illustration), Mathilde Robert (Cover layout)

Layout Editor: Sequoia Kim

Special Thanks: Anna Jahn, Taylor Owen


License: This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. You are free to share, copy, and redistribute this material provided you give appropriate credit; do not use the material for commercial purposes; do not apply legal terms or technological measures that legally restrict others from doing anything the license permits; and if you remix, transform, or build upon the material, you must distribute your contributions under the same license, indicate if changes were made, and not suggest the licensor endorses you or your use. Images are used with permission and may not be copied, shared, or redistributed outside of this material without the permission of the copyright holders.