March 2026
Gen(Z)AI | Forum Report: AI & Data Privacy
Full Report
“As AI systems become increasingly embedded in everyday digital platforms, the collection and use of personal data is expanding rapidly, often without clear transparency or meaningful safeguards. Young people encounter these systems constantly, yet many feel they have little control over how their data is collected, shared, or used.
Our third forum focused directly on these issues. Drawing from their lived experiences, participants reflected on the need for stronger safeguards and clearer accountability around how AI systems handle personal data. This forum builds on the momentum of Gen(Z)AI and continues its youth-led work to inform emerging conversations on AI governance and data protection.”
— Julian Lam, Report Author and Lead Youth Fellow, Gen(Z)AI Forum on AI & Data Privacy
Executive Summary
Gen(Z)AI is a first-of-its-kind youth assembly that focuses on artificial intelligence (AI), bringing together 100 young Canadians aged 17-23 from across the country to contribute directly to discussions on the future of AI governance in Canada. The initiative is jointly organized by the Centre for Media, Technology and Democracy and the Dialogue on Technology Project, in partnership with Mila, Quebec’s AI Institute.
This report outlines the purpose, methodology, and key findings of the third Gen(Z)AI forum, held in Vancouver and focused on AI & Data Privacy. The Vancouver forum was one of four regional forums, alongside sessions on chatbots, information integrity, and age assurance in AI. Over three days, participants engaged with expert speakers, took part in workshops, and deliberated on policy challenges and opportunities related to AI and data privacy. A consolidated final report, which will outline the policy recommendations from each forum, will be released on April 30, 2026, following the completion of all four regional forums.
Forum Focus: AI & Data Privacy
Data privacy focuses on who is authorized to collect, process, and share an individual’s data, and the degree to which that individual can meaningfully control such access. Data privacy has become a central concern in Canadian policy discussions on AI. A 2025 study by the Privacy Commissioner of Canada found that 9 in 10 Canadians are concerned about their personal information being used to train AI systems. AI tools, in turn, ranked third among the technologies Canadians find most worrisome in terms of privacy.
Among youth, data privacy in AI contexts is an especially pressing but largely overlooked concern. Youth rights in the digital age often lie at the periphery of national AI policymaking processes. A UNICEF review of 20 national AI strategies found that relatively few nations meaningfully engage with AI's unique impacts on youth, including risks related to data protection and privacy. Canada’s AI Strategy omits any reference to youth, and proposed federal privacy reforms fail to incorporate their distinct vulnerabilities.
Participants at the Vancouver forum expressed deep concern that contemporary AI systems operate through opaque data practices that undermine their ability to give informed consent, to feel individual agency online, and to maintain a sense of community well-being. Across discussions, participants described the technology as intensifying surveillance and profiling, often without clear safeguards. A common feeling was that youth users are expected to make privacy decisions without meaningful transparency, which in turn produces apathy or resignation about data loss. Participants also acknowledged a deep tension between the personalization AI offers and the power imbalance it creates. Values such as accountability, stewardship, justice, and community repeatedly surfaced during discussions, suggesting a shared desire for governance frameworks that protect sensitive data and mitigate disproportionate harms to vulnerable groups, such as younger youth and the elderly.
The third Gen(Z)AI forum on AI and Data Privacy combined expert briefings, preparatory resources, interactive workshops, and deliberative policymaking sessions. Participants engaged with perspectives on privacy regulation, child-centric platform design, and global policy approaches to AI systems and data before working collaboratively to identify priority concerns and policy directions. Ultimately, participants emphasized that governance models must restore trust and agency while encouraging collective responsibility in an increasingly data-driven society.
Key Issues
- The way that AI systems collect user data and disclose that collection is deliberately opaque, making it difficult for users to provide informed consent about how their data is being used and sold. This may disproportionately affect vulnerable groups, including children and the elderly.
- AI systems are not subject to adequate and enforceable safeguards for the collection, storage, and sharing of user data. This may lead to intended and unintended harms, including, but not limited to, excessive surveillance and profiling.
Preliminary Recommendations
- Policy tools that establish clear reporting pathways, remedies, and institutional responses when harm is caused by opaque consent mechanisms, data misuse, or privacy violations related to AI systems.
- Design-focused approaches that address risks in the design of AI systems and platforms by embedding safeguards and limits on deceptive or manipulative practices before harms occur.
Report Author: Julian Lam
Project Leads: Helen Hayes, Fergus Linley-Mota
Contributors: Alexander Martin, Madeleine Case, Nonso Morah
Operations Lead: Sequoia Kim
Designers: Ibrahim Rayintakath (Illustration), Mathilde Robert (Cover layout)
Layout Editor: Sequoia Kim
Special Thanks: Anna Jahn, Taylor Owen
License: This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. You are free to share, copy and redistribute this material provided you give appropriate credit; do not use the material for commercial purposes; do not apply legal terms or technological measures that legally restrict others from doing anything the license permits; and if you remix, transform, or build upon the material, you must distribute your contributions under the same license, indicate if changes were made, and not suggest the licensor endorses you or your use. Images are used with permission and may not be copied, shared, or redistributed outside of this material without the permission of the copyright holders.