Submission for the Federal Government’s 2021 Proposed Approach to Address Harmful Content Online

Recommendations for Children’s Safety Online to Canadian Heritage

Taylor Owen & Sonja Solomun
September 20, 2021


On July 29, 2021, the Federal Government announced its proposal to set “new rules” obliging “Online Communication Service Providers” (OCSPs) to address five categories of harmful content on their platforms: hate speech; child sexual exploitation content; non-consensual sharing of intimate images; incitement to violence; and terrorist content. The legislation would require OCSPs to “take all reasonable measures” (including automated filtering and, as a last resort, ISP website blocking) to identify and block content in these five categories within 24 hours of it being flagged, while also providing procedural transparency to users and survivors.

We share with Canadian and international experts alike significant concerns about the proposal’s wide scope, 24-hour takedown requirement, proactive monitoring of all harmful content, and website blocking; we also raise an additional concern about the lack of consideration of children’s rights in the upcoming online harms legislation. We highlight Canada’s duty of care to protect children from harmful content online, and call on the government to impose age-specific requirements and to mandate provisions for special categories of harmful content, including altered sponsored and paid content.

A growing number of civil society groups, lawmakers and governments around the world have established that all online governance should consider children’s rights. This note offers a few key areas of concern pertaining to children’s safety and well-being online that Canada should consider in its “approach to addressing harmful content online.”

1. Recognize duty of care toward ‘best interest of the child’ in all online regulation and mandate ‘best interest of the child’ as the primary consideration when in conflict with commercial interests [1]

While Canada is a signatory to the United Nations Convention on the Rights of the Child (UNCRC), its upcoming plans to “address harmful content online” have yet to formally acknowledge and uphold its duty to afford children the special protections established in such international human rights law frameworks. These special protections should include i) the ‘best interest of the child’ as set out in General Comment No. 25 of the UNCRC and ii) protecting children from encountering harmful content.

Defined in Article 3 of the UNCRC, the ‘best interest of the child’ should “be a primary consideration in all decisions to regulate online activity” by incorporating provisions that protect children’s safety, health, wellbeing, psychological and emotional development, identity, freedom of expression and agency to form individual views, among others. Civil society organizations around the world have advocated for the ‘best interest of the child’, and the United Kingdom recently demonstrated its commitment to taking it seriously, especially when children’s interests stand in contrast to commercial interests. The new Age Appropriate Design Code mandates that websites and apps take the “best interests” of their child users into account when designing and developing online services likely to be accessed by a child, or face fines of up to 4% of annual global revenue. These services span a wide range of social media platforms, video and music streaming sites, as well as gaming apps and sites.

In April 2021, the Alliance for Protecting Children’s Rights and Safety Online addressed specific recommendations to Prime Minister Justin Trudeau on proactively protecting children from harm. The Alliance recommends that the ‘best interest of the child’ be a primary consideration, incorporated through specific provisions for “all products and services likely to impact children – not only for those directed at them.”

So far, the published federal guides include provisions specific to children only in relation to child sexual exploitation, in alignment with Canada’s Criminal Code. While of utmost importance, this leaves a wide array of other online harms to children’s safety, well-being, health, and psychological and emotional development unattended. [2]

These harms include content and communication that promote medical misinformation, incitement to violence and radicalization, and harmful activities such as suicide, self-harm and disordered eating, to name only a few.

As such, specific categories for the scope and definition of online harms as they pertain to children should be built into Canada’s upcoming proposal, including age-specific obligations. 

2. Break down online harms proposal into specific legislation for children’s rights and protection from harmful content online

Mandating specific requirements and duties of care for age-appropriate design standards for children online would support broader recommendations to break down Canada’s proposal into subject-matter-specific legislation, as legal experts Cynthia Khoo and Emily Laidlaw advocate. Narrowing the scope of online harms would also address a key point of public contention: the overly broad sweep of Canada’s current proposal. Carving out special categories of harm to children beyond criminal offences would also align Canada’s upcoming proposal with leading global regulation in this space.

For instance, the United Kingdom’s recent Online Safety Bill includes “services likely to be accessed by children” as one of three separate categories of harm. The duties for this children-specific category include taking proportionate measures to mitigate and manage the risk and impact of harms to children in different age groups, preventing children of any age from encountering certain material, and preventing specific age groups who might be at risk of harm from encountering harmful content.

The UK Online Safety Bill includes requirements for companies to carry out risk assessments and adhere to “safety duties” for each category of harm. By carving out more specific categories of harm, Canada could impose child safety and wellbeing risk assessments that account for both harmful content and the systems that promote and amplify its spread, including algorithms and other functionalities for circulating content. While not without limitations, risk assessments are crucial accountability mechanisms for preventing harms before they occur.

3. Canada’s upcoming legislation should include strict requirements to i) minimize children’s exposure to manipulated images of facial and body features in paid and sponsored content and ii) mandate strict disclosure of manipulations to facial and body features in paid and sponsored content.

Given the growing incidence of self-harm, harms to mental health and body image, and disordered eating resulting from the consumption of visually modified content online, Canada should treat images with manipulated facial and body features as a specific category of risk and mandate reasonable provisions to minimize their harm, including clear disclosure and content labels. This reflects a growing global commitment to address the promotion of unrealistic body image standards to children and young people.

In Norway, for instance, anorexia is the third most common cause of death among young girls. The country recently enacted a legal disclosure requirement for advertisements that have been photoshopped or otherwise manipulated, including “enlarged lips, narrowed waists, and exaggerated muscles.” In Canada, where suicide is the leading cause of death for children aged 10 to 14, preventing undue risk from encountering unrealistic body images should be a top priority for the federal government. At minimum, the government should extend further consultations with children’s health and safety experts and advocacy groups before enacting legislation for harmful content online.

4. Children-specific legislation must be proactive and address design features, not only harm.

While prevention of harm is of utmost importance generally, the stakes of neglecting to mitigate risks before they materialize into harms are especially high for children, given “both their developmental vulnerabilities and their status as ‘early adopters’ of emerging technologies.” Without special consideration for children, Canada’s current plans to address harmful content online flatten the impact of known harms across groups that are differentially and disproportionately affected by the digital environment.

Recent evidence shows that social media companies such as Facebook are already aware of how their services harm children and young people, especially their mental health and psychological wellbeing. The same companies are well resourced and adept at implementing proactive measures to safeguard against specific threats, such as those to national security. Yet Canada’s proposal to address harmful content includes minimal accountability mechanisms for mitigating risks to children before they become harms. As many have noted, the amplification of online content means that even the most violent and dangerous material can reach millions of children before it is flagged and removed.

One way to ensure proactive mitigation of harm, outside the controversial provisions currently included in Canada’s proposal to monitor all harmful content through automated filtering, is to incorporate clear instructions for the design and testing of services before they are deployed (or modified). Such systematic approaches, incorporated in the European Union’s recently unveiled Digital Services Act and the aforementioned Age Appropriate Design Code in the United Kingdom, are already showing promise in effecting change. In the weeks leading up to the passage of the latter, a number of major platform companies including Instagram, YouTube, TikTok and Google introduced changes to how they treat child users on their platforms. Instagram, for instance, will no longer allow unknown adults to send direct messages to children under 18, while Google will stop targeted advertising to children under 18.

Lastly, incorporating age-appropriate design standards and proactive measures moves legislation beyond a narrow focus on harm and allows policymakers to support children’s and youth’s autonomy and growth in online environments by maximizing their benefits and embedding children’s rights by default and in design. As leading children’s rights organization 5Rights argues, “The enormous potential of digital technology will only be realised when it is proactively directed towards the promotion of children and young people’s rights, rather than retroactively adapted or deployed merely to protect their safety.”

We echo the need for Canada to introduce legislation to address online harms. As many have highlighted, however, significant nuance and consultation are needed to ensure Canada gets it right. If the federal government takes the time now to consider special categories for the scope and definition of harmful content likely to be encountered by children, and to protect both individual and collective children’s rights with proactive measures (beyond automated filtering), it could well lead the way in international norm setting.

 
Footnotes

[1] This recommendation follows from the work set out by the 5Rights Foundation on ‘contract’ risks to children, specifically advocating that the best interests of the child supersede commercial opportunity. See Ambitions for the Online Safety Bill, April 2021, https://5rightsfoundation.com/uploads/Ambitions_for_the_Online_Safety_Bill.pdf.

[2] This note is primarily concerned with harmful content which is outside the scope of child sexual exploitation as defined by Canada’s Criminal Code.
