Highlights from the 2023 Platform Governance Research Network Conference

Disclaimer: This post originally appeared on Data & Society Points.

Over the last two years, we have seen massive changes across the platform and technology industries. Platforms that once felt ungovernable due to their massive scale are now facing unprecedented regulation in many jurisdictions, most notably in the EU, where the Digital Services Act (DSA) recently came into effect. Mass layoffs and corporate restructuring have created rising uncertainty in the tech sector. Twitter — not the largest platform, but nonetheless important because of its use by media and governments — has been taken private by Elon Musk, prompting new questions about platform accountability and alternatives to centralized networks. It was with this context in mind that this year’s Platform Governance Research Network conference, which took place online across multiple time zones on April 3 and 4, was conceptualized. The conference theme, “Imagining Sustainable, Trustworthy, and Democratic Platform Governance,” pushed participants to imagine “new” (or revisit “old”) ways of thinking or theorizing about platform governance.

Funded by the MacArthur Foundation, the conference was organized collectively by an international group of researchers, with additional support from McGill University, Data & Society, Centre for Internet and Society-India, and the WZB Berlin Social Science Center. Over two days, 49 research presentations across 16 sessions brought forth wide-ranging concerns and explored the nuances of platform governance in different geographies and contexts. Transgressing the usual boundaries of academic conferences, we heard from civil society organizations, academics, and independent researchers.

While it is difficult to summarize the rich discussions that unfolded during the conference, we have attempted to string together five overarching themes that emerged. We understand these themes as a collective contribution to the field, and acknowledge the contribution of all the scholars who participated, even if they are not specifically named.

1. Contested Governance

Presentations across panels accentuated the anxieties of cultural producers (from the margins and otherwise) who are grappling with the opaque governance mechanisms of platforms, as well as with concerns about increased dependency, precarity, and surveillance. Through the concept of “algorithmic gossip” — defined by Sophie Bishop as “communally and socially informed theories and strategies” pertaining to algorithmic recommendation systems — researchers explored how cultural producers are contesting the rationales of algorithmic governance. Presentations also considered how cultural producers’ reliance on algorithmic platforms pushes them to engage in self-governance or rely on informal networks to gain insight into platform governance.

Other presentations explored the contested meanings of justice in platform governance. Carolina Are examined this in relation to the moderation of content produced by users who are transgender or are sex workers, making the case that loopholes in systems of appeals leave room for discrimination that targets these groups in particular. Through their examination of moderation on Google Arts and Culture, Sarah Smith and Bethany Beard pointed to inconsistencies in governance. The issue of how platform governance shapes the lived experience of users, particularly of marginalized communities and content, was explored by scholars including Faheem Muhammed Mp and Musthafa Mubashir, who examined the governance practices of Aarogya Setu, an Indian COVID-19 contact tracing service. Their work weighed how an “embodied approach to data and surveillance” could reduce the harms of structural surveillance.

Explorations of the contested meanings of governance also pointed to the continual subordination of historically silenced voices through content moderation (or the lack thereof). Work by Lucinda Nelson and Nicolas Suzor examined the disconnect between research into online misogyny and existing content moderation practices, while Louisa Bartolo’s work explored how Amazon’s recommendation algorithm for books produces “winners” within historically contentious domains. These presentations propelled questions about whether platforms should continue to support the status quo or use their power to make decisions to increase the visibility of historically oppressed content. Rachel Griffin explored another group that is often overlooked in these assessments — advertisers — and their role in shaping platform policies to create more “brand suitable” spaces.

2. The Risks of Algorithmic Governance

Conference participants debated the implications of automated decision-making (ADM) within platform governance. In a co-authored piece, Jonas Valente presented a study on behalf of his colleagues that explored how the “asset-light” model of digital platforms actually externalizes the burden of asset accumulation to workers, through leasing and selling assets like cars used in ride-hailing. The accuracy and fairness of ADM systems, particularly as they relate to content governance, were also a point of discussion. Luis Hernando Lozano Paredes, among others, focused on how user communities are appropriating algorithmic governance mechanisms put in place by platforms, like Uber’s fare calculator, and adapting them for their own goals and needs.

Renana Keydar presented a co-authored work examining an expanding gap between how online platforms publicly present their content moderation policies (as rules), and the ways that AI-driven technologies enforce platform policies (as standards). In the same vein, work by both Laura Aade and Charlotte Spencer-Smith demonstrated how ADM systems frequently flag content that does not actually violate policies, and how these systems can be used to reduce the reach of content that platforms consider to be less commercially viable. The use of “shadowbanning” — or the reduction of reach of content without informing users — as a means of mitigating the risk of potentially policy-violating content, and the complex ethics of these practices, was a thread considered throughout many papers and presentations.

Many participants spoke to global inequity concerns about how AI systems and large language models (LLMs) are developed and deployed. Julian Posada shared insights based on his empirical work examining the experiences of outsourced workers in Latin America who contribute to the “data work” necessary to make artificial intelligence systems operate. On the flip side, Gabriel Nicholas examined many of the explicit and implicit claims social media platforms have been making about the multilingual content analysis capabilities of their LLMs, making the case that languages from communities that do not have a sufficient digital presence are seldom used to train LLMs. This work is integral to understanding the limits of LLMs in automating content moderation, particularly in what Nicholas refers to as “lower resource” languages.

3. Reproduction of Governance

Underlying many papers were concerns about the extractive nature of platforms, and how private companies reproduce traditional governance mechanisms in the online world. Alluding to this phenomenon, researchers used terms like emulation, echo, transplant, and reinstitution. Amy Hasinoff and Moritz Schramm found that private platforms’ processes often replicate institutions and governance mechanisms associated with democratic designs, while lacking equitable participation, procedural transparency, and appropriate adjudication. They explored how platforms’ reproduction of criminal-based legal systems and references to due process can play a role in legitimizing platform control and justifying unjust forms of private regulation. Similarly, Kyooeun Jang’s work explored the homogenization of content governance practices across borders, in the context of the Metaverse.

Hesam Nourouz Pour examined how two different “imaginaries” of platforms — as public squares or as private shopping malls — have led to conflicting approaches to the regulation of platforms across the US and EU. Giovanni De Gregorio and Catalina Goanta pointed to the replication of Silicon Valley-based terms of service agreements across the globe. They proposed that we look to consumer protection as a way to rethink digital constitutionalism and existing fundamental rights frameworks within the context of social media.

The global nature of the conference facilitated a comparative approach to platform regulation and brought forth necessary discussions about the regulation of online speech in the majority world. Researchers on the Comparative Platform Regulation panel found evidence suggesting that platform regulations copied from those passed in the Global North were potentially contributing to injustices in the Global South. Throughout the conference, scholars highlighted the necessity of studying the jurisdictional nuances of platform regulation. Divij Joshi’s work critiqued recent developments in the area of intermediary liability, using a review of judicial proceedings in India to argue that attempts to regulate platforms based on conditional immunity pave the way for arbitrary state actions against platforms, perpetuating private ordering. Similarly, Fan Yang discussed how the extension of Chinese state interests and surveillance is challenging the growth of ChatGPT-like AI chatbots in China.

4. Potential Failures of State Policies

The panel on Bridging Media and Platform Policy delved into recent efforts to adapt media regulations for the platform era. Sascha Molitorisz and Michael Davis provided an illuminating update on Australia’s News Media Bargaining Code, looking at how the new regulation — a law that requires platforms to pay publishers for linked content — has failed to check existing power dynamics (namely, the power of the Murdochs and News Corp) within the media industry. In a similar vein, though within a different political and cultural environment, Angela Xiao Wu’s work examined how platformization in China followed from the “media reforms” of the 1990s legacy mediascape, converging commercialization with monopolistic infrastructural control. And Brenda Dvoskin looked at this phenomenon within the American context, specifically at how media companies have historically integrated feedback from a select group of civil society organizations, as we see with the Facebook Oversight Board.

Legal blind spots have allowed platforms to assume the privilege of classifying and, thus, legitimizing the gig workforce, often through algorithmically managed systems. This was argued by scholars including Jake Goldenfein and Lee McGuigan, whose work considered how data privacy laws fail to educate people because they assume that users are rational actors — in contrast to advertisers and platforms, which assume that people’s behavior can be manipulated.

5. Alternative Governance

Underlying the presentations on unjust platform regulation was a shared call for increased participation, transparency, and accountability in platform governance. In line with this, researchers presented cases for alternative models and mechanisms in platform governance. We heard a loud and resounding call for politically aware and corrective moderation brought on by community autonomy and participation. Participants discussed the possibilities of involving civil society actors, gig workers, and users in processes of rule-making and dispute settlement. Brenda Dvoskin, for instance, asserted that the participation of communities is necessary to redefine the values that underlie speech regulation, and to prevent the creation of an elite private public sphere where certain stakeholders push favorable platform policies. Similarly, Soujanya Sridharan and Gautam Misra proposed representative governance models comprising civil society and workers’ organizations.

The Fediverse — and the possibilities and challenges of federated networks as a means of steering civil discourse away from centralized corporate structures — was a subject of much attention, including from Darius Kazemi. The panel on Ruling the Fediverse stressed the potential pitfalls of federated network models, such as distributed governance failures, moderation challenges, commercial entanglements, and reputational risks. These observations are crucial to avoid replicating current regulatory and governance failures. To this end, panelists suggested that the research community evaluate the efficacy of such governance systems based on their capacity to give power to individuals and facilitate collective organization, enabling equitable governance systems.

Steps Ahead for Researchers

The rich discussions between presenters and participants provoked reflections on what future platform governance research could and should look like. The panel on Reimagining Core Concepts urged the community to revisit conceptual approaches to platform governance, which is crucial to breaking free from platforms’ dictating logic and ensuring broader stakeholder participation. Led by Paddy Leerssen and Rebekah Tromble, the panel on Empirical Research and the plenary session on Access to Data for Researchers highlighted the importance of data access for researchers studying platforms, and for civil society in advocating for transparent platform governance. They also offered insight into the EU’s efforts to address this issue through the DSA.

Researchers accentuated the need to broaden the scope of platform governance research, stressing the value of investigating the materiality of platform governance and its political-economic and environmental implications. To that end, the panel on Infrastructures shed light on how digital platforms fuel a techno-solutionist paradigm, creating infrastructural empires that impact our embodied experiences and futures.

How can public values be embedded in speech infrastructures? How can platform governance be made more sensitive to cultural and contextual nuances? How can platforms’ infrastructural extensions into society and climate governance be curbed? How can societies imagine alternative models that redistribute power and accountability? The Platform Governance Research Network conference emboldened the community to better tackle these and other pressing questions. We cannot help but eagerly await the next chapter in 2024!


Charis Papaevangelou is a postdoctoral researcher at the Institute for Information Law at the University of Amsterdam. He holds a PhD in the political economy of online platform governance, which he recently completed at the University of Toulouse. His research expertise lies at the nexus of media, political, cultural, and social sciences.

Simran Agarwal is a doctoral researcher at the LabEx ICCA affiliated with the Université Sorbonne Paris Nord, France. Her thesis research looks at platformization as a form of governance, and its impacts on the digital news industry in India. Her research interests include media policy and regulation, platform governance, and the political economy of communication.
