Public Interest and Media Infrastructures: Regulating the Technology Companies that Make ‘Pictures in Our Heads’

Mike Ananny
June 2020

 
 

Introduction: The Pictures in Our Heads

Writing shortly after the end of the First World War—when professional journalism was in its infancy and publishers were discovering that the U.S. government had lied to them (and the public) for much of the war—Walter Lippmann formulated a key challenge that is still with us. He observed that “the world that we have to deal with politically is out of reach, out of sight, out of mind. It has to be explored, reported, and imagined.”[1] Given that our political worlds now exist at scales beyond our direct experience—within privately owned and technologically obscured social media platforms—how can we better explore, report, and imagine public life by holding these platforms publicly accountable?

Lippmann’s insight was that people were increasingly living in complex webs of relations that were big and powerful and complicated, impossible to escape, and deeply dependent on media. Journalists’ job was to create the “pictures inside the heads of…human beings, the pictures of themselves, of others, of their needs, purposes, and relationships”[2] that could make people feel, know, and act in particular ways. What irked Lippmann was that, in its effort to secure public support for the war, the U.S. government had not only fabricated casualty counts and lied about battles but, in doing so, it had also manipulated people into feeling solidarity, outrage, and patriotism. It had helped to form false images of the world that the state then co-opted and used for its purposes. This was a double betrayal because the media could not be a good-faith source for individual learning, nor could it be a vehicle for discovering and managing shared social conditions. If the media lied to you, how could you trust what you or your friends thought? And why would you ever willingly sacrifice anything in the service of a larger, collective, public good, based on what the media told you?

These questions sparked decades of research into media systems. How should media be created, acted on, funded, professionalized, and held publicly accountable? Who was more or less susceptible to media manipulations? What exactly counts as “the media,” whose stories are being told, and who has access to publishing power? Although decades of Communication research tell us that propaganda and social manipulation are complex social and cultural processes that cannot be reduced to mere information transmission,[3] “the media” continues to be an ill-defined, fragile meaning-making system that makes and remakes its philosophical and professional moorings anew in every era. In other words, Lippmann’s insights remain true.

Today, the “imagined communities”[4] that social media create are lenses we use to know how to think, feel, and act. Journalists and audiences alike[5] look to these platforms to understand issues as varied as climate crises, food supply, whether to go to war, or what it means to be “Canadian” or “European.” Writing in May 2020, I find that the pandemic makes this point especially powerfully. For millions of people, their beliefs and behaviours—whether to wear a mask, get tested for Covid-19, socially isolate themselves, trust medical experts—depend not only upon where they live and the policies of their local governments, but also on the relationships and algorithms of social media platforms.[6]

The key difference between Lippmann’s era and today, though, is that we have a very different media system, with very different power dynamics. The media is still a complex mix of people, economic interests, professional values, regulatory frameworks, and ideals of public life. But it also includes a new and largely inscrutable set of privately controlled and proprietary computational systems driven by advertising markets and optimized through machine learning algorithms. Variously called the “hybrid media system”[7] or the “networked press,”[8] today’s media system includes not only the traditional newsroom personnel, editorial judgments, and publishing channels. It also contains a messy mix of training data, user clicks, advertising metrics, surveillance systems, machine learning models, and recommendation engines. These systems are motivated by data. Indeed, at the most basic level, these systems are data—and we are that data.[9]

This media system needs a constant stream of data in order to create pictures for our heads that are personalized, predictable, and profitable. Unlike in Lippmann’s era, scale is not a problem for the media to overcome or an unfortunate side-effect of modern life. On the contrary, scale is a resource for these media systems to extract and harness, a key method for creating tailored, instantaneous images of the world that can be bought and sold.[10] These systems buy and sell people by surveilling, commodifying, and shaping their behaviours—valuing some people more than others because their data are worth more than others.[11]

The complex systems that produce media use algorithmic processes to convert “big data” into stable stories. These are the stories that drive individual beliefs, fuel commerce, and organize collective action.[12] Today’s “liars” are not (only) states that deceive individuals and manufacture solidarity. Rather, their power is more subtle. They claim that they do not “lie” to you, but simply show you what you and others have said. They position themselves as neutral mirrors that simply reflect the best and worst of society. If deception and co-optation happen, it’s because of what you do, not what they do.[13]

Publicly and precisely critiquing how platforms position themselves is critical to the future of platform governance. Regulators must squarely tackle the narrative of disinterest, user service, objectivity, and voluntary participation that platforms repeat. But doing so means delving into the details of how platforms work and understanding them much better than we currently do. It means conceptualizing platforms not as channels or broadcasters but as private, for-profit, invisible infrastructures of human values and computational power that even their creators often do not fully understand. Indeed, although they bear some resemblance to earlier media institutions, their form is unprecedented and overwhelmingly motivated by financial, not editorial, priorities.[14]

The pictures that these infrastructures create “work” if they are economically viable, culturally palatable, and politically plausible. Because multiple realities have the potential to keep users and advertisers engaged, platforms try to create as many realities as possible,[15] outsourcing the consequences of those realities to the societies that they say their technologies simply reflect. Indeed, by adopting a (profitable) marketplace model of truth in which the truth is seen to be “produced by its collision with error,” platforms reject anything other than a libertarian image of free speech.[16] This hands-off approach aligns well with platforms’ appetite for data at scale. More data bring more truth, faster.

Lippmann’s concerns about lies and manipulation remain valid, but I suspect he would be shocked at platforms’ general disregard for the very idea of stable, human-created truths, and the relatively small-scale investments that they have made in fact-checking[17] and self-governance,[18] which their public relations staffs celebrate as public commitments. Recent investigative journalism tells us that platforms know that they damage public life, but they will do nothing that upsets their business models or takes responsibility for the reality-shaping power of their algorithms.[19]

So, if social media platforms are not motivated by truth seeking, shared reality, and collective action based on knowledge and expertise—key ingredients of healthy public life—then how can we reform them toward more public ends? As a small number of powerful technology companies increasingly controls the conditions under which people and computational systems make, interpret, circulate and act upon information, how can we rescue the idea of collective, publicly accountable self-governance through communication? To address this question, we need two types of progress (the first of which is the focus of the rest of this essay).

First, the public needs far more sophisticated mastery of the inner workings and impacts of today’s media systems. If regulators could better understand the complexities, assumptions, and interconnections that shape how online news is made, commodified, and acted upon, they would be much better equipped to protect the public interest. To better implement and evaluate media policy, I want to suggest that regulators adopt and deploy the concept of “infrastructure,” explained below.

Second, although not the focus of this essay, progress on these questions requires significant political will. Technology industries often respond to regulatory threats by claiming that:

  • Their systems use proprietary knowledge that they cannot publicly disclose;

  • Their business models require large-scale data harvesting;

  • People are unwilling to pay for services that are currently underwritten by their data; and

  • Encryption technologies, transparency commitments, and controlled data disclosure obviate the need for public oversight.

They defend themselves through a mix of trade secrets, economic claims, promises of self-regulation, and technological solutionism, forestalling real public oversight.


Media as Infrastructure

In his era, Lippmann could squarely frame the problem of the media as one of unsophisticated journalists parroting elite politicians to citizens who were too busy or ignorant to resist manipulation and do their civic duties. The answer, he suggested, was better, more objective, “scientized” journalism[20] motivated by a “faith in ‘facts,’ a distrust of ‘values,’ and a commitment to their segregation.”[21] Though often tempered with calls for “mature subjectivity”[22] that reject the possibility of a truly disinterested and neutral reporter, this belief in objectivity still dominates journalism today.

In many ways, “mature subjectivity” is not a bad ideal image of the media. The challenge is that the media systems of 2020 look radically different from those of Lippmann’s time. Today, it is more accurate to say that news and information emerge from media infrastructures that include not only a dwindling number of professional journalists and news publishers, but also:

  • machine learning algorithms and international workforces that rank and moderate content;[23]

  • fact-checking partnerships between news organizations and technology companies;[24]

  • online political parties;[25]

  • election law;[26]

  • voter management platforms;[27]

  • digital advertising markets;[28]

  • self-governance initiatives like Facebook’s Oversight Board;[29] and

  • automated content-producing social media bots.[30]

How can we make sense of this mix so that it has a shape and structure that can be regulated in the public interest? What does “mature subjectivity” mean when platforms persistently describe themselves as technology companies[31] with no editorial position other than a desire to provide “the ability for anyone to talk about what matters to them”?[32] We need an approach to platform governance that captures the layers of “relationships structuring interactions between key parties in today’s platform society, including platform companies, users, advertisers, governments, and other political actors.”[33]

To see and influence these layers and relations, one especially promising approach is to use the concept of “infrastructure,” an increasingly prevalent idea that Communication, Media Studies, and Science and Technology Studies scholars use to trace complex intersections between people and computational systems. Infrastructures are the relationships that run underneath the more visible system components that most people see and use. Infrastructure is usually taken for granted, grows out of specialized work cultures, depends upon norms and unspoken knowledge, and is invisible until it breaks down.[34] Many scholars foreground ideas of architecture and infrastructure in their studies of platform power and internet policy,[35] and some are beginning to use the idea to frame empirical fieldwork on analytics dashboards,[36] fact-checking tools,[37] distribution channels,[38] internet protocols,[39] advertising technologies,[40] and technological affordances[41] that form the invisible, political infrastructural backbones of online content.

Infrastructures are powerful because they depict people and stakes in new ways. At first pass, they look like boring, messy, technical “middle layers” where only engineers work. But because infrastructures are where important decisions are made, they are the most promising and most underexploited places for regulation to have an impact.

Some people focus on parts of infrastructure that are essential, but that few will ever see directly. For example, consider the Facebook engineers who tweak the algorithms that make News Feed advertising recommendations. Most people see the advertisements, but never see the training data, rule structures, machine learning systems, and test cases that place the algorithms there. But if you are one of those engineers, one or more of those things is your focus. That is your infrastructure, and you have a sophisticated set of practices, cultures, norms, and metrics that structure your work. You may have a more or less sophisticated understanding of how your work connects to the larger platform and, indeed, you may be better able to do your work if you limit your focus to your layer of the infrastructure and let your bosses and colleagues worry about the other layers. Now, if you care about regulating advertising systems, you need a detailed understanding of that part of the infrastructure. You need to focus on the practices, cultures, norms, and metrics of those engineers. Otherwise, you will stay at the level of an infrastructure user, never fully appreciating what is taken for granted, which knowledge is privileged, why exceptions are made, and who has power within the cultures of advertising infrastructure.[42]

Likewise, if you work at Twitter on the system used to report offensive content, you are intimately familiar with categories of speech, company policies, and user penalties that most Twitter users never experience directly. But if you have had your account suspended for some violation, you very quickly care about otherwise invisible and seemingly boring infrastructure: the language used to describe violations, the algorithms that flag content, the training data that teaches machine learning algorithms, the working conditions of content moderators, the appeal mechanisms, and how your case is judged similar to another.

These are just two examples of systems that regulators know exist but rarely seem to see as infrastructures ripe for public oversight.


Infrastructure Concepts for Regulating Media

It is incredibly difficult to regulate new infrastructures. They are unstable and hard to centre as bounded objects of concern. Companies often do not acknowledge that they exist, or they minimize them as “just” boring, technical tools that are whatever people want them to be.[43] Critics of infrastructural regulation will also balk at rules that are too specific, that “compress” values into particular technologies.[44]

This is where the concept of infrastructure can be helpful. It rejects simple distinctions between user and tool. It instead focuses on relations among people and materials, humans and computation.[45] But how might regulators centre these relations and make them objects of public oversight?

The literature on infrastructural concepts is too large to be summarized here, but I want to focus on three infrastructural concepts (categories, probabilities, and exceptions) that regulators might consider as opportunities for oversight. To be sure, the academic literature cannot depict these with exactly the framings that regulators need to create actionable, measurable policy instruments, but I offer them here with the goal of bringing sociotechnical scholarship closer to policy design.

Categories[46]

Categories are crucial parts of infrastructures because they sort people and data into predictable units that can be aggregated, combined, and analyzed. Platforms need words like “false news”, “misinformation”, “fake stories”, “inauthentic content”, “misleading content”, “politician”, “election”, “engagement”, “like”, and “friend” to have stable meanings. Platforms’ definitions of these words become baked into their policies, algorithms, monetization strategies, and public defences. One of the journalists I interviewed who works with Facebook’s fact-checking partnership said that the word “popularity” was never defined, even though it figured heavily in the dashboard that the partnership used to organize content. “We’ve asked [Facebook] a hundred ways to Sunday what ‘popularity’ means. We don’t know the mechanism they use to determine popularity.”[47] Facebook owned the word “popularity” because the word’s stability and predictability were key to keeping its fact-checking infrastructure of algorithms and fact-checkers stable and predictable. If that word became contestable or its politics became too apparent, it would harm Facebook’s operations and undermine its business model. Companies see unstable and diverse categories as risks to be minimized.

To regulators: look for and challenge the public power of the seemingly boring, obvious, and incontrovertible categories that platforms use to stabilize their infrastructures.

Probabilities[48]

Platforms need their infrastructures to behave predictably, knowing which outcomes are more likely, which successes are probably achievable, and how likely errors will be. Probability is a way of governing scale—a way to turn massive amounts of data, nearly instantaneous actions, and highly varied personal behaviours into stable actuarial possibilities. Facebook’s Monika Bickert acknowledges that a “company that reviews a hundred thousand pieces of content per day and maintains a 99% accuracy rate may still have up to a thousand errors.”[49] Twitter’s Del Harvey says “if you’re talking about a billion tweets, and everything goes perfectly right 99.999% of the time, then you’re still talking about 10,000 tweets where everything might not have gone right.”[50] And when Facebook partnered with U.S. news and fact-checking organizations to fight misinformation, it celebrated that it was able to “cut future views by more than 80%” of content that fact-checkers had labelled as false.
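The arithmetic behind these statements is simple but worth making explicit: at platform scale, even tiny error rates produce large absolute numbers of mistaken decisions, and a large percentage reduction still leaves many harms in place. Here is a minimal illustrative sketch in Python (my own; the function name and the baseline view count are hypothetical, and nothing here reflects any platform’s actual code or data) that reproduces the figures quoted above.

# Illustrative arithmetic only: the figures come from the quotes above,
# not from any platform's internal systems.

def expected_errors(volume: int, accuracy: float) -> int:
    """Expected number of mistaken decisions for a given volume and accuracy rate."""
    return round(volume * (1 - accuracy))

print(expected_errors(100_000, 0.99))           # Bickert: roughly 1,000 errors per day
print(expected_errors(1_000_000_000, 0.99999))  # Harvey: roughly 10,000 tweets

# A cut in views "by more than 80%" still leaves roughly one view in five.
baseline_views = 1_000_000                      # hypothetical baseline figure
print(round(baseline_views * (1 - 0.80)))       # roughly 200,000 views remain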

Once you look for them, probability, chance, likelihoods, error rates, and actuarial calculations are fundamental to how platforms operate. Probability is the key instrument for governing scale, but it is largely ignored by regulators. Why is it sufficient to reduce views of misinformation by 80%? How is the other 20% distributed? Is this percentage an artifact of machine learning algorithms that have been judged “good enough” to deploy? How is the labour of training these probabilistic systems distributed among vulnerable populations? How are false positives and false negatives distributed and who must bear the burden of their correction?

To regulators: delve more deeply into the probabilistic machinery of platforms, ask whose interests error rates serve, and block platforms from releasing products that fail too often and that systematically harm the weakest.

Exceptions[51]

Although platforms have long stated their policies and community standards and have recently started formalizing these principles and appeals processes into self-regulatory bodies,[52] they have also reserved the sole right to make exceptions to their own rules. Some of these exceptions are famous. After the Norwegian Prime Minister posted to her page a Pulitzer Prize-winning photo that Facebook had censored, the company said that because of its status “as an iconic image of historical importance, the value of permitting sharing outweighs the value of protecting the community by removal, so we have decided to reinstate the image.”[53] Google Play’s Books content policy states that it “may make exceptions to these policies based on artistic, educational, historical, documentary, or scientific considerations, or where there are other substantial benefits to the public.”[54] And Twitter lists an extensive set of exceptions to its moderation policies—even with exceptions to the exceptions—saying that “there are certain cases where it may be in the public’s interest to have access to certain Tweets, even if they would otherwise be in violation of our rules.”[55], [56] Most recently, Twitter made an exception to its public figure exception and publicly fact-checked several of President Trump’s tweets,[57] setting up a debate about which people and circumstances warrant exceptions to exceptions.

Setting aside the thin definitions of “public interest” that platforms usually offer—again, it is to their advantage to leave the phrase ambiguous and control its meaning—platforms exercise power by creating policies, creating exceptions to those policies, and selectively applying those exceptions when circumstances warrant. As Schauer puts it, the power to manage exceptions is the “power both to change rules and to avoid their constraints.”[58] While self-regulating policies give platforms the strategic benefit of seeming to have rules that anticipate and manage outcomes responsibly, exceptions give them the added strategic benefit of changing their minds and seeming responsive to new circumstances and contexts. Their thin definitions of “public interest” can easily persist in this space of rules and exceptions.

To regulators: carefully and critically scrutinize platforms’ thin definitions of public interest and challenge their strategic use of exceptions as ways to simultaneously enact, apply, and ignore self-regulating policy.

Conclusion

If we return to Lippmann’s concern about the media systems that create the “pictures in our heads” through this infrastructural lens, we can start to see new and powerful ways to regulate the technology companies. We can see media not as channels for delivering content but as relationships among people and computational technologies that make, distribute, interpret, and act upon the stories we use to make public life. We can better interrogate the claims of platforms—challenging the words they use, the errors they tolerate, and the exceptions they make or refuse to make. We can more critically challenge their usual excuses (“we don’t create content”, “we’re just computer scientists”, “we make no human interventions”) and create media governance that forces technology companies to enact—in their infrastructures—a public service mandate.[59]

There is a significant barrier, though, to this dream of public interest social media. Most of the infrastructures that power social media are far too central to the ideologies and business models of technology companies for those companies to willingly open them to scrutiny. If they were to acknowledge that their machine learning algorithms, artificial intelligence models, and recommendation systems are actually driven by their values and goals (and are not simply objective mirrors of society), then they would have to out themselves as interested, ideologically driven actors, not neutral technologists implementing common sense norms. And they would have to acknowledge that their business models aim to shape people’s desires, not simply fulfill them. They would have to show us how they design their infrastructures to fuel our emotions, exacerbate our divisions, and get us to spend money.

But even if they were to acknowledge how their ideological positions and economic motivations define their practices, we would still be stuck trying to access their infrastructures. Facebook, for example, shows how clearly it understands the power of its controversial and secret infrastructures by distracting the research community with an Oversight Board with limited scope, and by giving a small set of academics limited access to its server data, long after it said it would.

Infrastructural oversight in the public interest will not be easy. Regulators will have to see these interconnections between people and systems, understand their power and public significance, and exert political will to force technology companies to give researchers and journalists the access they need to create a better public life.

The power struggles needed to create this change will be real and controversial. They will mean tackling head-on what infrastructure scholar Lisa Parks calls “the politics of infrastructural intelligibility”: those who understand infrastructures and their power best have the most to gain by keeping them secret, mysterious and private.

The good news is that there is a generation of sociotechnical scholars ready to do the work and create a better public life. They just need the support and political courage of regulators willing to create change.  


Endnotes

[1] Lippmann, W. (1922). Public opinion. New York, NY: Free Press, p.18.

[2] Lippmann, W. (1922). Public opinion. New York, NY: Free Press, p.18.

[3] Jack, C. (2019). Wicked content. Communication, Culture and Critique.

[4] Anderson, B. (1983). Imagined communities (Revised ed.). London, UK: Verso, p.62.

[5] McGregor, S. C. (2019). Social media as public opinion: How journalists use social media to represent public opinion. Journalism.

[6] Holtz, D., Zhao, M., Benzell, S. G., Cao, C. Y., Rahimiana, M. A., Yang, J., Aral, S. (2020). Interdependence and the cost of uncoordinated responses to COVID-19.

[7] Chadwick, A. (2017). The hybrid media system: Politics and power (2nd ed.). Oxford, UK: Oxford University Press.

[8] Ananny, M. (2018a). Networked press freedom: Creating infrastructures for a public right to hear. Cambridge, MA: MIT Press.

[9] Cheney-Lippold, J. (2017). We are data: Algorithms and the making of our digital selves. New York, NY: NYU Press; Koopman, C. (2019). How we became our data. Chicago, IL: University of Chicago Press.

[10] Couldry, N., & Mejias, U. A. (2019). The costs of connection: How data is colonizing human life and appropriating it for capitalism. Stanford, CA: Stanford University Press.

[11] Benjamin, R. (2019). Race after technology. London, UK: Polity.

[12] Crawford, K. (2013, May 9). Think again: Big data. Foreign Policy.

[13] Gillespie, T. (2018a). Custodians of the internet: Platforms, content moderation, and the hidden decisions that shape social media. New Haven, CT: Yale University Press.

[14] Elsewhere I have argued that “sometimes [platforms] are like cities, newsrooms, post offices, libraries, or utilities — but they are always like advertising firms” (Ananny, 2019b, n.p.).

[15] Silverman, C. (2020, May 22). The information apocalypse is already here, and reality is losing. BuzzFeed News.

[16] Wu, T. (2018). Is the First Amendment obsolete? In L. C. Bollinger & G. R. Stone (Eds.), The free speech century. Oxford, UK: Oxford University Press.

[17] Ananny, M. (2018b, April 4). The partnership press: Lessons for platform-publisher collaborations as Facebook and news outlets team to fight misinformation. Tow Center for Digital Journalism, Columbia University.

[18] Douek, E. (2019). Facebook's 'Oversight Board:' Move fast with stable infrastructure and humility. North Carolina Journal of Law & Technology, 21(1).

[19] Horwitz, J., & Seetharaman, D. (2020, May 26). Facebook executives shut down efforts to make the site less divisive. The Wall Street Journal.

[20] Hallin, D. C. (1985). The American news media: A critical theory perspective. In J. Forester (Ed.), Critical theory and public life. Cambridge, MA: The MIT Press, p.130.

[21] Schudson, M. (1978). Discovering the news: A social history of American newspapers. New York, NY: Basic Books, p.6.

[22] Schudson, M. (1978). Discovering the news: A social history of American newspapers. New York, NY: Basic Books, p.192.

[23] Gray, M. L., & Suri, S. (2019). Ghost work. Boston, MA: Houghton Mifflin Harcourt; Roberts, S. T. (2019). Behind the screen. New Haven, CT: Yale University Press.

[24] Ananny, M. (2018b, April 4). The partnership press: Lessons for platform-publisher collaborations as Facebook and news outlets team to fight misinformation. Tow Center for Digital Journalism, Columbia University.

[25] Kreiss, D., & McGregor, S. C. (2017). Technology firms shape political communication: The work of Microsoft, Facebook, Twitter, and Google with campaigns during the 2016 U.S. presidential cycle. Political Communication.

[26] Cohen, J. E. (2020). Tailoring election regulation: The platform is the frame. Georgetown Law Technology Review, 4(1).

[27] McKelvey, F. (2019). Cranks, clickbait and cons: On the acceptable use of political engagement platforms. Internet Policy Review, 8(4).

[28] Braun, J. A., & Eklund, J. L. (2019). Fake news, real money: Ad tech platforms, profit-driven hoaxes, and the business of journalism. Digital Journalism, 7(1).

[29] Douek, E. (2019). Facebook's 'Oversight Board:' Move fast with stable infrastructure and humility. North Carolina Journal of Law & Technology, 21(1).

[30] Dubois, E., & McKelvey, F. (2019). Political bots: Disrupting Canada's democracy. Canadian Journal of Communication, 44(2); Gorwa, R., & Guilbeault, D. (2018). Unpacking the social media bot: A typology to guide research and policy. Policy & Internet.

[31] Napoli, P. M., & Caplan, R. (2017). Why media companies insist they're not media companies, why they're wrong, and why it matters. First Monday, 22(5).

[32] Twitter. (2019, June 27). Defining public interest on Twitter. Twitter Safety.

[33] Gorwa, R. (2019). What is platform governance? Information, Communication & Society, p.1.

[34] Star, S. L., & Bowker, G. C. (2006). How to infrastructure. In L. A. Lievrouw & S. M. Livingstone (Eds.), Handbook of new media: social shaping and social consequences of ICTs. London, UK: Sage Publications.

[35] DeNardis, L. (2012). Hidden levers of internet control: An infrastructure-based theory of internet governance. Information, Communication & Society, 15(5); Nieborg, D. B., & Poell, T. (2018). The platformization of cultural production: Theorizing the contingent cultural commodity. New Media & Society, 20(11); Plantin, J.-C., Lagoze, C., Edwards, P. N., & Sandvig, C. (2016). Infrastructure studies meet platform studies in the age of Google and Facebook. New Media & Society; van Dijck, J., Poell, T., & de Waal, M. (2018). The platform society. Oxford, UK: Oxford University Press; van Schewick, B. (2010). Internet architecture and innovation. Cambridge, MA: MIT Press.

[36] Petre, C. (2018). Engineering consent. Digital Journalism, 6(4).

[37] Ananny, M. (2018b, April 4). The partnership press: Lessons for platform-publisher collaborations as Facebook and news outlets team to fight misinformation. Tow Center for Digital Journalism, Columbia University; Graves, L., & Anderson, C. W. (2020). Discipline and promote: Building infrastructure and managing algorithms in a ‘structured journalism’ project by professional fact-checking groups. New Media & Society, 22(2).

[38] Braun, J. A. (2015). This program is brought to you by: Distributing television news online. New Haven, CT: Yale University Press.

[39] McKelvey, F. (2018). Internet daemons. Minneapolis, MN: University of Minnesota Press.

[40] Braun, J. A., & Eklund, J. L. (2019). Fake news, real money: Ad tech platforms, profit-driven hoaxes, and the business of journalism. Digital Journalism, 7(1).

[41] Bimber, B., & Gil de Zúñiga, H. (2020). The unedited public sphere. New Media & Society, 22(4).

[42] Braun, J. A., & Eklund, J. L. (2019). Fake news, real money: Ad tech platforms, profit-driven hoaxes, and the business of journalism. Digital Journalism, 7(1).

[43] Gillespie, T. (2010). The politics of 'platforms'. New Media & Society, 12(3).

[44] Schauer, F. (2005). Towards an institutional first amendment. Minnesota Law Review, 89.

[45] Seaver, N. (2017). Algorithms as culture: Some tactics for the ethnography of algorithmic systems. Big Data & Society, 4(2).

[46] This framing of categories and platforms depends heavily on Ananny (2020).

[47] Ananny, M. (2018b, April 4). The partnership press: Lessons for platform-publisher collaborations as Facebook and news outlets team to fight misinformation. Tow Center for Digital Journalism, Columbia University.

[48] This framing of probabilities and platforms depends heavily on Ananny (2019a).

[49] Bickert, M. (2018). Defining the boundaries of free speech on social media. In L. C. Bollinger & G. R. Stone (Eds.), The free speech century. Oxford, UK: Oxford University Press, p.269.

[50] Cited in Gillespie, T. (2018b). Platforms are not intermediaries. Georgetown Law Technology Review, 2(2), p.198.

[51] This framing of exceptions and platforms depends heavily on Ananny and Gillespie (2016).

[52] Douek, E. (2019). Facebook's 'Oversight Board:' Move fast with stable infrastructure and humility. North Carolina Journal of Law & Technology, 21(1).

[53] Levin, S., Wong, J. C., & Harding, L. (2016, September 9). Facebook backs down from 'napalm girl' censorship and reinstates photo. The Guardian.

[54] Google. (n.d.). Publisher content policies for Google Play books.

[55] The company somewhat tautologically defines “public interest”: “We consider content to be in the public interest if it directly contributes to understanding or discussion of a matter of public concern” (Twitter, n.d.).

[56] Twitter. (2019, June 27). Defining public interest on Twitter. Twitter Safety.

[57] Dwoskin, E. (2020, May 27). Trump lashes out at social media companies after Twitter labels tweets with fact checks. The Washington Post.

[58] Schauer, F. (1991). Exceptions. The University of Chicago Law Review, 58(3), p.873.

[59] Napoli, P. M. (2019). Social media and the public interest. New York, NY: Columbia University Press. 


References

Ananny, M. (2018a). Networked press freedom: Creating infrastructures for a public right to hear. Cambridge, MA: MIT Press.

Ananny, M. (2018b, April 4). The partnership press: Lessons for platform-publisher collaborations as Facebook and news outlets team to fight misinformation. Tow Center for Digital Journalism, Columbia University. Retrieved from https://www.cjr.org/tow_center_reports/partnership-press-facebook-news-outlets-team-fight-misinformation.php/

Ananny, M. (2019a, August 21). Probably speech, maybe free: Toward a probabilistic understanding of online expression and platform governance. Knight First Amendment Institute at Columbia University, "Free Speech Futures." Retrieved from https://knightcolumbia.org/research/free-speech-futures-reimagining-the-first-amendment-in-the-digital-age

Ananny, M. (2019b, October 10). Tech platforms are where public life is increasingly constructed, and their motivations are far from neutral. Nieman Lab. Retrieved from https://www.niemanlab.org/2019/10/tech-platforms-are-where-public-life-is-increasingly-constructed-and-their-motivations-are-far-from-neutral/

Ananny, M. (2020). Making up political people: How social media create the ideals, definitions, and probabilities of political speech. Georgetown Law Technology Review, 4(1).

Ananny, M., & Gillespie, T. (2016). Public platforms: Beyond the cycle of shocks and exceptions. Paper presented at the The Platform Society, Oxford, UK. http://blogs.oii.ox.ac.uk/ipp-conference/sites/ipp/files/documents/anannyGillespie-publicPlatforms-oii-submittedSept8.pdf

Anderson, B. (1983). Imagined communities (Revised ed.). London, UK: Verso.

Benjamin, R. (2019). Race after technology. London, UK: Polity.

Bickert, M. (2018). Defining the boundaries of free speech on social media. In L. C. Bollinger & G. R. Stone (Eds.), The free speech century (pp. 254-271). Oxford, UK: Oxford University Press. 

Bimber, B., & Gil de Zúñiga, H. (2020). The unedited public sphere. New Media & Society, 22(4), 700-715. doi:10.1177/1461444819893980

Braun, J. A. (2015). This program is brought to you by: Distributing television news online. New Haven, CT: Yale University Press. 

Braun, J. A., & Eklund, J. L. (2019). Fake news, real money: Ad tech platforms, profit-driven hoaxes, and the business of journalism. Digital Journalism, 7(1), 1-21. doi:10.1080/21670811.2018.1556314

Chadwick, A. (2017). The hybrid media system: Politics and power (2nd ed.). Oxford, UK: Oxford University Press.

Cheney-Lippold, J. (2017). We are data: Algorithms and the making of our digital selves. New York, NY: NYU Press.

Cohen, J. E. (2020). Tailoring election regulation: The platform is the frame. Georgetown Law Technology Review, 4(1).

Couldry, N., & Mejias, U. A. (2019). The costs of connection: How data is colonizing human life and appropriating it for capitalism. Stanford, CA: Stanford University Press.

Crawford, K. (2013, May 9). Think again: Big data. Foreign Policy. Retrieved from http://www.foreignpolicy.com/articles/2013/05/09/think_again_big_data?page=full

DeNardis, L. (2012). Hidden levers of internet control: An infrastructure-based theory of internet governance. Information, Communication & Society, 15(5), 720-738.

Douek, E. (2019). Facebook's 'Oversight Board:' Move fast with stable infrastructure and humility. North Carolina Journal of Law & Technology, 21(1). Retrieved from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3365358

Dubois, E., & McKelvey, F. (2019). Political bots: Disrupting Canada's democracy. Canadian Journal of Communication, 44(2), 27-33. doi:https://doi.org/10.22230/cjc.2019v44n2a3511

Dwoskin, E. (2020, May 27). Trump lashes out at social media companies after Twitter labels tweets with fact checks. The Washington Post. Retrieved from https://www.washingtonpost.com/technology/2020/05/27/trump-twitter-label/ 

Gillespie, T. (2010). The politics of 'platforms'. New Media & Society, 12(3), 347-364. 

Gillespie, T. (2018a). Custodians of the internet: Platforms, content moderation, and the hidden decisions that shape social media. New Haven, CT: Yale University Press.

Gillespie, T. (2018b). Platforms are not intermediaries. Georgetown Law Technology Review, 2(2), 198-216.

Google. (n.d.). Publisher content policies for Google Play books. Retrieved from https://support.google.com/books/partner/answer/1067634?hl=en

Gorwa, R. (2019). What is platform governance? Information, Communication & Society, 1-18. doi:10.1080/1369118X.2019.1573914 

Gorwa, R., & Guilbeault, D. (2018). Unpacking the social media bot: A typology to guide research and policy. Policy & Internet. doi:10.1002/poi3.184

Graves, L., & Anderson, C. W. (2020). Discipline and promote: Building infrastructure and managing algorithms in a ‘structured journalism’ project by professional fact-checking groups. New Media & Society, 22(2), 342-360.

Gray, M. L., & Suri, S. (2019). Ghost work. Boston, MA: Houghton Mifflin Harcourt.

Hallin, D. C. (1985). The American news media: A critical theory perspective. In J. Forester (Ed.), Critical theory and public life (pp. 121-146). Cambridge, MA: The MIT Press.

Holtz, D., Zhao, M., Benzell, S. G., Cao, C. Y., Rahimiana, M. A., Yang, J., … Aral, S. (2020). Interdependence and the cost of uncoordinated responses to COVID-19. Retrieved from https://mitsloan.mit.edu/shared/ods/documents/?PublicationDocumentID=7397

Horwitz, J., & Seetharaman, D. (2020, May 26). Facebook executives shut down efforts to make the site less divisive. The Wall Street Journal. Retrieved from https://www.wsj.com/articles/facebook-knows-it-encourages-division-top-executives-nixed-solutions-11590507499

Jack, C. (2019). Wicked content. Communication, Culture and Critique. doi:10.1093/ccc/tcz043

Koopman, C. (2019). How we became our data. Chicago, IL: University of Chicago Press.

Kreiss, D., & McGregor, S. C. (2017). Technology firms shape political communication: The work of Microsoft, Facebook, Twitter, and Google with campaigns during the 2016 U.S. presidential cycle. Political Communication, 1-23. doi:10.1080/10584609.2017.1364814

Levin, S., Wong, J. C., & Harding, L. (2016, September 9). Facebook backs down from 'napalm girl' censorship and reinstates photo. The Guardian. Retrieved from https://www.theguardian.com/technology/2016/sep/09/facebook-reinstates-napalm-girl-photo

Lippmann, W. (1922). Public opinion. New York, NY: Free Press.

McGregor, S. C. (2019). Social media as public opinion: How journalists use social media to represent public opinion. Journalism. doi:10.1177/1464884919845458

McKelvey, F. (2018). Internet daemons. Minneapolis, MN: University of Minnesota Press.

McKelvey, F. (2019). Cranks, clickbait and cons: On the acceptable use of political engagement platforms. Internet Policy Review, 8(4). doi:10.14763/2019.4.1439

Napoli, P. M. (2019). Social media and the public interest. New York, NY: Columbia University Press.

Napoli, P. M., & Caplan, R. (2017). Why media companies insist they're not media companies, why they're wrong, and why it matters. First Monday, 22(5). Retrieved from http://firstmonday.org/ojs/index.php/fm/article/view/7051

Nieborg, D. B., & Poell, T. (2018). The platformization of cultural production: Theorizing the contingent cultural commodity. New Media & Society, 20(11), 4275-4292. doi:10.1177/1461444818769694

Petre, C. (2018). Engineering consent. Digital Journalism, 6(4), 509-527. doi:10.1080/21670811.2018.1444998

Plantin, J.-C., Lagoze, C., Edwards, P. N., & Sandvig, C. (2016). Infrastructure studies meet platform studies in the age of Google and Facebook. New Media & Society. doi:10.1177/1461444816661553 

Roberts, S. T. (2019). Behind the screen. New Haven, CT: Yale University Press.

Schauer, F. (1991). Exceptions. The University of Chicago Law Review, 58(3), 871-899.

Schauer, F. (2005). Towards an institutional first amendment. Minnesota Law Review, 89, 1256-1279.

Schudson, M. (1978). Discovering the news: A social history of American newspapers. New York, NY: Basic Books.

Seaver, N. (2017). Algorithms as culture: Some tactics for the ethnography of algorithmic systems. Big Data & Society, 4(2). doi:10.1177/2053951717738104

Silverman, C. (2020, May 22). The information apocalypse is already here, and reality is losing. BuzzFeed News. Retrieved from https://www.buzzfeednews.com/article/craigsilverman/coronavirus-information-apocalypse

Star, S. L., & Bowker, G. C. (2006). How to infrastructure. In L. A. Lievrouw & S. M. Livingstone (Eds.), Handbook of new media: social shaping and social consequences of ICTs (pp. 151-162). London, UK: Sage Publications.

Twitter. (n.d.). About public-interest exceptions on Twitter. Twitter Help Center. Retrieved from https://help.twitter.com/en/rules-and-policies/public-interest

Twitter. (2019, June 27). Defining public interest on Twitter. Twitter Safety. Retrieved from https://blog.twitter.com/en_us/topics/company/2019/publicinterest.html

van Dijck, J., Poell, T., & de Waal, M. (2018). The platform society. Oxford, UK: Oxford University Press.

van Schewick, B. (2010). Internet architecture and innovation. Cambridge, MA: MIT Press.

Wu, T. (2018). Is the First Amendment obsolete? In L. C. Bollinger & G. R. Stone (Eds.), The free speech century (pp. 272-291). Oxford, UK: Oxford University Press.


 