What if automated takedowns of harmful and hateful content do not work? Or at least, not to the extent proposed by social media platforms in response to right-wing extremisms?

In the wake of the January 6 insurrection at the US Capitol, Facebook promised to remove all content delegitimizing the 2020 election. But recent reporting suggests that Facebook’s past efforts failed to address the growth of a violent movement against the election. More troubling, Facebook’s recent content moderation efforts have done little to dampen public support for election disinformation, with 55 percent of Republicans believing that Biden’s election was “the result of illegal voting or election rigging.” Even as content moderation’s utility is questioned, the insurrection precipitated a new push to increase automated content moderation to combat right-wing extremists.

But right-wing extremisms cannot be automated away. The rush to remove content automatically overemphasizes technology as the most effective response to right-wing extremisms. Moreover, automation ignores the structural conditions that instigated these movements’ rise over the past decade. Overstating technology’s effects understates the problem of right-wing extremism.

We must reject what I call walled strategies of content moderation and regulation on social media platforms and look toward more holistic beginnings. A first step requires rejecting the framing of right-wing extremisms as some kind of “content” problem. Following an established and growing scholarly consensus,1 we should treat far-right extremisms and their associated disinformation as symptoms of historically situated, deep social structures and inequities.

Acknowledging the deeper contexts reframes the question of automated content moderation from isolation to responsibility. By rethinking ways to address right-wing extremisms on social media, platforms can include approaches that shed light on the historical origins of bigotry and hatred. As I conclude, I can only begin to understand what such an approach would mean in the context of Canada’s failed reconciliation with Indigenous peoples and the need to see media regulation as a small part of wider systemic change.

Where does the push for content moderation come from?

There is little doubt that insurrectionists organized on social media and shared calls to violence on January 6, but considerable doubt remains over whether social media provoked the situation. There is no consensus about how social media contribute to right-wing extremist radicalization2 or to democratic decay across liberal democracies (especially outside the United States and Europe3).

Despite their ambiguous role, social media platforms advocate for greater automated media regulation4 as part of the solution to right-wing extremisms.5 There is a certain media logic at work in how Facebook, for example, understands its users, and this logic shapes its own sense of agency. If, as C. W. Anderson writes, Facebook understands “human users as subjects who act, and whose acts are influenced by short-term communicative stimuli,” then short-term, micro-decisions about what content is appropriate seem an adequate solution to the problem.6 Automated content moderation is a perpetual, short-term action that never addresses the deeper issues. Though filtering can be effective (especially if harmonized with hate speech laws), measuring success through isolation lets the wider harms fester.

Here, I’d like to connect automated content moderation with a broader walled strategy of enclosure in modern politics.7 Though a wall is very different from an algorithm, both function as symbolic barriers that create the perception of harmony amid growing discord.

Walled sites, waning effects

Automated content moderation is a clear example of a walled strategy. According to Facebook’s latest transparency report, it removed 9.8 million pieces of content labelled Organized Hate. Facebook automatically found and flagged 98.6 percent of that content: a seeming success, until one considers that Organized Hate is up 612.5 percent since Facebook started tracking the category in Q4 2019.
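To see how both figures can be true at once, here is a minimal back-of-the-envelope sketch using only the numbers cited above; the assumption that the 612.5 percent increase is measured against the Q4 2019 quarter as a baseline is mine, made for illustration. Near-perfect automated flagging can coexist with a roughly sevenfold growth in the volume of Organized Hate content being removed.

```python
# Back-of-the-envelope arithmetic on the transparency-report figures cited above.
# How the quarters are compared is an assumption here, made only for illustration.

latest_removed = 9_800_000       # pieces of Organized Hate content removed, latest quarter
proactive_rate = 0.986           # share found and flagged automatically by Facebook
growth_since_q4_2019 = 6.125     # a 612.5 percent increase over the Q4 2019 baseline

flagged_automatically = latest_removed * proactive_rate                  # ~9.66 million
implied_q4_2019_baseline = latest_removed / (1 + growth_since_q4_2019)   # ~1.38 million

print(f"Flagged automatically:    {flagged_automatically:,.0f}")
print(f"Implied Q4 2019 baseline: {implied_q4_2019_baseline:,.0f}")
```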

Pursuing a walled strategy, moreover, seems as much about preserving brand safety as protecting users. Mike Davis, writing on Los Angeles’ 1980s urban strategy, noted that walled compounds were a “beachhead for gentrification,” marking valuable property amid widening inequity and systemic failure.8 Today’s for-profit social media are often called walled gardens. The dominance of these walled gardens is a gentrification of sorts, as Jessa Lingel argues. These digital enclosures protect intellectual property, lock in users,9 and, now, try to isolate themselves from hateful and harmful content. These enclosures, following Robyn Caplan, are part of a move toward a verified Internet that leverages a platform’s gatekeeping functions to decide what’s allowed within its domain.

Leaks, floods and bounded media policy

Walled strategies are not an effective response to right-wing extremisms on their own. We should expect that walls leak, following critical policy scholar Tess Lea, whose calls for policy ecology foreground how the past “soak[s] into ambient surroundings.”10 Forgotten or ignored histories of racism, bigotry, and misogyny add mounting pressure to these barriers, such that they collapse and fail, as can be seen in the latest scandal of Facebook auto-generating white supremacist pages. Context comes flooding back.

If walls are susceptible to breaches, what happens if we reject containment as a response to right-wing extremisms? Advocates for automated content regulation first have to acknowledge its place in fraught situations. Right-wing extremisms are neither new nor predominantly a social media problem. In North America, far-right extremisms are associated with generations of violent dispossession of women, queer, Black, Indigenous, immigrant, and other peoples of their lives, land, and human rights.11 As other essays in this series point out, right-wing extremisms draw on specific histories and cultures.

Reconciliation as a holistic, unachieved agenda

What is the responsibility of a social media company, or any media company, to address past injustice and inequity in the places where it operates? I look to the Truth and Reconciliation Commission of Canada (TRC) as one guide. Used across the globe, truth commissions have been a contested instrument for addressing historic trauma and violence through reforms, including recommendations for media industries.12

In 2015, the TRC released its final report documenting the trauma of a cultural genocide intended to “cause Aboriginal people to cease to exist as distinct legal, social, cultural, religious, and racial entities in Canada.”13 Telling the histories of survivors of this genocide is just one component of reconciliation in Canada. Reconciliation, according to the TRC, must involve “an awareness of the past, acknowledgement of the harm that has been inflicted, atonement for the causes, and action to change behavior.”14 Along with the report, the TRC released 94 calls to action toward reconciliation. Among these 94 recommendations, media reforms are a small part: only one component of the wider action needed.

My own role, as an educator, is to teach Canada’s history (Item 86). Learning about this genocide is still a journey for me. I was born and raised on the unsurrendered and unceded traditional lands of the Wolastoqiyik15 in a city that refused to grant land or status to Black settlers. My alma mater is named after an architect of Canada’s residential school system. I write now on the unceded lands of the Kanien’kehá:ka Nation. I now look to works like the Critical Disinformation Syllabus (cocreated with workshop participant Alice Marwick) and the Media Manipulation Playbook to rethink how I teach my field to better acknowledge the deeper structures that enable online extremisms and the very conditions of political communication in Canada.

As I ask myself about my role in reconciliation and reparations, the same, if not more, might be asked of the powerful companies running social media platforms. The TRC does not discuss social media, but its deep history and recommendations on media (Items 84–86) and businesses (Item 92) are relevant to developing more situated interventions. Interpreting the TRC might help better define online harmful speech or what content gets flagged.

In the United States, platforms should acknowledge how US history and government fostered right-wing extremisms. Facebook recently commissioned a Civil Rights Audit of the platform that called for specific programs for Facebook to prioritize its commitments to civil rights and address “organized hate against Muslims, Jews and other targeted groups on the platform.” Automated content moderation then becomes a smaller part of a deeper commitment to civil rights.

Advocates of automated regulation then must appreciate its limited role in addressing far-right extremisms and avoid isolation from the troubled worlds in which we find ourselves; or, in the words of inaugural poet Amanda Gorman, “So let us leave behind a country better than the one we were left with.”

Acknowledgements
Thanks to Mike Miller, Jason Rhody, Carrie Hamilton, and Rodrigo Ugarte at the SSRC for their support in bringing this essay to publication, as well as Tanner Mirrlees for very helpful feedback on a late draft. A final note of gratitude to Maura Conway and the rest of the participants for their care and commitment to a great workshop during a difficult time.



References:

1
For example, see Whitney Phillips and Ryan M. Milner, You Are Here: A Field Guide for Navigating Polarized Speech, Conspiracy Theories, and Our Polluted Media Landscape (Cambridge, MA: MIT Press, 2021), and Daniel Kreiss, review of Social Media and Democracy: The State of the Field, Prospects for Reform, ed. Nathaniel Persily and Joshua A. Tucker, International Journal of Press/Politics 26, no. 2 (March 2021): 505–512.
2
Maura Conway, “Determining the Role of the Internet in Violent Extremism and Terrorism: Six Suggestions for Progressing Research,” Studies in Conflict & Terrorism 40, no. 1 (2017): 77–98.
3
Ariadna Matamoros-Fernández and Johan Farkas, “Racism, Hate Speech, and Social Media: A Systematic Review and Critique,” Television & New Media 22, no. 2 (2021): 205–224.
4
Robert Hunt and Fenwick McKelvey, “Algorithmic Regulation in Media and Cultural Policy: A Framework to Evaluate Barriers to Accountability,” Journal of Information Policy 9 (2019): 307–335.
5
State actors, politicians, and civil society have also framed the problem of far-right extremisms as a content problem. For more context, see Tanner Mirrlees, “GAFAM and Hate Content Moderation: Deplatforming and Deleting the Alt-right,” in Media and Law: Between Free Speech and Censorship, eds. Mathieu Deflem and Derek M. D. Silva (Bingley: Emerald Publishing Limited, 2021), 81–97.
6
C.W. Anderson, “Fake News Is Not a Virus: On Platforms and Their Effects,” Communication Theory 31, no. 1 (2020): 42–61.
7
Wendy Brown, Walled States, Waning Sovereignty (New York: Zone Books, 2017), 10.
8
Mike Davis, City of Quartz: Excavating the Future in Los Angeles (New York: Verso, 1990), 240.
9
Princeton, NJ: Princeton University Press, 2018.
10
Tess Lea, Wild Policy: Indigeneity and the Unruly Logics of Intervention (Redwood City, CA: Stanford University Press, 2020), 30.
11
London: Palgrave Macmillan, 2019.
12
Lisa J. Laplante and Kimberly Theidon, “Truth with Consequences: Justice and Reparations in Post-Truth Commission Peru,” Human Rights Quarterly 29, no. 1 (2007): 228–250.
13
Truth and Reconciliation Commission of Canada, Honouring the Truth, Reconciling for the Future: Summary of the Final Report of the Truth and Reconciliation Commission of Canada, 2015, 1.
14
Truth and Reconciliation Commission of Canada, Honouring the Truth, Reconciling for the Future, 6–7.
15
Andrea Bear Nicholas, “The Role of Colonial Artists in the Dispossession and Displacement of the Maliseet, 1790s–1850s,” Journal of Canadian Studies 49, no. 5 (2015): 25–86.