Considering that President Donald Trump spent the last four years in the White House emboldening and reinforcing white supremacy, President Joe Biden’s overtures to address “domestic violent extremism” are a welcome relief. As we have learned in the aftermath of the storming of the US Capitol, social media platforms played an important role in assembling and coordinating an attack in which a confederation of nationalists, misogynists, fascists, and white supremacists participated.

Facebook—which continued to recommend white supremacist pages to its users even after the Christchurch terrorist attack in 2019—was a key platform for this activity, despite COO Sheryl Sandberg's denial of this fact. Of course, alt-tech platforms also played an important role. Despite marketing themselves as "free speech" platforms, services such as Gab and Parler provide an alternative infrastructure that serves as "recruitment and organizing sites for the far right."1 Indeed, Parler—a popular migration destination for those suspended by platforms like Twitter—hosted thousands of videos posted by the insurrectionists.

“While Parler was a platform tailored to the far right that made little effort to moderate the extreme content produced by its many violent users, our attention must remain on mainstream platforms.”

While Parler was a platform tailored to the far right that made little effort to moderate the extreme content produced by its many violent users, our attention must remain on mainstream platforms. For years, these platforms enabled extremist microcelebrities to accrue views, likes, and ad revenue;2 recommended white supremacist influencers to a wide range of users; and repeatedly allowed politicians, campaigns, lobby groups, and troll farms to purchase ads with racist content that civil society groups, including the NAACP and Anti-Defamation League, referenced in their 2020 call for a boycott on Facebook advertising. No doubt Facebook, Twitter, and YouTube have taken significant steps in addressing the rise of the far right and responding to calls for transparency and collaboration with civil society. That said, this progress has been too slow and marred by inconsistency.

In 2017, Ariadna Matamoros-Fernández proposed the term "platformed racism" to understand how platforms are themselves "amplifiers and manufacturers of racist discourse by means of their affordances and users' appropriation of them."3 However, many of the problems that Matamoros-Fernández identified—including the recommendation of racist content and the amplification of extremist narratives—remain unsolved today.

Platform racism?

Mainstream platforms still have not developed a robust response to right-wing extremism. Although they no longer provide an unencumbered venue for "free speech" that affirms entitlement to racial insults, they still contribute to racist structures through the very policies they design to address hate speech, extremism, and terrorist content. To date, platforms have declined to challenge the far right in order to avoid backlash from politicians and political parties and to protect the sizable audiences that share hate.

“In other words, it focuses on how mainstream platforms’ policies and governance strategies to counter the far right reinforce racism rather than challenge it.”

While the concept of platformed racism focuses on how platforms such as Facebook, Twitter, and YouTube amplify and circulate racist discourse, platform racism is a lens through which to critically interrogate how platforms reinforce and reproduce racist social structures. In other words, it focuses on how mainstream platforms' policies and governance strategies to counter the far right reinforce racism rather than challenge it. By adopting narrow, minimal definitions of far-right extremism and acting only on the most egregious exponents of white supremacy, platforms privilege these actors. This represents what Eduardo Bonilla-Silva refers to as "color-blind racism," or, more provocatively, "racism without racists."

Following Bonilla-Silva's logic, I do not claim that the platforms, the executives that run them, or the employees working there are themselves "racists." Bonilla-Silva argues that racism is not merely the acts of individual racists but also the "practices and mechanisms that reproduce racial inequality and white privilege."4 Upon investigating how platforms seek to govern right-wing extremist activity, it becomes clear that while platforms have taken action against the most egregious exponents of white supremacy, they have not addressed the software, policies, and infrastructures that amplify hate, racism, and white supremacy. Though the recommendation engines, advertising interfaces, and algorithmically curated feeds provided by these platforms have certainly enhanced the reach and influence of white supremacist messages, platforms have a choice in who is able to benefit from their software and infrastructure. Platforms make this choice through the policies that they design and enact. To date, these policies privilege far-right extremism rather than challenge it.

“This disregard for the context of racism, extremism, and hate combines with technological solutionism ‘to turn questions of politics and justice into operational problems’.”

In a recent article, Eugenia Siapera and Paloma Viejo-Otero build a convincing argument that Facebook's policies on hate speech minimize the persistence of racist social structures: "in insisting that blindness to racial history and the history of racism is the way to be fair, [Facebook] merely repeats the same unfair treatment to which racialized people have been subjected."5 This disregard for the context of racism, extremism, and hate combines with technological solutionism "to turn questions of politics and justice into operational problems."6

Indeed, this minimization of racism is one of the key principles on which racist structures rest today.7 The failure to name racism and white supremacy as specific, historically situated problems shows how platform policies continue to privilege the far right. After the Christchurch attack, in which a white supremacist murdered 51 Muslims in their houses of worship, Facebook committed to redirecting users searching for white supremacist content to Life After Hate, an organization that specializes in deradicalization services. However, a report released by the Tech Transparency Project in 2020 shows that such redirects occurred in only 6 percent of all cases, due in part to the minimal, narrow definitions of "white supremacy" that Facebook used. A civil rights audit released around the same time by Facebook corroborates these claims, stating that the company's policy on white supremacist content "is too narrow in that it only prohibits content expressly using the phrase(s) 'white nationalism' or 'white separatism,' and does not prohibit content that explicitly espouses the very same ideology without using those exact phrases." By adopting the most minimal definition of white supremacy in its design of policies—here, acting only on users who searched for that term specifically—Facebook's software and infrastructure enabled the persistence of white supremacists on its platform.

GIFCT and the minimization of racism

The minimization of racism is also evident in the Global Internet Forum to Counter Terrorism (GIFCT), an institution created in 2017 by Facebook, Microsoft, Twitter, and YouTube. GIFCT has one primary policy product, the "hash sharing database," designed to prevent visual content—images and videos—from being reposted and circulating widely. Essentially, the hash sharing database takes a video or an image and converts it into a "hash," a short string that represents the visual features of the content, like a fingerprint. By comparing one hash to another, it is possible to automatically detect duplicates. So, when an ISIS image or propaganda video is "hashed" and stored in the database, attempts to repost the image or video can be prevented. This builds on a technical solution for countering child sexual abuse that was initially developed at Microsoft.8
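To make the mechanism concrete, the sketch below shows one simple form of perceptual hashing, an "average hash" compared by Hamming distance. It is a minimal illustration in Python, not GIFCT's actual technology (which is not public); the hash scheme, function names, and matching threshold are assumptions for demonstration only.

```python
# A minimal sketch of perceptual hashing and duplicate detection.
# This is NOT GIFCT's actual system; the "average hash" scheme, function
# names, and threshold below are illustrative assumptions.
from PIL import Image  # requires the Pillow library


def average_hash(path: str, size: int = 8) -> int:
    """Compute a 64-bit perceptual hash: downscale, grayscale, then set a bit
    for each pixel that is at least as bright as the image's mean."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for pixel in pixels:
        bits = (bits << 1) | (1 if pixel >= mean else 0)
    return bits


def hamming_distance(a: int, b: int) -> int:
    """Count the bits in which two hashes differ."""
    return bin(a ^ b).count("1")


def matches_database(upload_hash: int, known_hashes: set, threshold: int = 5) -> bool:
    """Flag an upload if its hash is within `threshold` bits of any stored hash.
    The threshold of 5 bits is an arbitrary illustrative choice."""
    return any(hamming_distance(upload_hash, h) <= threshold for h in known_hashes)


# Hypothetical usage: hash a flagged image once, then screen new uploads.
# known_hashes = {average_hash("flagged_content.jpg")}
# if matches_database(average_hash("new_upload.jpg"), known_hashes):
#     ...  # block or queue the upload for review
```

Because the comparison tolerates small differences between hashes, reposts that are cropped, recompressed, or lightly edited can still be caught, which is the basic value of hash sharing across platforms.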

Currently, the majority of content hashes in GIFCT's database are jihadist in nature. This is because GIFCT relies on the UN Security Council's (UNSC) list of sanctioned extremist individuals and groups. There are no right-wing extremists on the UNSC's list. Consequently, GIFCT's protocols essentially create an exemption for right-wing extremism. Ostensibly, the UN sanctions list is used because of the difficulty of finding common ground among a range of companies in different countries on what can be considered "extremist." Yet the vast majority of GIFCT's members are headquartered in the United States, with New Zealand (Mega.nz) being the only exception.

“There is an extensive scientific debate on definitions of terrorism that GIFCT eschews for the politically easy, but ultimately minimal, definition of terrorist content.”

Of course, a globally accepted definition of what counts as "terrorist" content is elusive. Yet other UN bodies, such as UNDP, have robust and extensive definitions of hate speech, racism and xenophobia, extremism, and terrorism that recognize a wider range of threatening actors and contexts, and these could serve as an alternative basis for GIFCT's definitions. There is an extensive scientific debate on definitions of terrorism that GIFCT eschews for the politically easy, but ultimately minimal, definition of terrorist content. Finally, it is even more difficult to take arguments for the use of the UN sanctions list seriously considering that most of these platforms apply far stricter definitions of extremist content in their own policies. It should, however, be noted that as GIFCT has moved to engage more directly with civil society, it has recently called for a revision of these definitions and an improved taxonomy of what counts as terrorist content.

When GIFCT does address right-wing extremism, it does so only after the fact, with the minimal goal of taking down content associated with a specific attack rather than propaganda, glorification of racist violence, or radicalizing content (which it appears to do for jihadist content that fits those categories). Indeed, Susan Benesch writes, "There's not much point in prohibiting incitement to violence if you define it only in a rear-view mirror, after it leads to violence." The first time GIFCT addressed right-wing extremist content was after the Christchurch attack, during which the terrorist livestreamed his crime and users tried to share and upload copies of the footage. In response, GIFCT members arranged what is referred to as a Content Incident Protocol (CIP), a "[triage] system aiming to minimize the online spread of terrorist or violent extremist content resulting from a real-world attack on defenseless civilians/innocents." For GIFCT, right-wing extremism is currently defined only in a "rear-view mirror": It is recognized as such only when violence occurs. Jihadist content is hashed based on its ideology, features, and quality, while right-wing extremist content is hashed only in the wake of violence.

“GIFCT’s selective process of defining extremism prioritizes jihadism for the full force of pre-emptive content moderation at the same time that it exempts right-wing extremism from the power of its moderation technology.”

In its transparency report for 2019, GIFCT had a separate category solely dedicated to videos related to the Christchurch attack, titled "New Zealand Perpetrator Content," which constituted 0.6 percent of all hashes in the database. In its 2020 transparency report, this share increased significantly to 6.8 percent, and GIFCT added categories referring to attacks in Halle, Germany, and Glendale, Arizona, following CIPs for both. GIFCT should be commended for taking this necessary action, but it also reveals how the institution treats right-wing extremism (and in the case of Glendale, male supremacy) as a secondary concern: GIFCT's selective process of defining extremism prioritizes jihadism for the full force of pre-emptive content moderation at the same time that it exempts right-wing extremism from the power of its moderation technology.

These processes lead to a lighter touch on the far right, which means many far-right actors remain able to benefit from the audiences, amplification, and markets to which platforms provide access. It also means that the far-right media ecosystem has been able to develop rapidly in recent years,9 providing a key vector for the spread of racism, intolerance, hate, and disinformation.

Conclusion

In the wake of the insurrection that Donald Trump instigated, the major technology companies and platforms took aggressive action. Facebook and Twitter deplatformed him, and Facebook sent the case to its Oversight Board. Amazon ceased its provision of cloud servers to Parler. Twitter suspended 70,000 users associated with QAnon, and Telegram began shutting down white supremacist and neo-Nazi channels using its messaging and broadcasting services. Further, Facebook recently banned another set of organized violent social movements. However, the brief look at Facebook and GIFCT above demonstrates how their minimization of racism maintains the privilege of the far right to build audiences, revenues, and networks using racial insult, hate, and falsehood. These minimal policies enabled the far right to maintain a presence on mainstream platforms. As we know, the confederation of insurrectionists benefited from this privilege and used these platforms to disseminate their white supremacist ideologies, spread Trump's "big lie," and organize an assault on the US Capitol.

“The narrow definitions of white supremacy that platforms use are shaped by the limitations on the understanding of racism that characterize discourse and policy in liberal democracies today.”

At the center of addressing these structural privileges granted to the far right is the problem of what counts as racism. The narrow definitions of white supremacy that platforms use are shaped by the limitations on the understanding of racism that characterize discourse and policy in liberal democracies today. As Bonilla-Silva puts it, the focus on racist acts distracts us from the structures that sustain racial inequity today: "America's 'race problem' has never been about a few rotten apples but about a shaky apple tree."10 It is crucial to understand that platforms' infrastructure, affordances, and governance approaches are grounded in minimizing the historical and structural persistence of racism, and in doing so, they maintain, rather than challenge, racism.

References:

1
Joan Donovan, Becca Lewis, and Brian Friedberg, “Parallel Ports. Sociotechnical Change from the Alt-Right to Alt-Tech,” in Post-Digital Cultures of the Far Right, ed. Maik Fielitz and Nick Thurston (Bielefeld, Germany: transcript-Verlag, 2018), 50.
2
Rebecca Lewis, “‘This Is What the News Won’t Show You’: YouTube Creators and the Reactionary Politics of Micro-Celebrity,” Television & New Media 21, no. 2 (February 1, 2020): 201–17.
3
Ariadna Matamoros-Fernández, “Platformed Racism: The Mediation and Circulation of an Australian Race-Based Controversy on Twitter, Facebook and YouTube,” Information, Communication & Society 20, no. 6 (June 3, 2017): 940.
4
Eduardo Bonilla-Silva, “Toward a New Political Praxis for Trumpamerica: New Directions in Critical Race Theory,” American Behavioral Scientist 63, no. 13 (November 1, 2019): 1778.
5
Eugenia Siapera and Paloma Viejo-Otero, “Governing Hate: Facebook and Digital Racism,” Television & New Media 22, no. 2 (February 1, 2021): 127.
6
Siapera and Viejo-Otero, 127.
7
Eduardo Bonilla-Silva, Racism without Racists: Color-Blind Racism and the Persistence of Racial Inequality in America (Rowman & Littlefield, 2017), 43.
8
Evelyn Douek, “The Rise of Content Cartels” (Knight First Amendment Institute at Columbia, 2020).
9
See Yochai Benkler, Rob Faris, and Hal Roberts, Network Propaganda: Manipulation, Disinformation, and Radicalization in American Politics (Oxford: Oxford University Press, 2018).
10
Bonilla-Silva, “Toward a New Political Praxis for Trumpamerica,” 1777.