On January 6, 2021, the world watched in shock as Donald Trump’s supporters stormed the US Capitol. While some of the mob roamed the halls snapping selfies with statues and carrying Trump and Confederate flags, others seemed poised to escalate the violence. Some of the rioters swept through the building with bats and zip ties, calling for the “traitors,” then-Vice President Mike Pence and Speaker of the House Nancy Pelosi, to show themselves. Outside, gallows were erected near the Capitol Reflecting Pool.

When the siege ended, the finger-pointing began. Social media platforms were among the first to shoulder a portion of the blame. Los Angeles Times columnist Erika Smith opined that Twitter, Facebook, Instagram, YouTube, and Google were responsible for allowing divisive speech and conspiracy theories “to fester and spread online” virtually unchecked. Evidence quickly emerged that users did more than spread conspiracy theories on mainstream social media. They used Facebook to promote the January 6 event, spread memes, organize bus transportation, plot routes to the Capitol building, and circulate rumors regarding its potential occupation. Unmoderated calls for violence were more prominent on other platforms, such as r/TheDonald and Parler.

A core assumption underlying the conversation following the January 6 event is that extremism can be moderated away.

Over the last year, I have been working with an excellent team of graduate and undergraduate students at Florida State University to systematically assess the characteristics of political expression online and whether moderation might affect how individuals express their political identities and views. The first phase of the project analyzes comments posted to news stories about Brett Kavanaugh’s US Supreme Court nomination and the accusations, which surfaced in 2018, that Kavanaugh had sexually assaulted three women. We examined nearly 3,000 comments posted in moderated comment sections on news stories about the Kavanaugh nomination in right-leaning outlets (Fox News, Breitbart, Daily Caller, and Gateway Pundit), left-leaning outlets (MSNBC News, HuffPost, Daily Kos, and Raw Story), and more mainstream outlets (USA Today, New York Times, Washington Post, and Washington Times), and found that political polarization and extremism in the United States are not being moderated away.1 Here, I argue that we have not thought critically about individual agency and how it affects the expression of everyday extremism on moderated forums.

Individual agency and moderation

Moderation is nebulous territory, in part, because it involves censoring thoughts and ideas that are regarded as bad for a community. The problem is that good and bad are not objective categories. Outlets and forum users negotiate their meanings, mutually constituting what is clearly acceptable and what is clearly unacceptable within an online community.2 The drive to create an authentic, participatory community that represents its users is one reason why there is so much angst over moderation strategies, as well as why we see such diverse moderation strategies across forums. Daily Kos, for instance, provides some general guidance on appropriate comments but ultimately relies on community moderation to determine which ideas are desirable and which are unacceptable. Breitbart, in contrast, uses Disqus to enforce its community standards, which state that forum users are not to provide content that “is false, misleading, libelous, slanderous, defamatory, obscene, abusive, hateful, or sexually-explicit.”3 But on a forum that is also committed to promoting freedom through the inclusion of “more voices, not fewer,” what counts as obscene, abusive, or hateful falls into a vast gray area where the line between appropriate and inappropriate is unclear. It is in this gray area that individuals find interesting and sometimes creative ways to express themselves.

In my research on individual political expression over the last several years, around issues ranging from the removal of Terri Schiavo’s hydration and nutrition tubes in 2005 to the recent debate over gun control in the wake of the Parkland shooting, I have learned that while some individuals flout the norms of communication in a given forum, others play within the gray area. If researchers want to understand the full range of ways in which political polarization and extremism might be expressed online, we need to think more deeply about how moderation policies and practices create gray areas, as well as how individuals might exploit them in their political expression.

What’s in a name?

My ongoing research regarding the Kavanaugh nomination suggests that one indicator of user polarization and extremism is commenters’ profiles, which include their usernames and user-selected profile pictures. Even in forums that are fairly well moderated, commenters find ways to express their political points of view. In the majority of forums, usernames are an easy way for individuals to signal their political identities and priorities to others. The New York Times, for example, has a well-moderated comment section and requires individuals to register before posting their first comment. The site gives potential commenters clear guidelines about the kinds of comments it is looking for (see Image 1) but does not require users to provide their real names. The main suggestion regarding names is that users generally indicate where they live so that their comments may be promoted more effectively. There is no mention of profile pictures on the page.

While this may not seem like much wiggle room when it comes to political expression, individuals use their usernames—and sometimes their photos—to punctuate their political opinions. Names such as “Illinois Moderate,” “DemocratPatr8,” and “Jesse the Conservative” are all intended to make the user’s political orientation clear. Some profiles even seem designed to underscore the political dissatisfaction and anger expressed in their comments. Users with names such as “Tired of hypocrisy” and “Son of liberty” (whose profile also included a portion of the Betsy Ross flag, a symbol that has been associated with the extreme right) criticized Democrats for impugning Brett Kavanaugh’s reputation, called Christine Blasey Ford’s character into question, and suggested that the investigation was a “sordid delaying tactic” and a “sham” that harmed “the reputations of all women who have actually been sexually assaulted.” Another user, “The fix is in,” criticized Kavanaugh’s high school friend, Mark Judge, who said he was shocked at the behaviors young men got away with. The commenter noted, “No GOPer, 0.1 percenter or other fraud or puppet pretending to be a genuine ‘Conservative’ instead of a grand scale pain inflicter [sic] and democracy and planet destroyer finds himself shocked anymore at the stuff he gets away with.”

https://items.ssrc.org/wp-content/uploads/2021/03/NYTframepng.png
Image 1. Screenshot of part of the New York Times’ moderation rules.

User profiles, and usernames in particular, seem to take on increased importance in forums that fall outside of the mainstream. On Daily Kos, where community standards determine appropriateness, and on Breitbart, where moderation appears fairly lax, usernames become an easy and highly visible way for users to express their commitment to politicians, to political points of view, and, potentially, to the online community in which they see themselves as members. This appears to be particularly true in right-leaning forums, where users incorporate variants of deplorable (e.g., “AB Deplorable” and “El Gato Deplorable”), variants of conservative (e.g., “CapeConservative” and “Ultracon”), and attacks on liberals (e.g., “libsrnazi,” “Run, snowflakes, run!” “Laughing at Libtards,” and “Libsareclowns”) into their usernames.
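These kinds of username signals lend themselves to a simple, if crude, first pass of automated detection. The sketch below is a hypothetical illustration of how partisan keywords might be flagged in usernames; it is not the coding procedure used in our study, and the keyword lists, the flag_username function, and the sample names are invented purely for demonstration.

```python
import re

# Hypothetical keyword lists; a real study would develop categories through
# iterative, human-led coding rather than a fixed dictionary.
PARTISAN_SIGNALS = {
    "right-leaning signal": ["deplorable", "conservative", "ultracon", "libtard", "snowflake"],
    "left-leaning signal": ["resist", "bluewave", "progressive", "democrat"],
}


def flag_username(username: str) -> list:
    """Return the signal categories whose keywords appear in a username."""
    # Lowercase and strip spaces/punctuation so "Laughing at Libtards" matches "libtard".
    normalized = re.sub(r"[^a-z0-9]", "", username.lower())
    return [
        label
        for label, keywords in PARTISAN_SIGNALS.items()
        if any(keyword in normalized for keyword in keywords)
    ]


if __name__ == "__main__":
    sample = ["AB Deplorable", "CapeConservative", "Laughing at Libtards", "Illinois Moderate"]
    for name in sample:
        print(name, "->", flag_username(name) or ["no signal detected"])
```

A keyword scan like this could only surface candidate profiles for review; interpreting subtler signals, such as a partial Betsy Ross flag in a profile picture, still requires human judgment.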

Everyday extremism

While most forums note that they will not tolerate name-calling, there is a fair amount of it in the comments on all of the forums. More important, there is a fair amount of language that casts political opponents as problematic others who need to be dealt with in some fashion. I call this “everyday extremism” because it blurs the line between political polarization and extremism, and it persists because the language users employ falls into the gray area of moderation. In this gray area, users negotiate not only what it means to be a member of a forum, but also how a community can talk about, and presumably think about, its political opponents. More troubling, we find that everyday extremism provides a rationale for the harsh treatment, punishment, or, in some cases, the death of one’s political opponents. Below, I briefly discuss three types of everyday extremism.

Criminalizing Opponents. In all of the news forums, commenters routinely characterized those with whom they disagreed politically as engaging in deceptive behavior that likely violated state or federal law. In discourse surrounding the Kavanaugh nomination, commenters characterized Democrats and Blasey Ford as criminals for everything from promoting a “libelous narrative” about Kavanaugh and committing “perjury” to illegally disrupting the confirmation hearings in a “terrorist effort” designed to reclaim the Supreme Court for themselves.

Pathologizing Opponents’ Behaviors. Another type of everyday extremism fairly common across all of the forums was pathologizing the behavior of one’s opponents. Here, commenters typically cited the negative emotions of their opponents as the source of irrational behavior, which, over time, manifests as mental illness. Commenters most often pointed to “dislike,” “hatred,” and “denial” as the motivation for the “chaotic” and “unreasonable” choices of their opponents and, eventually, the “sociopathic” and “psychotic” actions they take.

Dehumanization. Discourse in which commenters completely stripped their opponents of their humanity and then, more often than not, called for their injury or death appeared in partisan news forums. A HuffPost commenter, responding to Republican Senator Jeff Flake’s affirmative vote on Kavanaugh after Flake had called for an FBI investigation into Blasey Ford’s sexual assault allegations, called Flake “a dog” and said he “should be put down.” On Breitbart, a commenter compared Democrats to “rats” and argued that they “should be exterminated.”

What do we do about extremism?

Globally, social scientists are doing an excellent job of identifying the sources of extremism, tracing how it spreads across media systems, and unpacking some of the meanings associated with seemingly benign images and phrases.4 While this research is critically important and valuable, it can obscure more commonplace expressions of polarization and everyday extremism hiding in plain sight on mainstream forums. I do not doubt that most news outlets have admirable intentions when they vet moderation services and create moderation practices. The point here is that individuals will find ways to express their political beliefs, and potentially to create extremist communities, despite outlets’ best moderation efforts.

This does not mean that we should stop putting time and energy into improving our moderation policies and practices. I support academic calls for algorithmic accountability, which would make the automated decisions of platforms more transparent as well as hold platforms responsible for the online cultures they help create.5 I would also point to the social science research that shows just how important moderation is in the battle against violent extremism. Maura Conway and her colleagues, for instance, find that aggressive account and content takedown can effectively disrupt extremist communities online and make radicalization, recruitment, and organization harder.6 Likewise, Bharath Ganesh and Jonathan Bright point out that countermessaging and other strategic communication techniques can help curb extremism online.7 However, I argue that we cannot focus our energy only on ideologically charged platforms or violent groups. We need to recognize that extremism has become a widespread problem that requires intervention. One potential way to disrupt the everyday extremism described here is to integrate political bias training into our workplaces. Many occupations already require safety, racial bias, and sexual harassment training; it seems that we should begin to discuss how our deeply held political identities8 affect our professional lives as well. While this alone is unlikely to solve our political woes, it would represent a clear step toward recognizing a growing problem.

I would like to acknowledge the Institute of Politics at Florida State University and my fantastic research team for their assistance. The team includes Allison Bloomer, Pierce Dignam, Shawn Gaulden, Alex Cubas, Alejandro Garcia, Jade Harris, Emily Ortiz, and Lauren Torres.


Banner photo: Blink O’fanaye/Flickr.

References:

1
This is part of a larger project, which also includes an analysis of comments in the same outlets around the Amy Coney Barrett Supreme Court nomination in 2020. We have analyzed nearly 5,000 comments in most of the same outlets relative to her nomination. Please note that outlets were categorized in line with the extant literature. See Rodney Benson, Shaping Immigration News: A French-American Comparison (New York: Cambridge University Press, 2013); and Ceren Budak, Sharad Goel, and Justin M. Rao, “Fair and Balanced? Quantifying Media Bias through Crowdsourced Content Analysis,” Public Opinion Quarterly 80, no. S1 (2016): 250–71.
2
For a discussion of censorship as a practice intended to negotiate, maintain, and reproduce epistemic order within a culture, see Dominic Boyer, “Censorship as a Vocation: The Institutions, Practices, and Cultural Logic of Media Control in the German Democratic Republic,” Comparative Studies in Society and History 45, no. 3 (2003): 511–45.
3
While forums using Disqus also post community or comment rules, it is not often clear how a given outlet creates and enforces its moderation rules.
4
For excellent examples, see Julia R. DeCook, “Memes and Symbolic Violence: #Proudboys and the Use of Memes for Propaganda and the Construction of Collective Identity,” Learning, Media and Technology 43, no. 4 (2018): 485–504; Nitin Govil and Anirban Kapil Baishya, “The Bully in the Pulpit: Autocracy, Digital Social Media, and Right-Wing Populist Technoculture,” Communication, Culture and Critique 11, no. 1 (2018): 67–84; Viveca S. Greene, “‘Deplorable’ Satire: Alt-Right Memes, White Genocide Tweets, and Redpilling Normies,” Studies in American Humor 5, no. 1 (2019): 31–69; Daniel Karell and Michael Freedman, “Rhetorics of Radicalism,” American Sociological Review 84, no. 4 (2019): 726–53; Emma Morris, “Children: Extremism and Online Radicalization,” Journal of Children and Media 10, no. 4 (2016): 508–14; and Luke Munn, “Alt-Right Pipeline: Individual Journeys to Extremism Online,” First Monday 24, no. 6 (2019).
5
Robert Hunt and Fenwick McKelvey, “Algorithmic Regulation in Media and Cultural Policy: A Framework to Evaluate Barriers to Accountability,” Journal of Information Policy 9 (2019): 307–335.
6
Maura Conway et al., “Disrupting Daesh: Measuring Takedown of Online Terrorist Material and Its Impacts,” Studies in Conflict and Terrorism 42, no. 1–2 (2019): 141–160.
7
Bharath Ganesh and Jonathan Bright, “Countering Extremists on Social Media: Challenges for Strategic Communication and Content Moderation,” Policy and Internet 12, no. 1 (2020): 6–19.
8
Shanto Iyengar and Sean J. Westwood, “Fear and Loathing across Party Lines: New Evidence on Group Polarization,” American Journal of Political Science 59, no. 3 (2014): 690–707.