How does online extremism cross over into the offline world and cause physical harm and violence? This is one of the most difficult questions to answer about online extremism. We know that its online harm is pervasive. Extremists commonly use—and abuse—social and digital media technologies and spaces to express hateful speech, as well as target specific individuals with this speech. Extremists also can amplify their harmful speech by convincing many others on a platform to share their messages or post original expressions of support. We additionally know that online extremism can help incite physical violence, such as the mosque shootings in New Zealand, several attacks in European cities by sympathizers of the Islamic State, and the January 6, 2021, riot at the US Capitol Building. But these latter kinds of harm—physical instances of mass violence and unrest—are rare, especially compared to the number of people who generate and consume extremist content online. Herein lies the difficulty: It is hard to identify recurring, generalized processes of how things work when only analyzing rare events.

Yet, I argue here that this difficulty is largely due to data limitations and a particular focus in commentary and research, not to the actual (in)frequency of online extremism spilling over into offline harm. In fact, new evidence, both from the ongoing research of others and my own work, suggests that online extremism—particularly the online extremism of the contemporary right wing in the United States and Western Europe—has been resulting in offline, physical harm more often than we thought. Seeing these patterns required new data and an encompassing view of multiple kinds of violence. Furthermore, now that we are detecting these empirical patterns, we are also advancing our understanding of the different ways that online extremism can result in offline harm.

Data limitations and widespread interest highlight coordination

“We often do not know where social media content is generated, consumed, and discussed.”

We typically know where and when physical violence occurs. Incidents are documented in police reports, journalistic accounts, databases maintained by independent organizations, and so on. In contrast, we often do not know where social media content is generated, consumed, and discussed. People usually do not announce their location when posting public social media messages, and they do so even less frequently when consuming others’ messages or videos. Moreover, when social media platforms, like Twitter, share users’ location data with researchers, the information is usually based on users who enabled tracking or self-identified their locations. This results in unreliable data; only a fraction of users share location information, and many who state their location do so in inaccurate or misleading ways (e.g., saying “I’m in Washington, DC” when they are in the Virginia suburbs). This data limitation has been a major obstacle to linking social media use and instances of physical violence: We cannot reliably observe, on countrywide and months-long scales, people acting out after, say, reading hateful or inflammatory content in a particular place and at a particular time.

Alongside the data challenges, popular commentators and scholars of social media and offline harm have typically focused on dramatic, rare outbursts of violence and unrest, such as terrorist attacks and large but more or less isolated political protests. This focus is partially due to the events’ importance—they are of great public concern and could potentially be affected by (or affect) policy. But it also partially results from the limitations on data: One way to overcome the limitations is to first identify individuals—for example, those who perpetrated an act of mass violence or participated in a particular protest—then study their social media use. After all, we know something about their identities and locations.1

The focus on headline-grabbing instances of violence and unrest has concentrated attention on one way that online extremism can cross over into the offline world: coordination. That is, extremists can use social media to organize and advertise a planned protest, gather together like-minded individuals to perpetrate an act of violence, and formulate plans for supporters to join a violent group (as Islamic State militants in Syria did with supporters in Europe).2 For example, evidence emerged in early 2021 that some of the rioters at the US Capitol conspired over social media. In sum, journalistic reporting and scholarly research have extensively documented that extremists will use social media as a tool to coordinate acts of physical violence with allies and sympathizers, but actual instances of violence are rare. However, this general consensus is partly a function of what is possible to observe and where the attention is—and it does not rule out other mechanisms linking online extremism and offline harm.

Beyond coordination: Preferences and social norms

“Some recent studies use targeted social media data in studies of hate crimes.”

New research on right-wing social media is expanding our view of the relationships between online extremism and physical violence. It’s doing so by both drawing on new data and looking beyond terror attacks and dramatic, singular incidents of political unrest. For instance, some recent studies use targeted social media data in studies of hate crimes. They find that, in London, hate speech on Twitter correlates with racially and religiously aggravated crimes,3 while, in the United States, counties with high Twitter use have more hate crimes, and Donald Trump’s anti-Muslim tweets predicted subsequent hate crimes.4

In my own work, coauthored with Andrew Linke (The University of Utah and Peace Research Institute Oslo) and Edward Holland (University of Arkansas), we merge recently released records of activity on Parler, a right-wing social media platform, with a new database of localized, relatively commonplace contentious political events (some of which resulted in bodily injuries and fatalities) that occurred across the United States during 2020 and early 2021. These kinds of events differ from widely reported outbursts of political violence. For example, one event representative of the nearly 2,000 we observe was a fistfight between far-right militia members (demonstrating in support of Confederate statues) and counterprotestors in Stone Mountain, Georgia, on August 15, 2020. Analyzing counties, we find that activity on Parler in a given month increased the frequency of right-wing unrest events in the following month.5
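The core of the lag structure described above—last month’s platform activity paired with this month’s unrest in the same county—can be sketched in a few lines. This is a toy illustration, not the authors’ actual code; the data, column names, and the use of a simple correlation in place of a full panel regression are all assumptions for demonstration.

```python
# Toy sketch (not the study's code): a county-month panel where last
# month's platform activity is paired with this month's unrest events.
import pandas as pd

# Hypothetical panel: one row per county per month (invented numbers).
panel = pd.DataFrame({
    "county": ["A", "A", "A", "B", "B", "B"],
    "month":  [1, 2, 3, 1, 2, 3],
    "parler_posts":  [10, 40, 60, 5, 5, 5],
    "unrest_events": [0, 1, 2, 0, 0, 0],
})

# Lag the activity measure by one month within each county, so each row
# matches last month's posting volume with this month's unrest count.
panel = panel.sort_values(["county", "month"])
panel["parler_posts_lag1"] = panel.groupby("county")["parler_posts"].shift(1)
panel = panel.dropna(subset=["parler_posts_lag1"])

# A plain correlation stands in for the fixed-effects panel regression a
# real analysis would run (e.g., with the linearmodels or statsmodels packages).
corr = panel["parler_posts_lag1"].corr(panel["unrest_events"])
print(round(corr, 2))
```

The within-county `shift(1)` is the key step: it prevents the lag from leaking across county boundaries when the rows for one county end and the next begin.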

Studying new data and different outcomes indicates that online extremism spills over into offline harm more frequently and in more varied ways than widely assumed. This finding is prompting investigations of mechanisms beyond coordination. For example, some of the research on hate crimes has shown that hate speech on Twitter increases incidents of hate crimes relatively quickly, a pattern hard to square with perpetrators taking time to organize a crime.6 Similarly, my analysis of Parler content—again done with Dr. Linke and Dr. Holland—uncovered very little coordination talk, and the trend we detected was uncorrelated with trends in political unrest. So, what other than coordination could form the link between right-wing social media use and outcomes like (unfortunately) relatively commonplace interpersonal violence and political unrest?

Current research focuses on testing two theorized mechanisms (seen as alternatives to, and complements of, coordination): “changes in preferences” and “changes in social norms.” The former, brought to many people’s attention in a widely discussed essay by Zeynep Tufekci published in 2018, operates through a shift in people’s beliefs, attitudes, or preferences. The change occurs as right-wing social media increasingly exposes people—potentially through a platform’s recommendation algorithm—to content that dehumanizes out-group members, leaving users with more hateful sentiments and a greater capacity for violence. In the case of right-wing online extremism, this “preferences” mechanism implies that after engaging with right-wing social media, individuals who do not harbor especially right-wing ideology or do not particularly prefer right-wing violence become individuals who do.

“Much of the current work has found evidence for this ‘social norms’ mechanism, but not the ‘preferences’ mechanism.”

The latter mechanism unfolds when perceptions of norms change. That is, in the right-wing case, social media alters individuals’ understanding of what is socially acceptable, and those who are already ideologically right-wing and who have been considering engaging in contentious action will become more likely to do so. This change occurs because of three characteristics of right-wing online communities: They expose people to rhetoric denigrating out-group members; they lead people to believe that these ideas and behaviors are more pervasive than they are; and they suggest that contrary, or constraining, norms are ambiguous. Much of the current work has found evidence for this “social norms” mechanism, but not the “preferences” mechanism.7

What’s next for the study of online extremism and offline harms?

Research on online extremism and offline harms has been constrained in part by the limitations of available data. This problem does not have an easy solution since relevant data are controlled by private companies. In addition, we should rightfully be concerned about privacy and consent. Nonetheless, a change to what data are available is necessary if we want to advance the study of online extremism and offline harm. As I have pointed out, we usually know where and when physical harm and violence occur, but we have less reliable information on where, when, and to what extent different kinds of social media content are generated, consumed, shared, and discussed. New initiatives are needed to create spatial panel datasets that combine records of online activity and content with offline violence, perhaps with the help of social media companies.
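The spatial panel datasets called for above amount to aggregating two event-level sources—online activity records and offline incident records—to a shared spatial-temporal cell (such as county-month) and joining them on those keys. A minimal sketch, with entirely invented data and column names, might look like this:

```python
# Hypothetical sketch of assembling a spatial panel: aggregate offline
# incidents and online posts to county-month cells, then join on the keys.
# All data and column names are illustrative assumptions.
import pandas as pd

events = pd.DataFrame({          # offline incidents, one row per event
    "county": ["A", "A", "B"],
    "month":  [2, 2, 3],
})
posts = pd.DataFrame({           # online activity, one row per post
    "county": ["A", "A", "A", "B"],
    "month":  [1, 2, 2, 3],
})

# Count rows per county-month cell in each source.
event_counts = events.groupby(["county", "month"]).size().rename("n_events")
post_counts = posts.groupby(["county", "month"]).size().rename("n_posts")

# Align on the union of cells, so a cell with online activity but no
# offline events (or vice versa) survives as a zero rather than vanishing.
spatial_panel = (
    pd.concat([event_counts, post_counts], axis=1)
      .fillna(0)
      .astype(int)
      .reset_index()
)
print(spatial_panel)
```

The outer alignment is the design choice that matters: dropping the zero cells would bias any later analysis toward places where both kinds of records happen to exist.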

In addition, research on online extremism and offline harm has gravitated toward dramatic, yet rare, instances of political violence and unrest, such as terrorist attacks and large political protests. Popular and journalistic reports have similarly tended to focus on these kinds of outcomes, as well as the potential for right-wing social media to “radicalize” individuals, imbuing them with a new ideology and molding them into people capable of violence (i.e., the “preferences” mechanism). However, the research I have discussed here has found evidence that right-wing online extremism has a much broader offline impact, increasing rates of hate crimes and localized, relatively commonplace political unrest. Moreover, this effect unfolds through a shift in people’s perceptions of what is socially acceptable. Future research on online extremism and offline harm should aim to understand this mechanism more precisely. It may not seem as unsettling as people adopting new, radicalized, and violence-prone natures, but, as the social theorist Norbert Elias pointed out, one of the most remarkable accomplishments of modernity has been the widespread norm that violence is not acceptable.8 The early evidence from the research on online extremism and physical violence suggests that online extremism—in this case, manifested via right-wing social media—is shaking the foundations of this accomplishment. It is opening the door for individuals to entertain taboo norms that facilitate violence—not because of (newfound) belief or ideology, but simply because they saw it online.

Banner photo: Andrew Aliferis/Flickr.

References:

1
For example, David Van Dijcke and Austin L. Wright, “Profiling Insurrection: Characterizing Collective Action Using Mobile Device Data,” SSRN, March 10, 2021.
2
For example, Ruben Enikolopov, Alexey Makarin, and Maria Petrova, “Social Media and Protest Participation: Evidence from Russia,” Econometrica 88, no. 4 (2020): 1479–1514; and Karsten Müller and Carlo Schwarz, “Fanning the Flames of Hate: Social Media and Hate Crime,” Journal of the European Economic Association, October 30, 2020.
3
Matthew L. Williams et al., “Hate in the Machine: Anti-Black and Anti-Muslim Social Media Posts as Predictors of Offline Racially and Religiously Aggravated Crime,” The British Journal of Criminology 60, no. 1 (2020): 93–117.
4
Karsten Müller and Carlo Schwarz, “From Hashtag to Hate Crime: Twitter and Anti-Minority Sentiment,” SSRN, July 24, 2020.
5
Daniel Karell, Andrew Linke, and Edward Holland, “Right-Wing Social Media and Unrest Correspond Across the United States,” SocArXiv, May 5, 2021.
6
Müller and Schwarz, “From Hashtag to Hate Crime.”
7
One recent study finds evidence that YouTube users who commented on “Alt-right” videos had previously commented on less extreme right-wing videos. This is suggestive of the “preferences” mechanism at work, but the outcome of physical violence or unrest is not measured. See Manoel Horta Ribeiro, Raphael Ottoni, Robert West, Virgílio A. F. Almeida, Wagner Meira, “Auditing Radicalization Pathways on YouTube,” FAT* ’20: Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, January 2020, 131–141.
8
Cambridge: Polity Press, 1996.