Fenwick McKelvey argues that the way we frame social problems, such as online extremism, shapes how we respond to them. The impulse to combat extremism through flawed content moderation strategies reflects a tendency to treat extremism as primarily a content problem. But to tackle online extremism, we must first understand and address how it is intertwined with the deep roots of bigotry and hate in our history and social structures.
Right-wing extremism is on the rise in the West, from El Paso, Texas, to Christchurch, New Zealand. Of the five deadliest years for extremist violence in the United States since 1970, three have occurred in the past decade, and many of the perpetrators of these acts of violence have broadcast their actions or ideology online to increasingly large audiences enabled by digital media. Yet for 30 years or more, terrorism studies focused almost exclusively on leftist groups; more recently, its focus has narrowed to jihadi terrorism. Even amid an acknowledged and growing threat of domestic right-wing extremism, the Trump administration further curtailed both the Department of Homeland Security office and the federal grants dedicated to countering white nationalist terrorism.
Today we know far less than we should about the processes driving right-wing extremism—from white nationalism to anti-Semitism to virulent misogyny—or the distinct mechanisms by which they may occur online. The rise of smartphones, apps, and platforms has changed media habits—e.g., how we read the news or engage in online debate—as well as the state of information diversity. And while it is clear that right-wing extremists exploit social media for political purposes, the extent to which they learn, adopt, and adapt extremist and white supremacist ideologies online is far less certain.
To confront right-wing extremism effectively, we must first understand how it operates in a world in which communication increasingly happens online, and in which the affordances of various digital platforms shape how extremism is manifested and spread within and across media. It is in this context that the Media & Democracy program at the Social Science Research Council (SSRC) convened a remote series of interdisciplinary research development workshops in the summer of 2020. The essays gathered here emerged from those workshops and represent a range of perspectives on the growth of white supremacy and right-wing extremism in the United States and abroad, their intersections, and the role that media and technology play in connecting and amplifying hate.
Editor’s note: In an effort to avoid amplifying extremist content online, and in accordance with “better practices” suggested in Whitney Phillips’s The Oxygen of Amplification, we endeavor to exclude direct links to harmful content and to highlight examples of hateful material only where they are essential to the argument of the essays.
This series has been curated by Jason Rhody, program codirector of Media & Democracy; Mike Miller, program codirector of Media & Democracy and program codirector of Just Tech; and Carrie Hamilton, program associate of Media & Democracy and the Social Data Initiative.
The Human Infrastructure of Fake News in Brazil
by David Nemer
The role of algorithms in promoting disinformation has received a great deal of attention in recent years, due in large part to the centrality of Facebook in the 2016 US presidential election and the UK Brexit campaign. However, David Nemer argues that in countries such as Brazil, where peer-to-peer messaging apps like WhatsApp are popular, more attention needs to be paid to the “human infrastructure” of coordinated disinformation campaigns.
Online Extremism and Offline Harm
by Daniel Karell
It is often assumed that while extremist content online may result in offline violent behavior, actual instances of such events are rare. However, in the latest essay from our “Extremism Online” series, Daniel Karell argues that this assumption is wrong and reflects a misunderstanding of the mechanisms by which extremist content online shapes offline behavior. Indeed, new evidence suggests that online extremism, particularly from the right wing in the United States and Western Europe, results in offline, physical violence far more often than we think.
Mainstreaming Resentment: YouTube Celebrities and the Rhetoric of White Supremacy
by Cindy Ma
In her contribution to the “Extremism Online” series, Cindy Ma unpacks the rhetorical strategies used by right-wing YouTube microcelebrities to insert increasingly racist and white supremacist tropes into popular discourse while shielding themselves from accusations of extremism.
Platform Racism: How Minimizing Racism Privileges Far Right Extremism
by Bharath Ganesh
In recent years, Facebook, Twitter, and YouTube have taken steps to constrain the ability of users to share or amplify racist discourse on their platforms. However, Bharath Ganesh argues, by limiting the focus of their efforts to only the most egregious forms of racist discourse, the platforms may embolden broader networks of extremists to deploy less obvious, but equally pernicious, forms of racist discourse.
We Cannot Just Moderate Extremism Away
by Deana A. Rohlinger
In the wake of the January 6 attack on the US Capitol, the role of social media in propagating extremism was once again under scrutiny. However, as Deana Rohlinger’s research demonstrates, stronger moderation policies alone would fail to account for the many ways that users express political beliefs through online forums. Instead, she argues that additional direct interventions like political bias training are necessary to both protect against extremism and encourage democratic participation.