Right-wing extremism is on the rise in the West, from El Paso, Texas, to Christchurch, New Zealand. Of the five deadliest years for extremist violence in the United States since 1970, three have occurred in the past decade, and many of the perpetrators of these acts of violence have broadcast their actions or ideology online to increasingly large audiences enabled by digital media. Yet for three decades or more, terrorism studies focused almost exclusively on leftist groups; more recently, its focus has narrowed to jihadi terrorism. Even amid an acknowledged and growing threat of domestic right-wing extremism, the Trump administration further curtailed the Department of Homeland Security office and the federal grants dedicated to countering white nationalist terrorism.

Today we know far less than we should about the processes driving right-wing extremism—from white nationalism to anti-Semitism to virulent misogyny—or the distinct mechanisms by which these processes may unfold online. The rise of smartphones, apps, and platforms has changed media habits—how we read the news or engage in online debate—as well as the state of information diversity. And while it is clear that right-wing extremists exploit social media for political purposes, the extent to which they learn, adopt, and adapt extremist and white supremacist ideologies online is far less certain.

To confront right-wing extremism effectively, we must first understand how it operates in a world in which communication increasingly happens online, and in which the affordances of various digital platforms shape how extremism manifests and spreads within and across media. It is in this context that the Media & Democracy program at the Social Science Research Council (SSRC) convened a remote series of interdisciplinary research development workshops in the summer of 2020. The essays gathered here emerged from those workshops and represent a range of perspectives on the growth of white supremacy and right-wing extremism in the United States and abroad, their intersections, and the role that media and technology play in connecting and amplifying hate.

Editor’s note: In an effort to avoid amplifying extremist content online, and in accordance with “better practices” suggested in Whitney Phillips’s The Oxygen of Amplification, we have endeavored to exclude direct links to harmful content and to highlight examples of harmful or hateful material only where they are essential to an essay’s argument.

This series has been curated by Jason Rhody, program codirector of Media & Democracy; Mike Miller, program codirector of Media & Democracy and program codirector of Just Tech; and Carrie Hamilton, program associate of Media & Democracy and the Social Data Initiative.