From John Perry Barlow to Wael Ghonim, the internet has long been imagined as a social sphere beyond the reach of state power. Notwithstanding mounting evidence of states’ ability to limit freedom online, prevailing narratives continue to invoke the rights of individuals—above all, the rights of free speech, privacy, and intellectual property. But freedom is in the eye of the beholder, and the “era of platforms” has corroborated another theory of liberty: that the mere evasion of state control is a narrow freedom indeed. Technologies that were quite recently lauded as tools of “liberation” are today used with increasing efficiency—by states and civil society actors alike—to harass, intimidate, and silence political opponents. Moreover, a growing body of scholarship suggests that the targets of toxic speech are disproportionately women, people of color, and other minorities who have traditionally been marginalized in the marketplace of ideas.

To better understand the mechanisms, contexts, and political effects of toxicity and intolerance online, the Media & Democracy program convened a research workshop on April 25–26, 2019, at the University of Texas at Austin. The workshop, “Race, Gender, and Toxicity Online,” was complemented by a plenary roundtable featuring prepared remarks by professors Zizi Papacharissi, Lisa Nakamura, and Catherine Knight Steele.1

Here I highlight themes that emerged from the workshop itself and from the plenary roundtable. Papers presented at the workshop suggest that far from being a neutral arena allowing users to escape the racial and gendered hierarchies that exist offline, digital media have the capacity to reproduce—and even to augment—racist and sexist structures of power. Moreover, as participants in the plenary roundtable argued, contending with the threats of toxicity and intolerance demands that we reimagine the roles of both elites and publics.

The messenger is the message (and so is the recipient)

A New Yorker cartoon that encapsulates the naïveté of the early internet depicts a dog seated at a desktop computer, remarking to a companion on the floor: “On the Internet, nobody knows you’re a dog.” As this maxim suggests, freedom on “the Net” was imagined not only as flowing from limitless information, but also as erasing the limits of self-creation. Because no one could see who was at a keyboard, one could become one’s true self—or anyone else, for that matter.

“On some social media sites, confirmation of one’s ‘real’ identity may be the cost of admission.”

In the era of social media, however, the value of anonymity competes with the value of visibility. Increasing value is placed on documenting and sharing one’s real life, day to day, minute to minute, even in livestreams. Indeed, on some social media sites, confirmation of one’s “real” identity may be the cost of admission. Whereas it was once imagined that one could go online and simply create an identity anew, on today’s internet we carry much of the baggage of “the real world” with us. In short, and in contrast to the imagined utopia of the early internet, others’ perceptions of who we are offline shape our experience online. This is not, of course, unique to life online. Rather, it gives the lie to the notion that digital spaces are somehow more egalitarian or value-free.

Research presented at the Toxicity workshop affirmed this thesis: from candidates running for political office to civilians creating and sharing memes, the identities of communicators shape both how messages are sent and how they are received. To give a few examples, workshop papers demonstrated that gender is the most consistent social predictor of perceived incivility, with women perceiving comments as uncivil at higher rates than men; that the racial composition of a social movement can affect perceptions of the movement’s civility and, in turn, the perceived legitimacy of its goals; and that social media posts depicting behaviors perceived as “unfeminine,” such as menstruation, breastfeeding, or expressions of anger by women, are often censored by platforms more rapidly than depictions or threats of physical violence.2

Social media has also been mobilized to reproduce racist, sexist, homophobic, and xenophobic structures that function offline to censor and intimidate members of traditionally marginalized communities. These structures may be produced consciously and purposely by intolerant agents provocateurs; unconsciously and mechanically by recommendation algorithms and automated bots; and even inadvertently by members of the affected groups themselves. Indeed, as one participant’s research showed, images meant to empower and enlighten, such as video recordings of police violence against people of color, may have disproportionate and traumatic effects on certain internet users. That is, to scroll through one’s feed and see consistent, if unpredictable, images depicting violence against members of one’s own community exacts a heavy emotional toll, one likely to go unnoticed and unvalued by those for whom the consumption of such images is merely voyeuristic.

Mirroring scholarship on race and gender offline, research presented at the Toxicity workshop suggested that racism and sexism are not simply the work of racists and sexists. Such motivated actors are, of course, a real and present threat whose power is, ironically, amplified by anonymity, the same capacity once expected to insulate subaltern groups. But beyond and beneath outward aggression, inadvertent decisions (and nondecisions)—insofar as they are coded into algorithms, community rules, and norms of propriety—may function to silence or further marginalize subaltern voices. The result is a space that is, for many, far less than liberatory, and often hostile. Importantly, these experiences must be understood as the results of race and gender, which we do not and cannot simply leave behind when we go online.

In Understanding Media, Marshall McLuhan famously argued that “the medium is the message”—that the content of any message is of minor significance compared to the ways media shape human relations.3 Research presented at the Toxicity workshop suggested an addendum: beyond and beneath the structures of media technology, the social identities of communicators shape not only how media are deployed but also how messages are received. In other words, if the medium is the message, so too are the messengers and the recipients.

What is to be done?

Although the papers presented at the workshop were largely diagnostic, the prepared remarks given by the plenary speakers were decidedly prescriptive. Each speaker identified a class of elites that, by virtue of its capacity to shape how we use or how we understand media, wields outsize influence over the increasingly toxic space of life online. But in order to fulfill their mandate, these stakeholders—journalists, developers, and academics—must reimagine, reorient, or simply rediscover their roles and responsibilities in a changing media ecosystem.

Journalists

The information economy of social media is often described as an attention economy. Access to social media is subsidized by advertisements shown to users while they peruse content on the platform. The longer a user stays on a platform—the longer the platform can retain a user’s attention—the more advertisers are willing to pay.

The same logic that drives platforms to tailor algorithms to keep users engaged drives newspapers to keep readers on their own sites rather than those of their competitors. Journalists, Zizi Papacharissi argues, are clearly influenced by attention economics. In short, there are economic incentives for journalists to write, and for newspapers to publish, stories that keep readers engaged, in the very limited sense of “remaining on the site.”

“Emotions are a valuable and inevitable feature of politics and, when channeled in positive directions, can mobilize the kinds of publics necessary to create lasting change.”

From the perspective of enlightening discourse, however, these incentives are maladaptive. They favor the production of what Papacharissi calls “affective” news: a mélange of report, drama, fact, and opinion, designed to appeal not to emotion per se but to the intensity of emotion. We often lament the emotional tenor of politics, particularly in political interactions on social media platforms. But Papacharissi suggests that emotions as such are not the problem. Emotions are a valuable and inevitable feature of politics and, when channeled in positive directions, can mobilize the kinds of publics necessary to create lasting change. In contrast, Papacharissi defines affect as the intensity with which we feel; it is “the difference between a caress and a slap”: the same gesture, but with a different intent and a different effect.

Papacharissi argues that when journalists leverage affect to garner the attention of news consumers, they create conditions favorable to toxic and inflammatory discourse. This is because affective news reports intensity as if it were the event itself; it favors drama over fact, narrative over documentary, and volume over clarity. It therefore rewards bombast, spectacle, and toxic discourse, promoting headlines that are high on alarm and low on detail. In pursuit of clicks, journalists often mobilize affective cues or catchphrases—the Wall, the caravan, “her emails,” the Green New Deal—that, regardless of their origins, have been sapped of content. In so doing, they invite readers to sense their way into the news rather than to make sense of the news.

But contrary to popular opinion, it is not the job of journalists to tell stories; rather, it is their responsibility to uncover and to tell the truth. We citizens, Papacharissi holds, are the storytellers; it is our responsibility to make sense of the news. Moreover, in an attention economy, we must resist the low-hanging fruit of drama and clickbait that reward toxicity.

Developers

If Papacharissi’s advice was directed at elites within the media, Lisa Nakamura addressed elites within the tech industry: programmers, designers, and developers. Like the news industry, Nakamura argues, the tech industry is also disposed to mobilizing affect to engage users. But whereas the affective turn in journalism seems at least in part the result of larger, structural changes in the news industry, the affective turn in Silicon Valley appears to reflect deeper epistemological biases often held by individuals overrepresented in the industry: white men.

Nakamura argues that after years of ignoring, or trying to dissociate itself from, the racism and sexism that exist on platforms, the tech industry appears, at last, ready to confront these challenges. However, as humans are apt to do when facing new problems, industry elites have turned to the solution they know best—technological innovation.

Nakamura identifies the industry’s current fixation with artificial intelligence (AI) and virtual reality (VR) as evidence not only of its technological bias but also of its affective turn. She argues that the tech industry sees AI, by virtue of its efforts to structure decisions on the basis of universal rules, as a terrain on which to ground a new ethics. VR, for its part, is imagined as capable of mobilizing technology to instill empathy in users. In each case, technology is a means not of combating racism or sexism with knowledge but of navigating around it. With VR, as with the affective publics described by Papacharissi, individuals are expected to feel their way into a deeper understanding of others and the world. And with an AI-based ethics, derived from a set of logical principles, the goal of individual understanding, which is necessarily relational and contextual, is hampered by an uncritical and universalized set of prescribed actions. Moreover, in each case, the underlying assumption is that those who design technology are themselves objective—that their definitions of ethics and empathy are uncolored by their own identities.

Technological solutions, Nakamura argues, are almost always imagined as solutions for individuals, who can and should adopt them at their own discretion and for their own benefit. Ethics asks, what should I do? Empathy asks, how should I feel? Such approaches therefore place the burden of understanding on the shoulders of the individual and remove responsibility from the social associations in which individuals are necessarily embedded. Importantly, they absolve the tech industry of whatever role it may have in creating or reinforcing structures of oppression on its platforms.

Importantly, the industry’s devotion to affect-based, individual-level solutions reflects the values of its elite, who are, more often than not, white men. The “self-made man” is a popular trope in any liberal society, but it has a particular salience in the United States, where the myth of pioneers settling an “unknown” frontier has a firm grip on the national identity. The emancipatory power of the individual will, freed of external constraints, has been received as gospel by groups as diverse as libertarian devotees of Hayek and the “new communalist” progenitors of Silicon Valley.4 But belief in the power of the individual will often rests on “a conception of individuality as something ready-made, already possessed” and independent of social relations.5 It therefore reflects the bias of privilege, which obscures the advantages of social position and allows the fantasy of “beings in a mythical condition apart from association” to be entertained.6

If the bias of the Silicon Valley elite is toward the individual, then Nakamura suggests that a corrective may be found in its obverse: the collective. Importantly, she suggests that in order to understand and to confront the problems of racism and sexism on platforms—many of which exist by design, if not by intention—we need to think in terms of all those who will ultimately use technology. We must abandon the idea that the future, because it is unknown, must be charted by individuals. Invoking Audre Lorde, she insists that to “reclaim the future,” to design technology that is both inclusive and empowering, we must include and empower collectives of women and people of color by inviting them into the decision-making process. If the biases of a narrow elite are coded into universal laws of ethics and empathy, the complexity and particularity of lived experience will not simply be erased; it will never have been depicted in the first place.

Academics

To confront toxicity online, Nakamura asked developers to look beyond the individual, and outside their in-group, for answers. In contrast, Catherine Knight Steele asked a final group of elites, academics, to expand the focus of their inquiries beyond the usual suspects.

In the aftermath of the 2016 election, and subsequent revelations of coordinated disinformation campaigns on social media, a virtual cottage industry has emerged to explain how it was (and is) that some members of our society have fallen prey to manipulation by foreign interlopers, bots, and agents provocateurs. Such studies are, of course, important, and they fit neatly within a tradition in academia—particularly within economics and political science—that attempts to understand why individuals don’t do what we expect them to do. In other words, why do individuals behave in ways that are maladaptive?

“In addition to asking why so many were deceived by dis- and misinformation, academics should ask why so many were not.”

Steele, however, would like to push inquiry in another direction. In addition to asking why so many were deceived by dis- and misinformation, academics should ask why so many were not. To understand how to deal with toxicity narrowly, and with information disorder more broadly, we would be better served by looking to those who seem to have been insulated from its threat. We should look to those who have honed intra- and intercultural communicative skills that have enabled them—in spite of higher rates of adoption of key social media platforms and disproportionate targeting by bots and trolls7—to make choices that better serve their interests and those of their closest potential allies.

In short, rather than—or in addition to—asking about disaffected white voters’ susceptibility to coordinated disinformation campaigns, we should ask why Black women, who on average face greater economic uncertainty than elderly white men, have not abandoned the Obama coalition, much less liberal norms or democratic institutions.

On its face, this is a sound methodological intervention. But the core of Steele’s argument is more than that: it is a normative and epistemological critique of academic bias. By studying, for example, how Black feminists use and understand technology or “how the unique experiences of Black folks are transposed into their relationship with technology,” we don’t simply get greater variation on an outcome of interest; we undermine a cyberculture infused, as Nakamura points out, with an elite—white and male—ideology. By shifting our analysis from folks at the center to folks at the margins of technological experience—Black feminists—we work to undo a technoculture that is “embedded with white ideology, with patriarchy, with misogyny.”

Conclusion

Meredith Broussard defines technochauvinism as the belief that there is always a technological solution to the world’s problems. Broussard argues that this is a fallacy, and a dangerous one. In our search for tech solutions to social problems, we often ignore the underlying structures of power that produce, for example, racism and misogyny online. In short, Broussard argues, “there has never been, nor will there ever be, a technological innovation that moves us away from the essential problems of human nature.”8

What the papers and panel presentations at this workshop demonstrate is that threats to healthy politics, from toxic and abusive language to algorithmic biases, while digital in their manifestation, are, first and always, social problems. If this is so, then the solutions to the problems confronting our politics are unlikely to emerge from technology alone, if at all. Rather, social and political problems will require social and political interventions. Such a perspective not only frees us from dependence on a tech industry that may have neither the will nor the wherewithal to address the crisis of politics online, but also compels us to consider seriously in whom we should place our hopes.

If the plenary speakers at the Toxicity workshop are correct, the answers to social problems likely reside beyond the elites to whom we often look. Indeed, in many ways, the failures of these elites to perform their roles in ways that serve the public good—and their propensity to neglect more diverse perspectives—have created the conditions in which toxicity and intolerance have thrived.


References:

1. Zizi Papacharissi is professor and head of the Department of Communication, professor of political science at the University of Illinois at Chicago, and University Scholar at the University of Illinois System. Lisa Nakamura is a professor at the University of Michigan, where she holds appointments in the university’s American culture, screen arts and cultures, and women’s studies departments. Catherine Knight Steele is an assistant professor of communication at the University of Maryland, College Park, and was the first director of the Andrew W. Mellon–funded African American Digital Humanities Initiative (AADHum).
2. Because the research presented at this workshop is in development, we will not cite authors by name.
3. Marshall McLuhan, Understanding Media: The Extensions of Man (Cambridge, MA: The MIT Press, 1964), 8.
4. Fred Turner, “Machine Politics: The Rise of the Internet and a New Age of Authoritarianism,” Harper’s Magazine, January 2019.
5. John Dewey, Liberalism and Social Action (New York: G. P. Putnam’s Sons, 1935), 39.
6. Dewey, Liberalism and Social Action, 41.
7. Suzanne Spaulding, Devi Nair, and Arthur Nelson, “Why Putin Targets Minorities,” Center for Strategic and International Studies, December 21, 2018.
8. Meredith Broussard, Artificial Unintelligence: How Computers Misunderstand the World (Cambridge, MA: The MIT Press, 2018), 8.