The failure to recognize dangerous speech—rhetoric that can inspire group violence—from Trump and other strongmen around the world is just one example of social media companies’ poor use of their vast private power. In this essay, Susan Benesch argues that while international human rights law was made for governments and not private companies, it has the potential, if adequately interpreted, to serve as a guide for social media companies to regulate hateful speech and for outsiders to hold these companies accountable.
The growth and popularity of social media technology have had tremendous consequences for human communication, access to information, and the strategies individuals deploy to share with, convince, and mobilize others. Yet the promise that this new digital ecosystem would play a primarily equalizing role has been short-lived. It is now also a powerful space for pushing partisan and false content to millions of users, and the latest tool for disseminating hate speech, exploiting wedge issues, and even inciting violence. The pernicious effects of social media on societies have become most visible during recent electoral campaigns and other high-stakes political processes in the United States, Europe, Sub-Saharan Africa, and elsewhere, as both domestic and external actors have used these forums to influence domestic public opinion and further polarize communities. Big tech companies have come under much scrutiny for not acting more aggressively to limit the spread of false or extremist content on their platforms, and many governments are struggling, or are reluctant, to put effective regulation in place. Some are constrained by free speech concerns, while others simply benefit from the effects of information disorder.
The United Nations (UN) and its operations have not been immune to the effects of disinformation, which not only further complicate the UN’s prevention agenda but also challenge core Charter values. In his address at the opening of the 40th regular session of the UN Human Rights Council in Geneva, UN Secretary-General António Guterres characterized misinformation and hate speech as global threats to democratic values, social stability, and peace. He asserted, “With each broken norm, the pillars of humanity are weakened.” In June 2019, he introduced an action plan to combat hate speech that aimed at improving preventive efforts and developing counternarratives, but also at enabling the UN system to respond more effectively to hate speech occurring across member states. Advancing these goals requires engaging with research communities in order to better understand the impact these technologies have on prevention efforts, to help inform policy responses, and to shape new research and policy agendas.
Disinformation and hate speech are raising urgent and complex geopolitical questions that governments, policymakers, and intergovernmental organizations like the UN will be hard-pressed to shape and respond to effectively. The challenges of disinformation and misinformation are transforming how societies grapple with democratic processes, as well as how and when violence emerges. International policymakers will thus need to deepen their understanding of disinformation, misinformation, and hateful speech and their impact on elections, violence, and conflict prevention. The UN relies on its unique legitimacy, moral authority, and skill set as an interlocutor between states and civil society. Yet the target of disinformation campaigns is frequently the very social cohesion that norms, institutions, and multilateral bodies such as the UN help to sustain.
The essays in this series on “Disinformation, Democracy, and Conflict Prevention” are based on presentations at a research workshop on “Disinformation, Democratic Processes, and Conflict Prevention,” convened by the SSRC’s Conflict Prevention and Peace Forum (CPPF) and its MediaWell disinformation research-mapping initiative for the SSRC’s Academic Network on Peace, Security, and the United Nations. Scholars and researchers from regions around the world examined the frameworks, findings, and debates in emerging research on information disorder and the linkages between disinformation, elections, hate speech, and identity-based violence. The workshop also explored the ways in which disinformation affects the UN prevention agenda, and how the UN system can better identify, track, and respond to the negative impacts of disinformation where the UN is engaged.
While hate speech, violence, and disinformation, all often amplified through social media and other technologies, are global phenomena, they also take unique geographic, political, and technical forms. This essay series and the workshop that informed it are modest contributions toward meeting a growing need for closer examination of these issues, both within global frameworks and in specific local contexts. The essays reflect the geopolitical realities of Asia, Africa, Europe, and the United States, and with a comparative lens offer analysis of current conditions and recommendations for future steps.
Additional Resources and Background Information:
- Conflict Prevention and Peace Forum (CPPF)
- MediaWell live research reviews and essays:
  - Hate Speech, Information Disorder, and Conflict Prevention
  - Defining “Disinformation”
  - Disinformation and Election Interference
  - Producers of Disinformation
  - Disinformation, Democracy, and the Social Costs of Identity-Based Attacks Online
This series has been curated by Tatiana Carayannis, program director of the Conflict Prevention and Peace Forum (CPPF); Jason Rhody, program codirector of Media & Democracy; and Mike Miller, program codirector of Media & Democracy.
Disinformation, Democracy, and the Social Costs of Identity-Based Attacks Online
by Sarah Sobieraj
In July, US President Donald Trump posted a now-infamous thread on Twitter: “So interesting to see ‘Progressive’ Democrat Congresswomen, who originally came from countries whose governments are a complete and total catastrophe, the worst, most corrupt and inept anywhere in the world (if they even have a functioning government at all), now loudly and viciously telling the people of the United States, the greatest and most powerful Nation on earth, how our government is to be run,” he tweeted. “Why don’t they go back and help fix the totally broken and crime infested places from which they came. Then come back and show us how it is done.”
Southeast Asia’s Disinformation Crisis: Where the State is the Biggest Bad Actor and Regulation is a Bad Word
by Jonathan Corpus Ong
As Western democracies debate social media regulation, Jonathan Corpus Ong outlines the valuable lessons they can draw from Southeast Asian experiences. Governments in the region have weaponized regulation and hijacked moral panics about disinformation to consolidate control over the digital environment. The challenge facing the world, he argues, is to build a more precise language of responsibility to tackle this multidimensional issue.
Artificial Intelligence and the Cultural Problem of Online Extreme Speech
by Sahana Udupa
A short foray into an AI-based platform’s effort to tackle hate speech reveals its promise, but also the enormous inherent challenges of language and context. Debunking the “magic wand” vision of AI moderation, Sahana Udupa calls for a collaborative approach between developers and critical communities.
Classifying and Identifying the Intensity of Hate Speech
by Babak Bahador
Hate speech does not operate in a vacuum, and its rise reflects changing political contexts. If we’re serious about fighting hate speech and its violent and destabilizing consequences, we need to identify its earliest manifestations. Babak Bahador offers a hate-speech intensity scale, a strategy that allows us to move beyond the binary approach that dominates current hate speech research. This concept can be operationalized to better identify and understand the evolution of hate speech before it leads to real-world harms.
The Institutional Crisis at the Root of Our Political Disinformation and Division
by Steven Livingston and Lance Bennett
The roots of the information disorder are multiple, but Steven Livingston and Lance Bennett argue that a disproportionate amount of attention, and critique, has been directed at technology. Although social media platforms rightly share blame for the circulation of mis- and disinformation, the authors suggest that a prior and more consequential source of information disorder may be traced to sustained attacks on “authoritative institutions,” which have historically worked to foster a sense of shared reality and to mitigate the threat of disinformation.
Converging Technologies in Africa: Geostrategic Positioning and Multipolar Competition
by Eleonore Pauwels
Recognizing that, in the absence of adequate regulation and oversight, the most intimate data we share can be used to undermine democratic processes and harm citizens, Eleonore Pauwels offers suggestions for how UN member states, particularly across Africa, might prevent rising forms of data collection and manipulation that lead to information disorder and electoral disruption.
Nigeria’s Disinformation Landscape
by Idayat Hassan and Jamie Hitchen
The increasing threat to democratic institutions posed by disinformation is a global phenomenon. Yet, as Idayat Hassan and Jamie Hitchen reveal in this case study of Nigeria, the local effects of disinformation are shaped as much by offline conventions and institutions as by online interactions.