Many communities in Southeast Asia have a storied legacy of distrust toward media and technology regulation. Governments in the region have a long history of weaponizing the law—from libel to blasphemy to monarchy defamation to media ownership—to harass or even jail journalists, media owners, and activists expressing dissent against state leaders.

The Covid-19 public health crisis only intensified this disposition, as autocratic (and aspiring authoritarian) governments shamelessly went after easy targets and vocal opponents just when publics were at their most vulnerable. Over the past months, Thai authorities retaliated with lawsuits against healthcare workers who exposed corruption and hoarding in the surgical mask supply chain; in the Philippines, government and military officials applied war rhetoric to both Covid-19 and the opposition politicians and celebrities who criticized a militarized pandemic response.

“Southeast Asian experiences hold valuable lessons to help advance global efforts to mitigate disinformation.”

The region has been a leading example of “legislative opportunism” through which moral panics about disinformation have been exaggerated and hijacked by state leaders to gain control over the digital environment. The pandemic has only accelerated and amplified the effect of so-called “anti-fake-news” laws on a global level as governments from Romania to Botswana emulate scare tactics seen in Singapore and Malaysia.[1] For this reason, Southeast Asian experiences hold valuable lessons to help advance global efforts to mitigate disinformation.

Researchers, especially ethnographers in the field of disinformation studies, agree that there is no one-size-fits-all approach to fighting information disorders, and that interventions should be culturally appropriate.[2] But learning from Southeast Asia can help answer critical questions in the global fight against fake news. How can researchers help lobby for responsible tech regulation in countries where government leaders are the biggest bad actors? How can we mindfully and successfully navigate multi-stakeholder environments where almost everyone is complicit in sourcing or profiting from disinformation shadow economies? Crucially, should the extreme cases we have seen in the region be treated as “fringe” policy, or should they be the model for global standards on platform bans and content moderation? For example, Facebook belatedly banned Myanmar’s commander-in-chief and other military officials from its platform for violating its community standards on hate speech: Should it do the same for other political leaders amplifying hateful and xenophobic speech?

As a researcher who has written about disinformation economies from the perspectives of entrepreneurial “perpetrators” and participated in election integrity partnerships in the last Philippine elections, I struggle with these questions every day and resolve them provisionally.[3] As each new and bewildering emergency arises, I look for inspiration in the responsible interventionist research set forth by colleagues such as Joan Donovan, who challenges US journalists to practice strategic silence when reporting on media manipulators,[4] and Ethan Zuckerman, who has been a powerful voice for the regulation of the internet as a public service utility meant to uphold democratic values.

“Southeast Asia is not only a hub of disinformation innovations, but also a hub of bad examples of disinformation regulation.”

In relating three key lessons I’ve learned as an ethnographer studying digital media in Southeast Asia, I invite my Euro-American colleagues and platform representatives to scrutinize the specific challenges in the region. Southeast Asia is not only a hub of disinformation innovations, but also a hub of bad examples of disinformation regulation. Reflecting on this broad spectrum can empower researchers and civil society partners to develop culturally relevant tools and backchannel lobbying initiatives that can maneuver around repressive regimes. I also call on fellow Southeast Asian researchers to build better support systems for each other, given that our work involves many diverse and imperfect allies.

Many disinformation producers are financially motivated with little ideological investment

Whereas “unholy alliances” among diverse segments of the US far-right have real ideological investment behind the xenophobic and/or misogynist online speech that aligns with their political agenda,[5] many Southeast Asian media manipulators are driven by almost purely entrepreneurial motivations. For instance, Indonesia’s Instagram clickfarm industry, built to serve a global clientele of lifestyle influencers, was later repurposed for political campaigns in the country’s 2019 elections. In the Philippines, “black campaigning” has emerged from the shadows and moved into the boardrooms of advertising and public relations firms, which sell their services to the highest bidder. From our ethnographic research with campaigners, influencers, and fake account operators in the Philippines, we discovered that nobody works as a full-time troll;[6] the majority maintained “respectable” day jobs in corporate marketing.

There are plenty of disinformation entrepreneurs in Southeast Asia, but they’re often missed in narratives about the region, which quite understandably spotlight authoritarian political leaders attacking liberal institutions. Researchers here have an opportunity to shade in the layers of accountability and complicity within these political regimes and help journalists find more effective tools to mitigate political crises. Journalistic exposés that “unmask the trolls” supporting populist leaders are not enough.

“Engaged researchers in the region have a responsibility to participate in, and even lead, global debates around the principles that govern influencer marketing, digital advertising, and the practice of political campaign consultancies.”

Engaged researchers in the region have a responsibility to participate in, and even lead, global debates around the principles that govern influencer marketing, digital advertising, and the practice of political campaign consultancies. Before US Democratic candidate Mike Bloomberg enlisted armies of micro-influencers and meme pages for his failed presidential bid in the 2020 primaries, the 2019 Philippine elections had already seen parody Twitter accounts and thirst-trap Instagram influencers mobilized for campaigns in ways that circumvented campaign finance regulations.[7] In Thailand, electoral campaign and social media laws have been politicized to such an extreme that opposition politicians are routinely disqualified and harassed, and enforcement is arbitrary.

Southeast Asian scholars and activists can develop better tools and regulatory frameworks that aim for checks and balances and transparency, rather than routinely lobbying for more platform takedowns. This involves more robust tracking of election campaign spending, transparency mechanisms that identify the PR agencies and political consultants behind campaigns, and lobbying industry to apply pressure for fair and honest elections. Journalists in the region may themselves be reluctant to antagonize those who control the corporate advertising money their news agencies depend on, so it is important to institutionalize collaboration across sectors.

Blaming Facebook is easier than seeking local reform—but it doesn’t solve the problem

“It’s undoubtedly important we keep applying pressure to platforms to improve their content moderation of extreme speech and enhance support for the many precariously employed content moderators in the region.”

Platform determinist narratives assign primary blame to Facebook for the crass tenor of partisan debate and “surprise” electoral outcomes. It’s undoubtedly important we keep applying pressure to platforms to improve their content moderation of extreme speech and enhance support for the many precariously employed content moderators in the region.[8] It’s also urgent that we demand better representation of Southeast Asia on the Facebook Oversight Board responsible for reviewing content takedown decisions. As legal scholars argue, it is disproportionate that only one Southeast Asian representative sits on the 20-person board when global surveys have found that four of the top 10 countries with the most active social media users are in Southeast Asia. This also raises the question of translation labor: Which regional and local experts will eventually be enlisted by Facebook to translate the cultural specificities of social media posts for these 20 board members, and will they be compensated properly?

Researchers in the region should hold space for demanding accountability from platforms alongside the hard work of collaborating with lawmakers, journalists, and private industry to develop regulatory interventions that can resist state capture. I’m mindful of the Singapore experience, where moral panics around “fake news” and foreign interference in elections ultimately benefited the ruling party, as its overreaching “anti-fake news law” granted government leaders power over platforms’ content takedowns. Softer alternatives include civil society-led monitoring of election campaign finance and self-regulatory mechanisms in advertising councils.

“We found evidence that Facebook was actually a much more active participant in multi-stakeholder initiatives in Southeast Asia compared to Twitter and Google, which failed to attend key meetings with election commissioners and civil society watchdogs.”

What’s also become apparent in our comparative research on election integrity initiatives in Southeast Asia is that platforms are not monoliths: their national-level staff are variably empowered, passionate, and strategic.[9] We found evidence that Facebook was actually a much more active participant in multi-stakeholder initiatives in Southeast Asia compared to Twitter and Google, which failed to attend key meetings with election commissioners and civil society watchdogs. Popular platforms such as Line (in Thailand) and Viber (in the Philippines) have not been part of regulatory discussions at all, despite being cesspools of dis- and misinformation, including in the current Covid-19 crisis.

Southeast Asia’s China problem exposes the thin line separating disinformation and hate speech

We should be mindful that this urgent fight against “fake news” doesn’t turn us or our allies into the very enemies we vow to fight. One of the findings of our 2019 Southeast Asian elections study is that disinformation became “democratized,”[10] and politicians and their supporters who had previously decried disinformation campaigning adopted some of these same tactics to try to “fight fire with fire.” While some coordinated tactics productively disrupt racist speech, as when K-Pop fans torpedoed racist hashtags aimed at the Black Lives Matter movement, we should be cautious that other tactics might reproduce vicious cycles of hateful confrontation.

Even Singapore, with its tradition of restraint and civility in political debate, witnessed an escalation in dirty campaigning from politicians across the political spectrum, and academics sympathetic to the beleaguered opposition party had to call out their own colleagues for behaving like bullies.

“In the Philippines and Indonesia, politicians and digital influencers were occasionally complicit in the surge of anti-China xenophobic speech and conspiracy theories in the wake of Covid-19.”

In the Philippines and Indonesia, politicians and digital influencers were occasionally complicit in the surge of anti-China xenophobic speech and conspiracy theories in the wake of Covid-19. Both countries saw many incidents of digital parody and memes, racial slurs, and refusals of service to mainland Chinese people on ridesharing apps, occasionally erupting into physical violence. Rather than fact-checking conspiracy theories or calling out extreme speech, some journalists in these countries reproduced this hateful rhetoric on their own personal pages or even in national newspapers such as the Philippine Daily Inquirer. Some journalists and activists justified the use of hate speech as a “weapon of the weak” that resists Beijing’s increasing economic and political presence in the region.

Academics and journalists should prepare for scenarios where digital disinformation and hate speech converge and inflict lasting harm on multicultural relations. Anti-China conspiracy theories and emotionally manipulative speech are deliberate strategies of various political influencers and meme accounts, and we should be quick to call these out in the months ahead.

Antiracism trainings that shed light on the historical and structural roots of racial hierarchies within the region, and that develop standards for reporting on complex multicultural issues, would be important programs for journalists, platform workers, and academics in the region to collaborate on. Disinformation and hate speech have already incited religious and racial violence in Myanmar, and civil society actors need to be comprehensive and pre-emptive in calling out abusive speech that deepens social fractures.

“There are far too many people responsible for, and many more complicit in, the expansion of disinformation economies to reduce the fight against disinformation to simplistic good-versus-evil narratives.”

Moving forward, we need better cooperation among academic researchers, journalists, and civil society activists to tackle a multidimensional issue that cannot be solved by technological solutionism (e.g., improved algorithms) or platform determinism (e.g., solely blaming social media platforms). After all, there are far too many people responsible for, and many more complicit in, the expansion of disinformation economies to reduce the fight against disinformation to simplistic good-versus-evil narratives. The challenge ahead is to develop a more precise language of responsibility, one that can assign culpability both to the diverse disinformation producers who profit from political campaigns and to the ordinary people who believe in various disinformation narratives.

We will need sustainable infrastructures for deep research and quick interventions: infrastructures that shed light on disinformation innovations, de-escalate narratives that may lead to violence and harm, disincentivize nontransparent and unaccountable forms of electoral campaigning, and help us understand the social and economic anxieties stoked by insidious media manipulators so that we can address them at their roots.

Banner photo: Patrick Roque/Wikimedia Commons.

References:

1. Gabrielle Lim, Securitize/Counter-Securitize: The Life and Death of Malaysia’s Anti-Fake News Act (Data & Society, 2020).
2. Sahana Udupa, Igino Gagliardone, Alexandra Deem, and Laura Csuka, Field of Disinformation, Democratic Processes, and Conflict Prevention: A Scan of the Literature (Social Science Research Council, February 2020).
3. Jonathan Corpus Ong and Jason Vincent A. Cabañes, “When Disinformation Studies Meets Production Studies: Social Identities and Moral Justifications in the Political Trolling Industry,” International Journal of Communication 13 (2019): 5771–5790; Jonathan Corpus Ong, Ross Tapsell, and Nicole Curato, Tracking Digital Disinformation in the 2019 Philippine Midterm Election (New Mandala, Australian National University, 2019).
4. Joan Donovan and danah boyd, “Stop the Presses? Moving from Strategic Silence to Strategic Amplification in a Networked Media Ecosystem,” American Behavioral Scientist, September 29, 2019.
5. Alice Marwick and Rebecca Lewis, Media Manipulation and Disinformation Online (Data & Society, 2017).
6. Jonathan Corpus Ong and Jason Vincent A. Cabañes, Architects of Networked Disinformation: Behind the Scenes of Troll Accounts and Fake News Production in the Philippines (Newton Tech4Dev Network, 2018).
7. Ong, Tapsell, and Curato, Tracking Digital Disinformation, 23–24.
8. Matti Pohjonen and Sahana Udupa, “Extreme Speech Online: An Anthropological Critique of Hate Speech Debates,” International Journal of Communication 11 (2017): 1173–1191.
9. Jonathan Corpus Ong and Ross Tapsell, Mitigating Disinformation in Southeast Asian Elections: Lessons from Indonesia, Philippines, and Thailand (NATO Strategic Communications Centre of Excellence, 2020).
10. Ong and Tapsell, Mitigating Disinformation, 7.