I (Devries) recently reached out to someone important in my life whose politics have been slipping further toward the harmful far right. Our conversations had become increasingly strained by our commitments to opposed politics. This person’s unshakable evangelicalism functions in tandem with their faith in Donald Trump’s political and moral virtue, reflecting Trump’s strong support among white evangelicals. Still, in my growing desperation to counter the “fake news” surrounding this person, I sent them an article inviting evangelicals, four years into Trump’s presidency, to step back and consider whether his character still truly embodied their values. I thought this article might provide a safe space for this person not to debate, but to pause in their unquestioned support for Trump, since the article was written by an evangelical and the author expressed support for some Republican policies. In short, I saw this piece of media as a way in: a means of accessing this person in a way that might afford a political rewiring.

Much to my dismay, however, I received no response except one word: “Wrong.” This somewhat hilariously robotic response struck me not just for its dismissiveness, but for the way it resembled Trump’s classic retort, “wrong.” While this person had always been politically active, this abrupt style hadn’t always characterized our interactions. In fact, it reminded me instead of a different interaction I have quite frequently with non-humans when logging into my various social media accounts:

When we offer the wrong password to a site like Facebook, its servers deny us access. This is because systems like Facebook follow a strict authentication protocol for establishing a point-to-point connection between users and company servers. The protocol works as follows: the user offers their credentials (i.e., their username and password) to the Facebook servers, and the servers ensure that the user is who they say they are by matching the user’s credentials against what the servers have in storage. If the user inputs the correct username and password, the servers recognize the user as authentic, that is, as who they claim to be, and establish a communication link between them. If the protocol does not authenticate the user, then the system cannot establish a communication link between its (e.g., Facebook’s) servers and the user.
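
To make the matching step concrete, here is a minimal sketch in Python, assuming the common practice of storing salted password hashes rather than plaintext; the names and flow are illustrative, not Facebook’s actual implementation.

```python
import hashlib
import hmac
import os

# Toy credential store: username -> (salt, password_hash).
# Illustrative only; a real service keeps this in a hardened database.
CREDENTIALS: dict[str, tuple[bytes, bytes]] = {}

def hash_password(password: str, salt: bytes) -> bytes:
    """Derive a slow, salted hash so a stolen store is hard to reverse."""
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

def register(username: str, password: str) -> None:
    salt = os.urandom(16)
    CREDENTIALS[username] = (salt, hash_password(password, salt))

def authenticate(username: str, password: str) -> bool:
    """The 'match' step: recompute the hash and compare it with storage."""
    stored = CREDENTIALS.get(username)
    if stored is None:
        return False  # unknown user: no match, no communication link
    salt, expected = stored
    # Constant-time comparison avoids leaking how close the guess was.
    return hmac.compare_digest(hash_password(password, salt), expected)

register("alice", "correct horse battery staple")
print(authenticate("alice", "correct horse battery staple"))  # True: access granted
print(authenticate("alice", "wrong"))                         # False: access denied
```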

Practically speaking, the system authenticates the user as who they claim to be by seeking a “match.” Here, authenticity (i.e., the user existing as who they say they are) is not any static authentic essence or true quality of the user’s personal identity. Rather, authenticity is the by-product of an interaction between actors, in this case Facebook and the human user, or between the servers that host these two entities. Only after the protocol verifies the identity of the user through their offered credentials can the system trust the user to enter, input data, and make changes without causing harm. In this sense, digital security systems experience authenticity as “trustworthiness,” and trustworthiness as “match.”[1]

Something similar occurs during interactions like the one described above. When I sent my credentials, i.e., the USA Today article, the recipient worked to verify whether the content matched their protocol for authentication. As their eyes shifted across the page, or perhaps just the headline, they noted who sent the article, its publisher, and its phrasing, cross-referencing this information with criteria stored in their own mind that mark content as “safe,” “worth reading,” and therefore “trustworthy” and “truthful.” Importantly, these criteria prioritize strict sameness: despite my thinking that the article had enough in common with the recipient’s views, it inevitably did not match enough of their stored criteria to be authenticated as trustworthy content. The recipient therefore denied any communicative link between us with a non-conversational response: “Wrong.” Access denied.
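
Pushed to its playful extreme, the analogy can be written out as code. The following sketch, in which every name and criterion is hypothetical and invented purely for illustration, treats the recipient’s media filter as the same kind of match-seeking function as the login check above.

```python
from dataclasses import dataclass

@dataclass
class Article:
    sender: str
    publisher: str
    framing: str  # e.g., "pro-Trump" or "questions Trump"

# The recipient's stored criteria for "safe" content: the human analogue
# of a credential store. All values here are hypothetical.
TRUSTED_SENDERS = {"pastor", "likeminded_cousin"}
TRUSTED_PUBLISHERS = {"Fox News"}
ACCEPTED_FRAMINGS = {"pro-Trump"}

def authenticate_article(article: Article) -> str:
    """Seek a match between the article's 'credentials' and stored criteria."""
    matches = (
        article.sender in TRUSTED_SENDERS
        and article.publisher in TRUSTED_PUBLISHERS
        and article.framing in ACCEPTED_FRAMINGS
    )
    # No partial credit: the protocol prioritizes strict sameness.
    return "reads article" if matches else "Wrong."

piece = Article(sender="devries", publisher="USA Today", framing="questions Trump")
print(authenticate_article(piece))  # -> Wrong. Access denied.
```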

“The inherent truthfulness of any fact checking news article that we might share with folks is less important than its ability to match their pre-existing authentication protocol.”

We can think of both of these instances (inputting a password and sharing an article with someone) as processes of authentication, in which an entity like a password or news article becomes authentic and is experienced as trustworthy through an interactive exchange. In this framework, the inherent truthfulness of any fact-checking news article that we might share with folks is less important than its ability to match their pre-existing authentication protocol. It is this match that allows exchange between opposed sides, where defensive guards are subdued and new information can be allowed in.

This pairing of “trust” with an authenticated match is thus a pervasive concept that resonates in both digital and social systems. This is because the logics that organize digital networks recursively draw from material cultural conditions, which in turn shape the further design of new media technologies. The contemporary encoding of this logic of Trust via Authenticated Match into online infrastructures grew out of decades of rhetoric about the dangers of an anonymous internet filled with unverifiable users, as well as out of material for-profit motives.[2] Throughout the early 2000s, corporations like Google and Facebook, which rely on tethering online profiles to offline “authentic identity” for their data-mining operations, insisted that the verification of recognizable, transparent, authentic online identities was the surest means of promoting safety and combating online aggression.[3] This message was culturally welcomed after an abundance of trolls, phishing schemes, spam, cyberbullying, and the supposedly rampant spread of “cyberporn” provided an excuse for corporate and state demonization of the anonymity the internet provides.[4] Despite the fact that most aggression and cyberbullying come from non-strangers, for example from the “friends” in our Facebook networks, the assumption that the familiar or the friend is safe and trustworthy while the stranger or the non-recognized is dangerous and untrustworthy has become a cultural truism online.[5] This is not because of its inherent accuracy, but because social media companies profit from the data of verified identities and profiles, and thus organize the construction of for-profit digital networks in ways that define trust as security through authenticated familiarity.

“Who we trust is dependent upon ongoing and historical processes of authentication that recognize and re-establish who is familiar and who is therefore trustworthy.”

This reduction of trust to security via authenticated familiarity in digital networks reinforces the general social presumption that danger, lies, or “fake news” come from outsiders: those who are not the same as us, whom we do not “match.”[6] Under this logic, “sanctioned, established, powerful individuals or organizations” that we already recognize and trust with our data, such as Facebook or certain political parties, are implicitly coded as always already trustworthy.[7] Whether we trust alt-tech far-right sites like Parler over Facebook, or Republicans over Democrats, to give us the truth is therefore not dependent upon their tendency to tell the truth or act transparently. Rather, whom we trust depends upon ongoing and historical processes of authentication that recognize and re-establish who is familiar and therefore trustworthy. In my interaction, the USA Today article asked evangelical and Republican readers to question the trustworthiness of Trump, an authority figure already coded as familiar, virtuous, and beloved in these groups’ previously written authentication protocols. The very act of questioning the familiar, however, runs counter to the established techno-cultural axiom of trust as security via authenticated match.

This axiom of Trust as Security via Authenticated Match is derivative of the logic of homophily, a standardized network-design logic that assumes a desire for the same drives user behavior.[8] We argue that both homophily and Trust via Match are ubiquitous norms not only in computer network design (consider how recommendation algorithms are designed to bring us content resembling what we have already engaged with), but also in social organization and interaction shaped by colonial, segregationist histories.[9]
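
That parenthetical can be made concrete. Below is a minimal, hypothetical sketch of similarity-based recommendation (not any platform’s actual ranking code): items most like what a user has already engaged with score highest, so a preference for the same is built in as a design default.

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Standard cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Toy topic vectors: [evangelical, pro-Trump, critical-of-Trump].
user_history = [0.9, 0.8, 0.1]  # what the user has already engaged with

candidates = {
    "pro-Trump op-ed":               [0.8, 0.9, 0.0],
    "evangelical critique of Trump": [0.9, 0.1, 0.8],
    "neutral policy explainer":      [0.2, 0.3, 0.3],
}

# Rank by similarity to past engagement: the most familiar content wins.
for title, vec in sorted(candidates.items(),
                         key=lambda kv: cosine_similarity(user_history, kv[1]),
                         reverse=True):
    print(f"{cosine_similarity(user_history, vec):.2f}  {title}")
```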

Sara Ahmed[10] shows how this logic of homophily and trust via match has worked, and continues to work, in offline networks when she describes how white supremacist movements and far-right logics produce hate through the circulation of love between bodies marked as the same. Just as value is not an inherent property of a commodity but emerges through that commodity’s circulation, the circulation and exchange of love and affinity between deeply familiar entities (i.e., between white people, the concept of the white race, the nuclear family, God, nationhood, and promised land) produces value within those entities, placing them in need of preservation. However, nonfamiliar (i.e., no-match) peoples, religions, or cultures are simultaneously coded as untrustworthy, marking those who have historically been construed as “Other” as a threat, the very downfall of “Us.”[11]

In this sense, processes that authenticate have always been part of what constitutes authenticity and the experience of trustworthiness, whether in online or offline interactions. Humans authenticate people, movements, political media, and ideas by seeking a match, just as digital verification interfaces do. With this in mind, the “password failed” notification we get when typing in the wrong Facebook password is not a metaphor for what happens when users reject the articles or ideas we send them. Instead, these two responses are the same type of authentication process, one that either admits or denies access and communication. By recognizing these as the same process of authentication between different human and non-human actors, we discover how pervasively Trust via Match has been absorbed both into the design of our digital networks and into the ways humans interact with and make sense of the world.

“We think this calls for studies of far- and right-wing movements that analyze these processes where authentication happens, instead of static ‘authenticity.’”

When rooted in homophily, authentication processes reinforce walls between political worlds, thanks in no small part to the goals of corporate digital systems as well as our social histories of segregation and comfort with familiarity, preventing movement toward a more egalitarian existence in which we might trust difference. We think this calls for studies of far- and right-wing movements that analyze these processes where authentication happens, instead of static “authenticity.” This means dividing our attention between the substance or semiotics of “fake” news content and the (often affective) material interactions between humans, platforms, and content that produce a match for those who find fake news compelling. Practically speaking, this could look like qualitative and conversational methods that study humans, technologies, and content as interlinked actors with homophilic histories.

For now, we think one way to slow the spread of mis- and disinformation is through tactics designed to disrupt the processes of authentication that make “fake news” trustworthy. When reaching out to the far-right adherents in your life, the trick may be to find content that matches their verification system just enough to slip past its security and gain access to the user’s world. From there, we have the chance to rewrite the user/system’s authentication protocol to include new, diverse concepts and ways of thinking, even if slowly and over time. This type of subtly resistive content is not easy to find or produce, but it may provide one more important tool (alongside other macro-social methods) for infiltrating the processes that authenticate fake, misleading, or otherwise harmful media. In the meantime, processes of authentication remain powerfully effective, and need only qualify content as nonfamiliar to mark it as Wrong.

References:

1. Pearson, 2013.
2. Wendy Hui Kyong Chun, Updating to Remain the Same: Habitual New Media (Cambridge, MA: MIT Press, 2016), 109.
3. Oliver L. Haimson and Anna Lauren Hoffmann, “Constructing and Enforcing ‘Authentic’ Identity Online: Facebook, Real Names, and Non-Normative Identities,” First Monday 21, no. 6 (2016).
4. Helen Nissenbaum, “Securing Trust Online: Wisdom or Oxymoron?” Boston University Law Review 81 (2001): 635–664; Chun, Updating to Remain the Same.
5. Chun, Updating to Remain the Same, 111–113.
6. Nissenbaum, “Securing Trust Online: Wisdom or Oxymoron?”
7. Chun, Updating to Remain the Same; Nissenbaum, “Securing Trust Online: Wisdom or Oxymoron?”
8. Wendy Hui Kyong Chun, “Queering Homophily,” in Pattern Discrimination, eds. Clemens Apprich, Wendy Hui Kyong Chun, Florian Cramer, and Hito Steyerl (Minneapolis, MN: University of Minnesota Press, 2018), 59–98.
9. Safiya Noble, Algorithms of Oppression: How Search Engines Reinforce Racism (New York: NYU Press, 2018); Cathy O’Neil, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy (New York: Broadway Books, 2016); Chun, “Queering Homophily.”
10. Sara Ahmed, “Affective Economies,” Social Text 22, no. 2 (Summer 2004): 117–139.
11. Chun, “Queering Homophily.”