It’s the summer of 2021. At this point, nobody needs to be told how much the Covid-19 pandemic has disrupted our everyday lives. However, much like how the symptoms of an illness are often a product of the body’s immune response to a more destructive threat, many of the disruptions this past year come not from the SARS-CoV-2 virus itself, but from the public policy interventions that have been put in place to slow its spread and save lives. Measures like closing businesses, restricting travel, mandating masks, and more have reshaped the lives of people all over the world for more than a year, likely saving millions of lives, but at a cost. How do we evaluate whether these life-saving measures were appropriate? Did we do enough? Did we overreact?

On March 15, 2020, as federal, state, and local governments across the United States started applying these policy measures, Dr. Anthony Fauci, head of the US National Institute of Allergy and Infectious Diseases (NIAID), said to a reporter, “If it looks like you’re overreacting, you’re probably doing the right thing.” This statement reflected a key intuition about how these measures would be received by the public: People would believe they were an overreaction, and they would be less likely to comply as a result.

“How do lay people decide whether a public health measure is an overreaction?”

Fauci’s comment raised a critical but understudied sociological question: How do lay people decide whether a public health measure is an overreaction, an appropriate response, or not enough? How do we make that judgment before we know how things turn out? And how do we evaluate costly interventions after we know how things turned out? To our surprise, we could not find existing research on these lay judgments, or on how judgments of overreaction might be linked to compliance with public health or other measures.

Without knowing how people evaluate whether something is an overreaction, it’s hard to know what the most effective messaging strategy is. Should policymakers and communicators emphasize the severity of the threat, or present it as something that can be stopped with action? Focus on how the intervention will fight the threat, or on the intended results? As we saw with responses to the recent halts on administering the Johnson & Johnson and AstraZeneca vaccines, public perception can be difficult to anticipate: Does a potentially overly conservative halt build long-term compliance and trust in governing bodies, or does it feed a conspiracy narrative that all vaccines are unsafe? These questions matter not just for the Covid-19 pandemic, but for every crisis that requires major public policy interventions, including future public health crises, natural disasters, and climate change. Here, we focus on understanding 1) how lay people make intuitive judgments about whether a public policy is an overreaction and 2) whether judgments of overreaction are tied to compliance with public health policies.

In recent studies, we have begun to map out some answers to these questions. First, we examined the mechanics of these judgments “in the lab”: in studies where we could control the information participants received, we asked US adults to evaluate hypothetical scenarios. We then applied what we learned to a new study with US adults, asking for their judgments about the Covid-19 pandemic measures.

Thinking about what could, would, or should have happened

“We engage in what’s called mental simulation: constructing a model of the world in our heads, changing something, and running that model forward to figure out what will happen next, or what would have happened instead.”

One of the more remarkable things about the way people make judgments is that we don’t base our decisions just on things that actually happened. Instead, we think about what could, would, or should happen. We engage in what’s called mental simulation: constructing a model of the world in our heads, changing something, and running that model forward to figure out what will happen next, or what would have happened instead. One recent theory about why we do this posits that it allows us to learn something new without actually needing any new information.1 We can take what we already know about the world and apply it to whatever new situation we find ourselves in to draw new conclusions, figure out what will and won’t work, or figure out what we should do next time if things didn’t go so well.

Our hypothesis was that judgments of overreaction would be based on these mental simulations as well. People think about what could happen with or without an intervention, or they think about what would have happened after the fact. The problem is that the set of things that could or would happen is infinite, and there’s no way we could consider every possibility. Instead, people seem to simulate a few specific possibilities. Recent research has suggested that in many cases those few specific possibilities tend to be biased toward outcomes that are both likely and good2 (with a few exceptions3). This, of course, leads to problematic reasoning: If you think a bad event is unlikely, you won’t think about the possibility that it may happen, and any actions taken to prevent or mitigate it will look like overreactions.
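
To make the idea concrete, here is a minimal toy simulation of that sampling bias, written as our own illustration rather than as a model from any of the studies cited above; the probabilities and costs are made-up placeholders. An agent that mentally simulates only a handful of likely futures will almost never sample a rare disaster, so preventing that disaster looks like a waste:

```python
import random

# Toy illustration of biased mental simulation (hypothetical numbers).
DISASTER_PROB = 0.05      # assumed probability of the bad event
DISASTER_COST = 1000.0    # assumed cost if the bad event happens
INTERVENTION_COST = 50.0  # assumed cost of the preventive intervention

def simulated_cost_of_inaction(n_futures: int, seed: int = 0) -> float:
    """Average cost over a limited number of mentally simulated futures."""
    rng = random.Random(seed)
    outcomes = [DISASTER_COST if rng.random() < DISASTER_PROB else 0.0
                for _ in range(n_futures)]
    return sum(outcomes) / n_futures

# Simulating only a few futures usually never includes the disaster,
# so doing nothing looks free and the intervention looks like an overreaction.
few = simulated_cost_of_inaction(n_futures=3)
many = simulated_cost_of_inaction(n_futures=100_000)

print(f"cost of inaction, 3 simulated futures:   {few:.1f}")
print(f"cost of inaction, exhaustive simulation: {many:.1f}")
print(f"cost of the intervention:                {INTERVENTION_COST:.1f}")
```

Judged against the few-sample estimate (typically zero), a 50-unit intervention looks like an overreaction; judged against the true expected cost of inaction (about 50 here), it is worthwhile.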

What we found through our project supports the idea that people rarely consider bad outcomes by default. We presented hypothetical scenarios about costly interventions to prevent wildfires or stop a dam from failing and asked our participants to judge those interventions on a 100-point scale on which 0 was “not enough,” 50 was “appropriate response,” and 100 was “complete overreaction.” We asked for these judgments twice. The first time, participants were told only whether the risk of a bad event (e.g., a destructive wildfire or the dam failing) was high or low. The second time, they were told what actually happened: that is, whether the bad event occurred or not.
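
As a rough sketch of how responses on a scale like this might be tabulated once the outcome is known, the snippet below averages ratings within each risk-by-outcome cell. The numbers are made-up placeholders of ours, not data from the studies:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical placeholder ratings on the 0-100 scale
# (0 = "not enough", 50 = "appropriate response", 100 = "complete overreaction").
responses = [
    {"risk": "high", "outcome": "bad event occurred", "rating": 45},
    {"risk": "high", "outcome": "no bad event",       "rating": 60},
    {"risk": "low",  "outcome": "bad event occurred", "rating": 50},
    {"risk": "low",  "outcome": "no bad event",       "rating": 75},
    # ...one row per participant per scenario in a real dataset
]

# Average rating within each risk-by-outcome condition.
cells = defaultdict(list)
for row in responses:
    cells[(row["risk"], row["outcome"])].append(row["rating"])

for (risk, outcome), ratings in sorted(cells.items()):
    print(f"risk={risk:<4}  outcome={outcome:<18}  mean rating={mean(ratings):.1f}")
```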

Figure 1. Prospective ratings. (Image: https://items.ssrc.org/wp-content/uploads/2021/07/Kominsky_graph1.png)

Before they knew the outcome, on average participants judged everything to be an overreaction (see Figure 1). Participants were sensitive to risk (the probability that the event would happen), judging interventions against low-risk bad events as a greater overreaction. However, even when the bad event was high-risk, the average ratings were on the “overreaction” side of the scale.

After they knew the outcome, participants in the conditions where the bad event didn’t happen still judged the intervention to be an overreaction, on average. That is, if the intervention worked, the average judgment was still that it was an overreaction (see the “good” columns in Figure 2). In contrast, only when the bad event happened anyway were participants willing to judge the intervention as appropriate (or not enough). That alone tells us that Dr. Fauci was on to something: If it works, it looks like an overreaction! Only when participants were told that the bad event really, definitely happened did their judgments indicate that they had considered the possibility of it happening at all. However, we found one important loophole: If we told participants exactly how the intervention prevented the bad event from happening, then the average judgment was that the intervention was more appropriate (compare the left half of Figure 2 to the right half). Adding a causal mechanism made it clear that, without the intervention, the bad outcome would have happened, which seems to be critical to these judgments.

Figure 2. Retrospective ratings. (Image: https://items.ssrc.org/wp-content/uploads/2021/07/Kominsky_graph2.png)

Overreaction judgments predict compliance with Covid-19 health policies

We then turned to the pandemic. We asked a new group of 450 US residents to make the same overreaction judgments about real public policies that have been used to fight the Covid-19 pandemic. We ran this survey over three days, January 14–16, 2021, just before the vaccination campaign started in earnest. It’s entirely possible that people’s judgments would be different now, or in a year, but this snapshot offers a good glimpse of attitudes at one of the worst peaks of the pandemic in the United States.

We found many connections between overreaction judgments and other judgments, but a few key findings stood out. We asked people how bad the pandemic had been in terms of illnesses and deaths, and this judgment was very closely correlated with judgments of overreaction: The worse someone thought the pandemic had been, the less likely they were to see the public health measures as overreactions, and the more likely they were to see them as merely adequate or even insufficient. This is similar to what we saw in the studies with hypothetical cases: When the bad event can’t be ignored because it actually happened, measures taken to stop it don’t look like overreactions. Both of these judgments were also closely related to judgments of how much of a threat Covid-19 posed to the general public and to the participants themselves.

The most important thing we found is that these overreaction judgments were strongly predictive of people’s self-reported compliance. The more people thought the policy measures were overreactions, the more likely they were to disobey them. This is ultimately a correlation, so we can’t say for sure what the causal link is between these overreaction judgments and people’s behavior, but it tells us what public policy researchers need to try next: If we can convince people that these sorts of policies aren’t overreactions, are they more likely to follow them?
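
For readers curious what this kind of association looks like in practice, here is a minimal sketch using synthetic placeholder numbers and a plain Pearson correlation; it is not our dataset or our analysis pipeline:

```python
from statistics import mean, pstdev

# Synthetic placeholder data (not the survey responses): one pair per
# respondent, overreaction rating (0-100) and self-reported compliance (0-100).
overreaction = [20, 35, 40, 55, 60, 70, 80, 90]
compliance   = [95, 90, 85, 70, 60, 55, 40, 25]

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    mx, my = mean(xs), mean(ys)
    cov = mean((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / (pstdev(xs) * pstdev(ys))

print(f"r = {pearson_r(overreaction, compliance):.2f}")
# A strongly negative r means higher overreaction ratings go with lower
# self-reported compliance; by itself it says nothing about causation.
```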

Lessons for crisis communication and challenges for future research

Convincing people that costly interventions aren’t overreactions is an uphill battle. Even if something worked in the past, people won’t consider the possibility that something bad could have happened unless they are really pushed to do so. Our results suggest that, at least for US adults, the best bet is to emphasize the risk when it is high, find ways to make people think about the possibility that bad events will actually happen, and explain exactly how a policy will prevent a bad event.

“It’s clear that understanding these judgments is critical if we want people to embrace costly but necessary policies to keep everyone safe.”

These recommendations haven’t been tested in the wild yet, and there’s a lot we still don’t know. It’s worth noting that getting people to think about the possibility that the bad event will actually happen is not trivial. Previous research on causal reasoning has sometimes simply asked people to describe the consequences of specific outcomes, which seems effective in changing their causal judgments,4 but this approach has never (to our knowledge) been examined in the context of policy communication. In addition, we didn’t look at the cost of the intervention as a separate factor, or at how judgments influenced longer-term beliefs. Because these are some of the first studies to ever look at how people make judgments of overreaction, they won’t tell the whole story. Nonetheless, it’s clear that understanding these judgments is critical if we want people to embrace costly but necessary policies to keep everyone safe.

Banner photo: Steve Eason/Flickr.

References:

1. Sara Aronowitz and Tania Lombrozo, “Learning through Simulation,” Philosopher’s Imprint 20, no. 1 (2020): 1–18.
2. Jonathan Phillips, Adam Morris, and Fiery Cushman, “How We Know What Not to Think,” Trends in Cognitive Sciences 23, no. 12 (2019): 1026–1040.
3. Falk Lieder, Thomas L. Griffiths, and Ming Hsu, “Overrepresentation of Extreme Events in Decision Making Reflects Rational Use of Cognitive Resources,” Psychological Review 125, no. 1 (2018): 1–32.
4. E.g., Jonathan F. Kominsky and Jonathan Phillips, “Immoral Professors and Malfunctioning Tools: Counterfactual Relevance Accounts Explain the Effect of Norm Violations on Causal Selection,” Cognitive Science 43, no. 11 (2019).