Where do we go from here? And how do we get there? Those are questions I have been asking myself over the past few years as I’ve reflected on the various crises that have been plaguing the social sciences. I began my graduate training two years after the now famous “false-positive psychology” paper[1] triggered a crisis of confidence about the state of evidence in psychology and the other social sciences (e.g., experimental economics)[2] that engaged in similar meta-scientific reflections. Then came 2020. After a few years spent working through strategies to address that crisis of evidence,[3] the Covid-19 global pandemic and the temporary reckoning about racial and social justice reignited the crisis of relevance.[4] In addition to debates about how to improve (quantitative) methods, social scientists also debated whether the kinds of knowledge our fields were producing were actually useful for speaking to pressing issues in society.[5]

Taken together, it seemed that we had somehow gotten ourselves into a situation in which, despite having decades of research under our collective belts, we had great difficulty understanding the nature of behaviors well enough to make good predictions about how to change them. This state of affairs has been particularly concerning due to its implications for our readiness to respond during moments of crisis.[6] Part of the reason for this status quo seems to be the history of studying a narrow sliver of humanity in a limited set of circumstances,[7] which has inhibited our ability to learn about the range of factors that influence people’s thoughts, feelings, and behaviors.[8] Moreover, research projects are often developed without input from the people whose lives the work is intended to represent or influence.

These issues are not new; they have been written about extensively for decades.[9] Moreover, over the course of my own career, every major conference I have attended has devoted at least one session to discussing them. For a long time, I wondered whether they would be like some of the other issues I’ve encountered in academia: issues that we discuss ad nauseam and form task forces to write reports about, but seldom attempt to change.[10] This concern has been growing the longer I have been in the field.

My growing skepticism was recently tempered, though, by a new program designed to address some of these issues. My collaborators and I recently received funding from the Mercury Project, a global consortium of researchers working on improving public health interventions. Our specific project focuses on the social and logistical factors that contribute to inequities in vaccine uptake. We are, of course, grateful for the funding to do the research, but the money is not what prompted me to write this post. It’s the other things the Mercury Project is doing that excite me about its potential to address both the crisis of evidence and the crisis of relevance that have generated so much discussion over the past decade.

First, rather than allocate funding and leave each research team to figure out on its own what might or might not be helpful, the Mercury Project took a different approach to building a base of rigorous and relevant evidence. Each team submitted research proposals, as is typical with other funding mechanisms, but the decision to fund was not the end of the feedback process. Before projects got started, the Mercury Project brought each funded team together at a convening where researchers, methodological experts, representatives from communities that would be affected by the research, and policymakers who might ultimately use it all gave constructive and critical feedback on each research design, ensuring that the projects would not only meet high evidentiary standards but would also be designed in ways that produce useful evidence for relevant stakeholders.

Psychologist Lisa Fazio, who studies how people do or don’t update true and false beliefs, discusses research design with decision science researcher Sami Horn and the author. Photo by Heather Lanthorn.

One of the things I found particularly helpful from the convening was hearing the perspectives of policymakers. I often read papers in which scientists conduct studies, write them up for academic journals, then end their paper with a paragraph about what policymakers should do with their findings. Because many social scientists have no training and limited experience working with policymakers, and therefore do not have insight into how the policymaking process works at different levels, such statements at the end of papers often have limited utility—they make recommendations that, frankly, do not make sense given how policy (and practice) actually work. Because of that, it was helpful to have policymakers at the table as the research was being designed so that they could give direct feedback about what kind of evidence would (and would not) be useful. That feedback allowed each research team to ensure we were measuring relevant variables, weighing benefits and costs of different approaches appropriately, and more generally, thinking critically about the theoretical and practical significance of the work we are doing for both the scientific community and the broader societies that would be affected by our work.

Dr. Antony Ngatia, who has been working to combat Covid-19 with the Clinton Health Access Initiative in Kenya, addresses the practical side of promoting accurate health information and vaccinations. Photo by Jeff Mosenkis.

Another invaluable aspect of the Mercury Project approach was the diversity of the teams and other stakeholders brought together for the convening and the broader work. The Mercury Project teams hail from 20 different countries and are doing research in 17 different countries, as well as online. One benefit of having such diverse teams is that the experiences people bring from the lives they’ve lived in a variety of places provide tremendous insight into the factors that matter for the research to be done well, and for figuring out whether and when research generated in one context can be applied to another. Social scientists and statisticians have been writing about the importance of understanding heterogeneity in order to improve both our theories and their practical relevance.[11] But I have never been in another context that crystallized those ideas for me more clearly than the Mercury Project convening. For example, one conversation I am still thinking about weeks later was with other researchers, practitioners, and policymakers about why a health intervention that works really well in one country might not work in another. That conversation forced us to think through the structural, cultural, and political processes that affect the effectiveness of health interventions, factors that are important to understand in order to generate good theories of health behavior that could serve as useful theories of change.

Public health researcher Dr. Marin Atela describes the research design for studying health ambassadors in Côte d’Ivoire, Senegal, Malawi, and Zimbabwe. Photo by Jeff Mosenkis.

The large epistemic and practical problems that the social sciences have been debating and discussing will not be solved by scientists working in isolation in ivory towers. The solutions require programs and structures like the one I have described in this post, bringing together researchers, community members, practitioners, and policymakers to think critically together about the kinds of knowledge we create,[12] and the implications of that knowledge for society. If we create more opportunities for engagement and collaboration like the ones I just described, then maybe, just maybe, after decades of talking about these issues, we might make substantial strides toward improving both our scientific evidence and its relevance at the same time.

Banner image: Participants in the Mercury Project convening—representing 12 research teams from 20 countries—funders, and public health experts. Photo by Nadia Gilardoni.

References:

1. Joseph P. Simmons, Leif D. Nelson, and Uri Simonsohn, “False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant,” Psychological Science 22, no. 11 (2011): 1359–1366.
2. John Bohannon, “About 40% of Economics Experiments Fail Replication Survey,” Science, March 3, 2016.
3. Neil A. Lewis, Jr., “Open Communication Science: A Primer on Why and Some Recommendations for How,” Communication Methods and Measures 14, no. 2 (2020): 71–82.
4. Roger Giner-Sorolla, “From Crisis of Evidence to a ‘Crisis’ of Relevance? Incentive-based Answers for Social Psychology’s Perennial Relevance Worries,” European Review of Social Psychology 30, no. 1 (2019): 1–38.
5. Roy F. Baumeister, Kathleen D. Vohs, and David C. Funder, “Psychology as the Science of Self-Reports and Finger Movements: Whatever Happened to Actual Behavior?” Perspectives on Psychological Science 2, no. 4 (2007): 396–403.
6. Hans IJzerman et al., “Use Caution When Applying Behavioural Science to Policy,” Nature Human Behaviour 4 (2020): 1092–1094.
7. Joseph Henrich, Steven J. Heine, and Ara Norenzayan, “The Weirdest People in the World?” Behavioral and Brain Sciences 33, no. 2–3 (2010): 61–83.
8. Christopher J. Bryan, Elizabeth Tipton, and David S. Yeager, “Behavioural Science Is Unlikely to Change the World without a Heterogeneity Revolution,” Nature Human Behaviour 5 (2021): 980–989.
9. David O. Sears, “College Sophomores in the Laboratory: Influences of a Narrow Data Base on Social Psychology’s View of Human Nature,” Journal of Personality and Social Psychology 51, no. 3 (1986): 515–530.
10. Neil A. Lewis, Jr., “What Universities Say Versus Do about Diversity, Equity and Inclusion,” Nature Human Behaviour 6 (2022): 610.
11. Bryan, Tipton, and Yeager, “Behavioural Science Is Unlikely to Change the World.”
12. Neil A. Lewis, Jr., “What Counts as Good Science? How the Battle for Methodological Legitimacy Affects Public Psychology,” American Psychologist 76, no. 8 (2021): 1323–1333.