Kenneth Prewitt performs a useful service by emphasizing the growing demand for accountability in social science research, especially from policymakers and the public, and by asking how social scientists should respond. The historical background he provides demonstrates that the demand for accountability is nothing new: it extends back at least to the early nineteenth century in the United States, and it is visible today in other nations such as Germany and the United Kingdom. In recent decades, however, several factors—claims by social scientists that they can help solve social problems, failures to replicate studies, social scientists acting like lobbyists for their favorite programs and for research funding, among others—have upped the ante on the demand for accountability.

In his essay, Prewitt argues that social scientists should take three actions to respond to this rising demand. Below, I expand on these three points to arrive at what I believe is a viable path forward. The first is to open some space in the demand for accountability so that social scientists can pursue research that is not focused on immediate outcomes and impacts. In the old days, we often talked about “basic” or “pure” research as contrasted with “applied” research. But Prewitt recommends that we set aside this worn distinction and create a “more telling” narrative of “science being used and science waiting to be used.” An important justification for this framing is that it is impossible to know when research on basic processes will reveal a practical application. Discussion of the mission of social science should begin with the assumption that good “basic research” will have application in the real world—it is only a question of when. If social scientists could make this framing popular and accepted among policymakers and the public—not an easy task—it would create an advantage for social science, because the hidden assumption would be that all good research has applications, even if we don’t yet recognize them.

It would be difficult to imagine a framework more likely to support funding of what used to be known as basic research. Prewitt even invents a clever term to capture the distinction between immediate and longer-term application of the findings of social research. He says we should employ the term “USBAR science”—that is, “unintended social benefits appreciated retroactively.” Again, to the extent that notions such as this become accepted, the message that basic research has practical application now or later will become more widely heard.

Prewitt’s argument is by no means against accountability for immediate outcomes. In fact, the second action social science must take is to emphasize that business, government, and civil society have people with the “experience and expertise to judge how and when scientific evidence can be used to make a better commercial product or government policy or social practice.” His point, rather, is that perhaps the most important application of social science occurs when it enters an arena occupied simultaneously by science and non-science considerations. Prewitt doesn’t say so, but in matters of social policy, social science will always play its role in debates in which decisions rest on both scientific and non-scientific considerations. Policy decisions are shaped by budgets, the views of constituents, the views and official positions of political parties, policymakers’ direct experience with the problem under consideration, and a host of other factors. In recent years, evidence from social science, and especially from program evaluations, seems to be playing a bigger role in those decisions than in the past. Prewitt does not object to this development; he simply argues that an exclusive focus on the practical application of social science to business and policy issues is inadequate.

The third action social science should take to meet the demand for increased accountability concerns metrics: those we use to determine impact “must be clear about what can be reasonably and reliably measured.” This wise and cautious injunction has application beyond metrics. Of course, the various disciplines of social science should not claim they can have impacts on behavioral outcomes that are beyond their ability to demonstrate. Prewitt cites the example of the frequently heard claim that we invest in education so we can produce “good citizens for the nation’s future.” Because this aspiration can be neither reliably measured nor supported by evidence, it has no place in a system of performance metrics. The broader lesson is not to overpromise. My own view is that we should emphasize what has already been achieved and is supported by rigorous evidence rather than make grand claims about what social science can produce in the future. The best example of the contribution of social science to policy is the scores of social intervention programs that rigorous evaluations have shown to produce impacts on the social problems they address, including programs in preschool, K–12, and post-secondary education; prevention of social problems such as adolescent pregnancy; and parenting. If policymakers would direct funding to these programs rather than to the many programs we now support that have never been evaluated, the nation would make greater progress in reducing its social problems.

Prewitt ends by claiming that social science is vulnerable because social scientists have not conducted the research that would demonstrate our ability to “strengthen the USBAR narrative” and “create appropriate assessment metrics.” But these two recommendations, though important, overlook two of the most potent weapons social science is now deploying to increase understanding of the nation’s social and economic conditions and to develop and test programs that successfully attack the nation’s major social problems. Yes, as a field, social science needs to strengthen its accountability, but what better way to demonstrate accountability than by showing both increased understanding of social phenomena and the ability to produce social impacts?

In the last decade or so, social scientists have increasingly drawn two major weapons from their arsenal to promote both the understanding of social phenomena and the ability to intervene effectively against social problems. The first is the use of administrative data, both as a source of outcome variables in rigorous evaluations and as descriptive data for studying important social issues and exploring their correlates. The second is the use of randomized controlled trials (RCTs) to provide definitive tests of whether social intervention programs produce their intended effects. These two weapons are already making important contributions to the understanding of basic social and economic processes, thereby meeting Prewitt’s emphasis on “basic research,” while also demonstrating that we can develop and deploy interventions that effectively address social problems.

David Card, Raj Chetty, Martin Feldstein, and Emmanuel Saez have recently written a paper for the National Science Foundation extolling the virtues of administrative data and imploring public and private agencies to make their de-identified datasets available for research purposes, because such datasets have huge sample sizes (sometimes entire populations) and “have far fewer problems with attrition, non-response, and measurement error than traditional survey data sources.” These datasets include those held by schools, federal and state executive agencies, the Census Bureau, the IRS, and many others. The authors even argue that the US, traditionally a leader in social research, is falling behind nations such as the UK, Australia, and Canada, which have made their administrative data more easily available to researchers. If the US is to follow their recommendation, a host of barriers, especially legal barriers that restrict access to datasets primarily to protect privacy, must be overcome. The newly appointed federal Commission on Evidence-Based Policymaking may provide a set of recommendations (and perhaps even draft legislation) that would translate the Card et al. recommendation into action.
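
To make the privacy barrier concrete, here is a minimal sketch, assuming nothing about any agency’s actual practice, of one standard de-identification technique: direct identifiers are dropped and replaced with a salted one-way hash, so that records for the same person remain linkable across datasets without exposing identity. The field names and the salt below are hypothetical.

```python
import hashlib

# Hypothetical secret salt held only by the data-providing agency; without
# it, outsiders cannot recover identifiers by hashing guessed values.
SECRET_SALT = b"agency-held-secret"

def deidentify(record: dict) -> dict:
    """Drop direct identifiers and substitute a salted one-way hash, so
    researchers can link a person's records across datasets without ever
    seeing the underlying identifier."""
    identifier = record["ssn"]  # hypothetical field name
    link_key = hashlib.sha256(SECRET_SALT + identifier.encode()).hexdigest()
    released = {k: v for k, v in record.items() if k not in ("ssn", "name")}
    released["link_key"] = link_key
    return released

# A fictional person's tax and school records hash to the same link_key,
# so the de-identified datasets remain linkable for research.
tax_record = {"ssn": "123-45-6789", "name": "Jane Doe", "agi": 52_000}
school_record = {"ssn": "123-45-6789", "name": "Jane Doe", "test_score": 88}
assert deidentify(tax_record)["link_key"] == deidentify(school_record)["link_key"]
```

The design point is simply that the salt never leaves the agency; releasing hashed keys rather than raw identifiers is one of several safeguards such proposals envision, not a complete privacy solution.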

The work of Raj Chetty and his colleagues, based on the tax records of millions of Americans, suggests the astounding potential of big data to advance social science understanding of economic and social issues. His work with various coauthors has shown, among many other important findings, that upward economic mobility varies enormously across areas of the US. Scholars often lament that economic mobility in the US is less robust than in other advanced economies. But several of the areas with the highest mobility, such as Contra Costa County, CA; Fairfax County, VA; and Montgomery County, MD, have rates of upward mobility comparable to those of nations with high mobility. Moreover, Chetty and his colleagues identify several characteristics of communities that are associated with upward mobility, including less segregation by income and race, lower income inequality, and fewer single-parent families. Correlation is not causation, but the correlations between community characteristics and upward mobility suggest a number of lines for further investigation.
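
A toy version of this kind of descriptive analysis is easy to sketch. The snippet below is purely illustrative, with invented numbers rather than Chetty’s actual estimates, and computes the correlation between hypothetical county characteristics and an upward-mobility measure.

```python
import pandas as pd

# Hypothetical county-level data; the values are invented for illustration
# and are NOT estimates from Chetty et al.
counties = pd.DataFrame({
    "upward_mobility":     [0.12, 0.09, 0.05, 0.11, 0.04, 0.08],
    "income_segregation":  [0.20, 0.25, 0.40, 0.22, 0.45, 0.30],
    "gini":                [0.38, 0.41, 0.48, 0.39, 0.50, 0.44],
    "single_parent_share": [0.18, 0.22, 0.35, 0.20, 0.38, 0.27],
})

# Pairwise Pearson correlations with the mobility measure. Descriptive
# only: as the text stresses, correlation is not causation.
print(counties.corr()["upward_mobility"].drop("upward_mobility"))
```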

The RCT is the second social science weapon that can be unleashed to strengthen the claim that social science is a worthy investment of taxpayer dollars. The contributions of RCTs to the accumulation of knowledge about preventing or ameliorating social problems are widely recognized. Arguably their most important contribution, as suggested above, is the identification of social intervention programs that produce significant impacts when implemented under real-world conditions. In fact, the last decade has seen a substantial increase in the number of such programs in the fields of delinquency prevention, high school graduation, college persistence, employment and training, prevention of unintended pregnancy, and many others. If federal grant funds were spent primarily on social programs supported by strong evidence, the nation would make greater progress in solving its social problems. The Obama administration has taken significant steps in this direction.
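
The analytic core of an RCT can be sketched in a few lines. Because assignment is random, treatment and control groups differ, in expectation, only in receipt of the program, so a simple difference in mean outcomes estimates the program’s impact. The simulation below is a minimal illustration with made-up numbers, not an analysis of any actual program.

```python
import random
import statistics

random.seed(42)

# Randomly assign 500 participants: shuffling then splitting ensures that,
# in expectation, the two groups differ only in receipt of the program.
participants = list(range(500))
random.shuffle(participants)
treatment, control = participants[:250], participants[250:]

def outcome(treated: bool) -> float:
    """Hypothetical outcome (e.g., a test score): baseline noise plus a
    true program effect of 5 points for treated participants."""
    return random.gauss(50, 10) + (5 if treated else 0)

t_outcomes = [outcome(True) for _ in treatment]
c_outcomes = [outcome(False) for _ in control]

# The impact estimate is the difference in group means.
impact = statistics.mean(t_outcomes) - statistics.mean(c_outcomes)
print(f"Estimated program impact: {impact:.2f} points (true effect: 5)")
```

In a real evaluation the difference in means would be accompanied by a standard error and significance test, but the logic of the design is exactly this simple comparison.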

If social science continues to exploit big data to uncover the kinds of correlations found by Chetty and his colleagues, and to develop and implement ever more effective social intervention programs, policymakers will continue to provide adequate or even increased resources. Show them the evidence and the impacts, and funding will follow.