To the practicing academic social scientist, the question, “Can social science matter?” is so lacking in nuance that it is best ignored; or, if responded to, answered by supplying nuance: “Matter to whom?” or “It depends” or “Yes, but not in predictable ways.”

Nuance, however, is not always foremost in the thinking of those with influence over the funding, the purposes, and the use of social science. They are increasingly saying: “Yes, social science can matter, but only if there is accountability”—preferably based on benchmarks, targets, and performance metrics. Moreover, if research universities resist, there are alternatives—think-tanks, contract houses, consultancy firms, and other new for-profit players.

This turn to measurable accountability is broader than the social sciences. Colleagues in the UK have issued a report titled “The Metric Tide,” focused on how metrics are used to assess and manage scientific research. 1 Wouters, Paul, Mike Thelwall, Kayvan Kousha, Ludo Waltman, Sarah de Rijcke, Alex Rushforth, and Thomas Franssen. “The Metric Tide.” (2015). Powerful currents, they write, are whipping up the tide. Among these currents are growing pressures for audit and evaluation of public spending on higher education and research; demands by policymakers for more strategic intelligence on research quality and impact; and increases in the availability of real-time big data on research uptake, together with the growing capacity of tools for analyzing them.

“But what do we mean by ‘accountability’ for science?”

Although the movement toward measurable accountability is being felt across all the sciences, I focus on social science. For social science, impact and uptake commonly mean improved policy—an outcome that was promised even before the late-nineteenth-century establishment of the social sciences in the US as we know them today. Early in the nineteenth century, rudimentary descriptive statistics highlighted America’s social problems. In the 1850s, race science was cited in congressional policy debates, inaugurating practices that now travel under such labels as “evidence-based policy.” Today the Social Science Research Council writes that its mission is to “mobilize knowledge on important public issues” and promises to “link research to practice and policy,” as does every other social science actor in the country, and probably in the world. In this context, measurable accountability sounds reasonable. If social science urges government to make use of its research, it is reasonable for government to ask whether funding that research is a good use of taxpayer dollars.

After all, accountability is a good thing. Democracy rests on holding politicians accountable for their errors of judgment or corrupt practices. Financial markets and commercial enterprises are accountable to shareholders and consumers, trade unions to their members, and professors to their students and peers. Why should science not be accountable to its funders?

But what do we mean by “accountability” for science? Does “pure” science fit into a twenty-first-century science policy? It seemed to have a place a half-century ago. A commissioned history of the NSF’s formative years is tellingly titled A Patron for Pure Science. 2 England, James Merton. A Patron for Pure Science: The National Science Foundation’s Formative Years, 1945-57. Vol. 82, no. 24. National Science Foundation, 1983. We would be surprised to find the purity terminology so casually invoked today. NSF terminology now circles around impact, consequences, and outcomes, especially on behalf of national security and economic prosperity, and especially where social science is concerned. Philanthropic funding exhibits similar tendencies, with increasing pressure for practical outcomes, preferably measurable ones. At mid-twentieth century, the “ivory tower” metaphor was innocently deployed to celebrate research universities as places protected from political interference or commercial pressures. Now Webster’s defines the ivory tower as “an impractical, often escapist, attitude marked by aloof lack of concern with or interest in practical matters or urgent problems […] a place where people make and discuss theories about problems […] without having any experience with those problems.”

The US is not alone in navigating a shift in science policy. To cite one example: in Germany, what is labeled “the curiosity narrative” is being challenged by “transformative Wissenschaft” (Wissenschaft signals that all scholarly disciplines are involved). The transformative vocabulary pulls scholarly research toward a social action project based on an agenda shared with government and civil sector stakeholders. One observer writes: “Transformative science forces representatives of the science system to make a decision: either to openly admit the transformative character of all science and its deep entanglement with society, or to hide the transformative character of science by continuing to defensively claim that science is mainly about creativity and curiosity.” 3 Wolfgang Rohe. Remarks prepared for a conference on “Preserving Principles in the New Landscape of Higher Education,” held at the Rockefeller Foundation Bellagio Center, March 16-19, 2015.


In the decade immediately following WWII, America’s science policy took for granted that scientists, including social scientists, would give their best if granted a significant degree of academic freedom and autonomy. During the war effort, scientists from leading universities had moved in great numbers to federal labs and government agencies, where they demonstrated that they placed public good over private gain, that they had procedures to reward quality and police malpractice, and that they could produce—from weapons to intelligence. Public trust in science was high; the phrase “patron for pure science” did not invite skepticism.

Over the last seven decades, public trust in science has weakened. Such a complicated and unanticipated development has many causes. Here I cite only two, selected because they will guide us back to the starting point: Can social science matter?

“The more social science matters, the more society wants a say in what it does.”

First, there are self-inflicted wounds. In Congress and even in the public eye, academic scientists are often seen as just another lobby group agitating for ever more taxpayer dollars. Tenured professors in elite universities live well while their students go into debt. Self-policing is less robust than promised—fraud, failure to replicate, conflicts of interest, and other doubts about ethics and self-serving practices are in the news with enough frequency to attract the worried attention of the National Academies of Sciences. Adding further difficulty is the charge that scientists have at times become advocates, especially in the areas of climate change and food and drug regulation. Much of this generalized complaint is unwarranted; it is there nonetheless. It adds up to a more mixed picture of science than the one that launched NSF as “a patron for pure science.”

The second consideration moves in a different direction. It also, however, contributes to a science policy increasingly tilted toward accountability. For more than a century, modern social science has steadily pushed its way into polity, economy, and society. Its research enterprise is relevant across a broad spectrum of public policies and social practices: how to stay healthy, identify potential terrorists, tackle inequality, improve economic productivity, raise a child, network the world, win an election, fight a war, etc.

There is a sizable enterprise dedicated to bringing social science to bear on public policy choices and evaluations—an enterprise that emerged in the 1960s, when social science grew in size, reach, and resources, bringing with it an extensive network of institutions and funders focused on policy influence. The Great Society needed large data sets, including social experiments, and used the Request for Proposals (RFP) process from defense procurement to purchase analysis relevant to education, public health, and social services; foundations established policy think tanks at a quickening pace; adventurous students preferred policy schools over law or business; and the federal funder of basic science, the NSF, gradually included social science. A recent National Academies of Sciences report estimates that, as of 2011, $1.3 billion in federal funding went to the social sciences, to which can be added foundation and private funds for nearly two thousand think tanks and activist NGOs. 4 Schwandt, Thomas A., Miron L. Straf, and Kenneth Prewitt, eds. Using Science as Evidence in Public Policy. National Academies Press, 2012. Similar developments hold across Western Europe, in the wealthier Asian countries, and, though to a lesser degree, in Latin America and Africa.

If social science is steadily pushing its way deeper into our lives, it should come as no surprise that political and economic actors are pushing their respective agendas and interests back into the sphere of science. The more social science matters, the more society wants a say in what it does.


The answer to “Can social science matter?” then comes more sharply into focus. It can matter—but not in the taken-for-granted manner of a half-century ago. There are new considerations: more skepticism toward social science, more focus on near-term, practical (and measurable) outcomes, more tendencies to micromanage, more accountability, less autonomy. Under these conditions, social science is more likely to matter if it does three things.

First, social science research must modify and develop in more detail a familiar narrative. Social science—like the natural sciences, the humanities, the arts, engineering, business innovation, and any other activity dependent on creativity—needs space in which to fail, in which to stumble serendipitously across unlikely but revealing connections, in which to pursue knowledge whose uses are not initially predictable but later come as a welcome surprise.

Social science would do well to set aside the “basic versus applied” dichotomy, and replace it with a much more telling dichotomy: science being used and science waiting to be used. Physics has great examples. In the 1920s quantum physics had not yet found its useful applications. In 1965, Moore’s law dramatically announced the scope of quantum physics being used. Today the discovery of the Higgs boson is waiting to be used; someday it will be. It is knowledge too fundamental not to be put to use. Biochemistry also has strong examples. DNA’s double helix was discovered more than a half-century ago; now we have the beginnings of personalized medicine. The foundational science is still basic even when, decades later, it is applied, showing up in practices, products, and policies.

In the social sciences, messier by far than physics and biochemistry, there are examples. The importance of early childhood intervention was basic science well before it changed parenting practices and education policy. Behavioral economics was a fundamental critique of rational choice theory; later the UK government and the World Bank, and now the Obama administration, adopted nudge theory. Sociology was doing network theory as a basic science when the search engine industry used it to design algorithms. Call it USBAR science: Unintended Social Benefits Appreciated Retroactively.

The argument is familiar, but it should move away from its roots in ivory tower metaphors or “science for the sake of science.” When public funds are at stake, the ivory tower is not a good place to be. This is especially so for social science, which was never intended as a science separate from its social purposes. It was always conceived of as two joined projects: to build a better science and to build a better world. It should be clear that the USBAR narrative leaves performance metrics out of the picture, at least as they are conventionally applied.

“Social science matters when it enters a space simultaneously occupied by science and non-science considerations.”

The narrative has a chance of success only if linked to the second task. If it is a serious mistake for science policy to deny social science an autonomous space, it is equally a mistake for social science to deny a place for measurable accountability. The accountability regime should be based on the premise that there are people in the spheres of business, government, and civil society who have the experience and expertise to judge how and when scientific evidence can be used to make a better commercial product, government policy, or social practice. If the city council is considering whether to build a bridge, engineers can tell it how to keep the bridge from collapsing—but not whether to build it.

Social science matters when it enters a space simultaneously occupied by science and non-science considerations. Here there is a legitimate role for performance metrics. They do not focus on science itself but on its contribution to society. The task is to negotiate them to everyone’s satisfaction, and negotiation rules are necessary. At a minimum, the rules should include two. First, metrics used to assess the contributions of science in sectors beyond science, no less than practices internal to science, must be designed to avoid self-deception; that is, to detect and correct for bias, fraud, error, weaknesses, flaws, and failures. If in the practice of science itself we find a human weakness to exaggerate the importance of a research finding, then that weakness is multiplied many times over when it appears in claims that “we contribute to society.” It is an exaggeration to be avoided.

“The bottom line: it is not that science is unused; it is, rather, that we understand little about the space in which use occurs, or how.”

Second, social science, and those who call for metrics to determine its impact, must be clear about what can be reasonably and reliably measured. The rate and causes of school dropout can be. Value-added in the classroom is inadequately measured at present, but it is not out of reach. But consider this assertion: “we invest in education in order to produce good citizens for the nation’s future.” That is an aspiration for which we have no evidence, and it therefore has no place in a system of performance metrics. The point is obvious. Metrics used to assess the extent to which government-funded science contributes to society must not become an exercise in gaming the system.

The third task is embedded in the phrase introduced above: “a space simultaneously occupied by science and non-science considerations.” This is where use of social science happens, or doesn’t. Social science has spent less effort understanding that space than is warranted. This is a serious failure, an observation first made in the NAS report mentioned earlier, Using Science as Evidence in Public Policy. That report took note of the knowledge-utilization literature, finding it helpful on typologies of use but lacking in explanations of use, misuse, or non-use. It reviewed the “two communities” discussion, finding it helpful on communication strategies but weak on explaining which strategies have made a difference under what conditions. Similar deficiencies were found in the evidence-based policy and “what works” initiatives, which are limited to a narrow range of policy challenges. These initiatives have merit in their own terms, but the reader is left asking for more.

The bottom line: it is not that science is unused; it is, rather, that we understand little about the space in which use occurs, or how. The NAS report concludes with a research agenda focused specifically on the “use of science in public policy,” emphasizing that use is poorly understood across the whole scholarly enterprise—the natural and biological sciences, engineering, the social sciences, and the humanities. The report stresses that use is a social phenomenon: it is what people do, mostly in group settings. Social science has methods and theories—cognitive theory, group dynamics, complexity models, for example—relevant to research on use. Unless and until that research gets underway, social science is vulnerable on both tasks summarized above: strengthening the USBAR narrative and creating appropriate assessment metrics.

This essay benefited substantially from the participants in events sponsored by the Future of Scholarly Knowledge Project, generously funded by Sage Publications and directed by the author, Kenneth Prewitt.