The changing landscape of scientific knowledge generation, dissemination, and use demands new ways of thinking about science policy (in the present case, social science policy). That landscape is marked by a new social contract governing the relationship between the public and the scientific enterprise, as David Demeritt argues in “The New Social Contract for Science.”[1] No longer is science unquestioningly regarded as a public good, and no longer are scientists given complete discretion over decisions about the allocation of research resources or about research quality. The linear-reservoir model of science (basic research, applied research, development, application, and societal benefit), in which societal benefits are found downstream from the reservoir of scientific knowledge, once characterized (although not entirely accurately) the relationship between science and decision making in society. That model is now seriously challenged by stakeholder models claiming a role for users in the production of scientific knowledge, as Roger Pielke, Jr. notes in The Honest Broker. Market-based and customer-oriented notions of accountability now press the research enterprise to demonstrate return on investment in terms of measurable economic and societal benefits.
Kenneth Prewitt, surveying this landscape with a very experienced, keen, and perhaps wary eye, argues that the accountability of social science to society (that is, how social science can matter) can be improved by more careful attention to the story it tells about its character and purpose, to developing smarter metrics for measuring its impact, and to the political realities of its use in decision making. Of course, the account we relay to fellow scientists and to the public about what social science does, the means we adopt to appraise the quality and value of the science we do, and the way we envision the uptake and use of that science in policy and practice are all intertwined.
The narrative of social science that Prewitt would have us do away with is the classic basic-versus-applied distinction—and quite rightly so. He would replace it with a more telling tale of social science being used and social science waiting to be used—unintended social benefits that we come to appreciate retroactively. Perhaps stakeholders will find this narrative convincing, but I am doubtful. It is too closely tied to the idea of research as a product—the notion of “research as a retail store,” in which researchers busily fill the shelves of their storefront with a comprehensive set of relevant studies that a decision maker may someday drop by to purchase.[2] It is too insensitive to the idea of research as a process with multiple stages at which public input might be relevant. Social science needs to be shaken up and develop a more radically different narrative.
That narrative is still evolving and has yet to take coherent form, but it is being shaped by several important considerations. First, by repeated (and contested) calls from within the social sciences (e.g., Craig Calhoun, Herbert Gans, Michael Burawoy, Alice O’Connor) for a more public social science, as well as for social science that is more problem oriented (as opposed to theory and method oriented) and context sensitive.
Second, the emerging narrative is being shaped by growing attention to the phenomenon of public participation in the coproduction of scientific knowledge; again, hardly without controversy, but stimulating and often ignored by mainstream social science—more on this when discussing “use” below.
A third contributor to a new narrative lies in those efforts (perhaps too few) to move university social science faculties out of their disciplinary comfort zones and silos to work in interdisciplinary teams on “wicked” social problems of poverty, inequality, consumer behavior, social cohesion, security, the causes of crime, sustainable environments, and the social determinants of health. That such problems can be addressed successfully only through interdisciplinary team science is the assumption behind the Grand Challenges research initiatives launched at several of our major public research universities.
Prewitt alludes to a fourth important antecedent for a new narrative with his reference to the interest in transformative Wissenschaft in the German science landscape. Transformative science (not to be confused with NSF’s Transformative Research initiative) is a type of science that intervenes actively in the process of social change. It requires knowledge of stakeholder participation in science, systems thinking, and knowledge about how to provoke change that is context specific and socially robust. Social scientists deploying systems approaches (e.g., system dynamics, soft systems methodology, critical systems heuristics) in researching social problems and developing means of adaptive management of social change have an edge here in understanding and promoting transformative social science.
But a new view of metrics for a different social science narrative needs to go beyond this kind of internal fix. The task here, as elsewhere, faces significant (although not insurmountable) obstacles. First, there is considerable disagreement about what research impact actually means, beyond the broad sense that it concerns the social, cultural, environmental, and economic returns from publicly funded research (see, for example, Penfield et al.’s “Assessment, Evaluations, and Definitions of Research Impact” or Lutz Bornmann’s “Measuring the Societal Impact of Research”).[3]
Second, to borrow a well-known idea, it has long been held that the republic of science, acting as a collective of relatively autonomous, self-governed disciplinary communities, has the authority to determine the purposes and value of research. Those communities control the assessment of the scientific merit of research, which, in turn, is assumed to be both a necessary and sufficient condition for research impact. We are now coming to understand that scientific merit may be necessary, but it is not sufficient. Moreover, we have little reason to believe that the metrics of research impact, as judged in academic settings, are the metrics most useful for judging the societal benefits of research (see Sarewitz’s chapter, “Institutional Ecology and the Social Outcomes of Scientific Research,” in The Science of Science Policy).[4] Criteria for judging the value and impact of research are seen to depend on a continuum of value-laden criteria ranging from traditional scientific values (criteria focused on ends internal to science) at one end to user-oriented values (focused on achieving ends external to the research itself) at the other.[5] Moreover, as is acutely recognized in the international community that funds research for development, the impact of research will be affected by a number of factors, including the maturity of the research field; risks in the political, data, and research environments; whether the research is equity focused and gender responsive; and whether, in its conceptualization and design, research attends to user contexts, the accessibility of research findings, and strategies to integrate potential users into the research process itself wherever this is feasible (as noted in the RQ+ project).
Rethinking metrics goes hand-in-hand with rethinking the phenomenon of use. The bugbear here is the traditional narrative in which scientific information, knowledge, and evidence flow one way, from science to society. However, we are coming to realize that the space in which science is used, to use Prewitt’s phrase, is best characterized as a two-directional, dialogical space. And we are coming to understand, albeit imperfectly, something important about what use means in that space. There is much more at stake in this space than effective science communication, that is, presenting scientific knowledge to non-experts. The dialogical space is understood in several ways. For example, it is viewed as a site of linkage and exchange between researchers, policymakers, research funders, and knowledge purveyors (e.g., think tanks, knowledge brokers) where research and policymaking are regarded as processes, not products. Researchers and decision makers look for points of exchange at various stages of their respective processes, and that mutual involvement enhances the “use” of social science.
Another version of the dialogical space emphasizes what is often spoken of as “upstream” public engagement.[6] In this perspective, the dialogical space is defined not simply by public or decision-maker input into scientific deliberations, by improved means of communicating scientific findings to the public or decision makers, or by consideration of the ethical, legal, and social issues entailed in some scientific undertaking. Rather, it consists of joint discussion about the ends and purposes of science itself. The “use” of science is recast within a new architecture of public policymaking, one that is a deliberative engagement of a diverse set of stakeholders including citizens, scientists, and policymakers.
Nothing here is by any means settled or agreed upon, but these ideas continue to surface and to gain traction. The tripartite agenda sketched here—narrative, metrics, and use—raises considerable worries that the social scientific community must take up: How do we guard against the politicization of science in the dialogical space where use occurs? How do we recast the social role of experts in forming public policy? How do we design measures of the social impact of research in particular disciplines or fields—impact that may be harder to document directly but is more relevant and useful to society? How do we effectively argue for the importance of the objectivity of scientific claims (as rigorously acquired and warranted assertions) while acknowledging that commitments to social values characterize policymakers and researchers alike? Social science can matter, but only if we expand the social science imagination.
References

1. David Demeritt, “The New Social Contract for Science: Accountability, Relevance, and Value in US and UK Science and Research Policy,” Antipode 32, no. 3 (2000): 308–329.
2. Jonathan Lomas, “Connecting Research and Policy,” ISUMA: Canadian Journal of Policy Research (Spring 2000): 140–144.
3. Teresa Penfield, Matthew J. Baker, Rosa Scoble, and Michael C. Wykes, “Assessment, Evaluations, and Definitions of Research Impact: A Review,” Research Evaluation 23, no. 1 (2013): 1–12; Lutz Bornmann, “Measuring the Societal Impact of Research,” EMBO Reports 13, no. 8 (2012): 673–676.
4. Daniel Sarewitz, “Institutional Ecology and the Social Outcomes of Scientific Research,” in The Science of Science Policy, ed. K. H. Fealing, J. Lane, J. H. Marburger, and S. Shipp (Stanford University Press, 2011), 337–348.
5. Elizabeth McNie, Adam Parris, and Daniel Sarewitz, “A Typology for Assessing the Role of Users in Scientific Research: Discussion Paper,” n.d. Available at http://cspo.org/research/user-inspired-research/
6. James Wilsdon, Brian Wynne, and Jack Stilgoe, The Public Value of Science: Or How to Ensure That Science Really Matters (London: Demos, 2005). Available at http://www.demos.co.uk/files/publicvalueofscience.pdf