“Curation”—“to pull together, sift through, and select for presentation”— is a capacious and engaging term, and the perfect point of entry for my contribution to this ongoing conversation. Writing a book on peer review in the social sciences and humanities helped me think through how changing digital contexts are influencing evaluative practices and their impact on the making of reputations.
I studied fellowship peer review by looking closely at the process through which scholars “sift through and select” fellowship proposals around 2002. What I learned about five prestigious national competitions is described in my book How Professors Think (2009). This book concerns differences in disciplinary evaluative cultures (worlds apart if one compares English and economics), where evaluators believe excellence resides (in the proposal itself, or in the eye of the beholder), and how objective they believe evaluation is. Beyond disciplinary differences, I learned that much is shared among peer reviewers when it comes to views concerning customary rules of evaluation (what violating the rules means) as well as formal and informal criteria of evaluation (the former concern originality, social and intellectual significance, and feasibility while the latter include clarity, elegance, using the appropriate amount of theory, and the ability to generate excitement).
My interviews and ethnographic observations also revealed that deliberation is at the center of this world of peer review. Circa 2000, deliberation was accomplished primarily through face-to-face conversation within a group of highly selected and certified experts assembled in a meeting room situated in a typical Manhattan skyscraper or DC office building. The selves of these experts were profoundly marked by their faith that years of scholarly research had prepared them to competently and fairly draw the line between winners and losers. Thus they could distribute highly prestigious awards in good faith, with full knowledge that these awards would contribute to defining a researcher as particularly talented or meritorious and as worthy of tenure. They lived in a world of evaluative practices that they largely controlled and where they could act as (almost) uncontested kings and queens.
But there have been numerous changes since the early 2000s. Some concern peer review directly, while others are reshaping the broader academic and publishing environment. To mention only a few such changes: face-to-face deliberation is being replaced by individual rankings on web platforms in some quarters (the National Institutes of Health, for instance); the open access movement is complicating and somewhat redefining the meaning of peer-review publications and how they should figure into evaluation and promotion; “clicks” and “likes” have become a well-known form of validation in various types of online publications (see the work of Angèle Christin on this subject); finally, a growing number of junior academics, as well as some more senior scholars, are spending considerable time on social media engaging with research, debating what is worthy of attention and why, or broadcasting their own achievements and certifying the work of others. This situation has reached such a critical point that the American Sociological Association created a committee (on which I served) charged with considering how social media activities should factor in promotion decisions (the answer was: proceed with caution). [Leslie McCall et al., “Report of the ASA Subcommittee on the Evaluation of Social Media and Public Communication in Sociology,” American Sociological Association, 2016.]
We may want to ask ourselves whether we have slowly (perhaps glacially slowly) been moving toward a democratization of academic evaluation, or a more open curating market. In my own discipline and in others, one can easily think of half a dozen scholars (typically below forty) whose social media activities are playing a prominent role in giving them a highly public intellectual profile, which comes to serve as a (perhaps unstable) bedrock for their academic reputation.
From these observations spring several questions concerning changes in how academic reputations are becoming established in this (somewhat) new era. These questions should be amenable to empirical inquiry.
A first question is whether the reputations of today’s public intellectuals have different bases than those of public intellectuals of yesteryear. More specifically, how would we compare the New York Review of Books to digital literary magazines such as N+1 when it comes to prestige allocation? Does the fact that N+1 is a digital medium mean that being published or reviewed in this journal is less performative as a status symbol than was being published in NYRB when Richard Sennett (to take only one random name) was making his reputation? Print publications face a zero-sum situation when it comes to allocating space; digital media do not, and the loss of that scarcity may weaken their ability to act as a medium for reputation making and to provide symbolic capital. At the same time, by being able to publish more scholars and to potentially reach more readers at lower cost, such digital media may be more effective as a medium for establishing reputations. Research is needed before we can draw a conclusion.
A second question concerns journals as “instances of consecration” [Pierre Bourdieu, “The Market for Symbolic Goods,” in The Field of Cultural Production: Essays on Art and Literature (New York: Columbia University Press, 1993).], as Pierre Bourdieu used to call contexts of evaluation where quality is “black-boxed”—to use a term from Bruno Latour. Such journals are proliferating in the digital age, and they are less restricted by the profit motive than they were previously. [Matthew Clair, “The Limits of Neoliberalism: How Writers and Editors Use Digital Technologies in the Literary Field,” in Communication and Information Technologies Annual, Studies in Media and Communications, vol. 11, ed. Laura Robinson et al. (Emerald Group Publishing, 2016).] The annual financial cost of entry has been lowered, which may have a direct effect in democratizing evaluation as well as access to such venues. With the proliferation may emerge a broader diversity of criteria of evaluation and a greater diversity of intellectual output, a heterarchy of sorts. [Michèle Lamont, “Toward a Comparative Sociology of Valuation and Evaluation,” Annual Review of Sociology 38 (2012): 201–221.]
A third question concerns the impact of the open access movement, which decouples to some extent the evaluation of research from its publication. As more research is published on the Internet, less of it is subjected to the close scrutiny of experts. What comes to determine whether knowledge survives and continues to be referenced may be less the quality of the work itself than the diffusion practices in which authors engage and whether they are able to make others aware of their work through social media. This may lead to “the blind leading the blind” or a general decline in the quality of the knowledge produced. Alternatively, in a less constructivist and more “supply and demand” vein, one could hypothesize that consumers of knowledge will spontaneously converge on high-quality work and ignore the lesser product.
A fourth question has to do with the unintended consequences of the audit society for academia. [Michael Power, The Audit Society: Rituals of Verification (Oxford: Oxford University Press, 1997).] This term refers to the proliferation of a culture of accounting for performance, not only in higher education and academia, but also in public administration and other bureaucratic structures. [Wendy Nelson Espeland and Michael Sauder, Engines of Anxiety: Academic Rankings, Reputation, and Accountability (New York: Russell Sage Foundation, 2016).] We are seeing an increase in reciprocal monitoring, through repeated and easy consultation of rankings of Amazon book sales, on Academia.edu, ResearchGate, or Google Scholar, or of the numbers of “likes” received on Facebook. These may be regarded as “objective” or market-driven scoreboards for how authors are doing relative to others in their field, whether in terms of number of citations or individual popularity. With such a proliferation of automatized monitoring tools, we should ask ourselves: what is their impact on the culture of evaluation in academia, science, and intellectual life in general? Will traditional forms of scholarly evaluation (tenure letters, journal peer review, book reviews, etc.) become more marginal in the face of the growing popularity of such “scoreboards”?
I would speculate that their use is likely to lead to an increase in competitiveness and encourage individuals to spend more time maintaining their public visibility as a goal per se, as opposed to producing original and important scholarship, which may (or may not) eventually lead to visibility. Concretely, this would translate into an exponential increase in the use of social media as well as in the hiring of agents and publicists. Behaviors aimed at “attention grabbing” are likely to pose challenges to scholarship, as reputation making becomes an objective rather than a consequence of scholarly contribution. One can easily expect important generational divides along these lines, especially since more senior scholars often play a gate-keeping role in relation to younger ones.
Finally, we want to ask ourselves how academic trajectories will be affected by the proliferation of such objective scoreboards. Will the Matthew effect [Robert K. Merton, “The Matthew Effect in Science,” Science 159, no. 3810 (1968): 56–63.] operate with even more intensity, leading to a greater concentration of resources at the top? Or will the proliferation lead to greater diversification, democratization, and heterarchies? [Lamont, “Toward a Comparative Sociology.”] Will non-elite producers experience a more even terrain? Will members of stigmatized groups be better able to legitimate and diffuse alternative views, perhaps through websites and blogs that target subgroups whose access to shared influence had been more restricted?
More generally, we may want to ask ourselves, what will be the role of expertise in assessing scholarly quality in this brave new world? Will there be new types of warrant for the quality of evaluative authority? One could imagine that having a popular blog may qualify one as a gatekeeper, even if this attribute is decoupled from the proven ability to produce significant scholarship. Will the former attribute translate into being invited to write tenure letters by leading universities? How much autonomy in establishing their reputations will the younger, more digitally connected generations have from the older generation under these new conditions?
These questions, and many more, should become the object of a sociology of digital evaluation and higher education that is only now emerging. What counts as “frontier knowledge,” and how it comes to be recognized as such, is likely to be determined under very different circumstances in the twenty-first century than it was in the twentieth. This short memo only scratches the surface of what should become a much broader research agenda.