The summer research training institute

In the summer of 1958, a remarkable group of social scientists gathered at the RAND Corporation in Santa Monica, California, for an intensive three-week research training institute on the “simulation of cognitive processes.” Under the auspices of the Social Science Research Council’s Committee on the Simulation of Cognitive Processes, the seminar was organized and directed by Herbert Simon (Carnegie Mellon) and Allen Newell (RAND Corporation and Carnegie Mellon). The organizers were assisted by J. C. Shaw and Fred Tonge (RAND), Carl Hovland (Yale), George Miller (Harvard), and Marvin Minsky (MIT).

The Council first convened this committee to explore the application of computers to simulating human cognitive processes. The committee organized the summer institute in order to introduce a select group of social scientists to the tools and methods of this new field. During their time together, the twenty participants and seven staff members examined computer programs that simulated complex cognitive tasks; studied computer programming techniques, including the Information Processing Language IV (IPL-IV) created by Newell, Simon, and Shaw; and surveyed the work being done at institutions making pioneering use of computer simulation.

Computer simulation was a distinctly novel approach in the social sciences at the time, for embracing it required two commitments that were each still new: the belief that mathematical modeling mattered for the social sciences, and the conviction that the digital computer was a valuable tool for such modeling. In short, one needed to think of the digital computer as a general-purpose information processor: a device that processed symbols according to logical rules, not just numbers according to formulae. This seminar, thanks in part to the number of prominent participating scholars, played an important role in introducing digital computers to mathematically oriented social scientists and in exposing the participants (and institute staff) to this new way of thinking about computers. In so doing, it was a perfect example of a new, high modern social science.
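
To make that distinction concrete for present-day readers, here is a minimal, purely illustrative sketch in Python (a language that, of course, did not exist in 1958; the function names and rewrite rules are invented for this example). The first function processes numbers according to a formula; the second processes symbols according to logical rules, the kind of work Newell, Simon, and Shaw had in mind.

```python
# Illustrative only: a modern contrast between the two images of the computer.

# Numbers according to formulae: the older image of the machine as a
# fast calculator.
def compound_interest(principal: float, rate: float, years: int) -> float:
    """Evaluate a routine numeric formula."""
    return principal * (1 + rate) ** years

# Symbols according to logical rules: the information-processing image.
# An expression is either an atomic symbol (a string) or a tuple whose
# first element names a connective, e.g. ("not", ("not", "p")).
def simplify(expr):
    """Recursively apply two rewrite rules: double-negation elimination
    and the idempotence of 'and'."""
    if isinstance(expr, str):  # an atomic symbol simplifies to itself
        return expr
    expr = tuple(simplify(part) for part in expr)
    # Rule 1: not(not(x)) rewrites to x.
    if expr[0] == "not" and isinstance(expr[1], tuple) and expr[1][0] == "not":
        return expr[1][1]
    # Rule 2: and(x, x) rewrites to x.
    if expr[0] == "and" and len(expr) == 3 and expr[1] == expr[2]:
        return expr[1]
    return expr

print(compound_interest(100.0, 0.05, 10))             # 162.889...
print(simplify(("not", ("not", ("and", "p", "p")))))  # p
```

Both computations run on the same machine; what changed in the 1950s was the recognition that the second kind of computation was possible, and scientifically interesting.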

What I call “high modern social science” was the product of a concerted effort to redefine the central concepts, methods, tools, practices, and institutional relations of postwar social science. While the movement toward a new social science took different forms in different fields, there was a common theme amid the disciplinary variations. This common theme was the embrace of a new perspective on science and nature, one that conceived of all things in terms of organization, structure, system, function, and process. At its apogee, between 1955 and 1970, this new perspective was both more widely accepted and more precisely specified than before the war, with its exponents framing all subjects of study as complex, hierarchic systems defined more by their structures than by their components. The goal of science, in this view, was to construct formal models of system behavior, and its chief method was to develop models that would enable one complex system, such as the digital computer, to simulate the behavior of another, such as the human mind. The rise of this new outlook was closely linked to the Organizational Revolution in American society, which provided new sets of problems, new patrons, and new control technologies as “tools to think with” for researchers in this period. The SSRC-RAND summer institute stands as an exemplar of high modern social science in action.

The state of the field in the mid-twentieth century

To understand the significance of this summer seminar, one first has to recall the state of computing, and of mathematical modeling in the social sciences, in the late 1950s. At the time, IBM was in the middle stages of developing the SAGE air-defense computing system (think NORAD, which went into operation in 1957), and it was working with RAND and RAND’s spinoff, the System Development Corporation, to develop the programs that would run on those computers. These were, by two orders of magnitude, the most complex programs written to date. A whole series of new techniques was developed in the process, including the first systematic debugging methods, since visual inspection by humans was no longer sufficient. As a result, by some estimates, roughly one out of every eight “programmers” in the world (to use a somewhat anachronistic term) worked for RAND or the SDC around that time.

Digital computers did exist at major universities, but they were still rare things, especially since the advent of time-sharing systems with remote terminals was several years in the future. Furthermore, computers were unique things: every computer was one of a kind, and every program had to be written specifically for an individual computer. Not an individual line of computers, an individual computer.[1]

Computing, like many newborn technical fields, largely existed in its applications rather than in a well-developed disciplinary core. In addition to calculations related to ballistics and to nuclear weapon design, these applications often included statistical analyses, a fair number of them related to psychology, especially psychoacoustics. The Harvard Psycho-Acoustic Laboratory, often in conjunction with MIT’s Lincoln Laboratory, was particularly fertile ground for such work. Despite such early applications in psychology, thinking of a computer as a general-purpose information processor, rather than as an aid for performing mass counts and repeated routine calculations, was new to all but a few, even among mathematically sophisticated social scientists.

In addition, while the postwar generation of social scientists generally had much higher levels of mathematical training than the prewar cohort, modeling as a distinct mode of analytic practice was still quite new. In my book Age of System, I surveyed the flagship journals of economics, political science, anthropology, sociology, and psychology between 1925 and 1975, sampling every five years, and found that less than 7 percent of the articles in the sample before 1950 used the word “model” at all.[2] Only a tiny handful of those articles engaged in anything remotely like what a social scientist of the 1960s or 1970s would call “modeling.” (Perhaps 12 out of more than 1,200 did so, if one is fairly loose with the definition of modeling.)

Simon, Miller, and Newell were eager to change this situation, especially in psychology; they saw computer simulation as a way to transform psychology’s experimental practice by relocating our internal mental processes, formerly accessible only by introspection or inference, into the workings of a machine whose every operation could be inspected. Thanks in part to the success of the summer institute, by 1960 the share of articles engaging in modeling in those journals had risen to roughly 25 percent, an average that masked wide variation: from 18 percent in the American Journal of Sociology and the Psychological Review to 25 percent in the American Anthropologist, 36 percent in the American Economic Review, and 48 percent in the American Political Science Review. Roughly a decade later, their novel vision had become standard practice in economics, political science, sociology, and anthropology, with over 60 percent of the articles in those fields’ flagship journals engaging in modeling in that year’s sample. Psychology remained an outlier, with only 31 percent of articles in the Psychological Review engaging in modeling, though that changed markedly over the 1970s: roughly two-thirds of the articles in that journal engaged in modeling by the 1980s. Significantly, modeling grew so rapidly in the 1970s that the styles and types of modeling began to differentiate: as modeling became a universal technique, it became a whole universe of techniques.

At the time of the seminar, few social scientists applied mathematical modeling in this way. In 1957, Newell and Simon had written that “the program is the theory,” a telling phrase that made little sense to those without significant hands-on experience with digital computers as modeling tools.[3] To Newell and Simon, the Information Processing Language (IPL) they developed with RAND’s J. C. Shaw was a new, potent formal language for stating psychological (or other) theories. To the uninitiated, the IPL was as incomprehensible as Newell and Simon’s claims about programs being theories. That was a lot of novelty for anyone to digest, but the effects could be career-changing if one were ready to take a bite of the digital apple.
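
What “the program is the theory” meant in practice can be suggested with a small sketch. The following Python program is a loose modern analogy, not IPL-IV and not drawn from Newell and Simon’s own code: it performs heuristic search, repeatedly applying whichever operator most reduces the perceived difference between the current state and the goal. Read as a theory, each programmed step is a claim about a step a human solver takes.

```python
from heapq import heappush, heappop

def heuristic_solve(start, goal, operators, mismatch):
    """Best-first search guided by a crude measure of the remaining
    difference between the current state and the goal. Like Newell and
    Simon's problem solvers, it satisfices: it returns a workable
    sequence of operators, not necessarily the shortest one."""
    frontier = [(mismatch(start, goal), start, [])]  # (difference, state, trace)
    seen = {start}
    while frontier:
        _, state, trace = heappop(frontier)
        if state == goal:
            return trace
        for name, apply_op in operators:
            new_state = apply_op(state)
            if new_state not in seen:
                seen.add(new_state)
                heappush(frontier,
                         (mismatch(new_state, goal), new_state, trace + [name]))
    return None  # no solution found

# A toy task: turn 2 into 65 using only "add 3" and "double."
operators = [("add 3", lambda n: n + 3), ("double", lambda n: n * 2)]
print(heuristic_solve(2, 65, operators, lambda s, g: abs(g - s)))
# Prints one sequence of operators that reaches 65.
```

On Newell and Simon’s account, such a program is testable in the way any theory is: if human solvers, thinking aloud, take different steps than the program does, then the program, and hence the theory, is wrong.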

The institute’s impact

Not everyone found the seminar life-changing, of course: roughly half of the participants largely continued to do the same kind of work that they had been doing before, only with greater familiarity with digital computers. (None appear to have rejected the computer as a research tool or mathematical modeling as a valuable practice in the decade after the seminar.) One cannot rewind history to see how many of the participants would have pursued the use of digital computers as tools for modeling and simulation without the seminar; it is likely that some would have gone on to work in the area, since they were already interested enough in the subject to apply. But for nearly half of the participants (including staff), there is a noticeable difference between the topics and modes of research they pursued in the years before the seminar and those they pursued in the years after it.

Indeed, if one looks at the publication records of the participants and staff of this summer institute, one finds a good bit of evidence that the experience was eye-opening. For example, Robert Abelson, whose previous work had focused on the statistical analysis of learning, especially scaling and measurement theory, began to discuss computer modeling and simulation almost immediately afterwards. By 1961 he had joined the Simulmatics project, which led to publications on the computer simulation of communication, of belief systems, and of presidential elections. James S. Coleman (later of Equality of Educational Opportunity fame), who had mostly written on mathematical political sociology in the 1950s, began using computers to carry out more complex data analysis after the seminar. By 1965 he had written a valuable survey article on “The Use of Electronic Computers in the Study of Social Organization.” Bert Green, like Abelson, had been doing work in mathematical psychology, mainly related to questions about measurement, factor analysis, and “latent structures.” After the seminar, he began writing on computer models of cognitive processes and authored a 1963 book on digital computers in behavioral science. Later, Green came to Carnegie Mellon to work with Simon in its Psychology Department. Gil Krulee, Roger Shepard, and Gerald Shure similarly shifted their research programs to incorporate computers not just as tools for data analysis, but as tools for modeling and simulation.

Perhaps the most significant evolution came in the work of one of the staff, George A. Miller. He was already a prominent mathematical psychologist, known first for his influential work in psycholinguistics and then for his key role in introducing information theory into psychology (especially in the study of language and communication). His landmark 1956 article, “The Magical Number Seven, Plus or Minus Two,” established Miller as one of the early leaders of the burgeoning “cognitive revolution” in psychology. Miller was no stranger to complex mathematical ideas about language at the time of the seminar, having just coauthored an important paper on “Finite-State Models of Perception” with Noam Chomsky; the summer seminar helped him develop his ideas in a new, information-processing framework. The power of this new framework can be seen most vividly in the book he coauthored with Eugene Galanter and Karl Pribram, Plans and the Structure of Behavior, published in 1960. Plans was one of the most influential works in the cognitive revolution; many took it to be the paradigm-setting work for the field in the 1960s.

The arrow of influence between Simon and Miller ran both ways: while Miller clearly began to embrace much of Simon’s perspective on the computer as a simulation device, Simon learned a great deal about experimental psychology, especially the psychology of language acquisition, from Miller. This support for Simon’s development as a psychologist was important: at the time of the seminar in 1958, Simon had a political science PhD and extensive experience in mathematical economics but comparatively little experience or training in experimental psychology, the field in which he would do most of his work over the rest of his long career. What is more, Simon’s collaborative partnership with Newell, already several years old, became even stronger in subsequent years, with Newell leaving RAND to work with Simon at Carnegie Mellon.

It is a bit harder to discern the extent of the seminar’s influence on Marvin Minsky, as he appears already to have been where the seminar was intended to lead people. Still, it should be noted that Minsky had received his PhD only four years earlier, in 1954, in mathematics, not psychology. His dissertation had explored “Neural-Analog Reinforcement Systems” and the “brain-model” problem; that is, he was a mathematician familiar with some of the key problems involved in embracing an information-processing model of the mind. Even more than Simon, he was not an experimental psychologist. He had worked with John McCarthy and Claude Shannon to put together the famed 1956 Dartmouth conference that is sometimes called the moment of birth for AI research, and he already had some interest in “heuristic programming” (one of Simon and Newell’s key contributions to computer science), but he had barely begun to tackle the task of connecting his ideas about brains, minds, and programs to actual experimental research.

Conclusion

The success of this summer institute was no accident. The time was right, and the people were well chosen both for their abilities as scholars and for their potential to serve as evangelists for computer simulation. In addition, it should not be forgotten that the network of institutions and patrons supporting the seminar involved some of the most important organizations (and biggest dollars) in the social sciences: the Ford Foundation, RAND, and the SSRC. A much wider network of patrons and research centers ramified out from the participants as well. Green, Hovland, Miller, and Simon, for example, all played important roles in getting the National Institute of Mental Health (NIMH) to fund the acquisition and use of digital computers for psychological research in the 1960s.

This web of patrons, institutions, tools, methods, and ideas was characteristic of high modern social science in the United States, not only in its goals—to create a new kind of social science appropriate to the postwar world—but also in its blind spots: every single participant in the seminar was male, for example, and their vision of a “universal man” looked a whole lot like the participants in the seminar.

References:

[1] This situation changed radically during the 1960s, especially with the introduction of IBM’s System 360 in 1964, which came to dominate world markets in large part because a program written to work on one IBM System 360 would work on any. The System 360 line, of course, was helped enormously by the $2 billion in research and development money that came its way via the SAGE project.

[2] Age of System (Johns Hopkins University Press, 2015).

[3] Allen Newell, J. C. Shaw, and Herbert Simon, “Elements of a Theory of Human Problem Solving,” Psychological Review 65, no. 3 (1958): 151–66, at 151.