American Panorama is “an historical atlas of the United States for the twenty-first century,” bringing together a variety of geospatial and data visualization methods to create interactive maps of American historical data. Parameters sat down with the editors of the project, Robert K. Nelson and Edward L. Ayers, to talk about their experience with creating Panorama, as well as the journey from early digital history projects—like Ayers’s 1993 The Valley of the Shadow—to current methodologies and the future of digital scholarship.
Interviewer: Your first major digital history project—and in fact one of the first projects to make use of what we would now call digital humanities tools and methods—was the Valley of the Shadow Project, which came out in 1993 and contrasted Civil War–era written documents from Virginia and Pennsylvania. How do you think that project has held up today, and have digital practices in history changed in the 24 years you’ve been working in the field? Are there trends in digital history you’re excited about?
EA: As it turns out, I’ve given The Valley of the Shadow a good workout over the last 18 months as I’ve written a second book from it—The Thin Light of Freedom, out from W. W. Norton this fall. To my relief, everything still worked. All the databases, all the text, all the images, and even the Flash map work just as they were designed and built back in the twentieth century.
If we were building the Valley project today, of course, we would build it differently. When we began the project, within the first year of the web, PDF files did not exist and programmers had to write scripts to display the large TIFF images of newspapers. XML did not exist and everything had to be hand coded in HTML. OCR could not make sense of any documents we needed. The manuscript census had not been captured in any digital form, so the population, agricultural, manufacturing, and slaveowner censuses of the two counties had to be digitized from microfilm and transcribed by hand, as did thousands of diaries, letters, and manuscript government documents.
The years since the Valley was built highlight both change and continuity in the digital humanities. The Valley proudly bears the marks of its time. The relative paucity of images in its design marks a time when we worried about bandwidth and processing power on the user’s end. On the other hand, the Valley’s austere black-and-white design (predating Apple’s turn to that aesthetic) has held up well, and the trademark three-octagon navigational device still works well.
In some ways, new technologies and techniques cannot replicate what we did a couple of decades ago. The remarkable Chronicling America project of the Library of Congress and the National Endowment for the Humanities was begun in 2005, just as we were finishing The Valley of the Shadow. That project has scanned millions of pages of 2,228 newspapers covering the years from 1836 through 1922. The OCR of those papers, though quite uneven in its accuracy, helps enormously, but even at its best does not provide any information on the internal structure of the papers. It merely reads words, out of context.
A project I have just overseen at the University of Richmond shows how much has changed and how much has not. On one hand, thanks to Chronicling America’s excellent Virginia subset (Virginia Chronicle) and thanks to the development of Omeka and to a skilled ally in the university’s library, Chris Kemp, we were able to build a site that does everything the Valley is able to do with its newspapers and more—and to do so with 16 freshmen in one semester. Starting with the OCR-transcribed pages of the Richmond Daily Dispatch from January 1866 through April 1877, the students identified over 1,800 articles and corrected the transcriptions in Virginia Chronicle. The students also summarized those articles and supplied titles and metadata, making the digitized text much more useful than it would have been otherwise. Thanks to that work, the site, called Reconstructing Virginia, is able to provide the same browsing and searching available in the Valley but also to plot articles on a complex timeline, convert them into a word cloud, and present them through a randomizer. It took us years to create the same capabilities in the Valley.
The person most responsible for overhauling the Valley for its years of unattended use—Bradley Daigle of the University of Virginia—jokes that the site is like a classic car with a Prius engine. The surface is the same, but the machinery behind the scenes is economical, quiet, and requires little maintenance.
All of this is to say that I don’t think I would change the Valley in any fundamental way if I were starting it over again. That’s certainly not because it’s perfect but because it’s a product of its time. As a historian, I have no problem with that.
Int: Your current project, American Panorama, uses different tools than The Valley of the Shadow to examine other historical moments and movements of people, including the forced migration of enslaved people, the travel patterns of presidents and secretaries of state, and the canal systems of America. What was the journey from a document-based project like the Valley to something like Panorama?
EA: When I left the University of Virginia for the University of Richmond in 2007 I knew several things. First, I knew I wanted to keep my hand in digital scholarship. It is the most interesting and exciting development of our time as scholars. Second, I knew that as president [of the University of Richmond] I would have even less time to invest in it personally than I had at UVA, when I was a dean. Fortunately, the model we had developed for the Valley worked well: I located resources, suggested ideas, found talented people, and encouraged the effort as best I could. Because I have been fortunate in my allies, we have been able to do exciting things in the Digital Scholarship Lab (DSL) at Richmond even though we are a small liberal arts university rather than a research university. Rob Nelson carries on the work begun by Andrew Torget, the lab’s founding director. Nor would the DSL’s work have been possible without the skill and dedication of Scott Nesbit, Nathaniel Ayers, and Justin Madron.
Starting the DSL, I knew that the model of The Valley of the Shadow, in which primary documents were slowly gathered from many archives and individuals and then slowly transcribed, tagged, analyzed, and presented, would not work in a relatively small shop. Therefore, I thought of projects that used sources already digitized and machine-readable. I also thought that visualization would be the best way to get the biggest return on our effort. Putting those ideas together, we built Voting America, which mapped the nation’s county-level voting data from 1840 to 2008. The site is due for a refresh, but as a model of what could be done efficiently and usefully, it still works.
We developed a series of other maps over the next few years and then decided that we would build American Panorama, a digital atlas of American history. We were fortunate that the Andrew W. Mellon Foundation shared our enthusiasm for the idea. Beginning by digitizing the last historical atlas of the nation based on original research, Atlas of the Historical Geography of the United States, from 1933, we developed a strategy that I will invite Rob Nelson to describe for us.
RN: One of the reasons that the editors of the Atlas, Charles Paullin and John K. Wright, were interested in geography is that it allowed them to explore social history, not just the history of elite politicians or writers or industrialists but of farmers and church-goers and voters. That’s one of the major attractions of mapping for us too, and much of what we’ve done to date is to try to use maps to convey something about the histories of immigrants or enslaved people or urbanites. This, I expect, is something Ed would agree connects the Valley and the Atlas. Both are efforts to paint a portrait of the past that is broad and inclusive.
A flip side of this is that we’ve also been attracted to mapping as a way of reaching broad public audiences. The ubiquity of interactive maps on our phones seems to have increased an appetite for them. If we can feed that appetite while helping people learn about American history, great.
And I’d add that while Panorama is an atlas and maps clearly take center stage, one of the things we’ve come to appreciate is how much mapping in a digital environment rather than on a printed page enables us to include and feature text. “The Overland Trails” map is built from and foregrounds thousands of entries made in migrants’ diaries. “The Forced Migration of Enslaved People” places on the map hundreds of entries from slave narratives where enslaved people described the harrowing experience of the domestic slave trade. “Mapping Inequality” connects neighborhoods to the thousands of area description surveys collected by the Home Owners’ Loan Corporation, many of which include often lengthy “clarifying remarks.” Users of American Panorama spend a lot of time looking at maps, but they also have plenty of opportunities to read primary sources. The maps are all data-rich, and that data is as likely to be qualitative and textual as it is quantitative.
Int: What are the compelling reasons behind mapping these specific data sets? What research questions can you answer with these maps that you would not be able to answer otherwise?
EA: American Panorama is less about answering specific questions than it is about prompting questions we would not have known to ask otherwise. It is about showing as wide an audience as possible the patterns of the American past, whether the heartening stories of the overland trails, the building of the nation’s canal system, and the remarkable impact of immigration across the generations or the disheartening stories of the vast domestic slave trade and the racial inequalities in federal housing policy in the 1930s.
The goal is to use visualization not to demonstrate what we already know—the ways maps have traditionally been used in historical writing—but to surprise us with patterns we did not know existed.
Int: Last summer Anne Kelly Knowles wrote for Parameters about her experience mapping the history of the Holocaust and the challenges of incomplete, biased data (in this case files kept by the Nazi regime). Have you run into similar challenges of sourcing? What other challenges have you run into while mapping these data, and do you have any tips for someone interested in attempting a mapping project like Panorama that you wish you’d known before you began?
RN: Building an historical atlas of a nation-state is necessarily going to involve using the resources of that state, data that were obviously collected not primarily with later historians’ interests in mind but with the state’s immediate needs. To take one example, before emancipation the census counted enslaved people in large part because, as we all learned in high school, the three-fifths clause of the Constitution made counting them necessary for political apportionment in the House and in the electoral college. But, in large part due to slaveholders’ objections, the census never captured where enslaved people were born. Not having that data, historians have always had to estimate the number of people traded through the slave trade, where they came from, and where they went.
We’re no different in that respect. In our map on the “Forced Migration of Enslaved People” we rely upon a simple equation that historian Frederic Bancroft used nearly a century ago in his study of the slave trade, but with a key difference: we’re able to use the computational power of GIS to make these calculations at the county level rather than at the level of the state. The result is what is almost certainly the most detailed map of forced migration produced to date, but still one that’s necessarily built from modeled data given the limitations of the census.
Our “Foreign-Born Population” map reflects a different limitation of the census. Just as in the nineteenth century the federal government deemed it either unnecessary or unwise to capture information about enslaved people, throughout the first half of the twentieth century the census only collected information on the white foreign-born population. The 1960 census is a notable and singular outlier: it collected data on “foreign-born stock,” which counted the foreign-born and their native-born children. Thus while our map enables some comparisons over time, the US government’s changing sense of who should be counted needs to be kept in mind.
Int: What has been the most interesting or unexpected thing you learned from mapping these data so far? What would you ideally like to map, if the database existed? What gaps exist in historical databases that make them more difficult or impossible to map?
EA: For me, the most interesting, and heartening, thing is how much of American history IS mappable if you assemble people with the right skills and curiosity. That discovery only feeds my desire for dozens of maps that can connect with one another, talk with one another. I would like to map the ways that American culture evolved out of the interaction of all the ethnic and regional cultures that have always marked American history. That might begin by devising ways to unlock the patterns in the billions of words in Chronicling America.
RN: One of the great joys of this project so far has been delving into topics outside of my particular area of expertise. The historical profession trains us and typically rewards us for specialization. Pursuing a project about the whole of American history is thus a daunting prospect, but it’s been enormously enjoyable and rewarding to learn about areas I wasn’t as familiar with when exploring how best to map them. The foreign travel of presidents and secretaries of state was a topic I knew little about before mapping it, and seeing the rise and fall of visits to different geopolitical regions certainly gave me a much more detailed sense of the foci of twentieth-century American foreign policy.
While certainly not unexpected, for me the most important conclusion that building these maps has reinforced is the extent to which the American state, in coordination with the private market, hampered communities of color in accumulating wealth in the twentieth century through housing policies and real-estate market norms. While obviously far from the only cause, the redlining in the 1930s that we map in “Mapping Inequality” and the urban renewal projects of the 1950s and 1960s that we’re exploring in our next map certainly contributed substantially to the huge racial wealth gap today, with the median wealth of white households roughly 13 times that of black households. By publishing these maps we hope that in some small way we can draw attention to this history and to some of the sources of wealth and racial inequality today.
In terms of what I’d like to map, I’d say I’d love to persuasively map phenomena as immaterial as culture and ideology over time. If the data were available, I’d like to map not just where people were and how we categorize them but what they thought and believed, the histories of sentiments we often affix “isms” to: nationalism, environmentalism, antimodernism, nativism, sectionalism, secularism, etc. One way we try to address that is, as I mention, including a lot of primary sources that convey nuances and individual experiences that can’t easily be mapped. And we continue to think about what kinds of data and what kinds of creative techniques we can use as windows into that element of the past.
Int: Thank you so much. As a final question, do you have any thoughts on the best way for someone to get involved in this kind of mapping work, or to incorporate it into their teaching?
EA: Rob and I will offer two quite different answers to this question, one born of naïve optimism and the other born of experience. First, the optimistic take. Mapping is probably the easiest way to get satisfying results from a digital humanities project. You are guaranteed a result from an effort to arrange things spatially and people will understand what you are showing them, unlike a more abstract representation. Thanks to free software, moreover, you can do so yourself. I think it’s a good way to begin thinking and working digitally.
RN: Thanks to enormous labor, we have released six maps, considerably fewer than we had proposed. In part this difference stemmed from the increasing ambitiousness of some of the projects. For example, georectifying the more than 150 maps in “Mapping Inequality” involved adding more than 72,000 control points (points associating a particular pixel on the raster map with a geographical coordinate). Creating polygons for the more than 7,500 neighborhoods involved marking nearly 230,000 vertices. To date, with our research partners, we’ve transcribed more than 125,000 pieces of data from the area descriptions, a number that will grow to around a million when the transcription is complete. All of which is to say that developing these maps has involved more research, more GIS work, and more application development than we’d initially expected.
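For readers curious what a control point buys you computationally, here is a minimal sketch of the underlying idea: given a handful of pixel-to-coordinate pairs, a least-squares fit recovers an affine transform that maps any raster pixel to geographic coordinates. The control points and coordinate values below are hypothetical and purely illustrative; production GIS tools use far more points and often higher-order transforms.

```python
import numpy as np

# Hypothetical control points: (pixel_x, pixel_y) -> (longitude, latitude).
# These values are invented for illustration, not from any actual map.
pixel = np.array([[0, 0], [1000, 0], [0, 800], [1000, 800]], dtype=float)
world = np.array([[-77.5, 37.6], [-77.3, 37.6], [-77.5, 37.4], [-77.3, 37.4]])

# Solve for a six-parameter affine transform, world = [px, py, 1] @ coeffs,
# by least squares over all control points.
ones = np.ones((len(pixel), 1))
design = np.hstack([pixel, ones])                         # shape (n, 3)
coeffs, *_ = np.linalg.lstsq(design, world, rcond=None)   # shape (3, 2)

def pixel_to_world(px, py):
    """Map a raster pixel to (longitude, latitude) via the fitted transform."""
    return np.array([px, py, 1.0]) @ coeffs

# The raster's center pixel lands at the center of the control extent.
print(pixel_to_world(500, 400))  # ~ [-77.4  37.5]
```

With only four points this fit is nearly exact; in practice many more control points are used so that errors in any single point are averaged out, which is one reason the 72,000 figure above is so large.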
Student interns in the DSL have done most of the work of georeferencing and data entry for all of these maps. It’s hard to imagine how we would have created American Panorama without them. They, in turn, have learned a lot about American history, about GIS, and about the digital and public humanities.