This project aspired to take America’s historical pulse by assessing public perceptions of, and engagement with, the discipline of history and the past. Americans are clearly interested in history: consider, for example, the very existence of a “History Channel” on television, the persistence of genealogy in popular culture, the ubiquity of history books on bestseller lists, and the unflagging popularity of films and video games on historical topics. Project leaders sought to lend some precision to the cluttered landscape of assumptions about the breadth and depth of this interest. We hope that our conclusions will help cultural organizations, K–12 and higher education institutions, state humanities councils, journalists, policymakers, and others better understand their audiences and broaden the relevance of historical work to public culture.

A partnership between the American Historical Association and Fairleigh Dickinson University, with generous funding from the National Endowment for the Humanities, the project measured the American public’s perceptions and uses of history through a national poll. But this was not the first time such a survey had been conducted. Over 20 years ago, Roy Rosenzweig and David Thelen published a systematic attempt to measure the apparent disconnect between academic historians and the American public on experiences with and uses of the past. The Presence of the Past: Popular Uses of History in American Life (1998) drew on a survey of 1,453 Americans, queried by telephone about their connections to the past and how those connections influenced daily life and hopes for the future. Far from a “gotcha” quiz designed to show how ignorant Americans are of history (a genre with a long and sordid past of its own), their study was primarily concerned with what people do know about history, why it is important to them, how they use it, and why their understandings and uses seem to diverge from academics’ own.

Rosenzweig and Thelen’s work deeply influenced how historians thought about reaching audiences beyond the classroom and the academy for two decades. But it is precisely because of this influence that their study needed to be reconsidered, reconceptualized, and redone. This is especially important as history increasingly becomes a political football and social wedge issue, and as the nation approaches the 250th anniversary of its independence, a commemoration whose agenda must be responsive to public culture and historical consciousness.

Although our initial intent was to provide a straightforward update to Rosenzweig and Thelen’s findings, it quickly became apparent that a neat replication would not be optimal. Too many changes had affected the public’s interactions with, and interest in, the past, while other issues first raised by Rosenzweig and Thelen called for more in-depth investigation. Consultations with our diverse advisory board, as well as focus group sessions conducted at the 2019 annual meeting of the American Association for State and Local History (AASLH), surfaced a host of new issues that both puzzled and intrigued those working in history-related fields. A new survey instrument, albeit one overlapping with Rosenzweig and Thelen’s, was needed to answer these novel questions and concerns. (More detailed information on this instrument, as well as how and to whom it was administered, can be found in Appendix A and Appendix B.)

A new survey was all the more important since the general public now receives information about the past in ways that were only nascent (e.g., websites, 24-hour news channels) or simply did not exist (e.g., social media, podcasts, mass consumer DNA testing) when Rosenzweig and Thelen did their work. Approaches to teaching history have likewise changed considerably in the interim, with some experimentation in moving from a predominant “coverage” methodology to a growing school of “historical thinking.” Such developments and changes might have had an impact on the American public’s perceptions of the past and/or engagement with history, but no one had attempted to measure them systematically. Growing political polarization, racist violence, and “history wars” to control the past and the teaching thereof added to our perception that we were operating in a climate considerably changed from the 1990s. If our findings and discussion sometimes seem to tilt toward these issues and the demographics that animate them, it is because of this unique moment in history that we are living through.

In conducting such a survey, we found ourselves in good company. Two related reports issued in 2020 were also products of national-level investigations. More general in scope was The Humanities in American Life, published by the American Academy of Arts and Sciences, based on a poll of 5,015 adults. Though encompassing all humanities disciplines, the report’s findings are largely complementary to our own. A history-specific report from AASLH, Communicating about History: Challenges, Opportunities, and Emerging Recommendations, seeks ways to bridge the divide between professional historians and the general public. In taking up the mantle of Rosenzweig and Thelen’s earlier study but shifting from a survey to a focus-group approach, AASLH’s work likewise has much in common with the present project. Researchers involved in both projects were tapped to serve on our own advisory board.

Through advisory committee meetings and focus groups, and after logging more virtual sessions than we now care to remember, we ended up with a survey instrument addressing roughly 10 main issues that form the basis of this report. We aimed to present as much survey data as possible in a user-friendly format, including well over 150 charts illustrating our results. These include not only topline findings but cross-tabulations for selected demographic groups or for correlations between poll questions. An alphanumeric code always appears in parentheses (D1, for example), referring the reader to the survey question in Appendix B on which the illustrated data are based. Figures highlight notable differences, but sometimes point to similarities as well, which can be equally important. We hope these visualizations are more useful than a blizzard of numbers on a spreadsheet could ever be. Still, there are literally thousands of possible cross-tabulations from our data, and our report displays charts for only a fraction of them. For those with the desire and technical acumen, the complete raw survey data are available on the AHA website.
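For readers who download the raw data, a cross-tabulation of the kind charted throughout this report can be computed in a few lines. The sketch below uses pandas with hypothetical column names (“age_group,” “trust_museums”) and toy responses; the actual variable names come from the codebook in Appendix B.

```python
import pandas as pd

# Hypothetical respondent-level records; column names and values are
# illustrative only, not the actual AHA codebook labels.
df = pd.DataFrame({
    "age_group": ["18-29", "30-44", "45-64", "65+", "18-29", "45-64"],
    "trust_museums": ["high", "high", "medium", "high", "low", "high"],
})

# Cross-tabulate a demographic variable against a poll question,
# normalizing within each demographic group to get row proportions.
table = pd.crosstab(df["age_group"], df["trust_museums"], normalize="index")
print(table.round(2))
```

Normalizing by row (`normalize="index"`) makes groups of different sizes comparable, which is how the report’s demographic comparisons are typically read.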


Report Overview

First, we sought to explore how the public defines “history.” Rosenzweig and Thelen made a strategic decision in their 1990s survey to substitute “the past” for “history,” on the grounds that “history” was perhaps too formal, while “the past” might better resonate with respondents. Although that reasoning was sound, the substitution yielded accounts of personal experience that sometimes strained the distinction: respondents often recounted stories that clearly meant something to them but revealed limited understanding of broader contexts. As practitioners in the discipline, we wanted to know what history, with all its attendant baggage, means to the public. Our results in Section 1 show not only broad consensus on the matter but also far-reaching implications for Americans’ curiosity about, and empathy for, the past and other people.

The questions undergirding Section 2 were geared toward determining why the public cares about history, if it cares at all. Here, we found that various factors drove people to want to know more about the past, including learning for its own sake, entertainment, and perhaps a desire to leave a legacy. Yet, we also discovered that a sizable proportion of the public has no interest at all in learning history. Cross-tabulations reveal interesting variations between demographic groups on all these factors.

Where people turn for historical information is the focus of Section 3. As educators, we are susceptible to assuming an air of self-importance, expecting classrooms and teachers to be the go-to sources for anyone wanting to know more about the past. But of course, the public has a diverse and ever-expanding menu of possibilities, many of which bypass formal education settings and, for better or for worse, any sort of quality control. Knowing how frequently people use these sources is vital to understanding what the public knows about the past, how and why it knows it, and how those in the history field might better engage broader society.

Section 4 ascertains how trustworthy those diverse sources of the past are—at least in the public’s mind. In doing so, we follow in the footsteps of Rosenzweig and Thelen, who likewise measured trust in sources, though our own list of informants on the past is considerably expanded. We learned that levels of trust are often functions of respondents’ ages, races and ethnicities, and political party affiliations. We also found that the most trusted sources were not always the most utilized, suggesting that pursuit of truth is not necessarily at the top of the public’s mind when seeking historical information.

We turned our attention in Section 5 to how the public prefers to learn history. Specifically, do people desire an unmediated experience, one where they personally consult texts and artifacts and draw their own conclusions? Or do they favor a more passive approach, one where an assumed expert does the heavy lifting of interpretation and simply reports it? Along with those measurements, we investigated the role of entertainment in the history learning process, especially whether learners regarded it as an asset or a detriment. Though ours were measures only of people’s learning preferences, not actual outcomes, the findings should provide educators and public historians with valuable information on audience expectations, realistic or otherwise.

The state of history education at the high school and college levels comes under scrutiny in Section 6. In particular, do these settings emphasize knowledge of facts or historical thinking skills? Have these experiences been positive, negative, or incomplete? And what effect do these educational encounters have on learners’ desire to learn more about the past? As happened throughout the survey, we found that respondents’ backgrounds often correlated with attitudes, raising interesting cause-and-effect questions about the role of formal history education in shaping society’s views and values.

Which aspects of history most interest people? Section 7 takes a two-pronged approach to this issue by first determining which sources of information most and least motivate the public to learn more about the past. Once again, the correlations between source utilization, trustworthiness, and ability to spark interest are not always in alignment. From there, we measured whether respondents view a variety of topics as more or less important than baseline subjects, and the degree to which a selection of historical subjects intrigues them.

Those working in the history discipline produce knowledge for the public, but is that information necessarily what people want to know? If not, where are the gaps? Section 8 provides some intriguing, if at times frustrating, answers to these questions. By comparing respondents’ views on the value of history education versus learning about other fields, we see where knowledge of the past rates. We then asked whether historians seem to be giving adequate attention to a range of topics, with the results varying considerably by demographic group.

Section 9 tackles the hot-button issue of historical revisionism. Do people expect our knowledge of the past to change? If so, why? And are the answers respondents gave a function of beliefs about what constitutes history? We then turn to the contentious questions of whether history education should celebrate or question the nation’s past, and whether it is acceptable to make others uncomfortable by teaching about painful subjects. As one can imagine, there was considerable disagreement among subgroups, but some unexpected commonality, too.

We close out the report in Section 10 with the question of how various outlooks on the past guide people’s civic engagement. Are there causal links? Are overlaps simply coincidental? Or are other factors even more important to fostering a participatory public? The survey data provide valuable food for thought.

Each section begins with a summary of our findings and ends with the problems and opportunities we detect arising from them. We have primarily focused on presenting the survey results clearly, drawing comparisons with other findings when appropriate, and making observations as they occurred to us. We suspect that readers will see many things that we did not, or will be able to explain some of our findings in ways that eluded us. When that happens, we hope those readers will share their insights with the wider history community.

The Plusses and Pitfalls of Polling

Educators are well aware of the pros and cons of multiple-choice tests. On the plus side, the format allows one to assess familiarity with a broad range of subject matter quickly and efficiently. But a downside is that it is difficult to dig much below surface-level knowledge, to ask follow-up questions based on responses, and to understand the “how” and “why” of a person’s thinking.

So it was with our poll, administered to an online probability panel of 1,816 adults. Take the public’s uses of, and trust in, sources of the past. The survey quite efficiently measured respondents’ relative utilization of a variety of sources, as well as people’s thoughts on those sources’ ability to convey truthful information. Although we learned that documentary films and television are the most frequently consulted fonts of historical content, we cannot be sure why that is the case, how much time people spend watching them, or what viewers learn from these sources. Moreover, we do not know which programs and films people are watching, let alone whether we would classify them as history-based or even documentaries. A follow-up survey or focus group sessions could tease out that information.

The confidence that people place in sources of the past raises a related problem. Our poll measured only perceptions of trustworthiness, which should not be equated with an empirical measure of actual reliability. The same dynamic applied to many other survey items, where having people register their attitudes was the goal. This does not mean that perceptions are unimportant. People make decisions all the time based on emotional responses, while believing their judgments are objective and evidence-driven. The past itself and the discipline of history can be both casualty and beneficiary of this phenomenon.

At base, this is a matter of direct versus indirect measures. Take the survey’s finding that most respondents believed they learn better when history is presented as entertainment. Although people clearly thought that to be the case (an indirect measure of learning), only scrutiny of their actual learning (a direct measure) could tell us whether the belief is grounded in reality. Once again, perceptions matter. But readers of this report should exercise caution when basing decisions on such findings. For instance, when a majority of people tell us that they actively investigate issues that conflict with their extant knowledge, we are wary: confirmation bias is a powerful force. So, too, are the sophisticated algorithms used by search engines and social media to push users into like-minded spheres. There is a good chance that many responses here are more aspirational than real, and that some answers reflect what respondents think they should say rather than what they actually believe.

Conducting our survey in the fall of 2020, during the worst global pandemic in over a century, presented unique challenges. Eighty percent of our respondents indicated that movement in their communities had been restricted in some way, while plans to visit a museum or historic site had been curtailed for 34 percent of them. Making things more complicated was the nation’s careening toward a hotly contested and divisive presidential election, further exacerbated by national protests and violence in the wake of George Floyd’s murder in Minneapolis. In the final analysis, it is impossible to disentangle our results from those factors.

Yet as historians, we knew that no pristine moment of harmony, no golden oasis of reconciliation, was forthcoming, even if delaying the poll had been an option (it was not). We thus made reasonable adjustments as needed, such as extending the timeframe back to January 2019 (well before COVID-19 was on the radar) when asking about visits to museums and historic sites. Other results, such as partisan views on the celebration or questioning of US history, likely bear the marks of circumstance, but this is true of anything that records a snapshot in time. Indeed, one may find added value in this survey’s results as an artifact of the tumultuous context in which it was carried out.

Pitfalls aside, we are hopeful, even confident, that our poll results offer valuable insights into the public’s views on, and uses of, history. That said, we are likewise aware that our efforts constitute just a small step on a never-ending journey, since any single answer, however satisfying, inevitably leads to several other questions, often more vexing than the original. Perhaps we should recall the English writer and lexicographer Samuel Johnson as he reflected on his monumental dictionary project. Unable to achieve perfection or finality, he compared his plight to the ancient Arcadians chasing after the sun. For, try as they did, whenever they reached the crest of the hill upon which the sun appeared to sit, they found that it was still the same distance away.
