Published Date

October 29, 2025

Resource Type

AHA Resource, Congressional Briefing Resource, For the Classroom

Thematic

Medicine, Science, & Technology, Political

AHA Topics

AHA Initiatives & Projects

About the Briefing

This handout was created for the AHA’s October 29, 2025, online Congressional Briefing on the history of artificial intelligence. Panelists Sarah Igo (Vanderbilt Univ.), Aaron Mendon-Plasek (Purdue Univ.), and Rebecca Slayton (Cornell Univ.) discussed the historical context of privacy and national security issues that are being transformed by AI. Kathryn Cramer Brownell (Purdue Univ.) served as moderator.


Technology, Privacy, and Security

  • New technologies have regularly presented challenges to Americans’ privacy and security, for policymakers and ordinary citizens alike. 
  • In the late 19th century, a host of innovations in communications and media made virtual intrusions as important as physical ones for the first time in US history, prompting calls for a legal right to privacy.
  • In the more than a century since, Americans have weighed how to balance the many efficiencies and conveniences that new technologies—from the telephone and instantaneous photography to chatbots and deepfakes—provide against their capacity to compromise individuals’ physical, psychological, biometric, financial, and data security.
  • Although generative artificial intelligence (AI) appears to harbor entirely new threats, technologies of exposure, surveillance, interception, capture, and transmission have long shaped the conditions for, and understandings of, individual privacy in the United States.
  • At key moments in the last 150 years, debates over the privacy risks posed by new technologies generated novel legal and policy responses. This history offers a view of the regulatory roads taken and not taken, allowing an assessment of the efficacy of existing US frameworks for public oversight.

Early Computing in Public Policy

  • In the United States, public debate about privacy, security, and computers first emerged in the 1960s, when much of the infrastructure that motivates current concerns about AI was developed.
  • In the 1970s, Congress passed laws to protect the privacy of citizens and consumers but declined to create a federal privacy agency or enact other recommended protections. Individual privacy largely fell by the wayside as the internet was commercialized in the 1990s.
  • Current AI trends focus on “machine learning.” What most distinguishes it from past approaches is its dependence upon vast amounts of data that are produced and gathered through the internet.
  • While many AI applications are useful and do not violate privacy, the specific developments that threaten privacy and security today are largely enabled by explicit choices to forgo privacy protections. In this sense, we can think not just about how AI is impacting privacy, but also about how a lack of privacy protection has shaped the evolution of AI.

Machine Learning

  • Historical accounts of AI frequently downplay the contributions of those communities critical to the creation of contemporary forms of machine learning. This omission has entrenched an oversimplified narrative of technological development, which, in turn, has been leveraged by technologists to argue forcefully for the inevitability and superiority of certain forms of AI.
  • These particular visions, often couched in the discourses of “efficacy” and “innovation,” have reorganized beliefs about technology transfer, the value(s) of science, and the ways technology facilitates economic development. Such concerns explicitly inform conversations about national security even as they spur the reimagining of “privacy.”
  • The work of several historical communities of practice engaged in “machine learning” research suggests how disunified research efforts spurred the proliferation of specific contemporary forms of machine learning.
  • The interweaving of privacy and national security has been a distinctive feature of many historical efforts to use forms of machine learning to make decisions given contradictory information.

Participant Biographies

Kathryn Cramer Brownell is professor of history and director of the Center for American Political History and Technology at Purdue University. She is author of Showbiz Politics: Hollywood in American Political Life (2014) and 24/7 Politics: Cable Television and the Fragmenting of America from Watergate to Fox News (2023), which won the Eugenia M. Palmegiano Prize from the American Historical Association and the PROSE Award in Media and Cultural Studies from the Association of American Publishers. She also serves as senior editor for the “Made By History” column at TIME Magazine.

Sarah E. Igo is the Andrew Jackson Chair in American History at Vanderbilt University. She teaches and writes about modern US cultural, intellectual, legal, and political history, with special interests in the human sciences, the sociology of knowledge, and the public sphere. Her most recent book, The Known Citizen: A History of Privacy in Modern America, traces US debates over the meaning of privacy, beginning with “instantaneous photography” in the late 19th century and culminating in our present dilemmas over social media and big data. Her first book, The Averaged American: Surveys, Citizens, and the Making of a Mass Public, explores the relationship between survey data—opinion polls, sex surveys, consumer research—and modern understandings of self and nation. She is also a co-author of Bedford/St. Martin’s American history textbook, The American Promise.

Aaron Mendon-Plasek is an assistant professor of history at Purdue University. His first book project, tentatively titled The Ill-Defined World: A History of Machine Learning and Novel Political Knowledge, examines how little-known communities of transnational researchers sought to build learning machines that linked “efficacy” to visions of subjectivity. The book traces how these schemes of quantification would go on to remake contemporary AI, scientific inquiry, and the ways that societies know themselves.

Rebecca Slayton is associate professor of science and technology studies at Cornell University. Her research and teaching examine the relationships among risk, governance, and expertise, with a focus on international security and cooperation since World War II. Her first book, Arguments that Count: Physics, Computing, and Missile Defense, 1949–2012, shows how the rise of a new field of expertise in computing reshaped public policies and perceptions about the risks of missile defense in the United States. In 2015, Arguments that Count won the Computer History Museum Prize. Her second book project, Shadowing Cybersecurity, examines the emergence of cybersecurity expertise through the interplay of innovation and repair. She is also working on a third project that examines tensions intrinsic to the creation of a “smart” electrical power grid—i.e., a more sustainable, reliable, and secure grid.