Perspectives Daily

Don’t Stop Worrying or Learn to Love AI

A Plea for Caution

Stephen Jackson | Nov 6, 2023

In late 2022, after the highly touted release of ChatGPT showcased the stunning progress of generative artificial intelligence (AI), there were a number of breathless commentaries declaring the end of college writing as we knew it. But by mid-2023, a new wave of articles by and for educators changed course, describing ways to use generative AI in the classroom. These works, including a recent article in Perspectives Daily, acknowledge ethical problems inherent to this new technology but nevertheless insist that, since AI is out there, we must use it in our classrooms. I’d like to offer a counterpoint: caution and patience are the best strategies for most faculty right now. Historians should be wary until we have a better grip on the problems and possibilities generative AI presents. There are two major reasons why caution should be the order of the day: the true scope of the changes wrought by AI will take time to sort out, and the ethical implications for historians are too grave to brush aside.

Caution should be the historian’s watchword when it comes to bringing artificial intelligence into the classroom. Dave Sutherland/Flickr/CC BY-NC-SA 2.0

Without good guidelines for classroom use, we are all fumbling in the dark. As with any other shift in pedagogy, anecdotal stories from well-intentioned individual instructors are no substitute for informed, research-based strategies that will eventually establish best practices for using generative AI in the classroom. Such guidance has begun to trickle out, and it suggests caution. UNESCO released one of the first guidance statements on the pedagogical use of generative AI only in early September 2023. The report noted the myriad problems AI poses for educational purposes in the absence of effective national or international regulation. While state-level regulation will be important, professional organizations like the American Historical Association will also need to develop policies and best practices at the disciplinary level. (Editor’s note: The AHA has appointed a committee to explore these issues.)

After the development of guidelines will come the more difficult step: training instructional personnel in appropriate methods of using AI. Educators will need to become well versed in the theory and practice of AI pedagogy before they can deliver quality instruction with the technology. Such training will need to include a practical and methodological introduction to the evolving field of generative AI, which will demand even more time and, crucially, institutional resources. In other words, it will require a structural change in how we approach education.

Admittedly, the ubiquity of chatbots makes patience particularly challenging at this moment. ChatGPT gets all the press, but the swift proliferation of generative AI is truly breathtaking, and our students have easy access to the technology. There is, then, a powerful argument that historians and history educators have a professional responsibility to immediately incorporate generative AI into our classrooms. But what if this concern for our students causes more harm than good? Without proper guidance and training, we risk providing educational content that is inadequate, misleading, or downright wrong.

The ethical challenges posed by AI are, if anything, even thornier. An easy historical comparison likens the impact of generative AI to the introduction of the handheld calculator into classrooms in the 1970s. On the optimistic reading, that technology freed humans to do more advanced work by taking on much of the burdensome, mundane labor of calculation. But the analogy is flawed. Generative AI trains on vast amounts of data available on the internet to create new content based on algorithmic predictions of likely linguistic patterns, all without attribution to the creators of the original content. And when faced with something it does not know, generative AI famously “hallucinates.” It might be more helpful to imagine a handheld calculator that routinely produced false results like 2 + 2 = 5.

The problems of misattribution, plagiarism, and hallucination are particularly serious for historians. At present, the technological basis of generative AI seemingly stands at odds with the nature of quality historical work, which relies on accurately and reliably citing all sources of information. The AHA’s Statement on Standards of Professional Conduct includes a list of shared values and emphasizes that “a reputation for trustworthiness” is arguably the “single most precious professional asset” of historians. According to the statement, “all historians believe in honoring the integrity of the historical record. They do not fabricate evidence.” To maintain our credibility as a discipline, ensuring the trustworthiness of historical work must remain our top priority. It is not yet clear how we can do this while using generative AI in our scholarship and teaching.

The problems of credibility and integrity are just two among a host of ethical concerns to weigh when evaluating the use of generative AI. Students can clearly use this technology to cheat quickly and largely undetectably. For some students, the problem is compounded by the fact that our usual response to cheating concerns, increased surveillance (now packaged in the form of AI detectors), is unreliable and may be biased against non-native English speakers. That points to yet another major concern with AI: systemic bias. Because AI programs are trained on content from the internet, they often reproduce systemically discriminatory content. We might also factor in the environmental cost of the massive computing infrastructure required to train and sustain new AI programs.

As things currently stand, we have neither a consensus on principles nor the professional training required to offer responsible AI-based instruction to our students. Patience and caution should therefore be the order of the day. Instead of rushing to be among the first to bring AI into your classroom, consider lobbying for well-crafted policies at the institutional level, calling for additional guidance from professional associations, and advocating for the enhanced professional training our academic communities desperately need to get this right.

With due diligence and caution, I have every faith that the professional community of historians will find ways to responsibly adjust to this new era of generative AI. It’s just going to take time.


Stephen Jackson is assistant professor of the Historical, Social, and Cultural Foundations of Education at the University of Kansas and winner of the AHA’s 2023 Eugene Asher Distinguished Teaching Award. Find him on X (formerly Twitter) @stomperjax or on Bluesky @stomperjax.bsky.social.



This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. Attribution must provide author name, article title, Perspectives on History, date of publication, and a link to this page. This license applies only to the article, not to text or images used here by permission.

