From the Editor

Townhouse Notes

Ghosts in the Machine

L. Renato Grigoli | Feb 7, 2023

Those who keep abreast of social media, and particularly those who have been preparing courses for the spring term, have probably heard about ChatGPT, the newest assault on academic integrity. With a few clicks, panicked commentators breathlessly tell us, your students can have an artificial intelligence (AI) generate an essay that will pass any plagiarism checker. Long-form written evaluations are now useless, and we should all revert to oral examinations. Machines can think, automation has come for the humanities, and now all is well and truly lost. Probably.

But concerns over AI software like ChatGPT are misplaced. The software doesn’t do what people are saying it does, and it certainly can’t write a strong analytical essay. When I entered essay prompts I have used, it returned a banal set of generalities with no synthetic analysis. For example, when asked a question from a course I TA’d on the history of captivity—“During the 12 years he spent in the South, did Solomon Northup think of himself as a slave or as a captive?”—ChatGPT returned

In his book, “Twelve Years a Slave,” Solomon Northup writes about his experiences as a slave and describes himself as a slave throughout the book. He writes about the dehumanizing treatment he faced and the ways in which he was treated as property. He also describes the constant fear of punishment and the lack of control he had over his own life.

This response appears competent but is not an answer to the question. It does not show a mastery of any of the learning outcomes the question is designed to test. The AI does not craft and argue for a distinction between captivity and slavery, nor can it grasp the nuance between the legal reality of Northup’s enslavement and his own internal self-conception. The final two sentences, while factually true, are not evidence for Northup’s self-conception as either captive or slave, and could, with the right structure, be made to argue either position. It is, in short, not an argument but a series of facts assembled in a way that encourages the reader to create an analytic argument from them. It tricks the reader into believing there is intelligence by having the reader do all the analytical work.

The AI’s response reads like the essay of a student who has not yet been taught the difference between lining up facts and making an argument, a student who has been taught to repeat but not to think. ChatGPT is quite good at summarizing information into an easily readable form. If asked a question to which the answer is a sequence of facts, or to which the answer might be lifted from Wikipedia with only a little effort, it delivers solid copy. The AI simply does the arduous but formulaic task of turning Google search results into a paper.

And herein lies the issue: AI writing bots cannot write good essays, but they can reveal weaknesses in our teaching and in how we evaluate student work. If ChatGPT or a similar AI can provide a passing answer to a question, then the standards of evaluation reward mere fact repetition, the question itself requires no critical thinking, or both. Learning to repeat information is not the same as learning to think, and the latter is what humanists claim to provide. Nevertheless, many evaluations of learning outcomes, in both the humanities and STEM, instead look only for information retention. Such criteria are, to be fair, at least partly the product of instructors’ exhaustion and overwork and of universities’ need to retain students.

This is why students can make use of AIs, why there is so much fodder for panicked think pieces. Of course ChatGPT can provide functioning code to answer a question on a computer science test. Ask someone who writes computer code, and they’ll tell you that most of their work is searching the web for similar code and figuring out how to implement it for the specific problem they have. In fact, for this reason, AI is probably a greater threat to engineering than it is to the humanities. And of course ChatGPT, if given the chance, can pass parts of the bar or medical licensing exams; those exams test how much information one can retain and recall, not intelligence. But writing a cogent, engaged essay requires much, much more than the ability to search for an answer or three and then assemble them into a pleasing shape.

As my father, a computer engineer, says, the problem with computers is that they can only do exactly what we tell them to do. AI is a tool like any other, and if we choose to see the development of AI writing as yet another pedagogical approach rather than an existential threat, we can harness its ability to deliver clean prose and focus on teaching our students how to think and argue as historians. Already, some teachers have begun to create assignments where students leverage the benefits of AIs like ChatGPT on a final project, leaving the writing in the hands of the machine and the argument in the minds of the students. Such an assignment forces students to be both more deliberate in the questions they pose to the AI and more thoughtful in their editing. It does this by placing students in the position usually reserved for the teacher, one from which they can note the AI’s errors and logical leaps—as I have done with the AI’s response to the above question about Solomon Northup—and then edit and expand it to create something better. This kind of assignment can allow us to evaluate how well a student has learned to ask questions and analyze the information given in response—key skills of critical thinking.

But if the development of artificial intelligence results in the death of the humanities, then it will be because it will have shown that the emperor has no clothes—that we were not, contrary to the claims of our most strident defenders, teaching students how to think, and that we were unwilling to do so. This demise would be self-inflicted, but it won’t be because of a machine. As humanists know, humans, not their tools, are the root cause of humanity’s problems.


L. Renato Grigoli is editor of Perspectives on History. He tweets @mapper_mundi.


This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. Attribution must provide author name, article title, Perspectives on History, date of publication, and a link to this page. This license applies only to the article, not to text or images used here by permission.
