Publication Date: June 7, 2010
Perspectives Section: Perspectives Daily
AHA Topic: Publishing

One of the most interesting discussions at the recent THATCamp (The Humanities and Technology Camp) concerned the future of peer review in the humanities, and whether it can and should continue in its current form.

In the eyes of many participants in the session, the current peer review system promotes conservatism about the form and content of scholarship, and fails to use available technologies to speed up and democratize the system. These problems seem particularly acute in the digital humanities, where scholars have to struggle with the added challenge of creating new programs to facilitate and disseminate their discoveries. But the alternatives remain quite hazy. They often seem to consist of cataloging the flaws in the current system and asserting that technology can cure all ills. The challenge lies in developing new forms of peer review better fitted to the online environment, both before publication (in the development and assessment stage) and after publication (as a means of validating the value and quality of the work).

We tend to think of peer review narrowly, in its traditional form as the gatekeeping function performed before an article or monograph is accepted for publication. Properly understood, however, this is only one part of the process of preparing a work for publication. As the acknowledgments in most works of scholarship indicate, drafts tend to be shared with and reviewed by a wide network of peers before publication, peers who offer advice and suggestions on how the manuscript might be improved. The reviewers who serve a gatekeeping role for a journal or press differ only in the (not inconsiderable) power they have to determine when and where the manuscript will be published.

While the gatekeeping process has frustrated many authors through the years, it is not immutable. At the American Historical Review, for instance, the current form of peer review has been in place for only about half the journal's run. The notion of sending a manuscript out to expert reviewers developed in the early 20th century, as part of an effort to professionalize the journal and open it beyond the small circle of editorial board members, their students, and friends. At mid-century the process was further refined to include "double-blind" review, in which neither the author nor the reviewer knows the other's identity. This was intended to open the journal to scholars previously excluded by various forms of discrimination, whether because they practiced the wrong religion or taught at the wrong schools. The process did not always work as intended, but it did create a more neutral space, somewhat freer of elitism and discriminatory tendencies.

Over time, these anonymous reviewers have been assigned a fairly specific set of criteria, including originality in content and interpretation, contribution to the body of knowledge and the field, and clarity of thought and expression. But putting these principles into practice has generated a number of objections about the tendency toward conservatism in assessments, lack of transparency, and the inevitable costs and delays. Meanwhile, the double-blind process has grown increasingly tenuous thanks to online full-text search engines such as Google. It is easy enough to type in a string of text and get a pretty good approximation of who the author might be.

As an alternative, there is growing enthusiasm for a new system of post-publication peer review (with or without publishers) based on a variety of online metrics and reviews. As regular users of Google and Amazon know, systems for harnessing thousands of discrete transactions, rankings, and links can produce some remarkably accurate perceptions of relationships and value.

Post-publication peer review is not entirely novel in our discipline, as the book review is of signal importance in legitimating new works of scholarship and anchoring many tenure portfolios. But the traditional book review has not kept up with new forms of scholarship, creating a significant hazard for anyone interested in working in the digital humanities. In the Gutenberg-e project, for instance, we found that journal editors often had little idea what to do with an e-mailed link to a web site, and reviewers had little better idea of how to review the digital aspects of the publication. There is little doubt that online scholarship needs a new or expanded system of review.

But a system of post-publication peer review based on online metrics and rankings is not necessarily a cure for all ills. Within the scientific and library communities there is already an ample literature on the ways citation rankings can be manipulated and abused. At present the objections tend to be loudest among the journals themselves, which already face such rankings in Europe and elsewhere. It is easy to imagine similar objections from individual scholars who fear that their personal prestige or subject field may limit the attention their work receives and depress their ratings. And it is not terribly difficult to imagine situations in which senior scholars could use their superior positions and larger networks of colleagues and students to knock down the ratings of junior historians offering alternative interpretations.

Another issue of particular concern to humanities scholars is how long their articles retain value for readers. Articles in science journals generally start to lose value (and citations) after as little as five years, which is why the ISI journal citation rankings are based on a three-year time window. History articles, in contrast, tend to keep gaining value and citations well past the ten-year window ISI covers. An adequate post-publication peer-review system for the humanities would therefore need some temporal dimension, to capture the way an article or other publication may rise or fall in value over time.
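To make the point concrete, a metric with a temporal dimension might tally citations over several rolling windows rather than a single short one. The sketch below is purely illustrative: the function, the window lengths, and the citation data are invented for the example and reflect no actual ISI formula.

```python
def windowed_citation_counts(pub_year, citation_years, windows=(3, 10, 25)):
    """Count citations accruing within several windows after publication.

    A single short window, as in science-oriented rankings, would miss
    the slow accrual typical of history articles; comparing several
    windows makes the temporal trajectory visible.
    """
    ages = [year - pub_year for year in citation_years if year >= pub_year]
    return {w: sum(1 for age in ages if age < w) for w in windows}

# Hypothetical example: an article published in 1995 whose citations
# keep accumulating well past the ten-year mark.
print(windowed_citation_counts(1995, [1997, 1999, 2003, 2006, 2008, 2009, 2010]))
# -> {3: 1, 10: 3, 25: 7}
```

Under a three-year window this hypothetical article looks negligible; over twenty-five years it looks steadily influential, which is precisely the trajectory a humanities-oriented system would need to register.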

Finally, in constructing a system of post-publication review we need to decide who has standing in it. Should it, for instance, give different weights to the opinions of experts in the subject field, other scholarly peers, and the general public? As an extreme case, consider what happened to Michael Bellesiles, author of Arming America. Regardless of how one feels about the facts of the case, the episode clearly demonstrated how slowly scholarly forms of post-publication review operate in comparison to freewheeling discussion on the internet. How can (or should) a system of post-publication review balance the desire for more democratic assessment against the inevitable effects of exposure to the varied interests and opinions of a broader public?
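One crude way to picture the standing question is as a weighting problem: each class of reviewer contributes to an overall score according to some agreed weight. The sketch below is hypothetical; the reviewer classes and weights are invented for the example, and choosing those numbers is exactly the policy question at issue, not something any existing system settles.

```python
# Hypothetical weights per reviewer class; these values are assumptions
# made up for the illustration, not drawn from any real system.
WEIGHTS = {"subject_expert": 3.0, "scholarly_peer": 2.0, "public": 1.0}

def weighted_score(ratings):
    """Aggregate (reviewer_class, rating) pairs into one weighted mean.

    ratings: iterable of (class_name, rating) tuples, with ratings on
    some fixed scale (say 1-5).
    """
    total = weight_sum = 0.0
    for cls, rating in ratings:
        w = WEIGHTS[cls]
        total += w * rating
        weight_sum += w
    return total / weight_sum if weight_sum else 0.0

# Example: two enthusiastic scholarly ratings outweigh two lukewarm
# public ones under this (arbitrary) weighting, yielding 4.0.
print(weighted_score([
    ("subject_expert", 5), ("scholarly_peer", 4),
    ("public", 2), ("public", 3),
]))
```

Even this toy version makes the trade-off visible: raise the public weight and the system becomes more democratic but easier to sway by sheer numbers; raise the expert weight and it reproduces the hierarchies post-publication review was meant to loosen.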

All that is just to say that developing new systems will not be simple or easy, but then few changes in scholarly practice are. However, new and emerging technologies can clearly improve the system of scholarly communications. A perfected system would:

  • Help scholars improve their work before publication;
  • Speed up publication and facilitate wide access;
  • Elevate exceptional scholarship above the noise;
  • Fairly assess the value of new work, both at the time of publication and over time; and
  • Facilitate the evaluation of a wider array of forms of scholarship, beyond the traditional forms of the print journal article and monograph.

The goals seem clear enough. The challenge lies in developing the necessary technologies, assessing the potential challenges, and mapping the way forward.

This post first appeared on AHA Today.

Robert B. Townsend

American Academy of Arts & Sciences