Is There Any Value in U.S. News Rankings of History Grad Programs?
The latest iteration of the U.S. News and World Report rankings of history graduate programs appeared yesterday, prompting fresh questions about their value for the discipline.
As a measure of the relative merits of any particular department, the rankings should be viewed with considerable skepticism—especially by students applying to or selecting one program over another. For the prospective doctoral student, the primary consideration should be the fit between their intended area of research and the faculty in the department, followed by the available levels of funding support. A difference of a few tenths of a point in a small survey should not sway your decision one way or the other.
The rankings are based on a poll of department chairs and directors of graduate studies at history doctoral programs last fall, which asked them to rate the programs at 151 schools on a five-point scale—from 5 (“outstanding”) to 1 (“marginal”). But the faculty who receive and fill out the survey often complain that the instrument is overwhelming. As one former chair reported, “when as chair I was asked to rank a whole string of departments it was evident to me how little I knew about what was going on in most of them, and how impressionistic my responses were.” That might explain why the response rate for the discipline this past year was quite low (just 19 percent).
While the rankings may be poor measures of the relative value of one program in comparison to another, they can serve a useful heuristic purpose. When broken into wide bands, they can demonstrate the differences—or lack thereof—between programs at the top of the disciplinary hierarchy and those at the bottom.
By breaking the rankings into quartiles, I have been able to show that programs at the top of the rankings tend to be older and larger than the programs in the bottom quartiles. They are also much more likely to hire PhD recipients from their own tier of programs. At the same time, I have used the rankings to show that the differences in student completion rates and even hiring into the four-year colleges and universities of the Directory are relatively small (though certainly not insignificant in today’s competitive environment).
In the end, you should take the rankings for what they are—the opinions of a small number of history faculty about the perceived standing of other departments. To the extent they provide a glimpse of the status hierarchy in the discipline, and some insights into the possible effects of that hierarchy, the rankings can be useful. To the extent they drive departments or schools to change their behavior to try to game the rankings, or students to select a specific program, they probably do a bit more harm than good to the ecology of the history discipline.
This post first appeared on AHA Today.