NRC Plans for New PhD Program Assessment
Robert B. Townsend, September 2002
The National Research Council (NRC) has undertaken a wide-ranging review of the processes of assessing and rating PhD programs in advance of their next decennial survey, slated for 2005.
While the NRC's surveys are given much more credence than the oft-reviled rankings by U.S. News and World Report, its last report, Research Doctorate Programs in the United States: Continuity and Change (1995), received a great deal of criticism for its use of similar reputational survey data. At the same time, its data on history PhD programs included a number of misleading quantitative measures of the productivity of history faculty, because it lumped the field in with the social sciences (for an analysis of the 1995 survey, see Robert Townsend, "NRC Study Offers Wealth of Data on PhD Programs," Perspectives, April 1996, 13–14).
In light of these and other complaints, the NRC formed a 14-member Committee to Examine the Methodology of Assessing Research Doctorate Programs to assess the strengths and weaknesses of the earlier study and to offer recommendations on new standards of assessment.
The committee held its first meeting on April 15, 2002, and heard from representatives of a number of interested societies and organizations, including the AHA, the Modern Language Association, and the Consortium of Social Science Associations (for the text of the testimony offered on behalf of the AHA, see page 18). The members of the committee pressed the representatives from each organization to offer a better method for assessing the effectiveness and quality of PhD programs in their respective fields.
The chair of the committee, Jeremiah Ostriker (Princeton and Cambridge Universities), was forthright about the concerns the committee would try to address, observing at the meeting that the rankings often affect the behavior of particular departments and institutions, though not always in positive ways. According to Charlotte Kuh, staff director of the project, the committee is seeking to "encourage quality improvement and accountability while deemphasizing the 'horse race' aspects of the study."
While the discussion and comment at the meeting were all quite preliminary, a couple of key themes emerged: that this study is an essential means of assessing doctoral programs both within and across disciplines, and that the issue of outcomes (particularly the ability of programs to meet their own goals) should be considered an important measure in the next study.
The committee also spent a good deal of time exploring and eliciting comment about shifts in the disciplines, the emergence of new disciplines over the past 10 years, and the problem of assessing emerging cross-disciplinary fields. A number of speakers also expressed concern that the assessment reflected the norms and qualities of the past, and might not be a reliable gauge of the present or future state of a particular program.
The committee is expected to publish its recommendations later this year or early next, so that data collection can be done in the 2003–04 academic year (if the committee recommends proceeding with a study).