Report on AAHE Assessment Forum and Applications for History
In early April 1992, I was invited to represent the AHA and OAH as one of ten observers at the seventh annual conference on assessment sponsored by the American Association for Higher Education in Miami. The intent was to promote discipline-specific assessment. Each participant was asked to read a packet of recent articles and printed reports, attend individual presentations and workshops, and meet four times in common with an assessment specialist for discussion. Following the conference we were each to prepare a four-part report for our colleagues describing current practices and relating them to our particular discipline; explaining how assessment might help improve teaching and learning, and redefine scholarship; offering recommendations to our professional societies regarding their own involvement; and indicating ways that AAHE might work more effectively with the learned societies. My thoughts combine the intensive conference preparation and experience with several years of reflection about institutional and professional assessment in history.
When James Madison University began to study assessment, I telephoned a number of colleagues seeking their collective advice. Most confessed to minimal activity—usually a senior test or a requirement that all majors take the GRE—enough to satisfy zealous administrators or statewide mandates. Little thought went into the process and little was anticipated in the way of helpful results. Assessment was yet another trend to be endured until supplanted by the next panacea.
What I witnessed at the AAHE conference, however, provides evidence that assessment is not a passing fad and can, if properly thought through and implemented, benefit history programs. While current campus practice still seems to emphasize measuring student academic achievement through the use of norm-referenced standardized tests, more and more sessions advocated multifaceted assessment. In his opening address, Theodore Marchese, AAHE vice president, defined assessment as the systematic gathering, interpreting, and using of information about student learning for purposes of improvement and indicated a preference for a variety of approaches over extended periods of time. Subsequent speakers stressed that faculty are central to any successful program. They must initiate, refine, and control assessment, and use the results to enhance programs, instruction, and student learning. To reach these goals, however, faculty must first acknowledge assessment's potential benefits.
Like our fellow humanists, historians have seldom been assessment pioneers or enthusiasts. Caryn McTighe Musil, senior fellow, Association of American Colleges, captured some of our objections when she observed that we are alienated by the language, claims of precision, and generally uninteresting questions associated with assessment. Terms such as "performance standards," "reliability," and "variability" are mere jargon, and such language seems to flatten reality, ignore ambiguity, and demand simplistic, goal-oriented models. Faculty see assessment as something done to us, an external challenge calling into question our academic integrity. To counter these legitimate concerns, Musil urged scholars to take charge of assessment, translate its awkward terminology into their own language, and focus on questions we wish to explore using the methods of our particular disciplines. Her strategy seems especially sound for historians. It encourages us to examine systematically what we value about how and what we teach and how our students learn.
Since research and writing are fundamental to any undergraduate major, student portfolios have great potential for history assessment. Containing examples of student work prepared at different stages of a program, portfolios illustrate that learning is cumulative, not discrete. As part of its senior assessment, Kenyon College requires all students to place their three best papers in a portfolio. One paper is then completely rethought and revised. For the remaining two, the students are asked to describe how they would recast each and to explain their individual progression as historians. This assessment compares the personal, the analytical, and the written, and the students themselves are responsible for preparing their portfolios for evaluation. Other types of historical writings might well be substituted and different questions posed if this method were adopted by another institution. Students could be asked to rewrite a paper using a specific historical interpretation or a blend of theories and perspectives. Portfolios allow students to see the major as a coherent whole.
Capstone seminars are a second variety of assessment that underscores student research and writing over time. In such seminars students also pull together work done throughout the major. At the University of Colorado, history majors write an extended intellectual memoir in which they reflect upon all their courses, including their own expectations in each, examining how they have matured and who has been instrumental in their intellectual development. The memoir is more introspective and comprehensive than the exit interviews or questionnaires used at many institutions because it calls upon students to review a complete program and to present their thoughts in a finished format. Students may also invite the faculty who have helped guide them to attend the oral presentation of their memoirs. Like portfolios, capstones offer an opportunity for peer evaluation as part of assessment. Students can be paired or put into small groups to provide preliminary evaluations. These activities may then become part of an assessment of their own evolution as historians.

A junior qualifying examination to measure what students know and can apply to the major has been successfully tested at Reed College. The results assist faculty there in planning senior schedules with an eye to filling gaps in student learning. It is preferable to an outcomes examination administered just before graduation because it gives individual students time to overcome deficiencies while they are still undergraduates, rather than merely documenting problems that can be addressed only for future students. Nearly three-fourths of all assessment tests, whether given early in the major or late, are locally developed because of the idiosyncrasies of individual departments and programs. Since assessment results are designed to assist departments in improving their programs, such tests are infinitely preferable to standardized national examinations.
These assessment activities, and others like them detailed at the conference, offer departments valuable insights about their strengths and limitations. It is possible to learn what is getting across to students and what is not, what skills they possess and which need to be improved. Can our students use library technology efficiently to locate sources; analyze and interpret evidence; organize their work; and write effectively? Whatever combination is selected by faculty, assessment is a significant tool to enhance program development as well as improve individual teaching.
One of the attractive aspects of the programs at Kenyon, Colorado, and Reed is the relationship each establishes between what the students put into their major program and the anticipated result. Unfortunately, too many assessment programs concentrate exclusively on student outcomes; that is, what the students are supposed to have learned. It is at least as significant to know what students put into their studies. How long do they study? With what intensity? How well do they retain material? How well do they mature as they move through a program of study?
Though there is much that individual departments can do to assess student learning, the AHA should augment its substantive work in this area to promote a richer dialogue within the discipline. Its Task Force in the Association of American Colleges' Project on Liberal Learning, Study in Depth, and the Arts and Science Major, leading to Liberal Learning and the History Major (1990) and subsequent publications, provides an outstanding model of the essential elements for the undergraduate history degree program. Full debate of its guidelines and suggestions, or of how to assess them, is not yet reflected in either the program of the annual meeting or AHA publications. The organization should acknowledge and encourage serious scholarship underway by regularly including sessions, panels, or workshops, and by reporting on programs and their assessment in Perspectives, in printed pamphlets, or online. Many of our member departments have competed successfully for national and local grants; however, I know of no network for sharing the results with the profession at large. A registry of recent grant recipients, together with summaries of results, would give individual departments the names and affiliations of contacts and help avoid duplication of effort. Departments should be asked to submit information about assessment grants to the AHA annually to facilitate preparing such a registry. The same mechanism could also be used to announce upcoming workshops.
Because of its experience in assessment, AAHE is a valuable resource for the national organizations. It can provide considerable expertise and guidance. At the same time, it should work with the professional associations to sponsor workshops and conference sessions involving new historians and recognizing some of the solid work already in progress. AAHE should continue and expand the representation of the learned societies in planning and executing its annual programs. Our sessions were informative and enlightening, but rather one-sided. Although the case for assessment is compelling, in the interest of objectivity, the skeptics and critics must be included at the table and their objections heard and debated.
In closing, assessment has come a long way and appears to be a permanent part of academic life. Historians should bring their collective wisdom to bear on this issue on individual campuses and in the profession at large. We must also work with our colleagues beyond the discipline to guarantee the quality of assessment and ensure that it honestly reflects what we are teaching and what we trust our students are learning. If we fail, others will assess for us in accordance with their agenda, and our programs and students will suffer the consequences.
—Michael J. Galgano is the head of the Department of History at James Madison University.