Forum on Assessment

Resistance is Futile: One Approach to Program Assessment in History

James I. Matray | Mar 1, 2009

Assessment has been a central preoccupation of the administration at California State University at Chico since the academic year 2004–05. Initially, chairs in every department on campus were under tremendous pressure to develop a comprehensive process for program assessment (that is, procedures to measure a department’s success in training its majors to master standards of knowledge and performance); the focus now is on providing evidence of effective implementation. From 2005 to 2007, chairs were expected to attend a series of meetings and workshops educating them about policies and procedures related to assessment, including what I considered valuable and enlightening advice from a knowledgeable expert in the field. Although upper administration denied any connection with accreditation, the upcoming Western Association of Schools and Colleges (WASC) first-phase review (conducted in 2006–07) always seemed to become a prominent part of discussions about assessment procedures across the campus. Before May 2007, the All University Responsibility for Assessment (AURA) Committee supervised and monitored compliance with directives for developing policies and procedures for assessment at the department level, requiring submission of a brief annual report recording progress toward specified benchmarks. This has paid dividends for Chico State, because it is now clear that assessment will remain at the heart of the WASC accreditation process.

Before I completed my final term as chair in May 2008, we worked hard to make sure that our department met university expectations. Any resistance would have had a negative impact on the allocation of resources for new hires to replace a series of expected faculty retirements. Beginning in the fall semester of 2005, I was able to persuade my faculty to approve, in accordance with university directives, a mission statement, learning goals, and student learning outcomes (SLOs) as the foundation for assessment of our success in training history majors. These efforts began in our Curriculum Committee, which developed draft proposals based on the guidance I provided from my participation in the training sessions described above. My colleagues then permitted me to draft a specific assessment plan, which they approved after revisions. All of these policies and procedures are posted on the Chico State history department web site at www.csuchico.edu/hist.

To minimize work and maximize faculty cooperation, we developed a strategy of “embedded” assessment in existing courses, rather than opting for portfolios or exit exams. Implementation began in the spring semester of 2006, when a new three-person assessment committee evaluated the work of history majors completing the senior seminar, our capstone class, during their final or penultimate semester before graduation. In the spring semester of 2007, AURA judged my department as “exceeding expectations” in developing and implementing a program for assessing the major. Following the dissolution of AURA, oversight shifted to the colleges in the fall. Each department has since selected a faculty member who serves as assessment coordinator for its program(s), receiving a one-course reduction each semester as compensation. Emphasis has now shifted to preparations for WASC’s second-phase review in spring 2009, when the focus will be on how each department has used the results of its assessment process to improve its program(s), as well as on progress toward implementing a parallel system of assessment for the general education program and all graduate programs.

Some faculty members in my department were reluctant to participate in the assessment process. They contended that they already practice assessment—they give grades. I tried to weaken their resistance by explaining that the usual grading is “summative” (comprehensive and final) assessment rather than the “formative” (capable of alteration through growth and development) process we must develop. Formative assessment involves instructors testing and evaluating student performance during the course of the semester and altering teaching techniques to improve learning. Summative assessment measures the knowledge students have gained after completing the course.

Such faculty resistance to assessment may arise in other departments as well. How can one mitigate it? Three useful strategies come to mind. First, stress that a department that does not cooperate in meeting university expectations for assessment will surely suffer a loss of resources. Second, use various incentives to recruit faculty assistance, but be prepared to do much of the work yourself if needed. Third, point to the two main reasons that make assessment a wise and worthwhile practice: first, determining what weaknesses our graduating majors share as historians would provide guidance for changes in curriculum and pedagogy that could move us closer to achieving academic excellence in our program; and second, faculty participation in the joint enterprise of assessment would lead to a sharing of ideas about curriculum and pedagogy that could result in a more coherent vision of how to structure the major so that graduates would have the knowledge and academic abilities expected of trained historians.

Nearly all of my colleagues accused me of being naive. Their cynicism about assessment, like that of many other faculty members, derives in large part from the way most of us who have been around for a while were first introduced to it in the 1990s. Back then, an effort was made to compel the faculty to accept it. Worse, it was clear that outsiders intended to use assessment data to judge how well instructors and departments were doing their jobs, usually to satisfy state legislatures that saw only extravagance, waste, and arrogance in higher education. But at Chico State at least, this new iteration of assessment has moved gradually and has sought to empower departments to develop procedures suited to their own disciplines and individual circumstances. AURA’s main concern (faculty always found the acronym hilarious, as did I) was that each department had a credible assessment system and was implementing it, leaving it to the faculty to examine and learn from the results. Thus far, assessment at Chico State has proceeded, as AURA intended, from the bottom up rather than the top down.

In conclusion, my advice to chairs facing this challenge is to try to persuade your faculty to cooperate in developing and implementing a system of assessment that works for your department (both pragmatically and idealistically) and satisfies the expectations of higher administration. It is very important for faculty to understand that doing so is the best way to short-circuit efforts to extend No Child Left Behind to higher education; reading the Spellings Report makes clear what that would mean. Refusal to participate, in my judgment, would be the worst course of action for any academic department at this point. And since resistance is futile in any event, faculty should embrace examining the results of their assessment process to learn meaningful ways to improve their programs and produce better-trained history majors.

James Matray is professor of history at California State University at Chico, where, until recently, he chaired the history department.



