What should every undergraduate major know? Is there a core of information, skills, and values we expect our students to have mastered before graduation? As teachers of history, we have grappled with these questions on an individual level. Each of us has established guidelines to judge achievement in our classes and award grades. Similarly, we have—more or less—agreed upon criteria for evaluating the promise of particular students when preparing letters of recommendation to graduate or professional schools. We tick the appropriate boxes, rating competence in the subject specialty, overall intelligence, capacity to do independent research, writing proficiency, and the likelihood of success in the field, then append more detailed, descriptive narratives extolling these qualities. In both instances, we have established workable systems that reflect our own highest professional standards.
The history department at James Madison University began to consider the issue of assessment of student achievement in the same spirit last year. What we have learned and how we have weathered the change may assist other departments as they confront accountability-minded legislatures and governing bodies, for I believe systematic assessment of academic competency will become as permanent a fixture of higher education as grades and classrooms. Before addressing how the department has responded to the challenge, some background about how assessment first came to our university is in order. Senate Joint Resolution 125, passed by the 1985 Virginia General Assembly, turned limited discussions into a sustained commitment across the state. Reacting to a spate of national and regional studies, the State Council of Higher Education was instructed to explore methods for measuring learning “to assure the citizens of Virginia the continuing high quality of higher education in the Commonwealth.” The resulting document, completed within a year and presented to the General Assembly, led to a mandate requiring all state-supported colleges and universities to cooperate with the Council in setting guidelines for assessment programs and reporting the results to the public through biennial revisions of The Virginia Plan for Higher Education.
Anticipating the General Assembly’s action, JMU had already embarked upon an extended five-year review of its curriculum and programs, including assessment. Committees proliferated like contemporary bureaucracies and took hold with the tenacity of kudzu, enveloping every dimension of university life. Fully one-fourth of the faculty was involved in some study, and meticulous reports were prepared, revised, accepted, digested, and are presently being implemented. The faculty committee responsible for evaluation and assessment piloted four national models in separate programs to determine their suitability to our campus. The models included: Discrepancy Evaluation, which allows the faculty to set its own standards to measure student achievement and determine the gap between performance and objectives; Value Added assessment, based on the Northeast Missouri State University program, which permits external comparisons between an institution and its peers and isolates the school’s influence on student learning; the Alverno College model, which emphasizes use of diagnostic tests to measure student development, guide course selection, and assess problem-solving tasks; and the Tennessee Performance Funding Program, which employs standardized and locally spawned tests to determine achievement and award state resources.
The committee’s final report recommended a value-added or outcomes approach, with the focus on student growth over time rather than performance on any single, absolute scale. To measure what the university has contributed to a student’s education and to report the results, the committee further urged creation of an Office of Student Assessment. As implementation has unfolded, the assessment office administers entry-level performance tests to incoming freshmen during orientation. The data collected supplement SAT scores and other materials regularly used for admission. They provide us with a detailed profile of all matriculants. At the conclusion of the sophomore and senior years respectively, students are retested by Student Assessment to measure how they have advanced in the Liberal Studies core and the common course objectives (writing, critical thinking, and problem solving), as well as their moral, social, and emotional maturity. Standard tests like COMP and the Pace Instrument are blended with others developed on campus and given on designated assessment days to selected samples of students. Since these tests measure the presumed common content for all baccalaureate programs, individual departmental faculty have had limited input into their creation or administration. By contrast, the more specialized assessment instruments associated with majors or academic disciplines have been designed by the faculty in the programs concerned.
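To make the arithmetic behind that recommendation concrete (a simplified sketch of our own, not the actual procedure of the Office of Student Assessment, which must also account for sampling, attrition, and differences among test forms), the value-added approach credits the institution with the gain between matched testings rather than with either score alone:

\[
\text{value added} \approx \bar{x}_{\text{senior}} - \bar{x}_{\text{freshman}}
\]

where \( \bar{x} \) denotes the mean score of the same cohort on comparable instruments at entry and at exit. On such a measure, a program that admits modestly prepared students and graduates accomplished ones shows to advantage, while one that merely enrolls the already able does not.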
Following formal application last April, eight majors were chosen to participate in the second round of assessment activities for the academic year 1987–88. The announcement that ours was among them initially drew unbridled silence from my colleagues. Most, myself included, were skeptical. As autonomous faculty, with strong teaching credentials compiled over many years and a viable academic program, we were reluctant to invest time to evaluate a major we all knew to be sound. An OAH/FIPSE team had recently studied our history major, finding great strength, vitality, and administrative support. Respected campus colleagues praised us as a “strong” department with an “excellent” program of undergraduate study. Enrollment was high, the number of majors increasing steadily, and our graduates entering solid advanced programs or rewarding careers. Majors neither drove taxis nor waited tables. What could assessment possibly do for us? The OAH/FIPSE report had already outlined some of our limitations, and we were at work rectifying them. Furthermore, how would the information collected be used by our administration and the State Council? We were not paranoid, just unconvinced.
For Frank Gerome, the department’s assessment coordinator, selection meant a summer training workshop; reading and assimilating educational reports far less enticing than the pleasures of historical content and scholarly research; orientation sessions with coordinators from previously assessed departments and staff from the Student Assessment Office; and finally, a year’s reduced teaching load to conduct the study and prepare a final report. For the rest of us, it meant a plethora of extra meetings and exposure to a new, seemingly Orwellian lexicon.
Once these first impressions and apprehensions settled, the department opened serious, reflective discussions in September, centering on the nature of the major and the most reliable ways to evaluate progress in the curriculum. From the outset, the tone was positive because departments, which were best acquainted with their own curricula, were wisely given complete freedom to set their own goals and objectives, then create instruments (tests) to measure how effectively they were actually meeting them. This critical point convinced many that assessment might strengthen the quality of the major and was potentially more than a hollow exercise to satisfy the administration, State Council, General Assembly, and Secretary Bennett.
Early meetings dealt with objectives and were marked by a civility characteristic of the discipline itself and of those introspective scholars generally attracted to it. There were neither the pitched battles nor the blood feuds we had been told occurred elsewhere on campus. Some faculty participated because of loyalty to colleagues and the subject, while others adopted postures of amused boredom. Nonetheless, all were good-natured and gave freely of their ideas and energies. Frank Gerome prepared an agenda with a workable timetable and skillfully deflected our best efforts to digress. The talks brought out several common points about teaching and highlighted our diversity of methods, approaches, and priorities regarding the major. They offered a good opportunity for free debate about the program and its purposes.
After many hallway and office encounters, three formal meetings, and consideration of a working draft, we reached agreement on eight objectives. Divided into three categories, they spoke to issues of substance, values, and skills. In our report, approved by the Director of Student Assessment, T. Dary Erwin, on November 16, we outlined the following goals for all majors: a knowledge of American and world history and geography; a knowledge of historical thinking, interpretations, and processes; an awareness of enduring values and ethics; a love for reading; an ability to do historical research; an ability to think critically with historical perspective and insight; an ability to communicate effectively, both verbally and in writing; and a proficiency in computer use. Because these objectives will be rated externally after our own review, each will have to be refined and fleshed out more completely. We will have to define such terms as theme, perspective, and use of evidence.
Our primary objective emphasizes the absolute necessity for all history students to be broadly and deeply educated in the “stuff” of the discipline. Majors should know the significant issues, questions, problems, events, and the conflicting forces which shaped the American people and their relations within the world community. In addition, they need to comprehend how the ebb and flow of world history has formed and shaped their lives. Owing to the flexibility of our curriculum, factual knowledge will necessarily differ from student to student.
The second acknowledges that history is far more dynamic than the sum of past facts. Students must master basic historiography and know the principal contributions of scholars in the subject and theoretical fields. Our third objective stresses the significance of values and ethics for informed citizenship and intelligent political choices in a democracy. Students should be able to identify, understand, and appreciate the principles and ideals of a democratic society through a comparative study of human experience in space and time.
A love of reading may seem somewhat esoteric as a departmental objective; however, we think it a critical value to nurture among majors. We seek, without pretension, to intensify a sensitivity to the powerful vitality of intelligent research expressed with aesthetic care. Perhaps our collective finger, like the Dutch boy’s, merely hopes to plug the dike, but we were unanimous in our decision to include reading as an objective.
The remaining objectives recognize significant skills, some more traditional than others. To be practicing historians, undergraduates should obviously know how to locate and evaluate resources, organize their findings, and present them in a coherent manner. Finally, to facilitate research and writing, majors ought to be increasingly at home at keyboard and monitor. Most of our objectives are fairly standard, and, while they might be expressed or underscored differently, they reflect common perceptions about history. They would ready any major for a future in or out of the discipline.
Having worked through and recorded objectives, the department’s next task, starting last November, was to construct methods of assessment, especially for graduating seniors. Guidelines from the Director of Student Assessment suggested that we review existing, externally created instruments and, if we found them unsatisfactory, develop our own. We were urged to be cognizant of reliability and validity (fortunately handled by his office) and to make use of peer review. We rejected the GRE Subject Test as inappropriate because of both its narrowness and its design. If used extensively as an assessment instrument for majors throughout the nation, the test will soon lose its legitimacy as an indicator of probable success in graduate school. The cost of the examination was another deterrent. Also, because our major is diverse, with students taking only three common courses (two semesters of American history and a seminar in methodology), no standardized test could fairly evaluate each senior. Therefore, to parallel our eight objectives, better reflect the freedom of choice inherent in our program, and preserve symmetry, the department opted for a combination of eight separate methods. Taken collectively, they should provide the necessary information to assess our students and improve the program.
One of the most promising methods, and the easiest to administer, incorporates specific questions geared to our objectives into regularly scheduled course tests throughout the semester. The faculty will compile a fixed body of questions which can be used in multiple classes. Some will test fundamental factual knowledge, and others causal relationships, interpretations, or computer ability. Skills like research competence, critical thinking, and writing will be assessed first in the methodology seminar, normally taken at the end of the second year, and once again in the senior year to measure growth. Mixing these two approaches should provide steady classroom reinforcement of crucial objectives and frequent feedback to faculty.
To supplement common questions and skills, we plan to implement a comprehensive senior examination. Taken in the final semester, this keystone of our assessment effort will include essay questions from all courses and fields of history. There will be sufficient choice to allow each major, no matter what course of study was pursued, to demonstrate proficiency in selected fields. Sample essays are presently being drafted by each of us. The department prefers this format for three reasons. First, our normal testing across the curriculum relies primarily upon essays; therefore, it seems logical that the assessment format should mirror current custom. Second, the knowledge, values, processes, and skills we anticipate our students should acquire are best measured in essays. Third, since the discipline is vast and varied, it is impossible to devise a single, objective examination to test what we want to learn about our students and program. After the questions are agreed upon by the faculty, they will be sent to three peer reviewers at other institutions for their reaction. The types of common questions mentioned earlier will also be evaluated externally to assist in validation.
Assessment methods will further incorporate information gathered about our majors past and present. I will conduct exit interviews with each graduating senior to explore perceptions and attitudes (including values and love of reading). We will continue to survey those who supervise our student interns and begin to collect systematic information from those who employ our graduates. These data will guide the department in those aspects of the program related to career development. We plan to survey graduate schools which have admitted our students to learn more about how our colleagues regard academic preparation at James Madison. To complete the picture, we will survey alumni in an attempt to better understand how effectively we prepared them.
By the end of the spring term, Frank Gerome will have amassed an archive of raw data, made sense of it, and reported our findings to the university. The least meaningful part of assessment for the department will conclude at that moment, and our significant work will commence as we apply the results to improve the history program. Even in its preliminary stages, the assessment process has helped us raise some important questions. Should our program have greater structure? That is, would more carefully defined area requirements ensure a broader understanding of the world? Within the contexts of fiscal and personnel restraints, what additional opportunities can be set up to promote greater undergraduate research? How realistic are faculty expectations in survey and upper-division courses? What resources must the university provide to help the department better serve student needs?
In conclusion, the assessment experience, although it has involved a great commitment of time and effort, has been more than a quixotic tilt. Departmental autonomy to establish objectives and measurements designed for self-improvement is the key to effective and enduring assessment.