Publication Date

March 1, 1988

Perspectives Section

Viewpoints

AHA Topic

The History Major, Undergraduate Education

What should every undergraduate major know? Is there a core of information, skills, and values we expect our students to have mastered before graduation? As teachers of history, we have grappled with these questions on an individual level. Each of us has established guidelines to judge achievement in our classes and award grades. Similarly, we have—more or less—agreed upon criteria for evaluating the promise of particular students when preparing letters of recommendation to graduate or professional schools. We tick the appropriate boxes, rating competence in the subject specialty, overall intelligence, capacity to do independent research, writing proficiency, and the likelihood for success in the field, then append more detailed, descriptive narratives extolling these qualities. In both instances, we have established workable systems which reflect our own highest professional standards.

The history department at James Madison University began to consider the issue of assessment of student achievement in the same spirit last year. What we have learned and how we have weathered the change may assist other departments as they confront accountability-minded legislatures and governing bodies, for I believe systematic assessment of academic competency will be as permanent a feature of higher education as grades and classrooms. Before addressing how the department has responded to the challenge, some background about how assessment first came to our university is in order. Senate Joint Resolution 125, passed by the 1985 Virginia General Assembly, turned limited discussions into a sustained commitment across the state. Reacting to a spate of national and regional studies, the State Council of Higher Education was instructed to explore methods for measuring learning “to assure the citizens of Virginia the continuing high quality of higher education in the Commonwealth.” The resulting document, completed within a year and presented to the General Assembly, led to a mandate requiring all state-supported colleges and universities to cooperate with the Council in setting guidelines for assessment programs and reporting the results to the public through biennial revisions of The Virginia Plan for Higher Education.

Anticipating the General Assembly’s action, JMU had already embarked upon an extended five-year review of its curriculum and programs, including assessment. Committees proliferated like contemporary bureaucracies and took hold with the tenacity of kudzu, enveloping every dimension of university life. Fully one-fourth of the faculty was involved in some study, and meticulous reports were prepared, revised, accepted, digested, and are presently being implemented. The faculty committee responsible for evaluation and assessment piloted four national models in separate programs to determine their suitability to our campus. The models included: Discrepancy Evaluation, which allows the faculty to set its own standards to measure student achievement and determine the gap between performance and objectives; Value-Added assessment, based on the Northeast Missouri State University program, which permits external comparisons between an institution and its peers and isolates the school’s influence on student learning; the Alverno College model, which emphasizes use of diagnostic tests to measure student development, guide course selection, and assess problem-solving tasks; and the Tennessee Performance Funding Program, which employs standardized and locally spawned tests to determine achievement and award state resources.

The committee’s final report recommended a value-added or outcomes approach with focus on student growth over time rather than performance on any single, absolute scale. To measure what the university has contributed to a student’s education and to report the results, the committee further urged creation of an Office of Student Assessment. As implementation has unfolded, the assessment office administers entry-level performance tests to incoming freshmen during orientation. The data collected supplement SAT scores and other materials regularly used for admission. They provide us with a detailed profile of all matriculants. At the conclusion of the sophomore and senior years respectively, students are retested by Student Assessment to measure how they have advanced in the Liberal Studies core and the common course objectives (writing, critical thinking, and problem solving), as well as their moral, social, and emotional maturity. Standard tests like COMP and the Pace Instrument are blended with others developed on campus and given on designated assessment days to selected samples of students. Since these tests measure the presumed common content for all baccalaureate programs, individual departmental faculty have had limited input into their creation or administration. By contrast, the more specialized assessment instruments associated with majors or academic disciplines have been designed by the faculty in the programs concerned.

Following formal application last April, eight majors were chosen to participate in the second round of assessment activities for the 1987–88 academic year. The announcement that ours was one initially drew unbridled silence from my colleagues. Most, myself included, were skeptical. As autonomous faculty, with strong teaching credentials compiled over many years and a viable academic program, we were reluctant to invest time to evaluate a major we all knew to be sound. An OAH/FIPSE team had recently studied our history major, finding great strength, vitality, and administrative support. Respected campus colleagues praised us as a “strong” department with an “excellent” program of undergraduate study. Enrollment was high, the number of majors was increasing steadily, and our graduates were entering solid advanced programs or rewarding careers. Majors neither drove taxis nor waited tables. What could assessment possibly do for us? The OAH/FIPSE report had already outlined some of our limitations, and we were at work rectifying them. Furthermore, how would the information collected be used by our administration and the State Council? We were not paranoid, just unconvinced.

For Frank Gerome, the department’s assessment coordinator, selection meant a summer training workshop; reading and assimilating educational reports far less enticing than the pleasures of content history and scholarly research; orientation sessions with coordinators from previously assessed departments and staff from the Student Assessment Office; and finally, a year’s reduced teaching load to conduct the study and prepare a final report. For the rest of us, it meant a plethora of extra meetings and exposure to a new, seemingly Orwellian lexicon.

Once these first impressions and apprehensions settled, the department opened serious, reflective discussions in September centering on the nature of the major and the most reliable ways to evaluate progress in the curriculum. From the outset, the tone was positive because departments, which were best acquainted with their own curricula, were wisely given complete freedom to set their own goals and objectives, then create instruments (tests) to measure how effectively they were actually meeting them. This critical point convinced many that assessment might strengthen the quality of the major and was potentially more than a hollow exercise to satisfy the administration, State Council, General Assembly, and Secretary Bennett.

Early meetings dealt with objectives and were marked by a civility characteristic of the discipline itself and of those introspective scholars generally attracted to it. There were neither the pitched battles nor the blood feuds we had been told occurred elsewhere on campus. Some faculty participated because of loyalty to colleagues and the subject, while others adopted postures of amused boredom. Nonetheless, all were good-natured and gave freely of their ideas and energies. Frank Gerome prepared an agenda with a workable timetable and skillfully defused our best efforts to digress. The talks brought out several common points about teaching and highlighted our diversity of methods, approaches, and priorities regarding the major. They offered a good opportunity for free debate about the program and its purposes.

After many hallway and office encounters, three formal meetings, and consideration of a working draft, we reached agreement on eight objectives. Divided into three categories, they spoke to issues of substance, values, and skills. In our report, approved by the Director of Student Assessment, T. Dary Erwin, on November 16, we outlined the following goals for all majors: a knowledge of American and world history and geography; a knowledge of historical thinking, interpretations, and processes; an awareness of enduring values and ethics; a love for reading; an ability to do historical research; an ability to think critically with historical perspective and insight; an ability to communicate effectively, both verbally and in writing; and a proficiency in computer use. Because these objectives will be rated externally after our own review, each will have to be refined and fleshed out more completely. We will have to define such terms as theme, perspective, and use of evidence.

Our primary objective emphasizes the absolute necessity for all history students to be broadly and deeply educated in the “stuff” of the discipline. Majors should know the significant issues, questions, problems, events, and the conflicting forces which shaped the American people and their relations within the world community. In addition, they need to comprehend how the ebb and flow of world history has formed and shaped their lives. Owing to the flexibility of our curriculum, factual knowledge will necessarily differ from student to student.

The second acknowledges that history is far more dynamic than the sum of past facts. Students must master basic historiography and know the principal contributions of scholars in the subject and theoretical fields. Our third objective stresses the significance of values and ethics for informed citizenship and intelligent political choices in a democracy. Students should be able to identify, understand, and appreciate the principles and ideals of a democratic society through a comparative study of human experience in space and time.

A love of reading may seem somewhat esoteric as a departmental objective; however, we think it a critical value to nurture among majors. We seek, without pretension, to intensify a sensitivity to the powerful vitality of intelligent research expressed with aesthetic care. Perhaps our collective finger, like the Dutch boy’s, merely hopes to plug the dike, but we were unanimous in our decision to include reading as an objective.

The remaining objectives recognize significant skills, some more traditional than others. To be practicing historians, undergraduates should obviously know how to locate and evaluate resources, organize their findings, and present them in a coherent manner. Finally, to facilitate research and writing, majors ought to be increasingly at home at keyboard and monitor. Most of our objectives are fairly standard, and, while they might be expressed or underscored differently, they reflect common perceptions about history. They would ready any major for a future in or out of the discipline.

Having worked through and recorded objectives, the department’s next task, starting last November, was to construct methods of assessment, especially for graduating seniors. Guidelines from the Director of Student Assessment suggested that we review existing, externally created instruments and, if we found them unsatisfactory, develop our own. We were urged to be cognizant of reliability and validity (fortunately handled by his office) and to make use of peer review. We rejected the GRE Subject Test as inappropriate both because of its narrowness and its design. If used extensively as an assessment instrument for majors throughout the nation, the test will soon lose its legitimacy as an indicator of probable success in graduate school. The cost of the examination was another deterrent. Also, because our major is diverse, with students taking only three common courses (two semesters of American history and a seminar in methodology), no standardized test could fairly evaluate each senior. Therefore, to parallel our eight objectives, better reflect the freedom of choice inherent in our program, and preserve symmetry, the department opted for a combination of eight separate methods. Taken collectively, they should provide the necessary information to assess our students and improve the program.

One of the most promising and easiest to administer incorporates specific questions geared to our objectives into regularly scheduled course tests throughout the semester. The faculty will compile a fixed body of questions which can be used in multiple classes. Some will test fundamental factual knowledge and others causal relationships, interpretations, or computer ability. Skills like research competence, critical thinking, and writing will be assessed first in the methodology seminar, normally taken at the end of the second year, and once again in the senior year to measure growth. Mixing these two approaches should provide steady classroom reinforcement of crucial objectives and frequent feedback to faculty.

To supplement common questions and skills, we plan to implement a comprehensive senior examination. Taken in the final semester, this keystone of our assessment effort will include essay questions from all courses and fields of history. There will be sufficient choice to allow each major, no matter what course of study is pursued, to demonstrate proficiency in selected fields. Sample essays are presently being drafted by each of us. The department prefers this format for three reasons. First, our normal testing across the curriculum relies primarily upon essays; therefore, it seems logical that the assessment format should mirror current custom. Second, the knowledge, values, processes, and skills we anticipate our students should acquire are best measured in essays. Third, since the discipline is vast and varied, it is impossible to devise a single, objective examination to test what we want to learn about our students and program. After the questions are agreed upon by the faculty, they will be sent to three peer reviewers at other institutions for their reaction. The types of common questions mentioned earlier will also be evaluated externally to assist in validation.

Assessment methods will further incorporate information gathered about our majors past and present. I will conduct exit interviews with each graduating senior to explore perceptions and attitudes (including values and love of reading). We will continue to survey those who supervise our student interns and begin to collect systematic information from those who employ our graduates. This data will guide the department in those aspects of the program related to career development. We plan to survey graduate schools which have admitted our students to learn more about how our colleagues regard academic preparation at James Madison. To complete the picture, we will survey alumni in an attempt to better understand how effectively we prepared them.

By the end of the spring term, Frank Gerome will have amassed an archive of raw data, made sense of it, and reported our findings to the university. The least meaningful part of assessment for the department will conclude at that moment, and our significant work will commence as we apply the results to improve the history program. Even in its preliminary stages, the assessment process has helped us raise some important questions. Should our program have greater structure? That is, would more carefully defined area requirements insure a broader understanding of the world? Within the contexts of fiscal and personnel restraints, what additional opportunities can be set up to promote greater undergraduate research? How realistic are faculty expectations in survey and upper-division courses? What resources must the university provide to help the department better serve student needs?

In conclusion, the assessment experience, although it has involved a great commitment of time and effort, has been more than a quixotic tilt. Departmental autonomy to establish objectives and measurements designed for self-improvement is the key to effective and enduring assessment.

Michael Galgano

James Madison University