Forum on Assessment

Assessment at American University

Robert Griffith | Mar 1, 2009

For more than a decade, political leaders in both parties (and at both the state and national levels) have been pushing assessment and accountability in education. In the case of K–12, this has led to, among other things, the enactment of the No Child Left Behind Act of 2001. There has also been mounting pressure in higher education, again for more than a decade, and that pressure has increased in recent years, fueled by rising college costs and especially by the Bush administration. Indeed, Margaret Spellings, the secretary of education in that administration, even tried to supplant the assessment activities of the regional accreditation system with a national accreditation program. Regional accrediting agencies such as the Middle States Commission on Higher Education (which accredits my university) have been emphasizing “outcomes assessment” for more than a decade, an emphasis that will no doubt continue in the wake of pressures from both state and national governments.

In 2001–02, as American University was gearing up for a Middle States accreditation review, departments were asked to examine how they assess student learning and to come up with new and better ways to do this. The response in my department was unenthusiastic, to say the least (“Oh my god, some administrator has had an idea!”). Some said: “We already assess student learning through the grades we assign and through student evaluation of teaching. Why do we need to do anything else?”

The result, nevertheless, was extensive discussion of undergraduate education in our department, which included a number of department meetings as well as two professionally moderated focus groups among our graduating seniors. As a result, the department came up with five broad goals that sought to capture what we thought we were trying to achieve, including historical literacy, critical thinking, and research and writing skills. We later broke down each of these large goals into five or six more specific items, for example: “Understands how to locate and critically evaluate relevant scholarly books and articles,” or “Understands how to search various library databases.” In “eduspeak,” such lists are often referred to as “rubrics.”

More important, and I think this was the critical breakthrough for us: we decided that we would try to integrate assessment into what we were already doing and make it serve our core mission, rather than simply adding it on as an unwelcome bureaucratic imposition.

Fortunately, our department already had a two-semester “Major Seminar” in which every graduating student was required to design, research, and write an original senior thesis. This, we decided, should be the cornerstone of our effort at assessment.

We decided first to establish an end-of-year conference (prosaically entitled “History Day”) at which graduating students would present their theses. We asked faculty and graduate students to read and comment on the papers and (using our “rubric”) to score what the papers and presentations revealed about how well the department was meeting its five goals. We distributed a somewhat similar survey to the graduating seniors themselves, asking them to reflect on their entire experience as a history major and, based on that reflection, to rate how well the department had achieved its professed goals.

For the past seven years we have pursued this assessment effort, collecting student- and faculty-generated data on each graduating class.

What Have Been the Results?

To begin with, I would note that our surveys were not methodologically robust; they would not pass “Social Statistics 101.” The scores suggested some difference of opinion between students, who gave the program higher marks, and faculty, but both surveys reflected a “Lake Wobegon” effect: everything we did was well above average. If the goal of our assessment effort was to produce hard data on “outcomes,” then our effort was surely a failure. (A note: we supplemented data derived from these surveys with information drawn from our university’s annual survey of graduating seniors, as well as with data drawn from the National Survey of Student Engagement.)

But the numbers turned out to be mostly irrelevant. What the exercise did accomplish was to focus our collective attention much more intensively on the work of our undergraduates. We began to learn much more about both their achievements and failings and, as a consequence, to learn much more about the strengths and weaknesses of our program. In turn, this set in motion a whole series of changes, large and small, in the way we go about our work as teachers. For example, responding to focus-group comments about lack of community among our undergraduate majors, we initiated a fall luncheon for majors and prospective majors, revived our Phi Alpha Theta chapter, and began an annual “careers in history” night. Individual faculty began to incorporate the department’s goals into their syllabi and to create assignments designed to better prepare students for everything from understanding the historiography of a specific issue to navigating the library’s expanding list of databases. “History Day,” by the way, was an enormous success and has now become our department’s signature annual event.

Moving the intellectual work of our students (and faculty) into a more open and public arena helped us to assume more collective responsibility for our undergraduate majors, informing how we approached our lower- and upper-division courses, driving discussions of curriculum reform, and inspiring us to talk more freely and frequently about our teaching.

I do not want to claim too much for our program or to suggest that it is a model that can or should necessarily be pursued on campuses unlike my own. I will note that since initiating these changes, the number of undergraduate history majors at American University has more than tripled, and that other departments on campus have begun to introduce programs similar to our own.

I do believe that this suggests that assessment need not be a bureaucratic imposition by distant administrators and accrediting agencies; that it is possible to adapt it to, or even grow it out of, individual academic cultures; and that it can lead to very positive results. At least it has for us.

—Robert Griffith is professor of history and chair of the history department at American University. He has also served as a member of several Middle States Accreditation Teams.

