Noralee Frankel, AHA’s assistant director for teaching, writes: Whenever a question on learning outcomes is asked on the AHA’s listserv for department chairs, we always receive a vigorous response. In March 2008, Gabrielle Spiegel, then AHA president, wrote about the need to take assessment seriously in her Perspectives on History article, “A Triple ‘A’ Threat: Accountability, Assessment, Accreditation.” In March 2009, Perspectives on History published a forum on assessment. This article and the one by Marianne S. Wokeck on the Lumina Foundation’s Tuning Project are offered to help departments struggling with the challenges of assessing student outcomes.
It took me only a couple of semesters as a new university instructor to figure out that office hours were taken up largely with students asking variations of two common (and reasonable) questions: “What does it take to do well on class exercises?” and “What should I get out of the course as a whole?”
It took me only a couple more decades to figure out a way of answering those questions in a form that was straightforward, transparent, and related to larger disciplinary issues beyond a single class. The tools that colleagues and I developed in Utah State University’s History Department were “rubrics,” scoring guides that help clarify how instructors evaluate tasks within a course—and how those evaluations tie into the broader goals of the history major.
There is nothing remarkable about rubrics themselves. What instructor has not weighed the standards, criteria, and expectations that will guide the work within a course? Some might assume that students should simply enter a class with the understanding that they will be pushed to excel. Others may prefer to use lengthy comments written on exams and papers to explain how a student’s analysis might be refined, redirected, or thoroughly revamped. Still others might hope that a few sketchy remarks along with a grade will goad inquisitive students to stop by for office visits and fruitful discussions of their performance.
A rubric does not substitute for high standards, thoughtful comments, and engaged conversations; instead, it offers a way to prepare students for course exercises and guides the discussions they may choose to have about their work. The rubric explains in advance why a course exercise exists at all, stating the goals an instructor has established for the task, the criteria that will structure the effort, and the means of measuring a student’s performance. But in order to write up such statements, instructors have to be aware of their own objectives and intentions, thinking carefully and systematically about what they want their class to achieve. Course goals cannot “go without saying”; they must be articulated. A rubric aims to make the instructor’s purposes as transparent as the student’s responsibilities.
Some may try to develop rubrics on their own, as a way of laying out a broader set of academic principles to students, as a means of clarifying the conventions of a discipline, or (as in my own case) simply as a way of answering in advance the questions that would inevitably and repeatedly come up in office visits. Years ago, in an early set of “evaluation scales,” I outlined for students “what counted” on exams and term papers. The grade on an essay exam reflected the pertinence, accuracy, organization, and explanation offered in the student’s response. The evaluation of a term paper grew out of its informing thesis as well as the reasoning, organization, conclusion, substantiation, and mechanics of the argument.1 Perhaps the “scales” offered students some guidance, but they failed to define different levels of mastery in each category and pointed only indirectly to the expectations of the discipline rather than those of a single course.
Rather than revise the “evaluation scales” as an individual project for my own course assignments, I took the next step in a collective project with fellow faculty members. Beginning in the spring semester of 2009, the history department at Utah State University became part of a broader and more ambitious project of assessment on all nine campuses in the Utah System of Higher Education (USHE). The USHE received a grant from the Lumina Foundation for Education to work along with history departments in Indiana and Minnesota on “Tuning USA,” a project designed to see what American colleges and universities might learn from the Bologna Process of academic reform in Europe.2
The Lumina project addressed rubrics—and all of university assessment—in a novel and promising way modeled on the Bologna reforms. Rather than tackling assessment in a top-down manner (from central administration directives down to individual faculty), Lumina outlined a bottom-up approach in which faculty took “ownership” of the work and established the standards for measuring performance. Rather than outlining a uniform set of academic goals cutting across all departments, Lumina emphasized the discipline-specific nature of the project, allowing those trained in a field to determine the “core competencies” of their specialty. Rather than tackling assessment haphazardly, the foundation designed a systematic effort informed by fundamental “learning outcomes” which reflected and fit a particular discipline. Rather than making assessment an isolated project of one department or one campus, Lumina encouraged participants to re-engage with key professional organizations—and the international community of scholars in their field—in order to understand the current “state of the discipline” in the evaluation of programs. And rather than speaking only among academics, the project required participants to engage with various “stakeholders” (including students, alumni, legislators, and employers) in order to establish a broad consensus about what university degrees should prepare students to “do” and “know.”3
Our department began the work of assessment within this distinctive framework. The first lesson we learned was the most helpful: not to plunge into the project as individualists. Instead, we depended on the kindness of academic strangers who had already engaged in the effort and shared their thoughts with colleagues. Three pieces of information helped shape our work from the start. The first was a list of “learning outcomes” that historians in the United Kingdom had developed in their work on the Bologna Process. The second was an American Historical Association pamphlet, Assessment in History: A Guide to Best Practices, which offered both a sound overview of the subject and a useful vocabulary for expressing the department’s goals.4 The third was a history rubric developed at the University of North Carolina at Wilmington that struck us as comprehensive in its statement of broad objectives and specific requirements.5 Drawing on all these sources, faculty proceeded with assessment knowing that they were guided by reliable colleagues who spoke for the discipline rather than being driven by an administrative agenda imposed from on high.
The second lesson we learned was to keep our questions simple and straightforward: What should students know, understand, and be able to do in the discipline of history? We needed to consider answers that were concrete, realistic, and transparent, readily understood by faculty, students, and those outside the institution. The answers we developed would stand not only as abstract ideals but also as practical guides to the goals of any course and to the structure of the entire curriculum.
Finally, we learned a key lesson of assessment: the goals we stated must be measurable. Twenty-five learning outcomes may be comprehensive, but measuring them would consume much of our time. We had to decide on a more concise list of core features that reflected the courses we taught and the specialties we had developed.
With help from the United Kingdom, the AHA, and UNC-Wilmington, the history faculty set out seven key learning outcomes arranged around three categories: historical knowledge, historical thinking, and historical skills. The scope of “knowledge” reflected the areas of historical study our department could reasonably address; we made no promises about regions or methodologies our faculty could not cover. “Historical thinking” engaged students in an appreciation of the “past-ness” of the past, the complexity of past experience, and the problematic nature of the historical record itself. “Historical skills” focused on critical reading, writing, thinking, and research.
The rubrics we developed served as the means of measuring these goals. In order to make the evaluation tools as useful as possible for professors individually and the department as a whole, the rubric for any class and any exercise followed the same basic three-part pattern, addressing historical knowledge, thinking, and skills. In this way, the rubrics served as constant reminders to students of the larger, shared goals set by the department. But the specific contents of rubrics under the three headings varied according to the subject, level, and methodological focus of each course. First-year surveys might concentrate more on acquiring “knowledge” and addressing a fairly limited set of competencies in the categories of “thinking” and “skills.” Upper-division courses would likely develop a broader range of “thinking” and “skills.” The senior capstone course would require the highest and most diverse level of mastery in the different outcomes. Professors could feel confident that the rubric framed evaluation in a general yet flexible format that could be tailored to each section, reflecting the specific contributions of each course while recalling the informing objectives of the program for our majors.
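To make the shared pattern concrete, here is a minimal sketch, written in Python purely for illustration; the criteria listed under each heading are invented stand-ins, not the department’s actual rubric language.

```python
# Hypothetical rubrics: the department's three headings stay constant,
# while the criteria beneath them shift with the level of the course.
survey_rubric = {
    "historical knowledge": ["identifies key events, actors, and chronology"],
    "historical thinking":  ["recognizes the 'past-ness' of the past"],
    "historical skills":    ["summarizes an author's argument accurately"],
}

capstone_rubric = {
    "historical knowledge": ["situates a thesis within the relevant historiography"],
    "historical thinking":  ["weighs the complexity and limits of the historical record"],
    "historical skills":    ["sustains an original argument from primary- and secondary-source research"],
}

# Every rubric, whatever the course, answers to the same three headings.
assert survey_rubric.keys() == capstone_rubric.keys()
```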
We started our experiment with a rubric for the “end,” creating one evaluation form for all sections of our senior capstone course. Although faculty members define the course’s themes in different ways, we all expect the same final project: a thesis grounded in primary- and secondary-source research. With the rubric, we now also expect the same model of evaluation, one which represents a “summative” assessment of a student’s historical knowledge, thinking, and skills.
But the rubric did much more—for faculty as well as students. Class members understood more clearly the types of competencies they had developed over four years in the history major and became accustomed to a new way of talking about their disciplinary skills.6 Faculty followed an “inter-rater reliability” approach to evaluation in order to compare and contrast their assessments of the same papers (and learned that they were, indeed, “on the same page” in the standards they applied to student work). The capstone rubric became a baseline for rubrics in other courses as faculty members scaled back and altered sections of the three-part device to reflect the specific goals of lower- and upper-division courses leading up to the capstone. And the department as a whole came to a clearer understanding of the different functions that courses play within the curriculum.7 One consequence is that the department has developed a “pre-major” in which students follow a more logical, sequential course of study that itself models the incremental development of disciplinary skills on which we base our curriculum—and its assessment.
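The article does not specify how the faculty comparison was scored, but the logic is easy to illustrate. The following sketch, a hypothetical Python example, computes Cohen’s kappa (a common measure of agreement between two raters beyond chance) for two readers scoring the same set of capstone papers on a four-point rubric scale; the readers, scores, and choice of statistic are assumptions made for illustration, not the department’s documented procedure.

```python
from collections import Counter

def cohens_kappa(scores_a, scores_b):
    """Agreement between two raters beyond what chance alone would produce."""
    assert len(scores_a) == len(scores_b) and scores_a
    n = len(scores_a)
    # Observed agreement: share of papers given the same rubric score.
    observed = sum(a == b for a, b in zip(scores_a, scores_b)) / n
    # Chance agreement, estimated from each rater's score frequencies.
    freq_a, freq_b = Counter(scores_a), Counter(scores_b)
    expected = sum((freq_a[c] / n) * (freq_b[c] / n)
                   for c in set(scores_a) | set(scores_b))
    return (observed - expected) / (1 - expected)

# Hypothetical scores (1 = beginning ... 4 = exemplary) that two readers
# might assign to the same eight capstone papers on one rubric category.
reader_1 = [4, 3, 3, 2, 4, 3, 1, 2]
reader_2 = [4, 3, 2, 2, 4, 3, 1, 3]
print(f"Cohen's kappa: {cohens_kappa(reader_1, reader_2):.2f}")  # ~0.65
```

By the conventional reading of kappa (values above roughly 0.6 indicate substantial agreement), readers who scored like this hypothetical pair would indeed appear to be “on the same page.”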
In the end, we have taken long-standing—but often unarticulated—standards of evaluation, stated them in a clearer, more coherent, and systematic fashion, and applied the assessment tools not simply to single courses but also to courses-within-a-curriculum. Our hope is that the experiment with rubrics will continue to expand, drawing students and faculty (and those outside the university) into a more meaningful understanding of what historical study develops in a major and “delivers” to the community.
Daniel McInerney is professor of history at Utah State University.
Notes
1. For examples of these early efforts—as well as examples of the more fully developed rubrics created by USU’s History Department for freshman surveys, upper-division courses, and the department’s senior capstone—please visit the “Mission Statement and Assessments” link on the department’s web site, at www.usu.edu/history/abouthistory09/index.htm.
2. For a full discussion of “Tuning USA” by the Lumina Foundation for Education, see: www.luminafoundation.org/our_work/tuning. Additional Web resources on the subjects raised in this article may be found in the “Rubrics, Learning Outcomes, ‘Tuning’” and “The Bologna Process: Web Resources” links on the department’s web site: www.usu.edu/history/abouthistory09/index.htm.
3. What of the “stakeholders” we surveyed? Our department asked students and alumni to rank the “core competencies” that a history education should develop. The top-ranked and bottom-ranked skills they named matched quite closely with the highest-valued and least-valued skills faculty had selected. The Utah Board of Regents also engaged in an ambitious survey of employers in the region and learned what the Association of American Colleges & Universities has discovered in its own national samplings: that there is a strong connection between “liberal education and workforce learning.” Educators and employers display strong similarities in the “learning outcomes” they define as essential. See: Association of American Colleges & Universities, The Quality Imperative: Match Ambitious Goals for College Attainment with an Ambitious Vision for Learning (Washington, D.C.: Association of American Colleges & Universities, 2010), 3–6; www.aacu.org/about/statements/documents/Quality_Imperative_2010.pdf.
4. AHA Teaching Division, Assessment in History: A Guide to Best Practices (Washington, D.C.: American Historical Association, 2008). See also Terrel L. Rhodes, ed., Assessing Outcomes and Improving Achievement: Tips and Tools for Using Rubrics (Washington, D.C.: Association of American Colleges & Universities, 2010).
5. College of Arts and Sciences, University of North Carolina at Wilmington, “Department of History Learning Outcomes Scoring Rubric,” online at www.uncw.edu/cas/documents/Elaboratedcompetencies3.pdf.
6. For student comments about the rubrics used in a lower-division survey and an upper-division period course, see the “Student Assessment of Assessment” link on the department’s web site at www.usu.edu/history/abouthistory09/index.htm. While most students appreciated the way in which rubrics clarified the standards for course assignments, several objected to the length of the descriptions included on the form and others expressed concern that such a carefully structured set of criteria might stifle creativity and individual expression.
7. For a discussion of the way in which one colleague, Frances B. Titchener, used the “Tuning USA” project to restructure a course in the classics curriculum, see “Fine-Tuning College Degrees to the Job Market,” Christian Science Monitor, June 2, 2010, online at www.csmonitor.com/USA/Education/2010/0602/Fine-tuning-college-degrees-to-the-job-market.
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. Attribution must provide author name, article title, Perspectives on History, date of publication, and a link to this page. This license applies only to the article, not to text or images used here by permission.