Publication Date: February 1, 2002

Portfolios have been one of the most pervasive innovations recommended by educational reformers of the 1980s and 1990s. In 1990, for example, the Association for Supervision and Curriculum Development identified portfolios as one of the nation's top three curriculum trends.1 Professional and pedagogical journals exploded with pertinent literature during the next decade, as educators implemented a variety of portfolio projects in a wide array of classroom settings and in subjects as diverse as high school literature and social studies, entry-level college writing, and adult continuing education.2 Practitioners advocated portfolios as devices that engaged students in critical thinking and realistic performance tasks, as strategies for empowering students to take responsibility for their own learning, and as tools for evaluating the effectiveness of educational programs.3

During the fall of 1997 the history and philosophy department at Shippensburg University of Pennsylvania embarked on its own journey of portfolio assessment. I refer to this undertaking as a journey because we are continually learning, refining, and reevaluating this process. We are still thinking critically about our four-year experience with portfolios.

Our initial deliberations about portfolios occurred in the fall of 1996, when our administration charged each university department with designing a program assessment mechanism for its majors and minors. As we discussed the task, several concerns emerged. How would we evaluate students who take a variety of history and philosophy electives? How would our assessment tools reflect the goals of a faculty with diverse requirements and teaching styles? How could we devise a system that would measure student outcomes outside the regular framework of courses? How could we minimize the stress and frustration that high-stakes program evaluation often generates? And most important, what sort of instruments would be most valid, accurately assessing what we actually teach?

At this time the faculty also reached some fundamental conclusions about how this assessment should be implemented. We agreed that our evaluation of student outcomes needed to be ongoing, rather than a one-shot, senior year, exit performance. We believed our majors and minors should receive regular and frequent feedback on their progress. We also wanted our students to be well-informed, willing participants in the assessment process. Finally, we saw a need for a tool that would evaluate how students learned the process, as well as the content, of history and philosophy.

Given our concerns and requirements, portfolios seemed to be a good fit for our departmental assessment. As we saw them, portfolios were collections of student work assembled over a period of time, accompanied by the students' reflective commentary on the included artifacts.4 This type of instrument accommodated our need for flexibility. Portfolios gave our students choices and enabled them to have a voice in the evaluation process. Properly structured and monitored, portfolio assessment could also help us incorporate coherence and ongoing feedback into our program evaluation.

Since few of us had any experience with portfolios, we invited a knowledgeable colleague from the Teacher Education Department to share her expertise. Her thoughtful commentary during a history faculty workshop was invaluable in helping us clarify our model, outcomes, and supporting evidence. She also shared information on ways in which we could manage the practical aspects of portfolios, such as storage and record keeping.5 At this point we felt confident designing the anticipated outcomes for our history majors and our history and philosophy minors. Since I had used a field experience portfolio with students in my teaching methods class, I volunteered to write the guidelines and the rubric that would be utilized by our faculty and students.

Because of our concern that the portfolio should reflect students' skills as well as content proficiency, our outcomes focused on three main areas. We wanted our majors to demonstrate knowledge of major themes and trends in the history of the United States, Europe, and Asia/Africa/Latin America. However, we also expected our students to exhibit methodological, analytical, and research skills; write coherently and cogently; and assess and reflect upon their growth and improvement as history scholars. For our education students we added a fourth outcome, namely competent teaching performance. Our goal was that 80 percent of our graduating seniors would achieve at least a satisfactory portfolio rating. The next challenge was to translate our expectations into a rubric, a set of evaluative criteria that our students and faculty could easily understand.

As I began to draft our first rubric, I found the National Standards for United States History and the National Standards for World History, published by the National Center for History in the Schools, to be excellent resources.6 These documents stressed the need for students to be able to apply knowledge and demonstrate their understanding of historical themes and trends. Although the National Center's historical thinking skills were designed for grades 5–12, they ably fit our outcomes. Historical comprehension, analysis, interpretation, and research capability were exactly the competencies we wanted our majors to exhibit. Thus the initial rubric outlined these expectations (see "Rubric for History Major Portfolio Assessment"). Along with the rubric, students also received a set of explicit guidelines on how to construct their portfolios and with whom to share the product (see "Guidelines for History Major Portfolio Assessment").

Rubric for History Major Portfolio Assessment

Assessed, in each category, as excellent, good, satisfactory, or unsatisfactory

Written Communications Skills

  • Coherent and cogent
  • Grammatically correct

Historical Knowledge

  • United States
  • Europe
  • Asia/Africa/Latin America

Historical Thinking and Research

  • Understands historical time
  • Comprehends historical narratives
  • Shows historical perspective
  • Demonstrates analysis and interpretation
  • Can formulate historical questions
  • Can obtain and use historical data
  • Supports writing with evidence

Teaching Performance (B.S.Ed. graduating seniors only)

  • Level I Field Experience
  • Level II Field Experience
  • Unit Plans
  • Unit Tests & Exhibitions
  • Comments:

Guidelines for History Major Portfolio Assessment

Goals

  • Assess individual growth and improvement in historical thinking and writing
  • Assist majors in compiling samples of their work for use in job/grad school interviews
  • Assist History Department faculty in assessing the B.A. and B.S.Ed. programs

Requirements

Each history major is required to include the following:

1. A Portfolio Holder, such as an accordion folder or three-ring binder, sturdy enough to withstand frequent handling.

2. A Table of Contents that lists the portfolio's contents. The portfolio may be arranged sequentially, topically (World, U.S., or Asia/Africa/Latin America), or according to genre (journals, research papers, essay tests, etc.).

3. Artifacts: samples of student work. Students' portfolios will include work from each semester they are enrolled as Shippensburg University history majors. (Students who are graduating in December 1997 will include artifacts from the spring and fall 1997 semesters.)

Students should select artifacts from a variety of required and elective history courses, choosing pieces that represent both beginning work and later improvement. Samples should include at least two of each of the following:

(a) Sets of journal entries;

(b) Essays on historical documents, articles, and books;

(c) Research papers;

(d) In-class and take-home essay examinations;

(e) Assignments from Theory and Practice of History and Comparative History (if the student has completed these courses).

4. Captions for each artifact, explaining why it is included in the portfolio, what the student learned from doing the assignment, and how it shows growth or improvement.

Assessment

Students will present their portfolios to their academic advisers each semester during the course selection period. (Students graduating in the fall of 1997, including student teachers, will make an appointment to see their adviser before the final exam period.) Students are expected to organize, edit, and update their portfolios before meeting with their advisers, and to be ready to discuss their artifacts, areas of improvement, and goals for future semesters. Each academic year the Undergraduate Assessment Committee will ask for and evaluate a random sample of history majors' portfolios. Faculty will evaluate portfolios using the designated rubric.

Although we had initially discussed the upcoming assessment with our majors during the spring of 1997, the department chair and the Undergraduate Assessment Committee (UGAC) held a more extensive meeting in September 1997. As might be expected, students were somewhat apprehensive about this new requirement. Seniors were particularly concerned. Many had not kept copies of their past work and therefore had few artifacts for their portfolios. Students were also confused about the self-reflection aspect of the assessment and about what to include. Furthermore, none of them, with the exception of the seniors in the teaching methods course, had ever seen a portfolio. The faculty members present calmed their fears by explaining that the purpose of the portfolio was to assess the history program, not individual students. We had anticipated that students would want some sort of model to follow, so I had borrowed one from a previous teaching methods student and circulated it to the group.

We had asked our majors to bring a skeleton portfolio to their adviser's office in mid-November, when they met for schedule advisement. As the week neared, we worried about how our first assessment would proceed. Since students were not receiving grades for their portfolios, would they bring them? Were the rubric and guidelines actually clear enough for students to follow? Would we see evidence of what we wanted to evaluate? Would we see signs of growth and improvement?

Our anxieties lessened as the advisement period progressed. Most of our majors had taken the assessment seriously and brought at least the nucleus of a portfolio during both the November and March advisement periods. In late April the department's Undergraduate Assessment Committee read both a 50 percent random sample of junior and senior portfolios collected by advisers and a sample from all students enrolled in our comparative history seminar. We were not satisfied with the quality of the documentation, in part because some students were confused about the assessment criteria. Because many students included too few artifacts and omitted captions, most of them scored no higher than "satisfactory" on the portfolio rubric. We strongly believed our majors could submit better work. Faculty advisers, responding to an April 1998 questionnaire, expressed concern that students apparently did not know what to include in the portfolio or the purpose behind the assessment.

In May 1998 we also surveyed our graduating seniors for their opinions about the first year of portfolio assessment. The students' reactions ranged primarily from lukewarm to negative. Using a five-point Likert scale in which a "1" equaled excellent and a "5" signified unsatisfactory, only 2 percent of our students responded with a "1" to the question, "How successful was the portfolio experience in contributing to your professional growth?" None of the students answered with a "2." On the other hand, 44 percent responded with a "3," and 54 percent answered with a "4" or "5." Clearly this was not a ringing endorsement for portfolios!

On the basis of this feedback, the Undergraduate Assessment Committee decided to prepare a more thorough rubric for the following school year. Our revision gave a more detailed set of criteria. For example, instead of just asking students to provide artifacts that showed evidence of historical analysis and interpretation, we defined the category more explicitly. Our majors, therefore, knew to include work that proved their abilities to compare and contrast, understand multiple perspectives, and analyze historical actors' motives and beliefs. The section on written communications skills asked for writing samples that supplied evidence in the form of examples, reasons, and descriptions that were elaborated upon and explained. We clarified "Understands historical narratives" so that students knew to provide artifacts showing their competence in identifying the sequence of events, causes, and outcomes of an event or issue.

Thus armed with a clearer rubric, we briefed both new and continuing students once again in September 1998. In addition to sharing these documents, we explained the reasons behind our portfolio assessment more clearly. We even suggested that students might take the process a step further by culling through their artifacts and constructing showcase portfolios for graduate school and job interviews. The evaluation process remained the same, although the UGAC also checked the portfolios of students who had taken the sophomore-level Theory and Practice of History course the previous semester. Both advisers and the Undergraduate Assessment Committee saw a smoother-functioning assessment the second and third times around. The 1999 and 2000 adviser surveys reported that students now had less difficulty assembling a portfolio and choosing appropriate artifacts. Faculty also believed their advisees showed a better understanding of the purpose behind the assessment. The quality of the artifacts also improved markedly. A review of the 50 percent random sample of junior portfolios in 1999 and 2000 found that roughly 61 percent of our majors showed excellent to good work in their research and analytical skills, and 72 percent scored within this range in writing skills. Only 6 percent were rated unsatisfactory in these three areas. We felt this was a significant improvement over the scores of 47 percent in analytical skills, 37 percent in research skills, and 58 percent in writing skills that we had seen in 1998. Since our students during this period had similar grades and none of us significantly changed our teaching methods, we attribute the improvement to a more carefully crafted portfolio rubric and a clearer explanation of the assessment during our early fall meeting with the majors.

In a September 2000 survey, related to our departmental Five-Year Review, 85 percent of our faculty said the portfolio was "very" or "somewhat" effective in encouraging and assessing student learning. We still have concerns, however, about the quality of students' captions. Our undergraduates seem to have persistent difficulty analyzing their own work. This is just one of the ongoing issues we address as we journey through the process of portfolio assessment. Our students, for example, still express frustration with this requirement. Our spring 2000 survey showed that while 35 percent of graduating seniors believed portfolios had contributed to their professional development, 50 percent still saw the experience as a waste of time. As a department, we clearly need to do more to "sell" portfolios to our students. We also need to continue honing our rubric to improve the instrument's validity, and to hold more discussions about what we want to measure. Meanwhile, our chair plans to survey alumni on the usefulness of the portfolio five years after their graduation from Shippensburg. Several department members, including me, also attended a formal assessment workshop in October 2000, where we learned even more about creating rubrics and designing and implementing assessment outcomes.

After four years of practice with portfolios at Shippensburg, I would heartily recommend them to other history departments interested in evaluating their programs. At the same time, I have some advice for teachers about to embark on this type of assessment. First, make sure department members have reached a consensus on the goals and outcomes for the portfolios themselves. In our experience, the department needed many discussions, particularly during that first planning year, about what we believed our majors actually needed to know and demonstrate. Second, clearly communicate these expectations to the students. We encountered considerable confusion during the first year of actual assessment: students were not sure what to include in the portfolio or how the faculty intended to use these documents. A clearly designed rubric and a model for students to follow help with this process. The rubric should clearly state the necessary outcomes and the types of artifacts to be included, but at the same time offer students some flexibility. This enables them to feel a sense of ownership and pride in their portfolios. It also allows for faculty diversity in terms of goals and course requirements.

Finally, departments need to establish procedures for the collection and evaluation of the portfolios, along with a means of reporting and using the results. Faculty must hold students accountable and monitor progress throughout their college careers. Assessment at the end of a student's program, of course, is also necessary.

Our experiences at Shippensburg University suggest that properly utilized portfolios can deliver what the educational literature claims. Our history department has used this assessment to measure our students' abilities to think critically and engage in the work of real historians. The portfolio has also helped our majors reflect on their own learning and has aided our department in evaluating the effectiveness of our program. Although we still have a great deal of ground to cover, portfolio assessment has been a journey well worth beginning.

— teaches at Shippensburg University of Pennsylvania.

Notes

1. Susan Callahan, "Portfolio Expectations: Possibilities and Limits," Assessing Writing 2, no. 2 (1995): 124.

2. For an article describing the use of portfolios in high school literature classes, see Bill Martin, "Literature and Teaching: Getting Our Knowledge into Our Bones," English Journal 81 (September 1992): 56–60. For articles on portfolios in high school social studies, see Sarah E. Drake, "One Teacher's Experiences With Student Portfolios," Teaching History, 60–76; JoAnn Larson, "Social Studies Assessment in New York State," Social Science Record 30 (Fall 1993): 13–16; Eugene Perkins, "Portfolio Assessment in Social Studies: A Program That Offers a Systematic Approach," Social Studies Review 32 (Spring 1993): 44–47; and Grant Wiggins, "Assessment to Improve Performance, Not Just Monitor It: Assessment Reform in the Social Sciences," Social Science Record 30 (Fall 1993): 5–12. For literature on portfolio usage in college writing courses, see Liz Hamp-Lyons and William Condon, "Questioning Assumptions About Portfolio-Based Assessment," College Composition and Communication 44 (May 1993): 176–90; Elizabeth Metzger and Lizbeth Bryant, "Portfolio Assessment: Pedagogy, Power, and the Student," Teaching English in the Two-Year College 20 (December 1993): 279–88; and Kathleen Blake Yancey, "Looking Back as We Look Forward: Historicizing Writing Assessment," College Composition and Communication 50 (February 1999): 483–503. For information on portfolios in adult continuing education programs, see Doug MacIsaac and Lewis Jackson, "Assessment Processes and Outcomes: Portfolio Construction," New Directions for Adult and Continuing Education 62 (Summer 1994): 63–72.

3. For articles dealing with portfolios in critical thinking and performance-based assessment, see Drake, 62–72; Joan L. Herman and Lynn Winters, "Portfolio Research: A Slim Collection," Educational Leadership 51 (October 1994): 48–55; Larson, 14–15; Perkins, 44–47; Joseph M. Ryan and Jeanne R. Miyasaka, "Current Practices in Testing and Assessment: What Is Driving the Changes?" NASSP Bulletin 79 (October 1995): 1–10; and Wiggins, 6–11. For portfolios as an empowerment strategy, see Callahan, 119–21; Susan H. Case, "Will Mandating Portfolios Undermine Their Value?" Educational Leadership 51 (October 1994): 46–47; David Alan Gilman, Richard Andrew, and Cathleen D. Rafferty, "Making Assessment a Meaningful Part of Instruction," NASSP Bulletin 79 (October 1995): 20–24; and Merrill Harmin, Inspiring Active Learning: A Handbook for Teachers (Alexandria, VA: Association for Supervision and Curriculum Development, 1994), 142–45. Articles that include the use of portfolios in program review include Callahan, 133–43; Jean Fontana, "Portfolio Assessment: Its Beginnings in Vermont and Kentucky," NASSP Bulletin 79 (October 1995): 25–30; Hamp-Lyons and Condon, 176–85; Geof Hewitt, "Vermont's Portfolio-Based Writing Assessment Program: A Brief History," Teachers & Writers 24 (May–June 1993): 1–6; and Yancey, 493–95.

4. Metzger and Bryant, 279.

5. The History Department is indebted to Linda Hoover, Teacher Education Department, Shippensburg University, for her assistance in the development of the portfolio assessment program.

6. National Center for History in the Schools, National Standards for United States History and National Standards for World History (Los Angeles: University of California, 1994).
