A Triple "A" Threat: Accountability, Assessment, Accreditation
At the 122nd annual meeting of the American Historical Association held in Washington, D.C., in January 2008, I chaired a fascinating panel on higher education policy. Participating on the panel were arguably three of the best informed educational leaders in the country: Carol Geary Schneider, president of the Association of American Colleges and Universities, an organization with 1,100 institutional members devoted to advancing and strengthening undergraduate liberal education; David Ward, president of the American Council on Education, a group dedicated to coordinating policy for higher education, and former chancellor of the University of Wisconsin-Madison; and Robert Berdahl, former chancellor of University of California at Berkeley and recently named president of the Association of American Universities, a group that includes 62 of the country's leading public and private research universities. David Ward was a member of the Spellings Commission on the Future of Higher Education, convened in 2005 by Secretary of Education Margaret Spellings, to address what the Bush administration deems to be fundamental problems besetting postsecondary education in the United States. The panel's discussion revolved in large part around the commission's report and what it augured for the fate of higher education in America (a PDF version of the report is available online at www.ed.gov/about/bdscomm/list/hiedfuture/reports/final-report.pdf). All three are well acquainted with the commission's findings and recommendations and the implications they possess for higher education policy. 
It is the seriousness with which they take those implications that has prompted me to report on the panel in this column, on the assumption that many, if not most, members of the American Historical Association are as unaware as I was of what is looming just over the horizon should the Spellings Commission report be implemented, or of the impact its recommendations may have even if it is not.
The Spellings Commission report is the result of work done by a prestigious committee made up of leaders in higher education policy, albeit of a fairly conservative ideological persuasion. The premise of the commission's report is that the United States has lost its premier place as a world leader in higher education, a phenomenon that is having deleterious effects on American competitiveness around the world. Evidence of this decline, according to the commission, is that the United States, once the world leader in higher education, now ranks a mere 12th among major industrialized countries in higher education attainment. Moreover, it claims, literacy rates among college graduates are falling and unacceptable numbers of college graduates enter the workforce without the skills that employers say are needed in an economy where knowledge is increasingly important for individual and national economic success.
Margaret Spellings, who convened the commission, is the person, it should be noted, who helped to design—and then spearheaded—the No Child Left Behind Act. The commission's report sets forth three principal goals: First, to improve access to postsecondary education for ever-growing numbers of high-school graduates, in the laudable belief that "access to higher education has traditionally functioned as a principal, indeed probably the principal means of achieving social mobility." In pursuit of this goal it seeks to improve the affordability of postsecondary institutions for all students, something the commission believes can be achieved by insisting that institutions of higher education improve their cost management through the development of new performance benchmarks. The second stated goal is to improve educational attainment within the college population, which it believes must be done through a process of assessment, that is, testing of some sort. The third is to create a climate of accountability, through which all institutions, private as well as public, would have to demonstrate—again through testing of some sort—the progress of students in the achievement of a specified set of educational goals, including those pertaining to the ability to write, analyze, think, and create knowledge.
Although everyone on the annual meeting panel insisted that drawing analogies between the No Child Left Behind program and the Spellings Commission's recommendations was counterproductive, and that the annual testing procedures embedded in No Child Left Behind would not be applied to institutions of higher education, there is no mistaking the fact that some of the same impulses lie behind the Spellings Commission's thinking. This can be discerned principally by considering the ways in which colleges and universities would be assessed and, ultimately, accredited, were the Spellings Commission recommendations—or some variants—to be implemented.
Until now, measures of excellence in higher education have focused mainly on the resources available to schools in the conduct of their educational and research missions. Hence factors such as size of endowment, number and quality of faculty, per capita expenditure on students, and the like have served as key elements in the rankings undertaken both by accrediting agencies and public vehicles such as U.S. News and World Report. In other words, the primary focus to date has been on the investments made by institutions to ensure their capacity to educate their student populations, on the not unreasonable assumption that the level of investment was related to, although no absolute guarantee of, the quality of education proffered, and hence to the learning that takes place.
In the commission's view, however, this mode of evaluating academic excellence has been one of the primary factors in raising the costs of a college education, since, it claims, "colleges and universities have few incentives to contain costs because prestige is often measured by resources, and managers who hold down spending risk losing their academic reputations." The Spellings Commission, to the contrary, wants to focus on learning outcomes, not inputs, in order to ensure that colleges in fact succeed in educating students.
According to the commission, "higher education must change from a system based on reputation to one based on performance." Behind this lies the conviction that, as college tuition rapidly rises at all levels of the higher education system, students, parents, and policy makers at both the state and federal level have a right to know what they are paying for and to be assured that they get their money's worth. Moreover, in what the commission specifically designates as a "consumer-driven environment," parents and students must be able to evaluate "the relative effectiveness of different colleges and universities," a statement that implies, if it does not state outright, that some kind of national system of assessment be made publicly available to facilitate such comparative shopping for educational products. The business tone of the report is striking and should not be breezily dismissed, for it subtends a great deal of the thinking that goes into the commission's recommendations.
It is here that the three "A"s come together into an indissoluble whole. Both modalities of ranking proclaim that what is being assessed is the institution and not the individual student (in contrast to No Child Left Behind); but in the commission's model, since the focus is on learning outcomes, accountability can be demonstrated only through assessment of individual student learning, on the basis of which, ultimately, accreditation of the institution will be reviewed and awarded. Looked at from this perspective, some sort of testing of student learning is on the way, and we would be foolish to ignore the implications that such a move potentially holds for all institutions of higher education throughout the entire spectrum of postsecondary education, from community colleges to research universities.
Given the underlying concern about the country's ability to compete in a rapidly evolving global context that informs almost every aspect of the commission's report, it is not surprising that its chief focus is directed to what it names as STEM fields—that is, Science, Technology, Engineering, and Mathematics—with additional nods to medicine and "other disciplines critical to global competitiveness, national security, and economic prosperity." National security concerns prompt recommendations for the improvement of language training within a restricted range (principally Arabic, Korean, and Farsi for the moment), but apart from that component, it would seem that humanistic and historical knowledge lie largely beyond the commission's purview. Lest this seem an unrelievedly good thing, one might recall that fields not covered by No Child Left Behind tend not to get taught, or are given short shrift, in primary and secondary school curricula and that, even if the focus remains on STEM disciplines, the temptation to shift resources to those fields will surely divert limited funds away from the humanities and social sciences. It seems more likely, however, that if institutional accreditation comes to depend on learning outcomes among the entire undergraduate population, all fields will be subject to some form of assessment, particularly since literacy, analytical thinking, and the ability to write lucidly rank prominently among the commission's desired improvements.
The commission asserts that its goal is to provide incentives for the improvement of learning outcomes through federal and state funding mechanisms. Nowhere in the report is there any indication of punitive measures—for example, loss of federal funding—should assessment results fall below expectations. Yet Charles Miller, a Houston investment banker and former chair of the University of Texas Board of Regents who served as chair of the Spellings Commission, when interviewed by Linda Wertheimer for an article in the Boston Globe this past April (see "Testing Harvard" at www.boston.com/news/education/k_12/articles/2007/04/22/testing_harvard/) suggested that "the government may eventually decide to deny federal funds for research or student aid to a college, even Harvard, if it refused to measure how well its students are doing and reveal results."
In the face of this impending threat, Wertheimer reports, last fall Derek Bok, then interim president of Harvard, paid $50 each to more than 300 freshmen to take a 90-minute exam that tested their skills in problem-solving and critical thinking, a test to be followed by one administered to selected seniors upon graduation in order to gauge their progress over the course of their four years at Harvard. Similarly, other institutions and states have taken the initiative of forming a Voluntary System of Accountability, adopted, for example, by both California and Maryland. This attempt to get ahead of a possible federal mandate by opting to comply voluntarily with the demand for assessment—but with tests of one's own choosing—is, in fact, the process strongly urged upon states and institutions by the three experts on the AHA annual meeting panel. As William E. Kirwan, chancellor of the University System of Maryland, confirmed to Gadi Dechter, a reporter for the Baltimore Sun (see "Can Colleges Pass the Test?" in the Baltimore Sun of November 11, 2007), the initiative was spurred by the "desire to reassure Washington that higher education does not need a No Child Left Behind law with uniform exit exams given to art history and engineering majors alike." Kirwan confessed, "There was concern that they (the feds, presumably) would start trying to do these grade-by-grade assessments, which I think all of us feel would be inappropriate in higher education."
On this view, states, colleges, and universities should take the lead in developing their own instruments to assess learning outcomes, or at least employ some of the existing ones, such as the Collegiate Assessment of Academic Proficiency (CAAP), the Measure of Academic Proficiency and Progress (MAPP), or the Collegiate Learning Assessment (CLA), in order to forestall the use of federally designed standardized tests. Among these, the CLA would doubtless prove the most amenable to the sort of testing that would address analytical skills routinely taught in courses on history and historiography, including the ability to read, evaluate, and write an argument about a set of problem-oriented materials that test the student's ability to absorb information from sources, consider how they apply to questions posed about them, and then to draw conclusions and present an argument based on their analysis of the information provided. But even the designers of the CLA have acknowledged potential difficulties with this mode of testing, admitting (see www.cae.org/content/) that:
We conceptually speak of these learning outcomes as if their meaning is shared and understood. In actuality, however, this is not always the case. In addition, any measurement of these (or any) skills is limited by the method used and the content assessed.
As an alternative to standardized tests, some schools and higher education associations are experimenting with the creation of e-portfolios, in which samples of a student's work as a freshman would be electronically stored and later compared to work produced as a junior or senior. Precisely how many students would be asked to submit such portfolios (all? a sampling? and if a sample how chosen?) and who would be responsible for evaluating progress in defined areas such as critical thinking is somewhat unclear, although e-portfolios at least have the virtue of being based on the individual student's own work, rather than forcing all students to submit to a standardized test. For this reason, it appeared to be the method favored by the AHA annual meeting panelists, in that it avoided the pitfalls of universally applied standardized tests. However, if one of the motives for testing—or, more politely, assessing learning outcomes—in the first place is to provide prospective students and parents with grounds for making comparative choices in selecting colleges and universities, it is difficult to see how that can be accomplished without the administration of a single national test that offers a means to compare levels of achievement across institutions, in the way that the SAT currently does for colleges and universities when considering graduating high school seniors for admission.
Moreover, even if the Spellings Commission's recommendations fail to be implemented, it seems clear from the actions already taken by university systems such as California and Maryland that accrediting agencies at the state and regional level have gotten the message and are in the process of making similar demands of the institutions for which they provide accreditation. Indeed, I first learned of this whole phenomenon at a humanities and social sciences chairs' meeting at Johns Hopkins University, in which the dean announced that the state of Maryland would soon demand assessments of learning outcomes and that we should be prepared to respond. Not unexpectedly, virtually every chair there (including me) claimed that it would be impossible to devise a test of what departments taught and students learned and recommended that the administration strongly resist agreeing to administer anything of the sort. But such a response ignores the fact that the federal government possesses considerable leverage over regional accrediting agencies, which derives from the fact that the accrediting agencies themselves are subject to evaluation and accreditation every five years by the Department of Education, and thus are held responsible in various ways for meeting federal standards of accountability. And we would do well to remember that the accreditation of colleges and universities represents a process that simultaneously determines the eligibility of institutions and programs to receive federal and state grants and loans, including, as Miller indicated, financial aid loans to students.
To date, the Spellings Commission has not succeeded in forcing its vision of testing upon state and regional agencies, and seems to be losing ground on the specific issue of a single, one-size-fits-all national test for postsecondary education. But as Arnita Jones already warned in these pages in the fall of 2006, "Accountability has momentum. It has become a rallying cry for policymakers, legislators, regulators, accrediting associations, higher education administrators and increasingly outspoken members of the public" (see "The Higher Education Commission Report: Should History Departments be Concerned?" Perspectives, October 2006). As she notes, the Educational Testing Service has already proposed a comprehensive national system for determining the nature and extent of college learning that would include, among other dimensions, "domain-specific knowledge and skills."
The panelists at the AHA annual meeting and other knowledgeable people agree, and caution that if we don't craft the instruments of assessment, then the state or federal government surely will, and those instruments are likely to insist on standardized measurements of learning outcomes. Should that occur, the study of history might well be among the principal casualties, especially if the test is aimed at tracking "domain-specific knowledge" as well as more generalized analytical skills. In a broader sense, the long-term consequences for higher education in the United States from the mounting pressure to measure performance do not portend a happy prospect. As Adam Falk, dean of arts and sciences at Johns Hopkins University, remarked to the Baltimore Sun, "the more we rely on standardized testing as our bellwether for the quality of education, the more we will value in education only those things that can be measured on standardized tests."
—Gabrielle Spiegel (Johns Hopkins Univ.) is the president of the AHA.