Most of the words we use in history and everyday speech are like mental depth charges. When heard or read, they quickly sink into our consciousness and explode, sending off cognitive shrapnel in all directions. On the surface they may look harmless enough, or resemble something equally benign. But as they descend and detonate, their resonant power is unleashed, showering our understanding with fragments of accumulated meaning and association.
Teaching the history of Indian-white relations, whether independently or as part of an American survey course, quickly brings us face to face with some of the classic problems posed by our loaded vocabularies. The first problem is shared by all students of history—the tendency to apply our own limited range of modern meanings to words we share with the past, but which may have meant different things to the original historical actors. Here the Oxford English Dictionary or its American equivalents are needed to clarify the usage of each age and to prevent anachronism. Francis Jennings is particularly adept at this kind of semantical sleuthing. In The Invasion of America he used the OED to probe such elements of the English “cant of conquest” as king, pagan, heathen, peasant, savage, and filthy.1
Another problem faced by many historians, particularly those who study the history of native peoples largely through the documents produced by European invaders, is the tendency to adopt uncritically the intruders’ descriptions and value judgments of the natives as their own. Despite our best efforts, we are all, to some extent, the unwitting dupes and victims of our sources. It is all too easy to accept as objective description the colonists’ unflattering characterizations of the natives, particularly when we happen to share the writers’ race, religion, or nationality. While we teach our students to be critical of every source they use, we tend to drop our own guard when a source seems relatively familiar and intellectually congenial.
A third problem, related to the first two, is our tendency to make moral judgments without admitting that we do or without sufficient attention to the normative content of the words we use in making them. Some historians have no trouble with either issue. Jennings, for one, writes unabashedly moral history because he fears that “what we approve in past conduct will be repeated in the future.” Assuming that “human persons do have some power of choice over their own conduct and that their adherence to moral standards, whatever those standards may be, is a matter of historical concern,” he does not hesitate to use highly charged language to describe and interpret the past.
Accordingly, Jennings writes in The Invasion of America that the Puritan clergy “thundered their wrath and called it God’s.” Colonial leaders resorted to “mendacity extraordinary even among adepts” to “put a fair face on fraud,” and shamefully played “deed games,” the “missionary racket,” and “brutal charades.” The colonists’ “heedless grasping and bellicosity” were spearheaded by “mercenary buccaneers” and “backcountry Euramerican thugs,” who resembled nothing so much as “great [feudal] hulks on horseback.” No reader has any difficulty interpreting Jennings’s moral stance or recognizing that he has one.
Most historians are, by nature or nurture, more judicious in their use of overtly moral language. Yet they make normative judgments all the time. Often they are blissfully unaware that they are doing so, or they try to wrap them in the cloak of professional objectivity or the mantle of esoteric dullness. Even seemingly mild, unpainted words are capable of carrying a great deal of moral freight. Take, for instance, the following two passages:
One [of the two competing societies in colonial New England] was unified, visionary, disciplined, and dynamic. The other was divided, self-satisfied, undisciplined, and static. It would be unreasonable to expect that such societies could live side by side indefinitely with no penetration of the more fragmented and passive by the more consolidated and active.2
The second moral issue raised by the scalp bounties is not that Europeans taught the Indians how to scalp—they already knew how—but that Europeans adopted the Indian practice of scalping even though their cultures offered no moral or religious warrant for it and the traditional standards of Christian behavior condemned it.3
When asked to choose the least “moral” and most “objective” passage, students invariably pick the first because the adjectives seem temperate and disinterested; it helps that they are also polysyllabic and abstract (the warp and woof of scientific objectivity). The second passage, by contrast, is sprinkled with normative-sounding words, such as moral, religious, standards, Christian, and condemned. But in fact, as students realize after a brief session of Socratic questioning, the first passage is much more “personal” and value-laden than the second, which simply describes, without judgment, the historical status of a moral issue raised by contemporaries themselves. The first passage is objectionable, not only because we cannot define or describe a person or society by negation (divided vs. unified, etc.), but because the unconscious sexual metaphor that concludes it betrays the male Eurocentric bias of its author.
In attempting to teach students to be fair to both Indian and white cultures and sensitive to the normative challenge of our historical vocabularies, I spend considerable time making them watch their words in speech and in writing. As we read the materials of the course together, I draw their attention to words commonly used to describe native peoples. Some of them, notably in the newer ethnohistories, are unobjectionable to Indians and historians alike. But most of the descriptive nouns and adjectives used by historical contemporaries and even modern historians to portray native life are biased, pejorative, demeaning, or simply inaccurate.
So pervasive is our literary bias against Indian people and culture that perhaps the best way to spend the few class hours we devote to the Indians in our American survey courses is to attack head-on our students’ stereotypes. Asking for a list of words to describe “Indians” (tribe and time unspecified) will usually provide more than enough to work with.
Another source is the collection of colonial documents in my The Indian Peoples of Eastern America: A Documentary History of the Sexes (New York, 1981). Then the list can be attacked, item by item, with reliable ammunition from books such as Jennings’s The Invasion of America (particularly the first ten chapters), Gary Nash’s Red, White, and Black: The Peoples of Early America (Englewood Cliffs, NJ, 1974; 2d ed. 1982), Robert Berkhofer’s The White Man’s Indian: Images of the American Indian from Columbus to the Present (New York, 1978), Wilcomb Washburn’s The Indian in America (New York, 1975), or my The European and the Indian: Essays in the Ethnohistory of Colonial North America (New York, 1981).
Many of the words we use to describe native people and culture are relative, having no concrete reality in themselves; their meaning depends on other words that are equally slippery.
Take, for instance, the word savage, in European writings the most common synonym for Indian. It is based, of course, on an ethnocentric ranking of societies, with those of Western Europe at the top. Derived from the Latin word for “forest” (silva) through the French word for “wild” or “untamed” (sauvage), savage by the late sixteenth century had come to mean “an uncivilized, wild person” in “the lowest stage of culture.” The key term of reference is civilize, which by circular definition means “to bring out of a state of barbarism.”
Barbarism, as one might guess, means a “barbarous social or intellectual condition.” And barbarous is defined no more helpfully as “rude,” “savage,” or the opposite of civilized. In other words, the meanings of all these terms depend on an imaginary construct, a social-evolutionary hierarchy in the speaker’s mind that has no objective or historical reality. Understandably, the criteria for this ranking of societies are never stated explicitly because they are the familiar products of cultural habit rather than the earned results of philosophical analysis. From early documents it is relatively easy and very useful for students to discover some of the unarticulated standards by which European observers judged a savage American society. Consonant with the definition of savage as uncivilized, these benchmarks are usually stated as deficiencies: lack of clothing, large towns and cities, statutory law, centralized and compulsory forms of government, literacy and printing, draft animals and fences, iron, cloth, glass, and scriptural and ecclesiastical religion.
In the discussion surrounding these social judgments, the word primitive usually appears. Derived from the Latin primus (first), primitive in the late seventeenth century meant “having the quality or style of that which is early or ancient; simple, rude, or rough.” But since then it has acquired more pejorative connotations from social evolutionists. Such sensitive anthropologists as Francis L. K. Hsu have urged their colleagues to expunge the word from their vocabularies, because the concept is as value-laden and descriptively useless as savage.4
When so-called primitive or tribal societies are examined carefully, the only thing remotely simple or rude about them is their technology. While the North American Indians had no wheels, ships, paper, guns, compasses, or cathedrals, some of the shrewdest students of society have struggled mightily to plumb the complexity and sophistication of their polytheist religions, kinship systems, barter economies, persuasive governments, arts of war and peace, and languages.
Once the last vestige of crude evolutionism has been disposed of, students should be urged to consider more worthy criteria for comparing human societies, namely those things that contribute to the quality of life. Without succumbing to “noble savagism,” our classes are likely to give the Indians higher marks than did their colonial predecessors by measuring native and colonial societies—all members of them—against standards of health, life expectancy, physical security, individual freedom, personal fulfillment, leisure, emotional support, and aesthetic and religious expression.
With the exception of kinship, religion was the least understood aspect of Indian life. Seeing no familiar churches, crosses, clergymen, or Scripture, Europeans concluded that the natives were “without faith, godless, pagan.” But of course pagan (and its synonym heathen) is simply a Christian definition of—or rather epithet for—a non-Christian, one who “does not worship the true God.” To the natives who worshipped them, Indian deities were no less true.
Instead of true religion, Indian religion was thought to be devil-worship and rank superstition. From the early sixteenth century, Englishmen used superstition to denote “religious belief or practice founded upon fear or ignorance.” In the next century, however, Thomas Hobbes in his Leviathan reminded his countrymen that “fear of things invisible, is the natural seed of that, which every one in himself calleth religion; and in them that worship, or fear that power otherwise than they do, superstition.” Students of history can use the same reminder. Even well-meaning modern historians occasionally use terms that are inappropriate or vaguely insulting.
Red Man and Redskin are inappropriate for two reasons. First, because they refer to a physical characteristic, they lend themselves to racial stereotyping and discrimination, as does Whites to denote Europeans or Anglo-Americans. Historians should be discriminating only in their respect for cultural and human diversity.
Second, the color is objectively wrong. As Alden Vaughan has shown, the colonists described the Indians’ pigmentation as brown, copper, olive, black, tawny, and even white, but not red. When red slowly came into use in the second half of the eighteenth century, the color referred to the Indians’ warpaint and, by extension, their allegedly ethnic or racial antipathy to the “White Man.”5
Brave is a nineteenth-century word for an Indian warrior of the Plains, so it is inappropriately used to describe a warrior of an Eastern Woodland tribe or an Indian male who did not join war parties.
Squaw, a neutral Algonquian word for woman, quickly acquired pejorative coloration from European descriptions of native women as drudges and slaves who did most of the farming, transported lodge material and household items in their travels, collected firewood and water, and hauled game home from the spot where it was killed by their menfolk. Indian people today eschew it for that reason.
Miscegenation (there should be a better word) between Europeans and Indians produced numerous descendants who were often referred to, pejoratively, as mixed bloods, half-breeds, or simply breeds. Objectively, of course, the blood of members of two different ethnic or racial groups does not “mix” except at the genetic, chromosomal level.
In order to unload “the freight of a phony and damning folk biology,” therefore, we should use the neutral French term métis for mixed.6 The original Métis were the nineteenth-century descendants of French and Indian parents from the Red River settlement in Manitoba. But today the term applies more generally to any person of mixed Indian-white ancestry, particularly in Canada and in the northern border states of the United States.
Almost invariably our textbooks commit three other verbal faux pas, to which students should be alerted.
Prehistory is used to describe the Indian past before the arrival of Europeans and written records, as if the natives had no real history until the white man gave it to them. Such a condescending attitude does an injustice to the historical value of archaeology, glottochronology, and oral tradition. Precontact is a better word.
Massacre is typically what Indians (savages) did to (innocent) white folks, as in the Virginia “Massacres” of 1622 and 1644 when the Powhatans surprised the encroaching colonists and in brilliant coordinated attacks claimed nearly 850 victims.
The English attack on the Pequot Fort in 1637, on the other hand, is rarely described as a massacre, although between three and eight hundred men, women, and children lost their lives in the fiery onslaught. If there is to be a historical standard of judgment, it should not be double.
French and Indian War appears in nearly every American history textbook and should give way to the Seven Years’ War (even though in America the war lasted nine years, thanks to George Washington). Indians fought on both sides in that and every other intercolonial war. From the French perspective, the encounter could have been seen as the English and Indian War.
To the Indians it was simply another French and English or (by that time) White Man’s War. Contemporaries, of course, called it none of these. To them it was just “the war” or “the last war.” The French and Indian tag apparently was hung by Anglo-American historians in the nineteenth century.
As they move through texts and documents, students will discover that other words bear watching. Buffalo, not Indians, “roam.” The nomadic Indians of the Eastern Woodlands did not “wander”; they commuted on an annual cycle between familiar residences. By the same token, the American environment was a “wilderness” only to the European newcomers, not to the natives who called it home. And only the rare certifiable homicidal maniac sought to commit “genocide” upon the Indians. The vast majority of settlers had no interest in killing Indians, and those who did took careful aim at temporary political or military enemies. Genocide was coined in 1944 to denote the systematic “annihilation of a race,” whereas the settlers’ animus was directed at cultural or social foes.
Virtually any course will provide abundant materials for the teacher and student interested in exploring the moral dimensions of history. But the history of Indian-white relations offers a particularly rich field because it features five centuries of sustained, sometimes deadly, combat over the most basic cultural values. The moral complexion and complexity of the contest for the continent provides students with a historical experience that raises the full range of normative issues in the relative safety and quiet of the past, but also reminds them that few of those issues are dead. And to prepare them to deal sensitively and intelligently with the moral dilemmas of their own time is, after all, the main purpose of moral history.
Notes
1. Chapel Hill, 1975, 43, 49n., 73–74, 114.
2. Alden T. Vaughan, New England Frontier: Puritans and Indians, 1620–1675 (Boston, 1965), p. 323. In fairness, Vaughan has considerably modified this statement in his revised edition (New York, 1979).
3. James Axtell and William C. Sturtevant, “The Unkindest Cut, or Who Invented Scalping?” William and Mary Quarterly, 3d ser., 37 (1980), p. 470.
4. “Rethinking the Concept ‘Primitive’,” Current Anthropology 5 (1964).
5. “From White Man to Redskin: Changing Anglo-American Perceptions of the American Indian,” American Historical Review 87 (1982).
6. Jacqueline Peterson and Jennifer S. H. Brown, eds., The New Peoples: Being and Becoming Métis in North America (Winnipeg, 1985), 4–6.
James Axtell is William R. Kenan, Jr., Professor of Humanities and professor of history at the College of William and Mary. His book The Invasion Within: The Contest of Cultures in Colonial North America won the 1986 Albert B. Corey Prize, awarded jointly by the CHA and AHA.