The Limited Nutritional Value of Cannibalism and the Development of Early Human Society
Gerald Sack
Safed Academic College, Israel
ABSTRACT
Although cannibalism is a relatively rare human behaviour, an increasing number of instances have come to light over the past 30 years. Half a century ago two anthropologists suggested that the limited nutritional value of cannibalism meant that primitive man could not have survived solely on human flesh. Recently an archaeologist attempted to evaluate this suggestion. This paper broadens his analysis by linking Paleolithic cannibalism with a largely unnoticed analysis of the population dynamics of small hunter-gatherer groups among early humans. It is argued that the synthesis of these two hypotheses provides additional explanations of why cannibalism was a sub-optimal survival strategy.
Keywords: Paleolithic cannibalism, the nutritional value of cannibalism, exogamy, early human reproductive physiology, nutrition and reproduction.
1. INTRODUCTION
This paper is based on two articles that went unremarked for over five decades: Garn and Block's (1970: 106) brief note about 'the limited nutritional value of cannibalism,' and Slater's (1959) argument that 'early human' reproductive physiology compelled extra-group mating. Nutritional research over the past few decades supports Garn and Block's contention, albeit for different reasons. Two recent papers (Cole 2017a, 2017b) are the first attempts at examining the validity of Garn and Block's assertion. My undergraduate encounter with these articles simmered away at the back of my mind for almost five decades until I recently encountered Cole's study of the nutritional value of the prey animals, including humans, which European Pleistocene
man consumed. It seems logical to assume that Pleistocene man was primarily a hunter-gatherer, with possibly the first signs of pastoralism and agriculture. Thus, while I am aware that contemporary hunter-gatherers do not live in their original habitats, having been driven into marginal regions by a number of factors, I think that some nutritional insights may be derived from these societies.
Anthropologists have not displayed much interest in cannibalism, while archaeologists, mainly European, have uncovered a substantial number of instances of cannibalism during the past 30 years. As an anthropologist, I am particularly interested in the probable social behaviours associated with cannibalism, especially during the early stages of the development of hominin society. This paper is an attempt at reasoned speculation about whether Pleistocene man was a persistent or an occasional eater of human flesh.
Archaeological analysis seldom extends to reasoned speculation in the absence of concrete empirical data. Most of the research into cannibalism has bypassed anthropologists and sociologists, which means that we know virtually nothing about how, or whether, Paleolithic cannibals perpetuated themselves. Nonetheless, I dare to stumble where archaeological angels fear to tread, by considering whether cannibalism could have influenced the reproductive fertility of early man, utilizing Slater's model and adding other social factors that seem relevant.
During research in the early 1970s, I heard of rare instances of cannibalism among Zulu witches; that is, cannibals were deviant members of Zulu society (cf. Marlar et al. 2000). Similarly, Cole contended (2017a: 1-2) that Pleistocene cannibalism was rare, episodic and probably ritualistic, rather than a persistent source of nutrition. He was aware of the question of the extent to which his calculations of the caloric value of adult, adolescent, juvenile, and child Homo sapiens flesh were applicable to 'non-Homo sapiens hominin species' (Ibid.: 4).
This important caveat seems to have made the archaeological study of Pleistocene cannibalism stop at evaluations of its characteristics and frequency, and prevented consideration of the implications of cannibalism for the evolution and survival of early human society. A synthesis of Cole's and Slater's contributions and the addition of other factors should, however, enable the reconstruction of late Pleistocene social development.
2. THE STRUCTURE OF THE PAPER
This paper discusses the nutritional value of cannibalism, its place in the early human diet and Slater's model of 'ecological factors in the origin of incest.' It synthesizes these approaches into a model of early
human social development that takes into account both archaeological research into cannibalism and research into the impact of nutrition on human reproduction.
Recent studies of human physiology and nutrition suggest that groups practicing persistent cannibalism were unlikely to survive over time. This means that, since group exogamy was critical for human survival, cannibalistic predation on neighbouring human groups was a poor species survival strategy. Archaeological discoveries, mainly in Europe and North America, indicate that Pleistocene cannibalism, while not as rare as previously thought, was nevertheless not widespread, which suggests either that it had drawbacks as a survival strategy, or else that more instances have simply not yet been uncovered.
3. CANNIBALISM AND THE EARLY HUMAN DIET
It has been argued that meat became part of the diet of humans only in the late Pleistocene period, that is, about 10,000-11,000 years ago (Larsen 2003: 3893S; Pobiner 2013: 1-13), which means that cannibalism was a relatively new cultural and nutritional experiment in human social evolution. Garn and Block observed that 'While human flesh may serve as an emergency source of both protein and calories, it is doubtful that regular people-eating ever had much nutritional meaning' (Garn and Block 1970: 106). Cole took their observation a step further by evaluating the nutritional value of human and other mammalian flesh. This paper then links cannibalism to early humans' social reproduction and physiology.
The impact of cannibalism on nutritional health, and hence on female fertility, taken together with Slater's hypotheses about human societal development (Slater 1959: 1042-1059), suggests the influence cannibalism may have had on the development of early human society. This broadens the relevance of archaeological research into cannibalism, which generally stops after the description and analysis of instances of cannibalism. Slater's 'interactional model' of the influence of early human behaviour on group survival links cannibalism with the dynamics of early human groups' reproduction and likelihood of survival.
4. SOME NUTRITIONAL AND HEALTH DISADVANTAGES OF CANNIBALISM
It was unlikely that early humans could have survived on a diet consisting exclusively of human flesh, for several reasons. First, since the human body consists of about 70 per cent water (personal communications from two orthopedic specialists and several general practitioners who wished to remain anonymous), its nutritional value per kilogram of butchered meat is probably lower than that of the animals most frequently preyed on by early man. Similarly, discussions with several butchers indicated that when a bovine is butchered for human consumption, only about 50-60 per cent of the animal's live weight is considered fit to eat. Consequently, late Pleistocene cannibals would probably have hunted human prey to extinction while deriving less nutritional benefit than they would have obtained from consuming other similarly-sized prey animals that existed at the time.
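To make this comparison concrete, the following sketch works through the kind of yield arithmetic implied above. Only the 50-60 per cent butchering yield comes from the butchers' estimates quoted here; the live weights, the application of the bovine yield figure to a human carcass, and the caloric density of lean meat are illustrative assumptions of my own, not measured values.

def edible_yield_kg(live_weight_kg, yield_fraction):
    # Weight of butchered meat considered fit for consumption.
    return live_weight_kg * yield_fraction

# Assumed live weights (kg); the 0.50-0.60 yield range is the butchers'
# bovine figure, applied to the human carcass purely for illustration.
human_meat_kg = edible_yield_kg(60, 0.50)     # about 30 kg
bovine_meat_kg = edible_yield_kg(400, 0.55)   # about 220 kg

KCAL_PER_KG = 1500  # assumed caloric density of lean meat, illustrative only
print(human_meat_kg * KCAL_PER_KG)   # roughly 45,000 kcal per human carcass
print(bovine_meat_kg * KCAL_PER_KG)  # roughly 330,000 kcal per bovine carcass

On such assumptions, a single bovine would have yielded several times the calories of a single human, which is the point the butchers were making.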
Second, given the relatively small human population at that time and the major climatic changes that affected both the European environment and its food animals, persistent cannibalism may have led to the extinction of the neighbouring human groups preyed on. Third, because Pleistocene man was much smaller than we are, he was probably not the optimum prey, nutritionally speaking. Moreover, being cognitively similar if not identical to his co-species predators, late Pleistocene man was probably difficult to hunt, and so other sources of animal protein would have been preferable when they were easier to hunt, thereby optimizing the energy expended in the hunt.
Fourth, as Slater showed, early human groups were compelled to marry out or mate outside their natal group for a number of biological and reproductive reasons. Hunting other human groups for food, therefore, would have reduced a group's long-term survival possibilities, by reducing the number of potential mates and/or sexual partners available.
Fifth, if cannibalism was persistent, the corollary is that more than one or two groups would have practiced it. This would have lowered a cannibalistic group's hunting efficiency, because of the need to be on guard against other predatory human groups in the normal course of hunting and gathering food. In other words, if persistent cannibalism was not an isolated phenomenon, it would have jeopardized the survival of Pleistocene man.
At the same time, cannibalism could have been nutritionally beneficial, since cannibalistic hunters would have consumed flesh with the same nutritional and chemical make-up as their own bodies. This would in all likelihood have meant that human flesh was easily digestible, and so the questions of the quantity and the quality of the animal protein in the human body are important. It has not been possible to establish exactly what these two factors would have been, despite discussing the topic with doctors and dieticians. Perhaps, in time, the information will become available. In the meantime I have had to rely on butchers' estimations. It should also be noted that although meat is extremely rich in protein, honey and some vegetable foods
provided more calories than meat in the diet of the Hadza of Tanzania (Berbesque 2010: 9-11).
5. CANNIBALISM OUTSIDE OF EUROPE
Evidence from other parts of the world suggests (Worrall 2017) that cannibalism occurred during times of economic, nutritional (Canessa and Vierci 2017), political (Hoffman 2014) or social stress, such as among the Anasazi of the American Southwest in the thirteenth century (Turner 2009) and the early British settlers in America (Horn et al. 2013). Examples have been documented as recently as the 1960s and 1970s (Reinhard 2006; Verano 2000): Michael Rockefeller, who was reportedly eaten by the Asmat of New Guinea in 1961 (Hoffman 2014), and the Uruguayan rugby players whose plane crashed in the Andes in 1972 (Canessa and Vierci 2017).
Additionally, cannibalism may be influenced by social customs, as among the Fore of New Guinea, where it was found to be a way of showing love and expressing grief at a kinsman's passing (Whitfield et al. 2008). Elsewhere in New Guinea, cannibalism was thought to prevent the supernatural harm to a person's descendants that could result from the decomposition of his buried body (Mathews 1965; Mathews, Glasse, and Lindenbaum 1968). Whitfield et al. (2008) state that in New Guinea only women consumed the body parts of their deceased kin, so the likelihood of men contracting kuru disease was diminished. Finally, it should be remembered that human body parts were sold in Europe as medicines as late as the 1800s (Stuart 2006), and there were documented instances of cannibalism among the defenders of Leningrad during the Second World War (Salisbury 1985: 595; Kirschenbaum 2006). Thus the prohibition on cannibalism seems to be sociological in origin, as has conventionally been argued. I intend to supply additional reasons.
Entomologists have argued that among parasitic insects cannibalism can reduce the spread of parasite-borne disease by reducing the number of carriers (Van Allen et al. 2017). The only human example so far that may support this contention is the Fore of New Guinea, where recorded deaths from kuru declined. The decline in Fore deaths from kuru was attributed to a decline in the incidence of cannibalism (Uhrlass 2019), a conclusion which may obviously be contested, since the 'drop' may simply reflect inaccurate reporting of the phenomenon.
6. THE NUTRITIONAL VALUE OF HUMAN FLESH
A problem encountered in the research was that very few nutritionists and doctors were prepared to discuss the topic, and most had never
considered the nutritional value of cannibalism. In fact, the only people able and willing to discuss the topic intelligently were butchers, on the basis of their experience with flensing animals and preparing animal protein for human consumption. Several butchers who raised free-range bovines for supplementary income held the opinion that human flesh probably contained fewer calories by weight than other types of mammal protein, mainly because bovines put on weight and mature far more rapidly than humans. All of them said that this required scientific investigation, which does not seem particularly difficult to carry out.
Interestingly, Cole also thought that human flesh was so much less nutritious than other faunal meats that social and cultural factors best explained Pleistocene cannibalism (Cole 2017a: 1). He concluded that a single human body would have provided a Pleistocene group 'of [about 25] anatomically modern humans' (Ibid.: 5) with only about one-third of the calories it required for a day's sustenance, taking 3,788.5 calories as each adult's daily requirement (Ibid.: 5). This was roughly the quantity of calories provided by a saiga antelope, but more than that of the prey animals whose remains are most commonly found together with human bones in the European Pleistocene archaeological digs he discussed (Ibid.: 5).
According to the United States Department of Health (USDH 2015), however, active males between the ages of 16 and 60 need between 2,600 and 3,200 calories per day, so Cole's figure seems high, especially since Pleistocene man was smaller than we are. His calculations of the caloric value of the various fauna found in several Pleistocene burial sites in Europe are reasonable (Cole 2017b: 1-17), and suggest that hunting, butchering and eating a human yielded a smaller nutritional return on the energy expended than did other similar-sized fauna, although he correctly cautioned against simplistic '"nutritional" or "symbolic"' explanations of cannibalism. He noted the importance of ritual and aggressive motivations for cannibalism.
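As a rough cross-check on these figures, the following sketch scales the two per-person daily requirements (Cole's 3,788.5 calories and the USDH range of 2,600-3,200) to the group of about 25 individuals that Cole discusses, and then applies his conclusion, as summarized above, that one human body supplied roughly one-third of a day's group requirement. The group size and the daily figures come from the text; everything else is simple arithmetic.

GROUP_SIZE = 25  # the group size Cole uses

def group_daily_need(kcal_per_person):
    # Total calories the whole group needs per day.
    return GROUP_SIZE * kcal_per_person

cole_daily = group_daily_need(3788.5)   # about 94,700 kcal per day
usdh_low = group_daily_need(2600)       # about 65,000 kcal per day
usdh_high = group_daily_need(3200)      # about 80,000 kcal per day

# One human body supplying one-third of Cole's daily group requirement
# implies a yield in the region of 30,000 kcal, which is consistent with
# his comparison to the yield of a saiga antelope.
print(round(cole_daily / 3))            # about 31,571 kcal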
Most of the human bones found in European burials together with other eaten fauna appear to have been ritually consumed (Bello et al. 2015), so the fact that Zulu witches consume only selected body parts supports Cole's contention that cannibalism may not have been for nutritional purposes, since Pleistocene cannibalism in Europe was restricted to only a few body parts.
7. OTHER NUTRITIONAL DRAWBACKS OF CANNIBALISM
There are a number of nutritional drawbacks to occasional or episodic cannibalism. First, as Cole notes, the consumption of human brains can lead to kuru disease (a type of transmissible spongiform encephalopathy) among humans (Mead et al. 2003), although in the long term cannibalism might select for genetic resistance to this disease (Mead et al. 2003). Second, a predominantly meat (or human flesh) diet, like the rapid weight-loss regime popular for brief periods in the 1970s and 1980s (the so-called 'Atkins diet'), causes malnutrition, kidney problems, constipation, osteoporosis, and an increased risk of cancer and heart disease (Scharffenberg 1979: 15-48).
Other adverse effects were violence and anti-social behaviour (Hippchen 1978: 3-19), which the researchers ascribed to the fact that meat-rich diets lack essential carbohydrates, roughage, vitamins and amino acids, all of which are important for cognitive development. All the contributors to Hippchen's book stressed the powerful impact of nutritional deficiencies on common learning disabilities and behavioural disorders conventionally treated with medications and soporifics (cf. Crawford and Marsh 1995).
The contributors contended that poor nutrition caused biochemical imbalances in the brain whose importance as causes of delinquent behaviour had been neglected (Hippchen 1978). Similarly, the nutritionists Crawford and Marsh (1995: 12) argued that 'it is not just the amount of food but its qualitative composition which matters' and influences human physiology and evolution. Long-term purely cannibalistic diets, therefore, could have increased the likelihood of group extinction through malnutrition, by further shortening the already brief life-expectancy of Pleistocene man, reducing female fertility (Stanton 2001) and exacerbating anti-social behaviour within the cannibalistic group. Malnutrition, it should be remembered, does not mean simply a lack of food: it means an unbalanced diet; inadequate food is 'under-nutrition.'
The nutritional deficiencies of cannibalism as part of the meat component of a mixed diet could have been ameliorated by the vegetable components. This assumption is borne out by the generally good nutritional status of cattle-keepers who have ready access to bovine flesh.1 It should be noted, however, that just because a society is pastoralist does not necessarily mean that a great deal of meat is consumed. For example, the Zulu keep cattle but do their utmost not to eat them, even when this is ritually prescribed. If a Zulu man is diagnosed as suffering from witchcraft, this means that his behaviour has caused his ancestors to withdraw their protection against witchcraft. To restore their protection a beast must be sacrificed. I witnessed several instances of goats being sacrificed instead of bovines, with the sacrificer apologizing for this, saying it was due to economic necessity!
In the 1970s, the Zulu consumed a reasonable amount of meat in the form of chicken, goat and buck (from hunting). Thus we must remember that cannibalism was only one of a number of possible survival strategies open to Pleistocene man, which included hunting, foraging, pastoralism and agriculture. Some mixture of these strategies probably provided the optimal survival strategy.
A further, related problem is that in hunting and gathering societies, an often appreciable amount of meat and foraged vegetable food is consumed during hunting and foraging itself (Berbesque et al. 2016). Berbesque et al., in fact, found that Hadza men consumed 90 per cent of their estimated daily calorie expenditure on hunting and gathering during food collection (2016: 1, 3-6). Interestingly, women consumed far fewer calories during foraging than did men, which may have impacted their health and fertility. Berbesque (2010: ix; 17) also found that Hadza women were far less partial to meat than Hadza men. She also found that Hadza women ate more frequently than men (Ibid.: x). John Marshall's (1957) ethnographic film on the Ju/'hoansi !Kung also shows that men on the hunt eat meat to replenish the energy expended in the chase.
In the absence of other detailed ethnographic analyses of sexual differences in food preferences, the Hadza example is a warning against unexamined ethnographic assumptions, while at the same time raising important but as yet unanswered questions regarding the relationship between nutrition, gendered food preferences, and the impact of diet on health and fertility in small-scale societies. Moreover, Berbesque (2010: 4-5) argues that human males specialized in muscle-formation, while human females specialized in body-fat, which is important for fecundity (Berbesque and Marlowe 2009: 754; Berbesque 2010: 19-21; 37; Berbesque, Marlowe, and Crittenden 2011: 339; 345).
There is, however, presently no way of knowing whether human flesh is nutritionally superior or inferior to faunal meat sources. Possibly the body-mass indices of deceased people who underwent post-mortem examinations or autopsies could be derived by subtracting the weight of the organs removed, which are usually weighed, from the total body weight. This would be a source of information that would not infringe the Helsinki guidelines.
What we do know is that carnivores, including man, who consume substantial quantities of meat, mature more rapidly than animals subsisting largely or entirely on vegetable matter (Kralj-Cercek 1956). This has some unsuspected consequences. In Japan, for example, young girls whose diet was 'American,' that is high in meat, had their menarche before the age of 13, and were subsequently over four times more likely to develop breast cancer than young women whose diet
was more traditional, that is, with a higher vegetarian component. Young Japanese women whose diets did not include substantial amounts of meat first had their menarche at age 19 (Carrol 1975, 1977; Hirayama 1979; Scharffenberg 1979: 39). Other studies showed that as the consumption of animal fats increased, so did the incidence of breast, prostate and colon cancers (Reddy and Wynder 1973; Scharffenberg 1979: 39-48; Wynder 1977). If cannibalism was a major component of the Paleolithic diet, therefore, it would have impacted indirectly on fertility, by lowering both adult female survival rates and male fertility.
Group extinction during the Pleistocene period because of a meat-rich diet, including human flesh, that stimulated early maturation, violent behaviour and cancers was thus not impossible, although other factors seem to have been at least as important, as Slater contended. It may even be concluded that, since Pleistocene man had substantial, even overwhelming, competition from larger predators, his reliance on vegetable matter aided his physical survival and diminished the necessity for him to rely on cannibalism. At the same time, the relatively small size and structure of the social groups where archaeological evidence of cannibalism is present (Cole 2017a: 4) suggest that the question of how such groups reproduced and persisted over time needs to be studied, which was Slater's concern.
8. 'ECOLOGICAL FACTORS IN THE ORIGIN OF INCEST'
The title of Slater's paper is misleading, as it suggests that ecological factors led to incest, whereas she argued that certain limitations of human reproductive physiology compelled the development of exogamy and the incest taboo. It must be borne in mind that Slater's article was written just after a vigorous anthropological debate about the function of the incest taboo. The structural functionalists argued that the taboo promoted inter-group integration by preventing intra-group sexual competition, while the 'Leslie White-Levi-Strauss school' argued, like Tylor several decades before them, that groups that did not marry out would have died out, and that exogamic affinal ties promoted alliances that contributed to the survival of groups that exchanged female spouses or sexual partners. This insight seems pertinent when considering the sociological implications of Pleistocene cannibalism.
Slater's contribution to our understanding of group survival in the Pleistocene is her two hypothetical outcomes of group reproduction that she developed from the groups' age- and sex-structure, and physiological factors such as females' age at menarche, the duration of female
fertility, the birth order and sex of children, and individual life-expectancy. These factors governed the amount, frequency and physical possibility of intra-group incest. When this hypothesis is joined with the potential nutritional limitations of cannibalism, it appears that a group that ate its human neighbours would have eliminated potential mates who were important for its physical survival, simply because of the nature of human reproductive physiology.
Slater derived her models from ethnographic accounts of life-expectancies and fertility-patterns in several 'primitive' societies known of when she wrote her paper (Slater 1959: 1051-1052). From these data she concluded, based on Krzywicki (1934), that 'the life of the most ... primitive people is short, few children reach maturity ... the reproductive period is reduced,' and that there were high rates of child mortality, rather than infant mortality (which, we may note, would have been especially wasteful of the group's energy spent on hunting and gathering).
The final factor in Slater's model was a relatively early decline in women's health and hence fertility (Slater 1959: 1051). These factors, together with abortion and infanticide, which probably occurred, as indicated by the research on San fertility mentioned below, led to 'primitive' women having an average of three surviving children, who were suckled for three years (Ibid.: 1051). This, incidentally, was also a common period of sexual abstinence among many African tribes when I did my fieldwork among the Zulu and other South African tribes in the 1970s.
Slater noted that one reason the Fijians gave their European interlocutors in the late nineteenth century for Fijian population decline after persistent European contact (in the 1880s) was the 'physical weakness of the women due to their having children in too rapid succession' (Krzywicki 1934: 93). Apart from showing that the Fijians were not simply 'ignorant primitives,' this remark reminds us that population growth does not 'just happen,' but results from deliberate individual choices no less than from ecological and health factors. While anthropologists and archaeologists seldom have knowledge of the impact of such factors, they should not ignore their potential influence on group size and population growth. Krzywicki noted (fn. 3, 91) that among the Chukchee of Siberia infant mortality among their average of five surviving children was over 50 per cent. While they are 'unusual Eskimos' in that their diet includes a substantial amount of dairy protein, their environment is probably very similar to that of Pleistocene man in Europe.
Another example of high infant mortality is the Dobe San. During a six-month stay among them in 1970 I observed or heard about five pregnancies in a group of 30 individuals. One infant was stillborn; one
was a case of infanticide; one pregnant mother aborted after less than three months' pregnancy, and two mothers carried their foetuses to term. The group comprised six married couples, each with two young children, two old women, two old men, and two young bachelors. The main reason for the three infant deaths was a severe drought, which meant that new mothers had insufficient milk to suckle a child for three years.
More research into women's control of their fertility is required before we can generalize, but the !Kung example suggests that women in small-scale societies have more control over their fertility than is often realized. One of the old women remarked that it made no sense to weaken a woman by suckling a baby that had little chance of survival. Bell (2006: 5-7) made a similar argument, while Blurton-Jones argued that a four-year interval between births was optimal for population increase among the !Kung of Namibia and Botswana (Blurton-Jones 1986: 91-105). Bentley contended that the physical stress of food-gathering lowered female fertility among the !Kung (Bentley 1985: 79-109; cf. Howell 1979). It seems, therefore, that Slater's conclusion about the relatively low number of potential incestuous births in 'early human groups' may even have been over-optimistic. Two of the researchers cited in this paragraph are women, and the subject obviously calls for more female representation in any debate about women's control of fertility, especially the use of methods that may be regarded as ethnocentrically unacceptable, such as infanticide.
9. THE SURVIVAL IMPLICATIONS OF CANNIBALISM IN EARLY HUMAN GROUPS
We therefore argue that in the early stages of the development of human society, along with limited or what I would term 'occasional' cannibalism, several factors related to human reproductive physiology determined whether intra-group incest would have ensured or jeopardized group size and survival. To Slater's factors we may add the impact on group survival of hunting and prey-consumption patterns, including cannibalism. It should not be forgotten that cannibalism is just one type of food acquisition and consumption, even though researchers usually view it with (mostly ethnocentric) attitudes ranging from horror to cynical amusement. Additionally, an important factor impacting on female fertility not fully considered by Slater, although the material was available when she wrote her article, is adolescent sterility (e.g., Ashley-Montagu 1939), which can reduce fertility for protracted periods, especially in small groups.
Before we continue with Slater's model, several further observations about Pleistocene cannibalism are pertinent. First, persistent cannibalism in the Pleistocene, a period characterized by a relatively sparse human population, would have reduced the number of extra-group human sources of mutual assistance, trade and sexual partners, which were important because of human reproductive physiology. Second, there may not have been much human prey available. Third, the possibility of the transfer of prion-based diseases like kuru would have further reduced the Pleistocene population.
Garriga et al. remind us that during the early Pleistocene, hominin predators in Western Europe competed relatively unsuccessfully with hyaenids and felids for meat in a rapidly changing environment and climate. This 'made meat consumption a key resource for the adaptive possibilities of local hominins' (Garriga et al. 2016: 22-24). They also thought that '... cannibalism may have been an adaptive strategy to alimentary stress-situations, a practice aiming at preventing carnivorans from scavenging hominin bodies once the site was abandoned, or [it] may have resulted from intergroup competition.'
Scavenging is a complementary strategy for food procurement, but is unlikely to have provided sufficient food by itself (Pobiner 2013: 6). This prompts us to paraphrase Tylor (1889: 267-269): primitive man would have had the choice of 'eating out,' quite literally, or dying out. Occasional or opportunistic cannibalism, which archaeologists usually call 'ritual cannibalism,' may therefore explain why cannibalism was less frequent than the consumption of other types of mammalian meat. As such it would have been a supplementary source of protein in the Pleistocene diet, as Garn and Block noted.
Returning to the Zulu, the fact that Zulu witches ate only selected body parts of their victims, particularly the brain, supports Cole's archaeological evidence of human gnaw marks on human skulls, and his contention that Pleistocene cannibalism was ritual. It should be noted, however, that the brain is a poor source of protein, although it is rich in fat, as Cole himself shows (Cole 2017a: Tables S1 and S4). Thus one reason for the (ritual) consumption of human brains may have been that, although not very nutritious, they were simply available for consumption. Garriga et al. (2016) also contend that the presence of both animal and human gnaw marks on faunal bones '... suggests that hominins and carnivorans may have competed for the access to the carcasses or, alternatively, that hominins were exposed to the limitations [on carcass-scavenging] imposed by carnivorans.'
While humans could have scavenged human carcasses killed by carnivores, early human scavengers would have derived limited nutritional utility from what remained of such carcasses. Thus Garriga et al. observed that carcasses killed by carnivores did not show human-made cut marks 'on the more nutritious elements after carnivoran intervention' (Gidna, Yravedra, and Domínguez-Rodrigo 2012). This means that after the carnivores, and the scavengers that followed them, either there was very little usable meat left over, or else only the less nutritious parts remained. This suggests that food shortages would only have been partly met by even occasional, opportunistic cannibalism, as Garn and Block contended.
Moleón et al. (2014: 394-403) discussed the neglected interactions between human, animal and avian scavengers, all of whom took their nutritional toll of what remained after predator kills and the predators' consumption of their prey. Avian scavengers are usually forgotten in analyses of scavenging. Ferraro et al. (2013: 3-5) showed that in African Pleistocene sites, hominins scavenged and hunted mainly medium-sized bovines, rather than larger ones, probably because of the technological and physiological limitations of hominins at an early stage of their evolutionary development. It is, of course, likely that smaller bovine prey animals would have been younger and their flesh possibly tastier and softer. Pleistocene European cannibalistic predation on other humans seems very similar to African predation on medium-sized bovines, with all its competitive disadvantages. We therefore conclude that cannibalistic scavenging was probably rare because other subsistence strategies were more advantageous for early human survival.
10. SOCIAL-PHYSIOLOGICAL REASONS THAT CANNIBALISM WAS A POOR STRATEGY FOR HUMAN SURVIVAL DURING THE PLEISTOCENE
Slater argued that the functionalist and alliance theorists' 'explanations of the incest taboo were incorrect, because they equated function with ultimate motive or origin: we have rules for the sake of (whose result is) a certain end,' but these explanations did not always specify the process by which the ends were identified or achieved (Slater 1959: 1045). Slater constructed her model to show that group survival with persistent intra-group mating was either impossible or only marginally possible. Her model is based on the social interactions that must have preceded the establishment of social rules like exogamy. She made the following assumptions regarding early human social groups, which were probably very similar to the Pleistocene groups discussed above:
Fig. 1. Slater's Models of Early Human Female Reproductive Physiology
Scenario 1:
• Female menarche at the age of 15 years.
• Length of adolescent sterile period after menarche was two years.
• Duration of suckling period was three years.
• Life-expectancy was about 50 years.
Scenario 2:
• Female menarche at the age of 13 years.
• No adolescent sterile period after menarche.
• Duration of suckling period was two years.
• Life-expectancy was about 50 years.
These assumptions translate into two scenarios of Pleistocene women's reproductive life that appear in Figure 1.
Scenario 1:
Scenario 1 shows that a young Pleistocene woman bore her first child when she was aged 19, her second when she was 23, her third when she was 27, her fourth when she was 31, her fifth when she was 35, and her sixth when she was 39. If her first child was a girl, her second a boy, her third another girl, her fourth another boy, her fifth a boy and her sixth a girl, she could have mated with her first son when she was 41 years old, and borne him one child. At the same time, her husband could have mated with her first daughter when she was 34 and with her third daughter when she was 44. She could have suckled the child she bore her son, but it is dubious whether her ageing husband could have provided sufficient food for the augmented family. While Pleistocene groups probably shared their food, as Bell argued, it cannot simply be assumed that they did.
Thus a 'married couple' could not have produced more than a theoretical maximum of eight offspring (her and her husband's six, plus two more from incestuous matings), assuming zero infant and child mortality. The Chukchee and San ethnographic evidence, however, suggests that it is more reasonable to assume that more than one of her children would have failed to reach child-bearing or child-fathering age.
Scenario 2:
In this scenario, a young Pleistocene woman bore her first child when she was aged 15, her second when she was 18, her third when she was 21, her fourth when she was 24, her fifth when she was 27, and her sixth when she was 30. Her seventh child would have been born when she was 33, her eighth when she was 36, her ninth when she was 39 and her tenth when she was 42. If her first child was a girl, her second a boy, her third another girl, her fourth another boy, her fifth a boy and her sixth a girl, she could have mated with her first son when she was 32 years old, but could have borne him only one child if her husband kept her persistently pregnant. At the same time, her husband could have mated with her first child when she was 30, with her third child when she was 33 and with her sixth child when she was 42. Thus a 'married couple' could have produced a maximum of ten offspring, assuming zero infant and child mortality. Again, it is more reasonable to assume that at least one of her children would not have reached child-bearing or child-fathering age, thereby increasing the precariousness of Pleistocene human survival.
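The two scenarios reduce to a simple birth-spacing calculation: a new birth becomes possible roughly one year (an assumed gestation term) after the previous child is weaned. The following sketch reproduces the birth ages listed above; the first-birth ages of 19 and 15 are taken directly from the text, and the cut-off at age 42 simply reflects the latest birth age mentioned, not anything stated by Slater.

def birth_schedule(first_birth_age, suckling_years, gestation_years=1, last_birth_age=42):
    # Ages at which births are possible, assuming conception follows
    # immediately after the previous child is weaned.
    births = []
    age = first_birth_age
    while age <= last_birth_age:
        births.append(age)
        age += suckling_years + gestation_years
    return births

# Scenario 1: first birth at 19, three years of suckling per child.
print(birth_schedule(19, 3))  # [19, 23, 27, 31, 35, 39] - six births
# Scenario 2: first birth at 15, two years of suckling per child.
print(birth_schedule(15, 2))  # [15, 18, 21, 24, 27, 30, 33, 36, 39, 42] - ten births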
11. SUMMARY OF SLATER'S MODEL
Scenario 1 showed that a woman could have given birth to eight children, and Scenario 2, to ten. If her first child was a girl and her second a boy, she could have borne her son a child. In terms of group survival, it does not matter whether her mate or her son impregnated her with her second and subsequent children. Poor nutrition and the physical stress of gathering would probably have lowered female fertility, while excessively good nutrition, especially if predominantly meat, would have both raised fertility and increased the likelihood of earlier female mortality. This suggests that the sociological processes generated by Slater's model may indeed have occurred in the Pleistocene, though for different reasons. If a large number of offspring was a critical factor in the survival of early humans, exogamy was selectively superior, especially if persistent cannibalism had lowered group fertility.
12. THE IMPLICATIONS OF PRESENT-DAY REPRODUCTIVE STATISTICS FOR SLATER'S MODEL 2
While it may seem odd to compare Stone Age and modern fertility, it is done simply to stress the fact that women's control of their fertility takes a number of forms, and is not paid sufficient attention by anthropologists and demographers. The CIA World Fact Book's (CIA N.d.) statistics on infant mortality and mothers' age at first childbirth in Third World countries indicate an infant mortality rate of between 33 and 34 deaths per 100,000 live births, or about 0.008 deaths per 25 live births. It should be noted, though, that the infant mortality rates reported ranged from 10 to 113 per 100,000. This figure translates into 0.3 deaths per 25 births, which shows that Slater's figures are not unreasonable, although she did not take into consideration abortions and infanticide, which are apparently also absent from the CIA's statistics.
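The conversion used here is simply a rescaling of a population rate to a small cohort; the following sketch shows it for the 33-34 per 100,000 figure, taking the rate as reported above.

def expected_deaths(rate_per_100k, n_births):
    # Expected infant deaths in a cohort of n_births, given a rate
    # expressed per 100,000 live births.
    return rate_per_100k / 100_000 * n_births

print(expected_deaths(33.5, 25))  # about 0.008 deaths expected per 25 births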
I therefore suggest that Slater's fertility figures in both scenarios tend to be high, but do not invalidate her thesis. Elsewhere, the CIA reported on the sexual composition of infant mortality, which indicated that in every country more male infants than female infants died at birth. This, again, suggests that her figures for live births should be lower, especially since fewer men in the group would have meant less meat in the Pleistocene diet and hence lower female fertility. While caution is necessary when theorizing, both the influence of nutrition on female fertility and infant mortality (including infanticide) must have been important factors in the reproductive physiology of Pleistocene groups.
13. THE IMPLICATIONS OF SAN REPRODUCTIVE BEHAVIOUR FOR THE SURVIVAL OF SMALL GROUPS
Nancy Howell's path-breaking book about the factors that impacted on Dobe !Kung demography (Howell 1979) showed the importance of factors like nutrition, age structure, 'the occupational hazards of the hunting and gathering way of life' (Ibid.: 54-59), 'interpersonal violence, homicide and suicide' (Ibid.: 59-63), and disease. Howell (Ibid.: 62-68) demonstrated very clearly that there are more factors involved in human reproductive success than those considered by Slater. Again, this suggests that Slater's fertility figures were over-optimistic.
A recent UN study of contemporary adolescent fertility patterns worldwide showed that they have declined since 1990, because of increased educational levels and contraception (UN 2013). This shows that women regulate their fertility, which is underlined by the report's finding that 'most adolescent childbearing occurs within marriage' (Ibid.). Montagu, in an early study of adolescent sterility, made the important point that the human menarche does not mean that young women are capable of child-bearing, merely that they have reached the stage of ovulation in their reproductive development (Ashley-Montagu 1939: 14).
He cited ethnographic accounts reporting several years of adolescent sterility and induced abortions (Ashley-Montagu 1939: 14-17), as well as prolonged adolescent sterility in rhesus monkeys and chimpanzees (Ibid.: 24-34). The close genetic similarity between humans and other primates may also be reflected in human fertility, even though human females ovulate more regularly and more frequently than other primate females, and are also permanently sexually receptive, rather than periodically in oestrus. Thus human adolescent sterility could have lasted longer than Slater thought, and persistent intra-group mating among Pleistocene man would probably have led to his extinction.
Finally, a UN report on adolescent fertility noted, with blinding logic, that the earlier a young woman marries, the higher the number of children she is likely to bear (UN 2013: 10-11). The report also devoted a substantial section to the adolescent demand for contraceptives (Ibid.: 210-228). Again, this shows that many women desire to control their fertility, and so it is not inconceivable that similar fertility control could have existed among Pleistocene women and reduced the number of children they bore.
14. CONCLUSIONS
We have argued that the episodic rather than persistent nature of Pleistocene cannibalism, in Europe at least, was influenced by sociological, nutritional, physical and physiological factors. The physiological factors included the variables isolated by Slater, namely the age of female menarche, the period of adolescent sterility after the menarche, the duration of the suckling period, the sexual birth order of a woman's children, and the average life-expectancy of Pleistocene humans. We suggested that the number of children might have been lower than Slater's estimates, given contemporary adolescent reproductive behaviours, mainly because insufficient weight was given to the preferences of the child-bearers, who used abortion, contraception and infanticide to limit the number of children they bore, and to the physical stress of hunting and gathering. The relatively small stature of Pleistocene hunters and the much larger size of the major predators of the time, especially the felines, must also have impacted on both hunting and physical survival.
The impact of 'aboriginal' fertility-limiting practices was borne out in the 1970s by a number of Zulu women who talked freely to me about traditional methods of contraception and of aborting immature foetuses. Most Zulu methods were a combination of herbal infusions and vaginal flushing with various vegetable-based concoctions. Additionally, Zulu women also claimed to be able to determine the sex of their children by specific dieting and vaginal rinses prior to intercourse with their husbands or lovers.
Slater's model seems rather naive in the face of this example; indeed, she could have drawn on better ethnographic sources that were available in 1959 than those she used. Her model is nonetheless useful, but needs refining. Nutritional deficiencies would have depressed Pleistocene female fertility and threatened intra-group cohesiveness. While the sociological and nutritional suppression of fertility are seldom reflected in archaeological remains, they need to be considered.
It seems likely that the physiology of human reproduction compelled Pleistocene man to look for at least some sexual partners outside his natal group, and hence made persistent cannibalism a sure recipe for group extinction, for the reasons given above. New Guinean cannibalism seems to have been associated with the ritual consumption of victims of raids. This, together with the possibly poor nutritional value of human flesh and the predatory (Speth 1989: 329-343) and scavenging competition between different human groups and animal carnivores in the same geographical area, made inter-group co-operation a better survival strategy than consuming one's neighbours. This was especially true given that humans during the Pleistocene were probably at a disadvantage vis-à-vis the larger animal predators, which could have eaten most of the humans available for breeding and for feeding the group.
Given that the early human population in Europe was probably quite small, it is also possible that there were simply not enough nutritionally-poor humans available for cannibalistic Pleistocene groups to survive on. This would explain why so few instances of cannibalism have been found to date. Marrying out, rather than 'eating out' in the way we do today, was thus the optimal survival strategy for early man, as it is for us.
ACKNOWLEDGEMENTS
I am indebted to my wife for her comments and diligent proof-reading of earlier drafts and to colleagues at the Safed Academic College who critiqued my ideas in informal discussions before I began to write.
NOTES
1 I am indebted to an anonymous reviewer for raising this point.
2 This section is based on data drawn from the CIA World Fact Book. At https://www.cia.gov/library/publications/the-world-factbook/geos/bg.html. Accessed November 27, 2019.
REFERENCES
Ashley-Montagu, M. F. 1939. Adolescent Sterility. The Quarterly Review of Biology 14 (1): 13-34.
Bell, D. 2006. Bands, Fertility and the Social Organization of Early Humans. Social Evolution and History 5 (2): 3-23.
Bello, S. M., Saladie, P., Caceres, I., Rodriguez-Hidalgo, A., and Parfitt, S. A. 2015. Upper Palaeolithic Ritualistic Cannibalism at Gough's Cave (Somerset, UK): The Human Remains from Head to Toe. Journal of Human Evolution 82: 170-189. DOI: 10.1016/j.jhevol.2015.02.016.
Bentley, G. R. 1985. Hunter-Gatherer Energetics and Fertility: A Reassessment of the !Kung San. Human Ecology 13 (1): 79-109.
Berbesque, J. C. 2010. Sex Differences in Food Preferences, Eating Frequency, and Dental Attrition of the Hadza. PhD. Thesis, Tallahassee: Florida State University. URL: http://diginole.lib.fsu.edu/etd/1362. Accessed June 21, 2020.
Berbesque, J. C., and Marlowe, F. W. 2009. Tubers as Fallback Foods and Their Impact on Hadza Hunter-Gatherers. American Journal of Physical Anthropology 140: 751-758.
Berbesque, J. C., Wood, B. M., Crittenden, A. N., Mabulla, A., and Marlowe, F. W. 2016. Eat First, Share Later: Hadza Hunter-Gatherer Men Consume More While Foraging than in Central Places. Evolution and Human Behavior 37 (4): 281-286. URL: http://dx.doi.org/10.1016/j.evolhumbehav.2016.01.003.
Berbesque, J. C., Marlowe, F. W., and Crittenden, A. N. 2011. Sex Differences in Hadza Eating Frequency by Food Type. American Journal of Human Biology 23 (3): 339-45.
Blurton-Jones, N. 1986. Bushman Birth Spacing: A Test for Optimal Interbirth Intervals. Ethology and Sociobiology 7 (2): 91-105.
Canessa, R., and Vierci, P. 2017. I had to Survive. New York: Atria Books and Simon and Schuster.
Carrol, K. K. 1975. Dietary Factors in Hormone-Dependent Cancers. Cancer Research 35: 3374-3383.
Carrol, K. K. 1977. Dietary Factors in Hormone-Dependent Cancers. In Winick, M. (ed.), Nutrition and Cancer (pp. 24-40). New York: John Wiley and Sons.
CIA - Central Intelligence Agency. N. d. CIA World Fact Book. URL: https://www.cia.gov/the-world-factbook/. Accessed November 27, 2019.
Cole, J. 2017a. Assessing the Calorific Significance of Episodes of Human Cannibalism in the Palaeolithic. Scientific Reports 44707: 1-10. DOI: 10.1038/srep44707.
Cole, J. 2017b. Assessing the Calorific Significance of Episodes of Human Cannibalism in the Palaeolithic, Supplementary Methods: Calculating the Calorie Value of the Human Body. Scientific Reports 44707: 1-17. URL: http://www.nature.com/srep. Accessed November 22, 2019.
Crawford, M., and Marsh, D. 1995. Nutrition and Evolution. New Canaan, Connecticut: Keats Publishing.
Ferraro, J. V., Plummer, Th. W., Pobiner, B. L., Oliver, J. S., Bishop, L. C., Braun, D. R., Ditchfield, P. W., Seaman, J. W., Binetti, K. M., Seaman, J. W. Jr., Hertel, F., and Potts, R. 2013. Earliest Archaeological Evidence of Persistent Hominin Carnivory. PLoS ONE 8 (4): 3-5. DOI: 10.1371/journal.pone.0062174.
Garn, S. M., and Block, W. D. 1970. The Limited Nutritional Value of Cannibalism. American Anthropologist 72 (1): 106.
Garriga, J. G., Martínez, K., and Yravedra, J. 2016. Hominin-Carnivoran Adaptive Strategies in Western Europe during the Early Pleistocene. Archaeology, Ethnology and Anthropology of Eurasia 44 (2): 19-29. DOI: 10.17746/1563-0110.2016.44.2.019-029.
Gidna, A., Yravedra, J., and Domínguez-Rodrigo, M. 2012. A Cautionary Note on the Use of Captive Carnivores to Model Wild Predator Behavior: A Comparison of Bone Modification Patterns on Long Bones by Captive and Wild Lions. Journal of Archaeological Science 40 (4): 1903-1910. URL: https://doi.org/10.1016/j.jas.2012.11.023.
Hippchen, L. J. 1978. The Need for a New Approach to the Delinquent-Criminal Problem. In Hippchen, L. J. (ed.), Ecologic-Biochemical Approaches to Treatment of Delinquents and Criminals (pp. 3-19). New York: Van Nostrand and Reinhold.
Hirayama, T. 1979. Cancer Epidemiology in Japan. Environmental Health Perspectives 32: 11-15.
Hoffman, C. 2014. Savage Harvest: A Tale of Cannibalism, Colonialism and Michael Rockefeller's Tragic Quest for Primitive Art. New York: Harper Collins.
Horn, J. P. P., Kelso, W. M., Owsley, D. W., and Straube, B. A. 2013. Jane: Starvation, Cannibalism, and Endurance at Jamestown. Colonial Williamsburg Foundation and Preservation Virginia.
Howell, N. 1979. Demography of the Dobe !Kung: Evolutionary Foundations of Human Behaviour. New Jersey: Transaction Publishers.
Kirschenbaum, L. A. 2006. The Legacy of the Siege of Leningrad, 1941-1995: Myth, Memories, and Monuments. Cambridge: Cambridge University Press.
Kralj-Cercek, L. 1956. The Influence of Food, Body Build and Social Origin on the Age of Menarche. Human Biology 28: 393-406.
Krzywicki, L. 1934. Primitive Society and Its Vital Statistics. New York: Macmillan.
Larsen, C. S. 2003. Animal Source Foods and Human Health during Evolution. Journal of Nutrition 133 (11 Suppl 2): 3893S-3897S. DOI: 10.1093/jn/133.11.3893S.
Marlar, R. A., Banks, L. L., Billman, B. R., Lambert, P. M., and Marlar, J. E. 2000. Biochemical Evidence of Cannibalism at a Prehistoric Puebloan site in Southwestern Colorado. Nature 407: 74-78.
Marshall, J. 1957. The Hunters. Documentary Educational Resources. Watertown, Massachusetts.
Mathews, J. D., Glasse, R. M., and Lindenbaum, S. 1968. Kuru and Cannibalism. The Lancet 292 (7565): 449-452.
Mathews, J. D. 1965. The Changing Face of Kuru. An Analysis of Pedigrees and of Recent Census Data Collected by Robert M. Glasse and Shirley Glasse. The Lancet 29 (7396): 1139-1142.
Mead, S., Stumpf, M. P. H., Whitfield, J., Beck, J. A., Poulter, M., Campbell, T., Uphill, J. B., Goldstein, D., Alpers, M., Fisher, E. M. C., Collinge, J. 2003. Balancing Selection at the Prion Protein Gene Consistent with Prehistoric Kuru like Epidemics. Science 300 (5619): 640-643. DOI: 10.1126/science.1083320.
Moleón, M., Sánchez-Zapata, J. A., Margalida, A., Carrete, M., Owen-Smith, N., and Donázar, J. A. 2014. Humans and Scavengers: The Evolution of Interactions and Ecosystem Services. BioScience 64 (5): 394-403. URL: https://doi.org/10.1093/biosci/biu034.
Pobiner, B. 2013. Evidence for Meat-Eating by Early Humans. Nature Education Knowledge 4 (6): 1-13.
Reddy, B. S., and Wynder, E. L. 1973. Large Bowel Carcinogenesis: Fecal Constituents of Populations with Diverse Incidence Rates of Colon Cancer. Journal of the National Cancer Institute 50: 1437-1442.
Reinhard, K. 2006. A Coprological View of Ancestral Pueblo Cannibalism. Karl Reinhard Papers/Publications 26. URL: http://digitalcommons.unl. edu/natresreinhard/26. Accessed January 1, 2019.
Salisbury, H. E. 1985. The 900 Days: The Siege of Leningrad. Cambridge, MA: Da Capo Press.
Scharffenberg, J. A. 1979. Problems with Meat. Santa Barbara: Woodbridge Press.
Slater, M. K. 1959. Ecological Factors in the Origin of Incest. American Anthropologist 61 (6): 1042-1059.
Speth, J. D. 1989. Early Hominid Hunting and Scavenging: The Role of Meat as an Energy Source. Journal of Human Evolution 18 (4): 329-343.
Stanton, J. 2001. Listening to the Ga: Cicely Williams' Discovery of Kwashiorkor on the Gold Coast. Clio Medica: Studies in the History of Medicine and Health 61: 149-71.
Stuart, K. 2006. Defiled Trades and Social Outcasts. Cambridge: Cambridge University Press.
Turner, C. G. 2009. Man Corn: Cannibalism and Violence in the Prehistoric American Southwest. Salt Lake City: University of Utah Press.
Tylor, E. B. 1889. On a Method of Investigating the Development of Institutions; Applied to Laws of Marriage and Descent. Journal of the Anthropological Institute 18: 245-269.
USDH - U.S. Department of Health and Human Services and U.S. Department of Agriculture. 2015. 2015-2020 Dietary Guidelines for Americans. 8th Edition. December 2015. URL: https://health.gov/our-work/food-nutrition/previous-dietary-guidelines/2015. Accessed November 20, 2019.
UN - United Nations Department of Economic and Social Affairs, Population Division. 2013. Adolescent Fertility since the International Conference on Population and Development (ICPD) in Cairo. New York: United Nations Department of Economic and Social Affairs, Population Division.
Uhrlass, E. 2019. Post-Mortuary Cannibalism and the Case of Kuru in the Fore Tribe of Papua New Guinea. URL: http://idst190.web.unc.edu/2019/04/post-mortuary-cannibalism-and-the-case-of-kuru-in-the-fore-tribe-of-papua-new-guinea/. Accessed April 11, 2019.
Van Allen, B. G., Dillemuth, F. P., Flick, A. J., Faldyn, M. J., Clark, D. R., Rudolf, V. H. W., and Elderd, B. D. 2017. Cannibalism and Infectious Disease: Friends or Foes? The American Naturalist 190 (3): 299-312.
Verano, J. W. 2000. Paleontological Analysis of Sacrificial Victims at the Pyramid of the Moon, Moche River Valley, Northern Peru. Revista de Antropología Chilena 32 (1): 61-70.
Vilaça, A. 2000. Relations between Funerary Cannibalism and Warfare Cannibalism: The Question of Predation. Ethnos 65: 83-106.
Whitfield, J. T., Pako, W. H., Collinge, J., and Alpers, M. P. 2008. Mortuary Rites of the South Fore and Kuru. Philosophical Transactions of the Royal Society B 363 (1510): 3721-3724. DOI: 10.1098/rstb.2008.0074.
Worrall, S. 2017. Cannibalism - the Ultimate Taboo - Is Surprisingly Common. National Geographic, February 19, 2017. URL: https://news.nationalgeographic.com/2017/02/cannibalism-common-natural-history-bill-schutt/. Accessed April 08, 2019.
Wynder, E. L. 1977. Dietary Environment and Cancer. Journal of the American Dietetic Association 71 (4): 385-392.