Tag Archives: evolution

2.5 million year old Australopithecus manuport shows symbolic thought

The Makapansgat cobble

 

Found in a cave in Limpopo, South Africa, the Makapansgat cobble is a naturally occurring piece of reddish jasperite, recovered from cave breccia at the Makapansgat quarry along with Australopithecus bones. It is estimated to have been left in the cave 2.5 to 3 million years ago. Technically it is a manuport: a naturally occurring object that has been deliberately moved to a new location. It was found four kilometres from the nearest possible source, so it wasn’t carried back in a fit of absent-mindedness; carrying an object over 4 km has a reason. Presumably it was picked up by a hominid and carried back to the cave as a curiosity, because of its resemblance to a face and its unusual colour. It seems the hominid recognised it as a ‘face’ and took it home, an indication of symbolic thought. Then again, it may have been a ‘toy’ rather than ‘art’, as modern primates show a fondness for toys, and for the colour red in particular.

Isotope analysis of the bones from this site suggests that the Australopithecus in residence were eating meat as well as fruits and vegetables, and that they were exploiting open grassland for food. Absent from this stage are stone tools and the use of fire. It would seem that Australopithecus was clever enough to recognise and value a face, but not to create tools or use fire: an ape right on the verge of the Homo genus.

The consequences of agriculture on the human body.

An absolutely fascinating paper, not written by me!

 
The Consequences of Domestication and Sedentism

Emily A. Schultz & Robert H. Lavenda

(This interesting piece on the effects of agriculture is from the college textbook Anthropology: A Perspective on the Human Condition Second Edition. pp 196-200)

Sedentism and domestication, separately and together, transformed human life in ways that still affect us today.

“Our Land”

Sedentism and domestication represent not just a technological change but also a change in worldview. Land was no longer a free good, available to anyone, with resources scattered randomly across the landscape; it was transformed into particular territories, collectively or individually owned, on which people raised crops and flocks.  Thus, sedentism and a high level of resource extraction (whether by complex foraging or farming) led to concepts of property that were rare in previous foraging societies. Graves, grave goods, permanent housing, grain-processing equipment, as well as the fields and herds, connected people to places.  The human mark on the environment was larger and more obvious following sedentization and the rise of farming; people transformed the landscape in more dramatic ways–building terraces or walls to hold back floods.

Fertility, Sedentism, and Diet

One of the more dramatic effects of settling down was the change in female fertility and the rise in population.  A number of different effects together caused the population to grow.

Child Spacing Intervals   Among modern foragers, a woman’s pregnancies tend to be spaced three to four years apart because of the extended period of breastfeeding characteristic of these societies.  Extended means not just that children are weaned at three to four years of age but that they still nurse whenever they feel like it, as frequently as several times an hour (Shostak 1981, 67).  This nursing stimulus triggers the secretion of a hormone that suppresses ovulation (Henry 1989, 41). Henry points out that, “the adaptive significance of such a mechanism is obvious in the context of mobile foraging. A single child, who must be carried for some 3 to 4 years, creates a heavy burden for the mother; a second or third child within this interval would create an unmanageable problem for her and also jeopardize her health.”

There are many reasons that nursing continues for three to four years in foraging societies. The foraging diet is high in protein, low in carbohydrates, and lacks soft foods easily digestible by very young infants. In fact, Marjorie Shostak observes that among Ju/’hoansi (!Kung), a contemporary foraging people of the Kalahari Desert, bush foods are rough and difficult to digest:  “To survive on such foods a child would have to be older than two years–preferably substantially older.” (1981, 66).  (See EthnoProfile 19.1: Ju/’hoansi [!Kung]).  By having her child nurse exclusively for six months, a mother does not have to find and prepare food for the infant in addition to her ordinary routine. Among the Ju/’hoansi, infants over the age of six months are given solid foods in the form of prechewed or pounded foods, a supplement that begins the transition to solid food (67).

The length of time between children in foraging societies serves to maintain a long-term energy balance in women during their reproductive years.  In many foraging societies, adding the caloric requirements of nursing to the physical demands of mobility and the burden of food-gathering in the context of a high-protein, low-carbohydrate diet can keep the mother’s energy balance low. Where nutritional circumstances are marginal, the period of pregnancy and nursing can even constitute a net energy drain, resulting in a sharp drop in fertility. Under such circumstances, it will take the woman longer to regain her fertile condition. Thus, the period when she is neither pregnant nor nursing frequently becomes essential to building up her energy balance for future reproduction.

Fertility Rate Changes   In addition to the effects of breastfeeding, Ellison notes, age, nutritional status, energy balance, diet, and exercise all affect female fertility in a graduated way (1990).  That is, intense aerobic exercise may lead to the loss of the monthly period (amenorrhea), but less intense aerobic exercise may disrupt fertility in less obvious but still significant ways.

Recent studies of North American women who engage in high levels of endurance exercise (long-distance runners and young ballet dancers, for example) demonstrate several effects on childbearing.  These data are relevant to the transition to sedentism, because the levels of activity of the women studied approach the levels of activity of women in modern foraging societies.

Researchers found two different kinds of effects on fertility. Young, highly active ballet dancers studied by Warren (cited in Henry 1989) experienced their first menstruation at about 15.5 years, much later than a nondancing control group, whose members’ first menstruation came at about 12.5 years. High levels of exercise also seem to affect the endocrine system, reducing the time during which a woman is fertile by about one-third.

Summarizing the effects of foraging on female fertility, Henry observes:

It would appear then that a number of interrelated factors associated with a mobile foraging strategy are likely to have provided natural controls on fertility and perhaps explain the low population density of the Paleolithic. In mobile foraging societies, women are likely to have experienced both long intervals of breastfeeding by carried children as well as the high energy drain associated with subsistence activities and periodic camp moves.  Additionally, their diets, being relatively rich in proteins, would have contributed to maintaining low fat levels, thus further dampening fecundity. (1989, 43)

With complex foraging and increasing sedentism, these brakes on female fecundity would have been eased.  The duration of the breastfeeding period would have declined, as would the energy drain on women (Ju/’hoansi women, for example, walk about 1,500 miles per year, carrying about 25 pounds of equipment, gathered food, and young children). This is not to say that a sedentary life is physically undemanding. Farming requires its own heavy labor, from both men and women. The difference seems to be in the kind of physical activity involved. Walking long distances carrying heavy loads and children was replaced by sowing, hoeing, harvesting, storing, and processing grain.  A diet increasingly rich in cereals would have significantly changed the ratio of protein to carbohydrate in the diet.  This would have changed the levels of prolactin, increased the positive energy balance, and led to more rapid growth in the young and an earlier age of first menstruation.

The ready availability of ground cereals would have enabled mothers to feed their infants soft, high-carbohydrate porridges and gruels. The analysis of infant fecal material recovered from the Wadi Kubbaniya site in Egypt seems to demonstrate that a similar practice was in use with root crops along the Nile at what may have been a year-round site by 19,000 years before the present (Hillman 1989, 230). The influence of cereals on fertility has been observed by Richard Lee among settled Ju/’hoansi, who had recently begun to eat cereals and experienced a marked rise in fertility.  Renee Pennington (1992) notes that the increase in Ju/’hoansi reproductive success seems to be related to a reduction in infant and child mortality rates.

The Decline in the Quality of Diet

Westerners have long seen agriculture as an evolutionary advance over foraging, a sign of human progress. Put simply, however, early farmers did not eat as well as foragers. Jared Diamond (1987) writes:

While farmers concentrate on high carbohydrate crops like rice and potatoes, the mix of wild plants and animals in the diets of surviving hunter-gatherers provides more protein and a better balance of other nutrients. In one study, the San [Ju/’hoansi] average daily food intake (during a month when food was plentiful) was 2,140 calories and 93 grams of protein, considerably greater than the recommended daily allowance for people of their size.  It’s almost inconceivable that San [Ju/’hoansi] who eat 75 or so wild plants, could die of starvation the way hundreds of thousands of Irish farmers and their families did during the potato famine of the 1840s.

Skeletal evidence makes the same point. Skeletons from Greece and Turkey in late Paleolithic times indicate an average height of 5 feet 9 inches for men and 5 feet 5 inches for women. With the adoption of agriculture, the average height declined sharply; by about 5,000 years ago, the average male was about 5 feet 3 inches tall, the average woman, about 5 feet. Even modern Greeks and Turks are not, on average, as tall as the late Paleolithic people of the same region.

Increase in Precariousness

In the short term, agriculture was probably developed in ancient southwestern Asia, and perhaps elsewhere, to increase food supplies to support an increasing population at a time of serious resource stress. Over time, however, as dependence on domesticated crops increased, so did the overall insecurity of the food supply system. Why?

Proportion of Domesticated Plants in the Diet  There are several reasons why early farmers depended more and more on cultivated plants. Because the agroecology created an environment favorable to the plants, farmers were able to cultivate previously unusable land.  When such vital necessities as water could be brought to the land between the Tigris and Euphrates Rivers in Mesopotamia, land on which wheat and barley were not native could support dense stands of the domesticated grains.  Domestic plants also provided more and larger edible parts and were easier to harvest, process, and digest.  There is good evidence that they also tasted better. Rindos lists a number of modern food plants that derive from bitter wild varieties. Finally, the greater yield of domesticated plants per unit of ground also led to a greater proportion of cultivated plants in the diet, even when wild plants were still being eaten and were as plentiful as before.

Reliance on a Smaller Number of Plants   Unfortunately, reliance on an increasingly smaller number of plants is very risky should those plants fail.  According to Richard Lee, the Ju/’hoansi, who live in the Kalahari Desert, use over 100 plants (14 fruits and nuts, 15 berries, 18 species of edible gum, 41 edible roots and bulbs, and 17 leafy greens, beans, melons, and other foods; 1992b, 48).  By contrast, modern farmers rely on no more than 20 plants, and of those, three–wheat, maize, and rice–feed most of the world’s people.  Historically, it was only one or two grain crops that were the staple for a specific group of people.  A decrease in this crop has devastating effects on the population.

Selective Breeding, Monocropping, and the Gene Pool  Selective breeding of any given plant species decreases the variability of its gene pool, eliminating varieties with natural resistance to infrequently occurring pests and diseases and lowering its long-term survival chances by increasing the risk of severe losses at harvest time.  Again, the more people depend on a particular plant species, the riskier their future. Monocropping is the practice of growing only one kind of plant in a field. Although it increases efficiency and short-term yield, it exposes the entire field to destruction by diseases or pest damage. The outcome could be starvation.

Increasing Dependence on Plants  As cultivated plants took on an increasingly large role in their diet, people became dependent on plants and the plants in turn became completely dependent on the people–or rather on the environment created by the people. But the people could not completely control that environment. Hail, floods, droughts, infestations, frost, heat, weeds, erosion, and other factors could destroy or significantly affect the crop, yet all were largely outside human control.  The risk of failure and starvation increased.

Increase in Disease  Connected to the evolution of domesticated plants was an increase in disease, especially of the epidemic variety, for which there were several reasons.  First, prior to sedentism, human waste was disposed of outside the living area. As increasing numbers of people began to live near each other in relatively permanent settlements, the disposal of human waste became increasingly problematic:  Large quantities of fecal material had the potential to transmit disease, and animal and plant wastes nourished pests, some of which served as disease vectors.

Second, a larger number of people living very near each other served as a disease reservoir.  Once a population is large enough, the likelihood of disease transmission increases.  By the time one person recovers from the disease, someone else reaches the infectious stage and can reinfect the first.  Consequently, the disease never leaves the population. The speed with which school children catch and spread colds, influenza, or chicken pox illustrates how a closely packed population and germs interact.

Third, settled people cannot just walk away from diseases; by contrast, if someone in a foraging band falls ill, the others can walk away, reducing the likelihood that the disease will spread.  Fourth, the agricultural diet may have reduced people’s resistance to disease. Finally, the rise in human population provided a greater opportunity for germs to evolve in human hosts. In fact, as we discussed in Chapter 3, there is good evidence that the clearing of land for farming in sub-Saharan Africa created an excellent environment for malaria-carrying mosquitos, leading both to a dramatic rise in human malaria and the selection for the HbAHbS genotype.

Environmental Degradation

With the development of agriculture, human beings began to intervene more actively in the environment.  Deforestation, soil loss, silted streams, and the loss of many native species followed domestication.  In the lower Tigris-Euphrates valley, irrigation waters used by early farmers carried high levels of soluble salts, poisoning the soil and making it unusable to this day.

Increase in Labor

Raising domesticated plants and animals requires much more labor than foraging.  People must clear the land, plant the seeds, tend the young plants, protect them from predators, harvest them, process the seeds, store them, and select the seeds for planting the next year; similarly, people must tend and protect domesticated animals, cull the herds, shear the sheep, milk the goats, and so on.

Bibliography

Diamond, Jared. 1987. “The Worst Mistake in the History of the Human Race.” Discover, May.

Ellison, Peter. 1990. “Human Ovarian Function and Reproductive Ecology: New Hypotheses.” American Anthropologist 92 (4): 933-52.

Henry, Donald. 1989. From Foraging to Agriculture: The Levant and the End of the Ice Age. Philadelphia: University of Pennsylvania Press.

Hillman, Gordon. 1989. “Late Paleolithic Plant Foods from Wadi Kubbaniya in Upper Egypt: Dietary Diversity, Infant Weaning, and Seasonality in a Riverine Environment.” In Foraging and Farming: The Evolution of Plant Exploitation, edited by David Harris and Gordon Hillman, 207-39. Vol. 13 of One World Archaeology. London: Unwin Hyman.

Lee, Richard. 1992. The Dobe Ju/’hoansi. 2d ed. New York: Holt, Rinehart and Winston.

Pennington, Renee. 1992. “Did Food Increase Fertility? Evaluation of !Kung and Herero History.” Human Biology 64: 497-501.

Shostak, Marjorie. 1981. Nisa: The Life and Words of a !Kung Woman. New York: Vintage Books.

I feel I should add that close contact with domesticated animals also brings us into contact with more nasty pathogens and parasites than a hunter-gatherer would be exposed to.

So what did the first Europeans look like?

Were they all red haired? 

It’s claimed that the mutation in the SLC24A5 gene that gives Europeans very pale skin dates to between 6,000 and 12,000 years ago. This would give the first immigrants into Europe an Asian tan skin colour. However, at least two variants of the European red hair gene have been dated to about 80,000 years ago.

“Both African and non-African data suggest that the time to the most recent common ancestor is ~1 million years and that the age of the global 314 variant is 650,000 years. On this time scale, ages for the Eurasian-distributed Val60Leu, Val92Met, and Arg163Gln variants are 250,000-100,000 years; the ages for African silent variants—Leu106Leu, Cys273Cys, and Phe300Phe—are 100,000-40,000 years.  For the European red hair-associated Arg151Cys and Arg160Trp variants, we estimate an age of ~80,000 years; for Asp294His, and Ser316Ser, we estimate an age of <= 30,000 years.” (Harding et al. 2000, p. 1357)

The red hair genes generally produce a somewhat lighter skin tone, even if you aren’t homozygous for ginger hair genes (i.e., red haired). I should know: I have one pale ginger gene from Granny, and I’m porcelain skinned and burn very easily, even with the dark hair. Red haired individuals are a lot lighter skinned, don’t tan, and are prone to sunburn. Also, genes that cause lighter eye colours lighten skin colour too.

Leaving aside the Neanderthal-era date of this gene, this would make the first Europeans pretty much the same as modern ones in skin tone, as the red hair MC1R mutations affect skin as well as hair colour, almost like a mild form of albinism. There was probably not much difference between Cro-Magnon and modern European skin tone.

My hypothesis is that the more recent light skin colour mutation is an improvement on the MC1R mutations that cause ginger hair: it makes the skin light enough for vitamin D synthesis while still allowing the carrier to tan to a limited degree, giving some UV protection as well. This could mean that the MC1R ginger hair genes are slowly being replaced by the light skin gene, and that ‘gingerness’ was probably carried by the majority of early Europeans. The red hair mutation is fine if the base skin colour is an Asian tan, but add it to European pale skin and you’ve got a recipe for sunburn. I expect the frequency of redheads has been decreasing ever since the skin-lightening mutation came along.

So different gene, similar effect on skin tone.

As an afterthought, I’ve found a picture of an Indian boy with Caucasian features and light coloured eyes. He’s probably a good approximation of what the first Europeans looked like about 35,000 years ago. Not including the shirt.

Five myths of race.

Here are five myths of race, by Jon Entine.

It’s an archived cut and paste; none of it is my work, barring a couple of comments.

The complete text is available through the link.

1. Humans are 99.9 percent the same. Therefore, race is “biologically meaningless.”

This statement finds its origins in the research of Harvard University geneticist Richard Lewontin during the 1960s. “Human racial classification is of no social value and is positively destructive of social and human relations,” Lewontin concluded in The Genetic Basis of Evolutionary Change in 1974. “Since such racial classification is now seen to be of virtually no genetic or taxonomic significance either, no justification can be offered for its continuance.”

Coming from a geneticist, Lewontin’s views had enormous influence and he was making a valid argument at the time. As Laval University anthropologist Peter Frost points out, Lewontin was referring to classic genetic markers such as blood types, serum proteins, and enzymes, which do show much more variability within races than between them. But his comments are widely misinterpreted even today to extend beyond that limited conclusion. Further research has shown this pattern of variability cannot reliably be extrapolated to all traits with higher adaptive value.

(It’s now 99.7% the same; the figure was corrected recently.)

The 99.9 percent figure is based on DNA sequences that do not differ much between people or even between most mammals. As UCLA physiologist Jared Diamond has noted, if an alien were to arrive on our planet and analyze our DNA, humans would appear as a third race of chimpanzees, who share 98.4 percent of our DNA. Just 50 out of the 32,000 genes that humans and chimps are thought to possess, or approximately 0.15 percent, may account for all of the cognitive differences between man and ape.

The impact of minute genetic differences is magnified in more sophisticated species. From a genetic perspective, humans and chimpanzees are almost identical because their genes code for similar phenotypes, such as bone structure, which are remarkably similar in many animals. For that matter, dogs share about 95 percent of our genome and mice 90 percent, which is why these species make good laboratory animals. Looked at another way, while the human genome contains some 32,000 genes, that’s not much more than the nematode worm (18,000), which is invisible to the naked eye. Humans only have 25 percent more genes than the mustard weed (26,000). The real story of the annotation of the human genome is that human beings do not have much more genomic information than plants and worms.

A large-scale study of the variability in the human genome by Genaissance Pharmaceuticals, a biotechnology company in Connecticut, has convincingly shown the fallaciousness of arguments tied to the 99.9 percent figure. The research shows that while humans have only 32,000 genes, there are between 400,000 and 500,000 gene versions. More specifically, they found that different versions of a gene are more common in a group of people from one geographical region, compared with people from another.

The implications are far reaching. By grouping individuals by the presence and variety of gene types, physicians may someday be able to offer treatments based on race or ethnic groups that will have been predetermined to work on a genetic level. Kenneth Kidd, a population geneticist at Yale University who is not connected to the study, said it confirmed the conclusions of those who have maintained that there is in fact considerable variability in the human population. He also chided the government and some genetic researchers for having stripped ethnic identities from the panel of people whose genomes have been searched for gene sequences. The study prompted Francis Collins, director of the National Human Genome Research Institute, to backtrack from earlier assertions that the small percentage of gross gene differences was meaningful or shed light on the debate over “racial” differences. “We have been talking a lot about how similar all our genomes are, that we’re 99.9 percent the same,” he said. “That might tend to create an impression that it’s a very static situation. But that 0.1 percent is still an awful lot of nucleotides.”

In other words, local populations are genetically far more different than the factoid that humans are 99.9 percent the same implies. The critical factor is not which genes are passed along but how they are patterned and what traits they influence.

2. The genetic variation among European, African and Asian populations is minuscule compared to differences between individuals within those populations.

This factoid, which is a variation on the first myth, has been elevated to the level of revealed truth. According to Lewontin, “based on randomly chosen genetic differences, human races and populations are remarkably similar to each other, with the largest part by far of human variation being accounted for by the differences between individuals.”

What does that mean? Not much by today’s nuanced understanding of genetics, it turns out. Consider the cichlid fish found in Africa’s Lake Nyasa. The cichlids, which have differentiated from one species into hundreds over a mere 11,500 years, “differ among themselves as much as do tigers and cows,” noted Diamond. “Some graze on algae, others catch other fish, and still others variously crush snails, feed on plankton, catch insects, nibble the scales off other fish, or specialize in grabbing fish embryos from brooding mother fish.” The kicker: these variations are the result of infinitesimal genetic differences–about 0.4 percent of their DNA studied.

As retired University of California molecular biologist Vincent Sarich has noted, there are no clear differences at the level of genes between a wild wolf, a Labrador, a pit bull and a cocker spaniel, but there are certainly differences in gene frequencies and therefore biologically based functional differences between these within-species breeds.

There are other more fundamental problems resulting from misinterpretations of Lewontin’s original studies about gene variability. Numerous scientists since have generalized from his conclusions to the entire human genome, yet no such study has been done, by Lewontin or anyone else. Today, it is believed that such an inference is dicey at best. The trouble with genetic markers is that they display “junk” variability that sends a signal that variability within populations exceeds variability between populations. Most mammalian genes, as much as 70 percent, are “junk” that have accumulated over the course of evolution with almost no remaining function; whether they are similar or different is meaningless. The “junk” DNA that has not been weeded out by natural selection accounts for a larger proportion of within-population variability. Genetic markers may therefore be sending an exaggerated and maybe false signal.

The entire issue of gene variability is widely misunderstood. “In almost any single African population or tribe, there is more genetic variation than in all the rest of the world put together,” Kenneth Kidd told me in an interview in 1999. “Africans have the broadest spectrum of variability, with rarer versions at either end [of the bell curve distribution]. If everyone in the world was wiped out except Africans, almost all human genetic variability would be preserved.”

Many journalists and even some scientists have taken Kidd’s findings to mean that genetic variability equates with phenotypic variability. Since Africans have about 10–15 percent more genetic differences than people from anywhere else in the world, the argument goes, Africans and their diaspora descendants should show more variability across a range of phenotypic characteristics including body type, behavior, and intelligence. This “fact” is often invoked to explain why athletes of African ancestry dominate elite running: it’s a product of variability, not inherent population differences.

This is a spurious interpretation of Kidd’s data. Chimpanzees display more genetic diversity than do humans. That’s because genetic variability is a marker of evolutionary time, not phenotypic variability. Each time an organism, human or otherwise, propagates, genetic “mistakes” occur as genes are mixed. The slightly increased variability in Africans reflects the accumulation of junk DNA as mutations have occurred over time. Such data “prove” little more than the fact that Africa is the likely home of modern humans–and it may not even signify that.

University of Utah anthropologist and geneticist Henry Harpending and John Relethford, a biological anthropologist from the State University of New York at Oneonta, have found that this genetic variation results from the fact that there were more people in Africa than everywhere else combined during most of the period of human evolution. In other words, greater African genetic variability may be the result of nothing more than fast population growth.

When I asked Kidd directly whether his findings of greater African genetic variability meant that Africans were most likely to show the most phenotypic variability in humans–the tallest and shortest, the fastest and slowest, the most intelligent and most retarded–he laughed at first. “Wouldn’t that be mud in the eye for the bigots,” he said, not eager to puncture the politically correct balloon. Finally, he turned more serious. “Genes are the blueprint and the blueprint is identifiable in local populations. No matter what the environmental influences, you can’t deviate too far from it.”

Part of the confusion stems from the fact that some scientists, and certainly the general public, have embraced the popular shorthand that discrete genes have specific effects. This is sometimes expressed as there is a “gene for illness X.” Lewontin himself expresses scorn for what he calls the “religion” of molecular biology and their “prophets”, geneticists, who make grandiose statements about what genes prove or disprove. Genes only specify the sequence of amino acids that are linked together in the manufacture of a molecule called a polypeptide, which must then fold up to make a protein, a process that may be different in different organisms and depends in part on the presence of yet other proteins. “[A] gene is divided up into several stretches of DNA, each of which specifies only part of the complete sequence in a polypeptide,” Lewontin has written. “Each of these partial sequences can then combine with parts specified by other genes, so that, from only a few genes, each made up of a few subsections, a very large number of combinations of different amino acid sequences could be made by mixing and matching.” Lewontin’s reasonable conclusion: the mere sequencing of the human genome doesn’t tell us very much about what distinguishes a human from a weed, let alone a Kenyan from a Korean.

Significant between-group differences have been identified in the harder-to-study regulatory genes. This tiny fraction of the human genome controls the order and make-up of proteins, and may be activated by obscure environmental triggers. For instance, the presence of an abnormal form of hemoglobin (hemoglobin S) can lead to sickle-cell anemia, which disproportionately afflicts families of African descent. But the genetic factors that actually lead to the disease operate at a much finer level. Just one change in the base pair for hemoglobin can trigger the disease. However, the genetic factors involved are even subtler in part because of gene-gene and gene-environment interactions. For example, a separate set of genes in the genome–genes that code for fetal hemoglobin–can counteract some of the ill effects of the adult hemoglobin S genes if they continue to produce into adulthood. This range of possibilities, encoded in the genome, is found disproportionately in certain populations, but does not show up in the gross calculations of human differences that go into the misleading 99.9 percent figure.

Francois Jacob and Jacques Monod, who shared the Nobel Prize for Medicine in 1965 for their work on the regulator sequences in genes, have identified modules, each consisting of 20-30 genes, which act as an Erector Set for the mosaics that characterize each of us. Small changes in regulatory genes make large changes in organisms, perhaps by shifting entire blocks of genes on and off or by changing activation sequences. But, whether flea or fly, cocker spaniel or coyote, Britney Spears or Marion Jones, the genetic sequences are different but the basic materials are the same. Minute differences can and do have profound effects on how living beings look and behave, while huge apparent variations between species may be almost insignificant in genetic terms.

3. Human differences are superficial because populations have not had enough evolutionary time to differentiate.

Stephen Jay Gould has periodically advanced an equally flawed argument: Human differences are superficial because populations have not had enough evolutionary time to differentiate. “Homo sapiens is a young species, its division into races even more recent,” Gould wrote in Natural History in November 1984. “This historical context has not supplied enough time for the evolution of substantial differences. … Human equality is a contingent fact of history.” In other words, our relatively recent common heritage–differentiation into modern humans may have occurred as recently as 50,000 years ago, an eye blink of evolutionary time–renders the possibility of “races” absurd.

This view has made its way into the popular media as fact. Yet, it’s difficult to believe that Gould believes his own rhetoric, for his own theory of punctuated equilibrium, which argues that swift genetic change occurs all the time, demolishes this assertion. A quarter century ago, Gould and American Museum of Natural History curator Niles Eldredge addressed the controversial issue of why the fossil records appeared to show that plants and animals undergo little change for long periods of time and then experience sudden, dramatic mutations. They argued that new species do not evolve slowly so much as erupt, the result of a chain reaction set off by regulatory genes. Their theory, though controversial and still widely debated, helps explain the limited number of bridge, or intermediary, species in the fossil record (as Creationists never fail to point out). Either as a mutation or in response to an environmental shock, these regulators could have triggered a chain reaction with cascading consequences, creating new species in just a few generations.

The evolutionary record is filled with such examples. A breakthrough study by University of Maryland population geneticist Sarah Tishkoff and colleagues of the gene that confers malarial resistance (one known as the G6PD gene) has concluded that malaria, which is very population specific, is not an ancient disease, but a relatively recent affliction dating to roughly 4,000-8,000 years ago. When a variant gene that promotes its owner’s survival is at issue, substantial differences can occur very rapidly. The dating of the G6PD gene’s variants, done by a method worked out by a colleague of Dr. Tishkoff’s, Dr. Andrew G. Clark of Pennsylvania State University, showed how rapidly a life-protecting variant of a gene could become widespread. The finding is of interest to biologists trying to understand the pace of human evolution because it shows how quickly a variant gene that promotes its owner’s survival can spread through a population. Genes that have changed under the pressure of natural selection determine the track of human evolution and are likely to specify the differences between humans and their close cousin the chimpanzee.

This new understanding of the swiftness of genetic change may ultimately help solve numerous evolutionary puzzles, including the origins of “racial differences.” For instance, there has been contradictory speculation about the origins of the American Indian population. Excavations have pushed the date of the initial migration to the Americas as far back as 12,500 years ago, with some evidence of a human presence as far back as 30,000 years. The 1996 discovery of Kennewick Man, the 9,300-year-old skeleton with “apparently Caucasoid” features, sparked speculation about the possibility of two or more migrations, including a possible arrival of early Europeans.

Using computer analysis of skeletal fragments, University of Michigan anthropologist C. Loring Brace argues that most American Indians are the result of two major migratory waves, the first 15,000 years ago after the last Ice Age began to moderate and the second 3,000-4,000 years ago. The first wave is believed to have consisted of members of the Jomon, a prehistoric people who lived in Japan thousands of years ago. Similar to Upper Paleolithic Europeans of 25,000 years ago as well as the Ainu in Japan today and the Blackfoot, Sioux and Cherokee in the Americas, these populations have lots of facial and body hair, no epicanthic eyefold, longer heads, dark hair and dark eyes. Brace argues that the first wave was followed by a second migration consisting of a mixed population of Chinese, Southeast Asians, and Mongolians–similar in some respects to current populations of Northeast Asia–who are likely ancestors of the Inuit (Eskimo), Aleut, and Navajo.

Brace’s data does not resolve whether the two migratory waves consisted of distinct populations or rather different “samples” over time of the same population, whose physical appearance had changed as a result of selection pressures specific to that region, notably the cold, harsh climate. According to Francisco Ayala of the University of California at Irvine, co-author with Tishkoff of the malaria study, the genetic data suggests the remains represent a similar population at different evolutionary points in time. By this reasoning, various American Indian populations are the result of differing paces of evolution of various sub-pockets of populations. “We are morphologically no different in the different continents of the world,” he contends. This research may help explain how “racial” differences could occur so quickly after humans began their expansion from Africa, as recently as 50,000 years ago, Ayala adds.

These findings reinforce those of Vince Sarich. “The shorter the period of time required to produce a given amount of morphological difference, the more selectively important the differences become,” he has written. Sarich figures that since the gene flow as a result of intermingling on the fringes of population pockets was only a trickle, relatively distinct core races would likely have been preserved even where interbreeding was common.

Stanford University geneticist Luigi Cavalli-Sforza has calculated the time it could take for a version of a gene that leads to more offspring to spread from one to 99 percent of the population. If a rare variant of a gene produces just 1 percent more surviving offspring, it could become nearly universal in a human group in 11,500 years. But, if it provides 10 percent more “reproductive fitness,” it could come to dominate in just 1,150 years.

Natural selection, punctuated equilibrium, and even catastrophic events have all contributed to what might loosely be called “racial differences.” For example, University of Illinois archaeologist Stanley Ambrose has offered the hypothesis that the earth was plunged into a horrific volcanic winter after a titanic volcanic blow-off of Mount Toba in Sumatra some 71,000 years ago. The eruption, the largest in 400 million years, spewed 4,000 times as much ash as Mount St. Helens, darkening the skies over one third of the world and dropping temperatures by more than 20 degrees. The catastrophe touched off a six-year global winter, which was magnified by the coldest thousand years of the last ice age, which ended some fourteen thousand years ago. It is believed to have resulted in the death of most of the Northern Hemisphere’s plants, bringing widespread famine and death to hominid populations. If geneticists are correct, some early humans may have been wiped out entirely, leaving no more than 15,000 to 40,000 survivors around the world.

What might have been the effect on evolution? “Humans were suddenly thrown into the freezer,” said Ambrose. Only a few thousand people in Africa and a few pockets of populations that had migrated to Europe and Asia could have survived. That caused an abrupt “bottleneck,” or decrease, in the ancestral populations. After the climate warmed, the survivors resumed multiplying in what can only be described as a population explosion, bringing about the rapid genetic divergence, or “differentiation” of the population pockets.

This hypothesis addresses the paradox of the recent African origin model: Why do we look so different if all humankind recently migrated out of Africa? “When our African recent ancestors passed through the prism of Toba’s volcanic winter, a rainbow of differences appeared,” Ambrose has said. The genetic evidence is in line with such a scenario. Anna DiRienzo, a post-doctoral fellow working out of Wilson’s lab at Berkeley in the early 1990s, found evidence in the mitochondrial DNA data of a major population spurt as recently as thirty-thousand years ago.

What’s clear is that little is clear. Human differences can be ascribed to any number of genetic, cultural, and environmental forces, including economic ravages, natural disasters, genocidal pogroms, mutations, chromosomal rearrangement, natural selection, geographical isolation, random genetic drift, mating patterns, and gene admixture. Taboos such as not marrying outside one’s faith or ethnic group exaggerate genetic differences, reinforcing the loop between nature and nurture. Henry Harpending and John Relethford have concluded “human populations are derived from separate ancestral populations that were relatively isolated from each other before 50,000 years ago.” Their findings are all the more convincing because they come from somewhat competing scientific camps: Harpending advocates the out-of-Africa paradigm while Relethford embraces regional continuity.

Clearly, there are significant genetically-based population differences, although it is certainly true that dividing humans into discrete categories based on geography and visible characteristics reflecting social classifications, while not wholly arbitrary, is crude. That does not mean, however, that local populations do not show evidence of patterns. The critical factor in genetics is the arrangement of gene allele frequencies, how genes interact with each other and the environment, and what traits they influence. This inalterable but frequently overlooked fact undermines the notion that gene flow and racial mixing on the edges of population sets automatically renders all categories of “race” meaningless. As Frost points out, human characteristics can and do cluster and clump even without reproductive isolation. Many so-called “species” are still linked by some ongoing gene flow. Population genetics can help us realize patterns in such things as the proclivity to diseases and the ability to sprint fast.

4. “There are many different, equally valid procedures for defining races, and those different procedures yield very different classifications.”

This oft-repeated quote, written by Jared Diamond in a now-famous 1994 Discover article titled “Race Without Color”, was technically accurate, to a point. Many phenotypes and most complex behavior that depends on the brain–fully half of the human genome–do not fall into neat folkloric categories. In fact, there has been little historical consensus about the number and size of human “races”. Charles Darwin cited estimates ranging from two to sixty-three.

The problem with this argument, however, and the clumsy way it was presented, revolves around the words “equally valid.” Diamond appeared to embrace the post-modernist creed that all categories are “socially constructed” and therefore are “equally valid,” no matter how trivial. To make his point, he served up a bouillabaisse of alternate theoretical categories that cuts across traditional racial lines, including a playful suggestion of a racial taxonomy based on fingerprint patterns. A “Loops” race would group together most Europeans, black Africans and East Asians. Among the “Whorls,” we would find Mongolians and Australian aborigines. Finally, the “Arches” race would be made up of Khoisans and some central Europeans. “Depending on whether we classified ourselves by anti-malarial genes, lactase, fingerprints, or skin color,” he concluded, “we could place Swedes in the same race as (respectively) either Xhosas, Fulani, the Ainu of Japan, or Italians.”

Throughout the piece (and indeed throughout Guns, Germs, and Steel), Diamond appeared to want it both ways: asserting that all population categories, even trivial ones as he puts it, are equally meaningful, yet suggesting that some are more meaningful than others. In discussing basketball, for instance, he writes that the disproportionate representation of African Americans has to do not with a lack of socio-economic opportunities but with “the prevalent body shapes of some black African groups.” In other words, racial categories based on body shape may be an inexact indicator of human population differences–as are all categories of human biodiversity–but they are demonstrably more predictive than fingerprint whorls or tongue-rolling abilities.

It’s one thing to say that race is in part a folk concept. After all, at the genetic level, genes sometimes tell a different story than does skin color. However, it’s far more problematic to make the claim that local populations have not clustered around some genetically based phenotypes. However uncomfortable it may be to Diamond, some “socially constructed” categories are more valid than others, depending upon what phenotypes we are discussing. Moreover, geneticists believe that some of the traditional folkloric categories represent major human migratory waves, which is why so many characteristics group loosely together–for instance, body type, hair texture, and eye and skin color.

5. Documenting human group differences is outside the domain of modern scientific inquiry.

Even suggesting that there is a scientific basis for “racial” differences is baseless speculation, according to some social scientists. University of North Carolina-Charlotte anthropologist Jonathan Marks cavalierly dismisses evidence of patterned differences. “If no scientific experiments are possible, then what are we to conclude?” he wrote to me in 1999. “That discussing innate abilities is the scientific equivalent of discussing properties of angels.”

From one perspective, Marks appears to be taking the road of sound, verifiable science: we can only know what we can prove. But he casts the issue in misleading terms, for no one familiar with the workings of genes refers to “innate abilities.” Our personal set of genes no more determines who we are than the frame of a house defines a home; much of the important stuff is added over time. There is no such thing as “innate ability,” only “innate potential,” which has an indisputable genetic component. No amount of training can turn a dwarf into an NBA center, but training and opportunity are crucial to athletes with the anatomical profiles of NBA centers.

Marks’s corollary assertion that truth rests only in the laboratory presents the antithesis of rigorous science. If every theory had to be vetted in a laboratory experiment, then everything from the atomic theory of matter to the theory that the earth revolves around the sun could be written off as “speculative”. As Steve Sailer writes, “you can’t reproduce Continental Drift in the lab. You can’t scoop up a few continents, go back a billion years, and then see if the same drift happens all over again.”

Ironically, the extremist position taken by Marks and parroted by many journalists mirrors the hard right stance of Darwin’s most virulent critics. While microevolution has been verified, the weakest link of evolutionary theory has always been the relatively meager evidence of transitional fossils to help substantiate macroevolution. “Evolution is not a scientific ‘fact,’ since it cannot actually be observed in a laboratory,” argued the Creation Legal Research Fund before the Supreme Court in an unsuccessful attack on evolution theory. “The scientific problems with evolution are so serious that it could accurately be termed a ‘myth.’” Arguing for the teaching of Creationism in schools, anti-evolution Senator Sam Brownback (R-Kansas) has said “we observe micro-evolution and therefore it is scientific fact; … it is impossible to observe macro-evolution, it is scientific assumption.”

Does the lack of scientific experiments substantiating macroevolution render all talk of evolution theory “the scientific equivalent of discussing properties of angels”? This ideological posturing disguised as science, whether it emanates from the fundamentalist right or the orthodox left, demonstrates a fundamental misunderstanding of the process of scientific reasoning, which rarely lends itself to “smoking guns” and absolute certainty. It also confuses function with process. We may not yet know how genes and nature interact to shape gender identity, but that does not mean, as Marks would have it, that stating that genetics play a role is “speculative.” We have yet to find the genetic basis for tallness, yet we can be quite certain that it is more likely to be found in the Dutch, now the world’s tallest population, than in the Japanese. The search for scientific truth is a process. It may be years before we identify a gene that ensures that humans grow five fingers, but we can be assured there is one, or a set of them. There are patterned human differences even though the specific gene sequences and the complex role of environmental triggers are elusive.

Mungo man, not descended from Mitochondrial Eve.


Mungo man was found at Lake Mungo, New South Wales, in 1974. He was estimated to have been a very tall 6’4″, and old when he died; the most widely accepted date for his death is about 40,000 years ago. He was sprinkled with red ochre, a common practice in many ancient burials.


The really interesting thing about Mungo man is the mitochondrial DNA that was extracted from his ancient bones. Mungo man was not descended from mitochondrial Eve, and yet he was a modern human. In fact his mtDNA was nearly as distantly related to the modern lineages as a Neanderthal’s, and it really put a dent in the ‘out of Africa’ theory.

[mtDNA family tree diagram borrowed from Donsmaps.]

The evolution of lactose tolerance, and its distribution.

 

Stone Age Adults Couldn’t Stomach Milk, Gene Study Shows.

  
James Owen
for National Geographic News

February 26, 2007
Milk wasn’t on the Stone Age menu, says a new study, which suggests the vast majority of adult Europeans were lactose intolerant as recently as 7,000 years ago. While cow’s milk is a mainstay of the diet of modern-day Europeans, their ancestors weren’t able to digest the nutritious dairy product after childhood, according to DNA analysis of human skeletons from the Neolithic period. The findings support the idea that milk drinking became widespread in Europe only after dairy farming had become established there—not the other way around.

Most mammals lose their ability to digest milk after being weaned, but some humans can continue to benefit from the calcium-rich, high-energy liquid. This is because they carry a mutation that lets them continue producing lactase, the gut enzyme needed to break down the milk sugar lactose, in adulthood. Lactose tolerance is most common in people of European origin, especially those from the northern and central areas of the continent. It is relatively rare in Asian and Native American populations.

Lactose Tolerance

High levels of lactose tolerance among these European groups are thought to reflect an evolutionary advantage. Early farming communities that could digest milk could consume the liquid during otherwise poor harvests, for instance. Some scientists argue this adaptation was previously very rare in humans, spreading only after the introduction of farming to Europe. Others say prehistoric populations were already split between those who could and couldn’t drink milk as adults. This split, the researchers say, determined which groups became dairy farmers.

A team led by Joachim Burger of Mainz University in Germany analysed the DNA of well-preserved Stone Age skeletons from locations in northern and central Europe. Bones dated to between 5800 and 5200 B.C. were tested for a genetic marker associated with the production of lactase. The team says it found no trace of the lactase gene, indicating that people from the period weren’t yet able to drink milk.

Natural Selection

The study suggests that the lactase gene spread rapidly in the human population only after dairy livestock were introduced to Europe about 8,000 years ago, Burger says. “I think it’s a very old mutation that was completely useless before farming started,” he said. But then the gene suddenly became useful, and its presence in the population quickly grew through natural selection, Burger said. “People who had cows, goats, or sheep and were lactose resistant had more children, and those children survived infant mortality and years of poor harvests,” he said. The legacy of this evolutionary process is very apparent in the DNA of northern and central Europeans today, Burger notes. In parts of Sweden, he says, 100 percent of people carry the lactase gene, whereas the average figure for the whole country is about 90 percent.

In Scandinavia, Holland, Britain, and Ireland, he added, “you can say most of the people are the descendants of dairy farmers.” Milk tolerance also exists in southern and eastern European populations, while certain prehistoric farming communities in North Africa and the Middle East also developed the trait, scientists say.

But in other populations the lactase gene is largely absent. “All over the world most people can’t drink milk when they’re adults,” Burger said. “It’s only some populations in northern Africa and Europeans that can.”

http://www.ntanet.net/milk-consumption.pdf

Rates of lactose intolerance.

Southeast Asians: 98%
Asian Americans: 90%
Alaskan Eskimos: 80%
African American adults: 79%
Mexicans from rural communities: 73.8%
North American Jews: 68.8%
Greek Cypriots: 66%
Cretans: 56%
Mexican American males: 55%
Indian adults: 50%
African American children: 45%
Indian children: 20%
Caucasians of N. European and Scandinavian descent: 5%

http://www.cambridge.org/us/books/kiple/lactose.htm

Adult lactase capability appears to have evolved in two, and possibly three, geographic areas. The case is clearest and best documented for northern Europe, where there are very high percentages around the Baltic and North Seas. High levels of lactase persistence seem closely linked to Germanic and Finnic groups. Scandinavia, northern Germany, and Britain have high levels, as do the Finns and Estonians, the Finnic Izhorians west of St. Petersburg, the Mari of the middle Volga basin, and, to a lesser extent, their more distant relations, the Hungarians.

There is a general north—south gradient in Europe, which is evident within Germany, France, Italy, and perhaps Greece. As noted, more information is needed for Spain, Portugal, and eastern Europe, but there may be something of a west—east gradient in the Slavic lands. Varying frequencies of the LAC*P allele among Lapp groups may be related to differing lengths of historical use of reindeer and cow’s milk and to admixture with other Scandinavians (Sahi 1994).

The second center of adult lactase persistence lies in the arid lands of Arabia, the Sahara, and eastern Sudan. There, lactase persistence characterizes only nomadic populations heavily dependent on camels and cattle, such as the Bedouin Arabs, the Tuareg of the Sahara, the Fulani of the West African Sahel, and the Beja and Kabbabish of Sudan. Lower rates among Nigerian Fulani may indicate a higher degree of genetic mixing with other peoples than among the Fulani of Senegal. In contrast, surrounding urban and agricultural populations, whether Arab, Turkish, Iranian, or African, have very low rates. It is interesting to note that the Somali sample also had a low frequency of the LAC*P allele. Possibly, pastoral Somali have higher prevalences than their urban compatriots.

A third center of adult lactase persistence has been suggested among the Tutsi population of the Uganda-Rwanda area of the East African interior. The Tutsi are an aristocratic cattle-herding caste of Nilotic descent who have traditionally ruled over agricultural Bantu-speakers. Table IV.E.6.1 shows that only 7 percent of a sample of 65 Tutsi adults were lactase deficient, but the data are old, there certainly has been some mixture with Bantu-speakers, and the study should be replicated. The Nilotic peoples of the southern Sudan, whence the Tutsi originated a few centuries ago, do not display this trait. Unless the Tutsi result can be confirmed, and the Maasai and other East African Nilotic groups can be tested, this third center of the LAC*P allele must be considered doubtful. If it does exist, it probably arose as a fairly recent mutation, as there are no obvious historical mechanisms to account for gene flow between the Tutsi and desert dwellers farther north.

[Figure: lactose-tolerance.jpg]

Figure 1. Geographic coincidence between milk gene diversity in cattle, lactose tolerance in humans, and locations of Neolithic cattle farming sites in NCE. (a) Geographic distribution of the 70 cattle breeds (blue dots) sampled across Europe and Turkey. (b) Synthetic map showing the first principal component resulting from the allele frequencies at the cattle genes. The dark orange color shows that the greatest milk gene uniqueness and allelic diversity occurs in cattle from NCE. (c) Geographic distribution of the lactase persistence allele in contemporary Europeans. The darker the orange color, the higher the frequency of the lactase persistence allele. The dashed black line indicates the limits of the geographic distribution of early Neolithic cattle pastoralists (Funnel Beaker Culture) inferred from archaeological data.

http://www.nature.com/ng/journal/v35/n4/full/ng1263.html
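Out of curiosity about how a “synthetic map” like panel (b) is built, here is a minimal Python sketch of pulling the first principal component out of an allele-frequency matrix. The data here are random numbers standing in for the paper’s 70 cattle breeds; this only illustrates the general technique, not the authors’ actual pipeline.

# Minimal sketch (illustrative only, not the paper's pipeline): the first
# principal component of an allele-frequency matrix, the quantity that gets
# interpolated over sampling locations to draw a "synthetic map".
# Rows = sampled breeds/populations, columns = allele frequencies (made-up data).
import numpy as np

rng = np.random.default_rng(0)
freqs = rng.random((70, 12))            # 70 breeds x 12 allele frequencies

centered = freqs - freqs.mean(axis=0)   # centre each allele-frequency column
_, _, vt = np.linalg.svd(centered, full_matrices=False)
pc1_scores = centered @ vt[0]           # one score per breed along the first PC
print(pc1_scores[:5])

Interpolating each breed’s score over its sampling location is what produces the smooth colour gradient shown in the figure.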

This article states that the gene for lactase persistence is older than dairy farming in Europeans. I can’t help wondering if maybe Cro-Magnons practised a Sami-like herd management that gave them access to milk.

People at Pinnacle Point.

Again, ancient humans were more widespread.

ASUNews

October 17, 2007

ASU team detects earliest modern humans

Evidence of early humans living on the coast in South Africa 164,000 years ago, far earlier than previously documented, is being reported in the Oct. 18 issue of the journal Nature.

The international team of researchers reporting the findings includes Curtis Marean, a paleoanthropologist with the Institute of Human Origins at Arizona State University, and three graduate students in the School of Human Evolution and Social Change.

“Our findings show that at 164,000 years ago in coastal South Africa humans expanded their diet to include shellfish and other marine resources, perhaps as a response to harsh environmental conditions,” notes Marean, a professor in ASU’s School of Human Evolution and Social Change. “This is the earliest dated observation of this behavior.”

Further, the researchers report that co-occurring with this diet expansion is a very early use of pigment, likely for symbolic behavior, as well as the use of bladelet stone tool technology, previously dated to 70,000 years ago.

These new findings not only move back the timeline for the evolution of modern humans, they show that lifestyles focused on coastal habitats and resources may have been crucial to the evolution and survival of these early humans.

Searching for beginnings

After decades of debate, paleoanthropologists now agree the genetic and fossil evidence suggests that the modern human species – Homo sapiens – evolved in Africa between 100,000 and 200,000 years ago.

Yet, archaeological sites during that time period are rare in Africa. And, given the enormous expanse of the continent, where in Africa did this crucial step to modern humans occur?

“Archaeologists have had a hard time finding material residues of these earliest modern humans,” Marean says. “The world was in a glacial stage 125,000 to 195,000 years ago, and much of Africa was dry to mostly desert; in many areas food would have been difficult to acquire. The paleoenvironmental data indicate there are only five or six places in all of Africa where humans could have survived these harsh conditions.”

In seeking the “perfect site” to explore, Marean analysed ocean currents, climate data, geological formations and other data to pin down a location where he felt sure he would find one of these progenitor populations: the Cape of South Africa at Pinnacle Point.

“It was important that we knew exactly where to look and what we were looking for,” says Marean. This type of research is expensive and funding is competitive. Marean and the team of scientists who set out to Pinnacle Point to search for this elusive population did so with the help of a $2.5 million grant from the National Science Foundation’s Human Origins: Moving in New Directions (HOMINID) program.

Their findings are reported in the Nature paper “Early human use of marine resources and pigment in South Africa during the Middle Pleistocene.” In addition to Marean, authors on the paper include three graduate students in ASU’s School of Human Evolution and Social Change: Erin Thompson, Hope Williams and Jocelyn Bernatchez. Other authors are Miryam Bar-Matthews of the Geological Survey of Israel, Erich Fisher of the University of Florida, Paul Goldberg of Boston University, Andy I.R. Herries of the University of New South Wales (Australia), Zenobia Jacobs of the University of Wollongong (Australia), Antonieta Jerardino of the University of Cape Town (South Africa), Panagiotis Karkanas of Greece’s Ministry of Culture, Tom Minichillo of the University of Washington, Ian Watts from London and excavation co-director Peter J. Nilssen of the Iziko South African Museum.

The Middle Stone Age, dated between 35,000 and 300,000 years ago, is the technological stage when anatomically modern humans emerged in Africa, along with modern cognitive behavior, says Marean. When, however, within that stage modern human behavior arose is currently debated, he adds.

“This time is beyond the range of radiocarbon dating, yet the dates on the finds published here are more secure than is typical due to the use of two advanced and independent techniques,” Marean says.

Uranium series dates were obtained by Bar-Matthews on speleothem (the material of stalagmites), and optically stimulated luminescence dates were developed by Jacobs. According to Marean, the latter technique dates the last time that individual grains of sand were exposed to light, and thousands of grains were measured.
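As a rough illustration of how the luminescence method turns a lab measurement into an age, here is a minimal Python sketch of the standard OSL age relation (burial age = equivalent dose ÷ environmental dose rate). The dose figures are invented for illustration and are not the values reported in the Nature paper.

# Minimal sketch of the optically stimulated luminescence (OSL) age relation.
# The equivalent dose is the radiation dose the sand grains have absorbed
# since they were last exposed to light; dividing it by the local dose rate
# gives the burial age. The numbers below are illustrative, not measured values.

def osl_age_ka(equivalent_dose_gy: float, dose_rate_gy_per_ka: float) -> float:
    """Burial age in thousands of years (ka)."""
    return equivalent_dose_gy / dose_rate_gy_per_ka

# Example: an equivalent dose of 123 Gy at a dose rate of 0.75 Gy per
# thousand years implies burial roughly 164,000 years ago.
print(f"{osl_age_ka(123.0, 0.75):.0f} ka")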

Migrating along the coast

“Generally speaking, coastal areas were of no use to early humans – unless they knew how to use the sea as a food source,” says Marean. “For millions of years, our earliest hunter-gatherer relatives only ate terrestrial plants and animals. Shellfish was one of the last additions to the human diet before domesticated plants and animals were introduced.”

Previously, the earliest evidence for human use of marine resources and coastal habitats was dated to about 125,000 years ago. “Our research shows that humans started doing this at least 40,000 years earlier. This could have very well been a response to the extreme environmental conditions they were experiencing,” he says.

“We also found what archaeologists call bladelets – little blades less than 10 millimetres in width, about the size of your little finger,” Marean says. “These could be attached to the end of a stick to form a point for a spear, or lined up like barbs on a dart – which shows they were already using complex compound tools. And, we found evidence that they were using pigments, especially red ochre, in ways that we believe were symbolic,” he describes.

Archaeologists view symbolic behavior as one of the clues that modern language may have been present. The earliest bladelet technology was previously dated to 70,000 years ago, near the end of the Middle Stone Age, and the modified pigments are the earliest securely dated and published evidence for pigment use.

“Coastlines generally make great migration routes,” Marean says. “Knowing how to exploit the sea for food meant these early humans could now use coastlines as productive home ranges and move long distances.”

Results reporting early use of coastlines are especially significant to scientists interested in the migration of humans out of Africa. Physical evidence that this coastal population was practising modern human behavior is particularly important to geneticists and physical anthropologists seeking to identify the progenitor population for modern humans.

“This evidence shows that Africa, and particularly southern Africa, was precocious in the development of modern human biology and behavior. We believe that on the far southern shore of Africa there was a small population of modern humans who struggled through this glacial period using shellfish and advanced technologies, and symbolism was important to their social relations. It is possible that this population could be the progenitor population for all modern humans,” Marean says.

ASU’s Institute for Social Science Research partners with archaeologists to create 3-D video

Along with the paper, also posted on Nature’s Web site is a video, “The Cave 13B 3-D Experience.” A first for archaeology, the three-dimensional video representation of the Stone Age site and its remains was produced with technical assistance from ASU’s Institute for Social Science Research in the College of Liberal Arts and Sciences. Erich Fisher of the University of Florida led the development group. The video is a fully georeferenced representation of the paleoscape and cave at Pinnacle Point. It allows the scientists to add field data and have it appear in the exact position where it was found.

“The video is a recording of mouse movements within the computer model. Essentially, the computer model allows the user to fly into the landscape and enter the caves, walk around and add data. Our plan is to eventually make it available to the public over the World Wide Web with avatars who conduct tours of the cave. School children could hear about the story in the news and then log on and fly into the cave to see the result,” says Marean.

Due to rising global sea levels, it’s pretty obvious that many crucial coastal sites are going to end up deep underwater, either destroyed or inaccessible. These people were modern humans, with symbolic thought and sophisticated tools, and they were widespread by 164,000 years ago. To think they weren’t all over the Eurasian land mass by then is just ridiculous, as the Red Sea wasn’t much more than a big, very salty puddle at times, manageable by swimming or the simplest of rafts.

Call me crazy, but I’m thinking 100k is a much more likely date for an expansion out of Africa, and probably earlier. It does call into question the validity of dates gained through mitochondrial DNA.