Post List

Anthropology posts

(Modify Search »)

  • March 2, 2011
  • 04:14 PM
  • 1,603 views

The Combe Capelle burial is Holocene in age

by Julien Riel-Salvatore in A Very Remote Period Indeed

So says this Past Horizons report. This is fairly important in that it joins a bunch of other modern Homo sapiens remains, long thought to have been associated with the Aurignacian, that have recently been directly dated and shown to be much more recent (Churchill and Smith 2000). One recent and well publicized case was that of the Vogelherd remains, which were redated to between 3.9 and 5 kya as opposed ... Read more »

Churchill SE, & Smith FH. (2000) Makers of the early Aurignacian of Europe. American journal of physical anthropology, 61-115. PMID: 11123838  

  • March 2, 2011
  • 05:05 AM
  • 1,900 views

Obsidian blades as surgical tools

by Julien Riel-Salvatore in A Very Remote Period Indeed

In my recent post on #hipsterscience, the quote that struck closest to home was the one about the obsidian blade. See, most of my analytical work has been focused on stone tools (aka lithics) and how they were manufactured, used and managed by people in the past. Whenever it was available, obsidian seems to have been one of the preferred materials for making sharp flakes, mainly because it is ... Read more »

Buck BA. (1982) Ancient technology in contemporary surgery. The Western journal of medicine, 136(3), 265-9. PMID: 7046256  

  • March 1, 2011
  • 09:08 AM
  • 1,226 views

The Mystery of "Whoonga"

by Neuroskeptic in Neuroskeptic

According to a disturbing BBC news story, South African drug addicts are stealing medication from HIV+ people and using it to get high: 'Whoonga' threat to South African HIV patients. "Whoonga" is the street name for efavirenz (aka Stocrin), one of the most popular antiretroviral drugs. The pills are apparently crushed, mixed with marijuana, and smoked for their hallucinogenic effects. This is not, in fact, a new story; Scientific American covered it 18 months ago and the BBC themselves did in 2008 (although they didn't name efavirenz).

Why would an antiviral drug get you high? This is where things get rather mysterious. Efavirenz is known to enter the brain, unlike most other HIV drugs, and psychiatric side-effects including anxiety, depression, altered dreams, and even hallucinations are common in efavirenz users, especially at high doses (1,2,3), but they're usually mild and temporary. But what's the mechanism?

No-one knows, basically. Blank et al found that efavirenz causes a positive result on urine screening for benzodiazepines (like Valium). This makes sense given the chemical structure: efavirenz is not a benzodiazepine, because it doesn't have the defining diazepine ring (the one with two Ns). However, it has a lot in common with certain benzos such as oxazepam and lorazepam.

While this might well explain why it confuses urine tests, it doesn't by itself go far towards explaining the reported psychoactive effects. Oxazepam and lorazepam don't cause hallucinations or psychosis, and they reduce anxiety rather than causing it. Blank et al also found that efavirenz caused a false positive for THC, the active ingredient in marijuana; this was probably caused by the glucuronide metabolite. Could this metabolite have marijuana-like effects? No-one knows at present.

Beyond that there's been little research on the effects of efavirenz in the brain. This 2010 paper reviewed the literature and found almost nothing.
There were some suggestions that it might affect inflammatory cytokines or creatine kinase, but these are not obvious candidates for the reported effects.

Could the liver be responsible, rather than the brain? Interestingly, the 2010 paper says that efavirenz inhibits three liver enzymes: CYPs 2C9, 2C19, and 3A4. All three are involved in the breakdown of THC, so, in theory, efavirenz might boost the effects of marijuana by this mechanism - but that wouldn't explain the psychiatric side effects seen in people who are taking the drug for HIV and don't smoke weed.

Drugs that cause hallucinations generally either agonize 5HT2A receptors or block NMDA receptors. Off the top of my head, I can't see any similarities between efavirenz and drugs that target those systems, like LSD (5HT2A) or ketamine and PCP (NMDA), but I'm no chemist, and anyway, structural similarity is not always a good guide to what drugs do.

If I were interested in working out what's going on with efavirenz, I'd start by looking at GABA, the neurotransmitter that's the target of benzos. Maybe the almost-a-benzodiazepine-but-not-quite structure means that it causes some unusual effects on GABA receptors? No-one knows at present. Then I'd move on to 5HT2A and NMDA receptors.

Finally, it's always possible that the users are just getting stoned on cannabis and mistakenly thinking that the efavirenz is making it better through the placebo effect. Stranger things have happened. If so, it would make the whole situation even more tragic than it already is.... Read more »

Cavalcante GI, Capistrano VL, Cavalcante FS, Vasconcelos SM, Macêdo DS, Sousa FC, Woods DJ, & Fonteles MM. (2010) Implications of efavirenz for neuropsychiatry: a review. The International journal of neuroscience, 120(12), 739-45. PMID: 20964556  

  • February 28, 2011
  • 03:01 PM
  • 1,520 views

Video: chimpanzees, tools and Treculia fruits

by Djuke Veldhuis in Elements Science

New research shows that the tools a chimpanzee population will use are governed by the environment it lives in, reports Louise Ogden.



Related posts: Tricks of the trade: chimpanzees and their tools
... Read more »

  • February 28, 2011
  • 10:16 AM
  • 1,613 views

Polygamy bad for women

by Abi Millar in Elements Science

Polygamy has been shown to harm women’s reproductive success, heightening the mystery as to why it exists at all, reports Abi Millar.



Related posts: Men and women’s reasons for running are miles apart
Women in science – a celebration
... Read more »

Jacob A. Moorada, Daniel E.L. Promislow, Ken R. Smith, Michael J. Wade. (2011) Mating system change reduces the strength of sexual selection in an American frontier population of the 19th century. Evolution and Human Behavior, 32(2), 147-155. info:/

  • February 28, 2011
  • 10:14 AM
  • 2,281 views

Effects of the Anthropocene | Indicator Species

by Michael Lombardi in a New Life in the Sea

A recent CNN news piece reported on baby dolphin deaths in the Gulf of Mexico. The report noted that these deaths were not the norm, considering the sheer numbers and the time of year - very early in the birthing season, indicating that some births may have been premature. The report went on to imply that this may be a consequence of the BP oil disaster. Makes sense.

 http://www.cnn.com/2011/US/02/24/gulf.dolphins/index.html

For the first time in Planet Earth's history, one species and one species alone is causing a considerable global imbalance - and that species is Homo sapiens. The magnitude of our effects on the environment has, in fact, led researchers to redefine our current geologic epoch as the 'Anthropocene' (the era of humans). This very topic was the subject of a recent article in the March 2011 issue of National Geographic Magazine, in a piece entitled 'The Age of Man'.



Historically, geologic time and cycles have been long and drawn out while nature just ran its course in this closed system we call Earth. Since we've been around, however, things have changed rapidly - so rapidly, in fact, that we can attribute the changes to distinct periods in human history, particularly coinciding with the industrial revolution. What's scary is that unlike global geology and time, we humans have a brain. Soon, there will be more than 10 billion brains on this small planet, each making decisions that could have significant impacts on our global balance. Yes, it is very scary - we can make conscious decisions about our goings-on and developments, and can elect to continue or discontinue based on our observation and analysis of the causes and effects we set into motion. We humans now control the global cycle.
With this ability comes great responsibility, and that is where we need to wake up, folks.

Back to our dolphins washing up on the beach - we need to understand the concept of the 'indicator species'. By definition, an indicator species is an organism whose presence, absence or abundance reflects a specific environmental condition. Very often, microbes in soils are viewed as indicators of change in chemical composition on, say, a farm. Seeing more or less of a particular species might indicate the health or poor quality of these soils. Plant and algal species are also typical indicator species in other ecosystems.

Now I understand full well that the news is not the perfect barometer of biological imbalance, but in just the past few months, national headlines have been made by birds falling out of the sky, massive fish kills in inland watersheds, and now dolphins washing up on the beach. Isolated events? Perhaps. But consider the possibility that these are our new indicator species - responding to anthropogenic stress on the environment. Our new indicators are higher on the tree of life, and that should scare the you-know-what out of you. What's next... monkeys falling out of trees?

The bottom line is that humans are by nature selfish creatures, and are apparently not stopping their selfish pursuits at any cost. We can wait for study after study to prove our impacts right or wrong, but by then it'll be too late. Look at the indicators, folks... I beg this of you.

References

Nature (1962). Plankton Indicator Species and their Statistical Analysis Nature, 193 (4822), 1245-1246 DOI: 10.1038/1931245a0

Mattson, M., Mullin, K., Ingram, G., & Hoggard, W. (2006). AGE STRUCTURE AND GROWTH OF THE BOTTLENOSE DOLPHIN (TURSIOPS TRUNCATUS) FROM STRANDINGS IN THE MISSISSIPPI SOUND REGION OF THE NORTH-CENTRAL GULF OF MEXICO FROM 1986 TO 2003 Marine Mammal Science, 22 (3), 654-666 DOI: 10.1111/j.1748-7692.2006.00057.x

Nature (2003). Welcome to the Anthropocene Nature, 424 (6950), 709-709 DOI: 10.1038/424709b

Related articles: Have We Entered The Anthropocene (New Man) Epoch? (lockergnome.com)
New age researchers highlight how man is changing the world (scienceblog.com)
Orchids Get Seeds Saved To Prevent Their Extinction (treehugger.com)
Is Global Species Loss a Self-Fulfilling Prophecy? (treehugger.com)
Understand Thresholds To Understand Problems (blogs.forbes.com)
The Next Ice Age (discoveryenterprise.blogspot.com)
... Read more »

  • February 27, 2011
  • 11:27 AM
  • 722 views

Got beef with worms?

by zacharoo in Lawn Chair Anthropology

Photo: {http://news.brown.edu/pressreleases/2009/09/bilateral}, by Eric Rottinger at kahikai.org. Flipping through the current issue of Current Biology, it sounds like someone has some serious beef with acoelomorph flatworms. Apparently these critters have been used as a model for the 'missing link' between simple-bodied cnidarians (like jellyfish) and bilaterians (bilaterally symmetrical animals like you and me and flies and fish, and really a good deal of animal biodiversity); and this may be problematic according to the commentary. The origin of bilaterians is a major development in the evolution of body plans, a topic about which I know nothing. But I'm sold on the title and a line of the summary. Title: A Soap Opera of Unremarkable Worms. From the summary: "...acoelomorphs might instead be degenerate deuterostomes..." Take that, you shifty bastard flatworms. Acoelomorph roast. Lowe CJ, & Pani AM (2011). Animal Evolution: A Soap Opera of Unremarkable Worms. Current biology : CB, 21 (4) PMID: 21334293... Read more »

  • February 27, 2011
  • 02:47 AM
  • 805 views

Best Acknowledgment Ever

by teofilo in Gambler's House

In 1978 H. Martin Wobst of the University of Massachusetts at Amherst published a short article in American Antiquity entitled “The Archaeo-Ethnology of Hunter-Gatherers or the Tyranny of the Ethnographic Record in Archaeology.”  Despite the evocative title, the article itself is a highly theoretical argument about the proper relationship between archaeology and ethnography that is [...]... Read more »

  • February 26, 2011
  • 11:59 AM
  • 866 views

Imitation and Social Cognition (III): Man’s best friend

by Michael in A Replicated Typo 2.0




In my two previous posts (here and here) about imitation and social cognition I wrote about experiments which showed that

1) young children tend to imitate both the necessary and the unnecessary actions when shown how to get at a reward, whereas wild chimpanzees only imitate the necessary actions.

And that

2) both 14-month old human infants . . . → Read More: Imitation and Social Cognition (III): Man’s best friend... Read more »

Range F, Viranyi Z, & Huber L. (2007) Selective imitation in domestic dogs. Current biology : CB, 17(10), 868-72. PMID: 17462893  

  • February 25, 2011
  • 04:30 AM
  • 1,421 views

Brazilians, more European than not?

by Razib Khan in Gene Expression


Credit: Dragon Horse
The Pith: Brazil is often portrayed as the second largest black nation in the world, after Nigeria. But it turns out that the majority of the ancestry of non-white Brazilians is European.
One of the more popular sources of search engine traffic to this website has to do with the population genomics of Latin America. For example, my post showing that Argentina is not quite as European a country as it likes to consider itself is regularly cited in online arguments (people of various “persuasions” are invested in the racial status of the Argentine people). But last week in PLoS ONE a paper looking at the patterns of ancestry in the Brazilian population came to a somewhat inverse conclusion as to the self-conception or perception of the preponderant racial identity of that nation. Let me quote from the conclusion of the paper:
Among the actions of the State in the sphere of race relations are initiatives aimed at strengthening racial identity, especially “Black identity” encompassing the sum of those self-categorized as Brown or Black in the censuses and government surveys. The argument that ...... Read more »

Pena SDJ, Di Pietro G, Fuchshube-Moraes M, Genro JP, & Hutz MH. (2011) The Genomic Ancestry of Individuals from Different Geographical Regions of Brazil Is More Uniform Than Expected. PLoS ONE . info:/10.1371/journal.pone.0017063

  • February 24, 2011
  • 09:30 AM
  • 1,661 views

PsychBytes: First Names, Vegetables, and Baseball

by Jason Goldman in The Thoughtful Animal

PsychBytes is an experiment: three recent findings in psychology, each explained in three paragraphs or less. Generally, these are papers that I wouldn't have otherwise covered in this blog. Please share your thoughts on this model in the comments. What works, and what doesn't? Would you like more PsychBytes in the future?

What's In A Name?
People who settle down and build a life on the frontier tend to be more individualistic, even if they started out with more interdependent values. Some features of frontier life that would be attractive to an independent person are low population density, fewer social connections, and fewer social institutions. Indeed, people living in more recently settled regions of the United States more frequently behave in ways consistent with individualistic values, compared with people living in older parts of the country. This includes things like living alone after age 65 rather than moving into a retirement home, self-employment, and getting divorced. It's possible, however, that the relationship between these individualistic behaviors and frontier life is simply a statistical accident. For example, the rate of divorce could be related to religiosity, which is in turn related to individualism. It would appear as if there were a relationship between divorce and individualistic behaviors, but it would only be due to the shared relationship with religious beliefs.

In order to address this question, Michael Varnum and Shinobu Kitayama of the University of Michigan wondered if uncommon names were more common among children born on a frontier. The way that parents choose names for their children is a well-established indicator of independent values. Varnum and Kitayama note that "naming practices embody important cultural values, and are linked to a host of psychological, social, and economic outcomes." They found that a greater percentage of babies who were born in older parts of the United States, such as New England, were given popular names (for the year the child was born), compared with babies born in newer regions, such as the Pacific Northwest and Rocky Mountains. In fact, the year in which a state was admitted to the United States was negatively correlated with the percentage of infants who were given the most popular boys' and girls' names.

Correlation between a state's inclusion in the US and the giving of top 10 names. Boys above, girls below. Click to enlarge.


And this relationship wasn't unique to the United States. A similar dataset was generated using baby names given in seven provinces in Canada: three eastern provinces which were settled earlier (Nova Scotia, Ontario, and Quebec), and four western provinces which were more recently settled (Alberta, British Columbia, Manitoba, and Saskatchewan). As expected, popular names were more common in the older provinces than in the newer provinces. A third dataset using global data further replicated these results: popular names were more common in European countries (Austria, Denmark, England, Hungary, Ireland, Norway, Scotland, Spain, and Sweden) compared with "frontier countries," founded by European immigrants (Australia, Canada, New Zealand, and the United States). Baby naming is quite a significant decision for parents. It makes sense, then, that the practice would reflect cultural values.

Varnum ME, & Kitayama S (2011). What's in a Name?: Popular Names Are Less Common on Frontiers. Psychological science : a journal of the American Psychological Society / APS, 22 (2), 176-83 PMID: 21196534


Vegetables for Fun and Profit
How often do you hear parents promising their children dessert upon completion of their vegetables? While this sort of external motivation is very powerful, there is a potential downside: it could undermine intrinsic motivation. In other words, children might simply eat the vegetables to get the reward, and therefore never grow to like the vegetables themselves. This could result in poor eating choices later in childhood and adolescence, when the child is free to make his or her own decisions. The scientific literature on the use of incentives for children's vegetable consumption shows mixed conclusions: some studies show that vegetable intake increases when paired with a reward, and that those increases are maintained when the reward is withdrawn. Other studies find that as soon as the rewards are removed, vegetable intake returns to baseline. Lucy J. Cooke and colleagues from University College London and the University of Sussex attempted to clarify this confusing picture.

Over the course of twelve days, children aged 4-6 were exposed to a vegetable they didn't like. The children were divided into three intervention conditions and one control condition. In the first intervention condition, vegetables were paired with non-edible rewards such as stickers. The second intervention condition paired vegetables with social rewards (praise). The third intervention condition included no external reward, asking whether exposure alone could increase liking for a previously disliked vegetable. Finally, the children in the control condition received no vegetables and no rewards.

The kids in all three intervention conditions reported increased liking for their disliked vegetable after twelve days, with no significant differences between the three conditions. The liking was maintained for three months for the two reward conditions, but not for the exposure-only/no-reward condition. Taken together, this experiment suggests that rewarding children for eating their vegetables is not only extremely effective, but lasts a considerable amount of time following withdrawal of the reward. In fact, exposure alone without a reward is actually less effective. Parents: keep that dessert coming!

Cooke LJ, Chambers LC, Añez EV, Croker HA, Boniface D, Yeomans MR, & Wardle J (2011). Eating for Pleasure or Profit: The Effect of Incentives on Children's Enjoyment of Vegetables. Psychological science : a journal of the American Psychological Society / APS, 22 (2), 190-6 PMID: 21191095
Photo: Flickr/woodleywonderworks


How Do We Set Personal Goals?
Why are students who score 89% on an exam more likely to study harder before the subsequent exam, compared with students who score 82%? In both cases, the scores are just one percentage-point below the next grade level: 90% would be an A-, while 83% would be a solid B. And the amount of extra effort necessary to achieve a higher grade for either student is roughly equivalent. Devin Pope and Uri Simonsohn from the schools of business at the Universities of Chicago and Pennsylvania, respectively, think that round numbers serve as "cognitive reference points," which people use when judging their own outcomes. In other words, individuals whose performance is just short of a round number (such as our B+ student) would be more likely to work at improving their performance, compared with people whose performance is just above a round number (such as our B- student). To test this prediction, Pope and Simonsohn collected data from professional baseball players and high school students taking the SAT exam.

The data matched with their predictions. Professional batters were four times more likely to end a season with a .300 batting average than with a .299 average. High school juniors were 10-20% more likely to re-take the SAT in an effort to boost their scores if their initial score ended in "90" (as in 1190 or 1290) than if their initial s... Read more »
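The batting-average result is essentially a frequency discontinuity at the round-number threshold. As a toy sketch - the averages below are invented, not Pope and Simonsohn's data - one could tally season-ending averages on either side of .300 like this:

```python
# Toy illustration of the round-number "cognitive reference point" idea:
# count hypothetical season-ending batting averages just below vs. at
# the .300 threshold. All data are invented for demonstration.
from collections import Counter

season_averages = [0.299, 0.300, 0.300, 0.301, 0.300, 0.298, 0.300]
counts = Counter(season_averages)

just_below = counts[0.299]
at_round = counts[0.300]
print(at_round, just_below)  # far more toy seasons end at .300 than .299
```

In the real data, an excess of .300 finishes relative to .299 is what suggests players just short of the round number push to clear it.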

Varnum ME, & Kitayama S. (2011) What's in a Name?: Popular Names Are Less Common on Frontiers. Psychological science : a journal of the American Psychological Society / APS, 22(2), 176-83. PMID: 21196534  

Cooke LJ, Chambers LC, Añez EV, Croker HA, Boniface D, Yeomans MR, & Wardle J. (2011) Eating for Pleasure or Profit: The Effect of Incentives on Children's Enjoyment of Vegetables. Psychological science : a journal of the American Psychological Society / APS, 22(2), 190-6. PMID: 21191095  

Pope D, & Simonsohn U. (2011) Round numbers as goals: evidence from baseball, SAT takers, and the lab. Psychological science : a journal of the American Psychological Society / APS, 22(1), 71-9. PMID: 21148460  

  • February 24, 2011
  • 04:49 AM
  • 1,874 views

Neanderthals and ornaments, birds of a feather?

by Julien Riel-Salvatore in A Very Remote Period Indeed

© Mauro Cutrona.
M. Peresani and colleagues (2011) report on the discovery of cut-marked bird bones from the latest Mousterian levels at Grotta di Fumane, located in the Veneto region of NE Italy. They interpret the fact that these cutmarks are almost exclusively found on wing bones of only a subset of the 22 species of birds found at Fumane as evidence that Neanderthals there specifically ... Read more »

Zilhao, J., Angelucci, D., Badal-Garcia, E., d'Errico, F., Daniel, F., Dayet, L., Douka, K., Higham, T., Martinez-Sanchez, M., Montes-Bernardez, R.... (2010) Symbolic use of marine shells and mineral pigments by Iberian Neandertals. Proceedings of the National Academy of Sciences, 107(3), 1023-1028. DOI: 10.1073/pnas.0914088107  

  • February 22, 2011
  • 12:41 PM
  • 1,361 views

Ancestor Worship

by Laelaps in Laelaps

By the close of 2002, there were at least three contenders for the title of “earliest known human.” There was the 7 million year old Sahelanthropus tchadensis from the Djurab Desert, the 6 million year old Orrorin tugenensis from Kenya, and the 5.6 million year old Ardipithecus kadabba from northeastern Ethiopia’s Afar region. Though very [...]... Read more »

Brunet, M., Guy, F., Pilbeam, D., Mackaye, H., Likius, A., Ahounta, D., Beauvilain, A., Blondel, C., Bocherens, H., Boisserie, J.... (2002) A new hominid from the Upper Miocene of Chad, Central Africa. Nature, 418(6894), 145-151. DOI: 10.1038/nature00879  

McBrearty, S., & Jablonski, N. (2005) First fossil chimpanzee. Nature, 437(7055), 105-108. DOI: 10.1038/nature04008  

White, T., Asfaw, B., Beyene, Y., Haile-Selassie, Y., Lovejoy, C., Suwa, G., & WoldeGabriel, G. (2009) Ardipithecus ramidus and the Paleobiology of Early Hominids. Science, 326(5949), 64-64. DOI: 10.1126/science.1175802  

Wood, B., & Harrison, T. (2011) The evolutionary context of the first hominins. Nature, 470(7334), 347-352. DOI: 10.1038/nature09709  

  • February 22, 2011
  • 11:30 AM
  • 1,879 views

John Shea, Human Evolution, and Behavioral Variability – Not Behavioral Modernity

by Daniel Lende in Neuroanthropology PLoS

John Shea, a professor of anthropology at Stony Brook University, gives us a double whammy of actual human evolution this month, rather than the typical victorious narrative. Using fossil and archaeological evidence, Shea takes down the idea that we became “modern” late in human evolution, with that sense of modernity (or progress) often tied to causes like “a cognitive revolution” or “an explosion of culture.”
In other words, he wants to contradict the popular image just below:

We have known for years that this image is entirely wrong. Chimpanzees, often cast at the base of the sequence, are – wow – just as evolved as we are. After all, they are an extant species, living today, and have also gone through millions of years of evolution since our common ancestor.
The sequence of being hunched over as a shuffling apeman in the middle to an erect man at the end is also wrong. Bipedality came early in hominin evolution, and is linked with an upright skeletal frame, not some missing link that mixes chimp and human into one. Finally, the linear sequence itself is wrong. There were numerous species of hominins in the past, a veritable branching tree just like Darwin used to diagram.
Shea adds one more important element that helps to completely dismantle this popular image. We often assume that our immediate ancestors were primitive, and that something special must set us apart.
Shea’s article, Refuting a Myth about Human Evolution, is this month’s feature article in American Scientist.
For decades, archeologists have believed that modern behaviors emerged among Homo sapiens tens of thousands of years after our species first evolved. Archaeologists disagreed over whether this process was gradual or swift, but they assumed that Homo sapiens once lived who were very different from us. These people were not “behaviorally modern,” meaning they did not routinely use art, symbols and rituals; they did not systematically collect small animals, fish, shellfish and other difficult-to-procure foods; they did not use complex technologies: Traps, nets, projectile weapons and watercraft were unknown to them.
Premodern humans—often described as “archaic Homo sapiens”—were thought to have lived in small, vulnerable groups of closely related individuals. They were believed to have been equipped only with simple tools and were likely heavily dependent on hunting large game. Individuals in such groups would have been much less insulated from environmental stresses than are modern humans. In Thomas Hobbes’s words, their lives were “solitary, nasty, brutish and short.”
This view is wrong. Much of the evidence used to support it has relied on stone tool technologies, which are one of the most accessible records we have of human activity in the past, and on the presumed florescence of art and sophisticated tool making in Europe by Cro-Magnons, contrasted with those nasty, primitive Neanderthals.
Shea uses new evidence on human fossils and tool making to refute this old and worn-out narrative. First, modern humans have a greater time depth and greater geographical distribution than often assumed:
In Europe, the oldest Homo sapiens fossils date to only 35,000 years ago. But studies of genetic variation among living humans suggest that our species emerged in Africa as long as 200,000 years ago. Scientists have recovered Homo sapiens fossils in contexts dating to 165,000 to 195,000 years ago in Ethiopia’s Lower Omo Valley and Middle Awash Valley. Evidence is clear that early humans dispersed out of Africa to southern Asia before 40,000 years ago. Similar modern-looking human fossils found in the Skhul and Qafzeh caves in Israel date to 80,000 to 120,000 years ago. Homo sapiens fossils dating to 100,000 years ago have been recovered from Zhiren Cave in China. In Australia, evidence for a human presence dates to at least 42,000 years ago.
Second, the stone tool evidence shows great variability and overlap in tool types over this time period, rather than some march to modernity.
When [Clark's model of five modes of technology] is applied to sites in eastern Africa dating 284,000 to 6,000 years ago, a more complex view of prehistoric life there emerges. One does not see a steady accumulation of novel core technologies since our species first appeared or anything like a “revolution.” Instead one sees a persistent pattern of wide technological variability.
But Shea’s real target is not simply the documentation of variation in fossils and tools in the past, it’s how we think about the past, and how we do research and test ideas about recent human evolution.
In the most recent Current Anthropology, Shea has a peer-reviewed article, Homo Sapiens Is as Homo Sapiens Was: Behavioral Variability versus “Behavioral Modernity” in Paleolithic Archaeology. Here is the abstract:
Paleolithic archaeologists conceptualize the uniqueness of Homo sapiens in terms of “behavioral modernity,” a quality often conflated with behavioral variability. The former is qualitative, essentialist, and a historical artifact of the European origins of Paleolithic research. The latter is a quantitative, statistically variable property of all human behavior, not just that of Ice Age Europeans. As an analytical construct, behavioral modernity is deeply flawed at all epistemological levels.

  • February 22, 2011
  • 11:30 AM
  • 1,876 views

John Shea, Human Evolution, and Behavioral Variability – Not Behavioral Modernity

by gregdowney in Neuroanthropology

John Shea, a professor of anthropology at Stony Brook University, gives us a double whammy of actual human evolution this month, rather than the typical victorious narrative. Using fossil and archaeological evidence, Shea takes down the idea that we became “modern” late in human evolution, with that sense of modernity (or progress) often tied to causes like “a cognitive revolution” or “an explosion of culture.”
In other words, he wants to contradict the popular image just below:

We have known for years that this image is entirely wrong. Chimpanzees, often cast at the base of the sequence, are – wow – just as evolved as we are. After all, they are an extant species, living today, and have also gone through millions of years of evolution since our common ancestor.
The sequence of being hunched over as a shuffling apeman in the middle to an erect man at the end is also wrong. Bipedality came early in hominin evolution, and is linked with an upright skeletal frame, not some missing link that mixes chimp and human into one. Finally, the linear sequence itself is wrong. There were numerous species of hominins in the past, a veritable branching tree just like Darwin used to diagram.
Shea adds one more important element that helps to completely dismantle this popular image. We often assume that our immediate ancestors were primitive, and that something special must set us apart.
Shea’s article, Refuting a Myth about Human Evolution, is this month’s feature article in American Scientist.
For decades, archeologists have believed that modern behaviors emerged among Homo sapiens tens of thousands of years after our species first evolved. Archaeologists disagreed over whether this process was gradual or swift, but they assumed that Homo sapiens once lived who were very different from us. These people were not “behaviorally modern,” meaning they did not routinely use art, symbols and rituals; they did not systematically collect small animals, fish, shellfish and other difficult-to-procure foods; they did not use complex technologies: Traps, nets, projectile weapons and watercraft were unknown to them.
Premodern humans—often described as “archaic Homo sapiens”—were thought to have lived in small, vulnerable groups of closely related individuals. They were believed to have been equipped only with simple tools and were likely heavily dependent on hunting large game. Individuals in such groups would have been much less insulated from environmental stresses than are modern humans. In Thomas Hobbes’s words, their lives were “solitary, nasty, brutish and short.”
This view is wrong. Much of the evidence used to support it has relied on stone tool technologies, which are among the most accessible records we have of human activity in the past, and on the presumed florescence of art and sophisticated tool making in Europe by Cro-Magnons, contrasted with those nasty, primitive Neanderthals.
Shea uses new evidence on human fossils and tool making to refute this old and worn-out narrative. First, modern humans have a greater time depth and greater geographical distribution than often assumed:
In Europe, the oldest Homo sapiens fossils date to only 35,000 years ago. But studies of genetic variation among living humans suggest that our species emerged in Africa as long as 200,000 years ago. Scientists have recovered Homo sapiens fossils in contexts dating to 165,000 to 195,000 years ago in Ethiopia’s Lower Omo Valley and Middle Awash Valley. Evidence is clear that early humans dispersed out of Africa to southern Asia before 40,000 years ago. Similar modern-looking human fossils found in the Skhul and Qafzeh caves in Israel date to 80,000 to 120,000 years ago. Homo sapiens fossils dating to 100,000 years ago have been recovered from Zhiren Cave in China. In Australia, evidence for a human presence dates to at least 42,000 years ago.
Second, the stone tool evidence shows great variability and overlap in tool types over this time period, rather than some march to modernity.
When [Clark's model of five modes of technology] is applied to sites in eastern Africa dating 284,000 to 6,000 years ago, a more complex view of prehistoric life there emerges. One does not see a steady accumulation of novel core technologies since our species first appeared or anything like a “revolution.” Instead one sees a persistent pattern of wide technological variability.
But Shea’s real target is not simply the documentation of variation in fossils and tools in the past, it’s how we think about the past, and how we do research and test ideas about recent human evolution.
In the most recent Current Anthropology, Shea has a peer-reviewed article, Homo Sapiens Is as Homo Sapiens Was: Behavioral Variability versus “Behavioral Modernity” in Paleolithic Archaeology. Here is the abstract:
Paleolithic archaeologists conceptualize the uniqueness of Homo sapiens in terms of “behavioral modernity,” a quality often conflated with behavioral variability. The former is qualitative, essentialist, and a historical artifact of the European origins of Paleolithic research. The latter is a quantitative, statistically variable property of all human behavior, not just that of Ice Age Europeans. As an analytical construct, behavioral modernity is deeply flawed at all epistemological levels.
This paper outlines the shortcomings of behavioral modernity and instead proposes a research agenda focused on the strategic sources of human behavioral variability. Using data from later Middle Pleistocene archaeological sites in East Africa, this paper tests and falsifies the core assumption of the behavioral-modernity concept—the belief that there were significant differences in behavioral variability between the oldest H. sapiens and populations younger than 50 kya. It concludes that behavioral modernity and allied concepts have no further value to human origins research. Research focused on the strategic underpinnings of human behavioral variability will move Paleolithic archaeology closer to a more productive integration with other behavioral sciences.
As Shea writes, the “behavioral modernity” approach focuses too much on the search for “human uniqueness”, and is framed by a narrative of moving from a primordial past to our resounding success as a species. In a typical heroic narrative, obstacles must be overcome, failures fought through, until finally there is a transformation in the hero – ourselves. That transformation is when we became “behaviorally modern”, and language and art and all those good things are the trappings of our success.
This narrative comes with significant limitations.
The strongest reason for discarding “behavioral modernity” and “modern human behavior” is that they lack analytical precision. As matters stand today, there are wide and irresoluble theoretical disagreements about the nature of behavioral modernity, how to define it, and how to recognize it. Eurasian prehistorians use the term “modern human behavior” for evidence that occurs consistently over tens of thousands of years at a regional scale (various parts of Europe and/or Southwest Asia). Africanists use the term for behavior that occurs intermittently over hundreds of thousands of years at a continental scale. Neither term clarifies the description of archaeological evidence, nor does either of them refine our understanding of the evolution and variability of a particular behavior. They have become postmodern concepts, words that mean whatever one wants them to.
The idea that behavioral modernity is a derived evolutionary state, one not shared by all morphologically modern-looking H. sapiens and one that can be reliably diagnosed from behavioral characteristics, is rich with potential for abuse. It fits well with racist arguments that there are meaningful grade-level evolutionary differences among living humans. Such views are rarely expressed in scientific circles (or polite company), but they nevertheless can find traction among nonscientific audiences because they incorporate the same unilinear model of human evolution that underlies the behavioral modernity concept. If paleoanthropologists judge humans’ evolutionary state based on their behavior, why shouldn’t others do so as well? Discarding the term “behavioral modernity” will not stop individuals from cherry-picking selected findings of paleoanthropology to support racist agendas, but it will deny them the illusion that they are emulating an accepted scientific method.
Shea proposes a focus on behavioral variability as the solution. He approaches it as a quantifiable problem, and one linked to similar types of research in behavioral ecology.
Variability is a measurable quality of all human behavior expressed in terms of modality, variance, skew, and other quantitative/statistical properties. These qualities change through time and space, and they do not necessarily follow a preferred direction. Trends are recognizable only i... Read more »
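Shea's quantitative framing is concrete enough to sketch in code. The snippet below computes two of the properties he names, variance and skew, for a hypothetical set of tool-type counts across sites; the data are invented for illustration and do not come from the paper.

```python
from statistics import mean, pvariance

def skewness(xs):
    """Population skewness: the third standardized moment.
    Positive values mean a long right tail (a few large outliers)."""
    m = mean(xs)
    var = pvariance(xs)
    n = len(xs)
    return sum((x - m) ** 3 for x in xs) / (n * var ** 1.5)

# Hypothetical counts of one core technology across six sites (illustrative only)
counts = [2, 3, 4, 3, 2, 14]
print(pvariance(counts))  # spread of the behavior across sites
print(skewness(counts))   # positive: a single site dominates the distribution
```

The point of treating variability this way is that it is directional-agnostic: variance and skew can rise or fall through time without implying progress toward "modernity".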

Shea, J. (2011) Homo Sapiens Is as Homo Sapiens Was: Behavioral Variability versus “Behavioral Modernity” in Paleolithic Archaeology. Current Anthropology, 52(1), 1-35. DOI: 10.1086/658067  

  • February 22, 2011
  • 09:30 AM
  • 1,532 views

Might Pleistocene Fido Have Been A Fox?

by Jason Goldman in The Thoughtful Animal

There is a small bit of land, only about a square kilometer, that has added a new wrinkle to the story of animal domestication. This bit of land located in Northern Jordan, just southeast of the Sea of Galilee near the banks of the Jordan River, is home to an archaeological site known as 'Uyun al-Hammam. One key feature of this site, excavated in 2005, is a burial ground containing the remains of at least eleven humans in eight different gravesites. The early humans were buried here sometime during the pre-Natufian period, or around 16,500 years ago.

Layout of the 'Uyun al-Hammam site, and an inset map indicating its location. Click to enlarge.


In addition to the human remains, the archaeologists and paleontologists from the Universities of Cambridge and Toronto, led by Lisa A. Maher, uncovered several animal bones from among the grave sites. In grave one, a fox skull was found along with its right humerus, as well as the remains of a gazelle, a deer, and a tortoise. In grave seven, the researchers discovered most of a fox skeleton, a red deer antler, and a fragment of a goat's horn. It's fairly common to find pieces of animal horns or bones sculpted into tools around human settlements, but a complete fox skeleton? This is unusual.

By comparing the fox skull with other fossil canid skulls from the same geographical region, the researchers were able to determine that theirs was a red fox (Vulpes vulpes), which was indeed known to be present in that area during the late Pleistocene. It turned out, somewhat surprisingly, that the fox skull and the fox skeleton belonged to the very same fox.

But why were the skull and humerus of the fox found in grave one, while a fox skeleton sans skull and humerus was all the way over in grave seven?
... Read more »

  • February 21, 2011
  • 12:51 AM
  • 1,229 views

Tag-teaming research blogging: Me and Sci do it up, PMDD-style

by Kate Clancy in Context & Variation

When I was in college, my favorite hangout was the basement of the Harvard Book Store, where they had the used books and cheap remainders (they were also across the street from my freshman dorm, Wigglesworth, and yes, that is a most excellent name). I worked my way through several sci-fi and fantasy series, and got nearly all my Women’s Studies books, because of that one lovely room.

One night in my freshman year I was browsing the philosophy section with a new boyfriend, a person with whom I often felt inferior and less-educated. I saw an author name on the spine of an old hardcover and, hoping to impress the boyfriend, pointed it out. “Hobbes Machiavelli, I’ve read stuff by him,” I said. I arched my eyebrows with what I hoped was an air of intelligence.

The boyfriend, and a nearby witness, both turned towards me. “Hobbes and Machiavelli are two different people,” he said slowly.

As a blush crept up my face, I realized several things: the excerpt of “The Prince” I had barely skimmed in high school was by Niccolo Machiavelli, Hobbes was a totally different dude, and my boyfriend thought I was a posturing idiot.

It’s a good idea to know what you’re talking about before opening your mouth.

* * *

These days, if I don’t know the answer to something, I don’t try to fake it. Recently, a Twitter follower suggested I write on this New Scientist story and the empirical article upon which it was reporting, on brain activity, hormones and Premenstrual Dysphoric Disorder. As I am not an expert on issues of the brain, rather than try to be I enlisted brilliant neuroscientist Scicurious to do tag-team blog posts where we could each cover the material where we had expertise. I had a few thoughts about the way the New Scientist article author framed the study, and about the hormone analyses. So I’ll talk about that, and Sci will cover BRAINZ in this post.

What is this study about?

Rapkin et al (2011) seek to understand why a minority of women experience Premenstrual Dysphoric Disorder (PMDD), a suite of premenstrual behaviors that include severe and debilitating irritability, depression and anxiety. They used PET scans to look at brain stuff (cue Scicurious) and also looked at hormone concentrations to see if the reproductive hormones that decline in the premenstrual phase had anything to do with it. They found no difference in hormone concentrations between control and PMDD women, but did find variation in cerebellar activity by menstrual phase. You need to read Scicurious's take on this, because she provides important background and context to the study of the cerebellum for mood.

The New Scientist piece makes a lot of the potential effect of progesterone on GABA receptors in the brain, but as far as I can tell the article itself does not measure GABA receptors. Progesterone, allopregnanolone and GABA are all interrelated and important chemicals when it comes to mood (Concas et al 1998), but like I said, since the study didn’t actually look at GABA, I’m not going there. Sci has also made some important points about this issue, contrasting what the study authors found (which is admittedly cool) with what they discuss around GABA (which might be a wee bit of a stretch).

Nits to pick with New Scientist

Zukerman, the author of the New Scientist piece, begins her piece, entitled “Why women get anxious at ‘that time of the month’”, with this:

“Is it that time of the month? These are the words no man should ever utter. How about this for a diplomatic alternative: "Are your GABA receptors playing up?" You may be spot on. It seems that these brain cells are to blame for some women's monthly mood swings. Many women feel a little irritable before menstruating, but up to 8 per cent suffer extreme symptoms, including anxiety, depression and fatigue.”

There are a few things that trouble me about this. First, without citing any actual incidence of this symptom, the author claims that many women suffer from irritability before their period. This just perpetuates the idea that irritability is a common premenstrual trait, when the premenstrual phase is an incredibly variable period. This is despite the fact that at most only eight percent of women actually get these symptoms to the point that they are debilitating (the two studies the study authors cite give a 5% and 8% incidence, so 8% may be high).

From a public health or science research perspective, eight percent of reproductively aged women is a pretty significant quantity. I absolutely want more research to be done on PMDD and, full disclosure, I’m running some pilot studies to work on it in the future myself. However, these results don’t necessarily translate to women who may just get a little irritable or experience other mild behavioral symptoms before their period.

And that is why both the title and the “Is it that time of the month” joke at the start of the story were misleading. Besides its obvious sexism, where any female behavior that deviates from the pleasing and passive risks eliciting that question, the link here in the mind of a popular reader is that women’s behavior is governed by hormone and brain interactions more generally than the paper actually implies.

So, to reiterate: PMDD impacts maybe eight percent of reproductively aged women (notice that I keep specifically referencing “reproductively-aged women,” which further shrinks the pool of women down to those between menarche and menopause). This is nothing to sneeze at. But this isn’t everyone.

Hormones

In order to see if there were differences in hormone concentrations between normal and PMDD women, Rapkin et al (2011) took blood on the days of the PET scans: this translated into one follicular phase collection (first half of the cycle, between menses and ovulation) and one late luteal phase collection (the week or so before the next menses). They found no difference in the mean concentrations of estradiol and progesterone between the two groups, at either time period.

Table 1 from Rapkin et al (2011). None of these differences between groups are significant according to the authors, but they didn't report p-values anywhere I could find.

There are several problems with this. First, the sample size is tiny. I have certainly been known to run analyses with fewer subjects, but the way I and other folks who do hormone work get around this is to sample each individual many more times. When collecting hormone information on reproductively-aged women, for instance, you want to collect a minimum of one menstrual cycle’s worth of data… every single day.

More power!

My advisor raised me right, and so I did a power analysis of the data the study authors provided. A power analysis is a way to determine the statistical power of a test. You can do it beforehand to determine an appropriate sample size for your experiment, or afterwards if you didn’t find something statistically significant and don’t know if your analysis was effective. When there are small but important differences between two groups, but the sample size is also small, your statistical test can be insignificant and thus miss that important difference.

Let’s take the hormone and time period that should be the most meaningful: progesterone in the late luteal phase. PMDD women had 5.50 ± 5.27 ng/mL, and control women had 6.76 ± 7.53 ng/mL. If we say that the smallest difference between these two groups that would be interesting is around 6 ng/mL (just splitting the difference between the two standard deviations, but this is pretty generous), then according to my calculations this test only has a power of about 60%. Therefore, 40% of the time a test with a sample size this small wouldn’t catch a potentially important difference between the groups. To put it into more perspective, the standard is to have a power of at least 80%.

What’s blood got to do with it?... Read more »
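A back-of-the-envelope power calculation like the one described above can be sketched with a normal approximation. The standard deviations (5.27 and 7.53 ng/mL) and the ~6 ng/mL difference of interest come from the post; the per-group sample size is not given in the excerpt, so the n = 10 below is an assumed placeholder, not the study's actual n.

```python
from statistics import NormalDist

def two_sample_power(delta, sd1, sd2, n1, n2, alpha=0.05):
    """Approximate power of a two-sided two-sample test of means,
    using the normal approximation to the t distribution."""
    nd = NormalDist()
    se = (sd1 ** 2 / n1 + sd2 ** 2 / n2) ** 0.5   # standard error of the difference
    z_crit = nd.inv_cdf(1 - alpha / 2)            # two-sided critical value
    # chance the observed difference lands beyond the critical value in either tail
    return (1 - nd.cdf(z_crit - delta / se)) + nd.cdf(-z_crit - delta / se)

# Late-luteal progesterone SDs from the post; n = 10 per group is an assumption
print(two_sample_power(6, 5.27, 7.53, 10, 10))
```

With an assumed ten subjects per group, this approximation gives power in the 50-60% range, in line with the ~60% figure quoted, and it takes roughly four times as many subjects per group to reach the conventional 80% threshold.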

Concas A, Mostallino MC, Porcu P, Follesa P, Barbaccia ML, Trabucchi M, Purdy RH, Grisenti P, & Biggio G. (1998) Role of brain allopregnanolone in the plasticity of gamma-aminobutyric acid type A receptor in rat brain during pregnancy and after delivery. Proceedings of the National Academy of Sciences of the United States of America, 95(22), 13284-9. PMID: 9789080  

Rapkin AJ, Berman SM, Mandelkern MA, Silverman DH, Morgan M, & London ED. (2011) Neuroimaging evidence of cerebellar involvement in premenstrual dysphoric disorder. Biological psychiatry, 69(4), 374-80. PMID: 21092938  

  • February 20, 2011
  • 11:46 PM
  • 1,842 views

Did the Red Fox Predate the Dog as Man's Best Friend?

by William Yates, M.D. in Brain Posts

Pets have long played a role in human companionship. Wild animals were primarily a source of food (and danger) early in human development. Later, animals began to serve more complex and utilitarian roles: domestication of horses, for example, allowed for extended travel, improved the efficiency of hunting and provided a strategic advantage in battle.

The domestication of wild wolves is thought to be one of the earliest examples of keeping animals for companionship and the development of a human/mammal pet relationship. Now some recent archaeological research suggests that the red fox may have predated the domesticated wolf as the original canine domesticated by humans.

Maher and colleagues present the case for the red fox predating domesticated wolves in a recent research summary in PLoS ONE. The research team includes members from the University of Cambridge in the UK as well as the University of Toronto. Using findings from burial grounds in what is now northern Jordan, they lay out the evidence that the red fox predates the wolf as man's domesticated canine.

Seven human grave sites have been examined in the 'Uyun al-Hammam region between the Transjordanian Highlands and the Jordan valley. The key elements from these findings include:

  • Red fox skulls and bones are noted in several burial sites in proximity to human remains
  • The proximity and manner of the red fox bones suggest intentional placement rather than coincidence
  • Some human bones were re-interred, with movement and replacement of red fox bones in the new grave
  • This re-burial process suggests a personal relationship between the deceased human and a specific red fox; the transfer may have indicated an attempt to ensure that "the dead person would continue to have the fox with him or her in the afterlife"
  • The pattern of remains is not consistent with a secondary process such as use of the red fox as a pelt or consumption of the animal
  • The carbon dating data suggest this burial site pre-dated, by thousands of years, the earliest known burial sites that include domesticated wolves
  • Later grave sites show humans being buried with dogs, supporting an emotional tie with social, ideological or symbolic significance

In summary, the research team feels these findings support earlier "non-economic connections between people and animals".

I admit this post is outside my area of expertise. However, we are becoming more aware of the potential importance of pet relationships in loneliness, stress reduction and social psychology. I do have to note that I took the photo of the red fox in this post in the winter of 2009-2010 in my back yard in Tulsa, Oklahoma. After a prolonged period of very low temperatures and frozen water sources, this red fox jumped a fence into my back yard and drank from our pool. This research study suggests the red fox was one of man's earliest pets. I'm happy to do whatever I can to aid this beautiful animal.

Photo of Red Fox Courtesy of Yates Photography

Maher, L., Stock, J., Finney, S., Heywood, J., Miracle, P., & Banning, E. (2011). A Unique Human-Fox Burial from a Pre-Natufian Cemetery in the Levant (Jordan). PLoS ONE, 6(1). DOI: 10.1371/journal.pone.0015815... 
Read more »

  • February 20, 2011
  • 10:30 PM
  • 1,254 views

Is Romantic Love a Western, Heterosexual Construct?

by The Neurocritic in The Neurocritic

ROMANTIC LOVE WAS INVENTED TO MANIPULATE WOMEN
-Jenny Holzer, Truisms

Does romantic love manipulate women into providing free domestic labor and sexual favors for men? Some feminist views of romantic love [and the institution of marriage] portray it as controlling and oppressive (Burns, 2000):

‘STOP HUMAN SACRIFICE. END MARRIAGE NOW.’ ‘IT STARTS WHEN YOU SINK IN HIS ARMS AND ENDS WITH YOUR ARMS IN HIS SINK.’ From a feminist perspective, romantic love was, and is, seen to obscure or disguise gender inequality and women’s oppression in intimate heterosexual relationships.

But some in the men's movement see romantic love as dangerous for men as well as women, because it prevents men from being vulnerable (Bloodwood, 2003):

...historically, romantic love has been a highly gendered but workable deal in which men provide women with social status and material goods while women provide men with sex/affective labour. Thus romantic relationships not only reinforce women’s second class status but also reinforce men’s lack of sex/affective autonomy, so that romantic love is equally dangerous for women and for men.

Furthermore, romantic love is often portrayed as a relatively recent construct that is specific to Western societies. A cross-cultural study by Jankowiak and Fischer (1992) claimed that:

The anthropological study of romantic (or passionate) love is virtually nonexistent due to the widespread belief that romantic love is unique to Euro-American culture. This belief is by no means confined to anthropology. The historian Philippe Aries (1962), for example, argues that affection was of secondary importance to more utilitarian ambitions throughout much of European history.

However, their own analysis of the ethnographic literature found that romantic love (however ill-defined) could be observed in 147 out of 166 societies, including 77% in Sub-Saharan Africa and 94% in East Eurasia (Jankowiak & Fischer, 1992). Likewise, evolutionary anthropologist Helen Fisher and colleagues suggest that romantic love evolved as one of three motivational brain systems for mating, reproduction, and parenting (Fisher et al., 2002).

The biological concept that romantic love (or attraction) is an emotional/motivational system in the human brain has prompted some neuroimaging investigators to search for its elusive neural correlates. How do you measure long-term intense romantic love in an fMRI experiment? Researchers have adopted the practical (yet flawed) strategy of examining the hemodynamic response to viewing pictures of a partner with whom participants were "madly in love".

Previous studies on the "neural correlates of romantic love" have focused on recently attached heterosexuals from the UK (Bartels & Zeki, 2000) or US (Aron et al., 2005). One of the main findings from these studies is that the expected dopamine/reward areas [including ventral tegmental area (VTA), substantia nigra (SN), and caudate nucleus] showed greater activation when looking at the pictures of the partner, compared to pictures of a close friend or neutral acquaintance. And in the previous post on Posterior Hippocampus and Sexual Frequency, we saw a similar response in a specifically recruited group of participants still "madly in love" after 21 years of marriage (Acevedo et al., 2011).

So are the "neural correlates of romantic love" the same in non-Western, non-heterosexual participants? Two recent papers attempted to spread the love to include diverse "others" (Xu et al., 2010; Zeki & Romaya, 2010). Is the simple act of asking if the Chinese and teh gays are "just like us" when it comes to love offensive? I'll let you be the judge.

Although the original study of Bartels and Zeki (2000) recruited an ethnically and culturally diverse group of subjects, all were heterosexual. Zeki and Romaya (2010) wanted to extend this work to include romantically involved gay participants. This time, they included 12 females (6 in straight and 6 in lesbian relationships) and 12 males (6 in straight and 6 in gay relationships) in their fMRI experiment. I won't belabor the methods [and the critiques thereof] here, but will refer the reader to Posterior Hippocampus and Sexual Frequency.1

Fig. 2 (Zeki & Romaya, 2010). Illustration of the t statistic for the contrast Loved > Neutral showing selected activations superimposed over averaged anatomical sections. Random effects analysis with 24 subjects. Background threshold p uncorrected < 0.001. (A) Medial sagittal plane (x = 0) showing activations in the tegmentum [VTA], hypothalamus and [cerebellar] vermis. (B) Sagittal plane x = −12 (LH) showing activation in the caudate head, anterior cingulate and parietal cortex. (C) Horizontal plane z = −30; right cerebellum. (D) Horizontal plane z = −9; mid insula, left hemisphere.

As for differences between the groups, there were none: no main or interactive effects of gender or sexual orientation. The results were the same for gay and straight, male and female participants [but remember that the numbers were very low, n=6 for each of the four cells]. So this particular [underpowered] study suggests that "the romantic love brain circuit" (i.e., familiarity, attention, memory, reward, etc. activity associated with looking at your partner's face) is not restricted to heterosexuals. Did they really expect anything different? Actually not, Zeki and Romaya predicted a null effect.

However, the authors themselves note the difficulties inherent in their entire endeavor:

We begin by emphasizing that any study of so complex and overpowering a sentiment as love is fraught with difficulties. Chief among these is that the sentiment itself involves many components – erotic, emotional, and cognitive – that are almost impossible to isolate from the overall sentiment of love. ... While acknowledging this difficulty, we tried as best we could to circumvent it, by applying a uniform criterion – that of a loved face – for studying the brain's love system. Another problem is the difficulty of controlling the mental processes that occur when subjects view their lovers' faces. The only way to address this is through the statistical methods we have used to analyze our results. We have employed a random effects analysis using the summary st... Read more »

  • February 19, 2011
  • 02:06 PM
  • 1,126 views

The Web of Morgellons

by Neuroskeptic in Neuroskeptic

A fascinating new paper: Morgellons Disease, or Antipsychotic-Responsive Delusional Parasitosis, in an HIV Patient: Beliefs in The Age of the Internet

“Mr. A” was a 43-year-old man...His most pressing medical complaint was worrisome fatigue. He was not depressed...had no formal psychiatric history, no family psychiatric history, and he was a successful businessman.

He was referred to the psychiatry department by his primary-care physician (PCP) because of a 2-year-long complaint of pruritus [itching] accompanied by the belief of being infested with parasites. Numerous visits to the infectious disease clinic and an extensive medical work-up...had not uncovered any medical disorder, to the patient’s great frustration.

Although no parasites were ever trapped, Mr. A caused skin damage by probing for them and by applying topical solutions such as hydrogen peroxide to “bring them to the surface.” After reading about Morgellons disease on the Internet, he “recalled” extruding particles from his skin, including “dirt” and “fuzz.”

During the initial consultation visit with the psychiatrist, Mr. A was apprehensive but cautiously optimistic that a medication could help. The psychiatrist had been forewarned by the PCP that the patient had discovered a website describing Morgellons and “latched onto” this diagnosis. However, it was notable that the patient allowed the possibility (“30%”) that he was suffering from delusions (and not Morgellons), mostly because he trusted his PCP, “who has taken very good care of me for many years.”

The patient agreed to a risperidone [an antipsychotic] trial of up to 2 mg per day [i.e. a lowish dose]. Within weeks, his preoccupation with being infested lessened significantly... Although not 100% convinced that he might not have Morgellons disease, he is no longer pruritic and is no longer damaging his skin or trying to trap insects. He remains greatly improved 1 year later.

(Mr A. had also been HIV+ for 20 years, but he still had good immune function and the HIV may have had nothing to do with the case.)

"Morgellons" is, according to people who say they suffer from it, a mysterious disease characterised by the feeling of parasites or insects moving underneath the skin, accompanied by skin lesions out of which emerge strange, brightly-coloured fibres or threads. Other symptoms include fatigue, aches and pains, and difficulty concentrating.

According to almost all doctors, there are no parasites, the lesions are caused by the patient's own scratching or attempts to dig out the non-existent critters, and the fibres come from clothes, carpets, or other textiles which the patient has somehow inserted into their own skin. It may seem unbelievable that someone could do this "unconsciously", but stranger things have happened.

As the authors of this paper, Freudenreich et al, say, Morgellons is a disease of the internet age. It was "discovered" in 2002 by Mary Leitao, with Patient Zero being her own 2-year-old son. Since then its fame, and the reported number of cases, has grown steadily - especially in California.

Delusional parasitosis is the opposite of Morgellons: doctors believe in it, but the people who have it don't. It's seen in some mental disorders and is also quite common in abusers of certain drugs like methamphetamine. It feels like there are bugs beneath your skin. There aren't, but the belief that there are is very powerful.

This, then, is the raw material in most cases; what the concept of "Morgellons" adds is a theory, a social context and a set of expectations that helps make sense of the otherwise baffling symptoms. And as we know, expectations, whether positive or negative, tend to become experiences.
The diagnosis doesn't create the symptoms out of nowhere but rather takes them and reshapes them into a coherent pattern.

As Freudenreich et al note, doctors may be tempted to argue with the patient - you don't have Morgellons, there's no such thing, it's absurd - but the whole point is that mainstream medicine couldn't explain the symptoms, which is why the patient turned to less orthodox ideas. Remember the extensive tests that came up negative "to the patient’s great frustration." And remember that "delusional parasitosis" is not an explanation, just a description, of the symptoms. To diagnose someone with that is to say "We've no idea why, but you've imagined this". True, maybe, but not very palatable.

Rather, they say, doctors should just suggest that maybe there's something else going on, and should prescribe a treatment on that basis. Not rejecting the patient's beliefs, but saying: maybe you're right, but in my experience this treatment makes people with your condition feel better, and that's why you're here, right?

Whether the pills worked purely as a placebo or whether there was a direct pharmacological effect, we'll never know. Probably it was a bit of both. It's not clear that it matters, really. The patient improved, and it's unlikely that the pills would have worked as well if they'd been given in a negative atmosphere of coercion or rejection - if indeed he'd agreed to take them at all.

Morgellons is a classic case of a disease that consists of an underlying experience filtered through the lens of a socially-transmitted interpretation. But every disease is that, to a degree. Even the most rigorously "medical" conditions like cancer also come with a set of expectations and a social meaning; psychiatric disorders certainly do.

I guess Morgellons is too new to be a textbook case yet - but it should be. Everyone with an interest in the mind, everyone who treats diseases, and everyone who's ever been ill - everyone, really - ought to be familiar with it, because while it's an extreme case, it's not unique. "All life is here" in those tangled little fibres. ... Read more »

Freudenreich O, Kontos N, Tranulis C, & Cather C. (2010) Morgellons disease, or antipsychotic-responsive delusional parasitosis, in an HIV patient: beliefs in the age of the internet. Psychosomatics, 51(6), 453-7. PMID: 21051675  
