The previous post took a look at some recent research on how competition for mates affects how religious people say they are. When a group of students in the US were subtly reminded that there's a lot of competition for potential mates, they responded by claiming to be more religious. One potential explanation for this is simply that being religious is seen as socially desirable.

If this were true, then you would expect that people who are inclined to 'self-enhance' (i.e. paint a rather flattering portrait of themselves) are also more likely to say that they are religious. There have been a huge number of studies looking at this over the years (57 studies, in fact, totalling over 15,000 subjects), and Constantine Sedikides (at Southampton University in the UK) has just compiled the results into a mega-study.

The results confirm that religion is strongly correlated with socially desirable responding (i.e. the tendency to give answers about yourself that you think will make you look good). There are two kinds of socially desirable responding - self-deception (i.e. subconscious) and image manipulation (i.e. consciously talking yourself up). Overall, image manipulation, but not self-deception, was correlated with religion.

Sedikides was also able to look at the two fundamental aspects of religion (well, as it is understood in the Western world, at least) - extrinsic and intrinsic religion. Extrinsic religion is basically the externalised expression of religion, whereas intrinsic religion is the internalised beliefs.

Now, you might think that extrinsic religion would be closely linked to image manipulation, but you'd be wrong. In fact, both self-deception and image manipulation were linked to higher intrinsic religiosity, while extrinsic religion was actually linked to less of both kinds of self-enhancement.

A fascinating result, but it becomes even more interesting when you break it down at the national level. 
As you can see from the graph, the strongest effect of religion is in the USA. In Canada and the UK, the link between intrinsic religion and self-enhancement is smaller. Bizarrely, in the UK, self-enhancement is linked to more extrinsic religion (unlike the USA and Canada, where self-enhancers are actually less likely to claim extrinsic religiosity).

Sedikides speculates that this is down to the different role of religion in the USA compared with Canada and, especially, the UK. In the USA, most people are religious and it's common for people to frown upon those who are only religious for what could be seen as superficial reasons. Self-enhancers respond by saying that they are intrinsic believers, but that they are not extrinsically religious. (Compare this with the study last year that showed people in the US believe the religious to be healthier, happier, and more normal than they actually are.)

In the UK, religion is a minority pursuit, and subject to ridicule. Self-enhancers respond by saying that they don't really take the beliefs too seriously, but that they are in it for the community and social side.

Sedikides also looked at the difference between secular universities in the USA and Christian ones, and found something similar. Self-enhancers at Christian universities report high intrinsic religion and low extrinsic religion. This effect is muted at secular universities (especially for extrinsic religion).

Now, if you've read this far, you are probably wondering why there's all this talk of cause and effect, given that all the data are correlational. Well, Sedikides has an answer. He points out that it's well known that people use a wide variety of means to satisfy their self-enhancement motives, so you would expect them to use religion as well. What's more, self-enhancement is a very basic psychological structure, whereas religion is primarily a cultural adaptation.

Sedikides, C., & Gebauer, J. (2009). 
Religiosity as Self-Enhancement: A Meta-Analysis of the Relation Between Socially Desirable Responding and Religiosity. Personality and Social Psychology Review. DOI: 10.1177/1088868309351002

This article by Tom Rees was first published on Epiphenom. It is licensed under Creative Commons.
A debate that’s been going on for some time is the role of ‘distraction’ in pain management. So many of the people I see have told me they ‘just ignore’ the pain, or ‘I try to distract myself’, or similar, that there isn’t much doubt in my mind that people habitually use attention management as a [...]
Elomaa, M., de C. Williams, A., & Kalso, E. (2009) Attention management as a treatment for chronic pain. European Journal of Pain, 13(10), 1062-1067. DOI: 10.1016/j.ejpain.2008.12.002
Part of practicing medicine is the recognition of patterns. You need to get the symptoms, physical examination and lab results and review the data to recognize the big picture called a diagnosis. The only difference between residents and specialists is the speed at which they arrive at the correct diagnosis. The specialists mostly get there quicker. [...]
I wrote yesterday about the difficulty there is in grouping patients so that the right treatment is given to the right person at the right time. Today’s post coincidentally follows a similar line – two screening tools that discriminate between ‘high risk’ and ‘low risk’ people with low back pain. The value of [...]
Hill, J., Dunn, K., Main, C., & Hay, E. (2010) Subgrouping low back pain: A comparison of the STarT Back Tool with the Örebro Musculoskeletal Pain Screening Questionnaire. European Journal of Pain, 14(1), 83-89. DOI: 10.1016/j.ejpain.2009.01.003
Every now and again a finding comes along that provides perfect ammunition for psychologists confronted by the tiresome claim that psychology is all 'common sense'. Researchers have found that death-related health warnings on cigarette packs are likely to encourage some people to smoke. The surprising result is actually consistent with terror management theory, according to which thoughts of mortality cause us to cling more strongly to our cultural beliefs and to pursue ego-boosting activities.

Jochim Hansen and colleagues first measured how important smoking was to the self-esteem of 39 student smokers. Example questionnaire items included 'smoking allows me to feel valued by others'. Next, the smokers were divided into two groups: one group looked at two cigarette packs that featured death-related warnings, such as 'Smokers die earlier'. The other group looked at cigarette packs that featured death-neutral warnings, such as 'Smoking makes you unattractive.'

Fifteen minutes later all the students reported their attitudes to smoking; the questionnaire included items such as 'Do you intend to quit smoking?'. Among the students for whom smoking was important to their self-esteem, those who looked at packets with death-related warnings subsequently reported more positive attitudes to smoking compared with those who looked at death-neutral packets. The exact opposite pattern was found for students for whom smoking was not important to their self-esteem.

In other words, for smokers who derive a self-esteem boost from smoking - perhaps they see it as a key part of their identity or they think it makes them look cool - a death-related cigarette packet warning can have the ironic effect of making them want to smoke more, so as to buffer themselves against the depressing reminder of their own mortality. The findings suggest that for these kinds of smokers, packet warnings that target positive beliefs about smoking (e.g. 
'Smoking makes you look unattractive') could well be more effective.

'To succeed with anti-smoking messages on cigarette packs one thus has to take into account that considering death may make some people smoke,' the researchers concluded.

Hansen, J., Winzeler, S., & Topolinski, S. (2010). When the death makes you smoke: A terror management perspective on the effectiveness of cigarette on-pack warnings. Journal of Experimental Social Psychology, 46(1), 226-228. DOI: 10.1016/j.jesp.2009.09.007
As an individual with chronic pain, I know that each person with chronic pain is different from the next, and as a clinician I know that there are few ways to predict who will benefit from what treatment – but it’s like a Holy Grail to find a way to group together people who will [...]
Verra, M.L., Angst, F., Brioschi, R., Lehmann, S., Keefe, F.J., Staal, J.B., de Bie, R.A., & Aeschlimann, A. (2009) Does classification of persons with fibromyalgia into Multidimensional Pain Inventory subgroups detect differences in outcome after a standard chronic pain management program? Pain Research & Management, 14(6), 445. info:/1929024711
Humans adapt their mating strategies according to what they think their chances are. For example, when there are more men than women, people marry earlier and divorce less. When there are more women, the opposite applies. The supposition is that this is because, when women are in a 'buyer's market', they are more able to demand fidelity. What's more, when women are shown an array of attractive, promiscuous women, they're more likely to reject the notion that casual sex is OK.

How does religion fit into this? Douglas Kenrick (Arizona State University), who's an expert on human mating strategies, has set out to investigate.

He started with a hypothesis. He suspected that when women face more competition they would also report being more religious. The idea is that being more religious will somehow force prospective partners to be more faithful.

Now, to me that doesn't really make sense. You would expect women in a competitive marketplace to want other people (especially men) to be more religious - but not themselves. If women are in a position to demand fidelity, then men would respond by claiming to be religious (in an attempt to persuade women that they are a good, faithful 'catch'). But if men were in demand, then they could demand less 'religiously virtuous' relationships.

But that's really a moot point, because Kenrick didn't find what he expected to.

What he did was to show students pictures of six attractive men or six attractive women (the ruse was that they were helping improve a fictional dating service). He then asked them about their religiosity. It turned out that neither men nor women said they were more religious after seeing pictures of the opposite sex. But both men and women reported being more religious after seeing pictures of the same sex.

So, when you remind people of the competition, they get to thinking that the mating odds are stacked against them. 
And they respond (at least, these US students respond) by claiming to be more religious.

It's a puzzling result if you start from the assumption that people assert their religiosity in order to advertise their sexual fidelity. Why on earth should women claim to be religious, when that might make them less attractive to potential mates?

So if these responses aren't about advertising fidelity, what can explain them? I think it's simple. In the USA, religion is a social norm. Atheists are outsiders. So, if you want to make yourself look attractive, then you claim to be religious. Indeed, there's some great evidence that that's exactly what happens - which is the topic of the next blog post.

Li, Y., Cohen, A., Weeden, J., & Kenrick, D. (2009). Mating competitors increase religious beliefs. Journal of Experimental Social Psychology. DOI: 10.1016/j.jesp.2009.10.017

This article by Tom Rees was first published on Epiphenom. It is licensed under Creative Commons.
We tend to assume that we see our surroundings as they really are, and that our perception of reality is accurate. In fact, what we perceive is merely a neural representation of the world, the brain's best guess of its environment, based on a very limited amount of available information. This is perhaps best demonstrated by visual illusions, in which there is a mismatch between our perception of the stimulus and objective reality.
Even when looking at everyday objects, our perceptions can be deceiving. According to the New Look approach, first propounded in the 1940s by the influential cognitive psychologist Jerome Bruner, perception is largely a constructive process influenced by our needs and values. Recent research has provided some evidence for this: in 2006, psychologists Emily Balcetis and David Dunning, then at Cornell University, reported that an ambiguous figure tended to be interpreted according to the self-interest of the perceiver. They now show that the desirability of an object influences its perceived distance.
Balcetis, E., & Dunning, D. (2009) Wishful Seeing: More Desired Objects Are Seen as Closer. Psychological Science. DOI: 10.1177/0956797609356283
Nature kicks off the 2010s with an editorial pep-talk for psychiatry, A decade for psychiatric disorders:

"New techniques — genome-wide association studies, imaging and the optical manipulation of neural circuits — are ushering in an era in which the neural circuitry underlying cognitive dysfunctions, for example, will be delineated... Whether for schizophrenia, depression, autism or any other psychiatric disorders, it is clear... that understanding of these conditions is entering a scientific phase more penetratingly insightful than has hitherto been possible."

But I don't feel too peppy. The 2010s is not the decade for psychiatric disorders. Clinically, that decade was the 1950s. The 50s was when the first generation of psychiatric drugs were discovered - neuroleptics for psychosis (1952), MAOIs (1952) and tricyclics (1957) for depression, and lithium for mania (1949, although it took a while to catch on).

Since then, there have been plenty of new drugs invented, but not a single one has proven more effective than those available in 1959. New antidepressants like Prozac are safer in overdose, and have milder side effects, than older ones. New "atypical" antipsychotics have different side effects to older ones. But they work no better. Compared to lithium, newer "mood stabilizers" probably aren't even as good. (The only exception is clozapine, a powerful antipsychotic, but dangerous side effects limit its use.)

Scientifically, the 1960s were the decade of psychiatry. We learned that antipsychotics block dopamine receptors in the brain, and that antidepressants inhibit the reuptake or breakdown of monoamines: noradrenaline and serotonin. So it was natural, if unimaginative, to hypothesise that psychosis is caused by "too much dopamine", and that depression is a case of "not enough monoamines". (As for lithium, we still don't know how it works. Two out of three ain't bad.)

These are still the core dogmas of biological psychiatry. 
Since the 60s, the amount of money and the number of people involved in the field have exploded, but today's research is still essentially making footnotes to the work done 30 or 40 years ago. It would be somewhat unfair to say that we haven't learned anything since then, but only somewhat.

The double helix structure of DNA was worked out in 1953, around the same time as antipsychotics and antidepressants. Imagine if biologists had learned about the double helix, but instead of using it to understand genetics, or catch criminals, or sequence genomes, they spent 50 years arguing about whether all DNA was shaped like that, or only some of it.

The standard response to the charge that psychiatry has lagged behind the rest of medicine is that "it's hard". And it is, because it's about human life, which is complex. But so is the subject matter of every science: the whole point is to seek simplicity in the complexity. Genetics was hard, until we worked out how to do it.

What's remarkable is that so many things in psychiatry are simple. For example: any drug which blocks the dopamine transporter (DAT) in the brain has stimulant effects: increased energy, focus, and motivation, and at high doses, euphoria, grandiosity, and potentially addiction. Cocaine, amphetamine, Ritalin etc. all work this way. There are no cocaine-like drugs that don't block DAT and no DAT inhibitors that aren't cocaine-like. Simple. The stimulant high looks strikingly like the mania seen in bipolar disorder, and is pretty much the exact opposite of what happens in clinical depression. Couldn't be easier.

There are plenty of cases just like this. What's also striking is that neuroscience has advanced in leaps and bounds since the 1960s. A 60s, or even a 90s, textbook about neuroscience looks incredibly dated - a 60s psychiatry textbook is essentially still up to date except for the drug names. 
Contemporary neuroscience is far from being a mature science like genetics, and it has its problems (references: my blog), but compared to psychiatry, "basic" neuroscience is rock-solid. Although I trained as a basic neuroscientist, so I would say that.

Why? That's an excellent question. But if you ask me, and judging by the academic literature I'm not alone, the answer is: diagnosis. The weak link in psychiatry research is the diagnoses we are forced to use: "major depressive disorder", "schizophrenia", etc.

Basic neuroscientists don't use these. If a neuroscientist wants to study the effect of, say, pepperoni pizza on the human caudate nucleus, they can order a Dominos, recruit their friends as research subjects, pop them in an MRI scanner and get to work doing rigorous (and delicious) science. They've got the pepperoni pizza, they've got the human caudate nucleus - away they go.

Whereas in order to do research in psychiatry, you need patients, and to decide who's a patient and who isn't you basically have to use DSM-IV criteria, which are all but meaningless in most cases. It doesn't matter what amazing new scientific tools you have - genome-wide association studies, proteomics, brain imaging, whatever. If you're using them to study differences between "depressed people" and "normal people", and your "depressed people" are a mix of people who aren't ill and just need a holiday or a divorce, undiagnosed thyroid cases, local bums lying about being depressed to get paid for being in the study, and (if you're lucky) a few "really" clinically depressed people, you'll not get very far.

Nature (2010). A decade for psychiatric disorders. Nature, 463(7277), 9-9. DOI: 10.1038/463009a
Has anyone read the book Freakonomics? I have. And by "have," I mean that I read the first page of the table of contents (1). What I learned from that brief, yet informative passage is that "conventional wisdom is so often wrong."

Here's an example. "Lithium carbonate and valproate semisodium are both recommended as monotherapy for prevention of relapse in bipolar disorder, but are not individually fully effective in many patients. If combination therapy with both agents is better than monotherapy, many relapses and consequent disability could be avoided. We aimed to establish whether lithium plus valproate was better than monotherapy with either drug alone for relapse prevention in bipolar I disorder" (2).

For reasons of brevity, articles are worded so that certain assumptions are implied (implicit), while the main aim of the article can be stated explicitly. What is the implicit assumption in this introduction? It's this: valproate (Depakote) and lithium are reasonably effective maintenance therapies. How do we know this? Because both drugs are recommended as monotherapy for the prevention of relapse in bipolar disorder.

Here is where it gets interesting (or pathetically sad). Lithium has over four decades of research supporting its efficacy. If we define a mood stabilizer as a drug that treats acute mania, treats acute depression, and prevents relapse into either mood episode, then lithium is the only drug on the market that meets those criteria (3). Valproate, on the other hand, has evidence to support its efficacy as an anti-manic agent. It meets only 1 out of 3 criteria for a mood stabilizer.

"Then why is it recommended as a maintenance treatment?" Because of this study (4), which found that "divalproex...did not differ significantly from the placebo group in time to any mood episode." If you are exceedingly sharp, you'll notice that it's a negative study. Yet valproate has managed to become a recommended monotherapy. 
To read more about this, check out this post (5).

This article, released online ahead of print, is known as the BALANCE study. (BALANCE is a backronym that stands for Bipolar Affective disorder: Lithium/ANticonvulsant Evaluation.) Here is the saddest fact of this study: most of the mental effort that went into it was spent creating the backronym. It goes downhill after that.

Here are the results: "For people with bipolar I disorder, for whom long-term therapy is clinically indicated, both combination therapy with lithium plus valproate and lithium monotherapy are more likely to prevent relapse than is valproate monotherapy. This benefit seems to be irrespective of baseline severity of illness and is maintained for up to 2 years. BALANCE could neither reliably confirm nor refute a benefit of combination therapy compared with lithium monotherapy."

In other words, lithium monotherapy or lithium with valproate adjunctive therapy is more effective at preventing relapse than valproate alone. The difference between lithium and the combination treatment was not statistically significant.

Here is where it gets really sad (6): "Welcome back lithium. After losing its luster because of concerns over potentially serious adverse effects, this drug is drawing increasing respect...This study, along with other recent research, goes a long way toward putting lithium back on top as the preferred treatment for bipolar disorder, said lead study author John R. Geddes, MD...We’ve got more evidence purporting the lithium efficacy, safety, and its antisuicidal effects than we’ve ever had before," Dr. Geddes told Medscape Psychiatry. "So don’t throw lithium away; it’s a highly effective treatment, and if people can tolerate it, then it’s worth trying."

"Don't throw lithium away!?" Exactly what study suggested that? 
Some of you might be thinking that atypicals have replaced lithium, since they too are effective as anti-manic and maintenance treatments, but lithium's efficacy was compared to valproate, not an atypical. In other words, lithium was more effective than a drug that is no more effective than placebo. Why is this a major finding? Why was this study done?

"Although the study could not confirm a benefit of the valproate-lithium combination therapy over lithium alone, its findings should challenge current clinical guidelines that recommend valproate monotherapy as a first-line option for long-term treatment of bipolar disorder." There is one study, ONE!, on maintenance treatment. It's NEGATIVE! That alone should have prevented valproate from becoming a first-line option.

Here is a special kind of stupid: "In an accompanying editorial (7), Rasmus W. Licht, MD, Mood Disorders Research Unit, Aarhus University Hospital, Risskov, Denmark, praised the BALANCE study, describing it as 'outstanding work' and 'an impressive example of international collaboration.' He said that even without a placebo group*, the study 'confirms the long-term efficacy of lithium, not only for the prevention of mania but also for prevention of depression.' On the basis of the study’s results, 'the BALANCE group rightly challenges the recommendation by present clinical guidelines that valproate monotherapy is a first-line option for long-term treatment.'"

Make sure you read the above carefully. I highlighted the parts that celebrate acts of stupidity. This "outstanding work" took an "international collaboration" to "confirm the long-term efficacy of lithium," which "rightly challenges" clinical guidelines.

Lithium has been the most empirically supported bipolar drug to date. It's the only drug that meets all three defining criteria for a mood stabilizer. Valproate has proven efficacy as an anti-manic only. This study, along with the accompanying editorial and subsequent press releases, should not exist. 
This is just plain fucking stupid!

A few years ago, articles based on data that had been around for 20 years stated that antidepressants were not as effective as initially claimed. Last year, research showed that vaccines didn't cause autism (even though no research showed that they did). Now, research is showing us that lithium is effective (never disputed) when compared to a drug that was never shown to be effective. This is science, telling us what we should already know!

* Just as a side note: the press releases for this study (8) are pushing the combination treatment as the preferred method of treatment. Here is my problem with that. First, I don't interpret these results as supporting polypharmacy as superior. Although there was a trend for the combination treatment over lithium alone, the difference was not statistically significant. Second, since valproate never had proven efficacy, I view it as an "active placebo," which could also explain the better performance of the combination treatment. Sadly, the damage is done.
The BALANCE investigators and collaborators. (2009) Lithium plus valproate combination therapy versus monotherapy for relapse prevention in bipolar I disorder (BALANCE): a randomised open-label trial. The Lancet. DOI: 10.1016/S0140-6736(09)61828-6
Survey research consistently shows that people tend to have a poor view of migrants. It's unpalatable but, psychologically speaking, it's no great surprise. After all, the odds are stacked against newcomers: most of us display inherent biases against people who we perceive to be in a different social group from our own - the so-called 'out-group bias' - together with a similar aversion to people who are members of a social minority. Migrants usually fit both these descriptions.

Now Mark Rubin and colleagues have tested a third, even more elemental reason for prejudice against migrants, one that has to do with what's known as 'cognitive fluency'. People generally favour things that they find easy to process, as demonstrated, for example, by their preference for investing in companies with easy-to-pronounce names and their fear of chemicals with gobbledygook labels. Rubin and his colleagues argue that, in a purely abstract way, there's something cognitively awkward about the notion of migrants, and this mental difficulty biases us against them. 'An Algerian who has moved to the United States would be more difficult to process than an Algerian who is living in Algeria,' they wrote.

The researchers recruited hundreds of students to perform various thought experiments. The students imagined a group of people in a room, and that this first group was divided arbitrarily into two smaller groups, A and B, with a minority of each group then sent to the other group. The group swappers were the 'migrants'. The researchers balanced out the effects of out-group and minority bias by asking the participants to imagine they were themselves either in the migrating group, the control group, or not involved. 
They next asked the students to rate the character of a typical control group member (one who stayed in his or her original group) and a typical migrant (who'd swapped groups), and then they asked the students to rate how easy they'd found it to think about members of the different groups. Students who guessed the purpose of the study were excluded from further analysis.

The key result: despite the abstract nature of the task, the students rated migrating group members more negatively than control group members, and this was partly because they'd found it more difficult to think about the migrants compared with the control members. This effect also worked backwards: there was some evidence that the students found it more difficult to think about migrating group members because they'd rated them more negatively. A second study showed that group members who were excluded from their original group, rather than swapped to another group, were also rated negatively and described as awkward to think about.

The researchers said their findings showed that prejudice against migrants can partly be explained by the cognitive awkwardness of thinking about a person who lives in one place but hails from another. 'An obvious next step in this line of research is to investigate the influence of processing fluency on evaluations of migrants in the real world,' the researchers said.

Rubin, M., Paolini, S., & Crisp, R. (2010). A processing fluency explanation of bias against migrants. Journal of Experimental Social Psychology, 46(1), 21-28. DOI: 10.1016/j.jesp.2009.09.006
It's football season in America: The NFL playoffs are about to start, and tonight, the elected / computer-ranked top college team will be determined. What better time than now to think about ... baseball! Baseball players, unlike most football players, must solve one of the most complicated perceptual puzzles in sports: how to predict the path of a moving target obeying the laws of physics, and move to intercept it.
The question of how a baseball player knows where to run in order to catch a fly ball has baffled psychologists for decades. (You might argue that a football receiver faces a similar task, but generally in football, the distances involved are much shorter, and most football players aren't expected to catch passes at all.)
There are three primary possible explanations for how a baseball fielder catches a fly ball:
Trajectory Projection (TP): The fielder calculates the trajectory of a ball the moment it is hit and simply runs to the spot where it will fall (of course, taking into account wind speed and barometric pressure).
Optical acceleration cancellation (OAC): The fielder watches the flight of the ball; constantly adjusting her position in response to what she sees. If it appears to be accelerating upward, she moves back. If it seems to be accelerating downward, she moves forward.
Linear optical trajectory (LOT): The fielder pays attention to the apparent angle formed by the ball, the point on the ground beneath the ball, and home plate, moving to keep this angle constant until she reaches the ball. In other words, she tries to move so that the ball appears to be moving in a straight line rather than a parabola.
In principle, all three of these systems should work. However, TP is probably impossible; our visual system isn't accurate at determining distances beyond about 30 meters, and outfielders stand up to 100 meters away from home plate. The second system, OAC, might not work because the visual system isn't actually very sensitive to acceleration. And the third system, LOT, is problematic because it doesn't predict a unique path for the fielder to take to the ball. Further, the most likely paths a fielder would take to catch a ball wouldn't be much different under OAC and LOT.
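Why should OAC work at all? The classic geometric analysis (usually credited to Seville Chapman's 1968 treatment of the outfielder problem) shows that, for a drag-free fly ball, an observer standing exactly at the landing point sees the tangent of the ball's elevation angle grow at a perfectly constant rate; a fielder who moves so as to keep that rate constant therefore converges on the landing spot. A quick numerical sanity check of that claim (the launch speeds below are invented for illustration, and real fly balls have air drag):

```python
# Check: viewed from the landing point, tan(elevation angle) of a
# drag-free fly ball rises at a constant rate -- the signal OAC exploits.

g = 9.81               # gravity, m/s^2
vx, vy = 20.0, 25.0    # horizontal / vertical launch speed, m/s (made up)

T = 2 * vy / g         # total flight time
landing_x = vx * T     # where the ball comes down

def tan_elevation(t):
    """Tangent of the ball's elevation angle seen from the landing point."""
    x = vx * t                     # ball's horizontal position
    y = vy * t - 0.5 * g * t * t   # ball's height
    return y / (landing_x - x)     # opposite over adjacent

# Sample tan(alpha) at equal time steps (avoiding t = T, where the ball
# is overhead and the denominator vanishes).
ts = [T * i / 10 for i in range(1, 10)]
tans = [tan_elevation(t) for t in ts]
steps = [b - a for a, b in zip(tans, tans[1:])]

# Equal increments at equal time steps => constant rate of change.
print(all(abs(s - steps[0]) < 1e-9 for s in steps))  # → True
```

Run the same check from any fixed point other than the landing spot and the rate drifts up or down; that drift is exactly the error signal an OAC fielder is hypothesised to cancel by moving forward or back.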
But Philip Fink, Patrick Foo, and William Warren figured out a way to experimentally distinguish between all three models. They had 8 skilled male baseball players and 4 skilled female softball players don VR headsets and attempt to catch virtual balls in a large room. The room was big enough that they could freely move 6 meters in each direction. VR was necessary because the researchers made their virtual balls take paths that aren't possible in real life.
Fink, P.W., Foo, P.S., & Warren, W.H. (2009) Catching fly balls in virtual reality: A critical test of the outfielder problem. Journal of Vision, 9(13), 1-8. DOI: 10.1167/9.13.14
A recent meta-analysis examined the relationship between various Internet uses and well-being. The studies published until now are mostly about the debate over whether using the Internet for communication, such as e-mail, replaces other forms of communication such as the phone, chat or face-to-face contact. Contact through e-mail, Facebook, Twitter and the like replaces real [...]
Huang, C. (2009) Internet Use and Psychological Well-being: A Meta-Analysis. CyberPsychology & Behavior. DOI: 10.1089/cpb.2009.0217
Atonement is a funny concept. Essentially, it's the idea that you can cancel out a wrongdoing not by doing a good deed, but by engaging in some act of self-punishment. Although the classic example comes from Christianity (the tortured death of Jesus), similar concepts of penance are widespread in other religions. Penance goes beyond the more familiar concepts of justice (revenge and punishment) because it's voluntary.

Perhaps there's more going on here than meets the eye. Rob Nelissen and Marcel Zeelenberg at Tilburg University in The Netherlands speculate that people might indulge in self-punishment because it makes them feel better. They set out to test whether people self-punish when they are made to feel guilty, but only if they can't make good the wrongdoing directly.

The basic idea was that subjects had to perform a test that they were told was a measure of how hard they concentrated. As usual, it was no such thing: whether the subjects succeeded or failed was entirely manipulated by the investigators (why do the subjects fall for this every time, I wonder!). They were paired up in this game with another player (OK, so the player was fictitious too, just there to help manipulate their guilt). Basically, the deal was that some of the subjects were made to feel that they had underperformed in the second round of the game, so that they had let the other player down.

In the third round, they were given the opportunity to self-punish. Instead of just receiving points for correct answers (as in the previous rounds), now they would get points taken away for wrong ones. The key to the experiment was that some participants chose the level of their own punishment, while others got to choose the level of their partner's punishment.

The graph sums up the results nicely. In the control condition, there was no guilt, and the levels of self-punishment and partner punishment were similarly low. In the guilty condition, there was no change in the partner punishment.
But there was a large increase in self-punishment. It seems likely that this self-punishment only takes place when there is no opportunity to right the wrong. In another experiment, they asked people to envisage a variety of scenarios about borrowing money for college from their parents and then goofing off. They were then given some options on what to do next. Some students were presented with a scenario in which they had no opportunity to make up for their wrongdoing by working harder. These students were more likely to choose the self-punishment course (denying themselves the pleasure of a skiing trip).

So, there you go. Do the religious ideas of penance and atonement result from a subliminal need to self-punish? And, if they do, what could possibly be the function of it (from a biological/evolutionary perspective)?

__________________________________________________________________________

Nelissen RM, & Zeelenberg M (2009). When guilt evokes self-punishment: evidence for the existence of a Dobby Effect. Emotion (Washington, D.C.), 9(1), 118-22. PMID: 19186924. This article by Tom Rees was first published on Epiphenom. It is licensed under Creative Commons.
Nelissen RM, & Zeelenberg M. (2009) When guilt evokes self-punishment: evidence for the existence of a Dobby Effect. Emotion (Washington, D.C.), 9(1), 118-22. PMID: 19186924
Late last year, Science published a bombshell: Lombardi et al's Detection of an infectious retrovirus, XMRV, in blood cells of patients with chronic fatigue syndrome. This paper reported the presence of a recently-discovered virus in 67% of the blood samples from 101 people with chronic fatigue syndrome (CFS).

The question of whether people with CFS are suffering from an organic illness, or whether their condition is partially or entirely psychological in nature, is the Israel vs. Palestine of modern medicine, as a brief look at the Wikipedia talk pages will show. So when Lombardi et al linked CFS to xenotropic murine leukaemia virus-related virus (XMRV), they were hailed as heroes by some, less so by others. For some balanced coverage of this paper, see virology blog. Everyone agreed, though, that Lombardi et al was, as the saying goes, "important if true"...

But it wasn't, at least not everywhere, according to a paper out today in PLoS ONE: Erlwein et al's Failure to Detect the Novel Retrovirus XMRV in Chronic Fatigue Syndrome. The findings are all there in the title: unlike Lombardi et al, these researchers didn't find XMRV in any of their blood samples from 186 CFS patients. Still, before people start proclaiming that the original finding has been "debunked", or decrying these results as flawed, some things to bear in mind...

This was a different country. Erlwein et al used patients attending the CFS clinic at King's College Hospital, London, England. The patients in the original study were drawn from various parts of the USA. So the new results don't mean that the original findings were wrong, merely that they don't apply everywhere. Notably, XMRV has previously been detected in prostate cancer cells from American patients, but not European ones, so geographic differences seem to be at work.
So maybe XMRV does cause CFS, and the virus simply doesn't exist in Europe, for whatever reason. But bear in mind that even the original study never showed causation, only a correlation; there are many viruses that infect people in certain parts of the world and don't cause illness.

On the other hand, it was a similar group of patients in terms of symptoms. Diagnosing CFS can be difficult, as there are no biological tests to confirm the condition, but Erlwein et al say that: Both studies use the widely accepted 1994 clinical case definition of CFS. Lombardi et al. reported that their cases ''presented with severe disability'' and we provide quantifiable evidence confirming high levels of disability in our subjects. Our subjects were also typical of those seen in secondary and tertiary care in other centres.

But the first study selected patients with "immunological abnormalities", although we're given few details: These are patients that have been seen in private medical practices, and their diagnosis of CFS is based upon prolonged disabling fatigue and the presence of cognitive deficits and reproducible immunological abnormalities. These included but were not limited to perturbations of the 2-5A synthetase/RNase L antiviral pathway, low natural killer cell cytotoxicity (as measured by standard diagnostic assays), and elevated cytokines, particularly interleukin-6 and interleukin-8.

The biological methods were similar. Both studies used a standard technique called nested PCR. (Lombardi et al also used various other methods, but their headline finding of XMRV in 67% of CFS patients vs just 4% of healthy people came from nested PCR.) PCR is a way of greatly increasing the amount of a certain sequence of DNA in a sample. If there's even a little bit to start with, you end up with lots. If there's none, you end up with none. It's easy to tell the difference between lots and none.

But there were some differences.
The first study only looked at a certain kind of white blood cell, whereas the new study used DNA from whole blood. Also, the first study targeted a larger span of viral DNA, from position 419 to 1154 ("For identification of gag, 419F and 1154R were used as forward and reverse primers"), than the second one, which examined the section between positions 411 and 606. As a result, the primer sequences used - which determine the DNA detected - were different. However, the authors of the new study claim that they would definitely have detected XMRV DNA if it had been there, because they used the same methods on control samples with the virus added, and got positive results: The positive control was a dilution of a plasmid with a full-length XMRV (isolate VP62) insert, generously gifted by Dr R. Silverman.

Silverman was one of the authors of the original paper, so hopefully both research teams were studying the same virus. But (although I'm no virologist) it seems possible that the new study might have been unable to detect XMRV if the DNA sequence of the virus from British patients differed in certain key ways; the whole point about nested PCR is that it's extremely specific.

Finally, there are stories behind these papers. The first study, which suggested that XMRV causes CFS, was conducted by the Whittemore Peterson Institute, who firmly believe that CFS is an organic disorder and who are now offering XMRV diagnostic tests to CFS patients. By contrast, the authors of the new study include Simon Wessely, a psychiatrist. Wessely is the most famous (or notorious) advocate of the idea that psychological factors are the key to CFS; he believes that it should be treated with psychotherapy. I'm sure we'll be hearing a lot more about XMRV in the coming months, so stay tuned.
Erlwein, O., Kaye, S., McClure, M., Weber, J., Wills, G., Collier, D., Wessely, S., & Cleare, A. (2010) Failure to Detect the Novel Retrovirus XMRV in Chronic Fatigue Syndrome. PLoS ONE, 5(1). DOI: 10.1371/journal.pone.0008519
Lombardi VC, Ruscetti FW, Das Gupta J, Pfost MA, Hagen KS, Peterson DL, Ruscetti SK, Bagni RK, Petrow-Sadowski C, Gold B.... (2009) Detection of an infectious retrovirus, XMRV, in blood cells of patients with chronic fatigue syndrome. Science (New York, N.Y.), 326(5952), 585-9. PMID: 19815723
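The primer specificity that makes PCR both powerful and fragile can be illustrated with a toy in-silico check. The sequences below are invented for illustration only; they are not the real XMRV gag primers from either study. The idea is simply that an amplicon is predicted only when both primers find an exact match in the template, so even a small sequence difference between viral strains can make an assay come up empty.

```python
# Toy in-silico PCR: predict an amplicon only if both primers match the
# template exactly. The sequences below are invented for illustration -
# they are NOT the real XMRV gag primers used in either study.

COMPLEMENT = str.maketrans("ACGT", "TGCA")

def revcomp(seq):
    """Reverse complement of a DNA sequence."""
    return seq.translate(COMPLEMENT)[::-1]

def predict_amplicon(template, fwd_primer, rev_primer):
    """Return the amplified region, or None if either primer fails to
    find a perfect binding site (mimicking PCR's sequence specificity)."""
    start = template.find(fwd_primer)
    if start == -1:
        return None
    # the reverse primer anneals to the opposite strand, so we search
    # for its reverse complement downstream of the forward primer site
    rev_site = revcomp(rev_primer)
    end = template.find(rev_site, start + len(fwd_primer))
    if end == -1:
        return None
    return template[start:end + len(rev_site)]

template = "GGACTTTTTGGAGTGGCTTTGTTGGGGGACGAGAGACACCTAAA"
fwd = "GGACTTTTTGG"
rev = revcomp("GACGAGAGACACC")  # primer written 5'->3' on the opposite strand
```

`predict_amplicon(template, fwd, rev)` returns the stretch between (and including) the two primer sites; change a single base in either binding site and it returns `None` - a caricature of how a British XMRV variant with a slightly different sequence could, in principle, evade primers designed against the American isolate.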
The recent film adaptation of Maurice Sendak's Where the Wild Things Are prompted much debate about whether it's appropriate to subject children to material which they could find frightening. It's rather topical, then, that a new research paper has looked at young children's understanding of fear reduction strategies, finding them to be more precocious than previously realised.

Liat Sayfan and Kirsten Lagattuta presented 48 children aged between 4 and 7 years with picture-based short stories. The children were asked to imagine that they were the central character. The stories involved the child, either alone or with a companion, catching sight of a possible threat: either what could be a dangerous creature, such as a bear, or what might be an imaginary frightening creature, such as a ghost. The pictures were drawn such that the presence or absence of the threats was ambiguous.

Even the youngest children recognised that people differ in how vulnerable they are to fear, seeing adults as less prone than children and men as less prone than women. The girls were more sensitive to these differences than the boys. Another gender difference was that, at all ages, the girls tended to propose more avoidant fear reduction strategies, such as running and hiding, compared with the boys' suggestions of more aggressive strategies, including going on the attack.

Surprisingly perhaps, children at all ages suggested that the story characters could use psychological (e.g. 'imagine that my mummy is there') as well as behavioural (e.g. 'go to my room') strategies to overcome their fears, although this tendency did increase with age. Another developmental change was that the older children proposed more 'reality affirming' strategies (e.g. 'I can remember that ghosts aren't real') whereas the four- and five-year-olds proposed more so-called 'positive pretense' strategies (e.g. 'I'll use a sword to fight the dragon').

'These data advance current knowledge about the development of children's understanding of mind, emotion, and coping during childhood,' the researchers said.

_________________________________

Sayfan L, & Lagattuta KH (2009). Scaring the monster away: what children know about managing fears of real and imaginary creatures. Child development, 80(6), 1756-74. PMID: 19930350
Sayfan L, & Lagattuta KH. (2009) Scaring the monster away: what children know about managing fears of real and imaginary creatures. Child development, 80(6), 1756-74. PMID: 19930350
How does popularity affect how we judge music? We tend to say we like what other people like. No-one wants to stand out and risk ridicule by saying they don't enjoy universally loved bands, like The Beatles... unless they're trying to fit into a subculture where everyone hates The Beatles. But do people just pretend to like what others like, or can perceived popularity actually change musical preferences? Do The Beatles actually sound better because we know everyone loves them?

An amusing NeuroImage study from Berns et al aimed to answer this question with the help of 27 American teens, an fMRI scanner, and MySpace. The teens were played 15 second clips of music, and had to rate each one on a 5-star scale of quality. Before the experiment they listed their preferred musical genres, and they were only given music from genres they liked. To make sure no-one had heard the songs before, the researchers went on MySpace and found unsigned artists: A total of 20 songs were downloaded in each of the following genres: Rock, Country, Alternative/Emo/Indie, Hip-Hop/Rap, Jazz/Blues, and Metal (identified by the MySpace category).

The twist was that each song was played twice: the first time with no information about its popularity, and then again, either with or without a 5-star popularity score shown on the screen. Cleverly, this was based on the number of MySpace downloads. This meant that the subjects had a chance to change their rating based on what they'd just learned about the song's popularity.

What happened? Compared to doing nothing, hearing music activated large chunks of the brain, which is not very surprising. In some areas, activity correlated with how highly the listener rated the song: The regions showing activity correlated with likability were largely distinct from the auditory network and were restricted to bilateral caudate nuclei, and right lateral prefrontal cortices (middle and inferior gyri).
Negative correlations with likability were observed in bilateral supramarginal gyri, left insula, and several small frontal regions.

The headline result is that a song's popularity did not correlate with activity in this "liking music network", and nor did activity in these areas correlate with each teen's individual "conformism" score, i.e. how willing they were to change their ratings in response to learning about the song's popularity. Berns et al interpreted this as meaning that, in this experiment, popularity did not affect whether the volunteers really enjoyed the songs or not. Instead, activity in other areas was associated with conformism: we found a positive interaction in bilateral anterior insula, ACC/SMA, and frontal poles. Given the known roles of the anterior insula and ACC in the cortical pain matrix, this suggests that feelings of anxiety accompanied the act of conforming.... Interestingly, the negative interaction revealed significant differences in the middle temporal gyrus... the popularity sensitive individuals showed significantly less activation. This suggests that sensitivity to popularity is also linked to less active listening.

This paper is a good example of using neuroimaging data to try to test psychological theories, in this case the theory that social pressure influences musical enjoyment. This makes it better than many fMRI studies because, as I have warned, without a theory to test it's all too easy to just make up a psychological story to explain any given pattern of neural responses. But there's still an element of this here: the authors suggest that conformism is motivated by anxiety, not because anyone reported suffering anxiety, but purely because it was associated with activity in the anterior insula etc. This is putting a lot of faith in the idea that anterior insula activity means anxiety; it could mean a lot of other things.
There's also the question of whether letting people rate the songs for the first time before telling them about the popularity is the best way of measuring social pressures. The real gaping hole in this study, though, is that we're not told anything about the correlations between music preference and conformism. Are kids who like "Alternative/Emo/Indie" music genuinely free-thinkers, or are they really the biggest conformists of all? The paper doesn't tell us. In the absence of empirical evidence, we'll have to rely on South Park...

Stan: But if life is only pain, then... what's the point of living?
Fringe-flicking Goth: Just to make life more miserable for the conformists. (flicks fringe)
Stan: Alright, so how do I join you?
Goth Leader: If you wanna be one of the non-conformists, all you have to do is dress just like us and listen to the same music we do.
- South Park, "Raisins"
Berns, G., Capra, C., Moore, S., & Noussair, C. (2010) Neural mechanisms of the influence of popularity on adolescent ratings of music. NeuroImage, 49(3), 2687-2696. DOI: 10.1016/j.neuroimage.2009.10.070
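One way to make the "conformism score" idea concrete is to measure how far each re-rating shifts toward the displayed popularity, normalised by how far it could have moved. The scoring rule below is my own illustration of the two-pass design described above, not the formula the authors actually used.

```python
# Hedged sketch of a conformity index for a rate / see-popularity / re-rate
# design. This scoring rule is illustrative, not the paper's actual measure.

def conformity_score(first, second, popularity):
    """Mean signed shift of the re-rating toward the popularity score,
    normalised by the available room to move: 1.0 means the rating always
    moves fully to the popularity score, 0.0 means it never moves, and
    negative values mean it moves away (anti-conformity)."""
    shifts = []
    for r1, r2, pop in zip(first, second, popularity):
        room = pop - r1
        if room == 0:
            continue  # rating already agrees with popularity: no pressure
        shifts.append((r2 - r1) / room)
    return sum(shifts) / len(shifts) if shifts else 0.0
```

For example, a teen who first rates a song 2 stars, sees a 5-star popularity score, and re-rates it 3 stars has moved one third of the available distance; averaging such shifts over all songs gives one plausible per-subject conformism index.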
Rocking out on the guitar is one of my most cherished pastimes. At the angst-ridden age of 15 I picked up a cheap Ibanez strat and learned my very first Nirvana song, "Teen Spirit". Little did I know a good night's rest would play such a crucial role in my learning those simple power chords. Furthermore, who would've thought my desire to become the next grunge icon would determine the rate at which I learned during those quiet nights of sleep? According to a recent study by Fischer and Born, anticipating a reward can determine the amount of memory consolidation that happens during offline processing.
Fischer S, & Born J. (2009) Anticipated reward enhances offline learning during sleep. Journal of experimental psychology. Learning, memory, and cognition, 35(6), 1586-93. PMID: 19857029
The TV show Lie To Me focuses on the exploits of an expert in lie-detection as he solves perplexing crimes in his high-tech Washington laboratory. It's actually fun to watch, especially since it appears to make some effort to get the science right (a real-life expert on lie-detection, Paul Ekman, serves as a science adviser on the show).
One of the show's premises is that only highly-trained experts (most importantly, its protagonist, Cal Lightman) are capable of sniffing out a well-schooled liar. This too is based in fact. Most of us are very bad at spotting liars, taking their seemingly earnest facial expressions as the real thing. Ekman's research, along with that of many others, has shown that it's possible to detect subtle differences between authentic emotional expressions and feigned ones. Since telling a lie invokes its own distinctive emotions, it's possible to see remnants of these emotions by carefully watching a liar in the act of deceit, even when the liar masks his or her true feelings with a feigned emotion.
But what if there were a shortcut to sniffing out a lie, relying on our own instinctual behavior? Would it be possible to improve the lie-detecting abilities of ordinary people without all that training? A team led by Mariëlle Stel had a hunch that our tendency to mimic the physical and facial expressions of the people we are speaking to might help us tell when they are lying.
They recruited 92 volunteers to participate in a very short conversation. The volunteers were paired up randomly, and one person from each pair was randomly assigned to be the truth-teller or liar. This person was asked before meeting the other participant if he or she would like to make a donation to Amnesty International, and then, randomly, told to either tell the truth or lie about it, with a one-euro reward if they could convince the partner they were telling the truth.
A recording of a lecture by Dr. Ani Patel from The Neuroscience Institute in San Diego, including an exposition of why beat induction (and/or synchronizing to a beat) might be special to 'musical animals'.
Patel, A., Iversen, J., Bregman, M., & Schulz, I. (2009) Experimental Evidence for Synchronization to a Musical Beat in a Nonhuman Animal. Current Biology. DOI: 10.1016/j.cub.2009.03.038