According to a new paper, a full half of neuroscience papers that try to do a (very simple) statistical comparison are getting it wrong: Erroneous analyses of interactions in neuroscience: a problem of significance.

Here's the problem. Suppose you want to know whether a certain 'treatment' has an effect on a certain variable. The treatment could be a drug, an environmental change, a genetic variant, whatever. The target population could be animals, humans, brain cells, or anything else.

So you give the treatment to some targets and give a control treatment to others. You measure the outcome variable. You use a t-test of significance to see whether the effect is large enough that it's unlikely to have arisen by chance. You find that it was significant.

That's fine. Then you try a different treatment, and it doesn't cause a significant effect against the control. Does that mean the first treatment was more powerful than the second?

No. It just doesn't. The only way to find that out would be to compare the two treatments directly - and that would be very easy to do, because you have all the data to hand. If you just compare the two treatments to control, you might end up with this scenario: both treatments are very similar, but one (B) is slightly better, so it's significantly different from control, while A isn't. But they're basically the same. It's probably just a fluke that B did slightly better than A. If you compared A and B directly, you'd find they were not significantly different.

An analogy: passing a significance test is like winning a prize. You can only do it if you're much better than the average. But that doesn't mean you're much better than everyone who didn't win the prize, because some of them will have almost been good enough.

Usain Bolt is the fastest man in the world (when he's not false-starting himself out of races). Much faster than me. But he's not much faster than the second fastest man in the world.
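To see how easily this fallacy arises, here's a quick simulation (my own illustration, not from the paper): two treatments with identical true effects are each compared to a control group. Quite often one comparison comes out "significant" while the other doesn't, even though a direct test between the two treatments finds nothing.

```python
# A minimal simulation of the "difference in significance" fallacy.
# Two treatments with the SAME true effect, each compared to control.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, true_effect, n_sims = 20, 0.5, 10_000   # assumed group size and effect (in SD units)

pattern, direct_sig = 0, 0
for _ in range(n_sims):
    control = rng.normal(0.0, 1.0, n)
    treat_a = rng.normal(true_effect, 1.0, n)   # same true effect...
    treat_b = rng.normal(true_effect, 1.0, n)   # ...for both treatments
    p_a = stats.ttest_ind(treat_a, control).pvalue
    p_b = stats.ttest_ind(treat_b, control).pvalue
    p_ab = stats.ttest_ind(treat_a, treat_b).pvalue
    if (p_a < 0.05) != (p_b < 0.05):     # one treatment "works", the other "doesn't"
        pattern += 1
        if p_ab < 0.05:                  # ...but does the direct A vs B test agree?
            direct_sig += 1

print(f"'One significant, one not' in {pattern / n_sims:.1%} of simulations")
print(f"...but A vs B is itself significant in only "
      f"{direct_sig / max(pattern, 1):.1%} of those cases")
```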
Nieuwenhuis S, Forstmann BU, & Wagenmakers EJ. (2011) Erroneous analyses of interactions in neuroscience: a problem of significance. Nature neuroscience, 14(9), 1105-7. PMID: 21878926
Do men and women differ in their cognitive capacities? It's been a popular topic of conversation since as far back as we have records of what people were talking about.

While it's now (almost) generally accepted that men and women are at most only very slightly different in average IQ, there are still a couple of lines of evidence in favor of a gender difference.

First, there's the idea that men are more variable in their intelligence, so there are more very smart men, and also more very stupid ones. This averages out so the mean is the same.

Second, there's the theory that men are on average better at some things, notably "spatial" stuff involving the ability to mentally process shapes, patterns and images, while women are better at social, emotional and perhaps verbal tasks. Again, this averages out overall.

According to proponents, these differences explain why men continue to dominate the upper echelons of things like mathematics, physics, and chess. These all tap spatial processing, and since men are more variable, there'll be more extremely high achievers - Nobel Prizes, grandmasters. (There are also presumably more men who are rubbish at these things, but we don't notice them.)

The male spatial advantage has been reported in many parts of the world, but is it "innate", something to do with the male brain? A new PNAS study says - probably not, it's to do with culture. But I'm not convinced.

The authors went to India and studied two tribes, the Khasi and the Karbi. Both live right next to each other in the hills of Northeastern India and, genetically, they're closely related. Culturally, though, the Karbi are patrilineal - property and status are passed down from father to son, with women owning no land of their own. The Khasi are matrilineal, with men forbidden to own land. Moreover, Khasi women also get just as much education as the men, while Karbi women get much less.

The authors took about 1,200 people from 8 villages - 4 per culture - and got them to do a jigsaw puzzle. The quicker you do it, the better your spatial ability. Here were the results; I added the gender-stereotypical colours.

In the patrilineal group, women did substantially worse on average (remember that more time means worse). In the matrilineal society, they performed as well as men - well, a tiny bit worse, but it wasn't significant. Differences in education explained some of the effect, but only a small part of it.

OK. This was a large study, and the results are statistically very strong. However, there's a curious result that the authors don't discuss in the paper - the matrilineal group just did much better overall. Looking at the men, they were 10 seconds faster in the matrilineal culture. That's nearly as big as the gender difference in the patrilineal group (15 seconds)!

The individual variability was also much higher in the patrilineal society, for both genders.

Now, maybe this is a real effect. Maybe being in a patrilineal society makes everyone less spatially aware, not just women; that seems a bit of a stretch, though.

There's also the problem that this study essentially only has two datapoints. One society is matrilineal and has a low gender difference in visuospatial processing. One is patrilineal and has a high difference. But that's just not enough data to conclude that there's a correlation between the two things, let alone a causal relationship; you would need to study lots of societies to do that.
Personally, I have no idea what drives the difference, but this study is a reminder of how difficult the question is.
Hoffman M, Gneezy U, & List JA. (2011) Nurture affects gender differences in spatial abilities. Proceedings of the National Academy of Sciences of the United States of America. PMID: 21876159
After a period of heavy use, hard disks tend to get 'fragmented'. Data gets written all over random parts of the disk, and it gets inefficient to keep track of it all.
That's why you need to run a defragmentation program occasionally. Ideally, you do this overnight, while you're asleep, so it doesn't stop you from using the computer.
A new paper from some Stanford neuroscientists argues that the function of sleep is to reorganize neural connections - a bit like a disk defrag for the brain - although it's also a bit like compressing files to make more room, and a bit like a system reset: Synaptic plasticity in sleep: learning, homeostasis and disease
The basic idea is simple. While you're awake, you're having experiences, and your brain is forming memories. Memory formation involves a process called long-term potentiation, which is essentially the strengthening of synaptic connections between nerve cells.
Yet if LTP is strengthening synapses, and we're learning all our lives, wouldn't the synapses eventually hit a limit? Couldn't they max out, so that they could never get any stronger?
Worse, the synapses that strengthen during memory are primarily glutamate synapses - and these are dangerous. Glutamate is a common neurotransmitter, and it's even a flavouring, but it's also a toxin.
Too much glutamate damages the very cells that receive the messages. Rather like how sound is useful for communication, but stand next to a pneumatic drill for an hour, and you'll go deaf.
So, if our brains were constantly forming stronger glutamate synapses, we might eventually run into serious problems. This is why we sleep, according to the new paper. Indeed, sleep deprivation is harmful to health, and this theory would explain why.
The authors argue that during deep, dreamless slow-wave sleep (SWS), the brain is essentially removing the "extra" synaptic strength formed during the previous day. But it does so in a way that preserves the memories. A bit like how defragmentation reorganizes the hard disk to increase efficiency, without losing data.
One possible mechanism is 'synaptic scaling'. When some of the inputs onto a given cell become stronger, all of the synapses on that cell could weaken. This would preserve the relative strength of the different inputs while keeping the total inputs constant. It's known that synaptic scaling happens in the brain, although it's not clear whether it has anything to do with sleep.
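As a rough illustration of the idea (my own toy sketch, not a model from the paper), multiplicative scaling can shrink every weight on a cell back to a fixed "budget" after a day of LTP, while leaving the relative pattern of strengths - the memory - untouched:

```python
# Toy sketch of multiplicative synaptic scaling (illustrative only).
import numpy as np

rng = np.random.default_rng(1)
weights = rng.uniform(0.1, 1.0, size=10)   # synaptic weights onto one neuron
budget = weights.sum()                     # the cell's total input "budget"

# Daytime learning: LTP strengthens a few synapses
weights[[2, 5, 7]] *= 1.8

# Sleep: scale all weights down so the total returns to the original budget
scaled = weights * (budget / weights.sum())

print("total input after a day of LTP:   ", round(weights.sum(), 2))
print("total input after sleep's scaling:", round(scaled.sum(), 2))
print("relative strengths preserved:",
      np.allclose(weights / weights.sum(), scaled / scaled.sum()))
```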
There are other theories of the restorative function of sleep, but this one seems pretty plausible. It stands in contrast to the idea that sleep is purely a form of inactivity designed to save energy, rather than being important in itself.
What this paper doesn't explain, and doesn't try to, is dreaming (REM sleep), which is very different to slow-wave sleep. REM is not required for life so long as you get SWS; some animals don't have REM, but they all have SWS, although in some animals only one side of the brain has it at a time.
So it makes sense, but what's the evidence? There's quite a bit - but, it all comes from very simple animals, like flies and fish.
The pictures above show that, in various parts of the brain of the fruit fly, measures of synaptic strength are increased in flies that have been awake for some time, compared to recently rested ones. In general, synapses increase during the wake cycle and then return to baseline during sleep.
There's similar evidence from fish. But the authors admit that no-one has yet shown that the same is true of any mammals - let alone humans.
I'd say that this is important, because the fly brain is literally a million times smaller than ours. Synaptic overgrowth is surely a more serious problem for a fly: they just have fewer neurons to play with. Sleep may have evolved to prune extra connections in primitive brains, and then shifted to playing a very different role in ours.
Wang G, Grone B, Colas D, Appelbaum L, & Mourrain P. (2011) Synaptic plasticity in sleep: learning, homeostasis and disease. Trends in neurosciences. PMID: 21840068
Drugs that could modify or erase memories could soon be possible. We shouldn't rush to judge them unethical, says a Nature opinion piece by Adam Kolber, of the Neuroethics & Law Blog.
The idea of a pill that could make you forget something, or that could modify the emotional charge of a past experience, does seem rather disturbing.
Yet experiments on animals have gone a long way toward revealing the molecular mechanisms behind the formation and maintenance of memory traces. Much of the early work focussed on dangerously toxic drugs, but recently more targeted approaches have appeared.
Kolber argues that we should not shy away from research in this area or brand the whole idea unethical. Rather we should consider the costs and benefits on a case-by-case basis.
The fears about pharmaceutical memory manipulation are overblown. Thoughtful regulation may some day be appropriate but excessive hand-wringing now over the ethics of tampering with memory could stall research into preventing post-traumatic stress in millions of people. Delay could also hinder people who are already debilitated by harrowing memories from being offered the best hope yet of reclaiming their lives.

He says that:

Given the close connection between memory and a sense of self, some bioethicists...worry that giving people too much power to alter their life stories could ultimately weaken their sense of identity and make their lives less genuine. These arguments are not persuasive. Some memories, such as those of rescue workers who clean up scenes of mass destruction, may have no redeeming value. Drugs may speed up the healing process more effectively than counselling, arguably making patients more true to themselves than they would be if a traumatic experience were to dominate their lives.

This is a complex issue. I can see his point, although I'm not sure the rescue worker example is the best one. A rescue worker, at least a professional one, has chosen to do that kind of work. The experiences that are part of that job are ones they decided to have - or at least that they knew were a realistic possibility - and that may be an expression of their identity.
The argument is perhaps more convincing in the case of someone who, quite unexpectedly, suffers an out-of-the-blue trauma. In this case, the trauma has nothing to do with their lives; if it interferes with their ability to function, it might "stop them from being themselves".
Kolber ends by quoting a fascinating story from Time magazine in 2007, which I didn't catch at the time:
Take a scenario recounted by a US doctor in 2007 (ref. 9). The doctor had biopsied a suspected cancer patient and sent a tissue sample to a pathologist while the woman was still in the operating room. Thinking she was completely sedated, the pathologist announced a bleak prognosis over the intercom.
The patient, who had received only local anaesthesia, heard the news and began to shriek, “Oh my God. My kids!” An anaesthesiologist standing by quickly injected her with propofol, a sedative that causes some people to forget what happened a few minutes before they were injected.
When the woman woke up, she had no memory of hearing her prognosis.

Kolber A (2011). Neuroethics: Give memory-altering drugs a chance. Nature, 476(7360), 275-6. PMID: 21850084
Boiron, a multinational pharmaceutical company, have threatened an Italian blogger with legal action, the BMJ reports.
Many people are concerned when big pharmaceutical companies do this kind of thing. So I don't think we should make any exception merely because Boiron's pharmaceuticals happen to be homeopathic ones.
Samuel Riva, who blogs (in Italian) at blogzero.it, put up some articles critical of homeopathy
which included pictures of Boiron's blockbuster homoeopathic product Oscillococcinum, marketed as a remedy against flu symptoms. The pictures were accompanied by captions, which joked about the total absence of any active molecules in homoeopathic preparations.

Boiron wrote to Riva's internet provider threatening legal action if the offending references to Boiron weren't taken down. They also wanted the provider to lock Riva out of his blog, the BMJ says. In response, Riva removed the references to Boiron, including the pictures and captions, but kept the posts on homeopathy in general.
Above you can see a new picture I made of a Boiron product, with some captions you may find interesting. I've made sure to limit these to quotes from Wikipedia, and from Boiron USA's own website, and some simple mathematical calculations.
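For the curious, here is the sort of simple calculation involved, assuming the 200C dilution stated on Oscillococcinum packaging (each "C" step being a 1-in-100 dilution); the arithmetic below is my own back-of-envelope sketch, not Boiron's:

```python
# Back-of-envelope: how many molecules of active ingredient survive a 200C dilution?
import math

AVOGADRO = 6.022e23   # molecules per mole
C_STEPS = 200         # 200C: two hundred successive 1-in-100 dilutions

# Work in log10 to avoid overflow: the dilution factor is 100**200 = 10**400
log10_remaining = math.log10(AVOGADRO) - C_STEPS * math.log10(100)
print(f"Expected molecules left from one mole of starting material: 10^{log10_remaining:.0f}")
# ~10^-376, i.e. effectively zero molecules of the original substance per dose
```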
Beyond that, I make no comment whatsoever.
Turone F. (2011) Homoeopathy multinational Boiron threatens amateur Italian blogger. BMJ (Clinical research ed.). PMID: 21840920
PLoS Medicine offers the confessions of a former medical ghostwriter: Being the Ghost in the Machine.
The article (which is open access and short, so well worth a read) explains how Linda Logdberg became a medical writer; what excited her about the job; what she actually did; and what made her eventually give it up.
Ghostwriting of course has a bad press at the moment, and it's recently been banned by some leading research centres. Ghostwriting certainly is concerning, because of what it implies about the process leading up to publication.
However, it doesn't create bad science. A bad paper is bad because of what it says, not because of who (ghost)wrote it. Real scientists can write bad papers without a ghostwriter's help.
When pharmaceutical companies pay a ghostwriter, they are not doing this to get access to special dark arts that real scientists are innocent of. As far as I can see, it's just more efficient to use a specialist writer to do your scientific sins, when you're doing it all the time.
Rather like every evil sorcerer has an apprentice to do the day-to-day work of sacrificing animals and mixing potions.
My career came to an end over a job involving revising a manuscript supporting the use of a drug for attention deficit-hyperactivity disorder (ADHD), with a duration of action that fell between that of shorter- and longer-acting formulations.
However, I have two children with ADHD, and I failed to see the benefit of a drug that would wear off right at suppertime, rather than a few hours before or a few hours after. Suppertime is a time in ADHD households when tempers and homework arguments are often at their worst.
...Attempts to discuss my misgivings with the [medical] contact met with the curt admonition to "just write it." But perhaps because this particular disorder was so close to home, I was unwilling to turn this ugly duckling of a "me-too" drug into a marketable swan.

Many scientists will recall being in that kind of situation, albeit in a different context.
When writing a grant application, for example, you are almost literally trying to sell your proposed research to the awarding committee, on several levels. You need to sell the importance of the scientific question; the likely practical benefits of the research; the chance of success using your methods; what makes you the right person to do this work, and so on.
Writing a paper is much the same, although in this case you're selling research you've already done, and the data you collected.
Turning ugly ducklings into fundable, or publishable, swans is part and parcel of modern science. Of course, the ducklings are not always as ugly as in the case Logdberg describes, but they are rarely as beautiful as they eventually end up.
Logdberg, L. (2011) Being the Ghost in the Machine: A Medical Ghostwriter's Personal View. PLoS Medicine, 8(8). DOI: 10.1371/journal.pmed.1001071
A news feature in Nature asks whether placebo controls are always a good idea: Why Fake It?
The piece looks at experimental neurosurgical treatments for Parkinson's, such as "Spheramine". This consists of cultured human cells, which are implanted directly into the brain of the sufferer. The idea is that the cells will grow and help produce dopamine, which is deficient in Parkinson's.
Peggy Willocks, a 44-year-old teacher, took part in a trial of the surgery in 2000. She says it helped stave off the symptoms for years, but the development of Spheramine was axed in 2008 after a controlled trial found it didn't work any better than a placebo.
The placebo was "sham surgery" i.e. putting the patient through a full surgical procedure, and making holes in their skull, but without doing anything to their brain.
It's cheap and easy to do a placebo controlled trial of a drug - all you need is a sugar pill. But with neurosurgery, it's clearly a lot more involved. A placebo has to be believable. Convincing sham surgery is expensive, time-consuming, and it has real risks, albeit small ones.
Is it ethical to put patients through that?
That, I think, can only be decided on a trial-by-trial basis. It depends on the likely benefits of the treatment, and whether the trial is scientifically sound. Obviously, it'd be wrong to do sham surgery as part of a flawed trial that won't tell us anything useful.
The Nature article, however, goes further than this, and suggests that placebo controlled trials may be unsuitable for testing these kinds of treatments, failing to detect a real benefit in some patients:
There are hints from some of the failed phase II trials that patients followed up beyond study endpoints might tell a more positive story. Some say, therefore, that sham controls are sinking the prospects of valuable drugs.
Anders Björklund, a neuroscientist at Lund University in Sweden who is collaborating with [Roger Barker of Cambridge], says that sham surgery can lead researchers to throw out a strategy prematurely if the trial fails because of technical or methodological glitches rather than a true lack of efficacy.

A patient advocate agrees:
According to Perry Cohen, who leads a network of patient activists called the Parkinson Pipeline Project, that’s exactly what is happening. He had always questioned the need for sham surgery, he says, but after the string of phase II failures, “We started saying, ‘Hey, this is a problem. These trials failed, but we know they are working for some people.’”...Cohen [says] that patients have different priorities and that researchers must take these into account. Researchers use placebo controls to weed out false positives. But for patients, the real ogre is the false negatives — which can sink a therapy before it has been optimized.

I'm not sure about this. If I had Parkinson's, I would certainly hate to miss out on a genuine cure because a trial had failed to recognize that it worked. But equally, I would not be happy to be given a rubbish treatment that would have failed a placebo controlled trial, but never got one, because of arguments like this.
Placebo controlled trials can fail to detect benefits if they are too short, too small, methodologically flawed, or whatever. Certainly, a trial can be placebo controlled, and still crap. But the answer is surely to do better trials, not no trials.
It may well be that we shouldn't rush to do placebo controlled trials until later in the development process, when the technique has been properly refined. But the history of medicine is littered with treatments that "we know work for some people" - that didn't.
Katsnelson, A. (2011) Experimental therapies for Parkinson's disease: Why fake it?. Nature, 476(7359), 142-144. DOI: 10.1038/476142a
According to a new paper, yours truly is bipolar.
I've written before of my experience of depression, and the fact that I take antidepressants, but I've never been diagnosed with bipolar.
I've taken a few drugs in my time. On certain dopamine-based drugs I got euphoric, filled with energy, talkative, confident, with no need for sleep, and a boundless desire to do stuff, which is textbook hypomania. So I think I know what it feels like, and I can confidently say that it has never happened to me out of the blue.
On antidepressants, I have had some mild experiences of this type. Ironically, the closest I've come to it was when I quit an SSRI antidepressant. I've also experienced periods of irritability and agitation on antidepressants. Either way, that's antidepressants. Bipolar is when you get high on your own supply of neurotransmitters.
Well, it used to be. Jules Angst et al have got some new, broader criteria for "bipolarity" in depression. They say that manic symptoms in response to antidepressants do count, exactly like out-of-the-blue mania.
What's more, under the new "Bipolar Specifier" criteria, there's no minimum duration. Under existing criteria the symptoms have to last 4 or 7 days, depending on severity. Under the new regime if you've ever been irritable, high, agitated or hyperactive, on antidepressants or not, you meet "Bipolar Specifier" criteria, so long as it was marked enough that someone else noticed it.
All you need is:
an episode of elevated mood, an episode of irritable mood, or an episode of increased activity with at least 3 of the symptoms listed under Criterion B of the DSM-IV-TR associated with at least 1 of the 3 following consequences: (1) unequivocal and observable change in functioning uncharacteristic of the person’s usual behavior, (2) marked impairment in social or occupational functioning observable by others, or (3) requiring hospitalization or outpatient treatment.
The bipolar net just got bigger. And they caught me in it. Me and 47% of depressed people in their study. They recruited 509 psychiatrists from around the world, and got each of them to assess between 10 and 20 consecutive adult depressed patients who were referred to them for evaluation or treatment. A total of 5635 patients were included.
Only 16% met existing DSM-IV criteria for bipolar disorder, so the new system with 47% identified an "extra" 31%, trebling the number of bipolar cases.
A cynic would say that this is a breathtaking piece of psychiatric marketing. You give people antidepressants, then you diagnose them with bipolar on the basis of their reaction to those drugs, thus justifying selling them yet more drugs.
The cynic would not be surprised to learn that this study was sponsored by pharmaceutical company Sanofi.
All investigators recruited received fees, on a per patient basis, from sanofi-aventis in recognition of their participation in the study....The sponsor of this study (sanofi-aventis) was involved in the study design, conduct, monitoring, data analysis, and preparation of the report.

In fairness, the authors do show that patients meeting their criteria tend to have characteristics typical of bipolar people. And they show that their system is at least as good as DSM-IV at picking out these cases:
For example, DSM-IV bipolar patients had a younger age of onset than DSM-IV depressed ones. "Bipolar specifier" patients did too, compared to the 53% who didn't meet the criteria. Same for a family history of manic symptoms, multiple episodes, and shorter episodes. All of those are pretty well established correlates of bipolar disorder.
That's fine, and the results are better than I expected when I picked up this paper. But all this shows us is that the bipolar specifier was no worse than the DSM-IV criteria as applied in this study.
It doesn't tell us whether either was any good.
DSM-IV criteria were used in a mechanical cookbook fashion - symptoms were assessed by the psychiatrist, written down, sent back to the study authors, who then diagnosed them if they ticked enough boxes. Is that a good approach? We don't know.
Most importantly, we have no idea whether these people would do better being treated as bipolar rather than as depressed. The difference being that bipolar people get mood stabilizers. Maybe these people would benefit from mood stabilizers, maybe not. Existing literature on mood stabilizers in bipolar people can't be assumed to generalize to these 47%.
In the discussion, the authors argue that antidepressants are not much good in bipolar people, whereas mood stabilizers are. Fun fact: Sanofi make many of the most popular formulations of valproic acid/valproate, a big-selling mood stabilizer.
I think that is no coincidence. Maybe that sounds crazy, but hey, what do you expect? I'm bipolar.
Angst J, Azorin JM, Bowden CL, Perugi G, Vieta E, Gamma A, Young AH, & for the BRIDGE Study Group. (2011) Prevalence and Characteristics of Undiagnosed Bipolar Disorders in Patients With a Major Depressive Episode: The BRIDGE Study. Archives of general psychiatry, 68(8), 791-798. PMID: 21810644
What if there was a drug that didn't just affect the levels of chemicals in your brain, but turned off genes in your brain? That possibility - either exciting or sinister depending on how you look at it - could be remarkably close, according to a report just out from a Spanish group.

The authors took an antidepressant, sertraline, and chemically welded it to a small interfering RNA (siRNA). A siRNA is kind of like a pair of genetic handcuffs: it selectively blocks the expression of a particular gene by binding to and interfering with RNA messengers. In this case, the target was the serotonin 5HT1A receptor.

The authors injected their molecule into the brains of some mice. The sertraline was there to target the siRNA at specific cell types. Sertraline works by binding to and blocking the serotonin transporter (SERT), and this is only expressed on cells that release serotonin; so only these cells were subject to the 5HT1A silencing.

The idea is that this receptor acts as a kind of automatic off-switch for these cells, making them reduce their firing in response to their own output, to keep them from firing too fast. There's a theory that this feedback can be a bad thing, because it stops antidepressants from being able to boost serotonin levels very much, although this is debated.

Anyway, it worked. The treated mice showed a strong and selective reduction in the density of the 5HT1A receptor in the target area (the Raphe nuclei containing serotonin cells), but not in the rest of the brain.

Note that this isn't genetic modification as such. The gene wasn't deleted, it was just silenced - temporarily, one hopes; the effect persisted for at least 3 days, but they didn't investigate just how long it lasted.

That's remarkable enough, but what's more, it also worked when they administered the drug via the intranasal route. In many siRNA experiments, the payload is injected directly into the brain. That's fine for lab mice, but not very practical for humans. Intranasal administration, however, is popular and easy.

So siRNA-sertraline, and who knows what other drugs built along these lines, may be closer to being ready for human consumption than anyone would have predicted. However... the mouse's brain is a lot closer to its nose than the human brain is, so it might not go quite as smoothly.

The mind boggles at the potential. If you could selectively alter the gene expression of selective neurons, you could do things to the brain that are currently impossible. Existing drugs hit the whole brain, yet there are many reasons why you'd prefer to only affect certain areas. And editing gene expression would allow much more detailed control over those cells than is currently possible.

Currently available drugs are shotguns and sledgehammers. These approaches could provide sniper rifles and scalpels. But whether it will prove to be safe remains to be seen. I certainly wouldn't want to be the first one to snort this particular drug.
Bortolozzi, A., Castañé, A., Semakova, J., Santana, N., Alvarado, G., Cortés, R., Ferrés-Coy, A., Fernández, G., Carmona, M., Toth, M.... (2011) Selective siRNA-mediated suppression of 5-HT1A autoreceptors evokes strong anti-depressant-like effects. Molecular Psychiatry. DOI: 10.1038/mp.2011.92
Antipsychotics, originally designed to control the hallucinations and delusions seen in schizophrenia, have been expanding their domain in recent years. Nowadays, they're widely used in bipolar disorder, depression, and, as a new paper reveals, increasingly in anxiety disorders as well.

The authors, Comer et al, looked at the NAMCS survey, which provides yearly data on the use of medications in visits to office-based doctors across the USA.

Back in 1996, just 10% of visits in which an anxiety disorder was diagnosed ended in a prescription for an antipsychotic. By 2007 it was over 20%. No atypical is licensed for use in anxiety disorders in the USA, so all of these prescriptions are off-label.

Not all of these prescriptions will have been for anxiety. They may have been prescribed to treat psychosis, in people who also happened to be anxious. However, the increase was accounted for by the rise in non-psychotic patients, and there was a rise in the rate of people with only anxiety disorders. The increase was driven by the newer, "atypical" antipsychotics.

Whether the modern trend for prescribing antipsychotics for anxiety is a good or a bad thing is not for us to say. The authors discuss various concerns, ranging from the side effects (obesity, diabetes and more) to the fact that there have only been a few clinical trials of these drugs in anxiety.

But what's really disturbing about these results, to me, is how fast the change happened. Between 2000 and 2004, use doubled from 10% to 20% of anxiety visits. That's an astonishingly fast change in medical practice.

Why? It wasn't because that period saw the publication of a load of large, well-designed clinical trials demonstrating that these drugs work wonders in anxiety disorders. It didn't. But as Comer et al put it:

An increasing number of office-based psychiatrists are specializing in pharmacotherapy to the exclusion of psychotherapy. Limitations in the availability of psychosocial interventions may place heavy clinical demands on the pharmacological dimensions of mental health care for anxiety disorder patients.

In other words, antipsychotics may have become popular because they're the treatment for people who can't afford anything better. These data show that antipsychotics were over twice as likely to be prescribed to African American patients; the poor (i.e. patients with public health insurance); and children under 18.
Comer JS, Mojtabai R, & Olfson M. (2011) National Trends in the Antipsychotic Treatment of Psychiatric Outpatients With Anxiety Disorders. The American journal of psychiatry. PMID: 21799067
Brain maturation continues for longer than previously thought - well up until age 30. That's according to two papers just out, which may be comforting for those lamenting the fact that they're nearing the big Three Oh.

This challenges the widespread view that maturation is essentially complete by the end of adolescence, in the early to mid 20s.

Petanjek et al show that the number of dendritic spines in the prefrontal cortex increases during childhood and then rapidly falls during puberty - which probably represents a kind of "pruning" process. That's nothing new, but they also found that the pruning doesn't stop when you hit 20. It continues, albeit gradually, up to 30 and beyond. This study looked at post-mortem brain samples taken from people who died at various different ages.

Lebel and Beaulieu used diffusion MRI to examine healthy living brains. They scanned 103 people, and everyone got at least 2 scans a few years apart, so they could look at changes over time. They found that the fractional anisotropy (a measure of the "integrity") of different white matter tracts varies with age in a non-linear fashion. All tracts become stronger during childhood, and most peak at about 20. Then they start to weaken again. But not all of them - others, such as the cingulum, take longer to mature. Also, total white matter volume continues rising well up to age 30.

Plus, there's a lot of individual variability. Some people's brains were still maturing well into their late 20s, even in white matter tracts that on average are mature by 20. Some of this will be noise in the data, but not all of it.

These results also fit nicely with this paper from last year that looked at functional connectivity of brain activity.

So, while most maturation does happen before and during adolescence, these results show that it's not a straightforward case of The Adolescent Brain turning suddenly into The Adult Brain when you hit 21, at which point it solidifies into the final product.
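To make "non-linear change with age" concrete, here's a minimal sketch (made-up data and a simple quadratic fit, not the authors' actual fitting procedure) of how one might estimate the age at which a tract's fractional anisotropy peaks:

```python
# Illustrative only: fit a quadratic to FA vs age and read off the peak age.
import numpy as np

rng = np.random.default_rng(2)
age = rng.uniform(6, 32, 103)                    # 103 hypothetical participants
true_peak = 21.0
fa = 0.50 - 0.0004 * (age - true_peak) ** 2 + rng.normal(0, 0.01, age.size)

b2, b1, b0 = np.polyfit(age, fa, deg=2)          # FA = b2*age^2 + b1*age + b0
estimated_peak = -b1 / (2 * b2)                  # vertex of the fitted parabola
print(f"Estimated age at peak FA: {estimated_peak:.1f} years")
```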
Lebel C, & Beaulieu C. (2011) Longitudinal development of human brain wiring continues from childhood into adulthood. The Journal of neuroscience : the official journal of the Society for Neuroscience, 31(30), 10937-47. PMID: 21795544
Petanjek, Z., Judas, M., Simic, G., Rasin, M., Uylings, H., Rakic, P., & Kostovic, I. (2011) Extraordinary neoteny of synaptic spines in the human prefrontal cortex. Proceedings of the National Academy of Sciences. DOI: 10.1073/pnas.1105108108
According to the BBC, a new study has found that northern peoples have bigger eyes - and bigger brains.

Actually, the paper in question talked about eyes but didn't make much of the brain finding, which is confined to the Supplement. Nonetheless, they did find an effect on brain size too. Peoples living further from the equator have larger eye sockets and also larger total cranial capacity (brain volume), apparently. The authors include Robin Dunbar of "Dunbar's Number" fame.

Their idea is that humans evolved larger eyes because further from the equator there's on average less light, so you need bigger eyes to collect more light and see well.

They looked at 19th century skulls stored in museum collections, and measured the size of the eye sockets (orbits). They did this by filling them with a bunch of little glass balls and counting how many balls fit. They had a total of 73 "healthy adult" skulls from 12 different places, ranging from Scandinavia to Kenya. Latitude essentially meant northern-ness, because only one population (Australian Aborigines) was from far south of the equator.

Total brain size also increased with latitude, but eye size increased even faster, so the eye:brain ratio increased. They don't really discuss the brain size finding, except to suggest that it might be accounted for by increased visual cortex (though there's no direct evidence of that), but here it is, showing latitude vs. cranial capacity in ml.

The idea that northern peoples are brainier unfortunately has a long history. For example, it's been suggested that the coldness of northern climes meant that life was harder, so people evolved to be smarter to survive. The heat of the Sahara was easy living compared to the deadly horrors of an English winter, in other words. Hmm.

The idea that higher latitudes are darker, so you'd need bigger eyes, and then a bigger brain (at least the visual parts of the brain) to process what you see, is certainly more plausible than that theory. However, the data in this paper seem pretty scanty.

Measuring skulls by filling them with little balls was cutting-edge neuroscience in the 19th century. Nowadays, however, we have MRI scanners. Although usually intended to image the brain, many MRI scans of the head also give an excellent image of the skull and eyes. Millions of people of all races get MRI scans every year.

Nowadays, people have medical records, so we can tell exactly how healthy people are. The people who became these skulls in a museum were said to be healthy, but how healthy a 19th century Indian or Kenyan could hope to be, by modern standards, I'm not sure. Certainly there's an excellent chance that they were malnourished, and I suspect this would make your eyes and skull smaller.
Pearce, E., & Dunbar, R. (2011) Latitudinal variation in light levels drives human visual system size. Biology Letters. DOI: 10.1098/rsbl.2011.0570

"It's pretty painless. Basically you just need to lie there and make sure you don't move your head."

This is what I say to all the girls... who are taking part in my fMRI studies. Head movement is a big problem in fMRI. If your head moves, your brain moves, and all fMRI analysis assumes that the brain is perfectly still. Although head movement correction is now a standard part of any analysis software, it's not perfect.

It may be a particular problem in functional connectivity studies, which attempt to measure the degree to which different parts of the brain are "talking" to each other, in terms of correlated neural activity over time. These are extremely popular nowadays. It's even been claimed that this data may help us understand consciousness itself (although we've heard that before).

A new paper offers some important words of caution. It shows that head motion affects estimates of functional connectivity. The more motion, the weaker the measured connectivity in long-range networks, while shorter-range connections were stronger. The effect was small - head movement can't explain more than a small fraction of the variability in connectivity.

The authors looked at 1,000 scans from healthy volunteers, who just had to lie in the scanner at rest. They looked at functional connectivity, using standard "motion correction" methods, and correlated it with head movement (which you can measure very accurately from the MRI images themselves). Men tended to move more than women. Could this explain why women tend to have higher functional connectivity?

Disconcertingly, head movement was associated with low long-range / high short-range connections, which is exactly what's been proposed to happen in autism (although in fairness, not all the evidence for this comes from fMRI). This clearly doesn't prove that the autism studies are all dodgy, but it's an issue. People with autism, and people with almost any mental or physical disorder, on average tend to move more than healthy controls.

One caveat. Could it be that brain activity causes head movement, rather than the reverse? The authors don't consider this. Head movement must come from the brain, of course - probably from the motor cortex. The fact that motor cortex functional connectivity was positively associated with movement does suggest a possible link.

However, this paper still ought to make anyone who's using functional connectivity worry - at least a little.

Head motion is a particularly insidious confound. It is insidious because it biases between-group studies often in the direction of the hypothesized difference....even though there is considerable variation that is not due to head motion, in any given instance, a between-group difference could be entirely due to motion.

A toy simulation of how motion can masquerade as connectivity is sketched after the reference below.
Van Dijk, K., Sabuncu, M., & Buckner, R. (2011) The Influence of Head Motion on Intrinsic Functional Connectivity MRI. NeuroImage. DOI: 10.1016/j.neuroimage.2011.07.044
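Here's that toy simulation (entirely made-up data, not the paper's method): each simulated subject gets a motion level, motion adds a shared artifact to both ROI time series, and the measured "connectivity" ends up tracking motion even though the true coupling is identical for everyone.

```python
# Sketch: head motion masquerading as a difference in functional connectivity.
import numpy as np

rng = np.random.default_rng(3)
n_subjects, n_timepoints, true_coupling = 100, 200, 0.3

motion_level = rng.uniform(0, 1, n_subjects)          # e.g. mean framewise displacement
connectivity = []
for m in motion_level:
    signal = rng.normal(0, 1, n_timepoints)           # genuine shared neural signal
    artifact = m * rng.normal(0, 1, n_timepoints)      # motion artifact, common to both ROIs
    roi_a = signal + artifact + rng.normal(0, 1, n_timepoints)
    roi_b = true_coupling * signal + artifact + rng.normal(0, 1, n_timepoints)
    connectivity.append(np.corrcoef(roi_a, roi_b)[0, 1])

r = np.corrcoef(motion_level, connectivity)[0, 1]
print(f"Correlation between head motion and measured 'connectivity': r = {r:.2f}")
```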
The past decade has been a bad one for antidepressant manufacturers. Quite apart from all the bad press these drugs have been getting lately, there's been a remarkable lack of new antidepressants making it to the market. The only really novel drug to hit the shelves since 2000 has been agomelatine. There were a couple of others that were just minor variants on old molecules, but that's it.

This makes "Lu AA21004" rather special. It's a new antidepressant currently in development and, by all accounts, it's making good progress. It's now in Phase III trials, the last stage before approval. And a large clinical trial has just been published finding that it works. But is it a medical advance or merely a commercial one?

Pharmacologically, Lu AA21004 is kind of a new twist on an old classic. Its main mechanism of action is inhibiting the reuptake of serotonin, just like Prozac and other SSRIs. However, unlike them, it also blocks serotonin 5HT3 and 5HT7 receptors, activates 5HT1A receptors and partially agonizes 5HT1B. None of these things cry out "antidepressant" to me, but they do at least make it a bit different.

The new trial took 430 depressed people and randomized them to get Lu AA21004 at one of two doses, 5mg or 10mg, or the older antidepressant venlafaxine at the high-ish dose of 225 mg, or placebo.

It worked. Over 6 weeks, people on the new drug improved more than those on placebo, and as well as people on venlafaxine; the lower 5 mg dose was a bit less effective, but not significantly so. The size of the effect was medium, with a benefit over-and-above placebo of about 5 points on the MADRS depression scale which, considering that the baseline scores in this study averaged 34, is not huge, but it compares well to other antidepressant trials.

Now we come to the side effects, and this is the most important bit, as we'll see later. The authors did not specifically probe for these; they just relied on spontaneous report, which tends to underestimate adverse events.

Basically, the main problem with Lu AA21004 was that it made people sick. Literally - 9% of people on the highest dose suffered vomiting, and 38% got nausea. However, the 5 mg dose was no worse than venlafaxine for nausea, and was relatively vomit-free. Unlike venlafaxine, it didn't cause dry mouth, constipation, or sexual problems.

So that's lovely then. Let's get this stuff to market! Hang on.

The big selling point for this drug is clearly the lack of side effects. It was no more effective than the (much cheaper, because off-patent) venlafaxine. It was better tolerated, but that's not a great achievement, to be honest. Venlafaxine is quite notorious for causing side effects, especially at higher doses. I take venlafaxine 300 mg and the side effects aren't the end of the world, but they're no fun, and the point is, they're well known to be worse than you get with other modern drugs, most notably SSRIs.

If you ask me, this study should have compared the new drug to an SSRI, because they're used much more widely than venlafaxine. Which one? How about escitalopram, a drug which is, according to most of the literature, one of the best SSRIs: as effective as venlafaxine, but with fewer side effects.

Actually, according to Lundbeck, who make escitalopram, it's even better than venlafaxine. Now, they would say that, given that they make it - but the makers of Lu AA21004 ought to believe them, because, er, they're the same people. "Lu" stands for Lundbeck. The real competitor for this drug, according to Lundbeck, is escitalopram.
But no-one wants to be in competition with themselves. This may be why, although there are no fewer than 26 registered clinical trials of Lu AA21004 either ongoing or completed, only one is comparing it to an SSRI. The others either compare it to venlafaxine, or to duloxetine, which has even worse side effects. The one trial that will compare it to escitalopram has a narrow focus (sexual dysfunction).

Pharmacologically, remember, this drug is an SSRI with a few "special moves", in terms of hitting some serotonin receptors. The question is - do those extra tricks actually make it better? Or is it just a glorified, and expensive, new SSRI? We don't know, and we're not going to find out any time soon.

If Lu AA21004 is no more effective, and no better tolerated, than tried-and-tested old escitalopram, anyone who buys it will be paying extra for no real benefit. The only winner, in that case, would be Lundbeck.
Alvarez E, Perez V, Dragheim M, Loft H, & Artigas F. (2011) A double-blind, randomized, placebo-controlled, active reference study of Lu AA21004 in patients with major depressive disorder. The international journal of neuropsychopharmacology / official scientific journal of the Collegium Internationale Neuropsychopharmacologicum (CINP), 1-12. PMID: 21767441
A new paper claims to show the neural activity associated with consciously seeing something: Awareness-related activity in prefrontal and parietal cortices in blindsight reflects more than superior visual performance.

You might think it would be easy to find the neural correlates of seeing stuff. Just pop someone in the scanner and show them a picture. However, it's not that simple, because that wouldn't tell you which brain activations were associated with conscious awareness as such, as opposed to all of the other things that happen when we see a picture, many of which may be unconscious.

The new paper makes use of a patient, "GY", who has what's known as blindsight, a mysterious phenomenon caused by damage to the primary visual cortex on one side of the brain. In GY's case this was caused by head trauma at age 8. He's now 52, and is unable to see anything on the right side of his visual field. He only sees half the world. However, he is still able to respond to some kinds of visual stimuli on the right, as if he could see them. But he reports that he doesn't. Blindsight is a rare phenomenon but one that's been extensively studied, because of its obvious scientific and indeed philosophical interest.

In this study the authors used fMRI to try to work out the neural correlates of conscious awareness as opposed to unconscious responses. They showed GY a set of horizontal and vertical bars. His task was to say whether the horizontal bars were on top or not. The stimuli were shown on either the left or the right. The trick was that they set it up so that the task was equally easy whether the stimuli appeared on the "good" side or the "blind" side. In order to do that, they had to make the bars much fainter (lower contrast) on the "good" side.

What happened? As expected, behavioural performance was equal whether the stimuli were on the left or the right. GY got the judgement right about 75% of the time. However, his brain responded much more strongly to stimuli on the good side - stimuli that were consciously perceived. Activations appeared all over the cerebral cortex in the occipital, parietal and frontal lobes, as you can see in the pic at the top. The only area more activated by the unconscious stimuli was a tiny blob in the amygdala.

So what does this show? Is it "the neural correlates of conscious awareness", that Holy Grail of neuro-philosophers?

Maybe. It's a clever experimental design, which rules out some alternative explanations. It's hard to argue that the consciously perceived stimuli were just stronger, and hence more likely to affect the brain. They were actually much fainter. And it's hard to argue that this represents subconscious information processing, or the process of making the decision whether the horizontal bars were top or bottom, because that was also going on in the blind condition and performance was the same.

Yet my concern is that the main route by which visual information gets into the cortex from the eyes is via V1, the part which was damaged on one side. So in a sense it's no surprise at all that the cortex was more activated in the conscious condition. Maybe this is the whole point - maybe this study shows us that consciousness is to do with cortical processing. However, when you put it like that, it seems a bit of an anticlimax. I don't think anyone would seriously dispute that. The cortex does almost everything.
The interesting debates are about where in the cortex consciousness happens, if indeed it's localized at all, and what kind of processing underlies it.

It's unlikely that all of the activated areas were directly linked to conscious awareness. But we don't know which of them were.
Persaud, N., Davidson, M., Maniscalco, B., Mobbs, D., Passingham, R., Cowey, A., & Lau, H. (2011) Awareness-related activity in prefrontal and parietal cortices in blindsight reflects more than superior visual performance. NeuroImage. DOI: 10.1016/j.neuroimage.2011.06.081
Back in June, the U.S. Supreme Court ruled that a Californian law banning the sale of violent videogames to children was unconstitutional because it violated the right to free speech. However, the ruling wasn't unanimous. Justice Stephen Breyer filed a dissenting opinion. Unfortunately, it contains a whopping piece of bad neuroscience. The ruling is here. Thanks to the Law & Neuroscience Blog for noticing this.

Breyer says (on page 13 of his bit):

Cutting-edge neuroscience has shown that "virtual violence in video game playing results in those neural patterns that are considered characteristic for aggressive cognition and behavior."

He then cites this fMRI study from 2006. It's from the same group as this one I wrote about recently. Breyer quotes this study as part of a discussion of the evidence linking violent video game use to violence. I have nothing to say about this, but I will point out that violent crime fell heavily in America after 1990, which is when the Super Nintendo and Sega Megadrive were invented.

Anyway, does this study show that playing violent games causes aggressive brain activity? Not exactly. By which I mean "no".

They scanned 13 young men playing a shooter game. The main finding was that during "violent" moments of the game, activity in the rostral ACC and the amygdala falls. At least this is the interpretation the authors give. OK, but even if this neural response is "characteristic for aggressive cognition and behavior", it only lasted a few seconds. There's no evidence at all that this causes any lasting effects on brain function, or behaviour.

The real problem, though, is that the whole thing is based on the theory that violence is associated with reduced amygdala (and rACC) activity. The authors cite various studies to this effect, but they don't distinguish between reduced activity as an immediate neural response to violence, as in this study, and reduced activity in people with high exposure to violent media, in response to non-violent stimuli. This is rather like saying that because having a haircut reduces your total hair, and because bald people have no hair, haircuts cause baldness. Short-term doesn't automatically become long-term.

Besides, the whole idea that amygdala deactivation = violence is a bit weird, because they used to destroy people's amygdalas to reduce violent aggression in severe mental and neurological illness:

Different surgical approaches have involved various stereotactic devices and modalities for amygdaloid nucleus destruction, such as the injection of alcohol, oil, kaolin, or wax; cryoprobe lesioning; mechanical destruction; diathermy loop; and radiofrequency lesioning...

Lovely. It even worked sometimes, apparently. Although it killed 4% of people. You can't reduce the activity of a region much more than by destroying it, yet destroying the amygdala reduced violence, or at the very least, didn't make it worse.

The truth is that aggression isn't a single thing. Everyone knows that there are two main kinds, "in cold blood" and "in the heat of the moment". Killing someone in a spontaneous bar brawl is one thing, but carefully planning to sneak up behind them and stab them is quite another. Just based on what we know about the rare cases of amygdala-less people, I would imagine that destroying the amygdala would reduce violence "in the heat of the moment", which is motivated by anger and fear.
The kind of patients who got this surgery seem to have been that kind of violent person, not the cold, calculating kind.

So, even if violent video games reduced amygdala activity long term, that would probably reduce some kinds of violence.
Weber, R., Ritterfeld, U., & Mathiak, K. (2006) Does Playing Violent Video Games Induce Aggression? Empirical Evidence of a Functional Magnetic Resonance Imaging Study. Media Psychology, 8(1), 39-60. DOI: 10.1207/S1532785XMEP0801_4
An important paper just out asks, Could adult hippocampal neurogenesis be relevant for human behavior?

Neuroscientists, and the media, are very excited by hippocampal neurogenesis - the ongoing creation of new neurons in an area called the dentate gyrus of the hippocampus. This is because it was thought, for a long time, that no new neurons were created in the adult brain. It turned out that this was wrong. There's lots of exciting suggestive evidence that the process is involved in learning and memory, responses to stress, depression, and the action of antidepressants, to name just a few, although this is controversial.

However, there's a big question which has rarely been considered: how much neurogenesis are we talking about? Are there enough new cells that it would be realistic for them to be doing important stuff, or is it just a little trickle?

The most common source of skepticism toward a functional role for adult neurogenesis is the perception that too few new neurons are added in adulthood to have a significant impact. Interestingly, this concern, while valid, is usually raised informally and rarely in the scientific literature. Very few studies have addressed this issue...

The new paper reviews the evidence. Firstly, they point out that in the hippocampus, there's a group of cells called dentate gyrus granule cells which are unusual in that activity in just a few of these cells can have big downstream consequences. And these are the cells that newborn neurons turn into.

Each granule cell contacts only 10–15 CA3 pyramidal cells...a single granule cell is able to trigger firing in downstream CA3 targets...Because of this "detonator" action...a single granule neuron can potentially have a large impact despite representing only a tiny fraction of the population.

So new cells may play an important role. But exactly how many are there? They re-analyze data from their own lab in rats and, making a few assumptions, arrive at the following rough estimate: in 3 month old rats, there are 650k "young" cells less than 8 weeks old; even in 2 year old rats (ancient, for a rat) there are 50k. This is enough to have a big impact downstream:

Since there are approximately 500,000 CA3 pyramidal cells, and each granule cell contacts 11–15 pyramidal cells, this suggests that even in the oldest animals, each CA3 pyramidal cell could receive a direct contact from a young granule cell

That's all in rats, though. What about humans? It's hard to tell. The problem is that the best way to assess the rate of neurogenesis is to inject a drug called BrdU and then study the brain post-mortem. Unfortunately, this drug can cause cancer, so you can't just give it to people for the purposes of science. The only time it's used in humans is (ironically) to help detect cancer.

However, one study did manage to look at BrdU staining in the hippocampus, using people who'd been injected with BrdU for cancer (not brain cancer) and then died. This study found, the authors say, rates of neurogenesis at least as high as in rats, considering the low dose of BrdU, the fact that the patients were old, and stressed (by having cancer). They admit that this is just one study, and comparing doses between rats and humans is inexact. They nonetheless conclude:

Are these numbers potentially sufficient to exert a functional impact in humans? We feel that the answer to this question is an overwhelming "yes".
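As a back-of-envelope check on that claim, using the numbers quoted above (my own arithmetic, not the authors' calculation):

```python
# Rough arithmetic: young-granule-cell contacts per CA3 pyramidal cell in an aged rat.
young_granule_cells = 50_000        # young (<8 weeks) granule cells in a 2-year-old rat
contacts_per_granule_cell = 13      # each granule cell contacts ~11-15 CA3 cells
ca3_pyramidal_cells = 500_000       # approximate CA3 population

per_ca3_cell = young_granule_cells * contacts_per_granule_cell / ca3_pyramidal_cells
print(f"~{per_ca3_cell:.1f} young-granule-cell contacts per CA3 pyramidal cell")
# ~1.3, consistent with the claim that even in old animals each CA3 cell
# could receive a direct contact from at least one young granule cell.
```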
Read more »
Snyder JS, & Cameron HA. (2011) Could adult hippocampal neurogenesis be relevant for human behavior?. Behavioural brain research. PMID: 21736900
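Here is the back-of-envelope check promised above, as a few lines of Python. It is my own illustration, using only the figures quoted from the paper; the assumption that contacts are spread evenly across CA3 cells is mine, not the authors'.

young_cells = {"3-month-old rat": 650_000, "2-year-old rat": 50_000}  # "young" granule cells, from the post
ca3_pyramidal_cells = 500_000           # approximate total, from the post
contacts_per_granule_cell = (11, 15)    # range quoted from the paper

for age, n_young in young_cells.items():
    low = n_young * contacts_per_granule_cell[0] / ca3_pyramidal_cells
    high = n_young * contacts_per_granule_cell[1] / ca3_pyramidal_cells
    # Average number of contacts each CA3 pyramidal cell would receive from a
    # young granule cell, if contacts were spread evenly.
    print(f"{age}: {low:.1f} to {high:.1f} contacts per CA3 cell")

# Even in the 2-year-old rat, 50,000 x 11 = 550,000 contacts - slightly more
# than the ~500,000 CA3 pyramidal cells. Hence the paper's point that, even in
# the oldest animals, every CA3 cell could in principle receive input from at
# least one young neuron.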
A new paper claims to have found A novel functional brain imaging endophenotype of autism.

They used fMRI to show that the brains of teenagers with autism showed no activation differences to looking at smiling happy faces, or afraid faces, compared to unemotional ones. In teens without autism, there was strong activation in many emotional and face-related brain regions. The unaffected brothers and sisters of the autistic people showed intermediate effects.

This is a fine study. The finding that siblings of people with autism have weakened neural responses to emotional faces is quite important, as it suggests that this finding correlates (to some degree) with your position on the autism "spectrum".

The abstract of the paper actually downplays this, and says "The response in unaffected siblings did not differ significantly from the response in autism". However, there was a significant linear trend of group, and looking at the graphs, it's clear the siblings were In The Middle, like Malcolm.

There are plenty more nice things you could do with these results, which come from an unusually large and rich dataset (120 people - 40 in each group). You could see, for example, whether siblings tend to be similar in terms of neural response. You could see whether the siblings who are most alike in brain response are closest in symptoms. Or just look at the structural data on brain size and shape to see if there are characteristic differences between siblings that make one of them autistic and the other not.

There are a few problems. Most of the analyses are subject to the non-independence problem, because they defined their regions of interest based on the areas that showed a significant happy vs neutral face effect in the control group. So it's no surprise that when they generated graphs from these areas, the control group showed the strongest effect (a toy simulation of this selection effect is sketched after the citation at the end of this post). However, they also do whole-brain analyses which avoid this problem, and I don't think it undermines the main results.

So it's a decent study. But is this a "biomarker", or "endophenotype", as the title of the paper has it?

These are both hot topics in neuroscience at the moment. As the authors put it (emphasis mine):

An endophenotype is a heritable feature associated with a condition, present in affected individuals regardless of whether their condition is manifested, which co-segregates with the condition in families and which is present in unaffected family members at a higher rate than in the general population. In such family members, endophenotypes represent instances in which genes associated with a particular condition exert measurable effects in individuals in whom they are insufficient to cause the condition itself... The promise of characterizing endophenotypes lies in their hypothesized intermediate position between genotype and phenotype... the etiology of the endophenotype is likely to be correspondingly simpler: it can be said to be 'closer to the level of gene action'.

The idea, in other words, is that if we can find a difference in the brains of people with autism, and their unaffected relatives who (presumably) share some of the same genes, we might have found a mechanism by which the genes ultimately cause the symptoms.

It might be easier, in other words, to find the genes for brain-not-lighting-up-to-happy-faces, than it will be to find genes for autism.
Then once we've found those, we can use them to better understand autism.

Everyone's talking about biomarkers and endophenotypes, and in some fields, scarcely a paper comes out nowadays that doesn't lay claim to having found one. My concern is that, while in theory endophenotypes seem "closer to the genetics" because they're "biological" rather than "behavioural", this is just a philosophical illusion based on the idea that the mind is not the brain.

We actually have no idea whether brain-not-lighting-up-to-happy-faces is closer to genetics than autistic behaviour is. I'd say that our default assumption should be that everything is exactly the same "distance" from DNA, that is to say, everything is the product of complex interactions between genes and environment.

Some things are under the more or less exclusive control of a small number of genes, and these are called "genetic", but it's important not to assume that just because something's "in the brain", it's probably "more genetic" in this sense. The brain is a product of the environment as well.

If you scanned my brain while playing an audio recording of Urdu love poetry, not much would happen. I don't know Urdu. In someone who did speak Urdu, all kinds of language and emotional areas would light up. That doesn't mean Urdu-brain-response is genetic. It's exactly as genetic as speaking-Urdu, which isn't genetic.

Spencer, M., Holt, R., Chura, L., Suckling, J., Calder, A., Bullmore, E., & Baron-Cohen, S. (2011). A novel functional brain imaging endophenotype of autism: the neural response to facial expression of emotion. Translational Psychiatry, 1 (7) DOI: 10.1038/tp.2011.18... Read more »
Spencer, M., Holt, R., Chura, L., Suckling, J., Calder, A., Bullmore, E., & Baron-Cohen, S. (2011) A novel functional brain imaging endophenotype of autism: the neural response to facial expression of emotion. Translational Psychiatry, 1(7). DOI: 10.1038/tp.2011.18
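Here is the toy simulation of the non-independence (selection) problem promised above. It is purely my own illustrative sketch, not anything from the paper: two groups of pure-noise "voxels" are simulated, and selecting a region of interest because it shows a strong effect in one group guarantees that group will look best within that region.

import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_voxels = 40, 2000

# Pure noise: no real effect in either group, and no real group difference.
controls = rng.normal(0, 1, size=(n_subjects, n_voxels))
siblings = rng.normal(0, 1, size=(n_subjects, n_voxels))

# "Define the ROI" as the 50 voxels with the largest mean effect in controls.
roi = np.argsort(controls.mean(axis=0))[-50:]

print("Controls, mean effect in ROI:", round(controls[:, roi].mean(), 3))  # clearly > 0
print("Siblings, mean effect in ROI:", round(siblings[:, roi].mean(), 3))  # ~ 0

# The controls appear to have the strongest "activation" in the ROI, but only
# because the ROI was chosen to make them look that way. Independent ROI
# definitions or whole-brain analyses avoid this circularity.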
In theory, medicine works like this. You get some signs or symptoms. You go to the doctor, and depending on those, you get a diagnosis. Your doctor decides on the best available treatment on that basis.

The logic of this system depends upon the sequence. A diagnosis is meant to be an objective statement about the nature of your illness; treatments (if any) come afterwards. It would be odd if the treatments on offer influenced what diagnosis you got.

An interesting paper just out suggests that exactly this kind of reverse influence has happened. The authors looked at what happened in the USA in 2003 when antidepressants were slapped with a "black box" warning, cautioning against their use in children and adolescents, due to concerns over suicide in young people.

They used the data from the annual National Ambulatory Medical Care Survey (NAMCS) and the National Hospital Ambulatory Medical Care Survey (NHAMCS). These record data on the number of patients visiting their doctor regarding different illnesses, and what medications, if any, were prescribed.

What happened? The warning led to a reduction in the use of antidepressants. No surprise there, but unexpectedly, this wasn't because teens who visited their doctor regarding depression were less likely to get given these drugs.

Actually, the proportion of depression visits that were also antidepressant visits was almost unchanged:

The proportion of depression visits with an antidepressant prescribed, having risen from 54% in 1998–1999 to 66% in 2002–2003, remained stable in 2004–2005 (65%) and in 2006–2007 (64%)

The difference was caused by a reduction in the number of teens getting diagnosed with depression - or rather, the number of visits where depression was mentioned; we can't tell if this meant doctors were less likely to diagnose, or patients were less likely to complain, or whatever.

This graph shows the story. After 2003, both antidepressant visits and depression visits fall, while the proportion of "antidepressant & depression" visits to the total depression visits (purple line) is constant. (A toy numerical illustration of this logic is sketched after the citation at the end of this post.)

The effect seen is just a correlation - it might have been a coincidence that all this happened after the black box warning in 2003. It seems very likely to be causal, though: antidepressant use had been rising steadily up until that point, and in adults, who were not the target of the warning, both depression and antidepressant visits rose after 2003 - so the drop was specific to the group the warning was about.

It's also dangerous to pile too many heavy conclusions on the back of one study. But having said that: getting diagnosed with depression - at least if you're a teenager in the USA - is not just a function of having certain symptoms. The treatments on offer are a factor in determining whether you're diagnosed.

One alternative view is that the fall in depression visits reflects the fact that kids on antidepressants tend to have multiple visits - in order to monitor their progress, adjust dosage etc. So when antidepressant use fell, the number of visits fell. But if that were true, we'd presumably expect to see a fall in the proportion of depression visits that dealt with antidepressants - which we didn't.

This is disturbing either way you look at it. If you think the pre-2003 diagnoses were appropriate, then after 2003, kids must have been going undiagnosed with depression.
On the other hand, if you think post-2003 was a welcome move away from over-diagnosis of depression, then pre-2003 must have been bad.

As to what happened to the kids who would have got a diagnosis of depression post-2003 were it not for the black box warning, we've got no way of knowing.

Why did this happen? Psychologist Abraham Maslow famously said "It's tempting, if the only tool you have is a hammer, to treat everything as if it were a nail." The history of psychiatry bears this out.

Sigmund Freud's psychoanalysis was essentially the theory that most mental disturbance was a 'neurosis' or 'complex' of the kind that's best treated by lying on a couch and talking about your dreams and your childhood, which, as luck would have it, was exactly what Freud had just invented.

Along came psychiatric drugs, and suddenly everything was a 'chemical imbalance'. I've previously suggested that the invention of SSRI antidepressants, in particular, may have changed the concept of depression into one which was most amenable to treatment with SSRIs.

Recently, we're seeing the rise of the view that everything from psychosis to paedophilia is about 'cognitive biases' that can be treated by the latest treatment paradigm, CBT.

We always think we've hit the nail on the head.

Chen SY, & Toh S (2011). National trends in prescribing antidepressants before and after an FDA advisory on suicidality risk in youths. Psychiatric services (Washington, D.C.), 62 (7), 727-33 PMID: 21724784... Read more »
Chen SY, & Toh S. (2011) National trends in prescribing antidepressants before and after an FDA advisory on suicidality risk in youths. Psychiatric services (Washington, D.C.), 62(7), 727-33. PMID: 21724784
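And here is the little numerical illustration promised above. The prescribing proportions are the ones quoted from the paper; the visit counts are invented purely to show the logic: if the proportion of depression visits that come with an antidepressant stays flat, a fall in antidepressant prescribing can only come from a fall in depression visits.

depression_visits_2003, depression_visits_2005 = 100, 70          # hypothetical counts
prescribed_fraction_2003, prescribed_fraction_2005 = 0.66, 0.65   # from the paper

antidepressant_visits_2003 = depression_visits_2003 * prescribed_fraction_2003  # 66.0
antidepressant_visits_2005 = depression_visits_2005 * prescribed_fraction_2005  # 45.5

print(antidepressant_visits_2003, antidepressant_visits_2005)
# Antidepressant visits drop by roughly a third, yet a teenager who does turn
# up with depression is just as likely to leave with a prescription as before.
# The whole fall is carried by the drop in depression visits themselves.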
Some animals - such as dolphins and whales - are able to "sleep with half their brain". One side of the brain goes into sleep-mode activity while the other remains awake.

But a remarkable new study has revealed that something similar may happen in humans as well - every night.

The research used a combination of scalp EEG, and electrodes implanted inside the brain, to record brain activity from 5 people undergoing surgery to help cure severe epilepsy. The subjects were then allowed to go to sleep for the night, while recording took place.

As expected, after falling asleep, the EEG showed delta wave activity - strong, slow waves of electrical activity (0.5 to 4 Hz) which are typical of deep, dreamless "slow wave sleep".

However, the electrodes inside the brain told a different story. While they recorded delta waves most of the time, they also showed that there were episodes, lasting from a few seconds to up to 2 minutes, in which the motor cortex suddenly went into "waking mode". Delta waves disappeared, and were replaced with fast, unpredictable activity.

This image shows one episode, lasting just 5 seconds. The hotter the color, the more activity in a particular frequency. The higher the band, the higher the frequency. This shows a clear burst of high frequency activity in the motor cortex. The other parts of the brain showed the opposite effect - even stronger slow wave activity - at the same time. (A toy sketch of this kind of band-power analysis appears after the citation at the end of this post.)

Another area, the dorsolateral prefrontal cortex, also showed this phenomenon occasionally, but it was much less common than in the motor cortex.

There are a few caveats. These patients had severe epilepsy, and they were taking anti-convulsant drugs. This wouldn't obviously create the effects seen here, but we can't rule it out. Still, these results are intriguing.

They challenge the view of slow wave sleep as a "whole brain" phenomenon. We've known for a while that this isn't true in some animals, or in certain sleep disorders, but this is the first demonstration in healthy humans.

It may help to explain the mysterious fact that, although slow wave sleep is often referred to as "dreamless", there are consistent reports that people woken up from this phase of sleep do report dreaming (or at least thinking) about things.

While episodic arousal of the motor cortex probably wouldn't explain this per se, if the same thing happens in the visual cortex or other sensory areas, it might create dreams.

Nobili L, Ferrara M, Moroni F, De Gennaro L, Russo GL, Campus C, Cardinale F, & De Carli F (2011). Dissociated wake-like and sleep-like electro-cortical activity during sleep. NeuroImage PMID: 21718789... Read more »
Nobili L, Ferrara M, Moroni F, De Gennaro L, Russo GL, Campus C, Cardinale F, & De Carli F. (2011) Dissociated wake-like and sleep-like electro-cortical activity during sleep. NeuroImage. PMID: 21718789
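For a flavour of how a result like the one described above can be detected, here is a minimal sketch of the general idea: track power in the delta band (0.5-4 Hz) against power at faster frequencies over time, and flag "wake-like" epochs where delta drops and fast activity dominates. This is my own toy illustration on synthetic data, not the authors' pipeline; the sampling rate, frequency bands, and threshold are all assumptions.

import numpy as np
from scipy import signal

fs = 200                          # sampling rate in Hz (assumed)
t = np.arange(0, 60, 1 / fs)      # one minute of synthetic "intracranial EEG"
rng = np.random.default_rng(0)

# Mostly slow-wave-like activity, with a 10-second wake-like burst in the middle.
eeg = np.sin(2 * np.pi * 1.5 * t)                     # delta-range oscillation
burst = (t > 25) & (t < 35)
eeg[burst] = 0.2 * np.sin(2 * np.pi * 30 * t[burst])  # fast, low-amplitude activity
eeg += 0.1 * rng.normal(size=t.size)

def band_power(x, fs, lo, hi):
    # Total power in the [lo, hi] Hz band, estimated with Welch's method.
    f, pxx = signal.welch(x, fs=fs, nperseg=len(x))
    return pxx[(f >= lo) & (f <= hi)].sum()

# Slide a 2-second window over the recording and compare band powers.
win = 2 * fs
for start in range(0, eeg.size - win + 1, win):
    seg = eeg[start:start + win]
    delta = band_power(seg, fs, 0.5, 4)    # slow-wave band
    fast = band_power(seg, fs, 20, 40)     # fast, "wake-like" activity
    if fast > delta:
        print(f"wake-like epoch at {start / fs:.0f}-{(start + win) / fs:.0f} s")

Run on this synthetic signal, only the windows inside the 25-35 s burst are flagged; in a real recording the same comparison would be done per electrode, which is how a single region can look "awake" while the rest of the brain stays in slow wave sleep.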