The Neurocritic

318 posts · 398,984 views

Born in West Virginia in 1980, The Neurocritic embarked upon a roadtrip across America at the age of thirteen with his mother. She abandoned him when they reached San Francisco and The Neurocritic descended into a spiral of drug abuse and prostitution. At fifteen, The Neurocritic's psychiatrist encouraged him to start writing as a form of therapy.



  • March 19, 2012
  • 01:16 PM

Does the Human Dorsal Stream Really Process Elongated Vegetables?

by The Neurocritic in The Neurocritic

What do zucchini and hammers have in common? Both might be processed by the dorsal stream.

The primate visual system is divided into ventral ("what") and dorsal ("where") visual streams that are specialized for object recognition and spatial localization, respectively (Mishkin et al., 1983; Haxby et al., 1991). Goodale and Milner (1992) conceptualized the two pathways as "vision for perception" and "vision for action":

We propose that the ventral stream of projections from the striate cortex to the inferotemporal cortex plays the major role in the perceptual identification of objects, while the dorsal stream projecting from the striate cortex to the posterior parietal region mediates the required sensorimotor transformations for visually guided actions directed at such objects.

Other researchers have extended the degree of specialization shown by the visual and semantic systems. Some studies have suggested there might be category-specific processing of living and non-living things (e.g., animals and tools), although the reasons for this specialization are a matter of debate (Caramazza & Shelton, 1998; Thompson-Schill et al., 1999). Chao and Martin (2000) found that pictures of tools activated the left posterior parietal cortex in the dorsal stream to a greater extent than pictures of animals, houses, and faces.
The idea is that objects with salient motor-based properties (hammers) should recruit "vision for action" cortical regions to a greater extent than objects without such affordances (zucchini).

More recently, Almeida and colleagues (2008) used two different visual masking techniques in a priming study designed to isolate the influence of the dorsal stream:

We used two techniques to render prime pictures invisible: continuous flash suppression (CFS), which obliterates input into ventral temporal regions, but leaves dorsal stream processes largely unaffected, and backward masking (BM), which allows suppressed information to reach both ventral and dorsal stream structures.

Their results suggested that categorically related primes suppressed under CFS still facilitated reaction times to tool targets, but not to animal targets. In other words, participants were faster to classify tools when preceded by a picture of a tool than when preceded by a picture of an animal, and this priming effect held up when the ventral stream was unavailable.

A new study by Sakuraba et al. (2012) wanted to clarify which specific attributes of tools are processed by the dorsal stream, so they used a greater variety of categorically related and unrelated prime stimuli suppressed under CFS, as shown below.

Fig. 1 (Sakuraba et al., 2012). Procedure using CFS. Different images were presented into the subject's left and right eyes by using anaglyphs. Dynamic high-contrast random-noise patterns (10 Hz) were presented to the dominant eye, while low-luminance, low-contrast prime stimuli were presented to the nondominant eye. Subjects could report the dynamic noise but not the static image. Each trial started with a fixation cross for 500 ms, followed for 200 ms by a prime stimulus suppressed by CFS. Finally, a target stimulus masked by 70% additive noise was presented until the subject responded (maximum duration: 3 s) by pressing a key to indicate the category of the target stimulus.
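The "priming effect" in these studies is simply a reaction-time difference between prime conditions. A minimal sketch of that arithmetic, with hypothetical numbers (not data from Almeida et al. or Sakuraba et al.):

```python
# Illustrative sketch only: how a masked-priming effect is typically
# quantified. All RTs (ms) are hypothetical, not data from either study.

def mean(xs):
    return sum(xs) / len(xs)

def priming_effect(rt_related, rt_unrelated):
    """Mean RT after unrelated primes minus mean RT after related primes.
    Positive values indicate facilitation by the related prime."""
    return mean(rt_unrelated) - mean(rt_related)

# Hypothetical RTs for classifying tool targets under CFS:
rt_after_tool_prime = [612, 598, 605, 621, 590]     # categorically related
rt_after_animal_prime = [634, 640, 628, 655, 647]   # categorically unrelated

effect = priming_effect(rt_after_tool_prime, rt_after_animal_prime)
print(f"Priming effect: {effect:.1f} ms")  # here: 35.6 ms
```

In the actual experiments, such per-condition differences are then tested for statistical significance across subjects.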
One of the manipulated attributes was shape. Non-elongated ("stubby") tools, elongated vegetables, and stubby vegetables¹ were used as primes for elongated tools (e.g., hammer, ax, fork, etc.). Other conditions used geometric shapes as primes.

In Experiment 2, we used tool pictures without elongated shape components, namely stubby tools (e.g., a punch, a squeezer, a mouse, and so on). ... In Experiment 3, elongated stick figures were used as prime stimuli. In Experiment 4, elongated and stubby vegetable pictures were presented as prime stimuli. Because the elongated shapes involve an orientation component, we could not exclude the possibility that orientation, rather than shape attribute, explained the results. Therefore, we conducted Experiment 5 to clarify this. We used elongated stick figures, diamond shapes, and cut circles that were rotated in 45° increments as prime stimuli.

Interestingly, membership in the category of tools per se was irrelevant; it was the shape of the prime that mattered.

Fig. 4 (Sakuraba et al., 2012). Priming effect in Experiments 4 and 5.² Light and dark gray bars represent mean priming effects to tool targets and animal targets, respectively. Error bars indicate SEM. The pictures represent examples of the prime stimuli we used.

This throws a wrench (so to speak) into the dorsal stream as the vision for action pathway, unless you normally use a zucchini to pound your nails into the wall. The less dramatic interpretation is that the categorical information obtained by viewing pictures of tools isn't neatly respected by the dorsal stream, but visually-guided reaching and grasping remain unscathed.

Footnotes

1 The "stubby vegetables" were my favorite part of the paper.

2 You might want to quibble with the size and functional significance of the priming effect. Although statistically significant, it was rather small.

References

Almeida J, Mahon BZ, Nakayama K, Caramazza A. (2008). Unconscious processing dissociates along categorical lines. Proc Natl Acad Sci. 105:15214-18.

Caramazza A, Shelton JR. (1998). Domain-specific knowledge systems in the brain: the animate-inanimate distinction. J Cogn Neurosci. 10:1-34.

Chao LL, Martin A. (2000). Representation of manipulable man-made objects in the dorsal stream. Neuroimage 12:478-84.

Goodale MA, Milner AD. (1992). Separate visual pathways for perception and action. Trends Neurosci. 15:20-5.

Haxby JV, Grady CL, Horwitz B, Ungerleider LG, Mishkin M, Carson RE, Herscovitch P, Schapiro MB, Rapoport SI. (1991). ... Read more »

Sakuraba S, Sakai S, Yamanaka M, Yokosawa K, & Hirayama K. (2012). Does the human dorsal stream really process a category for tools? The Journal of Neuroscience, 32(11), 3949-53. PMID: 22423115

  • February 19, 2012
  • 04:21 AM

That's Impossible! How the Brain Processes Impossible Objects

by The Neurocritic in The Neurocritic

Relativity, by M.C. Escher.

The artwork of M.C. Escher is famous for its visual trickery. The human visual system tries to project the two dimensional image onto a three dimensional scene, but the perspective is contradictory: it cannot exist in the real world. These impossible constructions violate the laws of geometry and fascinate consumers of t-shirts, posters, and Apple products.

How does the brain represent these illusory staircases and towers? While a fascinating topic of study in the field of object perception (Levy et al., 2004), Escher prints can make for overly complicated stimuli in neuroimaging experiments. Simpler 2D figures, such as the impossible objects drawn by Swedish artist Oscar Reutersvärd, have been used in fMRI experiments (Soldan et al., 2008).

An extensive collection of 810 impossible objects is available from Impossible World, which is a fantastic resource¹ maintained by Vlad Alexeev.

Previous neuroimaging experiments have used the possible/impossible object decision task to study the neural correlates of perceptual priming, an implicit form of memory. Behaviorally, repeated presentation of possible objects results in faster decision times, and this priming effect is smaller (Soldan et al., 2008) or non-existent (Schacter et al., 1995) for impossible objects. Neurally, the phenomenon of repetition suppression, or the reduction in neural activity seen upon repeated stimulus presentation, is thought to reflect facilitated perceptual processing (and perhaps behavioral priming).² Repetition suppression predicts behavioral priming for possible objects (Habeck et al., 2006):

A set of occipital, parietal, and temporal brain regions decreased their activation across presentations, including bilateral middle occipital gyrus, left precuneus, right supramarginal gyrus, as well as some frontal and thalamic areas, such as right inferior frontal gyrus, left cingulate gyrus, and right thalamus.
However, no such relationship was observed for impossible objects.

The previous studies focused on varieties of repetition priming and whether there is a "structural description system" that facilitates the identification of perceptually coherent objects. A recently published article was specifically interested in the neural basis of impossible figures and how they are represented in the visual cortex (Wu et al., 2012). The stimuli were impossible and possible exemplars of the two-pronged trident (Fig. 1 below), shown at four different angles.

Fig. 1 (Wu et al., 2012). Examples of stimulus figures used in impossible condition and possible condition. (a) Is an impossible figure and (b) is a possible figure [that] resembles the former.

The paper started by reviewing the basic neuroanatomy of the visual system and its division into dorsal ("where") and ventral ("what") visual streams. Objects are primarily represented in the ventral stream, and the lateral occipital complex (LOC) is one area that seems to be specialized for object recognition. The authors predicted that impossible objects would be difficult for the LOC to process; therefore, additional regions would be recruited:

In the present study, we thought that the 3D structures of impossible figures might be difficult to be represented by object-selective regions (such as the LOC), and the impossible perceptions might be derived from detecting the contradiction in interpretation of the 3D structure. Therefore, we postulated that both the brain regions in the dorsal visual pathway, such as the SPC [superior parietal cortex] related to the perceptual ambiguities resolving and perceptual content modifying, and the brain areas related to the object-selective regions in the ventral pathway would be involved in the impossible figures processing.

Nineteen participants performed the possible/impossible object decision task (30 trials of each condition) while their brains were scanned.
Four participants showed repetition priming in the task (first 15 trials of each condition slower than the last 15) and were excluded. The remaining subjects did not show priming.³ Personally, I would have used 30 unique possible and impossible figures to avoid priming effects entirely.

What were the results? As predicted, regions in both dorsal and ventral visual streams showed greater activation for impossible than for possible figures: right superior parietal cortex in the former and right fusiform and inferior temporal gyri in the latter.

The right SPG in the dorsal visual pathway might be related to spatial information processing and the right LOC (FG and ITG) in the ventral visual pathway (the object-selective regions) might be related to the representation of the impossible 3D structure. Therefore, our results indicated that the impossible 3D structure might be difficult to be represented by human visual system, and the impossible perception might be derived from the detecting and resolving the contradiction in the subjects’ interpretations according to different perceptions triggered by 3D cues.

Fig. 2 (Wu et al., 2012). Brain regions showing significant difference between impossible condition and possible condition [FWE-corrected threshold of P < 0.05 at the cluster level (P < 0.001, 10 contiguous voxels cutoff at the voxel level)].

There were no brain regions that showed greater activation for possible objects.

... Read more »
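As an aside, the split-half exclusion criterion described in this post (a participant "shows priming" when the first 15 trials are slower, on average, than the last 15) is easy to state in code. A minimal sketch with hypothetical RT series:

```python
# Illustrative sketch of a split-half priming check of the kind Wu et al.
# used as an exclusion criterion. All RT series below are hypothetical.

def shows_priming(rts):
    """Return True if mean RT over the first half of trials exceeds
    mean RT over the second half (i.e., responses sped up)."""
    half = len(rts) // 2
    first, second = rts[:half], rts[half:]
    return sum(first) / len(first) > sum(second) / len(second)

speeding_up = [900 - 8 * i for i in range(30)]  # steadily faster -> excluded
stable = [750] * 30                             # no change -> retained

print(shows_priming(speeding_up))  # True
print(shows_priming(stable))       # False
```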

  • February 13, 2012
  • 01:35 AM

21st Century Treatments for Insomnia

by The Neurocritic in The Neurocritic

Are you having trouble sleeping? But you're not feeling that 19th century retro hipster insomniac vibe? Try some of these behavioral remedies recommended by the finest scientific and medical journals of today.

What a Difference a Day Makes

Is Intensive Sleep Retraining (ISR) a new overnight treatment for chronic insomnia (Harris et al., 2012)? ISR is conducted in one 25 hr session at a sleep lab, where the insomniac sleeps a maximum of 3 min every 30 min for a period of 25 hrs. Instant cure! (supposedly). The basic idea is that the person will learn they can fall asleep fairly quickly and easily, and this will translate directly to real life sleeping patterns.

In a commentary accompanying the main article in Sleep, Spielman and Glovinsky (2012) describe it as:

...a novel insomnia treatment that while radical in procedure is grounded in learning theory, a long-established conceptual framework for understanding insomnia. ISR combines two familiar components of sleep research—sleep deprivation and the polysomnographic recording of sleep onset—to yield an entirely new therapeutic procedure: repeated practice in falling asleep quickly. Massed practice in achieving sleep is here shown to possess a therapeutic value rivaling that of stimulus control therapy (SCT), that mainstay of behavioral sleep medicine, as well as offering a possible additive effect when administered in conjunction with SCT.

ISR employs sleep laboratory technology to measure the speed of sleep onset, limit the duration of sleep, and allow immediate feedback to subjects as to whether objectively recorded sleep has occurred. It typically provides dozens of successful entries to sleep over the course of a single night and day.
Then it is over, handing off responsibility for good sleep management to sleep hygiene recommendations.

In contrast to ISR, there is already strong research support for stimulus control therapy (SCT), which is designed to:

...reduce the anxiety or conditioned arousal individuals may feel when attempting to go to bed. Specifically, a set of instructions designed to reassociate the bed/bedroom with sleep and to re-establish a consistent sleep schedule are implemented. These include: 1) Going to bed only when sleepy; 2) Getting out of bed when unable to sleep; 3) Using the bed/bedroom only for sleep and sex (i.e., no reading, watching TV, etc); 4) Arising at the same time every morning; and 5) Avoiding naps.

One question, then, is whether ISR is better than SCT, an accepted behavioral therapy for insomnia. Eighty participants in the study of Harris et al. were randomized into one of four groups: (1) ISR + sleep hygiene instruction (SH); (2) SCT + SH; (3) ISR + SCT; (4) SH alone, which served as the control condition. All participants kept a sleep diary, answered questionnaires, and wore an actigraph to measure motor activity. Those in the ISR groups slept no more than 5 hrs the night before they came to the lab.

The highly intrusive ISR procedure involved arriving at 21:00.

Following an explanation, the signing of an informed consent form, electrode application, and a quiet settling period, treatment began at 22:30. Treatment trials were conducted every half hour, finishing after 23:00 on night 2. Thereby, the ISR treatment routine allowed a series of 50 half-hourly sleep onset opportunities. ... Within each treatment trial, the opportunity for sleep onset was limited to a 20-min period, with the trial stopping if sleep onset had not occurred by this time. For those trials in which sleep was initiated, 3 consecutive minutes of sleep were permitted, prior to being awoken [the method of awakening was not described].
Upon awakening, treatment participants first rated their perception of whether sleep onset had occurred (on a Likert scale of 1 “No, definitely not” to 7 “Yes, definitely”). Following this response, participants were provided with information as to whether sleep onset had or had not occurred.

Then they got out of bed to read or watch DVDs. After 10 trials of this nonsense, people were falling asleep in 5 min or less.

Ultimately, did this punitive procedure work? Yes. But it wasn't significantly better than SCT for most of the subjective sleep measures used. All three active treatment conditions produced improvements in self-reported duration and efficiency of sleep, relative to the SH control. Of the 16 or so analyses at 2 of 7 selected time points (which did not seem to be corrected for multiple comparisons), there were some instances where ISR or the combined ISR + SCT treatment was better than stimulus control therapy (see below), but nothing earth shattering.

In another graph (Fig. 5 - Mean sleep diary wake time after sleep onset), the SCT groups were superior to ISR at Week 1 and Post-Treatment.

What about the objective sleep measures obtained by actigraphy?

The actigraphy data failed to support significant changes in sleep, despite using an adjusted manual scoring method and a sensitivity setting in the scoring algorithm that calibrated actigraphy TST [total sleep time] to PSG [polysomnography] TST. Actigraphy has similarly failed to mirror subjective sleep changes in other treatment studies in insomnia, and objective measures (i.e., EEG) fail to replicate the extent of subjective sleep changes in clinical insomnia treatment studies.

The authors concluded that actigraphy is useless and that subjective sleep report is the only thing that matters (basically).

So what's next?
Intensive Sleep Retraining is costly and available only from highly specialized centers.¹ But the possibility of self-administered ISR is on the horizon, using portable EEG headsets, actigraphs, and vibrating alarms. Is there an app for that?

Footnote

1 I'm not sure that it's even being offered as a clinical treatment. The RCT was conducted in Australia.

References... Read more »

  • January 28, 2012
  • 04:53 AM


by The Neurocritic in The Neurocritic

Today was the sixth anniversary of this blog. I'm not much for meta-blogging or general chattiness, but I thought I would highlight the nine posts (out of 700) with the most comments. Thank you for your support over the years, and keep the comments coming.

9. Friston Is Freudian - Friday, March 12, 2010
Neuropsychoanalysis is in the news again because of the recent publication of Neural correlates of the psychedelic state as determined by fMRI studies with psilocybin. In 2010, first author Carhart-Harris published an expansionist mega-opus (with Karl Friston) on The default-mode, ego-functions and free-energy: a neurobiological account of Freudian ideas, the basis for the present post and its follow-up.

8. Is CBT Worthless? - July 03, 2009
According to a meta-analysis by Lynch, Laws and McKenna, Cognitive Behavioral Therapy (CBT) is not helpful for those with schizophrenia and bipolar disorder, and any improvements seen in major depression are rather small.

7. White Matter Differences in Pre-Op Transsexuals Should NOT be the Basis for Childhood Interventions - January 28, 2011
Contains a number of comments by transgendered individuals who took exception with various aspects of this post.

6. The Precuneus and Recovery from a Minimally Conscious State - July 05, 2006
Includes a number of comments, over a two year period, from a father caring for his son.

5. Voodoo Correlations in Social Neuroscience - January 05, 2009
On the infamous paper by Edward Vul, Christine Harris, Piotr Winkielman and Harold Pashler, ultimately retitled Puzzlingly High Correlations in fMRI Studies of Emotion, Personality, and Social Cognition.

4. Glossolalia - November 04, 2006
Includes personal statements from many individuals who feel they speak in tongues.

3. Bad News for the Genetics of Personality - August 07, 2010
A recent search for genetic variants that underlie differences in personality traits came up empty (Verweij et al., 2010).

2. The Pseudoscience of Anti-Psychiatry in PLoS Medicine - August 01, 2006
Antipsychiatry is always a hot-button topic, and this early post attracted 44 comments.

1. Airplane Headache - August 15, 2010
The winner by a mile, with 72 comments, is on a supposedly rare type of headache that occurs during take-off and landing (Atkinson & Lee, 2004). The pain appears to be unique to plane travel and not associated with other conditions. Neurological exam and brain imaging results in all published cases (n=14) have been normal. Clearly, there are more than 14 people who suffer from these excruciating headaches on airplanes. Triptan drugs (used to treat migraines and cluster headaches) may be effective in preventing airplane headaches (Ipekdal et al., 2011).

Thank you for reading!

... Read more »

Vul E, Harris C, Winkielman P, & Pashler H. (2009). Voodoo Correlations in Social Neuroscience. Perspectives on Psychological Science.

  • January 19, 2012
  • 07:27 PM

Deep Brain Stimulation for Bipolar Depression

by The Neurocritic in The Neurocritic

The Melancholia of Kirsten Dunst and Lars von Trier

“Gray wool, clinging to my legs, it's heavy to carry along”

The disastrous wedding reception of the severely depressed Justine precedes the end of the world, depicted as a highly stylized and artistic event feared by some but welcomed by others. Kirsten Dunst plays the role of von Trier's own melancholia, which was the inspiration for his film. The image above occurred out of context, at the very beginning, during the bombastic Wagnerian apocalyptic prelude to Part One, "Justine," and Part Two, "Claire." We don't hear Justine say those words until later, when she has lost the ability to care for herself. "She should be hospitalized," I thought at the time, and wondered why no one was getting her psychiatric help. But then we wouldn't have a movie that deals with internal struggle and suffering.

Deep Brain Stimulation for Treatment-Resistant Depression

Severe depression that is refractory to treatment, i.e., unresponsive to psychotherapy, multiple trials of antidepressant drugs (often combined with atypical antipsychotics, mood stabilizers, benzodiazepines, etc.), and electroconvulsive therapy (ECT), takes a tremendous toll on the long-suffering patients and their families. An alternative treatment modality, deep brain stimulation (DBS), has been in clinical trials for intractable depression for nearly 10 yrs. It uses the same sort of device as DBS for Parkinson's disease, which has been remarkably successful in alleviating symptoms. Electrodes are implanted deep in the brain, targeting the ventral portion of the anterior cingulate cortex, in Brodmann's area 25.

Other brain regions have been targeted for DBS in major depressive disorder, including the nucleus accumbens, but today we'll focus on the work of Dr. Helen Mayberg and her colleagues at Emory University in Atlanta, Georgia.

Figure 1 (Holtzheimer et al., 2012). Surgical targeting.
Preoperative MRI shows the sagittal (A) and coronal (B) views of the planned optimal subcallosal cingulate (SCC) white matter target (red circle). The dotted black line indicates the subcallosal plane of interest, parallel to the anterior-posterior commissural line; the dotted white line indicates the rostral limit of the subcallosal plane; and the dotted red line indicates the midsubcallosal plane. The red circle indicates demarcation of the SCC white matter target and surrounding gray matter. C and D, Postoperative computed tomography scan merged with preoperative MRI showing a typical case with the deep brain stimulation electrodes in situ. Note that the contacts span the SCC gray and white matter in the vertical plane proximal to the split of the cingulum bundle and rostral medial frontal white matter tracts (C, red arrows, sagittal view). Contacts are numbered by convention (1-4 on the left, 5-8 on the right), inferior to superior. Contacts 2 and 3 are directly in the SCC white matter, and contacts 1 and 4 are in the inferior and superior gray matter, respectively.

Why stimulate subcallosal cingulate/area 25¹ in depression? Previous neuroimaging studies by Mayberg and colleagues (2000) showed that resting glucose metabolism in this region is overly active in depressed people, and a reduction in activity was associated with antidepressant treatment response. Another key observation was made using a mood induction paradigm in healthy volunteers (Mayberg et al., 1999). After the participants remembered a sad autobiographical memory, their SCCs showed greater blood flow relative to a neutral mood state. Thus, the "sad cingulate" was implicated in normal sadness as well as in depression.

The most recent DBS report, published in the Archives of General Psychiatry (Holtzheimer et al., 2012), is a follow-up after two years of chronic, high frequency stimulation of the subgenual cingulate white matter.
The basic findings have been summarized elsewhere, including Providentia, with a review of possible mechanisms at The Scicurious Brain.

The aspect of the study that I'd like to focus on today is the inclusion of patients with Bipolar-Type II (BP-II) for the first time, in addition to those with unipolar depression. Just as with the unipolar patients, those with BP-II had to be in a depressive episode for at least 1 yr.

The specific DSM-IV diagnostic criteria for Bipolar II Disorder are:

A. Presence (or history) of one or more Major Depressive Episodes.
B. Presence (or history) of at least one Hypomanic Episode.
C. There has never been a Manic Episode or a Mixed Episode.
D. The mood symptoms in Criteria A and B are not better accounted for by Schizoaffective Disorder and are not superimposed on Schizophrenia, Schizophreniform Disorder, Delusional Disorder, or Psychotic Disorder Not Otherwise Specified.
E. The symptoms cause clinically significant distress or impairment in social, occupational, or other important areas of functioning.

Are the same neural circuits implicated in treatment-resistant depression also involved in BP-II? Remarkably, there is nothing in the literature that presents a rationale for using DBS for bipolar depression specifically, nor about why the subgenual cingulate white matter should be the target. A 2010 review by Lipsman, Lozano, and others from the Toronto neurosurgical group stated: "There are currently no trials or reports in the literature on the use of DBS for the exclusive treatment of bipolar diseas... Read more »

  • January 14, 2012
  • 04:10 AM

Remembering and Forgetting in Traumatized Ugandan Refugees

by The Neurocritic in The Neurocritic

Gulu, Uganda (vis photography)

Most of us have memories from the past that we'd rather forget. When those memories are of a traumatic nature, they can be more difficult to expel from our minds. Unwanted memories can be rejected by means of active inhibitory processes (Anderson & Levy, 2009), but these mechanisms are impaired in individuals with post-traumatic stress disorder, or PTSD (Zwissler et al., 2011): essentially, PTSD patients have trouble remembering what they are supposed to remember and forgetting what they would rather not remember. They appear to have impaired memory control.

A group of German investigators conducted a study on memory and forgetting in one of the more unsettling regions of the world: northern Uganda. The Lord's Resistance Army (LRA), a terrorist organization, has waged a long and brutal campaign to overthrow the government of Uganda:

Rape, torture, and murder have become the group's hallmarks in the almost fifteen [twenty or twenty-five] years that they have terrorized the citizens of Northern Uganda. The ranks of the LRA are filled in large part (approximately 80%) by children, who are kidnapped and brainwashed into service with the group. Human rights NGOs place the number of children currently fighting with LRA at around 3,000. LRA members also kidnap children, particularly girls, to serve as sex slaves; some have even been given as "gifts" to arms dealers in Sudan.

Zwissler and colleagues (2011) recruited severely traumatized participants for a study on directed forgetting, a memory task where instructions are given to remember some items but to forget others during the encoding phase. The participants were 51 young people (mean age = 20.8 yrs, range 16-30) living in Internally Displaced Persons (IDP) camps near the city of Gulu in Northern Uganda.
All were equally exposed to traumatic events such as abduction, but only 26 were diagnosed with PTSD, an anxiety disorder marked by intrusive memories and flashbacks.

The participants had two years of education on average, and many were functionally illiterate. For this reason, pictures were used as the stimuli (instead of words, which are commonly used in this type of study). The pictures were neutral in valence, to examine whether memory problems in this population would extend to non-emotional material.

In the experiment, 28 pictures (Set A) were presented during an initial encoding phase. Each picture was followed by a symbol that signaled whether the preceding picture should be remembered or forgotten. During the test phase, all 28 pictures were presented, along with new pictures that served as "lures" similar to the initial set (Set B; see below).

Fig. 1 (Zwissler et al., 2011). Illustration of the picture sets showing three representative target-distractor pairs: (a) Set A; (b) Set B. (Original photographs were shown in colour.)

For each stimulus, participants were told to indicate whether they had seen it before, regardless of the prior instruction to remember or forget. Overall accuracy in the task is shown in the figure below. The non-PTSD group had better memory for the pictures they were told to remember, compared to those they were told to forget. In contrast, the PTSD group showed no difference in accuracy for the to-be-remembered vs. the to-be-forgotten pictures.

Fig. 2 (Zwissler et al., 2011). A comparison of the effect of directed forgetting on discrimination accuracy in the two groups. PTSD, post-traumatic stress disorder. *Indicates significant differences.

One way to view these results is that the participants with PTSD performed worse than controls for items they were supposed to remember, and were unable to invoke inhibitory processes to suppress memory for the to-be-forgotten items ("trouble remembering what they are supposed to remember and forgetting what they would rather not remember").

Breaking down task performance a little further, the PTSD group was more inclined to make "false alarm" errors to the lures related to pictures they were supposed to remember. This suggests that the details of the to-be-remembered pictures weren't encoded as well, and were more easily confused with related pictures they didn't see.

The authors concluded that...

...traumatized individuals with (but not without) PTSD are impaired in their ability to selectively control episodic memory encoding. This impairment may contribute to clinical features of the disorder such as intrusions and flashbacks.

However, "directed forgetting" is usually not a practical strategy when real life events are unfolding. Do these results imply that the non-PTSD group was better able to dissociate themselves from traumatic events as they were occurring (or shortly thereafter)? Whether such a process can effectively occur at all during horrible tragedies is highly controversial (e.g., Terr vs. Loftus). The phenomenon is more often studied as applied to the retrieval of traumatic or unwanted memories (Anderson & Levy, 2009), not during the encoding phase.

Tragically, there appears to be No End to LRA Killings and Abductions in central Africa, according to Human Rights Watch.
These ongoing atrocities should not be ignored. So watch the video Dear Obama: A Message from Victims of the LRA.

Further Reading on Forgetting:

Forgetting is Key to a Healthy Mind - Letting go of memories supports a sound state of mind, a sharp intellect--and superior recall
Living and Forgetting
I Forget...
I Forgot...
...and it's Memory Week at the Guardian.

References

Anderson MC, Levy BJ. (2009). Suppressing unwanted memories. Curr Dir Psychol Sci. 18:184-194.

... Read more »
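A side note on the "discrimination accuracy" measure plotted in Fig. 2: in old/new recognition tasks it is commonly computed as the hit rate minus the false-alarm rate (the two-high-threshold Pr measure). A minimal sketch with hypothetical counts; the paper's exact scoring may differ:

```python
# Illustrative sketch: discrimination accuracy as hit rate minus
# false-alarm rate (the two-high-threshold Pr measure). Counts are
# hypothetical, not data from Zwissler et al.

def discrimination_accuracy(hits, misses, false_alarms, correct_rejections):
    """Pr = P('old' response | old picture) - P('old' response | lure)."""
    hit_rate = hits / (hits + misses)
    false_alarm_rate = false_alarms / (false_alarms + correct_rejections)
    return hit_rate - false_alarm_rate

# 28 old pictures and 28 similar lures, as in the study's design:
pr = discrimination_accuracy(hits=21, misses=7,
                             false_alarms=7, correct_rejections=21)
print(f"Pr = {pr:.2f}")  # 0.50
```

A higher false-alarm rate to lures, as the PTSD group showed for to-be-remembered items, drives this measure down even when hits are unchanged.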

  • January 6, 2012
  • 06:34 AM

Subjects Wanted to Drink Bourbon and Watch Erotic Films

by The Neurocritic in The Neurocritic

Our fun New Year's Eve post reviewed the suspected brain mechanisms of an alcohol blackout, or an episode of amnesia after a bout of heavy drinking (Rose & Grant, 2010). Alcohol-induced alterations of hippocampal circuits are thought to disrupt memory encoding, which can lead to two different types of blackout: en bloc, a complete loss of memory for the affected time period; and fragmentary, where bits and pieces of memories remain. The en bloc blackout is more likely to occur when a large quantity of alcohol is ingested in a short period of time.

In 1970, less was known about the causes and mechanisms of alcohol blackouts. A study by Goodwin and colleagues set out to learn more:

Some psychiatrists believe blackouts to be a functional disturbance, related to guilt or anxiety. Others believe they reflect a toxic effect of alcohol on the brain. There are few data to support either concept. Blackouts usually occur erratically; comparable amounts of alcohol do not always produce memory loss, and it has not been possible to predict when an intoxicated person will suffer amnesia. During the amnesic interval, the person may function reasonably well and perform complicated acts, suggesting that the amnesia is retrograde. To confirm this, however, memory must be observed systematically during drinking periods followed by amnesia, which heretofore has not been done.

Human subjects protection programs were less stringent (or non-existent) 40 years ago, so it was no problem to enlist unemployed alcoholic day laborers to drink a pint of bourbon and undergo memory testing of an unusual sort.

The subjects were recruited from individuals seeking daily employment at the Casual Labor Division of the Missouri Employment Office in St. Louis. Previous experience with this source had shown that a sizable proportion of the job applicants are alcoholic.
Individuals were eligible for the study if they met two criteria: (1) willingness and ability to drink more than a pint of whiskey in a few hours, and (2) good physical health.

Ten men (mean age 41 yrs) were recruited as participants. Eight of them were alcoholics, and five experienced frequent blackouts. After fasting for 5 hrs, the experimental procedure was to drink 16-18 ounces of 86 proof bourbon on the rocks in 4 hrs. Cognitive testing began after the first hour of drinking to assess immediate, short-term, and remote memory as well as calculation abilities. In brief, the drunk participants looked at toys and watched porn and were asked to recall these items after a delay (Goodwin et al., 1970):

The tests were as follows. (a) Every 30 min the subject was shown a toy for 1 min, after which it was removed from sight. Two minutes later the subject was asked to recall the toy. If recalled correctly immediate memory was considered intact. Thirty minutes later he was asked again to recall the toy to measure short term memory...

(b) Every 30 min the subject was shown for 1 min a scene from an erotic movie. He was asked to describe and discuss the scene. If he could recall the scene 2 min later, this was considered a reflexion of intact immediate memory. Thirty minutes later he was again asked to recall the scene to measure short term memory.

(c) Remote memory was tested by asking questions about early upbringing (name of schools, teachers, and so on) and events that occurred during the previous 2 days.

(d) Ability to perform calculations was tested by asking the subject to do simple multiplication and subtraction tasks.

Other than wanting to create a highly desirable means of employment for the target population, why did the authors use erotic movies as stimuli?
(1) So that memory for arousing, emotional material could be compared to memory for the [presumably] more neutral toys; (2) Because very drunk men would be unwilling or unable to cooperate if more boring tests were used; and (3) Toys and porn were highly memorable to sober male subjects 24 hrs later.

The results indicated that all 10 subjects performed well when tested at the 2 min retention interval, but half of them showed impaired memory when tested 30 min and 24 hrs later. These same 5 participants were the ones with histories of blackouts. The authors speculated that the faster rise in blood alcohol levels (BAL) in the blackout subjects could be a reason for their amnesia. However, this explanation is implausible because the blackout group had poor memories at the earliest 30 min time point, when there was no difference in BAL between the groups.

So what have we learned? Retrieval of remote memories and simple calculation abilities are not impaired by heavy drinking, and it is possible to predict amnesia during a drinking bout:

on formal testing individuals who are later to experience amnesia will have a specific loss of short term memory correlated in time with the period of amnesia, while other types of memory are intact.

A final lesson is that it was much easier to publish in Nature in 1970. Group comparisons with n=5? Fine and dandy. Pesky statistics and error bars on figures? Who needs them? Those were the days my friend...

References

Goodwin, D., Othmer, E., Halikas, J., & Freemon, F. (1970). Loss of Short Term Memory as a Predictor of the Alcoholic "Blackout". Nature, 227 (5254), 201-202. DOI: 10.1038/227201a0

Rose, M. & Grant, J. (2010). Alcohol-Induced Blackout. Journal of Addiction Medicine, 4 (2), 61-73.
... Read more »

  • December 31, 2011
  • 11:21 PM

Alcohol Blackout

by The Neurocritic in The Neurocritic

This post is for all you New Year's Eve party goers who don't remember where you were or what you did. If that's the case, then you experienced an alcohol-induced blackout. Haven't you always wondered about the clinical manifestations and neurobiological mechanisms of alcohol-induced blackouts? Maybe you have, but you can't remember.

A definitive review of the phenomenon by Rose and Grant (2010) explains that there are two different types of blackout: en bloc, a complete loss of memory for the affected time period; and fragmentary, where bits and pieces of memories remain. The en bloc blackout is more likely to occur when a large quantity of alcohol is ingested within a short time period.

What causes an alcohol blackout? A good source of information on the topic is the NIAAA website: What Happened? Alcohol, Memory Blackouts, and the Brain:

Alcohol primarily interferes with the ability to form new long–term memories, leaving intact previously established long–term memories and the ability to keep new information active in memory for brief periods. ... Blackouts are much more common among social drinkers—including college drinkers—than was previously assumed, and have been found to encompass events ranging from conversations to intercourse. Mechanisms underlying alcohol–induced memory impairments include disruption of activity in the hippocampus, a brain region that plays a central role in the formation of new autobiographical memories.

Rose and Grant (2010) summarize the suspected hippocampal mechanisms as follows:

Blackouts are caused by breakdown in the transfer of short-term memory into long-term storage and subsequent retrieval primarily through dose-dependent disruption of hippocampal CA1 pyramidal cell activity.
The exact mechanism is believed to involve potentiation of gamma-aminobutyric acid-alpha [GABA-A]-mediated inhibition and interference with excitatory hippocampal N-methyl-d-aspartate [NMDA] receptor activation, resulting in decreased long-term potentiation [LTP].

In addition...

Another possible mechanism involves disrupted septohippocampal theta rhythm activity because of enhanced medial septal area gamma-aminobutyric acid [GABA]-ergic neurotransmission.

Women are more susceptible to alcohol blackouts than men (and recover more slowly) because of their generally less muscular body composition and gender differences in pharmacokinetics.

Cheers to knowing what's happening in your brain after downing a few too many Jell-O shots. If you can remember tomorrow...

Reference

Rose, M., & Grant, J. (2010). Alcohol-Induced Blackout. Journal of Addiction Medicine, 4 (2), 61-73. DOI: 10.1097/ADM.0b013e3181e1299d

... Read more »

Rose, M., & Grant, J. (2010) Alcohol-Induced Blackout. Journal of Addiction Medicine, 4(2), 61-73. DOI: 10.1097/ADM.0b013e3181e1299d  

  • December 24, 2011
  • 09:07 PM

Orthopedic Surgeons vs. Anesthesiologists

by The Neurocritic in The Neurocritic

from Subramanian et al. 2011 [Image: Clive Featherstone]

Every year, BMJ [British Medical Journal] has a special Christmas issue with spoof articles and silly studies. Favorites from the past include:

Sword Swallowing And Its Side Effects
Sex, aggression, and humour: responses to unicycling
Rage Against the Machine
Syncope and the Texting Sign
Why are the letters "z" and "x" so popular in drug names?

The clear winner this year is a revenge piece by a group of orthopaedic surgeons and trainees (Subramanian et al., 2011) that convincingly demonstrates they are stronger and smarter [?] than their colleagues the anesthesiologists (anaesthetists in the UK). The motivation for such a study is as follows:

A humorous anaesthetic colleague recently repeated the following popular saying while an operating table was being repaired with a mallet: “typical orthopaedic surgeon—as strong as an ox but half as bright.” Making fun of orthopaedic surgeons is a popular pastime in operating theatres throughout the country. This pursuit has recently spread to the internet; a humorous animation entitled “orthopedia vs anesthesia” had received more than half a million hits at the time of writing.1 Several comparisons of orthopaedic surgeons to primates have been published, and the medical literature contains suggestions that orthopaedic surgery requires brute force and ignorance.2 3 4

The stereotypical image of the strong but stupid orthopaedic surgeon has not been subject to scientific scrutiny. Previous studies have shown that the average hand size of orthopaedic surgeons is larger than that of general surgeons.2 3 However, a search of the worldwide scientific literature found no studies assessing the strength or intelligence of orthopaedic surgeons.
In the absence of a cohort of willing oxen as a control group, and given that the phrase is popular with anaesthetists, we designed this study to compare the mean grip strength of the dominant hand and the intelligence test score of orthopaedic surgeons and anaesthetists.

The participants in the study were 36 male orthopaedic surgeons (there were no female OS's available) and 40 male anaesthetists (the six women who volunteered were excluded). They were recruited from three hospitals over a 2 week period. Grip strength was tested with a hand dynamometer. The proxy for IQ was rather unacceptable, however: the Mensa Brain Test version 1.1.0 administered on an iPhone 4. The validity of the IQ measure was apparently irrelevant, and the results were actually laughable, befitting the spoof study format.

Fig 3 (Subramanian et al. 2011). Box plot of intelligence test score by specialty. Upper and lower whiskers represent 1.5 times and −1.5 times interquartile range; upper and lower hinges represent 25% and 75% quartiles; middle represents median or 50% quartile

The mean IQ scores were 98.38 for the anaesthetists vs. 105.19 for the orthopaedists, which squeaked in as a significant difference at p=0.0489 (although there was quite a bit of overlap between the groups). A few more things are notable about this result:

(1) Mean IQ for the entire population is 100, and it would be surprising if licensed professionals who are graduates of medical schools and residency programs were of entirely average intelligence. A 2002 paper {PDF} on male high school graduates in Wisconsin (class of 1957)1 found that doctors had the highest IQ (120) of all professions.

(2) One anaesthetist tested in the mentally retarded range: below 70, which is over two standard deviations below the average IQ of 100. This would make him a very unique savant anesthesiologist.
At any rate, if you threw out his score, the difference between groups would no longer be significant.

(3) Test results from two more anaesthetists suggested borderline intellectual functioning (between 70–84). A couple more doctors from each category were right on the edge of this borderline, which again would be highly improbable.

The authors did acknowledge some of these weaknesses:

The intelligence scores were lower than anticipated for IQ in the medical profession. This is likely to be a reflection of the way in which intelligence was tested, and the scores derived from the rather difficult Mensa brain test may not be directly comparable to IQ scores. We selected the abbreviated Mensa test carried out by touch screen for speed and convenience. Full formal IQ testing is more time consuming and cumbersome and would have affected doctors’ willingness to participate in this study.

Moving on to the other major finding, it was no surprise that the orthopaedic surgeons had greater grip strength than the anaesthetists (p=0.0274). But a quick peek at Fig 2 shows that one exceptionally strong OS helped drive this small difference.

But who am I to be a scrooge and spoil the fun of the orthopaedic surgeons, who think they finally have intellectual bragging rights over the anesthesiologists?

Conclusion

The stereotypical image of male orthopaedic surgeons as strong but stupid is unjustified in comparison with their male anaesthetist counterparts. The comedic repertoire of the average anaesthetist needs to be revised in the light of these data. However, we would recommend caution in making fun of orthopaedic surgeons, as unwary anaesthetists may find themselves on the receiving end of a sharp and quick witted retort from their intellectually sharper friends or may be greeted with a crushing handshake at their next encounter.

Footnote

1 Obviously, that study (Hauser, 2002) was based on very old data from one US state.
Results may differ in the contemporary UK, as described in this authoritative report in the Daily Mail: Why doctors are not as clever as they used to be.Reference... Read more »
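The claim that a single extreme score can tip a borderline p-value is easy to check from summary statistics alone. Below is a minimal Python sketch using scipy's `ttest_ind_from_stats`: the group means and sample sizes are from the paper, but the standard deviation (14.8) and the outlier's exact score (68) are my assumptions, chosen only to land near the reported p=0.0489.

```python
from scipy import stats

# Reported: anaesthetists mean 98.38 (n=40), orthopaedists mean 105.19 (n=36).
# Assumed: pooled SD of 14.8 and a single low anaesthetist score of 68.
anaes_mean, anaes_sd, anaes_n = 98.38, 14.8, 40
ortho_mean, ortho_sd, ortho_n = 105.19, 14.8, 36

t1, p1 = stats.ttest_ind_from_stats(anaes_mean, anaes_sd, anaes_n,
                                    ortho_mean, ortho_sd, ortho_n)

# Drop the hypothetical score of 68, then recover the remaining group's
# mean and SD from the implied sum and sum of squares.
total = anaes_mean * anaes_n
sum_sq = (anaes_n - 1) * anaes_sd ** 2 + anaes_n * anaes_mean ** 2
new_n = anaes_n - 1
new_mean = (total - 68) / new_n
new_sd = ((sum_sq - 68 ** 2 - new_n * new_mean ** 2) / (new_n - 1)) ** 0.5

t2, p2 = stats.ttest_ind_from_stats(new_mean, new_sd, new_n,
                                    ortho_mean, ortho_sd, ortho_n)

print(f"p with the low scorer:    {p1:.4f}")  # just under 0.05
print(f"p without the low scorer: {p2:.4f}")  # back over 0.05
```

Removing one low scorer from a group of 40 shifts the group mean by less than a point, but with a p-value sitting at 0.0489 that is enough to push the result back over the 0.05 threshold.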

  • December 19, 2011
  • 12:56 AM

The Disconnection of Psychopaths

by The Neurocritic in The Neurocritic

Functional connectivity between the right amygdala and anterior vmPFC is reduced in psychopaths. From Fig. 2 of Motzkin et al. (2011).

The last post discussed the case of a 14 yr old boy with congenital brain abnormalities and severe antisocial behavior said to be "consistent with" psychopathy. This label is quite stigmatizing and the diagnosis is a controversial one (Skeem et al., 2011),1 particularly in children. What is psychopathy, exactly? According to Ermer and colleagues (2011),

Psychopathy is a serious personality disorder marked by affective and interpersonal deficiencies, as well as behavioral problems and antisocial tendencies (Cleckley, 1976). Affective and interpersonal traits (termed Factor 1) include callousness and a profound inability to experience remorse, guilt, and empathy; antisocial and behavioral problems (termed Factor 2) include impulsivity, stimulation seeking, and irresponsibility. These symptoms tend to manifest at an early age, continue throughout adulthood, and pervade numerous aspects of psychopaths’ daily functioning.

As for the brain regions implicated in psychopathy, dysfunction in the amygdala and ventromedial prefrontal cortex (vmPFC) has been suspected for quite some time (Abbott, 2001; Blair, 2007; Koenigs et al., 2011). From this perspective, a recent study on the structural and functional connectivity of these two regions (Motzkin et al., 2011) isn't entirely groundbreaking. However, the logistics of conducting those experiments were anything but simple: the participants were male inmates of a medium security prison in Wisconsin.

Kent Kiehl outside the mobile scanner he has used to look at the brains of inmates at a New Mexico prison. Credit: Nature News.

Dr. Kiehl's work with criminal psychopaths has been featured in the New Yorker.
In the present study, diffusion tensor imaging (DTI) and resting state fMRI were used to examine the structural and functional connectivity of the vmPFC (Motzkin et al., 2011), which has been associated with decision making and the regulation of emotional behavior. The Psychopathy Checklist-Revised (PCL-R) (Hare, 2003)2 was administered to the participants. Those with scores >30 were classified as psychopaths, while the non-psychopaths scored <20.

In the DTI study of structural connectivity, 13 non-psychopaths were compared to 14 psychopaths, 7 of whom were low-anxious or primary psychopaths and 7 of whom were high-anxious/secondary psychopaths.3 The uncinate fasciculus (UF) is the main pathway connecting the vmPFC and the anterior temporal lobe (including the amygdala). Fractional anisotropy (FA), a measure of white matter structural integrity, was compared between the groups for regions of interest (after scaling for overall whole brain FA, which was reduced in the psychopaths). For comparison, FA in other frontal-temporal tracts was examined. Results indicated that FA was indeed reduced in the right UF, which replicates an earlier study (Craig et al., 2009).

Fig. 1 (modified from Motzkin et al., 2011). DTI results: reduced white matter integrity is specific to the right UF in psychopaths. b, The UF ROI (red) superimposed on an entire UF tract, as computed with tractography. f, Bar plots of mean scaled FA values in the UF. Psychopaths exhibited significantly lower scaled FA values only in right UF. Error bars indicate SEM. *p < 0.05.

Resting state fMRI, which measures spontaneous, low-frequency fluctuations in BOLD signal, was used to examine functional connectivity. In this experiment, 20 prisoners who were not psychopaths were compared to 20 prisoners who were considered psychopaths. Based on the DTI study, the right amygdala was used as a "seed region" to examine functional connectivity with vmPFC.
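For readers unfamiliar with the measure, fractional anisotropy has a standard closed-form definition: it is computed from the three eigenvalues of the diffusion tensor at each voxel, and runs from 0 (diffusion equally likely in all directions) to 1 (diffusion along a single axis). A minimal sketch with made-up eigenvalues (not data from the study):

```python
import math

def fractional_anisotropy(l1, l2, l3):
    """FA from the three diffusion-tensor eigenvalues.

    FA = sqrt(3/2) * sqrt(sum((l_i - mean)^2)) / sqrt(sum(l_i^2)),
    ranging from 0 (isotropic) to 1 (diffusion along one axis only).
    """
    mean = (l1 + l2 + l3) / 3.0
    num = math.sqrt((l1 - mean) ** 2 + (l2 - mean) ** 2 + (l3 - mean) ** 2)
    den = math.sqrt(l1 ** 2 + l2 ** 2 + l3 ** 2)
    return math.sqrt(1.5) * num / den

# Perfectly isotropic diffusion (e.g., CSF-like): FA = 0
print(fractional_anisotropy(1.0, 1.0, 1.0))
# Strongly directional diffusion (a coherent tract like the UF): FA near 0.8
print(round(fractional_anisotropy(1.7, 0.3, 0.3), 2))
```

Lower FA in a tract like the right UF is taken to mean less coherent or less intact white matter, which is why the group comparison above is framed as reduced structural integrity.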
In addition, the medial parietal cortex (precuneus and posterior cingulate) was used as a seed region, because this area is also reciprocally interconnected with vmPFC. If the amygdala-vmPFC and medial parietal-vmPFC circuits are not functioning properly, correlated activity between the regions would be expected to be lower. And indeed, that's what was observed in the psychopaths: resting vmPFC BOLD signal was less correlated with amygdala and medial parietal activity.

Fig. 3 (modified from Motzkin et al., 2011). Functional connectivity between medial parietal cortex and vmPFC is reduced in psychopaths. [...] The group difference map indicates two separate clusters within vmPFC where non-psychopaths have significantly greater connectivity with the precuneus/PCC seed than psychopaths (vmPFC and rACC). b, Group differences in connectivity were assessed in the vmPF... Read more »
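As a methodological aside, seed-based functional connectivity of this kind reduces, for each target region, to correlating the seed's BOLD timecourse with the target's. A toy sketch with synthetic signals (all values invented for illustration; real analyses add preprocessing, filtering, and nuisance regression):

```python
import numpy as np

rng = np.random.default_rng(0)
n_timepoints = 200  # length of the resting-state scan, in volumes

# Synthetic "resting state" timecourses: two regions sharing a slow
# fluctuation (functionally connected) and one unrelated region.
shared = rng.standard_normal(n_timepoints)
seed = shared + 0.5 * rng.standard_normal(n_timepoints)     # e.g., right amygdala seed
coupled = shared + 0.5 * rng.standard_normal(n_timepoints)  # a connected target region
uncoupled = rng.standard_normal(n_timepoints)               # an unrelated region

def connectivity(a, b):
    """Functional connectivity as the Pearson correlation of two timecourses."""
    return np.corrcoef(a, b)[0, 1]

print(connectivity(seed, coupled))    # strong positive correlation
print(connectivity(seed, uncoupled))  # near zero
```

The group comparison in the study then amounts to asking whether these seed-target correlations are systematically lower in one group than the other.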

Sundt Gullhaugen A, & Aage Nøttestad J. (2011) Looking for the hannibal behind the cannibal: current status of case research. International journal of offender therapy and comparative criminology, 55(3), 350-69. PMID: 20413645  

Motzkin JC, Newman JP, Kiehl KA, & Koenigs M. (2011) Reduced prefrontal connectivity in psychopathy. The Journal of neuroscience : the official journal of the Society for Neuroscience, 31(48), 17348-57. PMID: 22131397  

  • December 15, 2011
  • 12:51 AM

Born This Way?

by The Neurocritic in The Neurocritic

A group of investigators from the University of Iowa have published a case report about a 14 year old boy with severe antisocial behavior (Boes et al., 2011):

He is aggressive, manipulative, and callous; features consistent with psychopathy. Other problems include: egocentricity, impulsivity, hyperactivity, lack of empathy, lack of respect for authority, impaired moral judgment, an inability to plan ahead, and poor frustration tolerance.

MRI findings revealed a small congenital malformation in his left ventromedial prefrontal cortex (vmPFC), which has been associated with decision making and the regulation of emotional behavior (Grabenhorst & Rolls, 2011; Mitchell, 2011).

Figure 1 (Boes et al., 2011). MRI Images. 1A. This is an oblique coronal T2 image at the level immediately anterior to the horn of the lateral ventricles. Note the hyperintense white matter just deep to the gyrus rectus (indicated by arrow) with a linear extension tapering as it courses toward the anterior horn of the ventricle. Also note the cortical thickening of the left gyrus rectus relative to the right gyrus rectus. 1B. This coronal T1 MPRAGE image shows thickening of the left ventromedial prefrontal cortex [PFC] and blurring of the gray-white interface in this same region. 1C. This is a surface rendering of B.W.'s brain viewing the medial left hemisphere surface with thickened cortex highlighted, which approximates the lesion site.

The boy (patient B.W.) had MRI scans at the ages of 4, 11, and 13. The three main neuroanatomical findings remained stable across the three scans. Although the abnormalities appear to be relatively minor, the authors described them as consistent with a focal cortical dysplasia affecting portions of Brodmann areas 11, 12, 25, and 32.

Did these anatomical anomalies cause B.W.'s aberrant behavior? He reached normal developmental milestones until the age of 4, when he started having seizures.
He was prescribed divalproate, an anticonvulsant (and mood stabilizer) which temporarily controlled his seizures. But they returned between the ages of 6-11 yrs, when he started having complex partial seizures every few months. Complex partial seizures are typically associated with an alteration of consciousness and foci in the medial temporal lobes, although they can also originate in the frontal lobes (Williamson et al., 1985). More details of B.W.'s behavior from the case history:

At age six B.W.'s parents reported the onset of defiance at home and at school, including: stealing, lying, aggression, rage, rude language, and disobedience. His parents referred to this as his 'contraband' period because he would consistently bring prohibited items to school (e.g. a pocketknife). He also stole cookies and would sell them to peers. The parents were very concerned about this behavior because it did not seem characteristic of B.W.'s previous temperament. Moreover, neither parent nor any sibling of B.W. had similar behavioral problems. He was seen by a child psychologist and diagnosed with oppositional defiant disorder and started counseling, which was discontinued after a few visits.

During ages seven to nine B.W.'s parents describe a 'cause and effect problem' in which he would behave badly and be punished and the following day would engage in the same behavior that led to the punishment. Along with his lack of response toward punishment, B.W. was impulsive and showed a lack of respect toward authority, including teachers and parents. In an effort to provide greater structure and discipline than the school could provide the parents decided to begin home-schooling B.W. and his siblings when he was nine years old... Despite behavioral problems and lack of self-motivation he was noted to be intelligent and academically capable. The following year a child psychiatrist diagnosed B.W.
with attention deficit hyperactivity disorder and bipolar disorder, for which he was prescribed carbamazepine, topiramate,1 and dexmethylphenidate [the d-enantiomer of Ritalin]. Counseling was again attempted briefly without effect.

At age 11 B.W. presented to the emergency room of a large tertiary care center with his mother for suicidal ideation. While at a nearby shopping mall he expressed feelings of hopelessness, unworthiness, and wanting “to kill myself… I would cut or burn myself.” The talk of suicide had been ongoing for two months and had been accompanied by suicidal gestures such as jumping from a second story deck onto a trampoline and a superficial laceration to the left hand because “I wanted to kill myself.” Along with the suicidal gestures the parents were alarmed about escalating aggression, destructive behavior, wandering off, and hypersexual behavior that included masturbation, accessing porn sites on the web, and asking younger peers to disrobe in a domineering manner (despite being pre-pubescent at the time). During the admission interview he reported that he had been hearing voices at night from God and the devil motivating him to do good and bad things, respectively.

When he was hospitalized, B.W. admitted that he was fabricating the psychotic and suicidal symptoms, along with his self-reported levels of high anxiety and depression. Hospital staff found him manipulative and easily angered. He was given the diagnoses of oppositional defiant disorder, ADHD, and mood disorder not otherwise specified. He was no longer considered bipolar. After discharge his antisocial behavior escalated.
He started fires, assaulted the principal, resisted arrest, threatened his mother with a knife, and hit his father over the head with a wrench "in cold blood, without any emotion."

Neuropsychological testing revealed his IQ and cognitive functioning to be in the average range, although he had problems with planning and with the Iowa Gambling Task, where he could not learn which decks were safe (vs. risky). He was also given a moral judgment task:

On his first attempt, B.W. skipped several questions and scribbled over the entire second sheet and drew a goblin. He completed the task at a later date.

He eventually tested at a “relatively immature, preconventional, stage of moral development, in which moral dilemmas were approached primarily from the perspective of avoiding negative consequences for one's self.”

On the Antisocial Process Screening Device filled out by his parents, his scores were at the 99-100th percentile for callous-unemotional, narcissism, impulsivity, and total score.

Recently (summer 2011), B.W. underwent intracranial mapping of the left ventromedial frontal and anterior temporal regions for monitoring of seizure foci, and subsequent surgical resection of left prefrontal and left temporal regions (including the amygdala). Pathological examination of this tissue revealed dysplastic neurons. Post-surgery, B.W. is on lamotrigine and has remained seizure-free.

The authors concluded that B.W.'s bad behavior was caused by the vmPFC abnormality for the following reasons:

1) The behavioral and neuropsychological profile described in the results section is strikingly consistent with prior cases of focal vmPFC lesions. … The severity of behavioral problems is more extreme than previously reported following vmPFC damage but this may represent an extension of prior reports of more severe outcomes following early-onset lesions...
2) There is a complete absence of externalizing and antisocial behavioral problems in B.W.’s family, suggesting a lower likelihood of a genetic predisposition. … 3) B.W. has exceptionally few social risk factors. He has intelligent, extraordinarily caring ... Read more »

Boes, A., Hornaday Grafft, A., Joshi, C., Chuang, N., Nopoulos, P., & Anderson, S. (2011) Behavioral effects of congenital ventromedial prefrontal cortex malformation. BMC Neurology, 11(1), 151. DOI: 10.1186/1471-2377-11-151  

  • December 2, 2011
  • 12:31 AM

Meth Really Isn't That Bad for You? (Part 2)

by The Neurocritic in The Neurocritic

Methamphetamine Use and Risk for HIV/AIDS... Methamphetamine is very addictive, it can be injected, and it can increase sexual arousal while reducing inhibitions. Because of these attributes, public health officials are concerned that users may be putting themselves at increased risk of acquiring or transmitting HIV infection―a valid concern, considering that methamphetamine use has been linked with increased numbers of HIV infections in some populations [1]. 1

Meth addiction can cause alterations in brain function and cognitive performance, according to hundreds of published studies (reviewed in Barr et al., 2006; Baicy & London, 2007). However, a new paper concludes that prior studies have exaggerated the harmful effects of methamphetamine on brain structure and function, cognition, mental health, and dental health (Hart et al., 2011).

So who's right? The previous post (Meth Really Isn't That Bad for You... Or is it?) covered the acute effects of meth on cognitive performance. This post will focus on the cognitive consequences of chronic meth abuse. The bulk of the literature suggests that long-term use "leads to neurocognitive deficits in a dose-dependent manner, with deficits relating to both the frequency and severity of METH dependence" (McCann et al., 2008). In that study, chronic meth users performed worse than controls on some tests of memory and attention, with intact performance on other tests. Another paper found similar differences between controls and former meth users (abstinent anywhere from 3 months to over 10 yrs) on some tests but not others (Johanson et al., 2006). Those authors were cautious in interpreting their findings:

In the present study, MA users showed deficits in the DSST [Digit Symbol Substitution Test] of the WAIS-III relative to the controls. However, neither the mean standard score (9.63) nor individual scores were greater than one SD (3) below the age-controlled norm (10.0).
This finding suggests that although MA may produce long-term, possibly irreversible deficits in speed and accuracy of information manipulation, these deficits are relatively small and for some may not reach clinical significance.

...and...

In the present investigation, MA users showed significantly poorer performance on several of the subtests of the CVLT [California Verbal Learning Test] including both cued and noncued short and long delayed recall. However, despite this statistically significant difference compared to controls, their performance was not outside the normal range for their age group. Thus, the functional significance of these differences in memory function is questionable. Nevertheless, it seems likely that these deficits are permanent because they were not correlated with duration of abstinence. It is obvious that the possibility remains that these deficits predated drug use but the present study cannot address this possibility.

These statements were much appreciated in the review article, which repeatedly downplayed observations of poorer performance as having any functional significance whatsoever. However, I will draw your attention to Johanson et al.'s inclusionary criteria and their table below:

To qualify for the study, MA participants had to report at least one 3-month period when they experienced MA-induced toxic symptoms (agitation, sleeplessness, paranoia, or tremors).

Table 5 (Johanson et al., 2006). Other self-reported symptoms

Clearly, every symptom listed above is of functional significance. Meth use was detrimental to many areas of their lives when they were using. However, those participants were given the cognitive tests a mean of 3.4 yrs after they stopped using. We'll return to the issue of recovery a bit later.

Returning now to the comparison of acute low dose meth vs. chronic abuse, in the abstract Hart et al.
(2011) stated:

In general, the data on acute effects show that methamphetamine improves cognitive performance in selected domains, that is, visuospatial perception, attention, and inhibition. Regarding long-term effects on cognitive performance and brain-imaging measures, statistically significant differences between methamphetamine users and control participants have been observed on a minority of measures.

Let’s take a closer look. Of the 16 studies on the acute effects of meth shown in Table 1 of Hart et al., five of them (by the authors) tested inhibitory control. Meth had no effect on inhibition performance in any of those studies.

Am I just being persnickety? Well, if the authors are going to say that long-term meth abuse results in “poorer performance on a minority of cognitive tasks” [which appears to be true in many cases], they should be more precise when describing their own data. Perhaps something like this: "Acute meth sometimes improves the performance of infrequent users on a minority of cognitive tasks, but these results are inconsistent (see Table 1)."

Again, what about the specific cognitive impairments observed in chronic abusers? There was a didactic paragraph on the use of the word “impairment”:

The literature on methamphetamine use is focused on ‘impairment,’ and seems to conflate two different meanings of this term. One meaning is captured by the canonical situation in which one group of participants performs statistically significantly less well on a task than does a control group. Although there is a statistically significant difference, its clinical relevance, or everyday import, is rarely specified. A second meaning of ‘impairment’ is that of a substantial loss of function, a dysfunction, in which performance may even fall outside of normal range and bears clinical significance. (The two meanings probably represent end points on a continuum of meanings of ‘impairment’ that appears in the general literature on group differences.)
The problem in the literature on methamphetamine use is that in many studies the results support only the first or difference interpretation, but the results are discussed in terms of the ‘dysfunctional’ interpretation. In essence, the English word ‘impairment’ (or ‘deficit’) is ambiguous, and researchers in this field often switch meanings in moving from actual findings to discussion of the implications of these findings.

One reason for avoiding use of the word “impairment” is to reduce the stigma attached to meth abuse, which is an important goal. To that end, it’s puzzling that the authors failed to cite some of the literature on recovery.

GREEN RED BLUE

One such paper (Salo et al., 2009) compared 38 recently-abstinent meth abusers (3 weeks to 6 months), 27 longer-abstinent meth abusers (at least 1 yr), and 33 controls on the Stro... Read more »

Salo, R., Nordahl, T., Galloway, G., Moore, C., Waters, C., & Leamon, M. (2009) Drug abstinence and cognitive control in methamphetamine-dependent individuals. Journal of Substance Abuse Treatment, 37(3), 292-297. DOI: 10.1016/j.jsat.2009.03.004  
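The statistical-vs-clinical "impairment" distinction above is essentially an argument about effect sizes rather than p-values alone. As a rough illustration (not drawn from any of the cited studies; the recall scores below are invented), a standardized mean difference such as Cohen's d expresses how far apart two groups are relative to their pooled variability:

```python
import math

def cohens_d(group1, group2):
    """Standardized mean difference between two groups, using the pooled SD."""
    n1, n2 = len(group1), len(group2)
    m1, m2 = sum(group1) / n1, sum(group2) / n2
    # unbiased sample variances
    v1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# Hypothetical delayed-recall scores (NOT real CVLT data)
controls = [52, 55, 50, 58, 51, 54, 53]
users = [49, 51, 47, 55, 48, 52, 50]
d = cohens_d(controls, users)
```

A group difference can be statistically significant yet correspond to a modest d, with both groups still inside the normative range, which is exactly the ambiguity the review complains about.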

  • November 29, 2011
  • 06:28 AM

Meth Really Isn't That Bad for You... Or is it?

by The Neurocritic in The Neurocritic

Image from All Around The House™

We all know that meth is a highly addictive, harmful stimulant drug that rots your teeth and makes you paranoid, stupid, unemployed, and homeless -- thereby ruining your life. So just say NO! to meth. Right, kids?

Methamphetamine (meth) and other stimulants are best known for their effects on the dopamine system, and hence for their propensity to be reinforcing and addictive. But meth actually increases the release and blocks the reuptake of all three monoamine neurotransmitters (norepinephrine and serotonin as well as dopamine). Meth addiction can cause alterations in brain function and cognitive performance, according to hundreds of published studies (reviewed in Barr et al., 2006; Baicy & London, 2007). The NIDA website lists a multitude of adverse effects from chronic heavy use:

Long-term methamphetamine abuse has many negative health consequences, including extreme weight loss, severe dental problems (“meth mouth”), anxiety, confusion, insomnia, mood disturbances, and violent behavior. Chronic methamphetamine abusers can also display a number of psychotic features, including paranoia, visual and auditory hallucinations, and delusions...

However, a new review article by Hart et al. (2011) concludes that prior studies have exaggerated the harmful effects of methamphetamine on brain structure and function, cognition, mental health, and dental health. In my view, one problem with this endeavor arises in the very first sentence of the abstract:

The prevailing view is that recreational methamphetamine use causes a broad range of severe cognitive deficits, despite the fact that concerns have been raised about interpretations drawn from the published literature.
This article addresses an important gap in our knowledge by providing a critical review of findings from recent research investigating the impact of recreational methamphetamine use on human cognition.

Many people can use meth recreationally, in modest doses, without becoming dependent. In fact, the review begins by noting the performance enhancing effects of meth in high-functioning, healthy adults who are occasional users. These laboratory studies are conducted in a very controlled environment, using oral administration of pharmaceutical grade methamphetamine. No one disputes that acutely administered meth can have beneficial effects on cognitive performance (Barr et al., 2006):

Numerous studies have confirmed that MA abuse is associated with cognitive impairment. Unlike the acute effects of a single low dose of MA, which can improve cognitive processing speed, attention, concentration and psychomotor performance, long-term exposure to MA may result in profound neuropsychological deficits (see Nordahl et al.).

But how does acute meth affect the performance of meth abusers? Here, the authors cite their own work on the intranasal administration of 3 doses + placebo to 11 meth abusers (Hart et al., 2008). The same computerized battery of 5 cognitive tests was given to the participants during each session. The results in their entirety:

Figure 4 shows how methamphetamine altered performance over time on selected measures.1 As can be seen, methamphetamine improved performance on both of the selected tasks. On the DAT [divided attention task, for vigilance], all active methamphetamine doses decreased the mean hit latency and increased the maximum tracking speed (P<0.05). On the DSST [digit-symbol substitution task, for visuospatial processing], only the two intermediate doses (12 and 25 mg) significantly improved performance. Relative to placebo, both doses increased the total number of trial attempts and correct responses (P<0.03).
No other significant performance effects were noted.

There is no explanation of why these two tasks were "selected" instead of the other three. Nor is there any indication of how this performance compares to "normative data" or to participants who are not meth abusers. This is a bit ironic, because the most annoying critique within the review is the repeated failure to accept the performance of control subjects as valid. Sure, acute meth did speed up performance on "selected" measures of "selected" tasks, but was this generally better or worse than what's observed in those without a history of long-term meth abuse?

When evaluating whether meth really isn't that bad for you, my focus is on the chronic effects of meth in long-term abusers of the drug. I'll return to this critical issue in the next post.

Footnote

1 An intriguing aspect of the data is that a massive performance drop was seen from time 0 to time 15 min in the placebo condition. One could speculate that the participants knew by then that they weren't on meth. The "Good Drug Effects" and "Stimulated" self-report ratings peaked at 15 min post-snort, so there's a disappointment-related decrement on placebo.

Figure 4. Selected performance effects as a function of methamphetamine dose and time. Error bars represent one SEM. Overlapping error bars were omitted for clarity.

References

Baicy K, London ED. (2007). Corticolimbic dysregulation and chronic methamphetamine abuse. Addiction 102 Suppl 1:5-15.

Barr AM, Panenka WJ, MacEwan GW, Thornton AE, Lang DJ, Honer WG, Lecomte T. (2006). The need for speed: an update on methamphetamine addiction. J Psychiatry Neurosci. 31:301-13.

... Read more »

Hart, C., Gunderson, E., Perez, A., Kirkpatrick, M., Thurmond, A., Comer, S., & Foltin, R. (2007) Acute Physiological and Behavioral Effects of Intranasal Methamphetamine in Humans. Neuropsychopharmacology, 33(8), 1847-1855. DOI: 10.1038/sj.npp.1301578  

  • November 14, 2011
  • 03:08 PM

The Return of Physiognomy

by The Neurocritic in The Neurocritic

Physiognomy "is the assessment of a person's character or personality from their outer appearance, especially the face." Although one might think of physiognomy as an outdated pseudoscience, along with its brethren craniometry and phrenology, facial phenotyping has undergone a resurgence of interest. Most recently, a study by Wong et al. (2011) looked at facial width and financial success in male CEOs:

Can head shape determine chances of business success? Research suggests that the shape of a chief executive's head can show whether he will be successful

But why even ask such a question? In general, the authors noted that certain psychological traits (e.g., extraversion) are associated with leadership ability, so they wondered whether an objective physical trait could predict leadership success. More specifically, they examined whether the facial width-to-height ratio (WHR) of 55 male CEOs was related to the financial performance of their companies. There's actually a sizable literature on facial WHR and aggressiveness in men:

Researchers have theorized that this relationship exists because higher facial WHRs make men seem more physically imposing, which minimizes the chance of retribution for their aggressive actions (Stirrat & Perrett, 2010).

In addition, facial WHR is a sexually dimorphic trait thought to be influenced by the effects of testosterone during adolescence. It can be objectively measured from photographs, which in this case were obtained from internet sources. The Fortune 500 firms were selected based on extensive media coverage and availability of online photos.

The 55 firms in our sample represented a range of industries, including computer manufacturing, transportation, and retail; on average, the firms had generated $38 billion in sales and had 119,684 full-time employees. The organizations in the sample included General Electric, Hewlett-Packard, and NIKE, Inc.

Results indicated that high facial WHR did indeed predict financial performance.
Is this because of a more aggressive leadership style? Other studies have found a relationship between facial WHR and physical aggression (Carré & McCormick, 2008). Does this mean that successful CEOs are more likely to win bar fights (adjusted for age)? Or to spend a greater amount of time in the penalty box, so to speak?

Canadian researchers Carré and McCormick (2008) actually did find a correlation between facial WHR in hockey players and time spent in the penalty box, which was used as a proxy for physical aggressiveness. So should the most violent hockey players be the leaders of Fortune 500 companies? Perhaps, if they're companies with "cognitively simple" leadership teams,1 because the facial-financial link was stronger for CEOs of such firms.

Or not. Wong et al. (2011) conclude:

In sum, our study has advanced leadership research by showing that objective facial metrics of male leaders, as well as the broader context in which these leaders make decisions, are closely related to organizational performance. Although men with high facial WHRs may be aggressive and untrustworthy in interpersonal interactions (Carré & McCormick, 2008; Stirrat & Perrett, 2010), our research suggests that, at a societal level, organizational success may compensate for individual transgressions...

What Luscious Lips You Have

The above studies found significant physiognomic patterns in men, but these results did not hold for women. In contrast, a recent study (Brody & Costa, 2011)2 claimed that a female facial feature, prominence of the upper lip tubercule, correlated with...

...the ability to achieve vaginal orgasm!

Why would you ever propose such a thing?
The infamous Stuart Brody has an agenda, and it's that unprotected penile-vaginal sex is the only mature and worthwhile form of sex.

A clinical observation (by the present senior author in discussion with colleagues) of an association between a novel visible marker (of likely prenatal origin) and enhanced likelihood of vaginal orgasm among coitally experienced women led to the hypothesis empirically tested in the present study. The hypothesis is that a more prominent tubercle of the upper lip is associated with vaginal orgasm (measured both as ever having had a vaginal orgasm, as well as vaginal orgasm consistency in the past month).

Now Professor Brody, what sort of "clinical observation" led you to this fanciful idea? Oh I don't know, perhaps the same one that led you to propose that you can tell by the way she walks (see Scicurious, Dr. Isis, and Jezebel). For extensive critiques of the methodology used in these studies (e.g., definitions of various sexual activities, bias, self-selection, etc.), I recommend reading Dr Petra.

Back to the lip tubercle... Why the lip tubercle? Why not 2D:4D digit ratio, which is influenced by prenatal androgens? Brody and Costa:

There is substantial variability in the degree to which the tubercle of the lip develops. Other than its mention in the basic anatomic literature and surgical literature (especially with regard to reconstruction of labial malformation or as part of a package of aesthetic modifications to the lips), we do not know of scientific literature on aspects of the tubercle of the lip that might directly impinge upon sexual function.

OK then, the idea was pulled out of a "clinical observation" hat.
Were there any other facial characteristics or bodily features that were examined but not found to correlate with penile-vaginal intercourse (PVI)?

Then we have the offensive speculation that it is possible that a flatter or absent tubercle might have something in common with the at times subtle lip abnormalities associated with subtle neuropsychologic abnormalities in marginal cases of fetal alcohol syndrome...

Ladies! If you have a flat or absent tubercle, you're neuropsychologically and sexually abnormal! And how was the tubercle defined? By the participants themselves, who looked in a mirror and interpreted the verbal definitions3 as they saw fit [91 of the 405 women who completed the online survey were excluded because they didn't have a mirror handy].

... Read more »
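For reference, the facial width-to-height ratio in studies like Wong et al. (2011) is typically computed from two photograph measurements: bizygomatic width (the distance between the left and right cheekbones) and upper-face height (the distance from the upper lip to the brow). A minimal sketch with hypothetical pixel coordinates (the landmark parameter names are mine, not the paper's):

```python
def facial_whr(left_zygion_x, right_zygion_x, upper_lip_y, brow_y):
    """Facial width-to-height ratio: bizygomatic width divided by
    upper-face height. Inputs are pixel coordinates measured from a
    front-facing photograph."""
    width = abs(right_zygion_x - left_zygion_x)
    height = abs(brow_y - upper_lip_y)
    return width / height

# Hypothetical landmark coordinates from a single photo:
# width = 240 - 100 = 140 px, height = 310 - 235 = 75 px
ratio = facial_whr(left_zygion_x=100, right_zygion_x=240,
                   upper_lip_y=310, brow_y=235)
```

The appeal of the measure is exactly this simplicity: two distances from a photograph, no self-report required, which is also why it lends itself so readily to physiognomic overreach.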

  • October 31, 2011
  • 03:45 AM

Buried Alive!

by The Neurocritic in The Neurocritic

The pathological fear of being buried alive is called taphophobia1 [from the Greek taphos, or grave]. Being buried alive seems like a fate worse than death, the stuff of nightmares and horror movies and Edgar Allan Poe short stories. What could be pathological about such a fear? When taken to extremes, it can become a morbid, all-consuming obsession. In 1881, psychiatrist Enrico Morselli wrote about "two hitherto undescribed forms of Insanity" (English translation, 2001):

As the result of some observations I have made in recent years, I propose to add two new and previously undescribed varieties to the various forms of insanity with fixed ideas, whose underlying phenomenology is essentially phobic. The two new terms I would like to put forward, following the nomenclature currently accepted by leading clinicians, are dysmorphophobia and taphephobia.

The first condition consists of the sudden appearance and fixation in the consciousness of the idea of one’s own deformity; the individual fears that he has become deformed (dysmorphos) or might become deformed, and experiences at this thought a feeling of an inexpressible ansieta (anxiety). The second condition, taphephobia, consists of the sick person’s being plagued, at his approach to the time of his own death, by a fear of the possibility of being buried alive (taphe, grave), this fear becoming the source of a terribly distressing anguish.

It is not necessary for me to give a very detailed description of these two new forms of rudimentary paranoia I have discovered and named, since in so doing I would only be repeating descriptions that have long been available among the many and varied forms of paranoia in books and the most important journals of psychiatry; instead, I shall limit myself to making some general comments on the conditions.
The ideas of being ugly and of being buried whilst in a state of apparent death are not, in themselves, morbid; in fact, they occur to many people in perfect mental health, awakening however only the emotions normally felt when these two possibilities are contemplated. But, when one of these ideas occupies someone’s attention repeatedly on the same day, and aggressively and persistently returns to monopolize his attention, refusing to remit by any conscious effort; and when in particular the emotion accompanying it becomes one of fear, distress, anxiety and anguish, compelling the individual to modify his behaviour and to act in a pre-determined and fixed way, then the psychological phenomena have gone beyond the bounds of normal, and may validly be considered to have entered the realm of psychopathology.

Dysmorphophobia has come to be known as body dysmorphic disorder, a preoccupation with perceived defects in one's appearance (Buhlmann & Winter, 2011).

Although taphophobia seems irrational now with modern definitions of brain death,2 it was a more prevalent (and realistic) fear in the 19th century. "Safety coffins" with air tubes, bells, flags, and burning lamps were a booming business. However, these contraptions failed to assuage an inventor with severe taphophobia (Dossey, 2007):

One of the most popular safety devices in Victorian England was the Bateson Revival Device, invented by George Bateson, who made a fortune in sales. The gadget came to be known as Bateson’s Belfry. It consisted of an iron bell mounted on the coffin lid just above the deceased’s head, with a cord connected to the hand “such that the least tremor shall directly sound the alarm.” Ironically, his invention did nothing to relieve his own all-consuming fear of premature burial. In 1886, driven mad by his dread, he committed suicide by dousing himself with linseed oil and setting himself on fire.

Would you rather burn to death or suffocate in a coffin? Excruciating physical pain vs.
sheer panic,3 bloodied limbs, and mental anguish? Not a pleasant choice.

Footnotes

1 Also spelled taphephobia.

2 Which are still controversial, nonetheless (Teitelbaum & Shemi, 2011).

3 Well, not if you're The Bride in Kill Bill Vol. 2.

References

Buhlmann U, Winter A. (2011). Perceived ugliness: an update on treatment-relevant aspects of body dysmorphic disorder. Curr Psychiatry Rep. 13:283-8.

Dossey L. (2007). The undead: botched burials, safety coffins, and the fear of the grave. Explore (NY). 3:347-54.

Morselli, E., & Jerome, L. (2001). Dysmorphophobia and taphephobia: two hitherto undescribed forms of Insanity with fixed ideas. History of Psychiatry, 12 (45), 103-107 DOI: 10.1177/0957154X0101204505 [Introduction]

Morselli, E. (2001). Dysmorphophobia and taphephobia: two hitherto undescribed forms of Insanity with fixed ideas. History of Psychiatry, 12 (45), 107-114 DOI: 10.1177/0957154X0101204506 [Translation of original Italian]

Teitelbaum J, Shemi SD.... Read more »

  • October 23, 2011
  • 03:44 AM

Activation of the Hate Circuit While Reading 'Depression Uncouples Brain Hate Circuit'

by The Neurocritic in The Neurocritic

A recent article published in Molecular Psychiatry has the curious title, 'Depression uncouples brain hate circuit' (Tao et al., 2011). Hate circuit, you ask? Is there really any such thing? Is the existence of a distinctive brain circuit for hate so well-established that we ought to go about including it in the title of our papers? And what does it mean for this circuit to be uncoupled in depression? That depressed people no longer have coherent feelings of hatred?

The current article refers to the one prior fMRI study on the topic, which examined the 'Neural correlates of hate' (Zeki & Romaya, 2008). The 17 participants were chosen because they expressed intense hatred for a particular individual. Sixteen people hated an ex-lover or a competitor at work, and one person hated a famous political figure (see Hate On Halloween for details of that study). Participants viewed pictures of a person they hate, and the resultant BOLD signal changes were compared to when they viewed pictures of neutral people.

And the groundbreaking hypothesis? Love and hate might be represented by different brain states! Who knew?

We hypothesized that the pattern of activity generated by viewing the face of a hated person would be quite distinct from that produced by viewing the face of a lover.

The results identified 7 regions that were significantly more active for the Hated Face condition than for the Neutral Face condition. The flaming figure above illustrates a few of them, including the medial frontal gyrus [the anterior cingulate cortex (ACC) and the pre-SMA], the right putamen, and bilateral premotor cortex. The other regions were the frontal pole and our friend, bilateral insula [activated in all sorts of conditions from speech to working memory to reasoning to pain to disgust to the allure of Chanel No. 5 and "love" of iPhones].
An additional correlation analysis related degree of hatred to level of activation across 5,225 voxels (using an uncorrected statistical threshold of p≤0.01) and found three regions to be most related: right insula, right premotor cortex, and right ACC.

These results set the stage for the current study by Tao et al. (2011), which compared the resting state functional connectivity patterns between controls and severely depressed individuals. The "resting state" or "default mode network" (DMN) is the brain activity observed when there is no active task (Raichle et al., 2001). In other words, the participants are free to daydream about their lover or to think about dinner or to remember the amusing movie from last night or to focus on feelings of despair. A specific group of brain regions has been identified as the DMN, and these are deactivated when participants have to perform a demanding cognitive or perceptual task.

A new feature of the present study was the community mining algorithm used to determine coherent resting state networks among the 90 regions of interest (ROIs). First, a template was formed based on data from 37 healthy controls. Then the network connectivity for the control template was compared to that of two depressed groups: 15 unmedicated first-episode major depressive disorder (FEMDD) patients and 24 resistant major depressive disorder (RMDD) patients.

The 6 "communities" or resting state networks are illustrated below. Note that RS1/DMN (red in the top figure) isn't identical to the typical DMN (orange in the bottom figure).

Top: Adapted from Fig 1C (Tao et al., 2011). Left: Medial view of the surface of the brain. Right: The lateral view of the surface of the brain. Different colors represent different communities. Bottom: Adapted from Buckner et al. (2008).
One big difference between the two schemes is that the dorsal ACC/SMA active task network (blue in the bottom left figure) is part of the DMN in Tao et al.'s community structure (red in the top left figure).

But wait, where is the 'hate circuit'?? It emerged with some bizarre post hoc hand waving:

It can be seen from Figures 2, 3 and 4 that the strongest evidence for reduced connectivity compared with control subjects in both FEMDD and RMDD is that between the insula and putamen in both brain hemispheres (s=0.4 and 0.25 for FEMDD and RMDD, respectively). Additionally, the link between the left superior frontal gyrus and the right insula is also reduced (s=0.2991 and 0.2658). Thus, the links between the three main components of the ‘hate circuit’ have become largely uncoupled.

OK, I thought Zeki's 'hate circuit' included bilateral premotor cortex and the frontal pole, plus the putamen only in the right hemisphere. But somehow, the community mining algorithm determined that the insula [part of RSN4 - auditory network] has links with the putamen [RSN6 - subcortical network] and the dorsal superior frontal gyrus [RSN1 - DMN, and perhaps not the same area as in Zeki & Romaya] only in controls but not in the depressed participants. And this set of links in controls comprises the 'hate circuit' and nothing else.

Adapted from Fig 4a (Tao et al., 2011). The common links of the first-episode major depressive disorder (FEMDD) and resistant major depressive disorder (RMDD) networks. Red lines are links that appear in depression network only while blue lines are links that appear in n... Read more »
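Tao et al.'s community mining algorithm is more sophisticated than anything shown here, but the basic resting-state connectivity pipeline, correlating ROI time series, thresholding the correlation matrix, then grouping connected regions, can be sketched in a few lines. The ROI names and time series below are invented, and connected components stand in for a real community detection method:

```python
def correlation(x, y):
    """Pearson correlation between two equal-length time series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def communities(timeseries, threshold=0.5):
    """Threshold the ROI-ROI correlation matrix and return connected
    components as crude 'communities' (a stand-in for real community
    detection)."""
    rois = list(timeseries)
    adj = {r: set() for r in rois}
    for i, a in enumerate(rois):
        for b in rois[i + 1:]:
            if correlation(timeseries[a], timeseries[b]) > threshold:
                adj[a].add(b)
                adj[b].add(a)
    seen, comps = set(), []
    for r in rois:
        if r in seen:
            continue
        stack, comp = [r], set()
        while stack:
            node = stack.pop()
            if node in comp:
                continue
            comp.add(node)
            stack.extend(adj[node] - comp)
        seen |= comp
        comps.append(comp)
    return comps

# Toy data: two pairs of ROIs with correlated activity
nets = communities({
    "insula":   [1, 2, 3, 4],
    "putamen":  [2, 4, 6, 8],
    "visual":   [4, 3, 2, 1],
    "auditory": [8, 6, 4, 2],
})
```

Note how much rides on the threshold and the grouping method: the same correlation matrix can yield quite different "circuits," which is part of why post hoc labels like 'hate circuit' deserve scrutiny.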

Tao, H., Guo, S., Ge, T., Kendrick, K., Xue, Z., Liu, Z., & Feng, J. (2011) Depression uncouples brain hate circuit. Molecular Psychiatry. DOI: 10.1038/mp.2011.127  

  • October 11, 2011
  • 06:27 AM

Rising Mortality Rates for People with Serious Mental Illness

by The Neurocritic in The Neurocritic

Fig 1 (Hoang et al., 2011). Trend in standardised 365 day all cause mortality ratio for all people discharged from hospital with principal diagnosis of bipolar disorder or schizophrenia.

The "mortality gap" is the differential between the mortality rates for the general population and for persons with serious mental illness (schizophrenia and bipolar disorder). A new study from England examined hospital records for psychiatric patients discharged between 1999 and 2006, and determined how many had died within one year (Hoang et al., 2011). The authors expected to see a drop in the mortality gap over time due to government programs:

Over the past decade several strategies have been implemented in England and Wales aimed at reducing the mortality gap between people with serious mental illness and the general population, including those to address deliberate self harm and to reduce suicide (7 8 9), to decrease smoking (10 11 12), alcoholism, and drug misuse (13 14) and to deal with other lifestyles associated with increased mortality (15 16). Recent studies have suggested that the rate of suicide has been stabilising among people with mental disorders as a whole (17 18 19 20 21); however, trends in mortality for people with schizophrenia or bipolar disorder remain poorly characterised, particularly the relative contributions of natural and unnatural causes. The United Kingdom government’s recent mental health strategy states that “more people with mental health problems will have good physical health” as one of its objectives, specifically stating that “fewer people with mental health problems will die prematurely” (22). It is therefore timely to review the level of and trends in these recognised inequalities.

However, as illustrated in Fig. 1 above, the opposite trend was observed, with increased mortality for those with schizophrenia and bipolar disorder.
The standardized mortality ratios show a rise from ~30-60% greater than the general population to about double the population average:

For people discharged with schizophrenia, the ratio was 1.6 in 1999 and 2.2 in 2006 (P<0.001 for trend). For bipolar disorder, the ratios were 1.3 in 1999 and 1.9 in 2006 (P=0.06 for trend). Ratios were higher for unnatural than for natural causes. About three quarters of all deaths, however, were certified as natural, and increases in ratios for natural causes, especially circulatory disease and respiratory diseases, were the main components of the increase in all cause mortality.

These results are alarming (but not new, unfortunately) and similar to those reported by Chang et al. (2011) - see Improving the Physical Health of People With Serious Mental Illness. In that post, I mentioned the possible role of "second generation" or atypical antipsychotics, which can cause substantial weight gain and hence diabetes, hypertension, cardiovascular problems, high cholesterol, and stroke. To counteract these serious side effects, a regular part of mental health treatment should include programs that promote better physical health: smoking cessation programs, nutrition counseling, and structured exercise classes in addition to standard psychiatric care and substance abuse treatment. For example, a six month intervention pilot study enrolled 63 overweight participants at psychiatric rehabilitation day programs and showed promising initial results (Daumit et al., 2010).

These concerns were mentioned earlier in a systematic review of the literature by Saha et al. (2007), who urged immediate action:

“in light of the potential for second-generation antipsychotic medications to further adversely influence mortality rates . . . optimizing the general health of people with schizophrenia warrants urgent attention.”

References

Chang CK, Hayes RD, Perera G, Broadbent MT, Fernandes AC, Lee WE, Hotopf M, Stewart R. (2011).
Life expectancy at birth for people with serious mental illness and other major disorders from a secondary mental health care case register in London. PLoS ONE 6(5):e19590.

Hoang, U., Stewart, R., & Goldacre, M. (2011). Mortality after hospital discharge for people with schizophrenia or bipolar disorder: retrospective study of linked English hospital episode statistics, 1999-2006. BMJ, 343:d5422. DOI: 10.1136/bmj.d5422

Daumit GL, Dalcin AT, Jerome GJ, Young DR, Charleston J, Crum RM, Anthony C, Hayes JH, McCarron PB, Khaykin E, Appel LJ. (2010). A behavioral weight-loss intervention for persons with serious mental illness in psychiatric rehabilitation centers. Int J Obes (Lond). 35(8):1114-23.

Saha S, Chant D, McGrath J (2007). A systematic review of mortality in schizophrenia: is the differential mortality gap worsening over time? Arch Gen Psychiatry 64:1123-31.

~~~~~~~~~~~

October 10 was World Mental Health Day, an event designed to raise public awareness of mental health issues: This year the theme is "Investing in mental health". Financial and human resources allocated for mental health are inadequate espec... Read more »
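For readers unfamiliar with the metric in Fig. 1, a standardised mortality ratio divides the deaths observed in a patient cohort by the number expected if general-population death rates applied to a cohort of that age composition. A back-of-the-envelope sketch with invented numbers (not Hoang et al.'s actual data):

```python
def standardized_mortality_ratio(observed_deaths, person_years, reference_rates):
    """SMR = observed / expected deaths. Expected deaths come from
    applying age-specific general-population death rates (per
    person-year) to the cohort's person-years in each age band."""
    expected = sum(person_years[age] * reference_rates[age]
                   for age in person_years)
    return observed_deaths / expected

# Invented cohort follow-up and reference rates, for illustration only
cohort_years = {"18-44": 10000, "45-64": 5000, "65+": 1000}
population_rates = {"18-44": 0.001, "45-64": 0.005, "65+": 0.03}
smr = standardized_mortality_ratio(143, cohort_years, population_rates)
# expected = 10 + 25 + 30 = 65 deaths; smr = 143 / 65 = 2.2
```

An SMR of 2.2 means the cohort died at 2.2 times the rate expected for an age-matched general population, which is the scale of the 2006 schizophrenia figure reported above.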

  • October 6, 2011
  • 12:19 PM

New York Times on Addiction and The Insula

by The Neurocritic in The Neurocritic

In Clue to Addiction, Brain Injury Halts Smoking
By BENEDICT CAREY
Published: January 26, 2007

Scientists studying stroke patients are reporting today that an injury to a specific part of the brain, near the ear, can instantly and permanently break a smoking habit. People with the injury who stopped smoking found that their bodies, as one man put it, “forgot the urge to smoke.”

The finding, which appears in the journal Science, is based on a small study [Naqvi et al., 2007]. But experts say it is likely to alter the course of addiction research, pointing researchers toward new ideas for treatment.

While no one is suggesting brain injury as a solution for addiction, the finding suggests that therapies might focus on the insula, a prune-size region under the frontal lobes that is thought to register gut feelings and is apparently a critical part of the network that sustains addictive behavior.

Hey, wait a minute! Didn't the NYT just publish an authoritative piece to the contrary? In You Love Your iPhone. Literally., Martin Lindstrom claimed the insula was a signifier of love and compassion, not addiction:

WITH Apple widely expected to release its iPhone 5 on Tuesday, Apple addicts across the world are getting ready for their latest fix. But should we really characterize the intense consumer devotion to the iPhone as an addiction? A recent experiment that I carried out using neuroimaging technology suggests that drug-related terms like “addiction” and “fix” aren’t as scientifically accurate as a word we use to describe our most cherished personal relationships. That word is “love.”

. . .

But most striking of all was the flurry of activation in the insular cortex of the brain, which is associated with feelings of love and compassion. The subjects’ brains responded to the sound of their phones as they would respond to the presence or proximity of a girlfriend, boyfriend or family member.
OK, OK, we all know by now that royal proclamations of brain function based on logical fallacies and unpublished (and never-to-be-peer-reviewed) commercial studies are not to be believed. NYT did publish a retort to this silliness, a Letter to the Editor (The iPhone and the Brain) in which "Forty-five neuroscientists respond to a recent Op-Ed about using brain imaging to analyze our attachment to digital devices." We also know that the insula is activated in a substantial percentage of all neuroimaging studies (Yarkoni et al., 2011; PDF). Reflecting this ubiquity, The Neurocritic blog archive contains 73 unique posts with the word "insula."

But what of addiction and the insula? In their 2007 Science paper, Naqvi and colleagues performed a retrospective study of 69 stroke patients (all smokers): 19 with lesions in the insula and 50 with lesions elsewhere. The color coding in the figure below depicts the number of individuals with damage in specific brain regions.

Fig. 1 (Naqvi et al., 2007). Number (N) of patients with lesion in each of the regions identified in this study, mapped onto a reference brain. Boundaries of anatomically defined regions are drawn on the brain surface. Regions not assigned a color contained no lesions. (Top) All patients. The horizontal line marks the transverse section of the brain shown in the top row. The vertical line marks the coronal section shown in the bottom row. (Middle) Patients with lesions that involved the insula. (Bottom) Patients with lesions that did not involve the insula.

The likelihood of post-stroke smoking cessation did not differ between the insula and non-insula groups, but those with insula lesions who did quit smoking reported that it was easy to do so.
The authors concluded that:

...smokers with brain damage involving the insula, a region implicated in conscious urges, were more likely than smokers with brain damage not involving the insula to undergo a disruption of smoking addiction, characterized by the ability to quit smoking easily, immediately, without relapse, and without persistence of the urge to smoke.

The problem with this assertion is that it relies on memory for events that occurred an average of 8 years earlier, which could be subject to recall bias (Vorel et al., 2007). A better design would be a prospective study that follows patients from the time of stroke and then assesses subsequent smoking behavior. In fact, Bienkowski et al. (2010) performed such a study and failed to see a difference between their insula and non-insula groups at a 3-month follow-up. This suggests that the insula does not play a special role in addiction.

What does this mean for iPhone love? Is Lindstrom right? Unlikely! He would have to demonstrate that insular strokes cause an inability to feel love for iPhones (or anything else, for that matter). Such a finding would suggest that an intact insula is necessary for the experience of love and compassion, and that the activity in his fMRI experiment was not a mere epiphenomenon. In the real world of peer-reviewed neuroimaging research, however, that sort of converging evidence is rarely obtained.

Further Reading
  • NYT Editorial + fMRI = complete crap
  • the New York Times blows it big time on brain imaging
  • Neuromarketing means never having to say you're peer reviewed (but here's your NYT op-ed space)
  • fMRI Shows My Bullshit Detector Going Ape Shit Over iPhone Lust ...and pollyannaish comment by Martin Lindstrom
  • NYT Letter to the Editor: The uncut version
  • Articles on insular cortex from The Amazing World of Psychiatry: A Psychiatry Blog
  • The Insula Is The New Black... No Longer an Island, the Insula Is Now a Hub of High Fashion

References
Bienkowski P, Zatorski P, Baranowska A, Ryglewicz D, Sienkiewicz-Jarosz H. (2010). Insular lesions and smoking cessation after first-ever ischemic stroke: a 3-month follow-up. Neurosci Lett. 478:161-4.

... Read more »

Naqvi, N., Rudrauf, D., Damasio, H., & Bechara, A. (2007) Damage to the Insula Disrupts Addiction to Cigarette Smoking. Science, 315(5811), 531-534. DOI: 10.1126/science.1135926  

  • September 25, 2011
  • 05:40 AM

The Neurophysiology of Pain During REM Sleep

by The Neurocritic in The Neurocritic

In the last post, we learned about The Phenomenology of Pain During REM Sleep. Real life pain can intrude into dreams, as was shown for experimentally induced pain (Nielsen et al., 1993) and in hospitalized burn patients (Raymond et al., 2002). In this post we'll hear about a fascinating experiment that recorded laser evoked potentials directly from the brains of epilepsy patients who were being surgically monitored for seizures (Bastuji et al., 2011). Only under rare circumstances can intracranial electrodes be placed in the brains of humans, and the current study had the unique opportunity to record from three major pain regions simultaneously: the posterior insula (Brodmann area 13), the parietal operculum (somatosensory area S2), and the mid-anterior cingulate cortex (BA 24). These areas comprise the so-called "Pain Matrix"1 (PM), or network of cortical structures that respond consistently to noxious mechanical or thermal stimuli. The lateral structures of the PM (posterior insula and suprasylvian operculum) are thought to subserve intensity coding and localization of pain inputs, while the medial PM system (anterior and mid-cingulate cortex) is linked to the attentional (orienting and arousing) components of pain.

In the present study, Bastuji et al. (2011) recorded laser evoked potentials (LEPs) from these brain regions during different stages of sleep, as well as while the patients were awake. LEPs are a specific type of EEG response time-locked to the application of painful laser heat stimuli. When recorded from the scalp, a sequence of three LEPs is generated in rapid succession, within the first 400 milliseconds after laser stimulation. As described in a review by Plaghki and Mouraux (2005):

Laser heat stimulators selectively activate Aδ and C-nociceptors ["pain receptors"] in the superficial layers of the skin.
Their high power output produces steep heating ramps, which improve synchronization of afferent volleys and therefore allow the recording of time-locked events, such as laser-evoked brain potentials. Study of the electrical brain activity evoked by Aδ- and C-nociceptor afferent volleys revealed the existence of an extensive, sequentially activated, cortical network.

The advantage of recording intracranial LEPs is that you know precisely when the pain-related activity occurred, as well as where the brain response was located (unlike with standard EEG). Two major components were observed: Component 1 (C1), peaking at ~200 ms post-stimulus, and Component 2 (C2), peaking at ~300 ms. Because the components were of varying polarities depending on brain region, they weren't labelled according to the customary N2/P2 as seen on the scalp. Of primary interest was what happened to these components during Stage 2 sleep and REM sleep (see Fig. 3A below).

Figure 3A (modified from Bastuji et al., 2011). Grand average LEPs in referential recording mode during wakefulness, sleep stage 2, and paradoxical sleep in the operculum (bottom), the insula (middle), and the mid-anterior cingulate (top). Traces recorded by the electrode contact yielding the largest amplitudes are superimposed on those from the adjacent contact. On the left part of the figure, for each structure, the coordinates of the contacts where the maximal amplitudes of the C1–C2 components were recorded are indicated on mean sagittal MRIs.

Typically, painful stimuli at the nociceptive threshold will cause awakening ~30% of the time. In this study, the stimulus intensity of the laser2 was set individually in each participant to be slightly above pain threshold. C1 and C2 decreased in amplitude in all three brain regions during Stage 2 sleep, relative to wakefulness.
During REM sleep, however, both components remained stable in amplitude (relative to Stage 2) in the operculum and insula, but they decreased dramatically in the cingulate. Recall that the medial mid-anterior cingulate cortex (ACC) is associated with the attentional and affective components of pain, while the lateral opercular and insular cortices are more related to the sensory aspects of pain. The authors suggest that this dissociation between the lateral and medial pain systems is what allows the experience of pain in dreams without being alerted enough to wake up. The fact that larger mid-ACC LEPs can predict when motor responses to pain will occur supports this interpretation.

CODA (Notes from an Actual Pain Dream)

Lately I've had a painful orthopedic issue (in real life). I also have a cat who is fond of lying on my legs at night, which is not comfortable at all under the circumstances. Yesterday morning, I had a terrible nightmare in which my real life leg pain was projected onto someone else in an exceptionally gruesome way. I was driving along an unknown neighborhood street when suddenly a man appeared in front of my car. It wasn't clear if he was on the hood, or on the trunk of the car ahead of me, or suspended in the air in a dream-like way. At any rate, as if that weren't bad enough, he pulled up the body of a man who had fallen under my car and had both legs amputated from being run over -- one leg was amputated below the knee, the other at the hip. The gravely injured man was still alive. I was absolutely horrified. All I could do was say "oh my god oh my god oh my god" over and over. At some point my car rolled backward down a steep hill and the other motorists behind me were exclaiming "oh my god oh my god" as well.

It was an awful nightmare, and in the dream I was quite traumatized by the entire experience. Did I feel excruciating pain when I woke up?
No, not really, just the usual ache.

Further Reading
LEPs and pain perception can be reduced while looking at one's own hand or at beautiful artwork:
  • It Hurts Less When I Can See It
  • Pain & Paintings: Beholding Beauty Reduces Pain Perception and Laser Evoked Potentials

Footnotes
1 "So-called" because the Pain Matrix might not be that specific to nociception after all (Iannetti & Mouraux, 2010).
2 Laser pulses were delivered to the back of the hand opposite to the hemisphere with the implanted electrodes.

References
... Read more »
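The core methodological trick behind LEPs (and evoked potentials generally) is stimulus-locked averaging: any single trial of EEG is dominated by background activity, but averaging many epochs aligned to stimulus onset leaves only the time-locked response. A minimal sketch of the idea, with entirely made-up numbers (sampling rate, trial count, and noise level are invented; the two peaks are merely placed near 200 and 300 ms to echo the C1/C2 latencies described above):

```python
import math
import random

# Toy illustration of stimulus-locked averaging (not data or code from
# Bastuji et al.). Two fixed "components" near 200 ms and 300 ms are
# buried in noise on every trial; averaging across trials recovers them.
FS = 1000            # sampling rate in Hz (assumed) -> 1 sample per ms
N_SAMPLES = 400      # 400 ms epoch, matching the scalp LEP window above
N_TRIALS = 200       # number of laser stimulations averaged (assumed)

def gaussian_peak(t_ms, center_ms, width_ms, amplitude):
    """A smooth bump modeling one evoked component."""
    return amplitude * math.exp(-((t_ms - center_ms) ** 2) / (2 * width_ms ** 2))

def simulated_epoch(rng):
    """One stimulus-locked epoch: two fixed components plus heavy noise."""
    epoch = []
    for i in range(N_SAMPLES):
        t = i * 1000 / FS
        signal = gaussian_peak(t, 200, 20, 1.0) + gaussian_peak(t, 300, 25, -0.8)
        epoch.append(signal + rng.gauss(0, 2.0))  # noise dwarfs a single trial
    return epoch

rng = random.Random(0)
trials = [simulated_epoch(rng) for _ in range(N_TRIALS)]
average = [sum(tr[i] for tr in trials) / N_TRIALS for i in range(N_SAMPLES)]

# The averaged waveform recovers the two time-locked peaks
peak1 = max(range(N_SAMPLES), key=lambda i: average[i])  # positive peak index
peak2 = min(range(N_SAMPLES), key=lambda i: average[i])  # negative peak index
print(peak1, peak2)  # latencies in ms (at 1 kHz sampling, index == ms)
```

With per-sample noise of standard deviation 2.0, averaging 200 trials shrinks the residual noise by a factor of √200 (to about 0.14), which is why the two bumps emerge cleanly from activity that swamps them on any single trial.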

  • September 18, 2011
  • 05:26 AM

The Phenomenology of Pain During REM Sleep

by The Neurocritic in The Neurocritic

Coarse — Pain in Dreams

Have you ever felt pain in dreams? I have. Once I dreamed I was lying on my stomach, getting a tattoo on my calf against my will. Because it was a particularly malevolent tattoo studio, I cried out in the dream. When I woke up, I felt no pain at all. It was false, a figment of the Pain Matrix. Another time a monkey bit me on the arm. Once again, the pain vanished upon awakening.

I think these examples of what I'll call "fake pain" are unusual. More common are instances when you get a calf cramp or have pins and needles in your arm while sleeping, and this real life pain gets incorporated into dreams about tattoos or monkey bites. But even these possibilities have been discounted as unlikely, because of limitations on which sensory modalities can be represented in dreams (Nielsen et al., 1993):

One possibility is that pain is beyond the representational capability of image formation processes -- that neither pain memories nor pain images are reproducible in the dreaming mode. A second possibility is that the sensory systems that might contribute to the representation of pain imagery are not functional during dreaming. This possibility is consistent with the finding that the high threshold polysynaptic afferent fibers that conduct pain sensations are actively inhibited during REM sleep in cats.

But plenty of people have reported feeling pain in dreams, so why construct hypotheses about why it's impossible? Skeptical of such claims, Tore A. Nielsen and three fellow psychology graduate students, along with an undergrad art therapy student, conducted experiments on themselves, reported in a 1993 paper. They inflated a blood pressure cuff above the knee of their colleagues 5 min into a bout of REM sleep1 [to produce ischemia of the leg muscles, i.e., pins and needles or paralysis]. Results indicated that pain sensations occurred in 13 out of 42 stimulation trials with usable dream reports (31%).
In contrast, only one of the 21 non-stimulated control dreams contained a reference to pain (4.8%). Many of the dreams were realistic and took place in a sleep lab-like setting. Others were more fantastic; one was set at a rodeo, another at a dance party in a barn [the authors lived in Montreal]. Some were lucid2, like the "ugly shoe" dream:

I'm in a small store trying on a pair of ugly shoes. I started walking. Then I staggered forward because I was waking up and not fully conscious. You were laughing at me. I said "come on, it's not funny, I'm trying to wake up!" This is the second or third time I've been trying to wake up.

Some of the participants were more likely to experience pain dreams than others. Subject B, who reported pain dreams on 70% of the stimulation trials, had undergone knee surgery a few years prior and still felt numbness or tingling sensations on occasion. For all participants, the pain sensations occurred in the appropriate leg most of the time. Interestingly, the "crampy pressure", "tingling", or "hurting a bit" sensations felt upon awakening were much less intense than those that occurred during the dream.

When interpreting these subjective reports, one has to consider an expectation or priming effect, since all the students were focused on dream research, with extensive experience in the sleep lab. However, this was not the case in a study of 28 hospitalized burn patients (Raymond et al., 2002). Obviously, the severity of suffering in burn patients is intense and chronic, unlike having temporary "pins and needles" in your leg. Over a period of 5 days, pain dreams comprised 30% of all reported dreams, which is quite comparable to the artificial BP cuff study. The patients who reported pain dreams (39%) had more nightmares, worse sleep quality, and more post-traumatic stress symptoms. The other 61% of the patients did not have any pain dreams.
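For the numerically inclined, the stimulated-vs-control contrast reported above (pain in 13 of 42 stimulation trials vs. 1 of 21 control dreams) can be put to a quick significance check. This is my own back-of-the-envelope sketch, not an analysis from the paper: a one-sided Fisher exact test on the 2×2 table of counts, using only the Python standard library.

```python
from math import comb

def fisher_exact_greater(a, b, c, d):
    """One-sided Fisher exact test on the table [[a, b], [c, d]]:
    P(>= a successes in row 1) under the hypergeometric null
    with all margins fixed."""
    row1 = a + b            # size of group 1
    col1 = a + c            # total successes
    n = a + b + c + d       # grand total
    p = 0.0
    for k in range(a, min(row1, col1) + 1):
        p += comb(row1, k) * comb(n - row1, col1 - k) / comb(n, col1)
    return p

# 13 pain / 29 no-pain dreams in 42 stimulated trials;
# 1 pain / 20 no-pain in 21 control dreams (counts from the post).
p = fisher_exact_greater(13, 29, 1, 20)
print(f"one-sided p = {p:.3f}")
```

The test comes out well under the conventional 0.05 threshold, consistent with the authors' conclusion that cuff stimulation genuinely increased the rate of pain dreams rather than it being a fluke of small samples.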
Why? What sort of neurophysiological activity can account for painful sensations that are experienced during REM sleep? We'll find out in the next post.

Footnotes
1 It wasn't clear how they monitored for REM, since EEG methods were not described. However, the transcript of one dream suggested that EEG was in fact recorded:

Then I was trying to get comfortable on the bed. All the electrodes but one for the EEG had fallen off; the others were dangling free.

The dream transcript continues:

You said that this was too bad. I had tossed around in bed trying to get comfortable. It was really cold and hurt my backside. There was almost no mattress; I was on a board. I was saying to you that we had hit rock bottom in this bed.

The interesting part about this segment is that there was no BP cuff applied; out of 14 dreams this was the only one without external stimulation (kind of like my "fake pain" dreams).

2 The subject was aware they were dreaming and tried to control the action.

References
Nielsen TA, McGregor DL, Zadra A, Ilnicki D, & Ouellet L (1993). Pain in dreams. Sleep, 16 (5), 490-8. PMID: 7690981
Raymond I, Nielsen TA, Lavigne G, Choinière M. (2002). Incorporation of pain in dreams of hospitalized burn victims. Sleep 25:765-70.

... Read more »

Nielsen TA, McGregor DL, Zadra A, Ilnicki D, & Ouellet L. (1993) Pain in dreams. Sleep, 16(5), 490-8. PMID: 7690981  

Research Blogging is powered by SMG Technology.
