Post List

Research / Scholarship posts


  • February 9, 2011
  • 10:00 AM
  • 845 views

Competence, Participation, Opportunity in Science Communication

by Janet Krenn in Talking Winston


“…the main concern of community activities is now increasingly about public participation, rather than public competence [of science].”


A recent study in Public Understanding of Science reveals that individuals who report “high” interest in science and technology make up the majority of the members of the general public who participate in science/policy decision making. Yet some who are very interested may actually lack basic science competence, and what good is any discussion when one group knows or understands only some of the facts?
The Study
In the study, “Participation and Competence as Joint Components in a Cross-National Analysis of Scientific Citizenship,” authors Mejlgaard and Stares investigate a method for uniting two disparate approaches to the public-science relationship: developing scientific competence and having individuals participate in the science/policy process. By developing methods for evaluating the two together, communicators and theorists could move toward cultivating and measuring a Super Citizen, fit to “feed scientific concerns into the decision making process.” (The authors are the theorists who want to measure; I am the communicator who wants to cultivate.)
To reach their goal, the authors use a survey of more than 30,000 Europeans, but the sample has some major restrictions. For starters, the survey was not tailored for this study, so although the respondent pool is large, only three questions address participation and 13 address competence. Of the three questions the authors use to evaluate participation, two look at participation between citizens, such as reading and discussing articles, while only one evaluates participation that could more directly affect decisions: attending public meetings.
Using these limited items, the authors associate competence with interest and knowledge, and consider participation as the sum of peer-to-peer discussion (what they term horizontal participation) and attendance at public meetings (vertical participation). Then they propose a model that plots different combinations of interest and knowledge along a diagram of participation, broken into three sections: horizontal & vertical participation, horizontal participation only, and no participation. Those with high interest, regardless of knowledge, represent the only groups in the horizontal & vertical participation portion of the diagram. Those with moderate interest paired with high knowledge, and those with low interest paired with high knowledge, fall in the horizontal participation only section. Those with either moderate or low interest paired with low knowledge fall under the non-participant category. (A minimal sketch of this mapping appears below.)
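To make the model concrete, here is a minimal sketch in Python of the interest/knowledge mapping as I read it from the summary above. The labels and cell assignments are my paraphrase of the authors' diagram, not their code or data; combinations the summary does not specify are flagged as such.

```python
# A minimal sketch of the Mejlgaard & Stares participation diagram as summarized
# above. Labels and cell assignments are paraphrased from the post, not taken
# from the authors' code or data.

def participation_category(interest: str, knowledge: str) -> str:
    """Map self-reported interest and measured knowledge ('low'/'moderate'/'high')
    to a zone of the participation diagram."""
    if interest == "high":
        # High interest implies both peer-to-peer (horizontal) and
        # public-meeting (vertical) participation, regardless of knowledge.
        return "horizontal & vertical participation"
    if knowledge == "high" and interest in ("moderate", "low"):
        return "horizontal participation only"
    if knowledge == "low" and interest in ("moderate", "low"):
        return "no participation"
    return "not specified in the summary above"

print(participation_category("high", "low"))       # horizontal & vertical participation
print(participation_category("moderate", "high"))  # horizontal participation only
```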
As with any study, the authors see merit in their approach and suggest further research. As with any interesting study, the reader tends to agree.
The Science Communication Opportunity and Challenge
As I suggested earlier, I have different goals when looking at this study than did the authors in conducting it. I’m interested in the application of scientific communication, and I think the Mejlgaard and Stares study reveals an opportunity for improving the quality of scientific discussion and participation in the short-term: Target competence building to those who are already highly interested in science and, according to this study, much more likely to participate in horizontal (peer-to-peer) and vertical (peer-to-policymaker) discussions.
Mejlgaard and Stares’ analysis points out that most of those who identify themselves as “very interested” in new scientific discoveries consider themselves only “moderately informed,” regardless of their actual knowledge level. If it can be assumed that those who are “very interested” in science while doubting their knowledge would welcome communication, then targeting this group regardless of actual competency level could at least improve competency among those already eager to participate in scientific discussion and policy.
Of course, this doesn’t do much to improve science participation among the populace at large. (I side with the overall goal of scientific citizenship, especially because a democratic society needs the input of more than just a small group of participants.) Yet there could be several benefits to starting with this very interested group. Perhaps citizen-to-policymaker discussions would be more likely to rest on accurate information. Maybe this group would help disseminate correct information to others through their peer-to-peer discussions. By improving the knowledge base of the already active and interested, you could extend the reach of your efforts.
I admit that none of this is incredibly profound. Scientists do the majority of their public speaking when invited to society meetings where these very interested people congregate. I think the Mejlgaard and Stares study reaffirms this practice and should inspire scientists to stop waiting for invitations to speak and start reaching out to other groups. 
The challenge then becomes: if interest dictates participation, how do you amp up interest in moderate- and low-interest groups so that we can eventually reach the goal of a scientific citizenry?

Mejlgaard, N., & Stares, S. (2009). Participation and competence as joint components in a cross-national analysis of scientific citizenship. Public Understanding of Science, 19(5), 545-561. DOI: 10.1177/0963662509335456... Read more »

  • February 9, 2011
  • 02:29 AM
  • 1,381 views

Transitioning from Trainee to Faculty

by Dr Shock in Dr Shock MD PhD


Wish I had known this before starting in academia. Really starting your career after all the training you’ve been through is a real challenge. How do you start off in the new job?
Important strategies from the medical literature, management practices and hands-on experience for “on-boarding”:

Start early, meaning getting to know your organisation before your start date. [...]


... Read more »

  • February 8, 2011
  • 10:16 PM
  • 1,081 views

Much Ado About ADHD-Research: Is there a Misrepresentation of ADHD in Scientific Journals?

by Laika in Laika's Medliblog

The reliability of science is increasingly under fire. We all know that the media often give a distorted picture of scientific findings (i.e. Hot news: Curry, Curcumin, Cancer & cure). But there is also an ever-growing number of scientific misreports or even fraud (see the BMJ editorial announcing retraction of the Wakefield paper about a causal relation between MMR vaccination [...]... Read more »

  • February 7, 2011
  • 09:25 PM
  • 806 views

Choice vs Gender Discrimination in Math-Intensive Science

by Michael Long in Phased

Choice, not direct discrimination, explains the current low representation of women in tenure-track, math-intensive, research-based faculty positions.... Read more »

Stephen J. Ceci, & Wendy M. Williams. (2011) Understanding current causes of women’s underrepresentation in science. Proceedings of the National Academy of Sciences. info:/10.1073/pnas.1014871108

  • February 7, 2011
  • 10:54 AM
  • 941 views

Adapt Your Scientific Testimony to Jurors' Skeptical Ears

by Persuasion Strategies in Persuasive Litigator

By: Dr. Ken Broda-Bahm - In his recent State of the Union address, President Obama followed the common pattern of giving attention and applause lines to nearly every issue on the national agenda. But there was one issue that received no mention at all: climate change. The absence, noted by many commentators, extended even to areas where it would have been natural to mention the environment. The President's "clean energy" initiative, for example, was touted based on its ability to create jobs and bolster competitiveness, rather than its ability to help the environment. This decision was no doubt the result...... Read more »

William R. L. Anderegg, James W. Prall, Jacob Harold, and Stephen H. Schneider. (2010) Expert credibility in climate change. Proceedings of the National Academy of Sciences of the United States of America. info:/

  • February 7, 2011
  • 06:42 AM
  • 648 views

BMC Research Notes launches a new thematic series on data standardization, sharing and publication

by Tara Cronin in BioMed Central Blog

Following our call for contributions to BMC Research Notes on data standards, sharing and publication, the journal and this initiative have received considerable attention from the research community. Today we launch this series of educational articles, as we publish the first of the numerous manuscripts we have received since September.
This new article by Tony Mathys and Maged Boulos gives an overview of the geospatial resources available for the health research community and public health sector to help them manage and share their data. It joins our previously published Data Note by Andrew Vickers and Angel Cronin and our editorial call for contributions in the series.
The series, supervised by our guest Editors and prominent Open Data advocates Dr Bill Hooker and Prof David Shotton, will grow substantially in 2011 as we are receiving contributions from across biology and medicine, including proteomics, flow cytometry, metabolomics, brain mapping and open bibliography.
We are still keen to receive more contributions, and authors are currently entitled to a full waiver of the article-processing charge for accepted articles in this series. Articles should describe a domain-specific data standard and provide an example data set with the article, or a link to data that are permanently hosted elsewhere. The journal is also interested in receiving contributions to the series on broader aspects of scientific data sharing, archiving, and open data. If you would like to contribute a manuscript, please refer to our call for contributions and get in touch with the BMC Research Notes editorial team by email.
You can follow our most recent initiatives in Open Data on the BioMed Central blog.
Guillaume Susbielle, PhD, In-house Editor, BMC Research Notes



Mathys T, & Kamel Boulos MN (2011). Geospatial resources for supporting data standards, guidance and best practice in health informatics. BMC Research Notes, 4:19. PMID: 21269487... Read more »

  • February 7, 2011
  • 06:00 AM
  • 734 views

Article review: How competent do trainees feel?

by Michelle Lin in Academic Life In Emergency Medicine

It is 2 a.m. You, the resident, have just spoken to your staff/attending, who told you to do a task. You have seen one, but don't feel comfortable doing one independently. Will you tell your staff/attending how you feel? What if the patient did poorly after that? This study examines EM trainees' perceptions of their competence and adverse events, and how they feel about reporting them.

Methods
Anonymous web-based survey sent to all trainees from 9 EM programs in Canada outside Quebec. 37.3% of trainees responded.

Results: Competence
  • 40% of trainees felt they had minimal supervision when doing a task they did not feel safe about. The most common 'unsafe' tasks included providing care overnight, admission decisions, and procedures.
  • When feeling incompetent, a third of trainees will not report this to their staff.
  • Barriers include worry about loss of trust, autonomy, or respect.

Results: Adverse events
  • 64% of trainees felt responsible for contributing to adverse events. Most related to procedures: chest tubes, central lines, paracentesis.
  • The majority, but not all, reported the most serious events to the staff.
  • Barriers include fear of appearing incompetent and humiliation.

How would I change my teaching practice
  • Ensure trainees feel safe. Maybe do a dry run of central line insertion or breaking bad news beforehand.
  • Encourage trainees to voice their discomfort. They are learning, not just working.
  • Discuss adverse events and medical errors with trainees.

Reference
Friedman S, Sowerby R, Guo R, Bandiera G. Perceptions of emergency medicine residents and fellows regarding competence, adverse events and reporting to supervisors: a national survey. CJEM: Canadian Journal of Emergency Medical Care. 2010, 12(6), 491-9. PMID: 21073775... Read more »

  • February 6, 2011
  • 09:00 PM
  • 1,510 views

Misrepresentation of ADHD in scientific journals and in the mass media

by Hadas Shema in Information Culture

The scientific community often discusses the misrepresentation of health news by the media. A less discussed subject is misrepresentation of data in the scientific literature itself. Gonon, Bezard and Boraud used their knowledge of ADHD to find misrepresentations of data in the scientific literature and the mass media, and found that the misrepresentation problem often begins in the scientific literature.

1. Internal inconsistencies
The good news is that only 2 out of about 360 papers (Barbaresi et al. and Volkow et al.) had "obvious discrepancies" between results and their authors' stated conclusions. The bad news is that both papers had been covered by the media, who mostly accepted their conclusions as gospel. Gonon et al. say that in the 40 mass media articles they read about the Volkow et al. paper, "We have never read a mitigating statement saying that their results are open to the opposite interpretation although the authors explicitly raised this possibility in their result section." Out of the 21 articles written about Barbaresi et al.'s paper, only The Guardian's article questioned the conclusions. More than that: out of the 30 times the Volkow et al. paper was cited in scientific papers, in 20 the authors quoted its conclusion without pointing out the discrepancies.

2. Fact omission
It goes like this:
Summary: A totally controls B!
Result section: A controls B if C is present and D isn't.
In this part, the authors focused on papers dealing with "the association between alleles of the gene coding for the D4 dopamine receptor (DRD4) and ADHD." According to the authors, previous research has shown that while there is an association between higher frequency of a certain DRD4 allele and ADHD, the allele occurs in only 23% of ADHD patients, as opposed to 17% of the control population. Out of 117 papers about ADHD research done in humans that mentioned the DRD4-ADHD connection, 74 mentioned the association in their summaries, but only 19 of those also mentioned the small conferred risk. All 25 papers that mentioned the association but didn't present data on it had the misrepresentation in their summaries. In review papers, out of 43 summaries, only 6 mentioned that the allele confers only a small risk.

The DRD4 gene, ADHD and the mass media: Media outlets have been known for their tendency toward genetic determinism (the "gay gene," for example) and so were quick to adopt the view that ADHD is "genetic." Out of 170 articles between 1996-2009, 168 mentioned that the DRD4 gene is significantly associated with ADHD, and of those, 117 didn't mention the small risk and/or present the raw data. 26 articles mentioned the 1.2 to 1.34 odds ratio but still stated there's a strong connection between the gene and ADHD. The authors' conclusion is that 82% of the articles misrepresented the association, a rate similar to that observed in the scientific literature.

3. Extrapolating basic and pre-clinical findings to new therapeutic prospects ("Hi, it worked on mice!")
The authors surveyed 101 papers dealing with the mouse brain for 3 common overstatements, and found that 56 overstated their conclusions. 23 even extrapolated to new therapeutic prospects. Naturally, those 23 papers were published in higher-impact journals, and the overstatements made their way to the mass media. Out of 63 mass media articles, only 11 contained mitigating comments.

Limitations
The authors consider their work to be qualitative rather than quantitative, since the selection of papers in the first case was not systematic. In the second and third cases the papers were selected after a systematic search, but the authors only highlighted one aspect of misrepresentation in each case. While the results correlate with misrepresentation in the mass media, there's no way to determine causation.

In conclusion
When I was young and working on a Biology degree, my (great) professor read us an abstract and said something along the lines of "They added that definitive conclusion at the end so the paper would be published in a better journal." While anecdotes aren't data, it does seem that scientists sometimes overstate their results in order to be published in higher-ranked journals. It's easy to blame the mass media whenever people put on their tin hats, but the responsibility also falls on scientists to report their findings as accurately as possible, even outside the result section.

Gonon, F., Bezard, E., & Boraud, T. (2011). Misrepresentation of Neuroscience Data Might Give Rise to Misleading Conclusions in the Media: The Case of Attention Deficit Hyperactivity Disorder. PLoS ONE, 6(1). DOI: 10.1371/journal.pone.0014618... Read more »
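As a back-of-the-envelope illustration of why a 23% vs. 17% carrier rate is a "small risk," here is the odds-ratio arithmetic in Python. Note that the 1.2 to 1.34 figures quoted in the media articles come from the studies' own analyses; the calculation below uses only the two percentages mentioned above, so treat it as illustrative only.

```python
# Rough illustration only: odds ratio implied by the carrier rates quoted above
# (23% of ADHD patients vs 17% of controls). The reported odds ratios
# (1.2-1.34) come from the studies' own data, not from this simple calculation.

def odds(p: float) -> float:
    """Convert a probability to odds."""
    return p / (1.0 - p)

p_adhd, p_control = 0.23, 0.17
odds_ratio = odds(p_adhd) / odds(p_control)
print(f"odds ratio ≈ {odds_ratio:.2f}")  # ≈ 1.46: a weak, far-from-deterministic association
```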

  • February 6, 2011
  • 04:11 PM
  • 676 views

It isn’t just students: Medical researchers aren’t citing previous work either

by bjms1002 in the Undergraduate Science Librarian

One of the things that faculty often complain about is that students don’t adequately track down and cite enough relevant material for their term papers and projects.  This problem isn’t confined to undergraduates.  A study in the January 4, 2011 issue of the Annals of Internal Medicine by Karen Robinson and Steven Goodman finds that [...]... Read more »

  • February 1, 2011
  • 01:50 AM
  • 1,505 views

Managing the demands of professional life

by Dr Shock in Dr Shock MD PhD


This is the title of an article recently published, written by a psychiatrist and a cardiac surgeon. It addresses an important question not only for physicians but also for other professionals. I found most of their concepts recognizable.
In short, it’s about five concepts that can be helpful in the work of [...]


... Read more »

Dickey, J., & Ungerleider, R. (2007) Managing the demands of professional life. Cardiology in the Young, 17(S2). DOI: 10.1017/S1047951107001242  

  • January 31, 2011
  • 09:35 PM
  • 492 views

Role of Scientists and the Media in Propagating ADHD Misconceptions

by Michael Long in Phased

Both scientists and the media are to blame for extreme misrepresentations of ADHD neurobiology in the scientific literature and the lay press.... Read more »

  • January 31, 2011
  • 03:54 PM
  • 1,064 views

How to be a neuroscientist

by Bradley Voytek in Oscillatory Thoughts

In this post, I will teach you all how to be proper, skeptical neuroscientists. By the end of this post, not only will you be able to spot "neuro nonsense" statements, but you'll also be able to spot nonsense neuroscience questions. I implore my journalist friends to take note of what I say in this post.

Much has already been said on the topic of modern neuroimaging masquerading as "new phrenology." A lot of these arguments and conversations are hidden from the lay public, however, so I'm going to expose the dirty neuroscientific underbelly here. (Image source: The Roots - Phrenology)

This post was prompted by a question over on Quora: What is the neurological basis of curiosity? Where does curiosity reside in the brain? The question itself is of a type that is commonly asked in cognitive neuroscience: where is <vague behavior> in the brain? But what does it even mean to ask where "curiosity" is in the brain? What would an answer look like? According to the article linked to in the current top answer on Quora:

In study after study, scientists have found that the striatum lit up like an inferno of activity when people didn't know exactly what was going to happen next, when they were on the verge of solving their mystery and hoped to be rewarded—it was more active then, in fact, than when people received their reward and had their curiosity satisfied.

"So," you may ask, "what's wrong with that answer? That seems reasonable and sound and very sciencey!" You just got brain-mesmerized! I can prove, with one statement, that this answer is wrong (if you're impatient, jump to point 2 at the bottom). I'm not picking on the person who answered the question; they had no way to know. They were just following the discourse of the media narrative about neuroscience findings.

So what is wrong with this explanation (he says, finally getting to the damned point)? I'll break both of these points down in detail later.

1. The question is phrased in such a way that it presumes that "curiosity" is a singular thing.
2. The question presumes that a complex behavior or emotion can be localized to a brain region or regions.

There are several philosophical pitfalls packaged into the answer, such as the ontological commitment to the narrative of cognitive neuroscience and the cerebral localization of function. To be clear, what I'm not saying is that behaviors aren't in the brain. What I am saying is that the cerebral localization narrative is too simplistic. Let me break down these points.

1. "Is curiosity a singular thing?"
When you ask "where is curiosity in the brain" you assume that researchers can somehow isolate curiosity from other emotions and behaviors in a lab and dissect it apart. This is very, very difficult, if not impossible. Neuroimaging (almost always) relies on the notion of cognitive subtraction, which is a way of comparing your behavior or emotion of interest (curiosity) against some baseline state that is not curiosity. Or, as I say in my book chapter from The Mind and the Frontal Lobes:

The underlying assumption in these studies is that activity in brain networks alters in a task-dependent manner that becomes evident after averaging many event-related responses and comparing those against a baseline condition. Deviations from this baseline reflect a change in the neuronal processing demands required to perform the task of interest.

2. "Can curiosity be localized to one brain region?"
No, it cannot. Here's how I know: I've personally worked with people who have a severely damaged striatum. Know what? They still have curiosity. If the striatum is where curiosity is in the brain, how can someone whose striata are gone still have curiosity? They cannot. Yet they do. Poof. Hypothesis disproved.

Imagine asking "where is video located in my computer?" That doesn't make any sense. Your monitor is required to see the video. Your graphics card is required to render the video. The software is required to generate the code for the video. But the "video" isn't located anywhere in the computer.

Now there's a subtlety here. It may be that people with damaged striata have curiosity impairments (whatever that means), which would agree with the fMRI study discussed in that link above, but it proves that the striatum is not where curiosity is in the brain. More technically: the striatum may be a critical part of a network of brain regions that support curiosity behaviors, but that is different from saying that the striatum is where curiosity is. Or, as I say in my chapter:

...the cognitive subtraction method... provide[s] details of functional localization that can then be tested and corroborated using other methodologies, including lesion studies. The interpretation of these localization results is confounded, however, by a lack of clarity in what is meant for a "function" to be localized. For example, Young and colleagues (2000) noted that for a given function to be localizable that function "must be capable of being considered both structurally and functionally discrete"; a property that the brain is incapable of assuming due to the intricate, large-scale neuronal interconnectivity.

Thus, discussing behavioral functions outside of the context of the larger cortical and subcortical networks involved with that function is a poorly posed problem. Therefore, the scientific study of cognition requires detailed neuroanatomical and connectivity information to complement functional activity findings.

God. I was going to end this with some links to news stories about neuroscientists finding out where (love/happiness/hate/prejudice/sexytimes/etc.) were located in the brain, but I just gave up. There are just so damned many of them.

If you're a journalist and you're reading this, please change the way you talk about these results. If you're a student, if you remember nothing else from this post, just remember to ask, "can a person who has a lesion to that brain region not experience that emotion or do that behavior anymore?" If the person still can, then that is not where that behavior is located in the brain. And, in all likelihood, that function can't be localized to any one region at all.

Barres, B. (2010). Neuro Nonsense. PLoS Biology, 8(12). DOI: 10.1371/journal.pbio.1001005
Racine E, Bar-Ilan O, & Illes J (2005). fMRI in the public eye. Nature Reviews Neuroscience, 6(2), 159-64. PMID: 15685221... Read more »
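For readers unfamiliar with cognitive subtraction, here is a minimal sketch of the logic in Python/NumPy. The simulated data, effect size, and threshold are all invented for illustration; real fMRI analyses add hemodynamic modeling and proper statistics, so read this only as the shape of the idea.

```python
# A toy illustration of cognitive subtraction: compare per-voxel activity in a
# task condition against a baseline condition. All numbers here are made up.
import numpy as np

rng = np.random.default_rng(0)
n_voxels = 1000

baseline = rng.normal(loc=0.0, scale=1.0, size=n_voxels)         # "not curiosity" condition
task = baseline + rng.normal(loc=0.1, scale=1.0, size=n_voxels)  # condition of interest

contrast = task - baseline  # the "subtraction": task-specific activity per voxel

# Voxels that "light up" are those whose contrast exceeds a threshold -- but the
# inference is only ever *relative to the chosen baseline*, which is exactly the
# interpretive limit discussed in the post.
active = np.where(contrast > 2.0)[0]
print(f"{active.size} voxels exceed threshold relative to baseline")
```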

Barres, B. (2010) Neuro Nonsense. PLoS Biology, 8(12). DOI: 10.1371/journal.pbio.1001005  

Racine E, Bar-Ilan O, & Illes J. (2005) fMRI in the public eye. Nature Reviews Neuroscience, 6(2), 159-64. PMID: 15685221  

Editors. (2004) Brain scam?. Nature Neuroscience, 7(7), 683-683. DOI: 10.1038/nn0704-683  

Weisberg, D., Keil, F., Goodstein, J., Rawson, E., & Gray, J. (2008) The Seductive Allure of Neuroscience Explanations. Journal of Cognitive Neuroscience, 20(3), 470-477. DOI: 10.1162/jocn.2008.20040  

Young, M., Hilgetag, C., & Scannell, J. (2000) On imputing function to structure from the behavioural effects of brain lesions. Philosophical Transactions of the Royal Society B: Biological Sciences, 355(1393), 147-161. DOI: 10.1098/rstb.2000.0555  

  • January 31, 2011
  • 06:00 AM
  • 918 views

Article Review: Morbidity and Mortality Conferences in EM

by Michelle Lin in Academic Life In Emergency Medicine

Residency training programs are required to have Morbidity and Mortality (M&M) Conferences, as mandated by the Accreditation Council for Graduate Medical Education (ACGME). These conferences were originally designed to look at medical errors and unforeseen complications in patient care. Traditionally, Surgery programs focus on medical error and complications in their conferences. In contrast, Internal Medicine programs tend to focus more on cases for their intrinsic learning value; error is less the focus in their conferences. What are the practices of EM residency programs? This paper reviews a descriptive survey study of M&M Conferences in U.S. EM residency programs. The response rate was 72% (89 of 128) for the 29-question survey. If you include all the active EM programs out there (n=135), the response rate was 66%.

Results
Bottom line: M&M conferences are varied in format, content, and timing.
  • Some M&M conferences are alternatively called "Quality Improvement Conference" or "Interesting Case Conference."
  • 67% of programs hold M&M monthly, and 15% hold them weekly.
  • 33% of M&Ms are attended by nurses and EMS personnel.
  • Some programs focus more on pediatrics, others more on trauma, and others primarily on cases where death or error was the outcome.
  • 79% of programs have a protocol in place when a medical error is identified.

The authors note that M&M Conferences are perfect venues for addressing key ACGME Core Competencies in resident education (especially Practice-Based Learning and Improvement and Systems-Based Practice). The next step is to determine the best models for M&M Conferences and to try to standardize them across all programs.

For our program at UCSF-SFGH, discussion and suggestions for improvement are framed within the Vanderbilt Healthcare Matrix for improving health care practices. The matrix is a 6x6 table with the Institute of Medicine mandates on one axis and the ACGME competencies on the other. Download the Matrix from the Institute for Healthcare Improvement (IHI) website.

Reference
Seigel T, McGillicuddy D, Barkin A, Rosen C. Morbidity and Mortality Conference in Emergency Medicine. The Journal of Emergency Medicine. 2010, 38(4), 507-11. DOI: 10.1016/j.jemermed.2008.09.018.... Read more »

Seigel, T., McGillicuddy, D., Barkin, A., & Rosen, C. (2010) Morbidity and Mortality Conference in Emergency Medicine. The Journal of Emergency Medicine, 38(4), 507-511. DOI: 10.1016/j.jemermed.2008.09.018  

  • January 30, 2011
  • 09:05 AM
  • 1,265 views

Writerly scientist derided scientist-writer?

by Jeremy Yoder in Denim and Tweed

Following up on the recent discovery that novelist and lepidopterist Vladimir Nabokov correctly supposed that Polyommatus blue butterflies colonized the New World in stages, Jessica Palmer points out that none other than Stephen Jay Gould dismissed Nabokov's scientific work as not up to the same standards of genius exhibited in his novels. She suggests that Nabokov's work may have been dismissed by his contemporaries because his scientific papers were a little too colorfully written. Roger Vila, one of Pierce's co-authors, suggests that Nabokov's prose style (Wellsian time machine!) did his hypothesis no favors:

The literary quality of his scientific writing, Vila says, may have led to his ideas being overlooked. "The way he explained it, using such poetry -- I think this is the reason that it was not taken seriously by scientists," Vila says. "They thought it was not 'hard science,' let's say. I think this is the reason that this hypothesis has been waiting for such a long time for somebody to vindicate it."

That's a little harsh toward scientists, but it seems plausible: creativity in scientific writing is rarely rewarded. (Hyperlink to quoted source sic.)

Palmer's analysis is thoughtful and thorough, and you should read all of it. But she misses what (to me) seems like the best wrinkle in the whole business: Gould, alone of all the scientists, should have been sympathetic to the dangers of writing "too well" in a scientific context.

Stephen Jay Gould, one suspects, never murdered a single darling in a decades-long career of writing for scientific and popular venues. The iconoclastic 1979 paper "The spandrels of San Marco and the Panglossian paradigm: a critique of the adaptationist programme" [PDF], coauthored with Richard Lewontin, is a case in point. Gould and Lewontin wanted to make the point that not all traits and behaviors of living species are necessarily adaptive—that is, evolved to perform a function that enhances survival and/or reproductive success. Today it is widely agreed that this point needed making. But Gould's writing undercut the success of his own argument, or at least gave his detractors a toehold for derision.

[Image: The Cathedral of San Marco in Venice, its structurally practical arches encrusted with Baroque decoration. A metaphor for Gould's metaphors? Photo by MorBCN.]

Gould and Lewontin developed their argument with references to architecture and to literature. They compared non-adaptive traits to mosaics decorating the spandrels of the Cathedral of San Marco in Venice. Spandrels being spaces created between arches, anything decorating them is clearly secondary to the architectural decision to build an arch. They also compared "adaptationist" biologists to the character of Dr. Pangloss in Voltaire's satire Candide, who claims that "all is for the best in this best of all possible worlds."

Pangloss is a fool, and biologists who felt Gould and Lewontin were critiquing them took the obvious inference. One of the most biting responses to "Spandrels" focused much more on the style than the substance of the paper. The author, David Queller, titled it "The spaniels of St. Marx and the Panglossian paradox: A critique of a rhetorical programme" [PDF], and the parody only continues from there.

Queller built an elaborate and unflattering image of Gould and Lewontin as Marxists focused on their political perspective like the dog in the old RCA ads fixated on a gramophone. He even referenced one of Gould's favorite cultural touchstones, the works of Gilbert and Sullivan, to tweak Gould as "the very model of a science intellectual." Queller manages to have his cake and decry it, too—he mocks Gould and Lewontin with overblown metaphors, then backs off to say that such tactics are irresponsible:

So, how did I like my test drive in the supercharged rhetoric-mobile? It's certainly been fun ... but it's pretty hard to keep the damned thing on the road. ... my little parody of Gilbert and Sullivan's modern Major General, who knows about everything but matters military, might induce an uninformed reader to conclude that Gould knows about everything but matters biological.

But this is exactly the complaint that many biologists would level at Spandrels—that colorful language can mislead as well as inform. So if Gould's reading of Nabokov's scientific achievement was predicated on the opinions of Nabokov's colleagues, who didn't care for elaborate prose in their scientific journals, well, I think that's what my English teachers called irony.

References

Gould, S., & Lewontin, R. (1979). The spandrels of San Marco and the Panglossian paradigm: A critique of the adaptationist programme. Proceedings of the Royal Society B, 205(1161), 581-598. DOI: 10.1098/rspb.1979.0086

Queller, D. (1995). The spaniels of St. Marx and the Panglossian paradox: A critique of a rhetorical programme. The Quarterly Review of Biology, 70(4), 485-489. DOI: 10.1086/419174

Vila, R., Bell, C., Macniven, R., Goldman-Huertas, B., Ree, R., Marshall, C., Balint, Z., Johnson, K., Benyamini, D., & Pierce, N. (2011). Phylogeny and palaeoecology of Polyommatus blue butterflies show Beringia was a climate-regulated gateway to the New World. Proceedings of the Royal Society B: Biological Sciences. DOI: 10.1098/rspb.2010.2213... Read more »

  • January 28, 2011
  • 06:24 PM
  • 640 views

Premature Brain Diagnosis in Japan?

by Neuroskeptic in Neuroskeptic

Nature has a disturbing article from their Asian correspondent David Cyranoski: Thought experiment. It's open access.

In brief: a number of top Japanese psychiatrists have started offering a neuroimaging method called NIRS to their patients as a diagnostic tool. They claim that NIRS shows the neural signatures of different mental illnesses. The technology was approved by the Japanese authorities in April 2009, and since then it's been used on at least 300 patients, who pay $160 for the privilege. However, it's not clear that it works. To put it mildly.

NIRS is Near Infra-Red Spectroscopy. It measures blood flow and oxygenation in the brain. In this respect, it's much like fMRI, but whereas fMRI uses superconducting magnets and quantum wizardry to achieve this, NIRS simply shines a near-infra-red light into the head and records the light reflected back. It's a lot cheaper and easier than MRI. However, the images it provides are a lot less detailed, and it can only image the surface of the brain. NIRS has a small but growing number of users in neuroscience research; it's especially popular in Japan, for some reason, but it's also found plenty of users elsewhere.

The clinical use of NIRS in psychiatry was pioneered by one Dr Masato Fukuda, and he's been responsible for most of the trials. So what are these trials? As far as I can see (correct me if I'm wrong), these are all the trials comparing patients and controls that he's been an author on:

  • Matsuo et al (2000) n=9/10 elderly depressed/controls
  • Suto et al (2004) n=10/13/16 depressed/schizophrenia/controls
  • Kameyama et al (2006) n=17/11/17 bipolar/depression/controls
  • Nishimura et al (2007) n=5/33 panic disorder/controls
  • Takizawa et al (2008) n=55/70 schizophrenia/controls
  • Uehara et al (2007) n=11/11 eating disorder/controls
  • Suda et al (2010) n=27/27 eating disorder/controls

There are also a handful of Fukuda's papers in Japanese, which I can't read, but as far as I can tell they're general discussions rather than data papers. So we have 342 people in all. Actually, a bit fewer, because some of them were included in more than one study. That's still quite a lot - but there were only 5 panic patients, 30 depressed (including 9 elderly, who may be different), 38 eating disordered and just 17 bipolar in the mix.

And the bipolar people were currently feeling fine, or just a little bit down, at the time of the NIRS. There are quite a lot of other trials from other Japanese groups, but sticking with bipolar disorder as an example, no trials that I could find examined people who were currently ill. The only other two trials, both very small, were in recovered people (1,2). Given that the whole point of diagnosis is to find out what any given patient has, when they're ill, this matters to every patient. Anyone could be psychotic, or depressed, or eating disordered, or any combination thereof.

Worse yet, in many of these studies the patients were taking medications. In the 2006 depression/bipolar paper, for example, all of the bipolars were on heavy-duty mood stabilizers, mostly lithium; plus a few antipsychotics, and lots of antidepressants. The depressed people were on antidepressants.

There's a deeper problem. Fukuda says that NIRS corresponds with the clinical diagnosis in 80% of cases. Let's assume that's true. Well, if the NIRS agrees with the clinical diagnosis, it doesn't tell us anything we didn't already know. If the NIRS disagrees, who do you trust? I think you'd have to trust the clinician, because the clinician is the "gold standard" against which the NIRS is compared. Psychiatric diseases are defined clinically. If you had to choose between 80% gold and pure gold, it's not a hard choice.

Now NIRS could, in theory, be better than clinical diagnosis: it could provide more accurate prognosis, and more useful treatment recommendations. That would be cool. But as far as I can see there's absolutely no published evidence on that. To find out you'd have to compare patients diagnosed with NIRS to patients diagnosed normally - or better, to those randomized to get fake placebo NIRS, like the authors of this trial from last year should have done. To my knowledge, there have been no such tests at all.

So what? NIRS is harmless, quick, and $160 is not a lot. Patients like it: "They want some kind of hard evidence," [Fukuda says], especially when they have to explain absences from work. If it helps people to come to terms with their illness - no mean feat in many cases - what's the problem? My worry is that it could mean misdiagnosing patients, and therefore mis-treating them. Here's the most disturbing bit of the article:

...when Fukuda calculates his success rates, NIRS results that match the clinical diagnosis are considered a success. If the results don't match, Fukuda says he will ask the patient and patient's family "repeatedly" whether they might have missed something — for example, whether a depressed patient whose NIRS examination suggests schizophrenia might have forgotten to mention that he was experiencing hallucinations.

Quite apart from the implication that the 80% success rate might be inflated, this suggests that some dubious clinical decisions might be going on. The first-line treatments for schizophrenia are quite different, and rather less pleasant, than those for depression. A lot of perfectly healthy people report "hallucinations" if you probe hard enough. "Seek, and ye shall find". So be careful what you seek for.

While NIRS is a Japanese speciality, other brain-based diagnostic or "treatment personalization" tools are being tested elsewhere. In the USA, EEG has been proposed by a number of groups. I've been rather critical of these methods, but at least they've done some trials to establish whether this actually improves patient outcomes. In my view, all of these "diagnostic" or "predictive" tools should be subject to exactly the same tests as treatments are: double blind, randomized, sham-controlled trials.... Read more »
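To see why "80% agreement" carries no independent information, here is a toy simulation in Python. Every number in it (prevalence, clinician accuracy, agreement rate) is invented; the point is structural, not empirical.

```python
# Toy model of the "gold standard" problem: if the clinician's diagnosis is both
# the calibration target and the yardstick, a test's accuracy against the truth
# is bounded by the clinician's own accuracy. All numbers are invented.
import random

random.seed(1)
n = 10_000
truth = [random.random() < 0.5 for _ in range(n)]                   # unobservable true state
clinician = [t if random.random() < 0.9 else not t for t in truth]  # clinician right 90% of the time
nirs = [c if random.random() < 0.8 else not c for c in clinician]   # NIRS agrees with clinician 80%

agree = sum(a == b for a, b in zip(nirs, clinician)) / n
nirs_vs_truth = sum(a == b for a, b in zip(nirs, truth)) / n
print(f"NIRS vs clinician: {agree:.0%}; NIRS vs truth: {nirs_vs_truth:.0%}")
# Expect roughly 80% and 74%: when NIRS disagrees with the clinician, there is
# no basis for preferring NIRS, since NIRS was validated against the clinician.
```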

Cyranoski, D. (2011) Neuroscience: Thought experiment. Nature, 469(7329), 148-149. DOI: 10.1038/469148a  

  • January 28, 2011
  • 12:35 PM
  • 586 views

Will the uprising in the Middle East & North Africa usher in a new era of science and innovation in the Arab world?

by Farooq Khan in Complex systems + science

The wave of protests sweeping across the Middle East and North Africa has been described as revolutionary. Whether this is an accurate description of what is taking place remains to be seen, and depends upon how you define a revolution....... Read more »

Farooq Khan. (2011) Will the uprising in the Middle East & North Africa usher in a new era of science and innovation in the Arab world? Nature Blogs. info:/

  • January 27, 2011
  • 08:13 PM
  • 1,036 views

Language learning and height

by Ingrid Piller in Language on the Move

Are you tall enough to learn English? Have you ever reflected on the relationship between height and language learning? Well, I haven’t, and I’ve been in language teaching and learning for almost 20 years. So, I assume that most of … Continue reading →... Read more »

Chang, Leslie T. (2009) Factory Girls: From Village to City in a Changing China. Spiegel. info:/

  • January 27, 2011
  • 10:30 AM
  • 1,154 views

Legal Protections for Working Women in US Law Might Have Been a Joke

by Richard Landers in NeoAcademic

In a fascinating article in the Industrial-Organizational Psychologist, Scott Highhouse[1] discusses why legal protections provided to women under Title VII of the Civil Rights Act of 1964 might have been included by lawmakers as a joke – or more specifically, as a way to make the bill so ridiculous that it would not pass a [...]


Some related articles on NeoAcademic: The Right to Internet Access
... Read more »

Highhouse, S. (2011) The history corner: Was the addition of sex to Title VII a joke? Two viewpoints. The Industrial-Organizational Psychologist, 48(3), 102-107. info:/

  • January 27, 2011
  • 07:33 AM
  • 813 views

the linguistics of heaven and hell

by Chris in The Lousy Linguist

The value of pop culture data for legitimate research is being put to the test. Exactly what, if anything, can the reality show Big Brother tell us about language change over time?

Voice Onset Time (VOT) is a measure of how long you wait to begin vibrating your vocal folds after you release a stop consonant. Voiced stop consonants like /b/ and /d/ require two things: 1) stop all airflow from escaping the airway by forming a closure, and 2) after the air is released, begin vibrating the vocal folds (using the rushing air). For non-linguists, think of a garden hose. Imagine you use your thumb to stop the water for a second and let the pressure build, then you let go and water rushes out, but then you use your thumb to clamp down just a bit on the water to spray it. This is kinda like the speech production of voiced stop consonants in human language. (Image from Kval.com.)

Though I’m no phonetician, I really like VOT as a target of linguistic study for one crucial reason: it’s a clear example of a linguistic feature that varies according to your human language system but which you do NOT have conscious control over. What that means is that you cannot consciously change the length of your own personal VOT. Go ahead, try it. Make your VOT 20 milliseconds longer. Go ahead, I’ll wait…

Of course you can’t. Well, not consciously, but what researchers have found is that your brain, quite independent of conscious will or knowledge, can! Lab studies have found that people will unknowingly alter their VOTs in certain situations, and the results are predictable. For example, when listening to a set of long VOT stimuli, subjects will begin to lengthen their own VOTs, in essence accommodating the longer VOTs. It has also been shown that people will lengthen their VOT over their lifetime to accommodate cultural shifts. The Queen Mother herself now has a longer VOT than during her younger days (few other people have been recorded consistently over a long period to provide such valuable data, so thanks mum).

Here’s what Bane et al. did: they took recordings of confessional sequences from the UK reality TV show Big Brother (where groups of strangers are made to live with each other and occasionally speak to a camera alone, like a video diary) and tested what happened to 4 crucial individuals (the ones that stayed on the show long enough to provide several months’ worth of data points). What they found was that their VOTs did in fact change, though no linear pattern was discovered (i.e., they did not simply get longer in a steady line). This paper is labeled as a progress report because they don’t have a firm hypothesis about what actually is happening. Nice trick there, boys ;)

They did find one interesting thing: during part of the show, the housemates were physically divided into basically a caste system where half the people were low caste and half were high (a heaven and a hell), and this seemed to have an effect on VOT as well (sociolinguists are slap happy about this, I’m sure).

I haven’t looked at the actual numbers very closely, but in section 6, they say “Housemate trajectories seem to diverge when the divide is present…” However, just taking a glance at Figure 3, it looks like they diverge at the beginning, then converge at the end, around episode 65 (and remain somewhat similar until several episodes of non-DIVIDE have gone by). If my cursory glance is correct, I would assume it takes a while for the convergence to manifest, and then it persists for a while after DIVIDE is gone. But this is just me looking at the picture, not the actual data.

Finally, and this is just a readability point, but I would order the names in Figure 3 in the same order as the end point of each trajectory, making it easier to follow who is doing what.

Max Bane, Peter Graff, & Morgan Sonderegger (2011). Longitudinal phonetic variation in a closed system. Linguistic Society of America 2011 Annual Meeting.... Read more »
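As an aside, once the acoustic landmarks are annotated, VOT itself is trivial to compute; here is a minimal sketch in Python. The event times and labels are invented, since real measurements come from waveform/spectrogram annotation, not a hand-built list.

```python
# VOT = time from stop release (the "burst") to the onset of vocal fold
# vibration. The annotated times below are invented for illustration.

tokens = [
    {"label": "ba", "release": 1.250, "voicing_onset": 1.262},  # short-lag stop
    {"label": "pa", "release": 3.410, "voicing_onset": 3.475},  # long-lag stop
]

for tok in tokens:
    vot_ms = (tok["voicing_onset"] - tok["release"]) * 1000.0
    print(f"/{tok['label']}/: VOT = {vot_ms:.0f} ms")
```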

Max Bane, Peter Graff, & Morgan Sonderegger. (2011) Longitudinal phonetic variation in a closed system. Linguistic Society of America. info:/

  • January 24, 2011
  • 06:43 PM
  • 919 views

Something ghoti with science citations

by Bradley Voytek in Oscillatory Thoughts

Science has a lot of problems. Or rather, scientometrics has a lot of problems. Scientific careers are built off the publish-or-perish foundation of citation counts. Journals are ranked by impact factors. There are serious problems with this system, and many ideas have been offered on how to change it, but so far little has actually changed. Many journals, including the PLoS and Frontiers series, are making efforts to bring about change, but they are mostly taking a social tactic: ranking and commenting on articles. I believe these methods are treating the symptom, not the problem.

Publish or perish reigns because our work needs to be cited for us scientists to gain recognition. Impact factors are based on these citation counts. Professorships are given and tenure awarded to those who publish in high-ranking journals. However, citations are biased, and critical citations are often simply ignored.

Bear with me here for a minute. How do you spell "fish"? g-h-o-t-i: "g-h" sounds like "f", as in "laugh". "o" sounds like "i", as in "women". "t-i" sounds like "sh", as in "scientific citations". This little linguistic quirk is often (incorrectly) attributed to George Bernard Shaw; it's used to highlight the strange and inconsistent pronunciations found in English. English spelling is selective. You can find many spelling examples that look strange, but support your spelling argument. Just like scientific citations.

There are a lot of strange things in the peer-reviewed scientific literature. Currently, PubMed contains more than 18 million peer-reviewed articles, with approximately 40,000-50,000 more added monthly. Navigating this literature is a crazy mess. When we created brainSCANr, our goal was to simplify complex neuroscience data. But now we want to shoot for more.

At best, as scientists we have to be highly selective about what studies we cite in our papers, because many journals limit our bibliographies to 30-50 references. At worst, we're very biased and selectively myopic. On the flip side, across these 18+ million PubMed articles, a scientist can probably find at least one peer-reviewed manuscript that supports any given statement, no matter how ridiculous. Don't believe me? Here's my first whack at a questionable series of statements supported by peer-reviewed literature:

Human vision extends into the ultraviolet frequency range1, possibly mediated by an endogenous violet receptor2.

Or:

The effects of retroactive prayer are well-described in improving patient outcomes1. Herein we examine the hypothesis that such retroactive healing is mediated by an innate human ability for "psi"; that is, for distance healing mediated by well known quantum effects2.

What we need is a way to quickly assess the strength of support of a statement, not an author's biased account of the literature. By changing the way we cite support for our statements within our manuscripts, we can begin to address problems with impact factors, publish or perish, and other scientometric downfalls. brainSCANr is but a first step in what we hope will be a larger project to address what we believe is the core issue with scientific publishing: manuscript citation methods.

We argue that, by extending the methods we present in brainSCANr to find relationships between topics, we can adopt an entirely new citation method. Rather than citing only a few articles to support any given statement made in a manuscript, we can create a link to the entire corpus of scientific research that supports that statement. Instead of a superscript number indicating a specific citation within a manuscript, any statement requiring support would be associated with a superscript number that represents the strength of support that statement has, based upon the entire literature.

For example, "working memory processes are supported by the prefrontal cortex"0.00674 gets strong support, and a link to PubMed showing those articles that support that statement. Another statement, "prefrontal cortex supports breathing"0.00033, also gets a link, but notice how much smaller that number is? It has far less scientific support. (The method for extracting these numbers uses a simple co-occurrence algorithm outlined in the brainSCANr paper.)

My citation method removes citation biases. It provides the reader a quick indication of how well-supported an argument is. If I'm reading a paper and I see a large number, I might not bother to look it up, as the scientific consensus is relatively strong. But if I see an author make a statement with a low number--that is, a weak scientific consensus--then I might want to be a bit more skeptical about what follows.

We live in a world where the entirety of scientific knowledge is easily available to us. Why aren't we leveraging these data in our effort to uncover truth? Why are we limiting ourselves to a method of citations that has not substantially changed since the invention of the book? My method may have flaws, but it is much harder to game than the current citation biases that only give us the narrowest slice of scientific support. My citation method entirely shifts the endeavor of science from numbers and rankings of journals and authors (a weak system for science, to say the least!) to a system wherein research is about making statements about truth. Which is what science should be. Thoughts?

(2006). The Impact Factor Game. PLoS Medicine, 3(6). DOI: 10.1371/journal.pmed.0030291
(2010). How to improve the use of metrics. Nature, 465(7300), 870-872. DOI: 10.1038/465870a
Robinson KA, & Goodman SN (2011). A systematic examination of the citation of prior research in reports of randomized, controlled trials. Annals of Internal Medicine, 154(1), 50-5. PMID: 21200038... Read more »
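As a sketch of what such a support score could look like, here is one plausible co-occurrence measure in Python. The Jaccard-style normalization and the hit counts are my assumptions for illustration; the brainSCANr paper's actual formula may differ.

```python
# One plausible "support score": normalized co-occurrence of two phrases in the
# literature. Counts would come from PubMed hit totals; the Jaccard-style
# normalization and the example numbers below are assumptions for illustration.

def support_score(n_a: int, n_b: int, n_ab: int) -> float:
    """Co-occurrence of phrases A and B: |A and B| / |A or B|."""
    denominator = n_a + n_b - n_ab
    return n_ab / denominator if denominator else 0.0

# Hypothetical hit counts:
print(support_score(52_000, 31_000, 4_100))  # e.g. "prefrontal cortex" & "working memory": strong
print(support_score(52_000, 88_000, 150))    # e.g. "prefrontal cortex" & "breathing": weak
```

Whatever the exact normalization, the design point stands: the number attached to a claim is computed over the whole literature rather than hand-picked by the author.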
