Post List

Research / Scholarship posts

  • April 13, 2012
  • 01:39 AM
  • 1,457 views

A White Roof: So Simple It's Insane, So Insane It Just Might Work

by DJ Busby in Astronasty

Reflectivity might work better at mitigating global warming than a focus on CO2... Read more »

  • April 12, 2012
  • 11:28 PM
  • 339 views

Time for an “Occupy Science” in India?

by Kausik Datta in In Scientio Veritas

Yes. Yes!! Oh, yes! — This was my reaction while reading a commentary in April 12’s Nature. In a policy commentary article titled Bold strategies for Indian Science (Nature 484, 159-160; 12 April 2012), Gautam Desiraju, a professor of Chemistry at the prestigious Indian Institute of Science, Bangalore, and the current president of the International Union of Crystallography, held forth forcefully on what he thought were the bottlenecks holding back the progress of Indian science. I found much... Read more »

  • April 11, 2012
  • 09:31 PM
  • 830 views

Racial Amplitudes of Scholastic Aptitude

by nooffensebut in The Unsilenced Science

A complete review of the SAT racial data reveals the relative stagnation of African Americans and Hispanic Americans, the rapid progress of Asian Americans, and a possible decline of whites.... Read more »

Eidelman S, Crandall CS, Goodman JA, & Blanchar JC. (2012) Low-Effort Thought Promotes Political Conservatism. Personality and Social Psychology Bulletin. PMID: 22427384

Frey, M., & Detterman, D. (2005) Regression Basics: Rejoinder to Bridgeman. Psychological Science, 16(9), 747-747. DOI: 10.1111/j.1467-9280.2005.01607.x  

Price AL, Patterson N, Yu F, Cox DR, Waliszewska A, McDonald GJ, Tandon A, Schirmer C, Neubauer J, Bedoya G.... (2007) A Genomewide Admixture Map for Latino Populations. American Journal of Human Genetics, 80(6), 1024-36. PMID: 17503322  

Zakharia F, Basu A, Absher D, Assimes TL, Go AS, Hlatky MA, Iribarren C, Knowles JW, Li J, Narasimhan B.... (2009) Characterizing the admixed African ancestry of African Americans. Genome Biology, 10(12). PMID: 20025784  

  • April 10, 2012
  • 07:34 PM
  • 536 views

Five easy ways to have more sex

by eHarmony Labs in eHarmony Labs Blog

Read on to learn easy things you can do to improve your chances in the dating world.... Read more »

  • April 9, 2012
  • 12:40 PM
  • 603 views

The idiot savant story

by Michelle Dawson in The Autism Crisis

In a commentary epublished in March, about savant syndrome in autism, Patricia Howlin wrote:

In 1887 Langdon Down was the first to coin the term ‘idiot savant’

Howlin and several co-authors, including Sir Michael Rutter, wrote in a 2009 paper:

Down (1887) was the first to coin the term ‘idiot savant’

Here are Pam Heaton and Gregory Wallace from a major 2004 review:

The term ‘idiot-savant’ was first used by Down (1887)

From 1999, Pam Heaton again, as well as Linda Pring, Beate Hermelin, and others:

The term "Idiot-Savants" was first used by Langdon Down in 1887

Darold Treffert, often described as the authority on savants, has written accounts along these lines:

However, the first specific description of savant syndrome took place in London in 1887 when Dr J. Langdon Down gave that year’s prestigious Lettsomian Lecture at the invitation of the Medical Society of London... In 1887, ‘idiot’ was an accepted classification for persons with an IQ below 25, and ‘savant’, or ‘knowledgeable person’, was derived from the French word savoir meaning ‘to know’. Down joined those words together and coined the term idiot savant by which the condition was generally known over the next century.

That's from a 2009 paper. There seems to be an impressive consensus in the literature that Down coined the term "idiot savant" in 1887 (here is the source cited in all of the above), a claim that Treffert has made since the late 1980s, and many others have followed suit.

So far as I can tell, this consensus is wrong. Edouard Seguin, who died in 1880, is well known for having written about savants. He wrote about the famous pianist Blind Tom Wiggins, for instance, in a book published in 1866. And in a short 1870 paper, he is quoted as using the term "idiot savant." Here it is (spelling from original):

Among the wealthier classes, idiocy is not only oftener aggravated by accessory diseases, but also complicated with abnormal semi-capacities or disordered instincts, which produce heterogeneous types to an almost unlimited extent. It is from this class, almost exclusively, that we have musical, mathematical, architectural, and other varieties of the idiot savant; the useless protrusion of a single faculty, accompanied by a woful general impotence.

Seguin's use of "idiot savant" did not pass unnoticed in the literature. For example, in the BMJ in 1875, George W. Grabham quotes and takes issue with Seguin's views (spelling from the original):

A curious class may be termed that of the idiot "savans", in whom one or more faculties are amazingly developed, perhaps to the detriment of the rest. One has a marvellous power of acquiring languages and musical knowledge; another, great mechanical skill and original constructive ability; a third, though very childish, is no mean mental arithmetician; a fourth remembers all he reads; a fifth delights in dates; while a sixth can tell the time when awakened from sleep. General improvement has taken place in all these cases. Dr. Seguin, a well known authority on idiocy, has given the support of his pen to a theory "that idiocy is found in its simplest forms among the labouring classes, and that, among the wealthier classes, it is not only oftener aggravated by accessory diseases, but also complicated with abnormal semi-capacities or disordered instincts, which produce heterogeneous types to an almost unlimited extent. It is from this class almost exclusively that we have musical, mathematical, architectural, and other varieties of the idiot savant; useless protrusion of a single faculty, accompanied by a woeful general impotence". I am quite unable to agree with this view; my experience of many of these idiot "savans" proving them to have sprung from parents in humble circumstances, and leading me to believe them to have resulted in many instances from hereditary insanity.

It's possible Seguin was not the first to use "idiot savant," but he does get this honor in the OED, which quotes Seguin's 1870 paper but does not mention or quote Langdon Down.

In a footnote, Spitz (1995) provides a small trace of dissent, noting that Down himself made no claim, in 1887, to having coined "idiot savant," and indeed he seems to be using an existing term. Spitz did not try to find who did coin "idiot savant," but you can't blame Down for the false consensus.

And it hardly matters, to current-day autistics, who exactly coined an obsolete term in the 1800s. Langdon Down and Edouard Seguin probably don't care about their h-indexes. There are many vastly more important issues related to the term "idiot savant" and the human beings who were characterized this way, and the still-dominant view that strong autistic abilities are useless protrusions (recent example here). But it does matter when telling an inaccurate story becomes the standard in the autism literature, over the course of 20 years or more. This is far from the only instance. And this is an especially easy story to verify (or not).

References:

Howlin, P. (2012). Understanding savant skills in autism. Developmental Medicine & Child Neurology. DOI: 10.1111/j.1469-8749.2012.04244.x

Grabham, G. (1875). Remarks on the Origin, Varieties, and Termination of Idiocy. BMJ, 1(733), 73-76. DOI: 10.1136/bmj.1.733.73-a

Seguin, E. (1870). Art. XXXIII. New Facts and Remarks concerning Idiocy: being a Lecture delivered before the New York Medical Journal Association, October 15, 1869. American Journal of the Medical Sciences, 59(129), 518-519... Read more »

Seguin, E. (1870) Art. XXXIII. New Facts and Remarks concerning Idiocy: being a Lecture delivered before the New York Medical Journal Association, October 15, 1869. American Journal of the Medical Sciences, 59(129), 518-519.

  • April 5, 2012
  • 09:31 PM
  • 458 views

What Should Be Done about Reproducibility

by Dave Bridges in Dave's Blog

A recent Commentary and linked editorial in Nature regarding reproducible science (or rather the lack thereof) have been troubling me for a few days now. The article brings to light a huge problem in the current academic science enterprise.

What am I talking about?

In the comment, two former Amgen researchers describe some of the efforts of that company to reproduce "landmark studies" in cancer biology. Amgen had a team of about a hundred researchers, called the reproducibility team, whose job was to test new basic science findings prior to investing in following up these targets. Shockingly, according to the authors, only 6 of 53 of these landmark studies were actually reproduced. When findings did not reproduce, they contacted the authors to attempt to work through the potential problems. This is an incredibly dismal 11% reproducibility rate!
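The arithmetic behind the figures quoted here is worth making explicit: the 6-of-53 count is where both the "11%" above and the "89%" below come from. A quick back-of-envelope sketch in Python:

```python
# Figures as quoted from the Begley & Ellis commentary: 6 of 53
# landmark studies reproduced.
reproduced, total = 6, 53

rate = reproduced / total
print(f"reproducibility rate:     {rate:.1%}")      # ~11.3% -> the "11%"
print(f"non-reproducibility rate: {1 - rate:.1%}")  # ~88.7% -> the "89%"
```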

Could it really be that bad?

The first problem is what exactly is meant by reproducibility. In the commentary, the authors acknowledge that they did attempt to use additional models in the validation process and that technical issues may have underlain some of these differences. They also point out that their sample set is biased with respect to the findings: these were often novel, cutting-edge findings, typically more surprising than the general research finding. Also, their definition of reproducibility is unclear. If a researcher says drug X has a 10-fold effect on something and the Amgen team finds a 3-fold effect on the process, is that a reproducible finding? My initial reaction was that the 89% were things where the papers said something like "thing X does thing Y" and there was no evidence supporting that. We don't know, and in a bit of an ironic twist, since no data are provided (either which papers were good and which were bad, or within those, which findings were good and bad), this commentary could be considered both unscientific and non-reproducible itself (also, we are awfully close to April Fools' Day).
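The 10-fold-versus-3-fold question is really a thresholding choice. Here is a minimal sketch of one possible operationalization; the function and the 0.5 cutoff are invented for illustration, since the commentary specifies no such rule:

```python
def is_reproduced(original_effect, replication_effect, min_ratio=0.5):
    """Illustrative only: call a finding 'reproduced' if the replication
    recovers at least min_ratio of the original effect, in the same
    direction. The commentary gives no explicit criterion like this."""
    if original_effect == 0:
        return replication_effect == 0
    return replication_effect / original_effect >= min_ratio

# The case from the paragraph above: a reported 10-fold effect that
# replicates at 3-fold. Whether that counts as "reproduced" depends
# entirely on the threshold you pick.
print(is_reproduced(10, 3))                  # False at min_ratio=0.5
print(is_reproduced(10, 3, min_ratio=0.25))  # True at a laxer cutoff
```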

So there are some bad papers out there; who cares?

Reproducibility is at the heart of everything we do as scientists. No one cares if you did something once and, for reasons you can't really explain, were never able to do it again. If something is not replicable and reproducible, for all intents and purposes it should be ignored. We need measures of these to be able to evaluate research claims, and we need context specificity to understand the breadth of claims. I'll toss out a few reasons why this problem really matters, both to those of us who do science and to everyone else.

This is a massive waste of time and money

From the commentary:

Some non-reproducible preclinical papers had spawned an entire field, with hundreds of secondary publications that expanded on elements of the original observation, but did not actually seek to confirm or falsify its fundamental basis.

Wow, really? Whole fields have been built on these? In a way I don't feel bad for these fields at all. If you are going to work in a field and are never going to bother even indirectly testing the axioms on which your field is built, then you are really not so good at the science. If you are going to rely on everyone else being correct and never test it, then your entire research enterprise might as well be made from tissue paper. More importantly, if you are on top of these things, you are going to waste time and money figuring out not to follow this up. Hopefully this is the more common case. This really goes back to the difficulty of publishing negative data to let people know which conditions work and which don't.

The reward system for science is not in sync with the goals of the enterprise

Why are people publishing things that they know only happen one out of six times? Why are they over-extending their hypotheses, and why are they reluctant to back away from their previous findings? All of these things are because we are judged for jobs and for tenure and for grants on our ability to do these things. The person who spends 3 years proving that a knockout mouse model does not actually extend lifespan walks away with nothing; the one who shows it does (even if done incorrectly) gets a high-impact paper and a job. Even if it didn't take an unreasonable amount of time and effort to publish non-reproducible data, the risk of insulting another researcher or not contributing anything new might be enough to prevent this. Until the rewards of publishing negative or contravening data are on par with the effort, people just won't do it.

This reflects really poorly on science and scientists

Science is, and probably always has been, under some type of "attack". Science as an entity, and scientists as its representatives, must not shrug this off or ignore it. We have to deal with this problem head-on, whether at the review level or at the post-publication level. People who are distrustful of science are right to point at this and ask: why are we giving tens of billions of dollars to the NIH when they are 89% wrong? Why not just give that money to Amgen, who seem to be the ones actually searching for the truth (not that they will share that data with anyone else)?

Can anything be done?

The short answer is that it's really going to be difficult, and it's going to rely on a lot of moving parts. Reviewers should (and in my experience do) ask for explicit reproducibility statements in papers. This can go further: if someone says a blot is representative of 5 experiments, then there is no reason the other 4 couldn't be put in the supplement. If they looked at 100 cells and show just one, then why can't the rest be quantified in some way? Post-publication, there should be open (i.e. not just in lab meetings) discussion of papers and their problems, and of where they match or mismatch the rest of the literature. Things like blogs and the Faculty of 1000 are great, but how often have you seen a negative F1000 review? Finally, there eventually ought to be some type of network of research findings. If I am reading a paper and would like to know what other results agree or disagree with it, it would be fantastic to get there in a reasonable way. This is probably the most complicated fix, as it requires not only publication of disagreeing findings, but also some network to link them together (a minimal sketch of what such a network might look like follows below).
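To make that last suggestion concrete, here is a minimal sketch of such a findings network, assuming a simple agrees/disagrees edge labelling. The class, its methods, and the paper IDs are all hypothetical; no such system exists in the post:

```python
from collections import defaultdict

class FindingsNetwork:
    """Toy illustration: link findings by whether later work agreed
    or disagreed with them. Everything here is invented for the sketch."""

    def __init__(self):
        # paper ID -> list of (other paper ID, "agrees"/"disagrees")
        self.edges = defaultdict(list)

    def link(self, original, follow_up, relation):
        self.edges[original].append((follow_up, relation))

    def evidence_for(self, paper):
        """Everything that has weighed in on this paper's findings."""
        return self.edges.get(paper, [])

net = FindingsNetwork()
net.link("smith2010", "jones2011", "agrees")
net.link("smith2010", "lee2012", "disagrees")
print(net.evidence_for("smith2010"))
# [('jones2011', 'agrees'), ('lee2012', 'disagrees')]
```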

Begley, C., & Ellis, L. (2012). Drug development: Raise standards for preclinical cancer research Nature, 483 (7391), 531-533 DOI: 10.1038/483531a

What Should Be Done about Reproducibility by Dave Bridges is licensed under a Creative Commons Attribution 3.0 Unported License.... Read more »

  • April 4, 2012
  • 11:20 AM
  • 867 views

The Daily Mail incorrectly correct an article describing cannabis-schizophrenia research

by Neurobonkers in Neurobonkers

The Daily Mail have issued a "correction" repeating their belief that just one cannabis joint can cause schizophrenia.... Read more »

Kucewicz MT, Tricklebank MD, Bogacz R, & Jones MW. (2011) Dysfunctional prefrontal cortical network activity and interactions following cannabinoid receptor activation. The Journal of neuroscience : the official journal of the Society for Neuroscience, 31(43), 15560-8. PMID: 22031901  

  • April 4, 2012
  • 08:00 AM
  • 613 views

Science careers: fair play or field of bullets?

by Zen Faulkes in NeuroDojo

Yesterday, Elizabeth Sandquist posed a hypothesis:

You can't just be good to succeed in #science, you have to be exceptional. Any thoughts?
NeuroPolarBear replied with a post, and Drugmonkey pulled out an older post.

But my post will be the best, for I shall cite peer-reviewed data in the primary literature.

As it happened, Petersen and colleagues published a paper yesterday looking at career success in physics. Appropriately enough, even though it’s a career paper, it feels very much like a physics paper: lots of equations and models and phrases like “leptokurtic but remarkably symmetric.” Hoooookay... 

The authors tracked 300 physicists through about 20 years of their careers. They fell into three groups: physicists who were eminent (h-index of 61), productive and highly cited (h-index of 44), and early-career assistant professors (h-index of 15). In that data from real scientists, Petersen and colleagues see that there are times when physicists are “shocked.” The shocks can be positive (“Wow, I just made this totally great discovery by accident!”) or negative (“Uh oh, they found that paper where I manipulated data.”)
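For readers unfamiliar with the h-index figures above, the standard definition is easy to state in code; a minimal sketch (the toy citation counts are made up):

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each
    (the standard Hirsch definition)."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Toy example: five papers with these citation counts give h = 3.
print(h_index([10, 8, 5, 2, 1]))  # -> 3
```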

They also look at career productivity, and the ability of researchers to build collaborations. That’s the “ground truth” that Petersen and colleagues use to create models of research career trajectories. This lets them play with some of the parameters.

The models indicate that as competition increases, many people can be taken out of the career pathway by... blind, stinking, clueless, doo-da luck.

Those that survive the field of bullets reach a point where they can start generating collaborative networks, and that builds even more success.

But the competition turns out to be very important in this model; and that relates to tenure. Many people want to see tenure replaced with a series of recurring short-term contracts. The authors imply that the short-term model could be harmful for the development of science. A failure in one short-term contract could derail a productive researcher, since early career shocks can ripple throughout a scientist’s career.

I think Petersen and colleagues would say that you do not have to be exceptional to make it in science. More important is the ability to tough out the early weeding out period, which doesn’t necessarily have a lot to do with your talent.


Reference

Petersen A, Riccaboni M, Stanley H, Pammolli F. 2012. Persistence and uncertainty in the academic career. Proceedings of the National Academy of Sciences 109(14): 5213-5218. DOI: 10.1073/pnas.1121429109... Read more »

Petersen A, Riccaboni M, Stanley H, & Pammolli F. (2012) Persistence and uncertainty in the academic career. Proceedings of the National Academy of Sciences, 109(14), 5213-5218. DOI: 10.1073/pnas.1121429109  

  • April 2, 2012
  • 08:04 AM
  • 1,214 views

Open Data Manchester: Twenty Four Hour Data People

by Duncan Hull in O'Really?

According to Francis Maude, Open Data is the “next industrial revolution”. Now you should obviously take everything politicians say with a large pinch of salt (especially Maude) but despite the political hyperbole, when it comes to data he is onto something.... Read more »

  • March 30, 2012
  • 10:46 PM
  • 1,026 views

When prince charming kissed Mendel: delayed recognition in science.

by Hadas Shema in Information Culture

Monk Gregor Mendel didn't live to see his peas become famous; his paper lay asleep, waiting for prince charming to cite it awake. Of course, not all "delayed recognition" papers sleep as long as Mendel's, but "sleeping beauty" or "Mendel syndrome" papers do exist in science. A "sleeping beauty" paper can go uncited for years, until suddenly it's awakened.

Costas, van Leeuwen and van Raan (2010) classify published scientific papers into three general types:

  • Normal-type: these have the normal distribution of published papers, usually reaching their citation peak 3-4 years after publication and then decaying.
  • Flash-in-the-pan-type: these get cited very often when they first come out, but are forgotten in the long run, kind of like a teenage pop star.
  • Delayed-type: those that start drawing interest later than the normal-type papers. Costas et al. prefer not to call them all "sleeping beauties" because real sleeping beauties (never cited and then suddenly risen to fame) are very rare.

[Figure: Source: Costas, van Leeuwen and van Raan (2010)]

Looking at all the documents from Web of Science between the years 1980 and 2008 (over 30 million), Costas et al. found that the "flash in the pan" papers tend to be editorials, notes, reviews and so forth, rather than research articles. Delayed documents tended to be more prominent in the "articles" category. When they checked Nature and Science, two 'letter' journals, Costas et al. found that 10.9% and 10.5% of their documents, respectively, are "flash in the pans", which is higher than the database average (9.8%).

The castle of the sleeping beauty is the availability of information. The information has to be accessible, and it has to be visible. The Web, of course, has improved the accessibility of papers a great deal, especially when said papers are open access. When a paper is digitized or becomes open access, its visibility and availability increase. But being available is not enough: researchers must have use for the information despite the passage of time.

[Figure: The prince kisses the sleeping beauty awake. Source: Wang, Ma, Chen & Rao, 2012]

In 1995, Polchinski's paper on supergravity in string theory, "Dirichlet branes and Ramond-Ramond charges", came out and cited an early work by Romans (1986) on the same subject. Romans' paper had not been cited from 1986 to 1995(!), but according to the Google Scholar count (which admittedly could be inflated), it has been cited 424 times since then. Why? One reason is that Romans' paper was simply ahead of its time, published in a "sleeping beauty" field. In the nine years until Polchinski's paper, interest in supergravity had considerably increased. Another reason is that Polchinski is a high-class prince, with great academic authority. An unknown scholar probably wouldn't have been as successful in waking up Romans' paper. (Source: Wang, Ma, Chen & Rao, 2012)

An extension of the "Mendel syndrome" is "Mendelism", when researchers "develop lines of research and have a profile of publications (‘oeuvres’) ‘ahead of their time’" (recent Nobel Laureate Dan Shechtman comes to... Read more »
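One way to see how such a typology could be operationalized is a toy classifier over a paper's citations-per-year series. To be clear, this heuristic and its cutoffs are invented for illustration; they are not the indicators Costas et al. actually use:

```python
def classify_citation_history(cites_per_year, early_window=4):
    """Toy heuristic loosely inspired by the three types above:
    compare the share of citations arriving in the first few years
    against the rest. NOT the Costas et al. methodology."""
    total = sum(cites_per_year)
    if total == 0:
        return "uncited"
    early_share = sum(cites_per_year[:early_window]) / total
    if early_share > 0.6:
        return "flash in the pan"  # most citations arrive early
    if early_share < 0.2:
        return "delayed"           # interest only comes late
    return "normal"

# Romans (1986): roughly uncited for nine years, then heavily cited
# after Polchinski (1995). The counts here are invented, but the
# shape comes out "delayed".
romans_like = [0] * 9 + [20, 40, 60, 80]
print(classify_citation_history(romans_like))  # -> delayed
```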

Costas, van Leeuwen, & van Raan. (2011) The ‘‘Mendel syndrome’’ in science: durability of scientific literature and its effects on bibliometric analysis of individual scientists. Scientometrics, 177-205.

van Raan, A. (2004) Sleeping Beauties in science. Scientometrics, 59(3), 467-472. DOI: 10.1023/B:SCIE.0000018543.82441.f1  

Rodrigo Costas, Thed N. van Leeuwen, & Anthony F. J. van Raan. (2009) Is scientific literature subject to a sell-by-date? A general methodology to analyze the durability of scientific documents. Journal of the American Society for Information Science and Technology. arXiv: 0907.1455v1

Wang, Ma, Chen, & Rao. (2012) Why and how can "sleeping beauties" be awakened?. The Electronic Library, 30(1), 5-18. DOI: 10.1108/02640471211204033

  • March 29, 2012
  • 08:04 AM
  • 703 views

Colour clash

by Zen Faulkes in Better Posters

What should you wear to a poster session? Never mind the formal versus casual versus comfortable dilemma: what colour should you wear? A nearly decade-old paper making the social media rounds last week suggests looking at your poster while picking your outfit. Several people, knowing I’m the poster blog guy, asked me what I thought.

The authors themselves write:

Our study had several limitations.

This is an understatement. This study, by Keegan and Bannister, is almost nothing but limitations. It’s as lightweight as this: [image]

The hypothesis is that having the presenter’s clothes match the colour of the poster will result in more visits to the poster. The design of the experiment is actually not bad. The authors show a statistically significant decrease in poster visits when the presenter was wearing clothing that did not match the colour of the poster. This is also supported by ad hoc observations of poster visitors: 5 people were overheard by the observer during the clashing-attire phase to say that the presenter’s blouse did not match her poster, and none visited the poster.

Let’s run through some interpretive issues.

1. The test poster had not one but four colours on it: blue, lavender, green and yellow. This makes it tricky to say the lavender blouse “matched” the poster colour. It matched a colour, but not all. I took the colour with the largest surface showing in the picture, and also closest to the presenter’s eye level (green), and placed it into Kuler. The picture below shows the suggested complementary colours using the “triad” model. [image] The colours at the end, a purple and a gold, are not too different from the other colours in the poster. This suggests that coordination might be a more appropriate description than simple matching. If I take the same base green and select a “complementary” colour scheme: [image] I get a suggestion for an orange-brown that is not too far off from the rust worn as the “clashing” colour. Designers often use such contrast colours to make subjects “pop”. For example, red rose petals look redder next to the green leaves of the rose.
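(For the curious, the colour-wheel rule behind a "complementary" scheme like Kuler's is just a half-turn hue rotation; a minimal sketch in Python, using an arbitrary stand-in green rather than the actual colour measured from the poster:)

```python
import colorsys

def complementary(hex_rgb):
    """Rotate hue by half a turn in HLS space: the standard
    colour-wheel notion of a complement."""
    r, g, b = (int(hex_rgb[i:i + 2], 16) / 255 for i in (0, 2, 4))
    h, l, s = colorsys.rgb_to_hls(r, g, b)
    r2, g2, b2 = colorsys.hls_to_rgb((h + 0.5) % 1.0, l, s)
    return "".join(f"{round(c * 255):02x}" for c in (r2, g2, b2))

# A stand-in green (not the poster's colour) -> a reddish complement.
print(complementary("2e8b57"))
```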
2. There is no control for the clothing colour itself, isolated from the matching to the poster. The authors acknowledge this, noting:

People may have decided to not visit the poster... because they did not like the rust blouse regardless of whether it coordinated with the poster(.)

There is no accounting for what colours look good on this presenter. Some people look great in particular colours and terrible in others (I happen to think I look stunning in purple). This is particularly an issue given that the clashing colour is nearly red. There have been many suggestions that red signals all sorts of things in humans (Changizi et al. 2006, Elliot et al. 2007, Elliot & Niesta 2008), including aggression (Hill & Barton 2005), though the latter has been contentious. Just wearing something reddish may drive away visitors. To test this, you’d have to do an experiment with a red poster; if Keegan and Bannister’s paper is correct, you’d predict lower numbers of visits with the presenter wearing a lavender blouse.

3. Joshua Drew asks:

(W)hat about if it was a dude who was clashing?

Indeed. Entire libraries have been written about gender expectations, particularly with regard to appearances. It’s an open question whether this effect would persist if the presenter were male.

4. The definition of a “visitor” at the poster includes people merely looking at it, which would include glances from people walking by. It would be interesting to see the data broken down by those looking versus the number talking to the presenter. The numbers would be smaller, but might be a more meaningful measure of poster popularity. Especially given that we don’t know the details of the conference: if it’s a busy conference in a small space, with tables in the middle of the walkway (I’ve seen this), you almost can’t look at anything else, because you’re stuck in foot traffic.

5. Two presenters at two posters at one meeting. The sample is tiny. This is barely even a preliminary study. The authors themselves admit:

It would have been ideal to have conducted this study during several poster sessions; however, funding limited us to one medical education conference, which had only one poster session.

Given the large number of people who go to multiple conferences a year, perhaps the authors could have rounded up some more volunteers rather than going it alone.

This paper has been cited three times according to Web of Knowledge, and eight times according to Google Scholar. But nobody ought to take this research too seriously yet. This needs replication and a more robust study design. If anyone wants to collaborate on replicating and extending this experiment, let me know. In the meantime, remember that black goes with everything.

References

Changizi MA, Zhang Q, Shimojo S. 2006. Bare skin, blood and the evolution of primate colour vision. Biology Letters 2(2): 217-221. http://dx.doi.org/10.1098/rsbl.2006.0440

Elliot AJ, Maier MA, Moller AC, Friedman R, Meinhardt J. 2007. Color and psychological functioning: The effect of red on performance attainment. Journal of Experimental Psychology: General 136(1): 154-168.

Elliot AJ, Niesta D. 2008. Romantic red: Red enhances men's attraction to women. Journal of Personality and Social Psychology 95(5): 1150-1164.

Hill RA, Barton RA. 2005. Psychology: Red enhances human performance in contests. Nature 435(7040): 293-293. http://dx.doi.org/10.1038/435293a

Keegan DA, Bannister SL. 2003. Effect of colour coordination of attire with poster presentation on poster popularity. Canadian Medical Association Journal 169(12): 1291-1292. PMID: 14662667. http://www.cmaj.ca/content/169/12/1291.full

Related links

Dress sense

Google Plus discussions here and here.

Hat tip to Liz Neeley for this lead.

Photo by Neal. on Flickr; used under a Creative Commons license.... Read more »

  • March 27, 2012
  • 12:01 AM
  • 628 views

Writing a Good Review

by agoldstein in WiSci

Andrew Moore, Editor-in-Chief of the review-and-discussion journal BioEssays, discusses the perks and pitfalls of writing a good review.... Read more »

  • March 26, 2012
  • 08:00 AM
  • 471 views

Seeing is Believing: The Story Behind Henry Heinz’s Condiment Empire

by Krystal D'Costa in Anthropology in Practice

Do me a favor: Go open your refrigerator and look at the labels on your condiments. Alternatively, if you’re at work, open your drawer and flip through your stash of condiment packets. (Don’t look at me like that. I know you have a stash. Or you know where to find one. It’s practically Office Survival [...]... Read more »

  • March 21, 2012
  • 12:13 PM
  • 608 views

Science Integrators

by Matt & Cris in Originus

Andrew Moore, editor in chief of BioEssays, recently published a piece that makes so much sense it will probably never … Read more »

  • March 21, 2012
  • 08:00 AM
  • 640 views

The myth of fingerprints

by Zen Faulkes in NeuroDojo

Could you have made a mistake?

If you are a fingerprint examiner in court giving testimony, the answer was once, “No,” according to Mnookin (2001).

(T)he primary professional organization for fingerprint examiners, the International Association for Identification, passed a resolution in 1979 making it professional misconduct for any fingerprint examiner to provide courtroom testimony that labeled a match “possible, probable or likely” rather than “certain.”
(I’ve been unable to find out if this is still true.)

This paper by Ulery and colleagues is a follow-up to a paper published last year on fingerprint analysis. The previous paper found that 85% of fingerprint examiners made mistakes in which two fingerprints were judged to be from different people, when in fact they were from the same person (false negatives). There was much more analysis, but you get the idea.

The researchers wanted to see how consistent the decisions were after time had passed. For this paper, they used some of the same fingerprint examiners that had been tested before (72 of the 169 from the previous paper). It had been seven months since the fingerprint examiners had seen these prints. They were all prints that they’d seen for the previous research, but Ulery and colleagues didn’t tell them that.

Because the experimenters wanted to see if examiners who had made a mistake before would make the same mistakes again, the choice of which pairs of fingerprints to present was somewhat complicated. But all examiners saw nine pairs of fingerprints that were not matched (from different people) and sixteen pairs that were matched (same person). It’s also important to note that the fingerprint pairs were chosen in part because they were difficult.

In the original test, the fingerprint examiners only rarely said two fingerprints were from the same person when they weren’t (false positives). On the retest, there were no cases of false positives, either repeated mistakes from the previous test or entirely new mistakes.

The reverse mistake, the false negative, was more common. Of the false-negative errors made in the previous paper, about 30% were made again in the new study. And the examiners made new mistakes that hadn’t been made before.

There is some good news here, however: in some cases, the examiners’ ratings of difficulty were correlated with the probability that they would make the same decisions as before. But the examiners’ ratings of difficulty only weakly predicted the errors that they made.

Another important finding is evidence that the best way to reduce errors is to have fingerprints examined by multiple people, rather than multiple examinations by the same person. The authors write:

Much of the observed lack of reproducibility is associated with prints on which individual examiners were not consistent, rather than persistent differences among examiners.
Nevertheless, even with two examiners checking fingerprints, Ulery and colleagues estimate that 19% of false negatives would not be picked out by having another examiner check the prints.
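That 19% figure says something about how correlated examiner errors are: if a second examiner's errors were independent of the first's, the survival rate of false negatives under double-checking would simply equal the per-comparison false-negative rate. A back-of-envelope sketch, where the 10% single-examiner rate is an invented stand-in, not a number from the paper:

```python
# If errors were independent, the chance a second examiner repeats a
# given false negative would just be the per-comparison FN rate.
p_fn_single = 0.10        # hypothetical single-examiner FN rate (invented)
independent_survival = p_fn_single

observed_survival = 0.19  # Ulery et al.'s estimate, quoted above

print(f"independent-errors prediction: {independent_survival:.0%} of FNs survive")
print(f"observed estimate:             {observed_survival:.0%} of FNs survive")
# A gap like this suggests misses cluster on particular (difficult)
# prints rather than occurring independently per examiner.
```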

These papers all concern decisions made by experts, which is obviously the logical place to start from a policy and pragmatic point of view. As an exercise in seeing how expertise develops, it would be interesting to see if beginners showed the same types of patterns in decision making.

References

Mnookin JL. 2001. Fingerprint evidence in an age of DNA profiling. Brooklyn Law Review 67: 13.

Saks M. (2005). The coming paradigm shift in forensic identification science Science, 309 (5736), 892-895 DOI: 10.1126/science.1111565

Ulery B, Hicklin R, Buscaglia J, & Roberts M (2012). Repeatability and Reproducibility of Decisions by Latent Fingerprint Examiners PLoS ONE, 7 (3) DOI: 10.1371/journal.pone.0032800

Photo by Vince Alongi on Flickr; used under a Creative Commons license.... Read more »

  • March 15, 2012
  • 05:55 AM
  • 1,019 views

Is Your Newspaper Making You Ignorant?

by Neurobonkers in Neurobonkers

Why does there appear to be such a strong correlation between newspaper circulation and bullshit?... Read more »

Frankfurt, H. (2005) On Bullshit. Princeton University Press.

  • March 13, 2012
  • 06:56 PM
  • 647 views

5 things CSI gets right

by Stuart Farrimond in Guru: Science Blog

For Brits, this week sees the return of everybody’s favourite team of armed Police/crime scene/forensic scientist hybrids: the night shift of the Las Vegas Crime Scene Investigation dept. (UK Channel 5, Tuesdays 9PM). Now entering its 12th season – it’s even been around since ‘seasons’ were called ‘series’ – CSI is the most watched TV [...]... Read more »

Durnal, E. (2010) Crime scene investigation (as seen on TV). Forensic Science International, 199(1-3), 1-5. DOI: 10.1016/j.forsciint.2010.02.015  

  • March 13, 2012
  • 03:23 PM
  • 490 views

Unrequited love: What to do when the feeling isn’t mutual

by eHarmony Labs in eHarmony Labs Blog

What happens when you spill your guts and declare your love to a friend, only to find your advance is unrequited? Can the friendship be saved, or is it doomed? Read on to find out how to predict what will happen to your friendship.... Read more »

  • March 13, 2012
  • 02:00 PM
  • 1,166 views

On March 15, 5 suborbital sounding rockets are scheduled to launch from the NASA Wallops Facility, VA

by Olga Vovk in Milchstraße

This is part of a study of the upper level jet stream located in the mesosphere.

These five rockets will release an aluminum-based chemical into the upper layers of the atmosphere (the mesosphere) that will form milky-white clouds that trace winds in space. These clouds may be visible to the public for up to 20 minutes, for East Coast residents from southern New Hampshire and Vermont down to South Carolina.... Read more »

Larsen, M. F., and C. G. Fesen. (2009) Accuracy issues of the existing thermospheric wind models: Can we rely on them in seeking solutions to wind-driven problems?. Ann. Geophys., 27, 2277–2284.

  • March 9, 2012
  • 04:46 AM
  • 885 views

A Yale Professor's Rampage on PLoS and a Group That Failed To Replicate His Research

by Neurobonkers in Neurobonkers

John Bargh, a Professor of Psychology and Cognitive Science at Yale University, has written a blog post that’s currently receiving a thorough dressing-down from the academic community. The blog post, titled “Nothing in Their Heads”, is a scathing ad hominem attack on a research group that failed to replicate his research. The opening gambit is an attack on, well, the entire academic community.... Read more »

Doyen S, Klein O, Pichon CL, & Cleeremans A. (2012) Behavioral priming: it's all in the mind, but whose mind?. PloS one, 7(1). PMID: 22279526  

Bargh, J., Chen, M., & Burrows, L. (1996) Automaticity of Social Behaviour: Direct Effects of Trait Construct and Stereotype Activation on Action. Journal of Personality and Social Psychology. http://www.yale.edu/acmelab/articles/bargh_chen_burrows_1996.pdf
