I am still abroad right now, but I nonetheless want to keep the share of German authors in these articles high. So I present to you the second article from German authors in just one week. On Monday I talked about Supply Chain Risk Management in the German Automotive Industry; today's article covers how supply chain risk management is performed during a financial crisis.... Read more »
Blome, C., & Schoenherr, T. (2011) Supply chain risk management in financial crises - A multiple case-study approach. International Journal of Production Economics.
The niche is an important idea in ecology that allows us to think about how organisms relate to their environment. Despite its importance, our understanding of the niche has evolved throughout history, which can lead to confusion about what a niche really is. By reflecting on the evolution of the idea, we can appreciate its importance and identify new pathways to explore in the future.... Read more »
Elton, CS. (1927) Animal Ecology. University of Chicago Press. info:/
Holt, R. (2009) Bringing the Hutchinsonian niche into the 21st century: Ecological and evolutionary perspectives. Proceedings of the National Academy of Sciences, 106(Supplement_2), 19659-19665. DOI: 10.1073/pnas.0905137106
Pulliam, H. (2000) On the relationship between niche and distribution. Ecology Letters, 3(4), 349-361. DOI: 10.1046/j.1461-0248.2000.00143.x
Antipsychotics, originally designed to control the hallucinations and delusions seen in schizophrenia, have been expanding their domain in recent years. Nowadays, they're widely used in bipolar disorder and depression, and, as a new paper reveals, increasingly in anxiety disorders as well.

The authors, Comer et al, looked at the NAMCS survey, which provides yearly data on the use of medications in visits to office-based doctors across the USA. Back in 1996, just 10% of visits in which an anxiety disorder was diagnosed ended in a prescription for an antipsychotic. By 2007 it was over 20%. No atypical is licensed for use in anxiety disorders in the USA, so all of these prescriptions are off-label.

Not all of these prescriptions will have been for anxiety: some may have been intended to treat psychosis in people who also happened to be anxious. However, the increase was accounted for by the rise in non-psychotic patients, and there was a rise even among patients with only anxiety disorders. The increase was driven by the newer, "atypical" antipsychotics.

Whether the modern trend of prescribing antipsychotics for anxiety is a good or a bad thing is not for us to say. The authors discuss various concerns, ranging from the side effects (obesity, diabetes and more) to the fact that there have been only a few clinical trials of these drugs in anxiety.

But what's really disturbing about these results, to me, is how fast the change happened. Between 2000 and 2004, use doubled from 10% to 20% of anxiety visits. That's an astonishingly fast change in medical practice. Why? It wasn't because that period saw the publication of a load of large, well-designed clinical trials demonstrating that these drugs work wonders in anxiety disorders. It didn't. But as Comer et al put it:

An increasing number of office-based psychiatrists are specializing in pharmacotherapy to the exclusion of psychotherapy.
Limitations in the availability of psychosocial interventions may place heavy clinical demands on the pharmacological dimensions of mental health care for anxiety disorder patients.

In other words, antipsychotics may have become popular because they're the treatment for people who can't afford anything better. These data show that antipsychotics were over twice as likely to be prescribed to African American patients, to the poor (i.e. patients with public health insurance), and to children under 18.... Read more »
Comer JS, Mojtabai R, & Olfson M. (2011) National Trends in the Antipsychotic Treatment of Psychiatric Outpatients With Anxiety Disorders. The American journal of psychiatry. PMID: 21799067
The Yakut community of Eastern Siberia has gained some attention from anthropologists because it culturally stands out from other Siberian populations. Their Turkic language, unique burial practices, and horse-breeding culture are not native to Siberia. Recent genetic analysis of 58 bodies preserved in permafrost from the last five centuries and 166 current members of the [...]... Read more »
Thèves, C., Senescau, A., Vanin, S., Keyser, C., Ricaut, F., Alekseev, A., Dabernat, H., Ludes, B., Fabre, R., & Crubézy, E. (2011) Molecular Identification of Bacteria by Total Sequence Screening: Determining the Cause of Death in Ancient Human Subjects. PLoS ONE, 6(7). DOI: 10.1371/journal.pone.0021733
Crubézy E, Amory S, Keyser C, Bouakaze C, Bodner M, Gibert M, Röck A, Parson W, Alexeev A, & Ludes B. (2010) Human evolution in Siberia: from frozen bodies to ancient DNA. BMC evolutionary biology, 25. PMID: 20100333
by Vincent Racaniello in virology blog
Thirty years ago this month I did an experiment that set the course of my career, and provided an important step forward for animal virology. I showed that a cloned DNA copy of the poliovirus RNA genome is infectious in mammalian cells. When I arrived as a postdoctoral fellow in the laboratory of David Baltimore [...]... Read more »
Racaniello, V., & Baltimore, D. (1981) Cloned poliovirus complementary DNA is infectious in mammalian cells. Science, 214(4523), 916-919. DOI: 10.1126/science.6272391
Throughout my life I’ve been lucky to be friends with a diverse array of people who have had quite varied past experiences. There are those few friends with "charmed" lives: healthy family, happy home, found "the one" with little difficulty. There are others who have experienced major past adversity: the loss of a parent, a debilitating rejection, chronic poverty. This variability has often made me wonder about the relationship between past experiences and whether one responds to current life adversity with vulnerability or resilience. If faced with a new crisis, who will display the resilient response: 1) my friend who has never experienced any adversity, or 2) my friend who has experienced too much adversity? There are convincing arguments to be made for either case. My friend who never experienced adversity might have a strong social support network and a positive outlook on life, but might lack the necessary skills and toughness needed to get through a traumatic event. My friend who experienced too much adversity might be stressed and depleted from their past experiences, but might have developed that toughness and those skills that my “charmed” friend lacks. So what’s the answer?
In 2010 Mark Seery, a professor at the State University of New York at Buffalo, along with colleagues Alison Holman and Roxane Cohen Silver, tackled this question. Specifically, they assessed whether past adversity is associated with 1) worse mental health and well-being outcomes over time, and 2) how one responds to a recent adverse event.
Read More->... Read more »
Seery MD, Holman EA, & Silver RC. (2010) Whatever does not kill us: cumulative lifetime adversity, vulnerability, and resilience. Journal of personality and social psychology, 99(6), 1025-41. PMID: 20939649
We’ve written before about the strong influence of dietary choice on greenhouse gas emissions. A recent study in Agricultural Systems took a look at the land use effects of different scenarios of meat consumption and livestock productivity. The study concludes that a “faster-yet-feasible” growth in livestock productivity, together with a substitution of pork and poultry for ruminant meat, reduces global agricultural land use by about 20 percent, from 5.4 billion ha to 4.4 billion ha in 2030.
Yields of annual crops are usually considered when we talk about meeting the world’s growing demand for food, but this study looks at some hitherto less investigated options: “1) increasing the efficiency of the entire food chain from ‘field to fork’; 2) changing diets toward food commodities requiring less land; and 3) increasing the yields of pastures.”
The researchers use the physical ALBIO (Agricultural Land Use and Biomass) model to study the effects of food consumption trends, livestock and crop productivity, and efficiency in food industry and trade, among others. To model a change in meat consumption, the researchers substitute 20 percent of per capita beef consumption with the same amount of pork and poultry. One step further, the study also models a minor vegetarian transition in regions with high per capita meat consumption (> 70 kg per capita per year). The magnitude of the changes was constrained to keep the results realistic.
The result of the lower-beef scenario is that global agricultural land use in 2030 would fall from 5.4 billion ha to 4.4 billion ha, “mainly due to substantial decreases in permanent pasture area.”
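The substitution arithmetic can be sketched in a few lines. To be clear, this is not the ALBIO model itself: the per-tonne land-use factors below are invented purely for illustration, and capture only the fact that ruminant meat needs far more land per tonne than pork or poultry.

```python
# Toy sketch of a meat-substitution scenario, loosely inspired by the
# study's setup. The land-use factors are hypothetical illustration
# values, not outputs of the ALBIO model.

LAND_USE_HA_PER_TONNE = {
    "beef": 20.0,     # ruminant meat: pasture plus feed cropland
    "pork": 4.0,      # monogastrics need far less land per tonne
    "poultry": 3.0,
}

def land_use(consumption_tonnes):
    """Total agricultural land (ha) implied by meat consumption in tonnes."""
    return sum(LAND_USE_HA_PER_TONNE[m] * t for m, t in consumption_tonnes.items())

def substitute_beef(consumption_tonnes, fraction=0.20):
    """Shift `fraction` of beef consumption to pork and poultry, half each."""
    shifted = dict(consumption_tonnes)
    moved = shifted["beef"] * fraction
    shifted["beef"] -= moved
    shifted["pork"] += moved / 2
    shifted["poultry"] += moved / 2
    return shifted

baseline = {"beef": 100.0, "pork": 100.0, "poultry": 100.0}
scenario = substitute_beef(baseline)
print(land_use(baseline), land_use(scenario))
```

Even with these toy factors, moving 20 percent of beef consumption to pork and poultry cuts total land demand by roughly 12 percent; the study's larger figure also folds in productivity growth and regional pasture detail.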
Even though this transition from ruminant meat to pork and poultry is hypothetical, this trend already is taking place to some degree. Moreover, it’s a trend “that could be boosted by upcoming factors, including increasing prices of agricultural land and feedstuffs, and implementation of stricter environmental and climate policies.” However, the scenario in which there is “a partial substitution of vegetable food for meat cannot be motivated by referring to recent trends.”
... Read more »
Wirsenius, S., Azar, C., & Berndes, G. (2010) How much land is needed for global food production under scenarios of dietary changes and livestock productivity increases in 2030?. Agricultural Systems, 103(9), 621-638. DOI: 10.1016/j.agsy.2010.07.005
A team of researchers at Oak Ridge National Laboratory provide us this month in Ecological Indicators with a set of indicators that collectively represent how bioenergy systems may affect environmental sustainability. “This suite is intended as a basis or starting point for the selection of indicator suites for particular situations, which may require a subset or expansion of this proposed indicator suite.”
The study seeks to empirically measure environmental effects rather than rely on inferring such effects through assessment of management practices. “Ideally, a comparison between indicator values and baseline conditions should reveal the marginal environmental effects of a bioenergy system.”
The 19 proposed indicators cover six environmental categories:
Greenhouse gases: CO2 equivalent (kg CO2 / GJ)
Productivity: aboveground net primary productivity (g C/m2/yr)
Soil quality: total organic carbon (Mg/ha), total nitrogen (Mg/ha), extractable phosphorus (Mg/ha), bulk density (g/cm3)
Water quality and quantity: nitrate, phosphorus, suspended sediment and herbicide concentrations in streams (mg/L), peak storm flow (L/s), minimum base flow (L/s), consumptive water use (m3/ha/day)
Biodiversity: presence of taxa of special concern (presence), habitat area of taxa of special concern (ha)
Air quality: tropospheric ozone (ppb), carbon monoxide (ppm), total particulate matter (PM2.5 and PM10)
... Read more »
McBride, A., Dale, V., Baskaran, L., Downing, M., Eaton, L., Efroymson, R., Garten, C., Kline, K., Jager, H., Mulholland, P.... (2011) Indicators to support environmental sustainability of bioenergy systems. Ecological Indicators, 11(5), 1277-1289. DOI: 10.1016/j.ecolind.2011.01.010
July was the hottest month ever recorded in Washington, D.C., in Oklahoma City, Oklahoma, and in Wichita Falls, Texas, as measured by the National Weather Service. In fact, the NWS has issued an “excessive heat warning” for a huge swath of middle America extending from northwestern Illinois and central Iowa in the north to central Texas in the south. The Centers for Disease Control and Prevention warn each year that people can easily become ill or even die from extreme heat: from 1979 to 2003, 8,015 people died due to excessive heat exposure. In fact, more people in the US died from heat exposure during that 24-year period than from hurricanes, lightning, tornadoes, floods, and earthquakes combined.... Read more »
Monif AlRashidi, András Kosztolányi, Mohammed Shobrak, Clemens Küpper, & Tamás Székely. (2011) Parental cooperation in an extreme hot environment: natural behaviour and experimental evidence. Animal Behaviour, 235-243. DOI: 10.1016/j.anbehav.2011.04.019
by Travis Saunders, MSc, CEP in Obesity Panacea
Image by mhowry
Travis’ Note: Today’s post comes from PhD Student Ash Routen. You can find out more about Ash and his work at the bottom of this post.
Consistent with the majority of developed countries, a significant proportion of children here in the UK are overweight or obese (around 30% of 10-11 year olds as of 2010). How do we know this? Well, since 2005 the UK Department of Health has been operating the ‘National Child Measurement Programme’ (NCMP), a nationwide public health surveillance initiative which, to the best of my knowledge, is the biggest in Europe. Annually, over one million children (aged 4-5 and 10-11 years) have their height and weight measured by teams of school nurses; these measures are then used to calculate body mass index (BMI), and in turn BMI-determined weight status. These data are used to inform local planning and delivery of weight intervention/healthy lifestyle services, and to produce national overweight and obesity prevalence figures.
As BMI differs between genders and increases with age during childhood, the NCMP categorises children’s weight status as underweight, healthyweight, overweight or obese by comparing their BMI to children of the same age and gender using a BMI growth reference chart. So, children above the 91st percentile (i.e. with a higher BMI than 91% of children in the reference sample) are defined as overweight, and those above the 98th as obese. In most regions, the child’s weight status is fed back to the parents and children by letter, with information on local weight management initiatives if the child is categorised as underweight, overweight, or obese. Of course you can imagine the furore of some parents, and thus the NCMP has attracted quite a lot of negative media attention as a result (e.g. http://www.telegraph.co.uk/health/children_shealth/7514267/Letter-to-fat-four-year-old-prompts-complaint-from-obesity-group.html). As such, there is great onus on ensuring the quality of the data and identifying any sources of potential ‘error’, which include human error (i.e. the reliability of the nurses’ measurements), technical error (i.e. the reliability of the scales/height measuring device) and biological ‘error’ (i.e. both daily and monthly variation in BMI).
I was interested in the impact of the time of day when the measurements are conducted. The nurses follow a standardised protocol, but can take the measurements at any time of day (and indeed the month of measurement may also vary). We know that our weight fluctuates throughout the day, and that we shrink a little after rising due to gravity pushing us back down! What we didn’t know was whether combined variation in these measures would result in a change in BMI. Who cares, right? Children would be at no more or less risk of adverse health if their BMI shifts a little… but could it be enough for those who are on the cusp of a BMI weight category (e.g. the 90.5th percentile) to be classified differently depending on the time of day they are measured?
What did we do?
To investigate this issue we took a sample of 74 children (aged 10-11 years) and measured their height and weight in the morning (0900-1045 hr) and again in the afternoon (1300-1500 hr). From this we calculated their BMI, BMI percentile and weight status category using two sets of BMI percentile cut-offs: the clinical cut-offs (overweight: 91st; obese: 98th) used by the NCMP and clinicians, and the population monitoring cut-offs used mainly by researchers (85th and 95th centiles).
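A minimal sketch of the two classification schemes described above. The cut-off values come from the post; the function and its names are my own, and the underweight cut-off is omitted for brevity.

```python
# Sketch of NCMP-style weight-status classification by BMI percentile.
# Cut-off values as described in the post; function names are mine.

def weight_status(bmi_percentile, clinical=True):
    """Classify a BMI percentile using clinical (91st/98th) or
    population-monitoring (85th/95th) cut-offs."""
    overweight, obese = (91, 98) if clinical else (85, 95)
    if bmi_percentile > obese:
        return "obese"
    if bmi_percentile > overweight:
        return "overweight"
    return "healthyweight"  # underweight cut-off omitted for brevity

# The "cusp" case from the post: the 90.5th percentile sits just under
# the clinical overweight threshold but above the monitoring one.
print(weight_status(90.5))                  # clinical cut-offs
print(weight_status(90.5, clinical=False))  # population monitoring cut-offs
```

Note how the same child can fall into different categories under the two schemes, which is exactly why a small measurement-time shift matters near the thresholds.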
What did we find?
Not surprisingly, all the children were shorter in the afternoon (-0.5 cm); however, only girls were heavier (+0.1 kg), while BMI (+0.12 kg/m2) and BMI percentile (+2.5 centiles) were greater in all children. In terms of weight status categories there were no shifts in the number of children in each category from morning to afternoon, but at the individual level there were some interesting findings. Applying the clinical BMI cut-offs, one girl moved from healthyweight to overweight; using the population monitoring cut-offs, two girls moved from the healthyweight to the overweight category, and one moved from the overweight to the obese category, with BMI increases of only 0.30, 0.55 and 0.26 kg/m2, respectively.
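The arithmetic behind these shifts is just BMI = weight / height^2. A quick sketch, using an illustrative starting height and weight (the 35 kg and 1.40 m are made-up values for a 10-11 year old) together with the average morning-to-afternoon changes reported above:

```python
# How a -0.5 cm height change and +0.1 kg weight change move BMI.
# Starting values are illustrative; the deltas are the averages
# reported in the post.

def bmi(weight_kg, height_m):
    return weight_kg / height_m ** 2

morning = bmi(35.0, 1.400)
afternoon = bmi(35.0 + 0.1, 1.400 - 0.005)
print(round(morning, 2), round(afternoon, 2), round(afternoon - morning, 2))
```

For this illustrative child the afternoon BMI comes out a couple of tenths of a kg/m2 higher, which is comparable to the 0.26-0.55 kg/m2 increases that were enough to move real children across a category boundary.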
What are the implications?
We saw that it only takes a height loss of about 1 cm and a weight increase of about 150 g to shift a girl’s BMI category if she is near to the cut-off threshold. As only a few individuals changed category (and this was a small sample) the results may seem inconsequential. However, both at the individual and the national level there may be some impact. Nationally, comparisons of prevalence data (and thus the future direction of resources) between schools and regions may be clouded (and this is where I speculate) if a greater proportion of children are measured either in the morning or the afternoon. Whilst extrapolating the present observations, using the clinical BMI cut-offs (which the NCMP use in parental feedback), to the potential impact on the national NCMP data is tenuous, it is worthy of consideration. As the time of day when measurements are taken is not standardised or recorded by the NCMP, suppose that 50% of the measurements are performed in the morning and 50% in the afternoon. If one in every 27 (3.7%) healthyweight girls measured in the morning were on the cusp of overweight (as we found in our sample), they could well have been categorised as overweight had they been measured in the afternoon. Out of the 162,640 healthyweight girls measured by the NCMP in 2009/10 this would represent 6017 girls being classified as overweight instead of healthyweight, which hinders a ‘true’ assessment of prevalence data.
Arguably of more importance is that potentially 6017 parents and children would be informed that their child is overweight, due to the misfortune of being measured in the afternoon as opposed to the morning. We know that children labelled as overweight may be at greater risk of stigmatisation, teasing and anxiety; it is not unimaginable, therefore, that such a letter could trigger unhealthy activity and dietary habits and unnecessary parental intervention. For all the useful information such screening programmes provide us researchers, we must be cognisant of the discourse surrounding the issue of childhood obesity and consider the impact of such surveillance programmes on individual children and families (see Michael Gard’s work for a thought-provoking viewpoint: http://bod.sagepub.com/content/13/4/118.extract).
What can we do?
In our paper we conclude that, to increase data reliability, the time of day at which the measurements are performed should arguably be standardised (to either morning or afternoon) by the NCMP, and indeed by any public health surveillance programme that does not currently standardise measurement timing – this would at least ensure that all children are treated equitably. We do not have one ‘true BMI’, but fluctuate about a mean value on a daily and weekly basis. Therefore, for the purposes of comparison and analysis of trends year-on-year, we should choose either to measure in the morning or in the afternoon. At the individual level, however, it appears wise to ensure that children are measured in the morning, to avoid unfavourable shifts in weight category and the associated psychosocial implications of labelling. Standardising the timing of the measurements is one simple revision to the procedures of such surveillance programmes that could help to limit the impact of at least one potential ‘error’ variable.
About the author: Ash Routen is in the final months of his doctoral studies at the University of Worcester, UK examining the impact of pedometer interventions on habitual PA in kids, with an interest in the assessment of body composition and objective physical activity measurement in kids. He can be found on Twitter @AshRouten.
... Read more »
Routen, A., Edwards, M., Upton, D., & Peters, D. (2011) The impact of school-day variation in weight and height on National Child Measurement Programme body mass index-determined weight category in Year 6 children. Child: Care, Health and Development, 37(3), 360-367. DOI: 10.1111/j.1365-2214.2010.01204.x
According to a forthcoming article published in Forbes, excerpts of which appear on Matthew Herper’s blog “The Medicine Show,” big pharma should take bigger risks and outsource R&D to smaller, innovative companies. At least that’s the philosophy of Bernard Munos, … Continue reading →... Read more »
Munos, B., & Chin, W. (2011) How to Revive Breakthrough Innovation in the Pharmaceutical Industry. Science Translational Medicine, 3(89), 89-89. DOI: 10.1126/scitranslmed.3002273
It’s not just Pontius Pilate and Lady Macbeth: all of us feel better with clean hands. The disgust literature is everywhere these days. As it turns out, disgust is a powerful emotional motivator. Researchers recently attempted to see if being even minimally involved in activities that brought participants into contact with religious beliefs different from their own [...]
Related posts:Choosing to either disgust your jurors or tick them off
Eww! That is just disgusting! (but…very interesting)
Deliberations: Jurors think and feel as they make decisions
... Read more »
Ritter, RS, & Preston, JL. (2011) Gross gods and icky atheism: Disgust responses to rejected religious beliefs. Journal of Experimental Social Psychology.
The aftermath of the EPS Conference is quite exciting in one respect. Higgs hunting points in an unexpected direction, even if some residue of an old expectation is still there. I just want to show you the graphs from this conference from the Tevatron and the LHC. From these it is very clear that the excluded range [...]... Read more »
There is a common view that the human genome has two different parts – a “constant” part and a “variable” part. According to this view, the bases of DNA in the constant part are the same across all individuals. They are said to be “fixed” in the population. They are what make us all human – they differentiate us from other species. The variable part, in contrast, is made of positions in the DNA sequence that are “polymorphic” – they come in two or more different versions. Some people carry one base at that position and others carry another. The idea is that it is the particular set of such variations that we inherit that makes us each unique (unless we have an identical twin). According to this idea, we each have a hand dealt from the same deck.

The genome sequence (a simple linear code made up of 3 billion bases of DNA in precise order, chopped up onto different chromosomes) is peppered with these polymorphic positions – about 1 in every 1,250 bases. That makes about 2,400,000 polymorphisms in each genome (and we each carry two copies of the genome). That certainly seems like plenty of raw material, with limitless combinations that could explain the richness of human diversity. This interpretation has fuelled massive scientific projects to try and find which common polymorphisms affect which traits. (Not to mention personal genomics companies who will try to tell you your risk of various diseases based on your profile of such polymorphisms.)

The problem with this view is that it is wrong. Or at least woefully incomplete. The reason is that it ignores another source of variation: very rare mutations in those bases that are constant across the vast majority of individuals. There is now very good evidence that it is those kinds of mutations that contribute most to our individuality. Certainly, they are much more likely to affect a protein’s function and much more likely to contribute to genetic disease.
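The polymorphism count quoted above is simple to check: one polymorphic site roughly every 1,250 bases across a 3-billion-base genome gives about 2.4 million per genome copy.

```python
# Sanity check of the polymorphism count quoted in the text.
genome_bases = 3_000_000_000     # approximate haploid human genome length
polymorphism_spacing = 1_250     # roughly one polymorphic site per 1,250 bases
print(genome_bases // polymorphism_spacing)  # 2400000
```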
We each carry hundreds of such rare mutations that can affect protein function or expression and are much more likely to have a phenotypic impact than common polymorphisms. Indeed, far from most of the genome being effectively constant, it can be estimated that every position in the genome has been mutated many, many times over in the human population. And each of us carries hundreds of new mutations that arose during generation of the sperm and egg cells that fused to form us. New mutations may spread in the pedigree or population in which they arise for some time, depending in part on whether they have a deleterious effect or not. Ones that do will likely be quickly selected against.

A new paper from the 1000 Genomes Project consortium shows that “the vast majority of human variable sites are rare and that the majority of rare variants exhibit, at most, very little sharing among continental populations”. This is a much more fluid picture of genetic variation than we are used to. We are not all dealt a genetic hand from the same deck – each population, sub-population, kindred, and nuclear family has a distinct set of rare genetic variants. And each of these decks contains a lot of jokers – the new mutations that arise each time a hand is dealt.

Why have such rare mutations generally been ignored while the polymorphic sites have been the focus of intense research? There are several reasons, some practical and some theoretical. Practically, it has until recently been almost impossible to systematically find very rare mutations. To do so requires that we sequence the whole genome, which has only recently become feasible. In contrast, methods to survey which bases you carry at all the polymorphic sites across the genome were developed quite some time ago now and are relatively cheap to use.
(They rely on sampling about 500,000 such sites around the genome – because of unevenness in the way different bits of chromosomes get swapped when sperm and eggs are made, this sample actually tells you about most of the variable sites across the whole genome.) So, there has been a tendency to argue that polymorphic sites will be major contributors to human phenotypes (especially diseases) because those have been the only ones we have been able to look at.

Unfortunately, the results of genome-wide association studies, which aim to identify common variants associated with traits or diseases, have been disappointing. This is especially true for disorders with large effects on fitness, such as schizophrenia or autism. Some variants have been found but their effects, even in combination, are very small. Most of the heritability of most of the traits or diseases examined to date remains unexplained. (There are some important exceptions, especially for diseases that strike only late in life and for things like drug responses, where selective pressures to weed out deleterious alleles are not at play.) In contrast, many more rare mutations causing disease are being discovered all the time, and the pace of such discoveries is likely to increase with technological advances. The main message that emerges from these studies has been called by Mary-Claire King the “Anna Karenina principle”, based on Tolstoy’s famous opening line: “Happy families are all alike; every unhappy family is unhappy in its own way.”

But can such rare variants really explain the “missing heritability” of these disorders? Some people have argued that they cannot, but this seems to me to be based on a pervasive misconception of how the heritability of a trait is measured and what it means. According to this misconception, if a trait is heritable across the population, that heritability cannot be accounted for by rare variants.
After all, if a mutation only occurs in one or a few individuals, it could only minimally (nearly negligibly) contribute to heritability across the whole population. That is true. However, heritability is not measured across the population – it is measured in families and then averaged across the population. In humans, it is usually derived by comparing phenotypes between people of different genetic relatedness (identical versus fraternal twins, siblings, parents, cousins, etc.). The values of these comparisons are then averaged across large numbers of pairs to allow estimates of how much genetic variance affects phenotypic variance – the population heritability. While a specific rare mutation may only affect the phenotype within a single family, such mutations could, collectively, explain all of the heritability. Completely different sets of mutations could be affecting the trait or causing the disease in different families.

The next few years will reveal the true impact of rare mutations. We should certainly expect complex genetic interactions and some real effects of common polymorphisms. But the idea that our traits are determined simply by the combination of variants we inherit from a static pool in the population is no longer tenable. We are each far more unique than that. (And if your personal genomics company isn't offering to sequence your whole genome, it's not personal enough.)... Read more »
Gravel S, Henn BM, Gutenkunst RN, Indap AR, Marth GT, Clark AG, Yu F, Gibbs RA, The 1000 Genomes Project, & Bustamante CD. (2011) Demographic history and rare allele sharing among human populations. Proceedings of the National Academy of Sciences of the United States of America, 108(29), 11983-11988. PMID: 21730125
Walsh CA, & Engle EC. (2010) Allelic diversity in human developmental neurogenetics: insights into biology and disease. Neuron, 68(2), 245-53. PMID: 20955932
SUMMARY: In a brilliant cross-pollination of engineering, physics and biology, scientists have developed a credit-card sized device that can diagnose HIV and syphilis in the remotest parts of the world in just minutes... Read more »
Chin, C., Laksanasopin, T., Cheung, Y., Steinmiller, D., Linder, V., Parsa, H., Wang, J., Moore, H., Rouse, R., Umviligihozo, G.... (2011) Microfluidics-based diagnostics of infectious diseases in the developing world. Nature Medicine. DOI: 10.1038/nm.2408
As a community here @sciamblogs we decided to each cover something chemistry-related on our individual blogs to coincide with the World Chemistry Congress taking place in Puerto Rico. This scared the bejeezus out of me as I’m a biologist, not a chemist, and I’ve never been brilliant at the textbook chemistry stuff from my undergraduate classes. Also, a wise biology teacher once told me that all chemistry is boring until it starts moving, and then it’s biology.... Read more »
On March 27, 1977, the deadliest disaster in aviation history took place on the Spanish island of Tenerife. In the midst of takeoff, going approximately 160 mph, KLM flight 4805 collided with Pan Am flight 1739 halfway down … Continue reading →... Read more »
Kahneman, D., & Tversky, A. (1979) Prospect Theory: An Analysis of Decision under Risk. Econometrica, 47(2), 263. DOI: 10.2307/1914185
"The Selfish Gene." "Selfish DNA." Oh, how such phrases can get people bent out of shape. Stephen Jay Gould hated such talk (see a little book called The Panda's Thumb), and Richard Dawkins devoted more time to answering critics of his use of the term 'selfish' than should have been necessary. Dawkins' thesis was pretty straightforward, and he provided real examples of "selfish" behavior of genes in both The Selfish Gene and its superior sequel, The Extended Phenotype. But there have always been critics who can't abide the notion of a gene behaving badly.
Leaving aside silly bickering about the attribution of selfishness or moral competence to little pieces of DNA, let's consider what we might mean if we tried to imagine a really selfish piece of DNA. I mean a completely self-centered, utterly narcissistic little piece of DNA, one that not only seeks its own interest but does so with rampant disregard for other pieces of DNA and even for the organism in which it travels. Can we imagine, for example, a piece of DNA that deliberately harms its host in order to propagate itself?
Sure, we might picture genes acting in naked self-interest, perhaps colluding to create an organism that can fly and mate but can't eat. We can picture genes driving organisms to take outrageous risks in order to reproduce. And we can picture millions and millions of "jumping genes" that don't seem to care at all about the host's welfare while they hop about in bloated mammalian genomes. (If you are one who prefers to think of these transposable elements as beautifully designed marvels of information transfer and storage, you can have a pass on that last one for now, because you won't like where we're going with this.) But can we picture a gene that actively harms its host in order to get ahead?
At first, this might seem ridiculous. How can harming the host help a gene propagate itself? We can talk about the examples above, and explain each through some reproductive benefit or trade-off. But I'm not talking about negligence here; I'm talking about harm. Well, okay. I'm talking about killing babies.
I'm talking about a gene that kills the embryo in which it's expressed, unless the embryo promises to propagate the gene. The most famous example of such an outrageously selfish gene is the Medea element, found in certain beetles. ('Medea' is both an acronym and a deliciously evil description of the effect of the element.) Here's the basic idea: a female that carries the Medea element has some offspring. Some of those embryos will have the Medea element in their genomic endowment and others won't. But all of the embryos will be exposed to the Medea effect, because it comes into the embryo through the egg, which was created by the Medea-carrying mother. The Medea effect kills any embryo that doesn't carry its own copy of the Medea element. The survivors are the ones that carry the element. Pretty smart, huh?
How this works, exactly, is not well understood. But Medea isn't the only selfish little piece of DNA that stoops to infanticide. Another example was described just a few years ago in the nematode C. elegans, that workhorse of developmental genetics. Called the peel-zeel element, it's just a little different from Medea: in the peel-zeel system, the embryo-killing curse comes from the dad. (Selfish elements like this are quite rare, and this paternally-acting system is the only known element of that kind.) But the sick story is otherwise the same: only those embryos that carry their own copy of the peel-zeel element can avoid sperm-carried destruction. Now some new results, published in this month's PLoS Biology, are revealing how this evil plan is carried out. The article, "A Novel Sperm-Delivered Toxin Causes Late-Stage Embryo Lethality and Transmission Ratio Distortion in C. elegans," was authored by Hannah Seidel and colleagues.
The group had previously shown that the paternal genetic element would kill embryos that didn't have an "antidote," and had explained the peculiar genetic arrangement that keeps this element from being driven completely to fixation in the population. (An element that kills everyone but itself would be expected to quickly infest the entire population, but this doesn't occur in the case of the peel-zeel element.) Although the authors knew a bit about the antidote gene (called zeel-1), they knew nothing about the killer gene or how it worked; they knew only that it was probably very close to the antidote gene. They did have one particularly useful tool, especially valuable in the experimental wonderland of genetics that is C. elegans: they had some mutants with perfectly good antidote function but no killing ability. So they used those mutants to do some very nice genetic mapping experiments, and discovered the precise locations of the mutations that abolished the lethal effect. Interestingly, those mutations were in an "intergenic interval" in the fully-sequenced C. elegans genome, right next to zeel-1. In other words, the killing activity seemed to be right next to the antidote, in a part of the genome that contained no known genes. Or, more accurately, it contained no annotated genes. It turns out that we're still discovering new genes in fully-sequenced genomes. (It's actually not that easy to identify a bona fide gene in a gigantic DNA sequence.) And Seidel et al. had just discovered a new gene – the peel-1 gene. It makes a protein somewhat similar to zeel-1.
Once they had the actual gene in hand, the authors could probe the protein's function. They showed that it is packed into a particular type of delivery vehicle inside sperm, which are the only cells that express it. The delivery vehicles ensure that each embryo is provided with an adequate dose of the toxin. Oddly, the lethal protein acts somewhat late in development, in skin and muscle cells, and the embryo dies a grisly death unless it carries the antidote. The image on the right (from the cover of the July 2011 issue of PLoS Biology) shows two affected embryos (the blobs on the left and right) and one happily normal worm.
In another cool experiment, the authors turned on the death gene artificially in adult animals, and it killed them just fine. They could save those otherwise-doomed worms by turning on the antidote artificially.
The peel-zeel element, then, is a great example of a truly ruthless selfish genetic element. The toxin and the antidote are side-by-side in the genome, so that an animal with the antidote will almost certainly also receive the toxin. (Think about how different things would look if the antidote gene were separate from the toxin; the toxin could quickly lose its ability to propagate itself through the generations.) And the toxin is sperm-delivered to all embryos. This combination of traits allows the paternally-carried element to kill any embryo without a copy of the element.
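The transmission-ratio distortion this produces can be sketched in a few lines of Python. This is a toy Mendelian model under my own simplifying assumptions (a single cross, no other fitness effects), not the authors' analysis; the function name het_father_cross is mine, while the peel-1/zeel-1 names come from the paper.

```python
# Toy model of the peel-zeel logic: a heterozygous father (element/+)
# delivers the peel-1 toxin via every sperm, but only embryos that
# inherit the element also inherit the tightly linked zeel-1 antidote.

import random

def het_father_cross(n_embryos: int = 10000):
    """Cross a heterozygous carrier father with a non-carrier mother.

    Returns (survival_rate, carrier_fraction_among_survivors).
    """
    survivors = 0
    for _ in range(n_embryos):
        # Mendelian segregation: half the sperm carry the element.
        inherits_element = random.random() < 0.5
        # Every embryo receives the sperm-delivered toxin; only those
        # that inherited the element express the antidote and live.
        if inherits_element:
            survivors += 1
    survival_rate = survivors / n_embryos  # expected ~0.5: half the brood dies
    carrier_fraction = 1.0                 # every survivor carries the element
    return survival_rate, carrier_fraction

rate, carriers = het_father_cross()
print(rate, carriers)  # roughly 0.5, and exactly 1.0
```

The sketch makes the distortion plain: instead of the Mendelian 50% of surviving offspring carrying a heterozygous father's allele, 100% do, at the cost of half the brood.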
As far as we know, the peel-zeel system serves only its own interests. It offers no fitness advantage to its host, and is likely instead to exact a cost. Its presence in the nematode genome is easy to explain in a biosphere teeming with "selfish" DNA that admits no evident "purpose" beyond its own propagation. That's not to say it can't be useful; as an accompanying commentary notes, DNA-encoded toxin/antidote systems could be employed by well-meaning humans to seemingly benevolent ends. But whether or not one chooses to see the peel-zeel system as a product of "design," the pattern of "selfish" propagation is hard to miss. And, surely, hard to restrain.
... Read more »
Seidel, H., Ailion, M., Li, J., van Oudenaarden, A., Rockman, M., & Kruglyak, L. (2011) A Novel Sperm-Delivered Toxin Causes Late-Stage Embryo Lethality and Transmission Ratio Distortion in C. elegans. PLoS Biology, 9(7). DOI: 10.1371/journal.pbio.1001115