EmotiMeter is a client-side application that continuously searches for emoticons (happy / sad) in Twitter updates and draws a circle on a world map based on the user's location. ... Read more »
Bo Pang, Lillian Lee, & Shivakumar Vaithyanathan. (2002) Thumbs up? Sentiment Classification using Machine Learning Techniques. Proceedings of the ACL-02 conference on Empirical methods in natural language processing. arXiv: cs/0205070v1
There are already several great Android malware static and dynamic analysis frameworks (http://code.google.com/p/droidbox/, http://code.google.com/p/apkinspector/, http://code.google.com/p/androguard/ ), but I still wanted not only to test my first hypothesis about the higher correlation between non-standard Android permissions and malware, but also to be able to discover the most common permissions that malware authors use when developing these troublesome applications.... Read more »
B. Sanz, I. Santos, C. Laorden, X. Ugarte-Pedrero, & P.G. Bringas. (2012) On the Automatic Categorisation of Android Applications. Proceedings of the 9th IEEE Consumer Communications and Networking Conference (CCNC). info:/
Everyone loves a good Hollywood ending. There’s nothing quite as satisfying as seeing a masked hero finally dispatch an evil villain. But aren’t flying men with super-strength a bit passé? Maybe it’s time for some new, cutting-edge superheroes…... Read more »
Velten A, Willwacher T, Gupta O, Veeraraghavan A, Bawendi MG, & Raskar R. (2012) Recovering three-dimensional shape around a corner using ultrafast time-of-flight imaging. Nature Communications, 745. PMID: 22434188
Wang Q, Tahir M, Zang J, & Zhao X. (2012) Dynamic electrostatic lithography: multiscale on-demand patterning on large-area curved surfaces. Advanced Materials, 24(15), 1947-51. PMID: 22419389
The scientific method begins with a hypothesis about our reality that can be tested via experimental observation. Hypothesis formation is iterative, building off prior scientific knowledge. Before one can form a hypothesis, one must have a thorough understanding of previous research to ensure that the path of inquiry is founded upon a stable base of established facts. But how can a researcher perform a thorough, unbiased literature review when over one million scientific articles are published annually? The rate of scientific discovery has outpaced our ability to integrate knowledge in an unbiased, principled fashion. One solution may be via automated information aggregation. In this manuscript we show that, by calculating associations between concepts in the peer-reviewed literature, we can algorithmically synthesize scientific information and use that knowledge to help formulate plausible low-level hypotheses.

Oh man, I've been waiting to write this post for over a year now. I'm so. Flippin'. Excited.

I'm really proud to announce that our paper, "Automated Cognome Construction and Semi-automated Hypothesis Generation", has been accepted for publication in the Journal of Neuroscience Methods. Here's the pre-print PDF.

I've been writing about this project on this blog for quite a while now, mostly in talking about brainSCANr and the many, many rejections we received while trying to publish it along the way. Seventeen journals, to be exact. Which is fun to note in the Rejections & Failures section of my CV. It makes a game out of failing!

I'll start by telling the story of how this project got started, then get into some of the more sciencey details.

Back in May 2010 I was invited to speak at the (now) annual Cognitive Science Student Association (CSSA) Conference run by the undergraduate CogSci student association at Berkeley.
They're an incredibly talented group and I've had a lot of fun working with them over the years.

At that conference I sat on a Q&A panel with a hell of a group of scientists, including George Lakoff and the Chair of Stanford's Psychology department, James McClelland (who helped pioneer Parallel Distributed Processing).

[Image: Berkeley CSSA Conference]

On that panel I A'd many Qs, one of which was a fairly high-level question about the challenge of integrating the wealth of neuroscientific literature. It was a variant on the classic line that neuroscience is "data rich but theory poor". This is a problem I'd been struggling with for a long time, and I'd had a few ideas.

In my response I said that one of our problems as a field was that we had so many different people with different backgrounds speaking different jargons who aren't effectively communicating. I followed with an off-hand comment that "The Literature" was actually pretty smart when taken as a system, but that we individual puny brains just weren't bright enough to integrate all that information. I went on to claim that, if there were some way to automatically integrate information from the peer-reviewed literature, we could probably glean a lot of new insights.

Well, James McClelland really seemed to disagree with me, but the idea kept kicking around my brain for a while.

One night, several months later (while watching Battlestar Galactica with my wife), I turned to her and explained my idea. She asked me how I was planning on coding it up and, after I explained it, she challenged me by saying that she could definitely code it faster than I could. Fast-forward a couple of hours to around 2am and she had her results. Bah.

The idea boils down to a very simple (and probably simplistic) assumption: the more frequently two neuroscientific terms appear in the titles or abstracts of papers together, the more likely those terms are to be associated.
For example, if "learning" and all of its synonyms appear in 100 papers with "memory" and all of its synonyms, while those two terms appear in a total of 1000 papers without one another, then the probability of the two terms being associated is 100/1000, or 0.1.

We calculated such probabilities for every pair of terms using a dictionary that we manually curated. It contained 124 brain regions, 291 cognitive functions, and 47 diseases. Brain region names and associated synonyms were selected from the NeuroNames database, cognitive functions were obtained from Russ Poldrack's Cognitive Atlas, and disease names are from the NIH. The initial population of the dictionary was meant to represent the broadest, most plausibly common search terms that were also relatively unique (and thus likely not to lead to spurious connections).

We counted the number of published papers containing pairs of terms using the National Library of Medicine's ESearch utility and the count return type. Here's the example for "prefrontal cortex" and "striatum":

Conjunction:
http://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?db=pubmed&field=word&term=("prefrontal+cortex"+OR+"prefrontal+cortices")+AND+("striatum"+OR+"neostriatum"+OR+"corpus+striatum")&rettype=count

Disjunctions:
http://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?db=pubmed&field=word&term=("prefrontal+cortex"+OR+"prefrontal+cortices")+NOT+("striatum"+OR+"neostriatum"+OR+"corpus+striatum")&rettype=count
http://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?db=pubmed&field=word&term=("striatum"+OR+"neostriatum"+OR+"corpus+striatum")+NOT+("prefrontal+cortex"+OR+"prefrontal+cortices")&rettype=count

Here's what the method looks like:

[Figure: Voytek & Voytek - Figure 1]

We note in our manuscript that this method is rife with caveats, but it wasn't meant to be an end-point; rather, it's a proof-of-concept beginning. In the end we get a full matrix of 175,528 term pairs.
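The counting scheme is easy to reproduce. Below is a minimal Python sketch that builds the same three ESearch count queries and computes the association score from the returned counts. The helper names (`term_group`, `count_url`, `association_probability`) are mine, not from the paper, and actually fetching the counts over HTTP (e.g. with `urllib.request`) is left out.

```python
from urllib.parse import quote_plus

ESEARCH = "http://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def term_group(synonyms):
    """OR-group of a term and its synonyms, e.g. ("striatum" OR "neostriatum")."""
    return "(" + " OR ".join('"%s"' % s for s in synonyms) + ")"

def count_url(query):
    """ESearch URL that returns only the hit count for a boolean PubMed query."""
    return ESEARCH + "?db=pubmed&field=word&term=" + quote_plus(query) + "&rettype=count"

a = ["prefrontal cortex", "prefrontal cortices"]
b = ["striatum", "neostriatum", "corpus striatum"]

conjunction = count_url(term_group(a) + " AND " + term_group(b))  # papers with both
a_not_b = count_url(term_group(a) + " NOT " + term_group(b))      # papers with A alone
b_not_a = count_url(term_group(b) + " NOT " + term_group(a))      # papers with B alone

def association_probability(n_together, n_apart):
    """Joint papers over separate papers, e.g. 100 / 1000 = 0.1 as above."""
    return n_together / n_apart
```

With the learning/memory example from the post, `association_probability(100, 1000)` gives 0.1, where 1000 is the sum of the two disjunction counts.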
Once we got this database, we hacked together the brainSCANr website to allow people to play around with terms and their relationships. We wanted to create a tool for researchers and the public alike to help simplify the complexities of neuroscience. You enter a search term, it shows the relationships and gives you links to the relevant peer-reviewed papers.

As an example, here's Alzheimer's:

[Image: brainSCANr Alzheimer's disease]

My wife and co-author(!) Jessica Voytek and I threw the first version together (with help from my Uber ... Read more »
Voytek, J., & Voytek, B. (2012) Automated cognome construction and semi-automated hypothesis generation. Journal of Neuroscience Methods. DOI: 10.1016/j.jneumeth.2012.04.019
Schmidt M, & Lipson H. (2009) Distilling free-form natural laws from experimental data. Science (New York, N.Y.), 324(5923), 81-5. PMID: 19342586
Yarkoni T, Poldrack RA, Nichols TE, Van Essen DC, & Wager TD. (2011) Large-scale automated synthesis of human functional neuroimaging data. Nature methods, 8(8), 665-70. PMID: 21706013
Lein, E., Hawrylycz, M., Ao, N., Ayres, M., Bensinger, A., Bernard, A., Boe, A., Boguski, M., Brockway, K., Byrnes, E.... (2006) Genome-wide atlas of gene expression in the adult mouse brain. Nature, 445(7124), 168-176. DOI: 10.1038/nature05453
Compared to a spindly mosquito, the mass of a raindrop is like a bus bearing down on a human. Yet the delicate insects thrive in wet, rainy climates. To find out how mosquitos live through rain showers, researchers pelted them with water drops while filming them at high speed. They saw that the insects' light weight, rather than being a liability, might be the key to their survival.
David Hu is a professor in both the biology and mechanical engineering departments at Georgia Tech. He's previously studied how water striders take advantage of fluid dynamics to skate across the surfaces of ponds. Andrew Dickerson, a graduate student in Hu's lab, has used high-speed video to find out how dogs and other animals shake water off of themselves. And in their newest study of animals getting wet, the team asks why a rain shower doesn't flatten every mosquito around.
The researchers trapped mosquitos in small mesh cages and sprayed them point-blank from above with jets of water. This Supersoaker-esque blast was similar to raindrops falling from the sky at terminal velocity. To get detailed video of collisions, they also hit mosquitos with drops falling at a slower speed.
The first thing they saw was that mosquitos made no effort to avoid the water. And they seemed to know what they were doing, because all the insects that got hit survived.
Going to the tape, the scientists saw that the consequence of getting hit by a raindrop depends on what part of the mosquito's body takes the blow. Since the insects are so lanky, 75% of hits happen on the legs or wings. This can throw a mosquito into a brief tumble or even a barrel roll, but it recovers without much trouble.
Direct hits to mosquitos' bodies are a different kind of carnival ride. The speeding raindrops glom onto the insects and propel them downward. Mosquitos captured on camera sometimes fell as far as 20 body lengths while being pushed by a raindrop. For a human, that would be a 12-story drop and a quick ending to the story. But mosquitos are able to pull away sideways from the raindrops and continue on their way, unharmed.
The only danger seems to come if mosquitos are flying close to the ground when they're hit, leaving themselves too little time to escape. The authors note that one unlucky bug was driven into a puddle and "ultimately perished."
To crunch some numbers—and find out why no mosquitos were being crunched—the researchers turned to substitute bugs that were simply Styrofoam balls of different sizes and weights. Although a raindrop isn't any bigger than a mosquito, the insect is extremely lightweight compared to the water. When the heavy drop hits the airy mosquito, it's almost like hitting nothing at all. And this, the researchers found, is what keeps the mosquitos alive. By offering barely any resistance, a mosquito minimizes the force of the collision. The raindrop doesn't even splatter when it hits.
Of course, a bus hitting a human is pretty damaging no matter how little resistance the person puts up. Mosquitos have the added advantage of a hard exoskeleton to help them resist the blow.
There's another reason this impact is survivable, David Hu explained in an email: Even though the force of the collision is 100 times the mosquito's mass, it's still only equal to the weight of a single feather. ("If we were in a comparable situation," he added, "we would not survive.")
If the impact didn't kill us, the acceleration would. Humans being hurled downward generally black out around 2 or 3 G's. But a mosquito suddenly driven toward the ground by a raindrop experiences an acceleration of 100 to 300 G's. The authors note that "insects struck by rain may achieve the highest survivable accelerations in the animal kingdom."
Although not especially useful to people trying to kill mosquitos or survive vertical bus collisions, the research could prove very handy to engineers designing insect-sized robotic aircraft. To fly successfully through rainstorms, these aircraft might adopt some of the mosquitos' technologies. A low mass would minimize the force of collisions. And sprawled legs, the authors write, could give tiny aircraft enough torque to pull away sideways from a falling drop. Mosquitos also have water-repellent hairs that may help them separate from stuck-on raindrops; aircraft could achieve the same thing with hydrophobic coatings.
Now if they would only design the miniature robot planes to attack the mosquitos, we'd have some real excitement.
Andrew K. Dickerson, Peter G. Shankles, Nihar M. Madhavan, & David L. Hu (2012). Mosquitoes survive raindrop collisions by virtue of their low mass. PNAS. DOI: 10.1073/pnas.1205446109
Images courtesy of the laboratory of David L. Hu.
... Read more »
Andrew K. Dickerson, Peter G. Shankles, Nihar M. Madhavan, & David L. Hu. (2012) Mosquitoes survive raindrop collisions by virtue of their low mass. PNAS. info:/10.1073/pnas.1205446109
My inactivity period was due to a lack of real news around the world. But I was not inactive at all. My friend Alfonso Farina presented me with another question that has occupied my mind for the last few weeks: What is the energy cost of computation? The first name that comes to mind in such a [...]... Read more »
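The references below center on Landauer's principle, which puts a floor on that energy cost: erasing one bit must dissipate at least k_B·T·ln 2 of heat. That bound is easy to evaluate; here is a small sketch (the function name is mine, the Boltzmann constant value is the exact SI definition).

```python
import math

K_B = 1.380649e-23  # Boltzmann constant in J/K (exact by SI definition)

def landauer_limit(temperature_kelvin):
    """Minimum energy dissipated when erasing one bit: k_B * T * ln 2."""
    return K_B * temperature_kelvin * math.log(2)

# At room temperature (300 K) the bound is about 2.87e-21 J per bit erased,
# roughly 0.018 eV -- the figure the Berut et al. experiment below approaches.
room_temp_bound = landauer_limit(300.0)
```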
Landauer, R. (1961) Irreversibility and Heat Generation in the Computing Process. IBM Journal of Research and Development, 5(3), 183-191. DOI: 10.1147/rd.53.0183
Bennett, C. (2003) Notes on Landauer's principle, reversible computation, and Maxwell's Demon. Studies In History and Philosophy of Science Part B: Studies In History and Philosophy of Modern Physics, 34(3), 501-510. DOI: 10.1016/S1355-2198(03)00039-X
Bérut, A., Arakelyan, A., Petrosyan, A., Ciliberto, S., Dillenschneider, R., & Lutz, E. (2012) Experimental verification of Landauer’s principle linking information and thermodynamics. Nature, 483(7388), 187-189. DOI: 10.1038/nature10872
A cellular automaton (CA) is a discrete model studied in computability theory and mathematics. It consists of an infinite, regular grid of cells, each in one of a finite number of states; the grid can have any finite number of dimensions. Time is also discrete, and the state of a cell at time t is a function of the states of a finite set of cells, called its neighborhood, at time t-1. The neighborhood is a fixed selection of cells relative to the cell in question, and it does not change over time (though the cell itself may be included in its own neighborhood, it is not usually counted as a neighbor). Every cell uses the same update rule, based on the values in its neighborhood. Each time the rule is applied to the whole grid, a new generation is produced.

The simplest nontrivial CA is one-dimensional, with two possible states per cell and a cell's neighbors defined as the cells on either side of it. A cell and its two neighbors form a neighborhood of 3 cells, so there are 2^3 = 8 possible patterns for a neighborhood, and therefore 2^8 = 256 possible rules. These 256 CAs are generally referred to using a standard naming convention invented by Wolfram: the name of a CA is the number which, in binary, gives its rule table. Examples: Rule 90, Rule 30.

http://amsqr.github.com/chromanin.js/ruletool.html

References: Stephen Wolfram (2002). History of Cellular Automata. A New Kind of Science.... Read more »
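The rule-number convention is mechanical enough to sketch in a few lines of Python. This is a minimal sketch, using a finite list with zero boundaries in place of the infinite grid; the function name is mine.

```python
def step(cells, rule):
    """One generation of an elementary CA; cells is a list of 0/1 states.
    Cells at the edges see 0 outside the grid (infinite-grid idealization)."""
    padded = [0] + cells + [0]
    out = []
    for i in range(len(cells)):
        # Read the (left, center, right) neighborhood as a 3-bit number 0..7 ...
        idx = padded[i] * 4 + padded[i + 1] * 2 + padded[i + 2]
        # ... and look up that bit of the rule number (Wolfram's convention).
        out.append((rule >> idx) & 1)
    return out

# Rule 90 turns each cell into the XOR of its neighbors, growing a Sierpinski
# triangle from a single live cell:
row = step([0, 0, 0, 1, 0, 0, 0], 90)  # -> [0, 0, 1, 0, 1, 0, 0]
```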
Stephen Wolfram. (2002) History of Cellular Automata. A New Kind of Science. info:/
Lego blocks are awesome—they snap together easily and perfectly. These blocks are made using injection molding and wouldn’t fit together so flawlessly without the precise features of the metal molds [...]... Read more »
Cells are quite valuable, especially when used for regenerative medicine, diagnostics, or research. But harvested cells do not come presorted and need to be separated from a heterogeneous mixture of cells. There are already numerous methods to sort cells according to biophysical properties such as size, density, morphology, and dielectric or magnetic susceptibility. Cell sorting based on labels can have a higher specificity, but introduces extra steps to add and remove labels, which can affect the phenotype of the cell. Rohit Karnik of MIT has developed a cell sorting method based on cell rolling. The continuous, label-free process is described in “Cell sorting by deterministic rolling” in Lab on a Chip.... Read more »
Today's smartphones could do better. Yes, they send texts, make video calls, talk to satellites, take, edit (and share) your pictures, play games and music... one even makes a whipping noise if you waggle it a bit. Some of them can make phone calls too. But surely there's so much more that could be crammed in?
The human cell has functionality that would put any smartphone to shame. The secret, as new research investigates, is learning how to multitask.... Read more »
Wong JV, Li B, & You L. (2012) Tension and robustness in multitasking cellular networks. PLoS computational biology, 8(4). PMID: 22577355
Klas Tybrandt, doctoral student in organic electronics at Linköping University, Sweden, has developed an integrated chemical chip. The results have just been published in the prestigious journal Nature Communications (cited below). The Organic Electronics research group at Linköping University previously developed ion transistors for transport of both positive and negative ions, as well as biomolecules. [...]... Read more »
Cancer genomes often harbor numerous types of genetic alterations - mutations, structural variation, gene conversion events, etc. No single approach can survey everything at once, but exome sequencing is advantageous because mutations, copy number changes, and zygosity changes can be characterized simultaneously.... Read more »
Koboldt DC, Zhang Q, Larson DE, Shen D, McLellan MD, Lin L, Miller CA, Mardis ER, Ding L, & Wilson RK. (2012) VarScan 2: Somatic mutation and copy number alteration discovery in cancer by exome sequencing. Genome research. PMID: 22300766
Summer time means BBQ season but it’s also the start of road construction. Road construction usually leads to traffic jams and slowdowns, so it makes sense to avoid construction in [...]... Read more »
You, Z., Mills-Beale, J., Foley, J., Roy, S., Odegard, G., Dai, Q., & Goh, S. (2011) Nanoclay-modified asphalt materials: Preparation and characterization. Construction and Building Materials, 25(2), 1072-1078. DOI: 10.1016/j.conbuildmat.2010.06.070
As people continue to struggle with problems involving organ donation, a few robotic engineers continue to push the boundaries between humanity and machinery. A recent report in Nature (cited below) showed that two patients were able to overcome some aspects of their paralysis by way of an implant. Reaching and grabbing motions were possible by way [...]... Read more »
Hochberg, L., Bacher, D., Jarosiewicz, B., Masse, N., Simeral, J., Vogel, J., Haddadin, S., Liu, J., Cash, S., van der Smagt, P.... (2012) Reach and grasp by people with tetraplegia using a neurally controlled robotic arm. Nature, 485(7398), 372-375. DOI: 10.1038/nature11076
[Image: The brain of the clock (I took this picture)]

A computational model is a surrogate version of something, usually built on a computer. An example most people are familiar with is the computational models used to predict the weather: if you know how low-pressure and high-pressure fronts interact, and you know where one is and how fast it is moving, you can program software to play the situation out in a simulation, predicting what will happen and how quickly. Computational neuroscience is more or less just like that, and it can be used to investigate all levels of neuroscience. Here's a brief intro to three of the basic levels. There are other types of computational models in neuroscience, but these three make up most of them.

The Whole Brain

If you know how the thalamus, hippocampus, amygdala, and cortex all work together, you can simulate how inputs into one structure might influence the others. In this case each brain structure would basically be a 'black box' that receives input and produces output based on known data. To do this kind of simulation you wouldn't actually simulate the millions of neurons in each structure.

The Neural Network

[Image: (source)]

On the next level down, you can make a computational model of a neural network inside a single brain structure. If you know the types of neurons in the amygdala and how they interact with each other, you can program those relationships in and test what might happen if one class of neurons fires too much or too little. You can test the effect removing one class of neurons has on the whole network and on the output of that brain structure. In this case you are simulating individual neurons, but you are probably not simulating the details of the neurons, such as their dendrites and their specific channel composition. In this kind of computational model, the neurons are the 'black boxes' which receive input and produce output based on pre-set equations.

The Cellular Scale

One level down from this is a computational model of an individual neuron. In this type of model, the neuron is simulated in detail, with its dendrites, soma, and sometimes its axon. With this kind of model, you can test the effects of different dendrite shapes on the processing of the neuron. Usually the individual channels (such as calcium, potassium, and sodium channels) in the neuron are programmed in, and the electrical properties of the cell are calculated in detail. In this situation, the specific proteins and channels are the 'black boxes' computing ionic concentrations based on pre-set equations. A detailed tutorial on how to make a biophysically realistic model neuron can be found here.

[Image: a neuron can be simulated as a series of resistors and capacitors]

Sidiropoulou et al. (2006) have an excellent review of the neuroscience discoveries that have been made with this cellular level of computational modeling. They start their paper by highlighting the most interesting problem in cellular neuroscience:

"Understanding how the brain works remains one of the most exciting and intricate challenges of modern biology. Despite the wealth of information that has accumulated during the past years about the molecular and biophysical mechanisms that underlie neuronal activity, similar advances have yet to be made in understanding the rules that govern information processing and the relationship between the structure and function of a neuron." (Intro, Sidiropoulou et al., 2006) (red highlighting mine)

This paper directly argues against the idea that neurons are just 'on-off' switches, and illustrates the complex computational processes that occur in individual locations of the neuron. They cover computational studies analyzing the information processing that occurs in the dendrite, at the synapse, at the soma, and even in the axon.
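The "resistors and capacitors" view of a neuron can be made concrete with a tiny single-compartment sketch: a leaky (passive) membrane integrated with forward Euler. This is illustrative only; the function name and the round-number parameter values are mine, not taken from the review or any fitted model.

```python
def simulate_membrane(i_inj_nA, t_ms, dt_ms=0.025,
                      c_m_nF=0.2, r_m_MOhm=100.0, v_rest_mV=-65.0):
    """Passive membrane: C * dV/dt = (V_rest - V)/R_m + I_inj.
    Units: nF, MOhm, nA, mV, ms (mutually consistent). Returns the voltage trace."""
    v = v_rest_mV
    trace = []
    for _ in range(int(t_ms / dt_ms)):
        # Leak current pulls V back to rest; injected current pushes it away.
        v += ((v_rest_mV - v) / r_m_MOhm + i_inj_nA) / c_m_nF * dt_ms
        trace.append(v)
    return trace

# A 0.1 nA current step charges the membrane from -65 mV toward
# V_rest + I*R = -55 mV, with time constant tau = R*C = 20 ms.
trace = simulate_membrane(0.1, 200.0)
```

Detailed simulators like NEURON chain many such compartments together, one per stretch of dendrite or axon, and add voltage-gated channels on top of the leak.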
The details are too complicated to get into here, but the paper is free. Finally, they end with a call to action for experimental and computational neuroscientists to work together to solve the really interesting problems in cellular neuroscience:

"The following open questions could provide fertile ground for collaborations among molecular biologists, geneticists, physiologists, modellers and behaviourists for further explorations of the mysteries of the brain. Do specific behaviours require certain neuronal computational tasks? Which parts of the neural circuit or the neuron itself are responsible for these tasks? What are the underlying molecular mechanisms for the distinct operating modes of neuronal integration? Such holistic approaches should lend support to the growing idea reinforced by this review: that something smaller than the cell lies at the heart of neural computation." (Discussion, Sidiropoulou et al., 2006)

Just as computational models predict weather patterns with only some degree of accuracy, no neural model is perfect either. Computational neuroscience is not going to lead to all the answers, but where it is particularly useful is in making very specific predictions about how certain aspects of a neuron or neural circuit might work. The insight gained from computational models can guide and focus experiments, making them more efficient. This saves time, money, energy, and animal lives.

© TheCellularScale

Sidiropoulou K, Pissadaki EK, & Poirazi P (2006). Inside the brain of a neuron. EMBO reports, 7(9), 886-92. PMID: 16953202... Read more »
[Photo by The Grappling Source Inc. at Wikimedia Commons]

Being subordinated is stressful. The process of one individual lowering the social rank of another often involves physical aggression, aggressive displays, and exclusion. In addition to the obvious possible costs of being subordinated (like getting beat up), subordinated individuals often undergo physiological changes to their hormonal systems and brains. Sounds pretty scary, doesn’t it? But what if some of those changes are beneficial in some ways?

Dominance hierarchies are a fact of life across the animal kingdom. In a social group, everyone can’t be dominant (otherwise, life would always be like an episode of Celebrity Apprentice, and what could possibly be more stressful than that?). Living in a social group is more peaceful and nutritive when a clear dominance hierarchy is established. Establishing that hierarchy often involves a relatively short aggressive phase of jostling for position, followed by a longer, more stable phase once everyone knows where they fall in the social group. Established dominance hierarchies are not always stable (they can change over time or from moment to moment) and they are not always linear (for example, Ben can be dominant over Chris, who is dominant over David, who is dominant over Ben). But they do generally help reduce conflict and the risk of physical injury overall.

Nonetheless, it can be stressful to be on the subordinate end of a dominance hierarchy, and these social interactions are known to cause physiological changes. Researchers Christina Sørensen and Göran Nilsson from the University of Oslo, Cliff Summers from the University of South Dakota, and Øyvind Øverli from the Norwegian University of Life Sciences investigated some of these physiological differences among isolated, dominant, and subordinate rainbow trout.

[A photo of a rainbow trout by Ken Hammond at the USDA. Photo at Wikimedia Commons.]

Like other salmonid fish, rainbow trout are aggressive, territorial, and develop social hierarchies as juveniles. Dominant trout tend to initiate most of the aggressive acts, hog food resources, grow larger, and reproduce the most, whereas subordinate trout display less aggression, feeding, growth, and reproduction. The researchers recorded the behavior, feeding, and growth rates in three groups of fish: trout housed alone, trout housed with a more subordinate trout, and trout housed with a more dominant trout. The researchers also measured cortisol (a hormone involved in stress responses), serotonin (a neurotransmitter involved in mood, the perception of food availability, and the perception of social rank, among other things), and the development of new neurons (called neurogenesis) in these same fish.

This video of two juvenile rainbow trout was taken by Dr. Erik Höglund. Here is Christina Sørensen’s description of the video: “What you see in the film is two juvenile rainbow trout who have been housed on each side of a dividing wall in a small aquarium. The dividing wall has been removed (for the first time) immediately before filming. You will see that the fish initially show interest for each other, followed by a typical display behaviour, where they circle each other. Finally one of the fish will initiate aggression by biting the other. First the aggression is bidirectional, as they fight for dominance, but after a while, one of the fish withdraws from further aggression and shows only submissive behaviour (escaping from the dominant and in the long run trying to hide... and as is described in the paper, depressed feed intake). The video has been cut to show in quick succession these four stages of development of the dominance hierarchy”.

The researchers found that, as expected, the dominant trout were aggressive when a pair was first placed together, but the aggression subsided after about 3 days. Also as expected, the dominant and isolated trout were bold feeders with low cortisol levels and high growth rates, whereas the subordinate trout did not feed as well, had high cortisol levels, and had low growth rates. Additionally, the subordinate trout had higher serotonin activity levels and less neurogenesis than the dominant or isolated trout. These results suggest that the subordination experience causes significant changes to trout brain development (although we can’t rule out the possibility that fish with more serotonin and less neurogenesis are predisposed to be subordinate). In either case, this sounds like bad news for subordinate brains, right? Maybe it is. Or maybe the decrease in neurogenesis just reflects the decrease in overall growth rates (smaller bodies need smaller brains). Or maybe something about the development of these subordinate brains improves the chances that these individuals will survive and reproduce in their subordination.

[A crayfish raising its claws. Image by Duloup at Wikimedia.]

Research on dominance in crayfish by Fadi Issa, Joanne Drummond, and Don Edwards at ... Read more »
Sørensen, C., Nilsson, G., Summers, C., & Øverli, Ø. (2012) Social stress reduces forebrain cell proliferation in rainbow trout (Oncorhynchus mykiss). Behavioural Brain Research, 227(2), 311-318. DOI: 10.1016/j.bbr.2011.01.041
Yeh, S., Fricke, R., & Edwards, D. (1996) The Effect of Social Experience on Serotonergic Modulation of the Escape Circuit of Crayfish. Science, 271(5247), 366-369. DOI: 10.1126/science.271.5247.366
Issa, F., & Edwards, D. (2006) Ritualized Submission and the Reduction of Aggression in an Invertebrate. Current Biology, 16(22), 2217-2221. DOI: 10.1016/j.cub.2006.08.065
The heat wave throughout most of North America at the beginning of April brought climate change to mind. Was the heat wave caused by climate change? Likely not; I can’t imagine the effects of climate change happening so abruptly. But it made me think: what really causes climate change on this lovely blue planet of ours?... Read more »
J. Wilkinson. (2012) The Sun and Earth’s Climate. New Eyes on the Sun, 201-217. info:/
Mufti, S., & Shah, G. (2011) Solar-geomagnetic activity influence on Earth's climate. Journal of Atmospheric and Solar-Terrestrial Physics, 73(13), 1607-1615. DOI: 10.1016/j.jastp.2010.12.012
Oreskes, N. (2004) BEYOND THE IVORY TOWER: The Scientific Consensus on Climate Change. Science, 306(5702), 1686-1686. DOI: 10.1126/science.1103618
Rivera. (2012) Discovery of the Major Mechanism of Global Warming and Climate Change. Journal of Basic and Applied Sciences, 8(1). DOI: 10.6000/1927-5129.2012.08.01.29
Figure 1. PHYRN concept and work flow.
'Danger and Evolution in the twilight zone'
I have been communicating with Randen Patterson on and off over the last five years or so about his efforts to study the evolution of gene families when the sequence similarity within the gene family is so low that making multiple sequence alignments is very difficult. Recently, Randen moved to UC Davis, so I have been talking / emailing with him more and more about this issue. Of note, Randen has a new paper in PLoS One about this topic: Bhardwaj G, Ko KD, Hong Y, Zhang Z, Ho NL, et al. (2012) PHYRN: A Robust Method for Phylogenetic Analysis of Highly Divergent Sequences. PLoS ONE 7(4): e34261. doi:10.1371/journal.pone.0034261.
Figure 8. Model for the Evolution of the DANGER Superfamily.
I invited Randen and the first author Gaurav Bhardwaj to do a guest post here providing some of the story behind their paper for my ongoing series on this topic. I note - if you have published an open access paper on some topic related to this blog I would love to have a guest post from you too. I note - I personally love the fact that they used the "DANGER" family as an example to test their method.
Here is their guest post:
A fundamental problem for phylogenetic inference in the “twilight zone” (<25% pairwise identity), let alone the “midnight zone” (<12% pairwise identity), is the inability to assign evolutionary relationships at these levels of divergence with statistical confidence. This lack of resolution arises from the difficulty of separating phylogenetic signal from random noise at such divergence, and it ultimately stymies all attempts to fully resolve the Tree of Life. Since most attempts at phylogenetic inference in the twilight/midnight zone have relied on multiple sequence alignment (MSA), and since there is no clear answer on the best phylogenetic methods for resolving protein families in this regime, we have framed the rest of this blog post as two questions representative of these problems.
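To make the twilight/midnight-zone thresholds above concrete, here is a small sketch of how percent pairwise identity is computed for two already-aligned sequences. The sequences and helper function are illustrative only, not taken from the paper; real pipelines would take identities from an alignment program's output.

```python
# Toy illustration of the "twilight zone" (<25%) and "midnight zone" (<12%)
# pairwise-identity thresholds: percent identity over aligned, non-gap columns.

def percent_identity(a: str, b: str) -> float:
    """Percentage of aligned columns (ignoring gaps) where residues match."""
    pairs = [(x, y) for x, y in zip(a, b) if x != "-" and y != "-"]
    if not pairs:
        return 0.0
    matches = sum(1 for x, y in pairs if x == y)
    return 100.0 * matches / len(pairs)

# Two short, made-up aligned protein fragments ("-" marks a gap).
seq1 = "MKV-LITAGLL"
seq2 = "MKVALISAG-L"
pid = percent_identity(seq1, seq2)
print(f"{pid:.1f}% identity")
print("twilight zone" if pid < 25 else "above twilight zone")
```

At ~8% average pairwise identity, the level PHYRN is benchmarked at, roughly one aligned residue in twelve matches, which is why signal and noise become so hard to separate.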
Question 1: Is MSA required for accurate phylogenetic inference?
Our Opinion: MSA is an excellent tool for inference from conserved data sets, but it has been shown, by others and by us, that the quality of MSA degrades rapidly in the twilight zone. Further, the quest for an optimal MSA becomes increasingly difficult as the number of taxa under study grows. Although the quality of MSA methods has improved over the last two decades, we have not made significant progress toward overcoming these problems. Multiple groups have also designed alignment-free methods (see Hohl and Ragan, Syst. Biol. 2007), but so far none of these methods has provided better phylogenetic accuracy than MSA+ML approaches. We recently published a manuscript in PLoS One entitled “PHYRN: A Robust Method for Phylogenetic Analysis of Highly Divergent Sequences” introducing a hybrid profile-based method. Our approach focuses on measuring phylogenetic signal from homologous biological patterns (functional domains, structural folds, etc.), and on their subsequent amplification and encoding as a phylogenetic profile. Further, we adopt a distance estimation algorithm that is alignment-free, and thus bypasses the need for an optimal MSA. Our benchmarking studies with synthetic (ROSE and Seq-Gen) and biological data sets show that PHYRN outperforms traditional methods (distance, parsimony, and Maximum Likelihood), providing accurate phylogenies even in data sets exhibiting ~8% average pairwise identity. While this still needs to be evaluated in other simulations (varying tree shapes, rates, and models), we are convinced that these types of methods work and deserve further exploration.
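The profile-encoding idea described above can be sketched in miniature: each sequence is represented as a vector of similarity scores against a library of reference profiles, and pairwise distances are then computed between those vectors without ever building a multiple sequence alignment. The sequence names, score values, and Euclidean distance function below are all fabricated for illustration; PHYRN's actual scoring and distance estimation differ.

```python
# Hypothetical sketch of alignment-free, profile-based distance estimation.
# Each sequence is encoded as a vector of scores against reference profiles
# (toy numbers here); distances between vectors feed a tree-building step.

import math

# Rows: sequences; columns: score against each of four reference profiles.
profiles = {
    "seqA": [12.0, 0.5, 7.3, 0.0],
    "seqB": [11.2, 0.9, 6.8, 0.1],
    "seqC": [0.3, 9.7, 0.2, 8.4],
}

def euclidean(u, v):
    """Plain Euclidean distance between two equal-length score vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(u, v)))

# Pairwise distance matrix: similar profile vectors => small distance.
names = sorted(profiles)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        print(a, b, round(euclidean(profiles[a], profiles[b]), 2))
```

In this toy example seqA and seqB end up close together while seqC sits far from both, and a distance-based method such as neighbor joining could then build a tree from the resulting matrix, which is the sense in which the approach bypasses the need for an optimal MSA.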
Question 2: How can we as a field critically and fairly evaluate phylogenetic methods?
Our Opinion: A similar problem plagued the field of structural biology, where there were multiple methods for structure prediction but no clear way of standardizing or evaluating their performance. An additional problem specific to phylogenetic inference is that, unlike crystal structures of proteins, phylogenies do not have a corresponding “answer” that can be obtained experimentally. Synthetic data sets have addressed this question to a certain extent by simulating protein evolution and providing true evolutionary histories that can be used for benchmarking. However, these simulations cannot truly replicate biological evolution (e.g., indel distribution, translocations, biologically relevant birth-death models). In our opinion, we need a CASP-like model (the solution adopted by our friends in computational structural biology), where the same data sets (with true evolutionary histories known only to the organizers) are analyzed by all participating research groups and then submitted to the organizers for critical evaluation. To turn this thought into reality, we hereby announce CAPE (Critical Assessment of Protein Evolution) for Summer 2012. We are still in the pre-production stage, and we welcome any suggestions, comments, and input about data sets, scoring, and evaluation methods.
Bhardwaj, G., Ko, K., Hong, Y., Zhang, Z., Ho, N., Chintapalli, S., Kline, L., Gotlin, M., Hartranft, D., Patterson, M., Dave, F., Smith, E., Holmes, E., Patterson, R., & van Rossum, D. (2012). PHYRN: A Robust Method for Phylogenetic Analysis of Highly Divergent Sequences PLoS ONE, 7 (4) DOI: 10.1371/journal.pone.0034261
This is from the "Tree of Life Blog" of Jonathan Eisen, an evolutionary biologist and Open Access advocate at the University of California, Davis. For short updates, follow me on Twitter.
... Read more »
Bhardwaj, G., Ko, K., Hong, Y., Zhang, Z., Ho, N., Chintapalli, S., Kline, L., Gotlin, M., Hartranft, D., Patterson, M.... (2012) PHYRN: A Robust Method for Phylogenetic Analysis of Highly Divergent Sequences. PLoS ONE, 7(4). DOI: 10.1371/journal.pone.0034261
When I first heard about Journal Fire, I thought, Great! Someone is going to take all the closed-access scientific journals and make a big bonfire of them! At the top of this bonfire is the burning effigy of a wicker man, representing the very worst of the vanity journals.... Read more »
Hey, you! Get off of that cloud! Cloud computing is on the rise, as we have discussed here on many an occasion. It's useful for fast and robust web hosting; it's great for anywhere email access, for remote file storage and backup (DropBox, Wuala, GoogleDrive, etc.), and for sharing large media files, whether movies or music files [...] Post from: David Bradley's Sciencetext Tech Talk: Neighbourhood Watch for cloud computing
... Read more »
Sudhir N. Dhage, & B.B. Meshram. (2012) Intrusion detection system in cloud computing environment. International Journal of Cloud Computing, 1(2/3), 261-282. info:/
Research Blogging is powered by SMG Technology.
To learn more, visit seedmediagroup.com.