Brains, behaviour, and evolution.
This appeared earlier today on the Facebook feed I Fucking Love Science:
I remember seeing a shark documentary as a kid, hosted by Burgess Meredith, if I remember correctly. It made the same basic claim about great white sharks: too big to have predators, nobody had ever seen them die except by accident or by human hands, blah blah blah, therefore “some have suggested” they are immortal.
That I can remember the end of the show all these years later shows you what a terrific close the “immortal” idea makes. But it only sounds plausible because of our disconnect from the natural environment. It plays on our lack of knowledge about the natural world, and on the fact that we have a hard time tracking these sorts of things. It’s like asking most city dwellers, “Have you ever seen a baby pigeon?” “No, I haven’t. And you know what, I’ve never seen a dead pigeon, either! Oh my goodness, pigeons must be immortal!”
Sharks and lobsters have a few things in common, too, that make the “immortality” claim easy to make. They live in the oceans, which means they are hard to track, and few people have first-hand experience with them. They are long-lived species, and it’s not easy to look at one and know how old it is.
When you add in “they only die from external causes,” you have a huge out. Most animals, including humans, die from external causes, broadly construed. Sure, a predator is an external cause. A bacterial or viral infection is an external cause. What would not count as an “external cause”? The definition is so loose that you can explain away almost every possible counter-example.
And, of course, it links out, not to an actual scientific paper, which would be the sort of action you might expect from a group that proclaims to love science, but to a radio interview.
This is not a slap against the participants in the interview. Jelle Atema is a good scientist with real bona fides. But this radio interview is a long way from the sort of careful science you would need to do to show lobsters are “functionally immortal.”
There is some interesting science to this. Many decapod crustaceans have indeterminate growth (mentioned by Vogt 2008, 2010, who cites others). This means that they keep growing throughout their life, and do not have a set upper limit for size. It’s not just lobsters that do this, as far as I know; crayfish do, too. Lobsters are probably in this meme because they get so much larger than crayfish. It’s easier for people to believe a big animal like a lobster could be so much older than a small animal like a crayfish.
I have been able to find about one paper on lobster longevity, by Klapper and colleagues (1998). The introduction says:
Lobsters grow continuously throughout their lifespan, only decreasing growth rates with age. Furthermore, and again in contrast to humans, they are able to regenerate whole limbs even at a high age.
This cites a book chapter by Govind, on... muscle innervation?! The chapter talks a little bit about sarcomeres being added throughout life, but that’s about it. It’s not a chapter on aging and senescence.
More provocatively, the abstract of the Klapper and colleagues paper says (my emphasis):
Lobsters (Homarus americanus) grow throughout their life and the occurrence of senescence is slow.
But there is no citation for the “slow senescence” claim. And there is no original empirical data supporting it in the Klapper paper (e.g., longevity, activity, health, or mortality data). The paper shows that adult lobsters still make an enzyme called telomerase, but it does not show that lobsters are long lived because of it.
How old does this “functionally immortal” lobster get? Well, if lobsters really were “functionally immortal,” why would you not expect them to live for centuries? Bodnar (2009) has a table that puts the oldest lobster on record in the 50-100 year range. Bodnar cites Finch (1990), which again does not seem to have much more than a table with an estimated maximum lifespan, connected to another reference I haven’t been able to check. Nobody seems to define what “slow senescence” is, or how it has been measured in lobsters.
Regardless, a “functionally immortal” animal that has a shorter recorded maximum lifespan than a human? Colour me unimpressed.
For such a bold claim, it has been disappointingly hard to track down the real science. It’s also disappointing to see such a credulous claim come from a source that contends it fucking loves science. I think it is fair to call this one:
Sadly, I suspect this myth might have a longer lifespan than many lobsters.
Bodnar AG. 2009. Marine invertebrates as models for aging research. Experimental Gerontology 44(8): 477-484. DOI: 10.1016/j.exger.2009.05.001
Govind CK. 1995. Muscles and their innervation. In: Factor, J.R. (Ed.), Biology of the Lobster Homarus americanus, pp. 291–312, Academic Press, San Diego, CA.
Klapper W, Kühne K, Singh KK, Heidorn K, Parwaresch R, Krupp G. 1998. Longevity of lobsters is linked to ubiquitous telomerase expression. FEBS Letters 439(1-2): 143-146. DOI: 10.1016/S0014-5793(98)01357-X
Vogt G. 2008. How to minimize formation and growth of tumours: Potential benefits of decapod crustaceans for cancer research. International Journal of Cancer 123: 2727-2734. DOI: 10.1002/ijc.23947
Vogt G. 2008. The marbled crayfish: a new model organism for research on development, epigenetics and evolutionary biology. Journal of Zoology 276: 1-13. DOI: 10.1111/j.1469-7998.2008.00473.x
Vogt G. 2010. Suitability of the clonal marbled crayfish for biogerontological research: A review and perspective, with remarks on some further crustaceans. Biogerontology 11: 643-669. DOI: 10.1007/s10522-010-9291-6
When animals live in caves full time, their descendants often lose their eyes. It has happened over and over and over and over again, in all different kinds of animals. But how this happens is not obvious. Stephen Jay Gould wrote that some people would use cave fish as an argument that “Lamarck must have been on to something” with his idea that acquired characteristics can be inherited. Well, no, that’s not the case, but it is a good example of how tricky thinking about losses can be.
The latest paper to try to sort out eye loss uses small amphipod crustaceans (Gammarus minus). An advantage of working with this particular species is that some populations live out in the sunshine with us, but several populations have gone down in the underground. In this case, Carlini and colleagues have three separate populations that went into caves, and they have their closest relatives, which are not cave dwellers. Each pair of populations acts as a natural experiment.
The eyes do change with the habitat, as expected. The amphipods that live “above” in springs have eyes with about 40 facets (ommatidia), while the cave dwellers’ eyes have about 5 ommatidia.
Using genetic tests, the team found that the genes for making visual pigments, the opsins, were still intact. They had not turned into non-working genes (“pseudogenes”). The genes for the opsins were extremely similar between populations, nowhere near as different as the eyes of these little guys were.
What they did find was that the expression of these genes was dialed way down compared to their surface dwelling relatives:
Carlini and colleagues note that this could be related to the overall reduction of the eye, but they attempted to control for this by scaling expression to the size of the eyes.
Carlini and colleagues suggest that the opsin genes are under some sort of pressure to stay “intact” in this species (contrary to the suggestion here that there is an advantage to blindness in caves). But the team doesn’t know what the opsin genes might be needed for, although they suggest it might be a non-visual function.
This doesn’t solve the matter of how the animals are reducing the amount of opsins they make. Presumably there is some mutation in a regulatory gene, perhaps even one specific to the visual system.
They should keep an eye out for that.
Carlini DB, Satish S, Fong DW. 2013. Parallel reduction in expression, but no loss of functional constraint, in two opsin paralogs within cave populations of Gammarus minus (Crustacea: Amphipoda). BMC Evolutionary Biology 13(1): 89. DOI: 10.1186/1471-2148-13-89
“What big eyes you have!”
Turning light and going blind: A tale of caves and genes
Once more into the cave
Better off blind
Picture from here.
I have vague memories of the first time I counted to a hundred. It felt like one of those landmarks like tying your shoes for yourself the first time, or riding the bicycle more than a few feet without the training wheels or dad holding you up.
Of course, I don't come anywhere near Adam Spencer:
Once when I was about 7, I counted to 10,000 just to check the numbers didn't run out before then #NerdConfessions
Counting large numbers is not something that comes easily for us humans. A new paper claims this little guy, a baby guppy, may be a superior number cruncher as soon as it pops out of mama’s belly:
A couple of years ago, I reported on a paper that looked at the development of “counting” ability in guppies. In that paper, they claimed that it took about 40 days for guppies to develop the sort of ability to distinguish numbers that they had as adults. Now, the same team is back, testing very young guppies again, but this time using new methods.
The team asked these tiny guppies if they recognized numbers of things by showing the animals dots while they gave them food. Here are the three stimuli the team used.
Both A and B differ in the number of spots, but A also differs in the average sizes of those spots (which the authors call a “continuous variable”). C differs in size, but not in number. This is to try to control for the fact that when you change the number of things, you also change many other factors, like the amount of area reflecting light, etc.
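The logic of that control, changing the number of dots while holding the cumulative area constant, can be sketched with some hypothetical numbers (the actual stimulus dimensions are the authors’; this is just an illustration of the idea):

```python
import math

def dot_radii(n_dots, total_area):
    # Split a fixed total area across n equal dots, so the numerosity
    # changes but the cumulative area (a "continuous variable") does not.
    area_each = total_area / n_dots
    return [math.sqrt(area_each / math.pi)] * n_dots

few = dot_radii(5, total_area=100.0)    # fewer, larger dots
many = dot_radii(10, total_area=100.0)  # more, smaller dots

# Both stimulus sets reflect the same total area, so a fish that prefers
# one over the other must be responding to something other than area.
total = lambda radii: sum(math.pi * r * r for r in radii)
```

A fish that could only sense total stimulus area would find `few` and `many` indistinguishable, which is the point of the control.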
The authors then measured the amount of time the guppies spent near each set of dots as an indication of “preference”, on the assumption that the guppies are more likely to spend time near the dots where they got food if they learned certain dots meant food. If animals don’t learn where the food is, they may well not be able to tell the stimuli apart.
The authors placed these pairs of dots at the end of the tank while the fish were feeding, when the fish were four and five days old. As a control, some fish got food and others just a little water without food. On day six, they placed the babies in the tank to see which set of dots they gravitated to. On day seven, they repeated this, but flipped the positions of the dots.
The fish were significantly more likely to be around the set of dots that promised food when they differed by number (A and B, above), but not when the dots varied in size. That said, the guppies were not great at this. The guppies got it right only 60% of the time, which is only a slight improvement on a coin toss.
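How much better than a coin toss 60% is depends on the number of trials, which the post doesn’t give, so the sketch below uses a made-up 100 choices just to illustrate the arithmetic of a one-sided binomial test (not the paper’s actual statistics):

```python
from math import comb

def binom_tail(n, k):
    # P(X >= k) when X ~ Binomial(n, p=0.5): the chance of doing at
    # least this well by flipping a fair coin on every trial.
    return sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n

# 60 correct out of a hypothetical 100 trials: chance alone produces a
# result this good only about 3% of the time.
p = binom_tail(100, 60)
```

So a 60% hit rate can be statistically distinguishable from chance with enough trials, even though it is, as the post says, only a slight improvement on a coin toss.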
However, the authors themselves admit that this paper is hard to compare with their previous one because the stimuli are so different. The previous paper used other live fish as the stimulus, not just static dots. They also note that this test is slightly different from other training tests, which generally ask the animal to do something even more specific than “hang out at one end of an aquarium.”
It is an interesting suggestion, though, that animals so small and so young can cope with differences in number. But I still think I’ll beat them at counting to a hundred.
Piffer L, Miletto Petrazzini ME, Agrillo C. 2013. Large number discrimination in newborn fish. PLOS ONE 8(4): e62466 DOI: 10.1371/journal.pone.0062466
One fish, two fish... can fish count?
Picture by Shaojung on Flickr; used under a Creative Commons license.
You probably don’t feel tired when you get a tan.
You probably don’t think your friends feel more or less fatigued depending on whether they are dark skinned or fair skinned (like myself).
We know that differences in colour are important in lots of other species besides humans. They can play a big part in an animal’s ability to blend into the surrounding environment, for instance. What might be less appreciated is that being a certain colour might take energy. After all, many colours in animals are caused by pigments: specific molecules that animals have to make in their bodies. Some of those pigments could well depend on raw materials that the animal has to get from its food, or make through a physiological process.
Melanin is just such a chemical. Melanin is a dark chemical in lots of insects, but one of the main compounds insects need to make it only comes from food. If you don’t get enough food, you can’t make enough melanin. A new paper by Roff and Fairbairn takes this a step further, and asks if melanin might actually be costly for animals to make, with an eye towards evolutionary situations. For instance, how big a benefit in dark colour would there have to be for you to spend the energy to make more dark stuff?
They test this in a clever way. Rather than looking at different colour types of one species, they look at changes in colour of a single species, a sand cricket (Gryllus firmus; above right). When these crickets shed their exoskeleton, they are very lightly coloured (right): there is no melanin in their new exoskeleton for a while until it hardens up.
They reasoned that if making all this melanin was costly to the cricket, then crickets with less melanin should have more of some other feature, like the gonads. And that’s what they found. The bigger the gonads in a cricket, the less melanin it had. This degree of melanization was highly heritable, too (a heritability of 0.61, where 0 means a trait is not influenced by genes, and 1 means it is completely determined by genes).
None of this suggests you shouldn’t tan. Yet.
Roff DA. & Fairbairn DJ. 2013. The costs of being dark: the genetic basis of melanism and its association with fitness-related traits in the sand cricket. Journal of Evolutionary Biology: in press. DOI: 10.1111/jeb.12150
Moth picture from here; cricket picture from here; cricket molt from here.
This is our new winner, ladies and gentlemen.
This unassuming moth is a greater wax moth (Galleria mellonella). Don’t let its drab appearance fool you, friends. This is a record-setting animal, with one of the most extreme sensory systems yet found. Its speciality? Hearing.
When you listen to anything, there are two main properties inherent in the sound: loudness and tone. The loudness is determined by the size (amplitude) of the sound waves; the tone is set by their frequency. Humans hear tones where the sound waves vibrate back and forth up to about twenty thousand times a second. Something that moves back and forth once a second has a frequency of one Hertz (Hz); a thousand times a second is one kilohertz (kHz).
People differ in how well they hear sounds at the high end. In particular, you lose the high frequency sounds as you get older. You can test how high you can hear at this website. Note that it stops at 22 kHz, because very few people can hear that high.
Animals, of course, have different limitations than humans. Cartoons often reference a dog whistle, with a pitch that humans can’t hear, but dogs can.
(Note: “Dog whistle” is not to be confused with “wolf whistle.” Know the difference!)
Moir and colleagues did two experiments to show the wax moth’s superior high-end hearing. First, they used a technique to show whether the ear drum (tympanum) was vibrating. If you can’t vibrate something at the same frequency as the sound, you can’t detect the tone of the sound. They found the ear drum was able to keep up with every frequency they tested.
The critical experiment, though, is the neurophysiology. It doesn’t matter what the ear drum does if the neurons don’t convert anything into a signal. The wax moth has an ear with a grand total of four neurons devoted to picking up sound. Thus, analyzing the signals is fairly straightforward.
They found the moth’s ear could pick up sounds all the way up to 300 kHz. That’s twice as high as the previous record holder:
Sorry, Lymantria dispar. You had a good run.
The wax moth doesn’t hear equally well across the range. It is particularly good at picking up sounds in the 60 kHz range. For the wax moths to hear the highest frequency sounds, the sounds have to be much louder. At 60 kHz, the wax moths can pick up sounds of a volume about 50 decibels of sound pressure level (dB SPL); at 300 kHz, the sound has to be more like 90 dB SPL. That’s a loud sound. And at the very high end (280-300 kHz), some of the moths don’t respond at all to even loud sounds, suggesting this is near the upper limit of their hearing.
Why does the wax moth need such amazing hearing? The general explanation for why insects can hear at these high frequencies is because of these:
Bats hunt insects using high frequency sounds, and many insects have evolved ears that can hear the sounds bats make. This does not seem to be coincidence. The bats are thought to be exerting extreme selection pressure on insects, so hearing predators approaching is an adaptive advantage.
In this case, there is just one little puzzle. No bat makes a sound that hits 300 kHz. Why does the greater wax moth ear reach way up that high in the frequency spectrum? The authors suggest that this highly responsive ear allows the moth to react faster to sounds. After all, if your ear can vibrate at 300,000 times a second, and it takes 300 vibrations for the ear to pick up the sound, you could pick up the sound in a thousandth of a second, compared to about a hundredth of a second for an ear vibrating at 20 kHz, like our crappy human ears.
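That back-of-envelope argument is easy to check. Assuming, as the text does, that an ear needs roughly 300 cycles of a sound before the signal registers, the detection latency is just cycles divided by frequency:

```python
def detection_latency(freq_hz, cycles_needed=300):
    # Time (in seconds) for cycles_needed vibrations at freq_hz.
    return cycles_needed / freq_hz

moth_s = detection_latency(300_000)   # wax moth's upper limit: 1 ms
human_s = detection_latency(20_000)   # our upper limit: 15 ms
```

The 300-cycle figure is the post’s own assumption, but whatever the true number of cycles needed, the ratio holds: the moth’s ear gets the message about fifteen times faster.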
Moir HM, Jackson JC, Windmill JFC. 2013. Extremely high frequency sensitivity in a 'simple' ear. Biology Letters 9(4): 20130241-20130241. DOI: 10.1098/rsbl.2013.0241
Good night, Dr. Griffin, where ever you are...
Crickets fly away from bats, but do they run away, too?
Do bright bugs banish bothersome bats?
Let your neurons relax, the predators are gone!
Photo by dhobern on Flickr; used under a Creative Commons license.
There’s been a lot of talk about “paleo diets”, but here we have the real deal. A meal caught in the middle of digestion in a dinosaur.
Microraptor gui was introduced back in 2003, and immediately attracted attention because of its feathers, particularly lots of long, prominent feathers on its hind legs, so unlike any bird or other flying beast we know of. There is good evidence (though disputed) that it was a glossy, black animal, rather like the grackles that hang around my campus.
But behaviour is one of the trickiest things to pull from fossils. How did these animals live?
Here is the newest fossil to shed light on this question in Microraptor.
Just in front of where the hind legs meet the spine, and below the spine, there is a mass that is a little darker than the surrounding rock. There are close ups of this area in the journal article, but the reproduction is disappointingly low-resolution in the pre-print, and in any case, relatively few would immediately recognize the key feature there.
Fish bones. There is nothing but fish bones in the gut of this dinosaur. Authors Xing and colleagues say, “M. gui was an adept hunter of aquatic prey.”
Still, are there any other indications in the anatomy that Microraptor gui was a habitual fish-eater? After all, all kinds of meat eaters will pick up any meal that’s available. It is at least possible that this one individual M. gui scavenged some leftovers off someone else’s plate, so to speak.
Xing and company say that evidence against this being scavenging is that fish spoils quickly, so the window of opportunity would be small. However, other M. gui fossils have bird and mammal bones, suggesting this species may not have been a picky eater.
Microraptor may not be alone in its fish-eating habits. It’s been suggested that much larger dinosaurs were fish-eaters:
When Jurassic Park 3 came out, I snickered a little bit at the use of Spinosaurus as the “big bad” monster to up the ante over Tyrannosaurus rex. Because a quick visit to Wikipedia indicated that people thought this spiny beast ate fish, based on the skull and backward-facing teeth (think of fishhooks keeping the prey in place).
Are there any anatomical features that support M. gui as an “adept aquatic predator”? This fossil gives previously unseen views of the teeth in this species. The only thing that might be related to a possible fish diet is that some of the teeth are not serrated, and some of the very frontmost teeth point forward. Both features are apparently common in fish-eaters.
Given that well-preserved Microraptor fossils seem to emerge regularly, we can probably expect still more insight into how this interesting little beast lived.
It’s wonderful to think that ten years ago, we didn’t know Microraptor gui ever existed (the genus was named in 2000, M. gui in 2003). Now, we know what it looked like and what it ate, putting it well on the way to becoming one of the best “fleshed out” dinosaurs.
Xing L, Persons WS, Bell PR, Xu X, Zhang J, Miyashita T, Wang F, Currie PJ. 2013. Piscivory in the feathered dinosaur Microraptor. Evolution: in press. DOI: 10.1111/evo.12119
There’s something fishy about Microraptor (Of course Switek beat me to this!)
Microraptor reconstruction from here.
Last week, the science news world was all a-flutter about a new technique to clear brains described in the paper, “Structural and molecular interrogation of intact biological systems.” (Argh, what a title. Would you have guessed what they did from that title?)
We in the invertebrate neuroscience community have been clearing brains for decades. Here are some examples from my own work.
Assembled in the dying days of straight edges and Letraset and photographing photographs, here are leg motor neurons from spiny sand crabs (Blepharipoda occidentalis; Faulkes and Paul 1997). The nerve to the leg splits into two branches. A shows the neurons in the combined nerve, B shows the neurons just from the front branch of the nerve, and C shows the neurons in the back branch of the nerve.
Here are homologous neurons from the legs of slipper lobsters (Faulkes 2012), presented here in colour for the first time. This is the equivalent of part A in the composite above. This one is darker than some of the others because it has gone through a process called intensification.
Same species (Ibacus peronii) but this time we have neurons in the tail. These are abdominal fast flexor motor neurons that power the big tasty muscles that everyone likes to eat (Faulkes 2004).
Here are the homologous cells in a spiny lobster (Panulirus argus; Espinoza et al. 2006). This is a composite “stack” of images compiled with Helicon Focus. That’s why this one is prettier than the others; more of the neurons are in focus.
Now compare the two above to this one from crayfish (Procambarus clarkii; another Helicon Focus composite, previously seen in the 2011 J.B. Johnston Club calendar). Notice how there are seven in the pictures above but eight in the one below (two on the left are overlapping)? It’s because the species above lack a specialized giant motor neuron that crayfish have related to escape tailflips.
The technique used to create these images is called cobalt backfilling, developed in the 1970s (Tyrer and Altman 1974; Bacon and Altman 1977; Altman and Tyrer 1980). The clearing of neural tissue was, I believe, developed around the same time. All the water in the neural tissue is removed and replaced with absolute alcohol, and the tissue is then cleared in methyl salicylate.
Altman JS, Tyrer NM. 1980. Filling selected neurons with cobalt through cut axons. In: NJ Strausfeld, TA Miller (eds.), Neuroanatomical Techniques, pp. 373-402. Springer-Verlag: Berlin.
Bacon JP, Altman JS. 1977. A silver intensification method for cobalt filled neurons in wholemount preparations. Brain Research 138(2): 359-363. DOI: 10.1016/0006-8993(77)90753-3
Chung K, Wallace J, Kim S-Y, Kalyanasundaram S, Andalman AS, Davidson TJ, Mirzabekov JJ, Zalocusky KA, Mattis J, Denisin AK, Pak S, Bernstein H, Ramakrishnan C, Grosenick L, Gradinaru V, Deisseroth K. 2013. Structural and molecular interrogation of intact biological systems. Nature: in press. DOI: 10.1038/nature12107
Espinoza SY, Breen L, Varghese N, Faulkes Z. 2006. Loss of escape-related giant neurons in a spiny lobster, Panulirus argus. The Biological Bulletin 211(3): 223-231. Abstract and reprint
Faulkes Z. 2004. Loss of escape responses and giant neurons in the tailflipping circuits of slipper lobsters, Ibacus spp. (Decapoda, Palinura, Scyllaridae). Arthropod Structure & Development 33(2): 113-123. DOI: 10.1016/j.asd.2003.12.003
Faulkes Z. 2012. The distal leg motor neurons of slipper lobsters, Ibacus spp. (Decapoda, Scyllaridae). NeuroDojo (blog): http://neurodojo.blogspot.com/2012/09/Ibacus.html
Faulkes Z, Paul DH. 1997. A map of the distal leg motor neurons in the thoracic ganglia of four decapod crustacean species. Brain, Behavior and Evolution 49(3): 162-178. DOI: 10.1159/000112990
Tyrer NM, Altman JS. 1974. Motor and sensory flight neurones in a locust demonstrated using cobalt chloride. The Journal of Comparative Neurology 157(2): 117-138. DOI: 10.1002/cne.901570203
A virtual camera lucida
Tuesday Crustie: Neural
See through brains clarify connections
Getting better views of brains by turning them invisible
Eyes are good things to have in the light. But if you lived in the dark... all the time... would those eyes become so much a nuisance that you might lose them?
Animals that live in caves are often blind. People sometimes mistake this for evidence that features can be lost just by a “use it or lose it” rule. That would be an example of inheriting an acquired character, which doesn’t happen in evolution. Instead, the typical explanation is that because there is no advantage to maintaining eyes in a cave-dwelling population, any mutation that messes up making eyes is on an equal footing with the genes for making eyes.
It’s not that there’s an advantage to blindness... it’s just that there’s no disadvantage to it. And eyes are complex things to make, so lots of mutations could interfere with making eyes.
A recent paper by Klaus and colleagues suggests that sometimes, blindness in a cave-dweller is an advantage, not just neutral. They examined a group of crabs (genus Sundathelphusa; pictured, showing most cave adapted at bottom) in the Philippines. These are freshwater crabs, and some live in lakes and rivers and such above ground, and some live in caves. In fact, these crabs invaded caves over half a dozen times in the genus. The repeated examples make for nice natural experiments.
Using a combination of genetics plus the shape of the animal, they found that the eyes of the cave crabs had evolved just as fast as other features. Klaus and company argue that if the loss of eyes was genuinely neutral, you would expect it to be happening more slowly than other features, which are presumably under selection. Instead, the eyes were evolving just as quickly as the other features, which suggests there is some sort of advantage to being blind.
What the advantage might be... the authors don’t say, surprisingly. In the introduction, Klaus and colleagues mention the idea that losing eyes “frees up” computational power for other sensory organs. But they don’t follow that up in the discussion. They don’t even speculate a tiny little bit in the discussion. Other papers have also suggested some sort of advantage to blindness, but as far as I know, nobody has yet come up with a testable hypothesis. That it seems to be the case with both vertebrates and invertebrates suggests that whatever that selective factor is, it is very general.
Klaus S, Mendoza JCE, Liew JH, Plath M, Meier R, Yeo DCJ. 2013. Rapid evolution of troglomorphic characters suggests selection rather than neutral mutation as a driver of eye reduction in cave crabs, Biology Letters 9(2) 20121098. DOI: 10.1098/rsbl.2012.1098
Turning light and going blind: A tale of caves and genes
Once more into the cave
As I’ve mentioned before, scientists are so conservative that when you see an adjective like “extraordinary” in the title, you should at least open up the paper if you can and have a peek.
I came across a paper titled, “An extraordinary tail – integrative review of the agamid genus Xenagama” in Google Reader *. I was a bit curious (and miffed) because I had no idea from the title what kind of organism this paper would be about. All kinds of animals have tails.
I love me some spikes and spines and armor on critters, so I flipped out a bit when I learned this tail belonged to Xenagama:
That is indeed a cool looking lizard (Xenagama taylori) with a cool looking tail. The genus Xenagama originally contained two species that were defined by this short, club-like, spiky tail. But there’s a problem when you use a single extraordinary feature to classify animals: you might overlook all the other features that tie them to other relatives.
A new paper by Wagner and colleagues uses a lot of different tricks to tease apart the evolutionary relationships of the lizards in this genus: morphology, genetics, climate, and so on.
By looking at all the morphology, and not just the tails, they found that a long-tailed lizard previously put in another genus (Acanthocercus zonurus; below) sorts out with Xenagama and not Acanthocercus. Genetic analysis on this species also put it in with the rest of the Xenagama group, although it’s an early offshoot from the tree of these related lizards.
The authors also discovered a new species in the genus, that, like Acanthocercus zonurus, has a reasonably long tail; sort of intermediate between the short known species and the misidentified one. This new species is dubbed Xenagama wilmsi.
It turns out that the short tail of most of the lizards in this genus was something that was obscuring some of the relationships. There were similar problems with data on breeding colours. Some of the males in this group show different colours, which was used in creating their classifications, but the males don’t show those breeding colours all year round.
All of which doesn’t answer the obvious question: why do some of these lizards have these short tails? The tails do seem to have an adaptive function. The two species with long tails seem to be tree dwellers, while the two short-tailed species are rock-dwelling burrowers. Xenagama taylori will use its short spiked tail to close its burrow, which you can see in action below:
How this tail has been molded through development and genetics to get so short would be a great doctoral project for someone. While native to northern Africa, some of these lizards seem to be fairly available in the pet trade. I don’t know how easy these lizards would be to breed in captivity, though.
* You know, that allegedly useless service that absolutely nobody needs because all of the people on Twitter and social media are so good at finding stuff that I want to read, yet who somehow let me down on discovering this.
Wagner P, Mazuch T, Bauer AM. 2013. An extraordinary tail - integrative review of the agamid genus Xenagama. Journal of Zoological Systematics and Evolutionary Research: in press. DOI: 10.1111/jzs.12016
Top photo from here; Acanthocercus zonurus from here.
A couple of years ago, I got into a car wreck. A tire blew out on a truck to my right. It swerved and hit me. I skidded across the road. You know what you’re supposed to do in that situation, right?
You’re supposed to steer into the skid.
I did not. I was unable to correct the skid, and wound up crossing a couple of lanes of the highway. There was no oncoming traffic, and I was fine.
I was trained to do the correct thing and steer into the skid. I took driving lessons. Steering into the skid is what you’re told to do in driving school. I know this intellectually. But it’s not intuitive, you have only a split second to react, and, most importantly, we try hard not to create out of control skids. Skids are rare for people doing routine driving, especially in someplace like Southern Texas, where there are rarely icy roads.
How much time should driving instructors spend training beginning drivers to cope with skidding? There isn’t a simple answer. Someone who wants to be a professional driver should get more training. A person whose driving is mainly a daily commute in a warm, semi-arid climate may not need any. I never practised steering a skidding car, although I learned to drive in Canada, where icy roads are routine.
Last week, NESCent hosted a conference on journalism and reporting of evolution; something I’ve written about a fair amount here. As a possible solution to improve the situation, Melissa Wilson Sayres wrote:
Best Practice: Formal training in journalism/media communication for graduate students
(Check her original tweet for some discussion.)
This suggestion is well meaning. It’s a tempting suggestion for those of us in academia to make, since our entire career revolves around training in one way or another. I’ve been guilty of saying, “Every academic should make it a point to get good at... (pet topic).” But such suggestions are hard to act on.
The deeper concern is whether “formal training in graduate school” can do what we want it to do.
For instance, there has been a lot of interest in having students receive training in research ethics. Funding agencies love these. Some set aside specific pots of money to supplement training programs so that those programs can include training in ethics. Despite that, the Retraction Watch blog has no shortage of material, and most retractions are due to unethical behaviour on the part of the authors (Fang et al. 2012).
As an instructor, obviously I am not going to say that training is entirely useless. Rather, I am saying that training happens in a larger context. There is a great big ol’ reward system in place in academic science. Academic science rewards you for original peer reviewed journal articles, preferably in a small set of journals with high impact factors (the “glamour mags”), and grants. The rewards for getting those things are large.
Giving a grad student formal ethics training and expecting them not to be even a little tempted to take shortcuts in their research to get those highly rewarded papers in Nature, or Science, or Cell is like admonishing someone to cut down on calories while leading them through a cupcake shop that gives away free samples when everyone’s back is turned.
Similarly, despite training about sexual harassment, there’s still a lot of pig-headed, boorish, sexist behaviour in the workplace. Again, note that I’m not saying that such training is useless, but that there is a lot of cultural baggage that can’t quickly be overcome by “formal training.”
First, there is no central authority that says, “YEA VERILY, ALL GRADUATE PROGRAMS SHALL TEACH...” Trying to implement any formal training across the board is tough, given that grad students are spread across thousands of independent fiefdoms.
And let’s not underestimate how long “communication training” would take. As Karen James wrote:
I’ve been working at (communicating outside a research field) for a decade and still not there.
Graduate students get a lot of formal training already. There has to be a point where we stop adding to their curriculum. We can’t just send students to a workshop, or even a semester-long class, then dust off our hands and say, “They’ve been trained.” Communication training won’t matter much until there are rewards and opportunities for people to practice those skills, day in, day out, until it becomes like steering into the skid: something you don’t even have to think through.
Fang FC, Steen RG, Casadevall A. 2012. Misconduct accounts for the majority of retracted scientific publications. Proceedings of the National Academy of Sciences 109(42): 17028-17033. http://dx.doi.org/10.1073/pnas.1212247109
“We cheated death”
Reporting Across the Culture Wars: Engaging Media on Evolution
Photo by Sugar Daze on Flickr; used under a Creative Commons license.
A new editorial in The Journal of Comparative Neurology celebrates a paper that goes the extra mile in making its anatomical data available:
(The authors) provide an unprecedented level of access to their supporting data by publishing their full set of experimental outcomes in the form of virtual slides, or whole‐slide images.
The editorial nicely summarizes why archiving data from brain slices is particularly important. Brains are complex structures, and there is necessarily a lot of interpretation of what you see on microscope slides. (How many beginning students mistake air bubbles for amoeba?). Increasingly, many studies rely on stains that fade over time.
Comparative neuroanatomists can’t always guarantee that they will be able to get another brain from some interesting species. You can’t just go get a brain from a whale any time you want. Because of this, there is a tradition in comparative neuroanatomy of collecting and archiving interesting brains from all kinds of species.
The editorial points out the advantages of archiving these data on the Internet rather than in print:
(A) typical virtual slide in the collection would require over 250 square meters of paper if printed at full resolution.
The irony of all this is that The Journal of Comparative Neurology is a paywalled, subscription-based journal. And not just any subscription journal, but one with a breathtaking $30,860 price tag. And that’s for Internet access or print. If you want both, be prepared to add a few thousand to reach the asking price of $35,489.
Guys, if openness and data sharing are good, and the limitations of print are bad, you’ve just made a great argument for journals like PLOS ONE, PeerJ, the BioMed Central family, and their like. Why does your journal continue to exist in its current form?
Got $30,000 to spare?
Gaillard F, Karten HJ, Sauvé Y. 2013. Retinorecipient areas in the diurnal murine rodent Arvicanthis niloticus: A disproportionally large superior colliculus. The Journal of Comparative Neurology: in press. DOI: 10.1002/cne.23303.
Karten HJ, Glaser JR, Hof PR. 2013. A landmark in scientific publishing. The Journal of Comparative Neurology: in press. DOI: 10.1002/cne.23329
Photo by topastrodfogna on Flickr; used under a Creative Commons license.
True facts about giraffes!
They’re tall. And I use the word precisely. They’re not just big: their legs are about half again as long as you’d predict from their mass and the bodies of other mammals.
Being tall has distinct consequences for the nervous system. The distances that signals have to travel can mean a lot of lag between something happening out in the world, the signal getting to the brain, and the appropriate response going all the way back down to the muscles the animal uses to move about.
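To put rough numbers on that lag, here is a back-of-the-envelope sketch. This is my own illustration, not a calculation from the paper, and the path lengths are made-up round values:

```python
# One-way conduction delay is just path length divided by conduction
# velocity. The path lengths here are hypothetical round numbers; the
# 50 m/s velocity is roughly the giraffe sciatic figure discussed below.

def conduction_delay_ms(path_length_m, velocity_m_per_s):
    """One-way travel time for a nerve signal, in milliseconds."""
    return path_length_m / velocity_m_per_s * 1000.0

VELOCITY = 50.0  # m/s

print(f"~0.1 m path (rat-ish): {conduction_delay_ms(0.1, VELOCITY):.0f} ms")
print(f"~2.0 m path (giraffe-ish): {conduction_delay_ms(2.0, VELOCITY):.0f} ms")
```

At the same conduction velocity, a twenty-fold longer path means a twenty-fold longer delay, which is the heart of the giraffe’s problem.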
There are ways around this distance problem. You can make axons bigger, which speeds up how fast they send signals, but that means you probably have fewer axons, which could mean lower sensitivity on the sensory side, or lower precision on the motor side.
A new paper by Heather More and colleagues tries to figure out how the giraffe’s nervous system deals with all these great distances. They recorded the speed of signals and the size of neurons in the sciatic nerve of giraffes. The average speed of signals in this giraffe nerve was about 50 meters per second, which is about the same as in rats. Rats, it should be noted, are not tall. They’re not even big.
More and colleagues calculated that for a giraffe to be as quick and as responsive as a rat, the speed of signals would have to be around 200 m/s, which is the top speed in the entire animal kingdom (a record held by some shrimps). And to get to that point, their neurons would have to be two and a half times the diameter they actually found in the giraffes.
Looking at the sheer number of axons, More and colleagues also suggest that the giraffe is at a disadvantage compared to smaller mammals. If the giraffe’s sciatic nerve had the same number of axons for its size as the rat’s does, it would have about 50 times more axons than giraffes actually have.
From this, the authors predict that giraffes are working with a bit of a neural and behavioural handicap. They should be less sensitive to the world around them, and slower to respond to it, than smaller animals. But this is still a prediction that needs testing. Getting some giraffes in the lab for the experiments might be a bit tricky.
P.S.—When I blogged about a previous paper with some of the same authors, one criticism in the comments was that the team used conduction velocity of action potentials to measure “responsiveness.” This paper does a much better job of laying out all the different elements that go into determining “responsiveness” in general. That said, they don’t make much progress on measuring all those other elements in this paper, but at least they recognize they exist.
The elephant and the shrew, an axonal story
More HL, O'Connor SM, Brondum E, Wang T, Bertelsen MF, Grondahl C, Kastberg K, Horlyck A, Funder J, Donelan JM. 2013. Sensorimotor responsiveness and resolution in the giraffe. The Journal of Experimental Biology 216 (6): 1003-1011. DOI: 10.1242/jeb.067231
Top photo by ucumari on Flickr; used under a Creative Commons license.
A while ago, there were some reports of young men at universities who came up with an interesting way of imbibing alcohol. It was nicknamed “buttchugging.”
This method of alcohol delivery is, from a certain very twisted point of view, quite clever. The gut is a tube. This means that regardless of which orifice alcohol enters your gut, you can still take up the alcohol into your system and enjoy the intoxicating effects.
Now, a new paper from Jaeckle and Strathmann looks at whether sea cucumbers might be able to ingest food via the back door. It turns out that sea cucumbers have an odd system for taking up oxygen: they have respiratory trees that connect to the anus, shown below:
Consequently, the anus isn’t just an exit point for undigested waste in sea cucumbers. There is much more activity than simple expulsion, with water being moved to and fro to allow for respiration. Given this, there would seem to be much more opportunity for nutrition to enter the digestive system through the anus.
To test this, they performed two experiments. In the first, they put giant California sea cucumbers (Parastichopus californicus) in tanks with algae labeled with a radioactive carbon isotope (14C). Let’s look at Figure 2.
What the authors are trying to show is that the closer you are to the respiratory tree, the proposed entry point for the food, the more of the carbon isotope you find. After 24 hours, though, the level is nearly as high in the digestive system as in the respiratory tree. But a close look at the figure reveals some significant shortcomings.
First, the most disturbing thing about this graph is the error bars. There aren’t any. There is no indication of sample size anywhere in the paper. There are no statistical analyses, either. I suspect that each data point came from one individual. If so, that is a huge problem that makes it difficult to conclude much of anything.
Then, notice the collection intervals. You get a lot of measurements in the first 8 hours, then nothing until 25 hours. I am betting the #OverlyHonestMethods version of this would read, “We did most of the work in one day, went home to eat and sleep, and came back the next morning.” That day-long gap might matter for the results, depending on the behaviour of the sea cucumbers. Are they nocturnal? Do they feed differently in the day than at night? Those sorts of questions are not answered. The authors also don’t appear to take any steps to control for ingestion through the mouth.
In the second experiment, they exposed the animals to large molecules containing a lot of iron. The iron allowed them to stain it later to see if it had been taken into the tissues.
I don’t like this experiment as much as the first one, because it seems unlikely that sea cucumbers hang out in regions rich in large nutritional molecules. The previous experiment seems to better represent actual ingestion of food cells: the sort a sea cucumber might encounter in the wild.
Again, there are some shortcomings in how they present their results. In their figures 4 and 5, Jaeckle and Strathmann show that there are bits of respiratory tree tissue that stained blue for iron, indicating that those molecules were incorporated into the animal. But the authors show only the positive experimental stains. They say in the text:
In the respiratory trees of control animals, there was no equivalent presence of the blue reaction product.
It would be much better if they also showed the negative control in the pictures.
And another weird thing, which you can see in the acknowledgements, is that this work was done in 1996. While this is not a record for delay between experiment and publication, a wait of over 15 years has to be in the top one percent.
The authors note that even if the sea cucumbers were able to retain all the food in the water, the amount of food they would be getting would probably not be that large. The authors do raise the possibility that this unusual way of feeding might allow sea cucumbers to get different kinds of food than they would get by ingesting it through the mouth. But this is speculative, and it seems likely that anal feeding would contribute at best only a small amount of the animal’s nutrition.
This paper makes a plausible case for this sea cucumber species to be able to get some nutrition via the anus, but it is very limited in what you can conclude from it.
P.S.—And if all that didn’t make the sea cucumber anus remarkable enough, it can also provide a home for fish.
Jaeckle WB, Strathmann RR. 2013. The anus as a second mouth: anal suspension feeding by an oral deposit-feeding sea cucumber. Invertebrate Biology: in press. DOI: 10.1111/ivb.12009
Giant California sea cucumber photo by Ken-ichi; sea cucumber diagram from here.
This picture can’t do them justice. No picture can.
That’s because this is a picture of a blue whale, the largest animal to live on this planet. Ever.
Goodness knows, people try to show you the size. They put up mounts of blue whale skeletons in museums, or life-sized models. There’s a very cool online animation that shows images of the blue whale full sized on your computer screen, as it drifts by lazily. But I suspect that even these clever things don’t quite do the trick of conveying what the size of the living, breathing animal must be.
But while the blue whale has the undisputed title of being the biggest, whales, dolphins, and their brethren are in general all very big compared to most mammals. In a new paper, Clauset tests a model that tries to explain why whales might be so big.
Normally, when I think of limits to size, I think of biomechanical and physical constraints. “In a big animal, can you make the bone thick enough to move without breaking?”, for instance. This is a common sort of explanation for why you can’t make giant insects like in the old 1950s monster movies:
Strength increases as you make muscles bigger, but strength doesn’t increase as fast as mass does.
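You can see that square-cube logic with a trivial calculation. This is my own illustration, with normalized made-up units:

```python
# Square-cube law sketch: scale every linear dimension by a factor s.
# Muscle strength tracks cross-sectional area (s**2), while mass tracks
# volume (s**3), so the strength-to-weight ratio falls off as 1/s.

def scale_up(strength, mass, s):
    """Return (strength, mass) after scaling linear size by s."""
    return strength * s**2, mass * s**3

strength, mass = 1.0, 1.0                               # insect-sized baseline
big_strength, big_mass = scale_up(strength, mass, 100)  # 100x movie monster

# Strength-to-weight ratio drops to 1/100 of the original:
print(big_strength / big_mass)
```

A hundred-fold longer giant ant has ten thousand times the strength but a million times the weight, which is why it collapses instead of terrorizing the town.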
What’s interesting about Clauset’s approach is that he explains the sizes of whales without using too many of these sorts of arguments. He does invoke the physics to explain the limits to small sizes. Mammals (and birds) can only be so small because of how they regulate their body temperature. If you’re too small, you cannot eat enough food to make up for the heat flow away from your body.
This thermoregulation problem explains why there are no cat-sized dolphins that you could keep in a backyard pool, or hamster-sized porpoises you could keep in a home aquarium. The smallest cetacean is the La Plata dolphin, which weighs around 35-50 kg as an adult. Although the babies are pretty squee-worthy:
Clauset ignores all the details of biomechanical, physical, and energetic limitations by rolling them into “extinction risk.” Can’t make bones strong enough? Can’t eat enough food? All of those mean that big species are more likely to go extinct.
Clauset assumes a species can either get bigger or smaller over evolutionary time, with some fitness advantage to being bigger. You have a hard limit on how small you can get, set by your ability to thermoregulate. The limit to how big you can get is a soft limit, set by the likelihood your lineage will go extinct. With only these assumptions, Clauset’s model fits the size distribution of cetaceans extremely well. Presumably, the same model could be used for terrestrial mammals or birds.
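To make that logic concrete, here is a toy simulation in the spirit of this kind of model. To be clear, this is my own sketch, not Clauset’s actual equations, and every parameter in it is an arbitrary stand-in:

```python
import random

# Toy version of a hard-floor, soft-ceiling size model (my sketch, not
# Clauset's): lineages take random steps in log10 body mass with a slight
# upward bias, bounce off a hard thermoregulatory floor, and face an
# extinction risk that grows with size. All numbers are arbitrary.

random.seed(1)
LOG_MIN = 1.5   # hard floor in log10(kg), the thermoregulatory limit
STEPS = 200     # evolutionary time steps per lineage

def evolve(log_mass):
    """One lineage drifting in log10 body mass; None means it went extinct."""
    for _ in range(STEPS):
        log_mass += random.gauss(0.02, 0.15)      # slight bias toward bigger
        log_mass = max(log_mass, LOG_MIN)         # reflecting lower boundary
        p_extinct = 0.002 * (log_mass - LOG_MIN)  # soft ceiling: bigger = riskier
        if random.random() < p_extinct:
            return None
    return log_mass

results = [evolve(2.0) for _ in range(2000)]
survivors = [m for m in results if m is not None]
print(f"{len(survivors)} of {len(results)} lineages survived")
```

The point of the toy is the asymmetry: nothing ever crosses the floor, while the ceiling only thins out the biggest lineages probabilistically, which is why a rare giant like the blue whale is not a shock to the model.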
Even the massive blue whale, Clauset says, is not particularly unlikely according to his model. Clauset draws out the line from his model and suggests that it might be possible to have a whale species that is over three times bigger than blue whales; 3.7 times, to be exact. Clauset notes that such a massive whale could not just be the blue whale scaled up. To be bigger than the blues, a new whale species might have to evolve some innovation that would allow them to forage more efficiently than the blue whale’s lunge feeding.
The notion that even the blue whale could be dwarfed by another sea creature is an awe-inspiring thought.
Clauset A. 2013. How large should whales be? PLOS ONE 8(1): e53967. DOI: 10.1371/journal.pone.0053967
Blue whale photo by Seabass London on Flickr; used under a Creative Commons license. La Plata dolphin from Washington Post.
What do we know about crustacean pain?
Crabs, and probably other big decapod crustaceans, avoid electric shock in the short term.
They can learn to avoid places where they were shocked over slightly longer terms.
There may be substantial variation across individuals in their ability to learn.
The evidence is consistent with pain.
Pain is hard to prove, even in humans.
What don’t we know?
Whether electric shock is normally relevant to crustaceans.
Whether electric shock is processed in the same sort of way that people and mammals process noxious stimuli.
Whether there are specialized neurons for noxious stimuli in crustaceans.
Whether any other kinds of stimuli are aversive to large crustaceans, like high temperatures.
A new paper on crustacean pain by Magee and Elwood came out on Wednesday, 16 January 2013. This new paper is, in some ways, a variation on papers that senior author Robert Elwood published on hermit crabs (Appel and Elwood 2009, Elwood and Appel 2009). In that earlier hermit crab work, they shocked the abdomen of the crabs, and found they could get the hermits to leave their shells and take what would normally be an inferior shell as a new home.
In the new article, Magee and Elwood used electric shock as their stimulus again. Electric shock is used in many studies of pain, but it does have a problem: it’s not very specific. It will trip off any kind of cell that is electrically excitable, including sensory neurons, motor neurons (potentially causing spikes from motor neurons to travel backwards into the central nervous system), and muscles. It’s difficult to know just what you’re doing to the animal’s nervous system.
Crabs (in this case, Carcinus maenas; pictured) like dark hidey-holes. Magee and Elwood placed the crabs in an open arena with two shelters, one with vertical stripes and one with horizontal stripes. Then they picked one shelter and, if the crab entered it, gave the crab an electric shock. They gave the animals 10 trials, delivered in short order (two minutes between each trial). The total testing time for each animal seemed to be maybe an hour or two, depending on the crab’s behaviour.
What you would expect is that crabs would learn to avoid the shelter in which they got shocked. This is indeed what happens (Figure 1 from Magee and Elwood below). But notice that even at the end of the trials, about a third of the crabs are still walking into the shock box. Maybe these are just crab masochists.
The authors went on to test whether it was the stripes on the shelter, or the location of the shelter, that the crabs had learned, by swapping the positions of the two shelters. This swap seemed to happen right after the ten trials. Given that part of the argument they make in the Introduction is that “pain facilitates long-term protection because of the ease with which animals learn to avoid that situation and avoid future damage,” and in the Discussion that they are finding “long-term motivational change,” the short time scale is slightly surprising. The argument would be greatly strengthened if they showed the crabs retained the memory over, say, 24 hours. Other studies have shown that decapod crustaceans, such as crayfish and lobsters, can remember things like who they’ve fought for days to weeks (reviewed in Hemsworth et al. 2007).
It’s worth comparing this to an earlier paper that had a similar approach. Kawai et al. (2004) also applied electric shock to crayfish. The authors had two groups:
Crayfish shocked while facing a door they could escape through (group F for forward);
Crayfish shocked while they were facing away from the door (group B for backward).
These authors noted that group F, on the right in the figure below, learned to avoid the shock. But notice that about half the animals in this group never learned the task (S6, 7, and 10), and it took a long time. Group B never improved.
Magee and Elwood mention the long training in the paper by Kawai and colleagues, but not the variation in performance either between the two groups, or within the one group. I wonder if an animal by animal breakdown in the Magee and Elwood paper would show the same sort of individual variation. That is, the Magee and Elwood figure could either be:
All the crabs getting a little better throughout the trials.
Some crabs getting very good, while others just never, ever learn.
Now, the crayfish were shocked less than the crabs were. But if the idea being put forward is that avoidance learning suggests pain, would this mean that only about half of crayfish feel pain... and only if they are looking at a door?
Having talked about the paper itself, I want to shift to how it’s being reported. I expected media attention over this paper, because previous papers from the Elwood lab had gotten attention: for instance, a lead interview on Quirks and Quarks. I was still surprised by the breadth of coverage people in my Twitter feed helped me find yesterday. The Daily Mail was the first I found, followed by the BBC, Discovery, The Guardian, National Public Radio, Fox News, and Live Science, and I’m sure there will be many others.
The Mail article starts off with the old hack phrase, “Scientists have proven...” as though this were a worldwide consensus instead of two people (Magee and Elwood, 2013). Oh, how such reports grate on me with their use of the word “prove.” (There are exceptions; the BBC puts “further evidence” in its headline, which emphasizes that this is part of a series of studies.)
The original paper in JEB is, as often the case, much more nuanced than the press articles suggest. It carefully uses phrases like, “consistent with pain”, which admits there are alternative hypotheses. Elwood and colleagues are doing a series of studies, step by step; the sort of incremental progress that characterizes so much of science.
Let me start by saying that not everyone agrees with the notion generally that crustaceans feel pain. Victoria Braithwaite, in her book Do Fish Feel Pain? (reviewed here), concluded that crustaceans do not feel pain, even after long discussions with Elwood.
The two have very different standards on what they think the criteria for pain should be. Elwood appears to think that to show that an animal feels pain, you only need to show that it responds to a nasty stimulus with something more complicated than a reflex. Braithwaite thinks that to show an animal feels pain, you need to show the animal is capable of consciousness. I think it’s fair to say that neither one is widely accepted as the standard for showing an animal feels pain.
To take a similar case, the evidence for nociception...
Magee B, Elwood RW. 2013. Shock avoidance by discrimination learning in the shore crab (Carcinus maenas) is consistent with a key criterion for pain. Journal of Experimental Biology 216(3): 353-358. DOI: 10.1242/jeb.072041
We normally think that each of our senses is more or less distinct. Sure, there’s that condition called synesthesia, where people experience numbers with colours and that sort of thing, but that’s pretty rare, right?
Maybe not. A new paper suggests our different senses may be influencing each other more often than we think. The team looked at how smell, something we normally think of as one of our weaker, less important senses, holds sway over vision, the sense most people think of as our strongest, most important one.
Zhou and colleagues used a phenomenon called binocular rivalry to test this. Binocular rivalry is not when two binocular stores are competing on price.
Normally, the two halves of our brain get complementary information from the two eyes, which the brain stitches together into one almost seamless visual experience. Using a little bit of visual trickery, it’s possible to feed all of the left brain one image and all of the right brain a completely different, incompatible image. Faced with two competing sets of information, people see only one image at a time, alternating with the other every few seconds in an unpredictable way.
You can get a sense of it from this picture if you let your eyes cross so that the images are superimposed (a little like a 3-D stereogram).
In the overlapping image in the center, you will tend to see either green circles or red bars, not a half-and-half blend. The two images will alternate back and forth unpredictably.
In their main experiment, Zhou and colleagues showed people rival pictures of a rose and a banana at the same time. While doing this, they gave their volunteers the smell of a rose, and people became more likely to see the image of the rose.
When they gave them the smell of the banana, they were more likely to see the banana.
They also got this effect with a mix of images and words. In a second experiment (which must have been less fun for the volunteers), the rival images were a male torso and a set of words. When presented with the smell of, um, body odor (good ol’ B.O.; eeewww), the subjects were more likely to see the person instead of the words... but only if the smell of sweat was delivered to the right nostril.
Why does the nostril matter? Like the rest of our body, each nostril is wired to one half of the brain, so the input from each nostril has a different effect on one side of the brain than on the other.
The “nostril” effect can be broken fairly easily, though. If you show a picture of a banana, with the rival image being the word “rose,” the scent of a rose still makes you more likely to see the word “rose,” but it no longer matters which nostril you smell the rose-like scent through.
The one thing I can’t quite understand is why this paper is in The Journal of Neuroscience. There is no neuroscience in this paper. No brightly lit brain blobs, no EEGs, no neurons, nothing. This is a straight sensory perception paper.
Zhou W, Zhang X, Chen J, Wang L, Chen D. 2012. Nostril-specific olfactory modulation of visual perception in binocular rivalry. Journal of Neuroscience 32(48): 17225-17229. DOI: 10.1523/jneurosci.2649-12.2012
Binocular rivalry image from here. Nose by Caro’s Lines on Flickr; rose and banana by cproppe on Flickr; both used under a Creative Commons license.
How long can an insect live? Cicadas might be up near the top. Some cicadas are famous for remaining in the larval stage for thirteen or seventeen years. That makes them pretty long-lived insects, even if they spend most of that time as larvae underground, out of sight.
A lot of cicadas are synced up in these thirteen and seventeen year cycles, so that in peak years, huge numbers of these insects emerge. Then they are everywhere, singing to attract mates so they can get the next brood of baby cicadas on their long road to maturity.
Now, these two periods – thirteen and seventeen years – are notable because they are both prime numbers. As I understand it, the leading explanation is that lots of things in nature tend to cycle, but most of those cycles are fairly short. One possible advantage of cycling with a prime-number period is that it’s unlikely that any other short cyclic event will consistently coincide with the emergence of the new adult cicadas.
Imagine cicadas emerged on a twelve year cycle. Any predator that was on a roughly two, three, four, or six year cycle could sync up with the food feast of cicada emergence – provided there was a little give in their cycles so they could line up in the first place. But that sort of synchronization between predators and prey is much harder to achieve with a prime number. Thus, cicadas never face large numbers of predators just waiting for them to come out from their long larval stage.
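The arithmetic behind this argument is just least common multiples: a predator on a q-year cycle coincides with a P-year brood every lcm(P, q) years. A quick sketch (the cycle lengths for the predators are illustrative, not from the paper):

```python
from math import gcd

def years_between_coincidences(prey_cycle, predator_cycle):
    # Two periodic events that start together coincide again
    # every lcm(prey_cycle, predator_cycle) years.
    return prey_cycle * predator_cycle // gcd(prey_cycle, predator_cycle)

# A hypothetical 12-year brood meets every short-cycle predator
# at every single emergence:
for q in (2, 3, 4, 6):
    print(12, q, years_between_coincidences(12, q))  # 12 each time

# A 13-year (prime) brood shares an emergence year with those same
# predators far less often:
for q in (2, 3, 4, 6):
    print(13, q, years_between_coincidences(13, q))  # 26, 39, 52, 78
```

Because 13 and 17 share no factors with any shorter cycle, the only way a predator population can reliably meet the brood is to match its entire long period.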
A new paper suggests that the cicadas might even reap a bigger advantage than that.
Koenig and Liebhold do a new analysis estimating how many birds are around during years when cicadas emerge in large numbers, and how many there are in years when the cicadas don’t. They have population estimates for fifteen predatory bird species over 45 years. Their data set is as old as I am.
Surprisingly, there are routinely fewer birds in the years when cicadas emerge. The authors propose that this indicates that the long cycle has somehow allowed the cicadas to emerge during years that are safer than usual.
The authors do briefly mention alternative hypotheses. Cicadas are famously loud insects. Maybe the cicadas are so abundant and noisy that they actually drive birds away from their normal habitats. The authors say this is unlikely, because the bird counts go down even in places where the cicadas are not calling.
Koenig and Liebhold suggest that it's more or less coincidence that the cicada cycles last a prime number of years. They suggest that the emergence of these huge numbers of insects has some sort of knock-on effect, such that when it occurs, the bird populations are affected, and go through booms and busts of their own – and the birds' low point comes around again in about thirteen or seventeen years.
The details of how this might happen aren't clear.
I suppose the good news about being a cicada researcher is that you have time to plan new studies. The bad news is that it probably doesn't take thirteen years to plan those projects... or seventeen.
Koenig WD, Liebhold AM. 2012. Avian predation pressure as a potential driver of periodical cicada cycle length. The American Naturalist: in press. DOI: 10.1086/668596
Photo by fmerenda on Flickr; used under a Creative Commons license.
There are maps of your body in your brain. Some maps represent the control over your muscles. Other maps show the input coming in from your senses. One of the best known sensory maps is the one for touch.
While we might think of everything we feel with our skin as one sense – touch – there are actually several separate senses. We feel pressure. We feel changes in temperature, and different neurons handle warmth and chills.
And we feel pain.
While Wilder Penfield published the famous maps of the somatosensory cortex over 60 years ago, it hasn’t been clear if the neurons that we use to pick up pain from tissue damage, nociceptors, make maps in the brain the way other senses do. There are fewer nociceptors in the skin than other kinds of sensory neurons.
A new paper by Mancini and colleagues set out to test this. They gave their volunteers either innocuous little puffs of air on their hands, or...
They shot their volunteers with frikkin’ laser beams.
This hurt. Not much, but enough to set off the nociceptors in the volunteers’ fingers. The authors describe it as a “pinprick.”
While they were doing this to the hands, Mancini and company were taking brain scans using functional magnetic resonance imaging (fMRI).
If you look at your hand, the middle finger is, well, in the middle, flanked on either side by the ring and index fingers.
If there’s a map of nociceptors in the cortex, you should find that same order in the parts of the brain that respond to being shot with lasers. Using the colour scheme above, the blue should always be flanked by red on the one side and green on the other.
And that’s what you see. Check the area surrounded by the dotted white line in the picture:
The team also shows that the responses for the control puffs of air also map out in the same way.
Strictly speaking, the authors only show that there’s a map of the nociceptors of the fingers. Now, to assert that this means there is a full map of the sort that gets shown in textbooks is sort of like saying that because you have a decent map of the Mediterranean, you also have a decent map of Australia. That’s plausible, though strictly speaking, they haven’t mapped the entire nociceptive globe, so to speak.
It’s a nice demonstration that these neurons follow some of the same patterns of organization as other sensory systems. Which does lead to a bigger question: why does the nervous system tend to make these maps instead of some other form of organization?
Mancini F, Haggard P, Iannetti GD, Longo MR, Sereno MI. 2012. Fine-grained nociceptive maps in primary somatosensory cortex. The Journal of Neuroscience 32(48): 17155-17162. DOI: 10.1523/JNEUROSCI.3059-12.2012
Classic graphics #3: The somatosensory cortex
Honeybees are clever wee beasties. If you give a honeybee a scent, then give her food, she can quickly learn to extend her mouthparts when she smells the scent alone. And she can remember this for at least a whole 24-hour day. This is a classic learning test made famous by Pavlov’s dogs. So honeybees are at least as smart as dogs, for this test anyway.
What’s going on in that tiny little head as they learn that some arbitrary smell means food? Usually, neurons need to make new “stuff” to form a memory. Making proteins, for instance, is usually needed for long term memory, but not short term memory.
Actin is a protein that is best known as half of the machinery that powers muscles (myosin is the other half), but actin is also a more general component of a cell’s skeleton. In rats and mice and other furry mammals, you need to make actin to get long-term potentiation (LTP), which is a strengthening of the connections between two neurons.
Ganeshina and colleagues injected honeybees with chemicals that blocked the making of actin. You would expect that this would mess up the poor little honeybee’s memory.
But expectations were dashed. These actin-inhibiting drugs made the honeybees remember better, not worse.
The authors aren’t sure what’s going on here, but they have a guess.
The parts of the honeybee’s nervous system that learn smell are called the mushroom bodies. These mushroom bodies grow a little as the honeybee gets older, adding new connections between neurons all the time, regardless of whether the honeybee learns anything or not. These new connections, because they aren’t related to anything the bee learns, would mostly add noise to the neural pathway. And that could drown out some of the connections between neurons that are formed or strengthened as the honeybee learns.
The authors seem to think that knocking out the actin production prevents “random” new connections that would form just during normal aging. As a result, the honeybee gets more memory signal and less noise.
This is a story of diversity. This paper reminds us that even when animals can learn the same kinds of tasks, they may not be learning them in the same ways.
Ganeshina O, Erdmann J, Tiberi S, Vorobyev M, Menzel R. 2012. Depolymerization of actin facilitates memory formation in an insect. Biology Letters 8(6): 1023-1027. DOI: 10.1098/rsbl.2012.0784
Photo by BugMan50 on Flickr; used under a Creative Commons license.
When I was at the International Congress for Neuroethology in August, I tweeted this piece of advice offered for neuroethologists:
Use the champion animal.
Speaker Bill Kristan attributed this to Walter Heiligenberg. The idea is simple: study the animal that is best adapted, or makes the greatest use of some feature or ability.
I was fascinated that I had never heard this quote before, even though Heiligenberg is well-remembered in the neuroethology community. (He died in 1994.) I was further fascinated by how “sticky” this quote was at the meeting. “Champion animal” turned up in talk after talk, until by day 5, I was calling it “the Heiligenberg rule.”
I wondered if Heiligenberg had ever written that memorable advice down. After searching and running into a few dead ends in Google and Google Scholar, I found this, which seems to be the origin of the phrase (Heiligenberg 1991):
We have learned that some animal species are champions in particular aspect of sensory or motor performance and that such superior capabilities are linked to highly specialized neuronal structures. Such structures incorporate and optimize particular neuronal designs that may be less conspicuous in organisms lacking these superior capabilities (Bullock 1984, 1986a,b). Moreover, the behavioral repertoire of such “champion” species readily offers paradigms for testing the performance of their special designs at the level of the intact animal.
I was a little disappointed that the verifiable version of punchy, memorable advice is stuck in longer, more mundane scientific prose. I suppose I should not be surprised, given that many other great ideas start off as rather lengthy bits in print, and get shorter (and more memorable!) in the retelling.
For instance, the phrase, “an inordinate fondness for beetles,” is often quoted (or misquoted) as being from J.B.S. Haldane. According to Stephen Jay Gould, who researched the phrase (reprinted in his book Dinosaur in a Haystack), Haldane almost certainly said this in conversation. But the versions of this idea that Haldane wrote down (“endowed with a passion...for beetles”) are nowhere near as good as “inordinate fondness.”
Then there’s the story of how a quote from a business professor in the 1960s became widely attributed to Charles Darwin. And in that case, too, the quote got shorter and more memorable with repeated retelling.
I am sort of hoping that Heiligenberg might have said the short version in conversation. The idea is worth encapsulating in a short, powerful sentence instead of academic prose.
Heiligenberg W. 1991. The neural basis of behavior: a neuroethological view. Annual Review of Neuroscience 14(1): 247-267. DOI: 10.1146/annurev.ne.14.030191.001335
Gould SJ. 1993. A special fondness for beetles. Natural History 102(1): 4.
Photo from here.