Post List

Research / Scholarship posts


  • November 27, 2011
  • 05:24 PM
  • 1,066 views

Sapolsky Religion Lecture Dissected Part 1/6

by DJ Busby in Astronasty

Sapolsky admits that this brilliant lecture is often met with considerable resistance by some religious types. Here, my goal is to dissect his lecture and provide adequate scholarly references to clarify and validate his argument; to reaffirm that his lecture is based on unbiased science.... Read more »

Torgersen, Svenn. (1985) Relationship of Schizotypal Personality Disorder to Schizophrenia: Genetics. Schizophrenia Bulletin, 11(4).

Owen, M., Williams, H., & O’Donovan, M. (2009) Schizophrenia genetics: advancing on two fronts. Current Opinion in Genetics & Development, 19(3), 266-270. DOI: 10.1016/j.gde.2009.02.008

Kurotaki, N., Tasaki, S., Mishima, H., Ono, S., Imamura, A., Kikuchi, T., Nishida, N., Tokunaga, K., Yoshiura, K., & Ozawa, H. (2011) Identification of Novel Schizophrenia Loci by Homozygosity Mapping Using DNA Microarray Analysis. PLoS ONE, 6(5). DOI: 10.1371/journal.pone.0020589  

  • November 26, 2011
  • 09:52 AM
  • 660 views

Beware Dead Fish Statistics

by Neuroskeptic in Neuroskeptic

An editorial in the Journal of Physiology offers some important notes on statistics. But even more importantly, it refers to a certain blog in the process:

The Student’s t-test merely quantifies the ‘lack of support’ for no effect. It is left to the user of the test to decide how convincing this lack might be. A further difficulty is evident in the repeated samples we show in Figure 2: one of those samples was quite improbable because the P-value was 0.03, which suggests a substantial lack of support, but that’s chance for you! A parody of this effect of multiple sampling, taken to extremes, can be found at http://neuroskeptic.blogspot.com/2009/09/fmri-gets-slap-in-face-with-dead-fish.html

This makes it the second academic paper to refer to this blog so far. Although I feel rather bad about this one, since the citation ought to have been to the original dead salmon brain scanning study by Craig Bennett. I just wrote about it.

Actually, though, this editorial was published in six separate journals: The Journal of Physiology, Experimental Physiology, the British Journal of Pharmacology, Advances in Physiology Education, Microcirculation, and Clinical and Experimental Pharmacology and Physiology. Phew. In fact, you could say that this makes not two but seven citations for Neuroskeptic now. Yes. Let's go with that.

Anyway, after discussing the history of the ubiquitous Student's t-test - which was invented in a brewery - the editorial reminds us that the p value you get from such a t-test doesn't tell you how likely it is that your results are "real". Rather, it tells you how often you'd get the result you did if there were no effect and it was just random chance. That's a big difference. A p value of 0.01 doesn't mean your results are 99% likely to be real. It means that there's a 1% chance that you'd get them by chance.

But if you did, say, 100 experiments - or, more likely, 100 statistical tests on the same data - then you'd expect to get at least one result with a p value of 0.01 purely by chance. In that case it would be silly to think that the finding was only 1% likely to be a fluke. Of course it could be true. But we'd have no particular reason to think so until we get some more data. This is what the dead salmon study was all about.

This multiple comparisons issue is very old, but very important. Arguably the biggest problem in science today is that we're doing too many comparisons and only reporting the significant ones.

Drummond GB, & Tom BD (2011). Statistics, probability, significance, likelihood: words mean what we define them to mean. British Journal of Pharmacology, 164(6), 1573-6. PMID: 22022804... Read more »
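The multiple-comparisons arithmetic is easy to check by simulation. A minimal sketch, not from the editorial: it runs 100 two-group comparisons on pure noise, using a large-sample normal approximation to Student's t-test (the function name is made up).

```python
import math
import random

random.seed(0)

def two_sample_p(x, y):
    """Two-sided p-value for a difference in means, via the
    large-sample normal approximation to Student's t-test."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    z = (mx - my) / math.sqrt(vx / nx + vy / ny)
    return math.erfc(abs(z) / math.sqrt(2))

# 100 tests on pure noise: the null hypothesis is true in every one
p_values = []
for _ in range(100):
    a = [random.gauss(0, 1) for _ in range(50)]
    b = [random.gauss(0, 1) for _ in range(50)]
    p_values.append(two_sample_p(a, b))

significant = sum(p < 0.05 for p in p_values)
print(min(p_values))  # the "best" result, obtained by chance alone
print(significant)    # roughly 5 of the 100 come out "significant"
```

With no real effect anywhere, small p values still turn up; reporting only those is exactly the dead-salmon trap.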

  • November 26, 2011
  • 05:02 AM
  • 900 views

The stupid things Scientists say: What the jargon really means…

by Stuart Farrimond in Guru: Science Blog

Don’t scientists talk a load of old prattle? Put an academic in front of a TV camera, and it’s odd how many of the world’s top brains seem unable to communicate what they mean. Of course there are the exceptions, and often they are scooped up by news agencies and media outlets. I remember being [...]... Read more »

Somerville, R., & Hassol, S. (2011) Communicating the science of climate change. Physics Today, 64(10), 48. DOI: 10.1063/PT.3.1296  

  • November 24, 2011
  • 01:19 PM
  • 1,487 views

Working Memory (with R Code!)

by Ryan in Epidemiology as a liberal art

Just got Daniel Kahneman's Thinking, Fast and Slow for my birthday, and if the first two chapters are any indication, this is an amazing book. To prove it, I just wasted 3 hours programming up Kahneman's Add-1 exercise in R. In his words:

To start, make up several strings of 4 digits, all different, and write each string on an index card. Place a blank card on top of the deck. The task you will perform is called Add-1. Here is how it goes: Start beating a steady rhythm. Remove the blank card and read the four digits aloud. Wait for two beats, then report a string in which each of the original digits is incremented by 1. If the digits on the card are 5294, the correct response is 6305. Keeping the rhythm is important.

I've been interested in working memory (and improving mine) for some time, but what really grabbed me was this:

You will find in the changing size of your pupils a faithful record of how hard you worked.

Kahneman published a series of papers in the '60s and '70s demonstrating the physiological changes associated with concentrated effort (what he calls System 2: the deliberate, slow, logical faculties). He gives the simple example of evaluating 17*24:

the computation was not only an event in your mind; your body was also involved. Your muscles tensed up, your blood pressure rose, and your heart rate increased. Someone looking closely at your eyes while you tackled this problem would have seen your pupils dilate. Your pupils contracted back to normal size as soon as you ended your work - when you found the answer (which is 408, by the way) or when you gave up.

Anyhow, I went ahead and made a little R program to generate a 4-digit number every few seconds and spit it onto the screen, and to speed up over time. Give it a try:

Tursky B, Shapiro D, Crider A, & Kahneman D (1969). Pupillary, heart rate, and skin resistance changes during a mental task. Journal of experimental psychology, 79(1), 164-7. PMID: 5785627... Read more »

Tursky B, Shapiro D, Crider A, & Kahneman D. (1969) Pupillary, heart rate, and skin resistance changes during a mental task. Journal of experimental psychology, 79(1), 164-7. PMID: 5785627  
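The author's R script isn't reproduced in the excerpt, but the core of the Add-1 task is simple to sketch. A minimal version in Python (the original was in R; function names here are made up), with digits wrapping 9 back to 0 so that 5294 becomes 6305 as in Kahneman's example:

```python
import random

def make_string(k=4):
    """A k-digit string with all digits different, as Kahneman specifies."""
    return ''.join(random.sample('0123456789', k))

def add_1(digits):
    """Increment every digit by 1, wrapping 9 back to 0."""
    return ''.join(str((int(d) + 1) % 10) for d in digits)

# A full drill would print make_string() on a timer that speeds up over
# time, as the post describes; here we just check the scoring rule.
print(add_1('5294'))  # → 6305
```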

  • November 23, 2011
  • 12:03 PM
  • 1,269 views

O Brother, Where Art Thou? – Estimating fp

by Olga Vovk in Milchstraße

In the Drake equation, fp stands for the fraction of stars that have planets. Drake's own estimate for this parameter was fp = 0.5, meaning that 50% of the stars in the Milky Way may have planets. The modern estimate is fp ~ 0.4 (Marcy et al., 2005); however, this number may grow considerably as more precise techniques for planet detection are developed.... Read more »
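For context, fp is one factor in Drake's product. A toy sketch of the full equation; only fp = 0.4 comes from the post (Marcy et al., 2005), every other input below is purely illustrative:

```python
# Drake equation: N = R* · fp · ne · fl · fi · fc · L
# (star formation rate × fraction of stars with planets × habitable
# planets per system × fractions developing life, intelligence, and
# detectable technology × lifetime of a communicating civilization)
def drake(r_star, f_p, n_e, f_l, f_i, f_c, lifetime):
    return r_star * f_p * n_e * f_l * f_i * f_c * lifetime

# Made-up inputs except f_p = 0.4
n = drake(r_star=7, f_p=0.4, n_e=2, f_l=0.1, f_i=0.01, f_c=0.01,
          lifetime=10_000)
print(n)  # ~0.56 civilizations under these hypothetical inputs
```

The point the post makes carries over: because the factors multiply, sharpening any one of them (like fp) shifts the whole estimate proportionally.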

  • November 23, 2011
  • 10:18 AM
  • 1,023 views

Signal received from the lost Russian Phobos-Grunt Mars probe

by Olga Vovk in Milchstraße

Signal received from the lost Russian Phobos-Grunt Mars probe... Read more »

Harvey, Brian. (2007) The Rebirth of the Russian Space Program: 50 Years After Sputnik, New Frontiers. Springer-Praxis Books in Space Exploration.

  • November 22, 2011
  • 11:00 PM
  • 614 views

Some data on college degrees

by Ryan in Epidemiology as a liberal art

An article in the New York Times a few weeks ago got a lot of attention in the science blogging world. It described the high attrition rate of college students in STEM fields, and made the basic argument that science and engineering curricula are too hard, too dry, and far too divorced from reality. The answer? Projects. Group projects. Hardly earth-shattering if you've been through an engineering program recently, but the article raised some interesting points about heavily abstract coursework and disillusionment with real-world problems. A 2010 article [PDF] in Science made the same point, focusing on introductory courses:

A new report from the National Academies... says that improving introductory courses is one of many steps needed to increase the number of students obtaining degrees in STEM. Students who had once sat passively during a weekly 2-hour recitation section while a graduate student solved problems on the whiteboard are now part of four-person teams responsible for finding the right answer.

The larger context is our alarming deficit of students actually graduating with STEM degrees: about 15% of college degrees are in STEM, compared to 28% in Germany and a whopping 47% in China (BHEF Report [PDF]). And attrition seems to be a big part of the problem. The NYT article noted that only about 40% of students who start a STEM program actually finish one, and the figure falls as low as 20% among underrepresented minorities. But is attrition the whole story?

Some data

Via Marginal Revolution and Code and Culture, I got linked through to a WSJ data table (from the Georgetown Center on Education and the Workforce; based on US Census data) on college majors, their popularity, unemployment, and quantiles of earnings. In this post I'll take a look at some of the relationships between college degrees and economic success. If I use "success" as shorthand for "economic success", please realize that it's just that: shorthand.

Nothing sums up my feelings about basing post-graduate plans on income better than this line by George Packer:

A Wall Street career is becoming indefensible, and yet large fractions of the graduates of America’s best universities can think of no better use for their intelligence and degrees than a job that has become less socially useful than prostitution, and a lot more harmful.

But I digress. The WSJ-Georgetown-Census data had degrees broken down by Engineering, Science, History, Education, and Art. It took some recoding from the WSJ format: they, oddly, had things like Chemistry and Physics not listed under Science. They also, understandably, had certain degrees double-classified. I did my best to get things into reasonable categories; you can find my data recoding here [Google doc], along with the R code for the plots. I went on and combined Engineering and Science into a STEM group, and the rest into non-STEM.

First things first, the distribution of incomes: clearly some positive skew with a long right tail - the story of income inequality in America, right? That outlier at $127,000 is Petroleum Engineering, if you're interested. And the median income overall is $51,000.

Next, the median income by degree group: pretty much as expected, with those engineers (I'm technically one of them, with two degrees in Biomedical Engineering) looking much better off, and no obvious differences between the rest. The difference between Engineering and the Liberal Arts is really stunning, though.

And unemployment? History and Liberal Arts, again, not looking great, with little difference between the rest. Note, though, Education's low unemployment. That high-unemployment outlier near 20% is Clinical Psychology.

So how about some of the bivariate relationships? Take a look at the relationship between popularity and median income: it doesn't look like much is going on. Perhaps a hint of an inverse relationship, with less popular degrees making a touch more. The blue line is just a smoothed local regression.

But let's take a look at it using some quantiles: here the three lines are for the 10th, 50th, and 90th percentiles of median income, and we see what seems to be a larger effect for the higher incomes (the slope for the 90th percentile is steeper). This seems even stranger: not only are the less popular degrees more economically valuable, they offer a better shot at a very comfortable income. (Note this is all based on quantiles, including the individual data points, so the superstars from finance are getting washed out... if our points were means, things would probably look quite different.)

And if we break the plot into STEM and non-STEM groups (and overlay a linear regression and loess fit), we see something a bit more interesting: overall the linear trend in both is almost exactly flat, pretty close to the overall plot above. But there's almost a quadratic relationship within the STEM fields, with a dip in median income for the middlingly popular fields. I wonder if there is some selection going on here, with a certain kind of person following the crowds for a decent income, the least successful forced into the less popular fields, and the superstars making it through the least popular fields for the biggest potential payoffs. It would be interesting to see these numbers adjusted for the number of spots available. If there is a lot of selection going on in the smaller programs (making them appear less popular), I think these data would fit that story quite well.

But since we're in Homo economicus mode here, maybe it isn't expectations of earnings, or the shot at high earnings, that matters, but unemployment: doesn't look like it. Nice and flat. But break it out by STEM again, and it's pretty striking: the selection hypothesis begins to look a bit more interesting. Perhaps in STEM fields... Read more »
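The post's parenthetical point - that quantiles wash out superstars while means do not - is easy to demonstrate. A minimal sketch with made-up incomes (the $127,000 figure echoes the post's Petroleum Engineering outlier, the rest are invented):

```python
import statistics

# Hypothetical incomes: five typical earners and one superstar field
incomes = [40_000, 45_000, 48_000, 51_000, 55_000, 127_000]

print(statistics.mean(incomes))    # 61000.0 - dragged upward by the outlier
print(statistics.median(incomes))  # 49500.0 - barely notices it
```

This is why plotting medians and other percentiles, as the post does, gives a more honest picture of a skewed income distribution than plotting means.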

  • November 22, 2011
  • 04:51 PM
  • 574 views

Can We Reduce the Carbon Cost of Scientific Mega-Meetings?

by jebyrnes in I'm a chordata, urochordata!

I admit it. I love big scientific meetings. There’s something about the intense intellectual hubbub of thousands of my field’s greatest minds gathered in one place for a few days of showing off the latest, greatest, flashiest work that just fills me with joy. Also a need to sleep for a week afterwards due to [...]... Read more »

Ponette-González, Alexandra G., & Jarrett E. Byrnes. (2011) Sustainable Science? Reducing the Carbon Impact of Scientific Mega-Meetings. Ethnobiology Letters, 65-71.

  • November 22, 2011
  • 01:48 PM
  • 395 views

The trouble with in-laws… (Holiday Edition!)

by eHarmony Labs in eHarmony Labs Blog

The holidays are a time for cheer and goodwill, but that cheer can sometimes be buried under stress and scrutiny - especially from your in-laws. Read on to learn how research says you should handle these sometimes fragile relationships during the holidays.... Read more »

  • November 21, 2011
  • 03:22 PM
  • 523 views

Why speeding neutrinos are interesting for social scientists

by Rense Nieuwenhuis in Curving Normality

In the world as we understand it, following Einstein, nothing can travel faster than light. This prediction of the special theory of relativity has proven itself countless times in empirical research. And now, lo and behold, a group at CERN has observed neutrinos racing through the Earth from France/Switzerland to Italy at the world-record-breaking speed of slightly above light speed!... Read more »

The OPERA Collaboration: T. Adam et al. (2011) Measurement of the neutrino velocity with the OPERA detector in the CNGS beam. arXiv: 1109.4897v2

  • November 21, 2011
  • 03:08 PM
  • 969 views

Wrong for science?

by TGIQ in The Bug Geek

Last night I was up too late (again), nursing a too-busy brain with a good dose of Internet, when the Twitterverse led me to a post by Marie-Claire Shanahan on the blog Boundary Vision, entitled “Who is the traditional right type of person for science?” It would appear there are some common themes in terms [...]... Read more »

  • November 20, 2011
  • 10:58 PM
  • 887 views

Level of Measurement and Archaeological Dating

by teofilo in Gambler's House

In 1946 the psychologist Stanley Smith Stevens, founder and director of Harvard’s Psycho-Acoustic Laboratory, published a short article in Science laying out a classification scheme for scales of measurement.  This system, and the four scales it proposed, would go on to become extremely influential in the quantitative sciences, and it is still widely used.  I [...]... Read more »

  • November 20, 2011
  • 04:25 PM
  • 578 views

Who is the traditional right type of person for science?

by Marie-Claire Shanahan in Boundary Vision

A study asking high school students for their views on what type of people qualify as the right type to do science.... Read more »

  • November 17, 2011
  • 06:42 AM
  • 645 views

The reluctance of science to open up

by Joerg Heber in All That Matters

I finally had the chance to read Michael Nielsen’s book ‘Reinventing Discovery’ - a must-read for anyone interested in scientific discovery. Why? Well, because the closed, individual way in which we organize science today is in many ways hampering progress and may eventually become a thing of the past. If you are in science, why did you [...]... Read more »

Hardin, G. (1968) The Tragedy of the Commons. Science, 162(3859), 1243-1248. DOI: 10.1126/science.162.3859.1243  

  • November 16, 2011
  • 07:54 AM
  • 1,044 views

On the importance of science research blogs and how YOU can vote to support students who blog about science!

by Heather in Escaping Anergy: The Immunology Research Blog

Blogs devoted to engaging the public in discussion about scientific research are vital to the advancement of our society; however, these important sources of information need YOUR support to advocate for science communication. Escaping Anergy was selected as a finalist for a blogging scholarship and needs YOUR vote to win! Your support is greatly appreciated!... Read more »

  • November 15, 2011
  • 01:59 AM
  • 1,072 views

Diagnostic Errors in Psychiatry

by Dr Shock in Dr Shock MD PhD

Diagnostic errors are hot these days. This subject is important for patient safety, and as such, attention to it has increased. Previously I wrote about one diagnostic error, the availability bias. There are many more possible cognitive diagnostic errors to be made by physicians, and some are more common in psychiatry. [...]


... Read more »

  • November 14, 2011
  • 05:20 AM
  • 935 views

Can you spot the fake brain computer interface?

by Neurobonkers in Neurobonkers

A team of bogus developers are applying for crowd funding for a project that does not exist. Can you spot the flaws?... Read more »

Cruse, D., Chennu, S., Chatelle, C., Bekinschtein, T.A., Fernández-Espejo, D., Pickard, J.D., Laureys, S., & Owen, A.M. (2011) Bedside detection of awareness in the vegetative state: a cohort study. The Lancet. DOI: 10.1016/S0140-6736(11)61224-5

  • November 13, 2011
  • 03:15 AM
  • 956 views

...the rest is just details

by Bradley Voytek in Oscillatory Thoughts

(This is cross-posted from my piece at Nature.)

Meet the electric brain. A pinnacle of modern science! This marvel comes complete with a "centrencephalic system", eyes, ears, medial and lateral geniculate, corpora quadrigemina, and visual cortex. (Click to enlarge.) The text reads:

A giant electrified model of the human brain's control system is demonstrated by Dr. A.G. Macleod, at the meeting of the American Medical Association in New York, on June 26, 1961. The maze of twisting tubes and blinking lights traces the way the brain receives information and turns it into thought and then action.

It's a cheap journalistic trick to pull out a single example of hubris from the past at which to laugh and to highlight our own "progress". But where did the Electric Brain fail? Claims to understanding or modeling the brain have almost certainly been made countless times over the course of human thinking. Hell, in moments of excitement with colleagues over a pint (or two) I've been known to shout, "I've figured out the brain!" But, of course, I have always been wrong.

So here we are, exactly 50 years post Electric Brain, and I find myself once again at the annual Society for Neuroscience conference (SfN). Each year we push back the curtain of ignorance and, just as I have every year since I began my neuroscience career in 2003, I find myself surrounded by 30-40,000 fellow brain nerds.

How do the latest and greatest theories and findings on display at SfN compare to the Electric Brain? One would like to think that, with this much brain power (har, har), surely we must be close to "understanding the brain" (whatever that might mean). Although any model of the human brain feels like an act of hubris, what good are countless scientific facts without an integrated model or framework in which to test them?

The Electric Brain is an example of a connectionist model, in which the brain is composed of a collection of connected, communicating units. Thus, the brain can be modeled by the interconnections between all of its subregions; behavior is thought to be an emergent property of the complexities of these interconnected networks of units. A "unit" in the Electric Brain appears to be a whole region, whose computations are presumably modeled by a simple input/output function. The modern incarnations of this movement are seen in the rapidly maturing field with the sexy name of connectomics, the underlying belief of which is that if we could model how every neuron connects to every other neuron, we would understand the brain.

With advancements in computational power, we've moved beyond simplified models of entire brain regions and toward attempts to model whole neurons, such as the model of 10^11 neurons by Izhikevich and Edelman (Large-Scale Model of Mammalian Thalamocortical Systems, PNAS 2008). There are also attempts to model the brain at the molecular level, such as with the Blue Brain Project. But as Stephen Larson, a PhD candidate at UCSD, astutely noted on Quora:

To give you a sense of the challenge here, consider the case of a simple organism with 302 neurons, the C. elegans. There has been a published connectome available since 1986... however, there is still no working model of that connectome that explains the behavior of the C. elegans.

One issue with this approach is that the brain is dynamic, and it is from this dynamism that behavior arises (the complexities of which are hidden in a static wiring diagram). Wilder Penfield summed it up nicely: "Consciousness exists only in association with the passage of impulses through ever changing circuits of the brainstem and cortex. One cannot say that consciousness is here or there" (Wilder Penfield: his legacy to neurology. The centrencephalic system, Can Med Assoc J 1977). In my most cynical moments, connectionist approaches feel like cargo cult thinking, whereby aping the general structure will give rise to the behavior of interest. But how can we learn anything without first understanding the neuroanatomy? After all, our anatomy determines the rules by which we are biologically constrained.

This year I spoke at the 3rd International Workshop on Advances in Electrocorticography. Across two days there were a total of 23 lectures on cutting-edge methods in human and animal electrophysiology. Last year I attended a day-long, pre-SfN workshop titled "Analysis and Function of Large-Scale Brain Networks". During both of these sessions I sometimes found it difficult to restrain my optimism and enthusiasm (yes, even after all these years "in the trenches"). And while wandering through a Kinko's goldmine of posters and caffeinating myself through a series of lectures, occasionally I forget my skepticism and cynicism and think, "Wow, they're really on to something."

And that's what I love about this conference: every year it gives me a respite from cynicism and skepticism to see so many scientists who are so passionate about their work. Sure, it might be hubris to think that we can model a brain, but so what? When the models fail, scientists will learn, the field will iterate, and the process will advance. That's what we do, and that's why I keep coming back. The Society for Neuroscience conference is my nerd Disneyland.

Izhikevich EM, & Edelman GM (2008). Large-scale model of mammalian thalamocortical systems. Proceedings of the National Academy of Sciences of the United States of America, 105(9), 3593-8. PMID: 18292226

Jasper HH (1977). Wilder Penfield: his legacy to neurology. The centrencephalic system. Canadian Medical Association Journal, 116(12), 1371-2. PMID: 324600... Read more »

Izhikevich EM, & Edelman GM. (2008) Large-scale model of mammalian thalamocortical systems. Proceedings of the National Academy of Sciences of the United States of America, 105(9), 3593-8. PMID: 18292226  

  • November 12, 2011
  • 08:14 PM
  • 706 views

What power laws actually tell you about wealth and the 1%

by Jon Wilkins in Lost in Transcription

So, there's an article published in yesterday's Guardian titled, "The mathematical law that shows why wealth flows to the 1%," which is fine, except for the fact that the "law" is not really a law, nor does it necessarily show "why" wealth flows anywhere.

To be fair, it's a perfectly reasonable article with a crap, misleading headline, so I blame the editor, not the author.

The point of the article is to introduce the idea of a power law distribution, or heavy-tailed distributions more generally. These pop up all over the place, but are something that many people are not familiar with. The critical feature of such distributions, if we are talking about, say, wealth, is that an enormous number of people have very little, while a small number of people have a ton. In these circumstances it can be misleading, or at least uninformative, to talk about "average" wealth.

The introduction is nicely done, and it represents an important part of the "how" of wealth is distributed, but what, if anything, does it tell us about the "why"?

To try to answer that, we'll walk through three distributions with the same "average," to see what a distribution's shape might tell us about the process that gave rise to it: Normal, Log Normal, and Pareto.

The blue curve, with a peak at 300, is a Normal distribution. The red curve, with its peak around 50, is a Log Normal. The yellow one, with its peak off the top of the chart at the left, is a Pareto distribution.
In each case, the mean of the distribution is 300.

The core of the issue, I think, is that there are three different technical definitions that we associate with the common-usage term "average": the mean, the median, and the mode. This is probably familiar to most readers who have made their way here, but here's a quick review:

The mean is what you usually calculate when you are asked to find the average of something. For instance, you would determine the average wealth of a nation by taking its total wealth and dividing it by the number of people.

The median is the point where half of the distribution lies to the right, and half lies to the left. So the median wealth would be the amount of money X where half of the people had more than X and half had less than X.

The mode is the high point in the distribution, its most common value. In the picture above, the mode of the blue curve is at about 300, while the mode of the red curve is a little less than 50.

The Normal (or Gaussian, or bell-curve-shaped) distribution, represented in blue, is probably the most familiar. One of the features of the Normal distribution is that the mode, median, and mean are all the same. So, if you have something that is Normally distributed, and you talk about the "average" value, you are probably also talking about a "typical" value.



Lots of things in our everyday experience are distributed in a vaguely Normal way. For instance, if I told you that the average mass of an apple was 5 ounces, and you reached into a bag full of apples, you would probably expect to pull out an apple that was somewhere in the vicinity of 5 ounces, and you might assume that you would be as likely to get an apple that was bigger than that as you would be to get one that was smaller. Or if I told you that the average height in a town was 5 feet, 8 inches, you might expect to see reasonable numbers of people who were 5'6", fewer who were 5'2", and fewer still who were 4'10".



So what sorts of processes lead to a Normal distribution? The simplest way is if you have a bunch of independent factors that add up. For example, it is thought that a large number of genes affect height, with the specific variants of each gene that you inherited contributing a small amount to making you taller or less tall, in a way that is close enough to additive.




What would it mean, then, if we were to find that wealth was Normally distributed? Well, it could mean a lot of things, but a simple model that could give rise to a Normal wealth distribution would be one where the amount of pay each person received each week was randomly drawn from the same distribution. Maybe you would flip a coin, and if it came up heads, you would get $300, while tails would get you $100. Pretty much any distribution would work, as long as the same distribution applied to everyone. After many weeks, some people would have gotten more heads, and they would be in the right-hand tail of the wealth distribution. The unlucky people who got more tails would be in the left-hand tail. But most people's wealth would be reasonably close to the mean of the wealth distribution.
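The coin-flip model above is easy to simulate. A minimal sketch (the $100/$300 payoffs follow the text; the population size and number of weeks are arbitrary):

```python
import random
import statistics

random.seed(1)

def wealth_after(weeks):
    """Additive model: each week pays $300 (heads) or $100 (tails)."""
    return sum(random.choice([100, 300]) for _ in range(weeks))

population = [wealth_after(100) for _ in range(10_000)]

# Sums of many independent draws come out roughly Normal:
# mean and median nearly coincide, near 100 weeks × $200 = $20,000
print(statistics.mean(population))
print(statistics.median(population))
```

The mean and median land almost on top of each other, which is exactly the Normal-distribution signature described above.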




Image from Alex Pardee's 2009 exhibition "Hiding From The Normals"


Now, it's important to remember that just because a particular mechanism can lead to a particular distribution, observing that distribution does not prove that your particular mechanism was actually at work. It seems like that should be obvious, but you actually see a disturbing number of scientific papers that basically make that error. There will typically be whole families of mechanisms that can give rise to the same outcome. However, looking at the outcome (the distribution, in this case) and asking what mechanisms are consistent with it is an important first step.



Alright, now let's talk about the Log Normal distribution (the red one). Unlike the Normal, the Log Normal is skewed: it has a short left tail and a long right one. This means that the mean, mode, and median are no longer the same. In the curve I showed above, the mean is 300, the median is about 150, and the mode is about 35. 



This is where talk about averages can be misleading, or at least easily misinterpreted. Imagine that the wealth of a nation was distributed like the red curve, and that I told you that the average wealth was $30,000. What would you think? Well, if I also told you that the wealth was Log Normally distributed, and I gave you some additional information (like the median, or the variance), you could reconstruct the complete distribution of wealth, at least in principle.



The problem is that we tend to think intuitively in terms of distributions that look more like the Normal. In practice, we hear $30,000 average wealth, and we say, "Hey, that's not too bad." We probably don't consciously recognize that (in this example), half of the people actually have less than $15,000, and that the typical (i.e., modal) person has only about $3500.



What type of process can give rise to a Log Normal distribution? Well, again, there are many possible mechanisms that would be consistent with a Log Normal outcome, but there is a class of simplest possible underlying mechanisms. We imagine something like the coin toss that we used in the Normal case, but now, instead of adding a random quantity with each coin toss, we multiply.



This is sort of like if everyone started off with the same amount of money invested in the stock market. Each week, your wealth would change by some percentage. Some weeks you might gain 2%. Other weeks you might lose 1%. If everyone is drawing from the same distribution of multipliers (if we all have the same chance of a 2% increase, etc.), the distribution of wealth will wind up looking Log Normally distributed.
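Running the same simulation with multiplication instead of addition shows the skew appearing. A sketch (the 2% gain / 1% loss factors follow the text; the starting wealth and time horizon are arbitrary):

```python
import random
import statistics

random.seed(1)

def wealth_after(weeks, start=10_000):
    """Multiplicative model: each week wealth gains 2% or loses 1%."""
    w = start
    for _ in range(weeks):
        w *= random.choice([1.02, 0.99])
    return w

population = [wealth_after(500) for _ in range(10_000)]

# Products of random factors skew right, Log Normal style:
# the mean ends up noticeably above the median
print(statistics.mean(population) > statistics.median(population))  # True
```

Swapping a sum for a product is the whole difference between the blue and red curves: same coin, very different shape.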




Vilfredo Pareto, who grew a very long b... Read more »

Clauset, A., Shalizi, C., & Newman, M. (2009) Power-Law Distributions in Empirical Data. SIAM Review, 51(4), 661. DOI: 10.1137/070710111  


Research Blogging is powered by SMG Technology.

To learn more, visit seedmediagroup.com.