Unreliable research: Trouble at the lab | The Economist
Unreliable research: Nice #stats graphic showing why many of the positive results of low-powered studies are false
http://www.economist.com/news/briefing/21588057-scientists-think-science-self-correcting-alarming-degree-it-not-trouble
Nice graphic showing how testing lots of hypotheses, most of which are false, with low-powered studies and the conventional 5% false-positive rate results in most of the positive conclusions being false.
Scientists like to think of science as self-correcting. To an alarming degree, it is not
QT:
“
A statistically powerful study is one able to pick things up even when their effects on the data are small. In general bigger studies—those which run the experiment more times, recruit more patients for the trial, or whatever—are more powerful. A power of 0.8 means that of ten true hypotheses tested, only two will be ruled out because their effects are not picked up in the data; this is widely accepted as powerful enough for most purposes. But this benchmark is not always met, not least because big studies are more expensive. A study in April by Dr Ioannidis and colleagues found that in neuroscience the typical statistical power is a dismal 0.21; writing in Perspectives on Psychological Science, Marjan Bakker of the University of Amsterdam and colleagues reckon that in that field the average power is 0.35.
….
With this in mind, consider 1,000 hypotheses being tested of which just 100 are true (see chart). Studies with a power of 0.8 will find 80 of them, missing 20 because of false negatives. Of the 900 hypotheses that are wrong, 5%—that is, 45 of them—will look right because of type I errors. Add the false positives to the 80 true positives and you have 125 positive results, fully a third of which are specious. If you dropped the statistical power from 0.8 to 0.4, which would seem realistic for many fields, you would still have 45 false positives but only 40 true positives. More than half your positive results would be wrong.
”
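The arithmetic in that last paragraph is easy to replay. A quick Python sketch (the function name and defaults are mine, not the article's):

```python
# Back-of-the-envelope sketch of the chart's arithmetic (my code, not the
# article's): 1,000 hypotheses, 100 of them true, tested at a 5% type I
# error rate, at two levels of statistical power.

def positive_results(n_hypotheses=1000, n_true=100, power=0.8, alpha=0.05):
    """Return (true positives, false positives, share of positives that are false)."""
    true_pos = power * n_true                    # true effects actually detected
    false_pos = alpha * (n_hypotheses - n_true)  # type I errors among the false hypotheses
    false_share = false_pos / (true_pos + false_pos)
    return true_pos, false_pos, false_share

for pwr in (0.8, 0.4):
    tp, fp, share = positive_results(power=pwr)
    print(f"power {pwr}: {tp:.0f} true positives + {fp:.0f} false positives "
          f"-> {share:.0%} of positive results are wrong")
```

At power 0.8 this reproduces the 80 + 45 = 125 positives, about a third of them specious; drop the power to 0.4 and more than half of the positive results are wrong.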