Blog Post

Too Many Positive Results: Is Today's Science 'Teaching to the Test'?

Remember the opening line of Garrison Keillor's public radio show A Prairie Home Companion, set in the mythical, midwestern, slightly parodic and gently nostalgic Lake Wobegon, where "all the women are strong, all the men are good looking, and all the children are above average"?  It turns out that science today is a little like that.  A new analysis released by the University of Edinburgh reveals that, in 4,600 scientific research papers published between 1990 and 2007, there was a steady, significant decline in papers whose findings contradicted the scientific hypotheses being investigated.  Grants are renewed when your first experiment is successful, so the tendency is to construct an experiment that yields the results you anticipate.  Readers prefer to read success stories, so scientific publishers, seeking readers (of course), have a bias toward positive results.  "Over the period studied," the Edinburgh analysts write, "positive results grew from around 70 per cent in 1990 to 86 per cent in 2007. The growth was strongest in economics, business, clinical medicine, psychology, psychiatry, pharmacology and molecular biology."

Hey!  They accuse English teachers of inflating their college students' grades, and 86% of published scientific studies are, well, strong, good looking, and above average?  Dr. Daniele Fanelli, the lead author of the study, indicates that the problem of "No Problem!" is greater in the U.S. than elsewhere, suggesting that the intense competition for shrinking funding in the U.S. might be driving the positive bias.

Why is this a problem?  You need negative evidence to spur science toward better, other, more innovative solutions.  If everything is good, you replicate the good, you limit the scope of the challenge, you duplicate resources, and, if you know you are rewarded only for success, you set the bar of success lower, vaguer, easier to attain.  You can read a popular account of the study here:

But now let me rail a bit.  We tell our students and our kids that failure is good, because you learn by being challenged.  That is true on every level, from the neurological to the cultural.  We measure individual success in such a mindless way (single-item multiple-choice tests, survey results, lack of longitudinal comparisons or side-by-side comparisons of alternative scenarios, single-bias statistics, single-blind peer review, and on and on).  Schools fail if students fail on tests, so you control the result by teaching to the test and, hey, if that doesn't work, you just cheat on reporting the test scores (as has happened recently in NY state).  To my mind, too many false positives reveal a system that is itself derelict.  That is, if you have too many false positives, you need to step back and think critically about what conditions in that system bias it toward successful results.  I don't mean conditions that promote and encourage real success; I mean conditions that motivate the reporting of success against a metric determined by the tester.  If such a motivation exists extrinsic to the experiment or test itself, that in itself is a bias in the test that casts doubt on the results.

By contrast, game mechanics work so well for learning because, when you are set a challenge and fail, you are encouraged to try harder.  When you are set a challenge and succeed, you are given a harder challenge.  If there is some accounting of challenges, if peers can give you points or badges for your accomplishment without any reward for doing so, then you have a system that builds in non-positive outcomes not as "failures" (i.e., publication rejection, grant denial) but as challenges that spur you on.

I have a personal reason for being distressed by this study.  The reviews of my Now You See It have been wonderful.  No author can ever complain when the range of reviews looks like this, so, please, I'm not complaining.  But I'm curious about what it means when I'm referred to as a "techno-optimist."  To me, that sounds like I'm trying to make people feel good about the results I report from neuroscience and management theory and from the people I interview.  Actually, I am both trying to counter the punditry around "techno-pessimism" ("the Internet makes you shallow, dumb, lonely, isolated, distracted, etc.") and trying to showcase the work of people who have made institutional change happen against all the forces of resistance and the status quo: convention, tradition, ignorance, prejudice, fear.  Success against odds is very different from a "false positive" or from "techno-optimism."

In the case of the overly positive scientific publications, we have a system that rewards the positive, and so the temptation is to set the bar low so you can be positive.  What I am describing (and what I am inspired by) is people who face a situation where everything militates against success, where there are no external reward systems for change, where the innovation comes from a mission or a desire or a basic need that is not rewarded or even recognized anywhere else in the systems set up to evaluate you (the opposite of grant re-funding mechanisms) and, faced with no prospect of external reward, you still go for it.

To me, there is something sad, a bit cowardly, and innately conservative about a push to publish 86% positive scientific papers.  By contrast, there is something heroic and inspiring when someone reacts against the reward system of his or her job or society or social expectations in order to make a difference, make something happen, make something work.  That's not "techno-optimism."  It's activism.



Cathy N. Davidson is co-founder of HASTAC and author of The Future of Thinking: Learning Institutions for a Digital Age (with HASTAC co-founder David Theo Goldberg) and of the forthcoming Now You See It: How the Brain Science of Attention Will Transform the Way We Live, Work, and Learn (Viking Press, publication date August 18, 2011).

A starred review in the May 30 Publishers Weekly notes:  "Davidson has produced an exceptional and critically important book, one that is all-but-impossible to put down and likely to shape discussions for years to come." PW named it one of the "top 10 science books" of the Fall 2011 season.

In the August 9 New York Times, columnist Virginia Heffernan calls the book "galvanic. . .  One of the nation’s great digital minds, she has written an immensely enjoyable omni-manifesto that’s officially about the brain science of attention. But the book also challenges nearly every assumption about American education. . . . As scholarly as “Now You See It” is — as rooted in field experience, as well as rigorous history, philosophy and science — this book about education happens to double as an optimistic, even thrilling, summer read. It supplies reasons for hope about the future. Take it to the beach. That much hope, plus that much scholarship, amounts to a distinctly unguilty pleasure."

For more information, or to order the book, click on the cover below.

  [NYSI cover]


1 comment

I'm a biologist teaching university-level biology students. One of the most difficult conceptual steps many freshmen have to make is understanding that just because their results don't match their expectations doesn't mean they have 'failed'. If I have a class with a 'failed' experiment, they automatically assume that they will get lower marks for any write-up they submit and that it's somebody's fault. Faculty often contribute to the misunderstanding by doing something we would not do in our own research, which is to give students a different set of results that 'worked' to analyze so that they don't react this way. In my classes, this does not happen. I think students learn more about both the science content and the ways of scientists by writing up the 'failures' than by having things work all the time, even if they don't 'like' it.