Unfortunately, it's par for the course. Academia has a major incentive problem: there's too much to gain from publishing successful research ($$$, fame, citations). The result of all this data massaging is years of wasted research and millions, if not billions, of dollars chasing ghosts. (The same happens in companies, by the way, but to a lesser extent.)
"A former researcher at Amgen Inc has found that many basic studies on cancer -- a high proportion of them from university labs -- are unreliable, with grim consequences for producing new medicines in the future.
During a decade as head of global cancer research at Amgen, C. Glenn Begley identified 53 "landmark" publications -- papers in top journals, from reputable labs -- for his team to reproduce. Begley sought to double-check the findings before trying to build on them for drug development. [...] Result: 47 of the 53 could not be replicated."
I really want to think that this particular attempt at reproduction is not quite as bad as it sounds, because replicating this stuff can be very hard and often depends on special reagents (cell lines, viruses, etc.)...
On the other hand, these are by definition high-impact studies, ones Amgen wanted to develop drugs from, and they wouldn't have gone public unless they were reasonably sure they'd given each a legitimate try. And I already assume at least half of biomedical research is junk. It wasn't just because I found organic chemistry easy and fun that I switched my major to chemistry; it was in part because I didn't like the ... vibes, for lack of a better word, that I got from the (MIT) biology department, aside from my adviser Phillip Sharp (who was clearly going to win a Nobel soon, and for solid research; Lord Baltimore, though ... did not impress).
If the purpose of these science experiments is to "move the ball down the field" (add to the sum of human knowledge/science), wouldn't it be fair to say that if the results can't be duplicated by other scientists, then the original authors haven't succeeded in increasing human knowledge?
Well, it depends on the process. Normally when this sort of thing happens, those trying to build upon another's research will contact the lab that did it and try to figure out what's wrong.
My favorite professor, Jerry Lettvin, discovered a marvelous thing about frogs' eyes: in addition to (or unlike) the basic edge detectors we've got, they have a specific "bug detector". This was demonstrated in the lab with a bowl over the frog's head and something black moved into the frog's vision via a magnet on the other side -- and, of course, probe(s) in the optic nerve.
Another lab had difficulty reproducing this (it was seriously novel, and in theory easy and cheap to reproduce), so Jerry practiced the procedure for a while with side-cutters (https://en.wikipedia.org/wiki/Diagonal_pliers), then showed up at that lab -- not clean-shaven, in his usual simple and slightly dirty blue work shirt and black pants -- and got the experiment to work with the lab's equipment (e.g. probe(s) and oscilloscope) ... and the side-cutters he brought ^_^.
(The frogs were, BTW, reportedly not terribly hurt by this, and after healing up exhibited normal frog behavior. Then again, if I were in that lab, I'd be harvesting them for their tasty legs, as I did with all the bullfrogs I shot with my BB gun growing up, in addition to dissecting them :-).)
This is fairly common--if work is genuinely original there is likely a good deal of technique involved. There is a lot of laboratory science that is more like craft, and it's always been this way. Reproducibility of novel results should be expected to be poor, and getting to the point of reproduction will often be difficult.
So "failure to reproduce on the first few attempts" does not mean "bad science".
In genomics, however, failure to reproduce was the norm for many years. There were a few spectacularly good early results that held up, but the ubiquitous use of cross-validation (which, unless done with insane care, is simply invalid) and analysis of significance that was frequently just wrong meant that a lot of published results were the numerical equivalents of early Royal Society papers on deformed cows and the like: meticulous descriptions of anomalous one-offs.
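The classic way cross-validation goes wrong here is selection bias: choosing the "best" features using the whole dataset, then cross-validating only the classifier. Below is a minimal sketch of that pitfall on purely synthetic data (all names, sizes, and the nearest-centroid classifier are illustrative, not from any particular paper). The labels are random noise, so any honest estimate of accuracy should be near 50%; selecting features outside the folds makes the noise look like a strong result.

```python
import numpy as np

# Synthetic "genomics-style" data: many features, few samples, labels are pure noise.
rng = np.random.default_rng(0)
n, p, k_feat, folds = 100, 1000, 10, 5
X = rng.standard_normal((n, p))
y = rng.integers(0, 2, n)  # random labels: no classifier can truly beat 50%

def top_features(X_sub, y_sub, k):
    # Rank features by absolute difference of class means; keep the k "best".
    diff = np.abs(X_sub[y_sub == 0].mean(0) - X_sub[y_sub == 1].mean(0))
    return np.argsort(diff)[-k:]

def centroid_predict(X_tr, y_tr, X_te):
    # Nearest-centroid classifier: assign each test point to the closer class mean.
    c0, c1 = X_tr[y_tr == 0].mean(0), X_tr[y_tr == 1].mean(0)
    return (((X_te - c1) ** 2).sum(1) < ((X_te - c0) ** 2).sum(1)).astype(int)

def cv_accuracy(select_inside_folds):
    idx = np.arange(n)
    if not select_inside_folds:
        feats = top_features(X, y, k_feat)  # LEAK: selection has seen the test labels
    accs = []
    for f in range(folds):
        test = idx[f::folds]
        train = np.setdiff1d(idx, test)
        if select_inside_folds:
            feats = top_features(X[train], y[train], k_feat)  # honest: training data only
        pred = centroid_predict(X[train][:, feats], y[train], X[test][:, feats])
        accs.append((pred == y[test]).mean())
    return float(np.mean(accs))

leaky = cv_accuracy(select_inside_folds=False)
honest = cv_accuracy(select_inside_folds=True)
print(f"feature selection outside CV: {leaky:.2f}  inside CV: {honest:.2f}")
```

With selection done outside the folds, cross-validated "accuracy" on noise comes out far above chance; doing the selection inside each training fold brings it back near 50%. That gap is the published-but-irreproducible result in miniature.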
A lot of what is happening is generational: the older generation of biological researchers were never trained in, or equipped with, anything like the analytical tools required to cope with the large numerical datasets that labs started generating in the '90s thanks to new technologies in the wake of the Human Genome Project.
I have no problem with researchers claiming that their result only happens in a very narrow circumstance ... so long as the resulting drug is only FDA-approved for use in that specific circumstance ;-)
This is mostly because there are few to no real repercussions when most of your funding comes from the government. Oh, you might end up with a lengthy investigation or such, but the school won't lose its funding overall -- just the funding for one avenue of research, if that. More likely they just get their wrists slapped and no one ever finds out.
http://www.reuters.com/article/2012/03/28/us-science-cancer-...
"A former researcher at Amgen Inc has found that many basic studies on cancer -- a high proportion of them from university labs -- are unreliable, with grim consequences for producing new medicines in the future.
During a decade as head of global cancer research at Amgen, C. Glenn Begley identified 53 "landmark" publications -- papers in top journals, from reputable labs -- for his team to reproduce. Begley sought to double-check the findings before trying to build on them for drug development. [...] Result: 47 of the 53 could not be replicated. "