
Well, it depends on the process. Normally when this sort of thing happens, those trying to build upon another's research will contact the lab that did it and try to figure out what's wrong.

My favorite professor, Jerry Lettvin, discovered a marvelous thing about frogs' eyes: beyond the basic edge-detector sort of things we've got, they have a specific "bug detector". The bug was simulated in the lab with a bowl over the frog's head and something black moved into the frog's field of view via a magnet on the other side, with probe(s) in the optic nerve, of course.

Another lab had difficulty reproducing this (it was seriously novel, and in theory easy and cheap to reproduce), so Jerry practiced the procedure for a while with side-cutters (https://en.wikipedia.org/wiki/Diagonal_pliers), then showed up at that lab in his usual style, not clean-shaven, in a simple and slightly dirty blue work shirt and black pants, and got the experiment to work with the lab's own equipment (e.g. probe(s) and oscilloscope) ... and the side-cutters he brought ^_^.

(The frogs were, BTW, reportedly not terribly hurt by this, and after healing up exhibited normal frog behavior. Then again, if I were in that lab, I'd be harvesting them for their tasty legs, as I did with all the bullfrogs I shot with my BB gun growing up, in addition to dissecting them :-).)



This is fairly common--if work is genuinely original there is likely a good deal of technique involved. There is a lot of laboratory science that is more like craft, and it's always been this way. Reproducibility of novel results should be expected to be poor, and getting to the point of reproduction will often be difficult.

So "failure to reproduce on the first few attempts" does not mean "bad science".

In genomics, however, failure to reproduce was the norm for many years. There were a few spectacularly good early results that held up, but the ubiquitous use of cross-validation (which, unless done with insane care, is simply invalid) and frequently botched significance analysis meant that a lot of published results were the numerical equivalents of early Royal Society papers on deformed cows and the like: meticulous descriptions of anomalous one-offs.
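To make the cross-validation point concrete: the classic way it goes wrong with high-dimensional genomics data is selecting the "significant" genes on the full dataset and only then cross-validating the classifier, so the held-out folds have already leaked into the feature selection. A minimal sketch of the effect (the data, feature counts, and scikit-learn model choices are my own illustrative assumptions, not anything from the comment above):

    # Purely random data: 50 "samples", 5000 "genes", labels with no real signal.
    import numpy as np
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import StratifiedKFold, cross_val_score
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(0)
    X = rng.standard_normal((50, 5000))
    y = rng.integers(0, 2, size=50)
    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

    # WRONG: pick the 20 "best" genes using all samples, then cross-validate.
    # The selection step has already seen the test folds, so accuracy is inflated.
    X_leaky = SelectKBest(f_classif, k=20).fit_transform(X, y)
    leaky = cross_val_score(LogisticRegression(max_iter=1000), X_leaky, y, cv=cv)

    # RIGHT: keep selection inside the pipeline so it is refit on each training fold.
    pipe = make_pipeline(SelectKBest(f_classif, k=20), LogisticRegression(max_iter=1000))
    honest = cross_val_score(pipe, X, y, cv=cv)

    print("leaky CV accuracy: ", leaky.mean())   # typically far above 0.5
    print("honest CV accuracy:", honest.mean())  # hovers around chance

With purely random labels, only the pipeline version stays near chance; the leaky version looks like a publishable result.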

A lot of what is happening is generational: the older generation of biological researchers were never trained in, or equipped with, anything like the analytical tools required to cope with the large numerical datasets that labs started generating in the '90s thanks to new technologies in the wake of the Human Genome Project.


I have no problem with researchers claiming that their result only happens in a very narrow circumstance ... so long as the resulting drug is only FDA-approved for use in that specific circumstance ;-)



