
Evidence-Based Lies

A fundamental tenet of science is that findings must be reproduced. One experiment does not establish new truths. The results have to be replicated by others using the methods described by the original investigators. Replication is key to ensuring that conclusions aren’t spurious. Nevertheless, science is currently plagued by hordes of irreproducible study results.

“More than 70% of researchers have tried and failed to reproduce another scientist’s experiments, and more than half have failed to reproduce their own experiments.”

Misidentification of cells is certainly a major contributor to the replication crisis in basic biological science. However, statistics and publication bias combine to form another formidable pseudo-scientific edifice, one that churns out irreproducible results across scientific disciplines and misleads the public.

The infamous P-value lies at the heart of the matter. Simply put, the P-value estimates how likely it is that a result at least as striking as the one observed would turn up by chance alone. The cutoff widely accepted across scientific disciplines is 5%. In other words, as long as the statistics say there is a 5% or smaller chance that a given result arose by chance alone, the result is considered “significant.” That might sound good at first glance, but when examined a little more closely, in conjunction with the concept of publication bias, the limitations rapidly mount.
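
To make the mechanics concrete, here is a minimal sketch (my own illustration, with made-up numbers, not something from the original post) of how that cutoff is applied in practice, using an ordinary two-sample t-test in Python:

```python
# Minimal sketch of the 0.05 convention in practice (hypothetical data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
control = rng.normal(loc=10.0, scale=2.0, size=30)  # hypothetical control measurements
treated = rng.normal(loc=11.0, scale=2.0, size=30)  # hypothetical treated measurements

t_stat, p_value = stats.ttest_ind(treated, control)
print(f"P-value: {p_value:.3f}")
print("significant" if p_value <= 0.05 else "not significant")
```

Nothing in that final comparison comes from the data itself; the 0.05 line is pure convention, which is exactly the point of the next paragraph.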

The 5%, or .05, cutoff itself is utterly arbitrary. A man named Ronald Fisher made it up back in the 1920s. It’s based on a rough approximation of how much of a normal (Gaussian) distribution falls within two standard deviations of the mean: about 95%. (I’m not going to get into the problems with the normal distribution in this post, but I will recommend that anyone interested in this concept read Nassim Nicholas Taleb’s book The Black Swan.)
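
That rough approximation is easy to check numerically. The snippet below (again, my own sketch) confirms that roughly 95% of a normal distribution sits within two standard deviations of the mean, and that the exact two-sided 5% cutoff falls at about 1.96 standard deviations:

```python
from scipy.stats import norm

# Fraction of a standard normal distribution within 2 standard deviations of the mean.
within_2sd = norm.cdf(2) - norm.cdf(-2)
print(f"within 2 SD of the mean: {within_2sd:.4f}")       # about 0.9545

# The exact threshold that leaves a total of 5% in the two tails.
print(f"cutoff for 5% two-sided: {norm.ppf(0.975):.3f}")  # about 1.960
```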

A P-value cutoff of .05 means that, even when nothing real is going on, roughly one result in 20 will cross the threshold by chance. But how many millions of results are obtained from scientific experiments each year around the world? An incalculable number. It’s virtually guaranteed that thousands of results due to chance alone emerge from the realm of theory and intrude on what we presume to call reality each year. And those are the results that get published.
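
A quick simulation (a sketch of my own, not from the article) shows how reliably this happens: run thousands of experiments in which there is genuinely nothing to find, and roughly one in 20 still clears the 0.05 bar.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_experiments = 10_000
false_positives = 0

for _ in range(n_experiments):
    # Both groups come from the same distribution, so any "effect" is pure chance.
    a = rng.normal(size=30)
    b = rng.normal(size=30)
    _, p = stats.ttest_ind(a, b)
    if p <= 0.05:
        false_positives += 1

print(f"'Significant' results with no real effect: {false_positives / n_experiments:.1%}")
# Prints roughly 5%.
```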

Scientists working in academia must, as the saying goes, publish or perish. And the journals in which those anxious scientists try to publish their results need to make money, which requires capturing readers’ attention. Results that are not “statistically significant” are boring. No reader wants to pay for a journal full of articles that say “we did this study using really careful methods, and nothing happened; it didn’t work. End of story.” If science were fully transparent and the results of all experiments were published, however, this is exactly what the vast majority of papers would say.

The failure of negative study results to ever see the light of day creates staggering waste. It’s likely that many basic experiments have been repeated over and over again, with uninteresting results, and never published. Then another research group comes along and does the experiment again (because they didn’t know about the previous null results) and, by chance alone, finds a positive result. Of course, that result is interesting and gets published. This basic cycle is why John Ioannidis’s now-famous 2005 paper was titled “Why Most Published Research Findings Are False.”
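
The arithmetic behind that cycle is unforgiving. If several groups independently run the same null experiment at the 0.05 level, the chance that at least one of them lands a publishable “positive” grows quickly. A tiny illustration (mine, not from Ioannidis’s paper):

```python
# Probability that at least one of k independent null experiments crosses p <= 0.05.
for k in (1, 5, 10, 20, 50):
    print(f"{k:>2} groups -> chance of at least one false positive: {1 - 0.95 ** k:.0%}")
```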

Even for clinical trials, which represent huge investments of time and resources, only about half of studies are published. This is perhaps not surprising; after all, no pharmaceutical company wants to publish the results of a study it funded showing that its drug doesn’t work. In fact, honestly reporting negative results about one of their own products puts pharmaceutical companies in an ethical conundrum once fiduciary duty to shareholders is taken into consideration. In many cases, pharmaceutical companies essentially write “study” reports to say whatever they want them to say, then pay academics to put their names on them. The Institute of Medicine recently summarized this issue thus:

“… recent news reports, legal settlements, research studies, and institutional announcements have documented a variety of disturbing situations that could undermine public confidence in medicine [such as]… academic researchers putting their names on manuscripts, even though they first became involved after the data were collected and analyzed and after the first drafts were written by individuals paid by industry.”

True breakthroughs in our understanding of reality are sure to be rare when so much of the “science” that gets disseminated is either a statistical fluke or flat-out made up.
