If you are a scientist and you haven’t paid a visit to Retraction Watch, what have you been doing with your life?! It is a blog that tracks and catalogs scientific retractions across dozens of fields and journals in the Retraction Watch Database, published not so long ago together with an accompanying analysis in Science.
If you are not a scientist, then welcome to what could be described as academia’s walk of shame. A retraction is the withdrawal of a published paper, initiated by either the journal or the authors for various reasons, and occasionally accompanied by other repercussions. Normally, a journal will print a retraction notice in the issue following the decision, and the paper itself will sometimes become inaccessible on the journal’s website.
You might wonder how journals decide whether a paper should be retracted. Well, most journals subscribe to the guidelines of COPE (the Committee on Publication Ethics), which was conceived in 1997 when a group of concerned editors gathered to discuss scientific misconduct. Some two decades later, the organization counts over 12,000 participating member journals. The decision to retract a paper rests with journal editors and authors, and is made in accordance with the following guidelines:
- Unreliable findings due to scientific misconduct (data fabrication, image manipulation)
- Unreliable findings due to honest error (miscalculation, improper variable assignment, experimental errors)
- Findings have been previously reported (redundancy)
- Ethical misconduct
Retractions are really not that common; however, a few notorious scientists have had a rather sizable portion of their work retracted over the years. Retraction Watch offers a leaderboard of the people with the most retractions. The most notorious is Dr. Yoshitaka Fujii, a Japanese anesthesiologist who had 183 papers pulled due to scientific misconduct, mostly data fabrication. In fact, 126 of them were found to be totally fabricated.
Just as in any other year, 2018 has seen its fair share of retractions, and Retraction Watch has already written about their Top 10 Retractions in The Scientist. Here, I offer my top 12 retractions in the biological sciences for 2018.
January — Paris polyphylla vs. Second Degree Burns
Paris polyphylla is a plant commonly used in Nepalese traditional medicine as a cure-all. A paper in the Tropical Journal of Pharmaceutical Research set out to investigate the effects of P. polyphylla on second-degree burns in rats. Obviously, as it goes with any type of alternative medicine research, the findings concluded that this plant extract is an effective medicinal herb that holds promise for the treatment of second-degree burns. However, soon after its publication, the editors retracted the paper over confirmed suspicions of data falsification.
This is really a quintessential retraction due to scientific misconduct with a simple lesson — don’t falsify your data, people will notice.
February — In-vitro Model of Blood-Brain Barrier
Every now and then, there is a really cool paper that introduces a technique that could be extremely useful. This was one such paper, published in Stem Cell Reports. Essentially, the authors claimed to have successfully derived an in-vitro model of the blood-brain barrier using induced pluripotent stem cells. Well, unfortunately, it was all a lie: the first author of the paper, Kohei Yamamizu, manipulated or falsified all the main figures and all but one of the supplementary figures. This prompted the journal to retract the paper.
Yet another example of malicious scientific behavior that hurts not only the field but also the other scientists who have relied on that work.
March — CRISPR Off-Target
This paper was published in May 2017 in Nature Methods, and that same day CRISPR companies’ stock values took a significant hit. It was one of those papers that spread like wildfire over email. I rarely read a paper from my inbox right away (I have a super long reading list), but this one claimed such an insane number of off-target effects that I had to.
However, mid-way through the paper — something started to stink. Their findings were shocking, but their methodology seemed to be a bit lacking. I remember being disappointed at first that something like this made it out to the press, and subsequently shocked at how much immediate impact bad science can have.
The key issue with this paper was the lack of experimental rigor needed to prove that the observed genomic variations were due to CRISPR and not just naturally occurring. The authors had not accounted for all the possible sources of the observed variation, which ultimately undermined their conclusions.
The lesson we should all learn from this is to always be aware of the extraneous sources of variation our data might be subjected to. In fact, in our first-year statistics class, we had an exercise where we had to list all the sources of variation we could think of that might affect our study — from cage placements to the number of mice per cage, or food batch effects.
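To make that classroom exercise concrete, here is a minimal sketch (with entirely made-up numbers, using only the Python standard library) of how an unmodeled source of variation, such as a cage effect, can masquerade as a treatment effect when treatment happens to be assigned per cage:

```python
import random

random.seed(1)

# Hypothetical setup: two cages, each with its own baseline shift in the
# measured outcome. The "treatment" itself does nothing in this simulation.
cage_effect = {"cage_A": 0.0, "cage_B": 1.5}

def measure(cage):
    # One mouse's measurement: cage baseline plus individual noise
    return random.gauss(cage_effect[cage], 1.0)

# Confounded design: every treated mouse lives in cage A,
# every control mouse in cage B
treated = [measure("cage_A") for _ in range(30)]
control = [measure("cage_B") for _ in range(30)]

diff = sum(treated) / 30 - sum(control) / 30
print(f"apparent 'treatment effect': {diff:.2f}")  # entirely a cage effect
```

Randomizing treatment within cages, or modeling cage as a blocking factor, is what lets an analysis separate the treatment signal from this kind of housing effect.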
April — Nuclear Stability in Metastasis
This paper, published in the high-profile journal Cell, found that the protein FMN2 (Formin 2) plays an important role in forming actin structures around the nucleus of a cell to protect it during the cell migration associated with metastasis. It is no surprise that a study this elegant, with such interesting (and impactful) findings, made it into Cell. However, when the lab shared the reagents, the expression profile of the cells did not match the one reported in the paper. Upon further investigation, the authors found that the first author had fabricated and falsified data by “reporting data that did not originate from experimental observations, selectively including and omitting data points, selectively omitting images and conditions from analyses, falsifying the quantitation of data including statistical analyses, and falsifying a western blot.” The authors thereafter requested retraction of the paper.
The lesson here is simple — science builds on itself. We as scientists use other scientists’ reagents and methodology to answer a completely different set of questions. This, in a lot of cases, results in attempts to replicate the original findings of the published paper. Failing to do so might cast a shadow of doubt on the research, possibly resulting in retraction.
May — Diagnosis of Early-Onset Neonatal Sepsis
This paper, published in the journal Infection and Drug Resistance, discussed biomarkers useful for detecting early-onset neonatal sepsis. Their analysis found that procalcitonin, interleukin-6, and interleukin-8 may have superior diagnostic power over routine blood analyses such as CRP, WBC, or platelet counts. Unfortunately, upon further examination, the authors realized that a miscalculation had taken place and the initial correlations were false. Additionally, some of the patients had been diagnosed outside of the permitted age range. This prompted an author-requested retraction. The full text of the paper is still available online, as is the retraction notice.
This is my top retraction of May because it teaches us the potential dangers of haphazard data handling. This mistake, albeit likely unintentional, may have led to changes in the diagnostic practices for early-onset neonatal sepsis. This, in turn, could have led to improper or untimely diagnoses, which would be detrimental to the patients. It teaches us the importance of double-checking our analyses, starting with the raw data processing.
June — Decolorizing Bacteria
There is nothing wrong with the experimental design or data analysis in this article in F1000Research, except that the data was taken from a paper previously published in Annual Research & Review in Biology. The paper identifies the bacterial strains and optimum conditions at which they can decolorize azo dyes. Obviously, the study was retracted for plagiarism.
Why did I pick this retraction as my top for June? Blatant plagiarism. It’s a simple lesson, really — don’t steal other people’s work.
July — Acupuncture vs. Chronic Constipation
This paper looks at the pseudoscientific side of biomedicine, specifically acupuncture. Ironically, this meta-analysis of 40 studies, published in PLoS One, concluded that acupuncture is superior to conventional drug treatment in both remedial effect and adverse effects. In other words, the conclusion is that acupuncture is “more effective than drugs in improving chronic constipation and has the least side effects.” This is great news, except that this is a post on retractions. PLoS One (without the support of the authors) decided to retract the paper because the overall conclusions of the study were not reliable, owing to the poor study inclusion criteria, the problematic representation of primary data in the meta-analysis, and the reporting quality of the included studies.
The lesson here is simple: do not conduct statistical analysis with wishful thinking. Bending data to fit your hypothesis is not only wrong but also painfully obvious. Ultimately, stop trying to make acupuncture or any other form of alternative and quack medicine happen. It is not superior in effect; it is often dangerous and almost always misleading. Putting a needle into your shoulder won’t help your constipation.
August — ILK in Cancer Metastasis
This paper, published in PLoS One, found that ILK (integrin-linked kinase) has a positive effect on growth and metastatic potential in the course of tumor progression. Subsequently, the Ohio State University Office of Research Compliance found that some Western blot figures in the paper were falsified and that some data were intentionally not reported. This resulted in the paper’s retraction.
A lesson to learn from this retraction is that most universities have local offices that oversee research practices. They are also an important resource to which any concerns regarding research conducted at that university may be reported.
September — Brian Wansink
The month of September doesn’t belong to a retracted paper, but rather to a researcher who had 6 articles retracted from JAMA journals under allegations of p-hacking and poor statistics. This is in addition to 7 more studies retracted and 15 corrected. Brian Wansink is probably one of the most famous food-behavior researchers out there, so this retraction constituted a scandal that grabbed the attention of a rather wide audience.
The lesson to be learned here is that even the most prominent researchers with the most promising research may engage in improper methodology or other types of scientific misconduct.
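As a rough illustration of why p-hacking is so insidious (this is a toy simulation with invented parameters, not a reconstruction of any actual analysis), consider measuring many outcomes on pure noise and reporting only the most extreme one:

```python
import random

random.seed(42)

def noise_study(n=20):
    """Two groups drawn from the SAME distribution: any 'effect' is noise."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    return abs(sum(a) / n - sum(b) / n)

# An absolute mean difference beyond ~0.62 is roughly the 5% tail for
# n=20 per group: 1.96 * sqrt(1/20 + 1/20) ≈ 0.62.
threshold = 0.62
trials = 1000

# Honest: one pre-registered outcome per study.
honest = sum(noise_study() > threshold for _ in range(trials)) / trials
# Hacked: measure 20 outcomes per study, report only the largest.
hacked = sum(max(noise_study() for _ in range(20)) > threshold
             for _ in range(trials)) / trials

print(f"false positives, one outcome:    {honest:.1%}")
print(f"false positives, best of 20:     {hacked:.1%}")
```

The honest rate stays near the nominal 5%, while cherry-picking the best of 20 outcomes pushes the false-positive rate toward 1 − 0.95²⁰, roughly 64%, which is why flexible, many-outcome analyses can make noise look like findings.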
October — Vasculogenic Mimicry and Galectin-3
Vasculogenic mimicry is a cancer feature similar to angiogenesis, but different in that it happens completely de novo and does not require the presence of endothelial cells. Essentially, it is a process by which tumor cells form blood-vessel-like structures for blood supply. This study, published in the journal Oncology Letters, identified galectin-3, an important cell-cell adhesion protein, as a potentially important modulator of vasculogenic mimicry in esophageal cancer. However, the authors found an error in one of the figures, and due to unrelated circumstances a corrigendum was not a possibility, consequently resulting in retraction of the paper at the authors’ request. The paper is still available online, as is the retraction notice.
This retraction is my top for October because it teaches us the importance of handling experiments with care — especially as we transition from bench to computers. A small error in the orientation of one small part of a figure may cast doubt on the rest of the paper. Additionally, it teaches us the importance of keeping a thorough record of all the experimentally generated results — especially those that end up in a publication.
November — Yuhji Saitoh
This is another month dedicated to a person. Saitoh had 5 papers retracted in November, and he now holds 5th place on the Retraction Watch leaderboard with a whopping 48 retractions. It should not be surprising that he was commonly featured on papers with Yoshitaka Fujii, the current reigning king of retractions.
December — Tumor Suppressing Function of Micro-RNA in Glioma
The last retraction I would like to focus on is a paper in Tumor Biology claiming that miR-592 targets IGFBP2, resulting in tumor-suppressing activity. However, this finding has been questioned for a few reasons. First, the authors mistakenly classified normal tissue surrounding the glioma as tumor tissue, which led to the false observation of down-regulation of miR-592 in glioma. Second, the cells used in the functional assessments were contaminated, thus invalidating the experimental findings from those assessments. Third, it appears that miR-592 may not even target IGFBP2 to begin with.
This lesson is all about appropriate bench-side manners and care. It is important to handle all reagents, cells included, with extreme care. Contaminated reagents may severely affect the results of experiments, possibly leading us down a path built on a false initial premise.