
Mega Project Failed to Reproduce High-Impact Cancer Studies, Raises Questions

The project team reported their findings in two papers published on December 7 in eLife and concluded that around half of the experiments could not be replicated on most criteria.

The Reproducibility Project: Cancer Biology was a megaproject launched in 2012 to check the reproducibility of high-impact published research in cancer biology. Overseen by the Center for Open Science, the project set out to replicate 193 experiments from 53 high-impact papers published between 2010 and 2012, with the experiments selected on the basis of citations and readership. However, over its nearly decade-long effort, the project managed to examine only 50 experiments from 23 papers. A lack of transparency about data, reagents and protocols on the part of some of the original authors meant that fewer experiments could be attempted than planned, according to Tim Errington, the leader of the project.

The project concluded that around half of the experiments could not be replicated on most criteria. The project team reported their findings in two papers published on December 7 in eLife.

Since there is no standard way of judging whether a replication is successful, the project researchers used five criteria to evaluate their replications: among them, whether the replication found an effect similar to the original research, whether that effect was statistically significant, and whether the replication's effect size was on par with that of the originally published research. Effect sizes estimate how large the effect reported in a study is. For example, assume two studies find that a chemical can kill cancer cells, but the chemical kills a different percentage of cells in each experiment: in one experiment it kills 40% of the cancer cells, while in the other it kills 80%. The first experiment would then be considered to have half the effect size of the second. Four of the five criteria the team used focused on effect size.
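To make that comparison concrete, here is a minimal, hypothetical Python sketch of the kill-rate example above. It is purely illustrative and is not the project's actual analysis code; the function name and numbers are assumptions taken from the example in the text.

```python
# Hypothetical illustration of comparing effect sizes between two experiments.
# The numbers mirror the example in the text; this is not the project's code.

def kill_rate_effect_size(cells_killed: int, total_cells: int) -> float:
    """Effect size here is simply the fraction of cancer cells killed."""
    return cells_killed / total_cells

original = kill_rate_effect_size(80, 100)     # 0.80 -> kills 80% of cells
replication = kill_rate_effect_size(40, 100)  # 0.40 -> kills 40% of cells

# The replication shows half the effect size of the original experiment.
ratio = replication / original
print(f"Replication effect size is {ratio:.0%} of the original")  # prints 50%
```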

The researchers were able to apply the criteria to 112 experimental effects and found that only 46% of them met at least three of the five criteria, as reported in eLife. “The report tells us a lot about the culture and realities of the way cancer biology works, and it’s not a flattering picture at all,” said Jonathan Kimmelman, a bioethicist at McGill University in Montreal, who co-authored a commentary on the findings highlighting their ethical implications.

The lack of reproducibility raises several questions, especially when non-reproducible research is used to launch cancer drug clinical trials. “If it turns out that the science on which a drug is based is not reliable, it means that patients are needlessly exposed to drugs that are unsafe, and that doesn’t even have a shot at making an impact on cancer,” Kimmelman cautioned in a statement.

At the same time, we need to be cautious about overinterpreting the project’s findings and concluding that the current cancer research system is entirely broken. “We actually don’t know how well the system is working. One of the many questions left unresolved by the project is what an appropriate rate of replication is in cancer research since replicating all studies perfectly isn’t possible. That’s a moral question, a policy question. That’s not really a scientific question,” Kimmelman commented in the statement.

It is noteworthy here that 19 out of every 20 drugs entering clinical trials never receive approval from the US FDA (Food and Drug Administration).

A failure to replicate an experiment may not necessarily mean that the original research was wrong. Commenting on this aspect, Brian Nosek, executive director of the Center for Open Science, was quoted as saying, “While the original could be a false positive, it’s also possible that the replication was a false negative. Or both could be correct, with the discrepancy due to differences in the experimental conditions or design.” He added, “We tried to minimise that with high statistical power, using the original materials as much as possible, and peer review in advance.”

Over the course of the project, the researchers faced several obstacles, particularly that none of the original papers described their methods in enough detail, which made replicating the experiments a huge problem. The project researchers contacted the original authors directly and found that only a few of them were cooperative; others were unsupportive, whether in providing method details or otherwise, the project researchers said.
