Cancer scientists may rely too much on mice
Scientists may overestimate how often mouse experiments can be repeated with similar results, a new study suggests
Cancer scientists may be overly confident in their ability to repeat experiments in mice and get similar results the second time around, according to a new study that offers fresh evidence of a so-called reproducibility crisis in medical research.
For the study, researchers asked 196 scientists to predict whether six mouse experiments published in prominent medical journals could be reproduced - that is, done again with the same effect size, such as shrinking tumors by the same amount, and the same level of statistical significance, meaning the findings aren’t simply due to chance.
None of the six experiments achieved the same statistical significance or effect size when they were repeated by investigators at the Reproducibility Project: Cancer Biology, a collaboration between Science Exchange and the Center for Open Science that is independently testing the reliability of experiments published in prominent medical journals (osf.io/e81xl/).
But, on average, scientists participating in the survey predicted a 75 percent probability of reproducing the statistical significance and a 50 percent probability of getting the same effect size.
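For perspective, a back-of-the-envelope calculation that is not part of the study: if the six experiments were independent and each really had a 75 percent chance of reproducing its statistical significance, the probability that all six would fail to do so is (1 - 0.75)^6, roughly 0.02 percent, or about 1 in 4,000 - suggesting the survey predictions were badly miscalibrated rather than simply unlucky.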
“This is the first study of its type, but it warrants further investigation to understand how scientists interpret major reports,” said senior study author Jonathan Kimmelman of McGill University in Montreal.
“I think there is probably good reason to think that some of the problems we have in science are not because people are sloppy at the bench, but because there is room for improvement in the way they interpret findings,” Kimmelman said by email.
The work follows numerous reports exploring biomedicine’s reproducibility crisis. Over the last 10 to 15 years, concerns have mounted that some of the techniques and practices used in biomedical research lead to inaccurate assessments of a drug’s clinical promise, Kimmelman’s team writes in PLoS Biology.
The results do, however, raise the possibility that training might help many scientists overcome certain cognitive biases that affect their interpretation of scientific reports, the researchers propose.
The study team asked both established cancer scientists and trainees in elite education programs to assess the six mouse experiments, and they found the more experienced and more influential scientists tended to be more accurate.
“What is surprising here is that researchers are not very accurate, actually they are less accurate than chance, at predicting whether a study will replicate,” said Dr. Benjamin Neel, director of New York University’s Perlmutter Cancer Center.
Study participants reported their own level of expertise in the field. However, they were not, on average, the most influential scientists as measured by how many publications they had and how often other researchers cited their work, Neel, who wasn’t involved in the study, said by email.
“Probably the biggest reason the studies don’t hold up is because the sample size is too small. For example, if only 5 to 10 mice are used, a 50-animal study might not yield the same result,” Neel said.
“I think all pre-clinical results should be validated by an independent laboratory before they are used as the basis for clinical trials,” he added.
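Neel’s point about sample size can be illustrated with a quick simulation that is purely hypothetical and not drawn from the study: even when a treatment effect is real, experiments with only a handful of mice per arm cross the p < 0.05 threshold in only a minority of runs, so an initial positive result often fails to repeat. The group sizes and effect size below are illustrative assumptions, not figures from the study.

```python
# Hypothetical sketch: how often a real but modest treatment effect reaches
# p < 0.05 at different group sizes. Effect size and group sizes are
# illustrative assumptions, not data from the study.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)

def fraction_significant(n_per_group, true_effect=0.8, trials=10_000):
    """Fraction of simulated experiments that reach p < 0.05."""
    hits = 0
    for _ in range(trials):
        control = rng.normal(0.0, 1.0, n_per_group)          # untreated mice
        treated = rng.normal(-true_effect, 1.0, n_per_group)  # treated mice, tumors shrink more
        _, p = ttest_ind(control, treated)
        hits += p < 0.05
    return hits / trials

for n in (5, 10, 25):
    print(f"{n:2d} mice per arm -> ~{fraction_significant(n):.0%} of runs significant")
```

In this illustration, only around a fifth of the five-mouse-per-arm experiments reach significance, compared with nearly four in five of the 25-mouse-per-arm experiments. Low power cuts the other way too: when a small study does reach significance, the measured effect tends to be inflated, so a larger repeat study will often find a smaller effect or none at all.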
For patients, even the experiments that fail in mice or can’t be reproduced can help scientists determine whether human research is possible, a necessary step in discovering new treatments, said Dr. Anthony Olszanski, director of the phase 1 developmental therapeutics program at Fox Chase Cancer Center in Philadelphia.
“Today, it is very hard to predict if promising results in preclinical studies will be seen in humans,” Olszanski, who wasn’t involved in the current study, said by email. “However, the majority of effective anti-cancer agents used today became medicines based, in part, on the findings of early researchers.”