Half of cancer experiments not replicable
Eight years ago, a team of researchers launched a project to carefully repeat early but influential lab experiments in cancer research.
They recreated 50 experiments, the type of preliminary research with mice and test tubes that sets the stage for new cancer drugs. The results, reported in December: About half of the scientific claims didn’t hold up.
“The truth is we fool ourselves. Most of what we claim is novel or significant is no such thing,” said Dr. Vinay Prasad, a cancer doctor and researcher at the University of California, San Francisco, who was not involved in the project.
It’s a pillar of science that the strongest findings come from experiments that can be repeated with similar results.
In reality, there’s little incentive for researchers to share methods and data so others can verify the work, said Marcia McNutt, president of the National Academy of Sciences. Researchers lose prestige if their results don’t hold up to scrutiny, she said.
And there are built-in rewards for publishing discoveries.
But for cancer patients, it can raise false hopes to read headlines of a mouse study that seems to promise a cure “just around the corner,” Prasad said. “Progress in cancer is always slower than we hope.”
Current treatments not affected
The new study points to shortcomings early in the scientific process, not problems with established treatments. By the time cancer drugs reach the market, they’ve been tested rigorously in large numbers of people to make sure they are safe and they work.
For the project, the researchers tried to repeat experiments from cancer biology papers published from 2010 to 2012 in major journals such as Cell, Science and Nature.
Overall, 54% of the original findings failed to measure up to statistical criteria set ahead of time by the Reproducibility Project, according to the team’s study published online by eLife. (The nonprofit eLife receives funding from the Howard Hughes Medical Institute, which also supports the Associated Press Health and Science Department.)
Among the studies that did not hold up was one that found certain gut bacteria were tied to colon cancer in humans. Another was for a type of drug that shrank breast tumors in mice. A third was a mouse study of a potential prostate cancer drug.
A co-author of the prostate cancer study said the research done at Sanford Burnham Prebys research institute has held up to other scrutiny.
“There’s plenty of reproduction in the [scientific] literature of our results,” said Erkki Ruoslahti, who started a company now running human trials on the same compound for metastatic pancreatic cancer.
This is the second major analysis by the Reproducibility Project. In 2015, the group found similar problems when it tried to repeat experiments in psychology.
Study co-author Brian Nosek of the Center for Open Science said it can be wasteful to plow ahead without first doing the work to repeat findings.
“We start a clinical trial, or we spin up a startup company, or we trumpet to the world ‘We have a solution’ before we’ve done the follow-on work to verify it,” Nosek said.
Lack of cooperation a problem
The researchers tried to minimize differences in how the cancer experiments were conducted. Often, they couldn’t get help from the scientists who did the original work when they had questions about which strain of mice to use or where to find specially engineered tumor cells.
“I wasn’t surprised, but it is concerning that about a third of scientists were not helpful, and, in some cases, were beyond not helpful,” said Michael Lauer, deputy director of extramural research at the National Institutes of Health.
The NIH will try to improve data sharing among scientists by requiring it of grant-funded institutions starting in 2023, Lauer said.
“Science, when it’s done right, can yield amazing things,” Lauer said.
For now, skepticism regarding novel findings is the right approach, said Dr. Glenn Begley, a biotechnology consultant and former head of cancer research at drugmaker Amgen. A decade ago, he and other in-house scientists at Amgen reported even lower rates of confirmation when they tried to repeat published cancer experiments.
Cancer research is difficult, Begley said, and “it is very easy for researchers to be attracted to results that look exciting and provocative, results that appear to further support their favorite idea as to how cancer should work, but that are just wrong.”
—AP