Statisticians at St Jude Children’s Research Hospital in the USA have developed a new technique that allows researchers to compare new treatments to existing drugs, without the need to enrol patients into control arms.

The new statistical tool allows researchers to make a new treatment available to everyone in the study while still adhering closely to the gold standard of clinical study designs - the prospective randomised controlled trial (RCT), according to Xiaoping Xiong, the paper’s first author.

The new technique is expected to be especially useful when results of preliminary studies suggest that the treatment will be effective and when investigators do not want to deny that treatment to people who could benefit from it.

Rather than including a control group, the treatment group in the study is compared with a historical control group composed of patients who received the existing treatment in a previous study. A report on the new method appears in the August issue of Statistics in Medicine.

The St Jude report is the first to describe this novel statistical method called sequential interim analysis using a historical control group, the authors said.

In an interim analysis, researchers statistically analyse the accumulating results of a clinical trial at several points during the study to determine whether it should be stopped early, rather than waiting until the trial’s planned end.
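The paper’s actual procedure is not reproduced here, but the general idea of an interim analysis against a historical control can be sketched in a few lines. In this illustration (all numbers and the boundary value are hypothetical, not from the St Jude study), a single-arm trial’s response rate is compared at each interim look against the response rate of a historical control group, and the trial stops as soon as a conservative statistical boundary is crossed in either direction:

```python
import math

# Hypothetical values for illustration only - not from the St Jude paper.
HISTORICAL_RATE = 0.40   # response rate observed in the historical control group
Z_BOUNDARY = 2.58        # conservative critical value applied at each interim look

def interim_z(responses: int, n: int, p0: float = HISTORICAL_RATE) -> float:
    """One-sample z statistic comparing the observed response rate
    among the first n patients against the historical control rate p0."""
    p_hat = responses / n
    se = math.sqrt(p0 * (1 - p0) / n)
    return (p_hat - p0) / se

def check_stopping(looks):
    """At each interim look (responses, n), stop early if the z statistic
    crosses the boundary in either direction - i.e. the new treatment
    appears better or worse than the historical control."""
    for responses, n in looks:
        z = interim_z(responses, n)
        if abs(z) > Z_BOUNDARY:
            verdict = "better" if z > 0 else "worse"
            return f"stop at n={n}: treatment looks {verdict} than historical control"
    return "continue to planned end of trial"

# Example: interim looks after 25, 50 and 75 patients have been evaluated.
print(check_stopping([(12, 25), (30, 50), (48, 75)]))
```

Real group-sequential designs use boundaries chosen to control the overall false-positive rate across all looks; the single fixed value above is a simplification.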

In an RCT, participants are randomly assigned to either the group that receives the new treatment or the group that does not.

“It’s not always possible to do a standard RCT when there are a limited number of patients available to participate, or when patients do very poorly on the standard treatment that the new treatment is intended to replace,” Xiong said. “In such cases, the best option is to design a trial that allows all participants to get the new treatment and use previously treated patients as a historical control group.”

Until now, there were no statistically valid methods that included interim analysis in the design of clinical trials using historical controls, said the paper’s co-author, James Boyett, chair of the St Jude Department of Biostatistics. “This technique also relieves investigators of the uncertainty they would otherwise have if they stopped a clinical trial before its planned endpoint, because their interim analysis tells them either that the treatment works or that it doesn’t,” Boyett said.

Specifically, Xiong’s technique lets investigators determine the probability that their decision to stop the trial would have changed if they had let the clinical trial continue to the end.
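The article does not give the formula behind this probability, but the idea can be illustrated by simulation. The sketch below (hypothetical numbers; a generic conditional-power-style calculation, not Xiong’s actual method) estimates how often completing enrolment to the planned total would reverse the conclusion implied by the interim data, by simulating the outcomes of the remaining patients at the currently observed response rate:

```python
import math
import random

random.seed(0)

# Hypothetical design parameters for illustration only.
HISTORICAL_RATE = 0.40   # response rate in the historical control group
FINAL_Z = 1.96           # critical value for the final analysis
N_TOTAL = 100            # planned total enrolment
N_SIMS = 10_000          # number of simulated trial completions

def final_z(responses: int, n: int, p0: float = HISTORICAL_RATE) -> float:
    """z statistic for the response rate among n patients versus p0."""
    se = math.sqrt(p0 * (1 - p0) / n)
    return (responses / n - p0) / se

def prob_conclusion_flips(responses: int, n_enrolled: int) -> float:
    """Estimate the probability that letting the trial run to N_TOTAL
    patients would reverse the decision implied by the interim data,
    simulating the remaining patients at the observed response rate."""
    interim_positive = final_z(responses, n_enrolled) > FINAL_Z
    p_hat = responses / n_enrolled
    n_remaining = N_TOTAL - n_enrolled
    flips = 0
    for _ in range(N_SIMS):
        future = sum(random.random() < p_hat for _ in range(n_remaining))
        final_positive = final_z(responses + future, N_TOTAL) > FINAL_Z
        if final_positive != interim_positive:
            flips += 1
    return flips / N_SIMS

# Example: 33 responses among the first 50 patients - a strong interim
# signal, so the estimated chance of the conclusion reversing is small.
print(f"{prob_conclusion_flips(33, 50):.3f}")
```

A small flip probability gives investigators the kind of reassurance Boyett describes: stopping now is very unlikely to be contradicted by the data the completed trial would have produced.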

“This is a novel advantage of Dr. Xiong’s technique,” Boyett said. “Investigators are ethically obligated to cease recruiting additional patients to a clinical trial as soon as there is statistical evidence that the new treatment is an improvement over - or is inferior to - the historical control group. Now, if they decide to stop the trial they can be confident they are making the right decision.”