Most group-randomised trials for the prevention and control of cancer use questionable statistical methods, suggests a review of studies published between 2002 and 2006.

As a result, many of these trials have exaggerated the benefits of therapy or even indicated a positive outcome when there was none, said the authors of the survey in the Journal of the National Cancer Institute.

In group-randomised trials the intervention occurs at group level (typically, physicians or clinics) but observations are made on individuals within the groups (e.g., patients). Group randomisation is seen as particularly useful where there is a high risk of contamination bias if group members are randomised as individuals.
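The distinction can be sketched in a few lines of Python. This is a minimal illustration, not any study's actual procedure: the clinic and patient names are invented, and the point is only that assignment happens once per clinic while outcomes are observed per patient.

```python
import random

random.seed(1)

# Hypothetical clinics and their patients (all names are illustrative)
patients_by_clinic = {
    "clinic_A": ["p1", "p2", "p3"],
    "clinic_B": ["p4", "p5"],
    "clinic_C": ["p6", "p7", "p8"],
    "clinic_D": ["p9", "p10"],
}

# Randomise at the CLINIC level: half to intervention, half to control
clinics = list(patients_by_clinic)
random.shuffle(clinics)
arm = {c: ("intervention" if i < len(clinics) // 2 else "control")
       for i, c in enumerate(clinics)}

# Every patient inherits their clinic's arm, so patients in one clinic
# share that assignment and any clinic-level influences -- the source
# of the within-group correlation the analysis must account for.
patient_arm = {p: arm[c]
               for c, ps in patients_by_clinic.items() for p in ps}
print(patient_arm)
```

Because whole clinics move together, contamination between arms within a clinic cannot occur, which is the design's appeal.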

Their review of 75 group-randomised cancer trials from 41 journals found that fewer than half of these studies used the proper statistical methods to analyse the outcomes.

Nearly a third of the studies "reported statistically significant effects that, because of analysis flaws, could be misleading to scientists and policymakers", the authors noted.

"We cannot say any specific studies are wrong. We can say that the analysis used in many of the papers suggests that some of them probably were overstating the significance of their findings," commented Professor David Murray, lead author and head of epidemiology at the College of Public Health, Ohio State University in the US.

Only 34 of the papers (45%) reported the use of appropriate methods to analyse the results. In 26 articles (35%), only inappropriate methods were used in the statistical analysis. A further 8% of the articles used a combination of appropriate and inappropriate methods, while nine articles did not provide enough information even to judge whether the analytic methods were appropriate.

The core of the problem was the failure of investigators to use correct group-randomised study methods.

In essence, these are designed to take into account any similarities among group members or any common influences affecting the members of the same group. But all too often, Professor Murray observed, similarities among group members were not factored into the final statistical analysis.

This can result in a so-called "Type I error", in which the analysis finds a difference between the groups' outcomes that does not really exist.
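A small simulation makes the mechanism concrete. The sketch below (my own illustration, not the review's analysis) generates data with no true treatment effect but a modest within-group correlation (ICC of 0.1, an assumed value), then compares two analyses: an inappropriate t-test on individual patients that ignores clustering, and an appropriate analysis of group-level means. The naive test rejects the null far more often than the nominal 5%.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def simulate_trial(n_groups=10, n_per_group=50, icc=0.1):
    """Null data: no treatment effect, but members of the same group
    share a group-level random effect, inducing correlation (ICC)."""
    sigma_b = np.sqrt(icc)       # between-group standard deviation
    sigma_w = np.sqrt(1 - icc)   # within-group standard deviation
    group_effects = rng.normal(0.0, sigma_b, n_groups)
    y = (np.repeat(group_effects, n_per_group)
         + rng.normal(0.0, sigma_w, n_groups * n_per_group))
    # Alternate groups between arms (equivalent to randomisation
    # under the null, since group effects are exchangeable)
    arm = np.repeat(np.arange(n_groups) % 2, n_per_group)
    return y, arm

def naive_p(y, arm):
    # Inappropriate: t-test over individuals, ignoring clustering
    return stats.ttest_ind(y[arm == 0], y[arm == 1]).pvalue

def cluster_p(y, arm, n_per_group):
    # Appropriate (one simple option): analyse one mean per group
    means = y.reshape(-1, n_per_group).mean(axis=1)
    arms = arm.reshape(-1, n_per_group)[:, 0]
    return stats.ttest_ind(means[arms == 0], means[arms == 1]).pvalue

n_sims = 2000
naive_rej = cluster_rej = 0
for _ in range(n_sims):
    y, arm = simulate_trial()
    naive_rej += naive_p(y, arm) < 0.05
    cluster_rej += cluster_p(y, arm, 50) < 0.05

print(f"naive Type I error rate:   {naive_rej / n_sims:.2f}")
print(f"cluster Type I error rate: {cluster_rej / n_sims:.2f}")
```

With 50 patients per group and an ICC of 0.1, the variance of a group-level comparison is several times what the individual-level test assumes, so the naive analysis declares "significant" differences in a large fraction of null trials while the group-means analysis stays near 5%.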

Professor Murray said similar flaws were probably prevalent in other fields of clinical investigation and he called on researchers to seek assistance from statisticians familiar with group-randomised study methods.

He added that funding agencies and journal editors also had a responsibility to ensure that studies of this kind were correctly designed.

"Failure to do so can lead to errors that mislead investigators and policy-makers, and slow progress toward control and prevention of cancer," he commented.

However, there was no evidence to suggest that the design flaws were introduced deliberately to influence the outcome of trials, Professor Murray said.