Most reports of randomised clinical trials in leading medical journals still fail to use systematic reviews to place their findings in the context of the existing evidence base, and there has been little discernible improvement in the use of systematic reviews over the past 12 years, an analysis by UK researchers has found.

Mike Clarke and Sally Hopewell of the Cochrane Centre in Oxford, together with Ian Chalmers of the James Lind Library, have been looking at the issue since 1997 – a year after the CONSORT statement, published in the Journal of the American Medical Association (JAMA), specified that data from a new trial should be interpreted “in the light of the totality of the available evidence”.

If a clinical trial is to be justifiable both scientifically and ethically, “it should be designed in the light of an assessment of relevant previous research, ideally a systematic review”, Clarke et al comment in The Lancet. “When its findings are reported, these should be set in the context of updated reviews of other, similar research.”

In 1997, 2001 and 2005, the researchers assessed reports of randomised trials published during the month of May in the Annals of Internal Medicine, the BMJ, JAMA, The Lancet and the New England Journal of Medicine. They found that only a small proportion of these reports included sufficient information to assess the contribution of the new findings to the totality of the available evidence.

Clarke et al repeated the exercise in May 2009, once again examining the discussion sections of the trial reports and, as in 2005, investigating the extent to which the introductory sections of those reports referred to systematic reviews used in the design of the new research.

Of the 28 trial reports identified between 1997 and 2009, only 11 referred to systematic reviews in their introductory sections. In five of the 28 reports, the researchers noted, the authors claimed theirs was the first study to have addressed the question concerned. One of the reports that did not make this claim placed the results of the new study in the context of an updated systematic review of other research in the discussion section.

Reference was made to relevant systematic reviews in 10 other trial reports, but “without any integration of the results of the new trials into an update of these reviews”, Clarke et al found. In the remaining 13 reports, there was no indication that any systematic attempt had been made in the discussion section to set the new results in the context of previous trials.

The researchers found “no evidence of progress” between 1997 and 2009 in the use of updated systematic reviews to discuss the findings of trials published in the five medical journals included in their analysis. Although the proportion of trials referring to systematic reviews has increased, “most reports still fail to do this”, they commented. “Similarly, most researchers do not seem to have considered systematic reviews when designing their trial.”

The expectation, as in the CONSORT statement, that reports of new trials should be set in the context of an up-to-date systematic review “does not imply that their Discussion section should contain a full account of the materials, methods and findings of such a review”, Clarke et al add.

“The technology has existed for some time to enable a brief review of the evidence to be included in the Discussion section, and for links to relevant, up-to-date systematic reviews published elsewhere. With several thousand systematic reviews published each year and 4,000 full Cochrane reviews now published, the availability and accessibility of systematic reviews has never been greater.”

Instilling confidence

People who make decisions about healthcare “should be able to be confident in the use of randomised trials to inform their decision”, the researchers say.

“Such confidence requires that these trials be designed and reported in the light of other similar research … In the absence of other evidence, therefore, our findings have shown that editors and authors – in these five high-impact journals at least – continue to fail to serve the needs of those who wish to use the results of randomised trials to make decisions about healthcare.”

Professor Clarke has welcomed an announcement in the same issue of The Lancet that the journal will now ask authors of all research reports submitted after 1 August 2010 to put their work into the context of the existing evidence base, either by reporting their own up-to-date systematic review or by citing a recent systematic review conducted by others.