Published studies reporting the outcomes and adverse effects of breast-cancer treatments in late-phase clinical trials are often susceptible to “spin and bias”, a Canadian analysis suggests.
Of the 164 trials of breast-cancer therapy reviewed by researchers from the Princess Margaret Cancer Centre and the University of Toronto in Canada, 33% were found to show bias in the reporting of the primary endpoint and 67% in the reporting of toxicity.
No association was seen between these perceived distortions and the source of funding for the trials (i.e., industry or academic).
The findings were reported in the latest issue of the monthly journal Annals of Oncology.
Professor Ian Tannock, senior scientist in the Division of Medical Oncology and Hematology at the Princess Margaret centre, and colleagues searched PubMed for randomised, controlled Phase III trials of breast-cancer therapies published between January 1995 and August 2011.
Out of a total of 568 articles, 164 were considered eligible for inclusion in the analysis. Exclusion criteria included clinical trials with fewer than 200 participants, review articles, observational studies, meta-analyses, ongoing studies and articles for which only the abstract was available.
The researchers defined ‘bias’ in this context as “inappropriate reporting of the primary endpoint and toxicity, with emphasis on reporting of these outcomes in the abstract”.
‘Spin’ was characterised as “the use of words in the concluding statement of the abstract to suggest that a trial with a negative primary endpoint was positive based on some apparent benefit shown in one or more secondary endpoints”.
Tannock et al. paid particular attention to outcomes reported in the study abstract because “busy clinicians often read only the abstracts of publications”, they noted.
Of all the trials, 72 (43.9%) were found to have a positive outcome, with a significant P-value for the difference in primary endpoint favouring the study’s experimental arm.
However, 54 (33%) of the trials analysed were reported as positive based on secondary endpoints, despite not showing a statistically significant benefit in the primary endpoint.
“These reports were biased and used spin in attempts to conceal that bias,” the researchers said.
They found that 59% of 92 trials showing no benefit from the experimental therapy (i.e., negative primary endpoint) used secondary endpoints to suggest benefit from the treatment.
Studies with a non-significant difference in the primary endpoint between the two arms were significantly more likely than those with a significant difference to omit the primary endpoint from the concluding statement of the abstract (27% versus 7%).
A total of 110 (67%) papers met the researchers’ definition of biased toxicity reporting.
There was a statistically significant association between biased reporting of toxicity and a statistically significant difference between the study arms in the primary endpoint – in other words, if a trial showed a positive primary endpoint, its toxicities were more likely to be under-reported.
One limitation of the analysis was that only 18% of the trials reviewed were registered on the ClinicalTrials.gov database. Moreover, in some of these registered studies the primary endpoint was changed between the time of registration and the reporting of results.
“Among these trials, there was a trend towards change of the PE being associated with positive results, suggesting that it may be a strategy to make a negative trial appear positive,” the authors commented.
“Trial registration does not necessarily remove bias in reporting outcome, although it does make it easier to detect,” they added.
Better and more accurate reporting of clinical trial outcomes is “urgently needed”, Professor Tannock concluded.
“Journal editors and reviewers, who give their expertise on the topic, are very important in ensuring this happens,” he pointed out.
“However, readers also need to critically appraise reports in order to detect potential bias. We believe guidelines are necessary to improve the reporting of both efficacy and toxicity.”