Both investigators and medical journals should have full access to data from industry-sponsored clinical trials that are published in those journals, a recent analysis in the BMJ contends.
The analysis by Robert Steinbrook, adjunct associate professor at Dartmouth Medical School and Yale School of Medicine, and Jerome Kassirer, professor at Tufts University School of Medicine in the US, appeared in the same issue of the BMJ as a German meta-analysis of published and unpublished clinical trials of Pfizer’s reboxetine (Edronax).
The German review found not only that the product was “an ineffective and potentially harmful antidepressant” but that the available evidence on reboxetine had been “substantially affected by publication bias”, prompting further calls for mandatory disclosure of all clinical trial data.
Steinbrook and Kassirer refer to the controversy over GlaxoSmithKline’s RECORD (Rosiglitazone Evaluated for Cardiovascular Outcomes in Oral Agent Combination Therapy for Type 2 Diabetes) post-marketing study, which compared the safety of Avandia to that of standard diabetes therapies.
Concerns about the reliability of the RECORD data have, they say, “once again raised an uncomfortable question – what criteria should medical journals use when they consider reports of industry-sponsored clinical trials for publication?”
As Steinbrook and Kassirer point out, pharmaceutical companies have a financial interest in the outcome of the studies they sponsor. These companies also own the data and set the rules for access to those data.
“Unfortunately, they cannot be relied on to consistently provide dispassionate evaluations of their own drugs and medical devices,” the authors comment. “Moreover, many investigators have notable financial interests with the same sponsors.”
Some regulatory agencies, such as the US Food and Drug Administration, have the legal authority to scrutinise companies’ clinical trial data independently, Steinbrook and Kassirer note. “For example, when the FDA restricted access to rosiglitazone, it acknowledged that the RECORD data were not reliable and required that the sponsor convene an independent group of scientists to re-adjudicate the endpoints at the patient level.”
Journal editors, however, have no such authority, the authors add. And clinical investigators “are caught in an awkward catch-22”.
The uniform requirements for manuscripts submitted to biomedical journals – as set out by the International Committee of Medical Journal Editors – specify that when a study is “funded by an agency with a proprietary or financial interest in the outcome”, the authors should attest that “I had full access to all of the data in this study and I take complete responsibility for the integrity of the data and the accuracy of the data analysis”, Steinbrook and Kassirer observe.
Yet the principles of the Pharmaceutical Research and Manufacturers of America (PhRMA) “include vague statements such as ‘we seek to provide investigators with meaningful access to clinical data from the studies in which they participate’ and ‘investigators will be given access to any tables, figures, and reports they need from the study that are related to the hypothesis being tested or explored or which are needed in order to understand the results of the study’”.
The PhRMA principles on the conduct of clinical trials and the communication of clinical trial results do not include provisions for full and unrestricted access to the trial database, as determined by the researchers and not the company, the authors note.
“Thus investigators may be unable to examine the data independently, confirm findings, and conduct their own analyses. Without such unfettered access, investigators cannot guarantee that they have met journals’ standards for the conduct and reporting of research.”
A desirable situation, Steinbrook and Kassirer say, would be “for considerably more clinical trials to be sponsored, funded and conducted by organisations that are independent of industry and for considerably fewer investigators to have financial associations with industry other than research support and bona fide consulting related to research”.
They acknowledge, though, that in reality companies will continue to sponsor trials and journals will continue to publish them.
All the same, the authors comment, it is “time for journals to tighten their standards further”. They suggest three possible approaches here:
• Journals should explicitly define “full access to all of the data” – for example, as “unrestricted access to the trial database, as determined by the researchers, the ability to examine the primary data independent of the sponsor, including the conduct or confirmation of statistical and other analyses, and control over the decision to publish”.
• An author who is independent of a sponsor with a proprietary or financial interest in the trial outcome – i.e., one with no recent, current, or pending financial association with the sponsor, other than research support administered by the investigator’s institution or employer – should serve as the principal investigator and take responsibility for the integrity of the study data and the accuracy of the data analysis.
• The responsible author should be “prepared and able” to provide the data to the journal, if requested, before acceptance and for a specified period of time after publication – the authors suggest five years.
These standards should apply to all clinical trials, and journals should decline studies that do not meet them, Steinbrook and Kassirer insist. If concerns about data integrity arise after publication, editors should “promptly pursue appropriate actions, such as an independent review of the data, corrections, retractions, and expressions of concern”.
It is likely that editors would ask to see primary data from clinical trials “rarely and only for well-defined reasons”, the authors suggest. However, the “mere requirement of availability of data for independent examination by journals would be an important safeguard”.
Too much responsibility?
A commentary in the same issue of the BMJ by Nick Freemantle, professor of clinical epidemiology and biostatistics at the University of Birmingham in the UK, questions whether journals should have to shoulder so much responsibility.
For one thing, Freemantle comments, large-scale randomised clinical trials are “complex, costly and bureaucratic”. Good chairs of steering committees “do check carefully that things are done properly, at least to the extent that they are able, but such scrutiny is fundamentally, in the context of drug development and safety, the responsibility of the regulators”.
Moreover, Freemantle argues, securing access to original trial data would be “very troublesome” for a journal editor. Getting to grips with a trial dataset is a time-consuming exercise and “requires special skills that editors generally do not have. Furthermore, the locked trial dataset would not answer questions about the manner in which those data were derived; such questions require source verification and substantial extra work and resources”.
What is needed, Freemantle believes, is “for the regulators to do their jobs properly, and ensure that resulting publications of clinical trials are reliable and the data pass scrutiny”.
The role of a medical journal, he adds, is obviously to publish scientific work, but also to “engage in scrutiny and debate on the validity and interpretation of that work”.
In that light, rather than “simply bashing the companies (which may be deserved, but does not progress us very far)”, Freemantle suggests, “surely journals should encourage and communicate debate that will help prescribers decide what to do in such circumstances?”