CTTI analysis finds too many clinical trials too small to be useful

3rd May 2012 | News

Small, single-centre trials dominate the clinical studies logged on US-based registry ClinicalTrials.gov between 2007 and 2010, a new analysis has found.

There were also significant disparities in the methodological approaches used for the registered trials, including the use of randomisation, blinding and data monitoring committees (DMCs).

“Our analysis raises questions about the best methods for generating evidence, as well as the capacity of the clinical trials enterprise to supply sufficient amounts of high-quality evidence needed to ensure confidence in guideline recommendations,” concluded researchers from the Clinical Trials Transformation Initiative (CTTI) in the 2 May issue of JAMA.

The CTTI is a public-private partnership between the US Food and Drug Administration and Duke University Medical Center in Durham, North Carolina.

The research team led by Dr Robert Califf of the Duke Translational Medicine Institute looked at the fundamental characteristics of interventional clinical trials registered on the ClinicalTrials.gov database, with a focus on data components that would help to generate reliable evidence from studies.

A dataset comprising 96,346 clinical trials was downloaded and entered into a relational database for analysis. Interventional trials were identified and the focus narrowed to three clinical specialties – cardiovascular disease, mental health, and oncology – that together account for the largest number of disability-adjusted life-years lost in the US.

Small samples

Califf and colleagues found that the number of trials submitted for registration on ClinicalTrials.gov increased from 28,881 in October 2004-September 2007 to 40,970 in October 2007-September 2010.

Of these studies, 96% had an enrolment target of 1,000 or fewer participants and 62% anticipated enrolling 100 or fewer participants. The median number of participants per trial was 58 for completed trials and 70 for trials that had been registered but not yet completed.

Data on funding sources and the number of sites were available for 37,520 of the 40,970 clinical trials registered during the 2007-2010 period.

The largest proportion of these studies (17,592, 47%) was funded by neither industry nor the US National Institutes of Health (NIH), with industry funding 16,674 (44%) of the total, the NIH 3,254 (9%) and other US federal agencies 757 (2%).

The majority (66%) of the trials registered over this period were single-site, while 34% of the total were multi-site studies.

Heterogeneity of approach

“Heterogeneity in the reported methods by clinical specialty; sponsor type; and the reported use of DMCs, randomization, and blinding was evident,” the authors wrote.

“For example, reported use of DMCs was less common in industry-sponsored vs. NIH-sponsored trials, earlier-phase vs. Phase 3 trials, and mental health trials vs. those in the other two specialties. In similar comparisons, randomization and blinding were less frequently reported in earlier-phase, oncology, and device trials.”

The finding of substantial differences in the use of randomisation and blinding across specialties raises “fundamental questions about the ability to draw reliable inferences from clinical research conducted in that arena”, Califf et al suggested.

That 50% of interventional studies registered on ClinicalTrials.gov between October 2007 and September 2010 included fewer than 70 participants by design may also have important policy implications, they added.

While small trials “may be appropriate in many cases”, they are “unlikely to be informative in many other settings, such as establishing the effectiveness of treatments with modest effects and comparing effective treatments to enable better decisions in practice”, the authors commented.
