Why cancer study designs fail

by Mark Greener | 5th Feb 2007 | News

Researchers produce ever more sophisticated cancer treatments. Unfortunately, clinical trial design hasn’t kept pace, researchers at Memorial Sloan-Kettering Cancer Center warn in Clinical Cancer Research.


Only nine of the 70 reviewed Phase II studies that relied on historical data provided sufficient information to allow clinicians and researchers to judge accurately the benefits offered by a new agent. “We are facing a new and growing problem in clinical trial testing,” said the study’s lead author, Andrew Vickers, a research methodologist. “While the drugs have changed, researchers are still using the same old methods to gauge how effective they are.”

Conventionally, researchers tested whether cancer therapies, usually chemotherapy, shrank tumours in patients with advanced cancer. However, some modern targeted therapies slow tumour progression and, therefore, the studies enrol patients with less advanced cancer. Furthermore, new drugs are often added to existing combinations. In such cases, “it can be hard to answer the question of whether patients are doing better than expected,” Dr Vickers said.

Protocol variance

In Phase II studies, if the results meet or exceed the target, the treatment moves further along development. If it fails, and there is no compelling reason otherwise, development stops. Some protocols specify that a certain proportion of patients must show a complete or partial response for the treatment to be deemed effective. Other protocols base the assessment on the proportion of the original group alive at a predetermined time. Assume that 30% of patients taking a combination of two chemotherapy drugs survive a year. Any additional drug has to jump over the 30% barrier. “So we have to be pretty certain that the 30% target is correct,” he said. Unfortunately, that’s where some current research can fall short.
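To make the arithmetic behind such a go/no-go decision concrete, here is a minimal sketch of a single-arm comparison against a 30% one-year survival barrier, using an exact binomial tail probability. The trial size, number of survivors and significance threshold are all assumed for illustration; this is not the design used in any study the paper reviewed.

```python
# Illustrative only: a single-arm Phase II check against a historical
# benchmark (here, 30% one-year survival). All numbers are hypothetical.
from math import comb

def binomial_tail(k, n, p0):
    """P(X >= k) when X ~ Binomial(n, p0): the chance of seeing at least
    k survivors if the new regimen were no better than the benchmark."""
    return sum(comb(n, i) * p0**i * (1 - p0)**(n - i) for i in range(k, n + 1))

n_patients = 40          # assumed trial size
n_alive_at_1yr = 19      # assumed number alive at one year (47.5%)
historical_rate = 0.30   # the 30% "barrier" taken from historical data

p_value = binomial_tail(n_alive_at_1yr, n_patients, historical_rate)
print(f"Observed rate: {n_alive_at_1yr / n_patients:.1%}")
print(f"P(at least this many survivors | true rate = 30%): {p_value:.3f}")

# A common (assumed) rule: pursue the regimen only if this probability is
# small, e.g. below 0.05 -- and note that the whole calculation hinges on
# the 30% benchmark being correct in the first place.
if p_value < 0.05:
    print("Exceeds the historical barrier: candidate for further development.")
else:
    print("Does not clearly exceed the barrier: development would likely stop.")
```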

In some cases, the benefits should be obvious. The paper remarks that few second- and third-line cytotoxics show a survival benefit and “response rates are typically very low”. In such cases it’s “highly unlikely” that a tumour will shrink without treatment. So researchers set the barrier slightly above zero, typically 5 or 10%. However, when researchers add the novel agent to an existing standard, they rely on historical data on the standard regimen to set the barrier.

Against this background, a systematic review performed by Vickers and colleagues found that 52% of 134 eligible phase II trials published in the Journal of Clinical Oncology or Cancer in the three years to June 2005 required historical data. However, 46% of these papers did not cite the source of the historical data and just 13% “clearly gave a single historical estimate” as the rationale for setting the barrier. No study used statistical methods to account for either sampling error or possible differences in case mix between the phase II sample and the historical cohort.
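The point about sampling error can be made concrete: a historical estimate drawn from a finite cohort carries its own uncertainty, which, per the review, none of the studies accounted for. The sketch below puts a Wilson 95% confidence interval around a historical rate; the cohort size and response count are assumed purely for illustration, not taken from the paper.

```python
# Illustrative only: how much sampling error sits inside a historical
# estimate. Cohort size and survivor count are assumed, not from the paper.
from math import sqrt

def wilson_ci(successes, n, z=1.96):
    """Wilson score 95% confidence interval for a binomial proportion."""
    p_hat = successes / n
    denom = 1 + z**2 / n
    centre = (p_hat + z**2 / (2 * n)) / denom
    half_width = (z / denom) * sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2))
    return centre - half_width, centre + half_width

# Assume the 30% barrier came from a historical cohort of 60 patients,
# 18 of whom survived one year.
low, high = wilson_ci(18, 60)
print(f"Historical point estimate: {18 / 60:.0%}")
print(f"95% CI: {low:.1%} to {high:.1%}")
# The true benchmark could plausibly sit anywhere in roughly this range,
# so treating 30% as an exact target overstates how much is actually known.
```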

The researchers showed that 82% of the trials that did not cite historical data appropriately declared an agent to be active, compared with 33% of those that cited historical data correctly, a difference that reached statistical significance. Given that much of the expense of developing a drug arises late in clinical development, predicting more accurately which drugs are likely to be active could help control costs.
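To show the kind of 2x2 comparison behind that 82% versus 33% contrast, here is a sketch using Fisher’s exact test. The group sizes below are hypothetical stand-ins chosen only to match the reported percentages; they are not the paper’s actual counts, and scipy is assumed to be available.

```python
# Illustrative only: comparing the proportion of trials declaring an agent
# "active" between trials that cited historical data appropriately and
# those that did not. Counts are hypothetical, not the paper's data.
from scipy.stats import fisher_exact

# Each row: [trials declaring the agent active, trials not declaring it active]
no_proper_citation = [37, 8]    # ~82% declared active (hypothetical counts)
proper_citation = [8, 16]       # ~33% declared active (hypothetical counts)

odds_ratio, p_value = fisher_exact([no_proper_citation, proper_citation])
print(f"Odds ratio: {odds_ratio:.1f}, p-value: {p_value:.4f}")
# With a contrast this large, the association between poor use of historical
# data and "positive" Phase II results is unlikely to be chance alone.
```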

Fortunately, Dr Vickers told PharmaTimes Clinical News that some simple guidelines can help the design and reporting of Phase II trials that require historical data. Firstly, he suggests describing the historical cohort fully, including the type of study, the diagnoses (disease and stage), dates of accrual, treatment received and number of patients. Secondly, researchers should explicitly justify why the barrier is higher than, lower than or equal to the historical estimate. Thirdly, he suggests providing a single estimate, rather than a range, for the historical response or survival rate. Finally, researchers should consider adjusting Phase II results to account for differences in case mix, when possible. By Mark Greener
