
A Power Boost for COVID-19 Clinical Trials

Strategies for better trial design could streamline and accelerate testing of new treatments.

By Michael Eisenstein

More than 3,000 COVID-19 clinical trials have launched in the past year. That sounds like good news, but many of these are ill-suited for identifying interventions that save lives. Bloomberg School researchers have responded with strategies that can better distinguish clinically meaningful signals from confounding statistical noise.

Many COVID-19 trials have too small a sample size, and thus lack the “statistical power” to confidently determine whether patients are responding to treatment, says Elizabeth Ogburn, PhD, MS, an associate professor in Biostatistics. Such underpowered trials may not enroll enough participants to detect a real treatment effect; they may also yield false-positive results if a few patients’ conditions improve by chance rather than in response to treatment.
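
To see why sample size matters, consider a rough power calculation for a hypothetical two-arm trial with a binary outcome. The sketch below uses the standard normal approximation for comparing two proportions; the assumed event rates and enrollments are illustrative, not drawn from any particular COVID-19 study.

```python
# Minimal sketch: power of a two-arm trial with a binary outcome (e.g., 28-day mortality),
# using the normal approximation for a two-sided test of a difference in proportions.
# The rates and sample sizes below are illustrative assumptions, not from any real study.
from scipy.stats import norm

def power_two_proportions(p_control, p_treatment, n_per_arm, alpha=0.05):
    """Approximate power of a two-sided test comparing two proportions."""
    p_bar = (p_control + p_treatment) / 2
    se_null = (2 * p_bar * (1 - p_bar) / n_per_arm) ** 0.5           # SE under the null (pooled)
    se_alt = (p_control * (1 - p_control) / n_per_arm
              + p_treatment * (1 - p_treatment) / n_per_arm) ** 0.5  # SE under the alternative
    z_crit = norm.ppf(1 - alpha / 2)
    effect = abs(p_control - p_treatment)
    return norm.cdf((effect - z_crit * se_null) / se_alt)

# Suppose a drug cuts mortality from 20% to 15% -- a clinically meaningful effect.
for n in (100, 500, 2000):
    print(n, round(power_two_proportions(0.20, 0.15, n), 2))
# With only 100 patients per arm, the trial detects this effect roughly 15% of the time;
# it takes on the order of 2,000 patients per arm to detect it reliably.
```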

Ogburn and colleagues aim to address this problem with the COVID-19 Collaboration Platform, an open resource where researchers can share study protocols and data and find opportunities for teamwork. “We’re trying to facilitate matchmaking for clinical questions,” says Ogburn, who hopes that pooling efforts will keep resources from being squandered on redundant, underpowered studies.

So far, clinical trial program administrators have been generally supportive, but many clinical researchers are unaccustomed to sharing protocols and data, and have little incentive from journal publishers or funders to buck their siloed scientific culture. “I don’t think it’s shifting in time for COVID, but I’m cautiously optimistic that it’ll change in the aftermath,” says Ogburn.

Another Biostatistics associate professor, Michael Rosenblum, PhD, MS, is implementing a statistical strategy that streamlines trials without sacrificing quality. He’s zeroed in on baseline factors in study cohorts, such as age and preexisting medical conditions, that can confound the results of randomized controlled trials. “Randomization almost never gets perfect balance on all of these baseline variables,” says Rosenblum. For example, a chance overrepresentation of older adults, who tend to have a weaker immune response, in one arm could make it harder to determine the effectiveness of an experimental vaccine.

In a study published October 11 in Biometrics, his team ran simulated trials using medical records from hospitalized COVID-19 patients. Applying a simple statistical tool known as covariate adjustment dampened the influence of these baseline factors, allowing the team to reduce the number of participants needed to obtain statistically significant results by 4% to 18%, potentially substantial savings in cost, time, and labor.
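
The mechanics can be illustrated with a small simulation, a minimal sketch using an invented data-generating model rather than the patient records analyzed in the Biometrics paper: when a baseline variable such as age strongly predicts the outcome, a regression that adjusts for it estimates the treatment effect with a smaller standard error than the plain difference in means, so fewer participants are needed to reach the same precision.

```python
# Minimal sketch of covariate adjustment in a simulated randomized trial.
# The data-generating model (a recovery score that depends partly on age)
# is an illustrative assumption, not the design used in the actual study.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
treat = rng.integers(0, 2, n)                            # 1:1 randomization
age = rng.normal(60, 15, n)                               # prognostic baseline covariate
outcome = 2.0 * treat - 0.1 * age + rng.normal(0, 3, n)   # true treatment effect = 2.0

# Unadjusted analysis: regression on treatment only (equivalent to a difference in means)
unadj = sm.OLS(outcome, sm.add_constant(treat)).fit()

# Covariate-adjusted analysis: include age as a baseline covariate
adj = sm.OLS(outcome, sm.add_constant(np.column_stack([treat, age]))).fit()

print("unadjusted estimate %.2f (SE %.2f)" % (unadj.params[1], unadj.bse[1]))
print("adjusted   estimate %.2f (SE %.2f)" % (adj.params[1], adj.bse[1]))
# The adjusted estimate has a smaller standard error, so the same precision can be
# reached with fewer participants -- the source of the sample-size savings.
```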

Rosenblum is developing open-source software tools that will make it easier for trial planners to incorporate this underused technique into their studies. “It’s the closest thing to a free lunch that I’ve ever seen in statistics,” says Rosenblum.
