Behavior on Trial


There’s another wrinkle: many behavior change interventions are aimed at communities, not at individuals. (Individuals get circumcised; communities get the billboard.) And tracking outcomes in individuals is far more clear-cut than tracking them at the village or town level.

Historically, the government agencies or NGOs that implement interventions such as media campaigns have evaluated their own programs. Programs sometimes overlap, often inefficiently, with several interventions targeting the same people, and their effectiveness is at the mercy of social factors such as migration or civil war. But to avoid any perception of bias, evaluations should be independent.

The goal of Kerrigan’s project is to provide objective, academic evaluation. She and colleagues will study dozens of interventions already under way in various villages and districts, implemented by dozens of different partners. Randomization is out of the question: in such complex situations, it’s impossible to conduct a conventional RCT. “It’s messy,” says Kerrigan, an associate professor in Health, Behavior and Society. “But this is real-world stuff. We can’t slow down this train.”

Almost Gold?

The best she and her colleagues can hope for, says Kerrigan, is to identify solid control arms and conduct a valid observational study [see below]. But there is some help for the herculean task ahead for her and other researchers. Where randomization is not possible, recently introduced statistical tools such as propensity score matching may help close the gap between observational studies and randomized trials, partly compensating for the loss of comparability between study arms. (Propensity score matching is one of many statistical tools used to make intervention and control arms similar enough to allow fair comparisons.)
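For readers curious how matching works mechanically, here is a minimal sketch in Python using scikit-learn. The data and variable names are hypothetical, and this is not the pipeline Kerrigan’s team uses: a single confounder drives both exposure and outcome, so the naive comparison is biased, while comparing matched pairs recovers something close to the true effect.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.neighbors import NearestNeighbors

    rng = np.random.default_rng(0)
    n = 2000

    # Hypothetical data: one confounder ("risk") drives both exposure and outcome.
    risk = rng.normal(size=n)                            # baseline risk behavior
    exposed = rng.random(n) < 1 / (1 + np.exp(-risk))    # self-selected exposure
    outcome = risk - 1.0 * exposed + rng.normal(size=n)  # true effect = -1.0
    X = risk.reshape(-1, 1)

    # 1. Estimate propensity scores: P(exposed | covariates).
    ps = LogisticRegression().fit(X, exposed).predict_proba(X)[:, 1]

    # 2. Match each exposed person to the unexposed person with the nearest score.
    exp_idx, ctl_idx = np.where(exposed)[0], np.where(~exposed)[0]
    nn = NearestNeighbors(n_neighbors=1).fit(ps[ctl_idx].reshape(-1, 1))
    _, match = nn.kneighbors(ps[exp_idx].reshape(-1, 1))
    matched_ctl = ctl_idx[match.ravel()]

    # 3. Compare outcomes before and after matching.
    naive = outcome[exposed].mean() - outcome[~exposed].mean()
    matched = (outcome[exp_idx] - outcome[matched_ctl]).mean()
    print(f"naive difference:   {naive:+.2f}")    # biased toward zero
    print(f"matched difference: {matched:+.2f}")  # close to the true -1.0

Matching here is on a single covariate for clarity; in practice the propensity model would include every measured confounder, and balance between the matched arms would be checked before any outcome comparison.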

“We have statisticians bringing some observational studies very close to RCTs in terms of confidence in their findings,” says Goodman, MD, PhD. Adds Kerrigan, PhD, MPH, “We want the most rigorous design feasible.”

But Kay Dickersin, who directs the Center for Clinical Trials at the Bloomberg School, advises caution when relying on observational studies to determine the effectiveness of new interventions. “Certainly we should use whatever data we have. But we’ve seen major mistakes that make us shy about using observational data” to determine intervention effectiveness.

The hormone replacement therapy controversy is a good example: Many observational studies showed a cardiac benefit for post-menopausal women treated with estrogen plus progesterone; when the treatment was tested in the context of a large RCT, however, it showed no cardiac benefit, and perhaps even a higher risk of heart disease. “That trial was like, ‘Oops, we blew it.’ … I’m intrigued that we might be able to emulate RCT findings using observational studies and special statistics. I’d like to see studies validating the modeling alternative, comparing findings to those of RCTs,” says Dickersin, PhD, professor in Epidemiology. “As far as I know, it’s still an open question.”

Goodman notes that a recent reanalysis of the largest observational study on estrogen showed that the observational results were quite similar to the clinical trial’s.

What’s an Investigator to Do?

Albert Einstein is credited with saying, “Things should be as simple as possible, but no simpler.”

Ethicists believe that studies should be designed to accommodate ethical obligations, and investigators agree. Celentano thinks that a larger sample size would help: “With reduced risk in the control arm, the difference between the two arms is smaller than anticipated, and so you need a larger sample size to demonstrate that one arm’s intervention is more effective than the other. … But we can’t always afford a larger sample size.”
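Celentano’s point can be made concrete with the standard two-proportion sample size formula. A back-of-envelope sketch in Python follows; the incidence rates are hypothetical, not from any actual trial. As risk in the control arm falls, the absolute difference between arms shrinks, and the required enrollment grows sharply.

    from scipy.stats import norm

    def n_per_arm(p1, p2, alpha=0.05, power=0.80):
        """Approximate sample size per arm to compare two proportions."""
        z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)
        return (z_a + z_b) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2)) / (p1 - p2) ** 2

    # The intervention halves incidence in both scenarios.
    print(round(n_per_arm(0.10, 0.05)))  # control risk 10% -> ~432 per arm
    print(round(n_per_arm(0.04, 0.02)))  # control risk 4%  -> ~1138 per arm

The relative effect is identical in both scenarios; only the background risk changes, yet the second trial needs more than two and a half times as many participants per arm.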

Goodman suggests that in some cases, an RCT might be overkill; sometimes what we learn from an observational study is good enough. Dickersin acknowledges that “randomized-controlled trials are hugely expensive, and they can only address one question at a time, usually over the short term, whereas with huge data sets you can address multiple questions and rare, longer term outcomes. [But] I think we better be very careful if we rely on observational data to determine the effectiveness of an intervention.”

There’s no right or simple solution for measuring the effectiveness of behavior change interventions in the field of HIV/AIDS prevention. The virus is wily, and the epidemic entrenched. What kind of evaluation is the best evaluation? Says Goodman, “What you’ll learn is defined by the purposes at hand. … You measure the risk, the cost, the stakes, the consequences of being wrong. … A clinical trial cannot be done in all situations, regardless of the stakes.”

Observation vs. Randomization

In an observational study, investigators observe rather than influence exposure and disease among participants. Exposure–disease combinations are self-selected or arise as “experiments of nature.”

In a randomized-controlled trial, participants are randomly assigned to treatment groups (intervention or comparison) by investigators.

Source: CDC
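A toy simulation in Python (hypothetical numbers) makes the distinction concrete: when exposure is self-selected, the two groups already differ on baseline characteristics before any intervention takes effect; when a coin flip assigns exposure, they do not.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 10_000
    risk = rng.normal(size=n)  # a baseline covariate, e.g. risk behavior

    # Observational: higher-risk people are more likely to seek the intervention.
    self_selected = rng.random(n) < 1 / (1 + np.exp(-risk))
    # RCT: a coin flip decides, independent of risk.
    randomized = rng.random(n) < 0.5

    # Baseline imbalance between arms under each design:
    print(f"{risk[self_selected].mean() - risk[~self_selected].mean():+.2f}")  # clearly nonzero
    print(f"{risk[randomized].mean() - risk[~randomized].mean():+.2f}")        # near zero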

