In early 2010, text4baby launched across the country. Since then, more than 260,000 people have enrolled. But Jordan and her colleagues wanted to make sure that text4baby was a success by other standards as well, so they built in measures to evaluate the program from the start.
According to Piers Bocock, project director for the Knowledge for Health Project, run by the Bloomberg School’s Center for Communication Programs (CCP), mHealth evaluation remains a huge hurdle. Governments and donors want to make sure that mHealth interventions can be measured so they can make the right decisions about funding comprehensive mHealth programs. “There are a lot of pilots out there,” says Bocock, “but not a lot at scale.”
CCP, which includes mHealth components in more than a dozen of its projects around the world, is constantly working to understand how mobile efforts are adding to the effectiveness of its programs.
“We all realize mHealth can be a game-changer, especially when it is part of other social and behavior change communication activities,” says Bocock. “The question we want to answer is how to quantify its effectiveness within the context of broader public health interventions.”
Garrett Mehl, PhD ’00, MHS ’94, a WHO scientist and a chair of its Health Data Forum Working Group on mHealth, notes that insufficient attention to research has been the downfall of countless mHealth projects.
Free phones and airtime for researchers and subjects have been the kiss of death for many mHealth pilot projects as they try to scale up to full-size programs.
“I think we can definitely say that there have been a considerable number of pilot mHealth projects, and a lot of them have failed either in their ability to demonstrate some health impact or in their ability to find a mechanism to sustain them,” he says. In a joint project with a Bloomberg School intern that assessed the global state of evidence generation among mHealth projects, Mehl found that a considerable proportion of the projects were struggling to secure the research—and donor support—needed to validate their efforts. Many projects still listed in public databases had already ended, suggesting an inability to transition from pilots to scaled-up programs. Too often, Mehl notes, projects are driven by donors’ pleas to get implementations running quickly, with no forethought about how to judge success and no pressure to plan for scale-up and sustainability.
“You begin to worry that a lot of investments are being made in this area, and if they fail, you worry that people won’t want to continue to invest,” he says. Fortunately, Mehl notes that donors are now beginning to pay more attention to evaluation and invest in mHealth evidence generation and synthesis.
To head off these problems, Jordan and her colleagues incorporated some unique evaluation methods into the fabric of text4baby. For example, to see whether the program is reaching its intended audience, researchers ask participants for their ZIP codes during registration. The result is a real-time, nationwide map that text4baby’s partners, including local health offices, can access to watch enrollment numbers change minute by minute. They can also instantly see whether ads enticing women to sign up have the desired effect. An ad for text4baby during the popular MTV program 16 and Pregnant caused a huge spike in enrollment.
“It’s a huge strength of the program to see whether we’re hitting our intended audience,” Jordan says.
But demonstrating whether these texts are improving outcomes for mothers and babies is a much tougher problem to tackle, Jordan notes. “It’s easy to tell whether women are enrolling, find out whether they like the messages or see if the number of texts they get each week is acceptable,” she says. “It takes a lot more time, effort and evaluation strategies to demonstrate knowledge and behavior change.”
One step toward judging whether they’re achieving this goal, Jordan adds, is a series of interactive modules that the text4baby team recently began inserting into the typical texts that users receive. Around the end of October 2011, they sent their first interactive module: a questionnaire on whether users had received the flu shot, and if not, why. Within 48 hours, nearly a third of the 96,000 users who received the module responded, giving Jordan and other researchers involved with the project reassurance that users were engaged and interested in sharing information, as well as lending insight into their health behaviors.
[Photo: Alain Labrique shows off a trove of low-cost technological treasures that support research from Kenya to Bangladesh.]