Making Sense of Myriad Models

What you need to know about all those COVID-19 predictions.

By Michael Eisenstein

Four months ago, it was unimaginable that the public would be routinely grappling with terms like “R0” or contemplating logarithmic curves. But epidemiological models and their predictions are now regular fodder for the news and social media debates. These models can be confusing for nonexperts, so Justin Lessler and Elizabeth Lee of the Bloomberg School’s Infectious Disease Dynamics group clarify things by highlighting four important considerations.

Different models make different assumptions. Modeling explores questions ranging from how long infected individuals are contagious to the effectiveness of stay-at-home orders. In each case, one must define the current situation and likely future conditions before making projections. For example, a model of viral spread might assume that a community remains largely sequestered—a reasonable short-term assumption that starts falling apart as months pass. Lee thinks modelers must be clear about the “ground rules” they’re following: “There needs to be more of an upfront statement about what assumptions are being made and what the model can or can’t do.”
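The effect of an assumption like sustained sequestration can be seen in even the simplest epidemic model. The sketch below is a bare-bones SIR (susceptible–infected–recovered) simulation, not any group's actual model; every parameter value is an illustrative assumption, chosen only to show how a projection changes when the "people stay home" assumption holds versus when it breaks.

```python
# Minimal discrete-time SIR sketch: how one assumption (sustained distancing)
# shapes a projection. All parameter values are illustrative, not estimates
# from any real COVID-19 model.

def sir_peak(beta, gamma=0.1, days=120, n=1_000_000, i0=100):
    """Run a simple SIR simulation; return the peak number infected.

    beta  = daily transmission rate (higher = more contact between people)
    gamma = daily recovery rate
    """
    s, i, r = n - i0, i0, 0
    peak = i
    for _ in range(days):
        new_inf = beta * s * i / n   # new infections this day
        new_rec = gamma * i          # new recoveries this day
        s -= new_inf
        i += new_inf - new_rec
        r += new_rec
        peak = max(peak, i)
    return peak

# Assumption holds: distancing keeps the contact rate low for the whole run...
peak_distanced = sir_peak(beta=0.12)
# ...versus the same model when the assumption breaks and contacts rebound.
peak_rebound = sir_peak(beta=0.3)
print(f"peak infections, sustained distancing: {peak_distanced:,.0f}")
print(f"peak infections, contacts rebound:     {peak_rebound:,.0f}")
```

The two runs differ only in the assumed contact rate, yet they project very different peaks — which is exactly why Lee wants the "ground rules" stated up front.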

Models are built on incomplete information. Lessler sees a Catch-22 in pandemic modeling: “Models are most useful when we have the least data on which to base our decisions,” he says, “but that’s also when the models are the least well-informed.” With SARS-CoV-2, scientists have had to learn on the fly about fundamentals like how the coronavirus is transmitted or persists in different environments. Along the way, they have gained clarity on things like the infection fatality rate—estimated between 0.5% and 1% of infections—and the role of superspreading events. These insights are helpful for gaming things out, but researchers still lack critical information, including how widespread and durable post-infection immunity is.
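The infection fatality rate range quoted above already shows how even a "known" quantity leaves wide uncertainty in projections. A back-of-envelope calculation, using a made-up infection count purely for illustration:

```python
# Translating the IFR range from the text (0.5%–1% of infections) into an
# expected range of deaths. The infection count is hypothetical.

ifr_low, ifr_high = 0.005, 0.01   # 0.5%–1%, per the estimate in the text
infections = 200_000              # hypothetical number of infections

deaths_low = infections * ifr_low
deaths_high = infections * ifr_high
print(f"expected deaths: {deaths_low:,.0f} to {deaths_high:,.0f}")
# → expected deaths: 1,000 to 2,000
```

Even with the IFR pinned down to within a factor of two, the projected death toll for the same outbreak spans a factor of two as well — and quantities like the durability of immunity are far less constrained than that.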

Insights may not be broadly generalizable. Researchers initially leaned heavily on early findings from China and Italy. But the resulting models may not be directly applicable to other regions. Lee cites differences in pandemic countermeasures and health care systems—including patient treatment protocols and access to testing—as important confounders. Many other factors shape public health as well. Lessler notes, for example, that it remains unclear why New York City experienced such a severe crisis relative to other U.S. cities. “Maybe the disease isn’t as transmissible in less dense areas as [it is] in denser areas, or maybe there’s a big effect of climate,” he says. “But we are still figuring that out.”

Predictions are not prophecies. Nonscientists may be confused by the idea that “good” models often fail to predict actual outcomes. The reason is that these models are also guiding policy; for example, efforts to flatten the curve have helped prevent worst-case forecasts of infection and mortality from transpiring. “We did the things that the model suggested we should do to avoid this fate,” says Lessler. Similarly, models whose predictions shift greatly over time may be misperceived as unreliable, but Lee points out that this is simply a matter of evolution as new knowledge comes to light. “It’s a very iterative process,” she says. “You’re going to have to revise the model’s structure and assumptions all the time.”