data2thepeople

The folly of COVID-19 forecasts

Updated: Apr 22, 2020


With the proliferation of COVID-19 forecasts, the average person and government official alike are likely feeling overwhelmed. Is everyone already infected? How many deaths will we really see from this? Will we need to shelter in place for two weeks, or for six months? What will happen to my business or to my children’s schools? Predictions of what lies ahead have, arguably, never been given more attention, nor have the stakes been higher.


As data scientists involved in modeling this pandemic, we know how forecasts help us navigate the unknown. Many of us look to them for a shred of certainty in the midst of so much that is out of our control. But forecasts are not as certain as we would hope. Even amongst experts, the accuracy of different forecasts is hotly debated. However, nearly all of these forecasts are missing something critical, and lives depend on this missing consideration.


What’s missing is this: the goal of a COVID-19 forecast is not to be accurate. It’s to save lives.


What we need to realize is that in this dynamic, unprecedented situation, all forecasts have short shelf lives. We must think of forecasts not as inevitable futures, but as guideposts that inform us about the imperative to act and care for one another today.


With this new goal in mind, we’d like to offer 3 practical tips to help us make the most of forecasts.


  1. Beware of any forecast that tells you an exact number. Look for a range to prepare your expectations. If you are making a forecast, share that range.

  2. Consider the ethics of a forecast that is too high or too low. If you are seeing lots of different forecasts, make sure you pay attention to the high numbers too. Forecasters: do not discount high predictions.

  3. Account for how people will react to the forecast. As soon as people’s actions change, many predicted scenarios become less likely. A wise forecaster sets expectations that the outcome will change based on how people respond.


Let’s dive into each of these.


First, know that all data related to COVID-19 are uncertain. The number of COVID-19 cases depends on how many tests are available and how those tests are administered. Are tests given only to the very sick? To anyone who wants one? Even data on the number of deaths, while likely more accurate, have uncertainty. Was everyone who died from the virus tested? Were there delays in the test results? (See this article for more.) It is impossible to have certainty in a forecast given this shaky data foundation.


All statistical models, even with rock-solid data, produce probabilities, not certainties. Statisticians typically communicate this uncertainty with a “confidence interval,” which estimates an upper and lower bound for the forecast.
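To make this concrete, here is a minimal sketch of the idea, not the method of any model cited here: all the numbers (the current admissions figure, the growth rate, the horizon) are purely illustrative, and the point is simply that propagating uncertainty in an assumption yields a range rather than a single number.

```python
# A minimal, illustrative sketch: turn a hypothetical point forecast into a
# range by propagating uncertainty in an assumed daily growth rate.
# None of these numbers come from a real model.
import numpy as np

rng = np.random.default_rng(0)

current_admissions = 120   # hypothetical daily hospital admissions today
days_ahead = 14            # forecast horizon

# Draw many plausible growth rates instead of committing to one.
growth_rate = rng.normal(loc=0.03, scale=0.02, size=10_000)

# Project each simulated growth rate forward to the target day.
projected = current_admissions * np.exp(growth_rate * days_ahead)

low, mid, high = np.percentile(projected, [5, 50, 95])
print(f"Projected daily admissions in {days_ahead} days: "
      f"roughly {low:.0f} to {high:.0f} (median about {mid:.0f}), not a single number.")
```

The honest output here is the interval, not the median: even modest uncertainty in one assumption produces a wide spread two weeks out.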


The problem is that many of the COVID-19 forecasts being shared and actively used by the public and by policymakers do not include confidence intervals. Any time data is expressed as an exact number, it gives a false sense of certainty.


So projecting 300 hospital admissions in Washington, D.C., on May 14 (as the Penn CHIME model does) is not helpful. Saying “9 days till the projected peak” (as the IHME model cited by the White House does) is similarly irresponsible.


[Screenshot of the Penn CHIME model on April 20, showing the forecasts as exact numbers.]


We believe the forecasters sharing exact numbers mean well; perhaps they think an exact number is simpler for people to interpret than a range, or perhaps they aren’t able to estimate the uncertainty. Yet scientists are also educators, and as such they should share the range of potential outcomes. If they do not, ask for that range. Compare multiple sources of models. The discrepancy between forecasts is not a scary thing; it is a realistic reflection of the fact that there is a range of potential outcomes.


Recommendation for the forecaster: Present uncertainty with a range of numbers such as a “worst case” and a “best case,” which is clear language that the average person can understand.
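As a small illustration of that plain-language framing (the helper function and the numbers below are hypothetical, not drawn from any cited model), the best-case and worst-case wording can be generated directly from the interval itself:

```python
# Illustrative only: translate an interval forecast into the kind of
# plain "best case" / "worst case" language a general audience can read.
def describe_range(metric: str, low: float, high: float, when: str) -> str:
    return (f"Best case: about {low:,.0f} {metric} by {when}. "
            f"Worst case: it could be as high as {high:,.0f}. "
            f"Where we land depends on how we act between now and then.")

print(describe_range("hospital admissions", 150, 900, "May 14"))
```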


Second, make sure to pay attention to the high estimates. It may feel good to look at the low end of a range, but we argue there is a greater risk in ignoring the higher estimates. In addition, low estimates are often produced under the assumption that we will take preventive measures.

We should ask ourselves:


If we act on a forecast that is too low, what will happen? We may not prioritize changing behavior, changing policy, or investing in research. We will not be adequately prepared to prevent fatalities. The associated long-term economic suffering will be enormous as well. If we under-prepare because a forecast was too low, significantly higher fatalities are likely.


If we act on a forecast that is too high, what will happen? Governments may over-prepare or misallocate resources. Given the limited preparation taken by most governments so far, over-preparing seems to be the lower risk. The economic impact of over-preparing could be greater in the short term (though arguably not in the long term, because the economic impact will last much longer if the virus spreads widely). We can compensate for the loss of a job. We cannot compensate for the loss of a life.


As an example of how this can go wrong: on March 2nd, scientists in the UK predicted that there could be an astonishing 500,000 deaths from COVID-19. Yet the forecast was described as unlikely even by the scientists themselves. The result: the UK waited too long to act. Of course, this isn’t an occasion to cherry-pick high numbers; the model choices need to be reasonable. The point is that it is not a good idea to downplay high numbers just because they “seem too high.”


Recommendation for the forecaster: Ensure that the high numbers are not overlooked, using language such as “the number could be as high as X”.


Finally, let’s address the broader point of how sharing a forecast affects the outcome. The prophet’s dilemma warns that telling people a prediction can cause a reaction that either contradicts or fulfills that prediction, producing what are known as self-defeating or self-fulfilling prophecies.


This became especially salient in 2016, when Nate Silver and most political forecasters predicted Hillary Clinton would win, and she didn’t. Research suggested that, because forecasts of her win were so widely shared, many individuals may not have voted, thinking their vote was not needed, and that this may have contributed to her loss. In this case, the forecast became a self-defeating prophecy: it led to its own inaccuracy.


The same is true for COVID-19 forecasts. Any projection that is widely shared can change the outcome, for better or for worse, depending on how people respond to it. When we see a forecast, we should look into how it accounts for the impact of changing behavior. If no changes are accounted for, we should read the forecast as “what will happen if we continue behaving the way we have behaved so far.”
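To see how much behavior matters, here is a deliberately simplified sketch: a toy SIR-style simulation with made-up parameters, not any of the forecasts cited above. It runs the same model twice, once with contacts unchanged and once with contacts roughly halved.

```python
# Toy SIR model, one-day Euler steps. All parameters are illustrative.
def peak_infected(beta, gamma=0.1, population=1_000_000, days=300):
    # beta: infectious contacts per person per day; gamma: recovery rate.
    s, i, r = population - 10, 10, 0
    peak = i
    for _ in range(days):
        new_infections = beta * s * i / population
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        peak = max(peak, i)
    return peak

print(f"Contacts unchanged (beta = 0.30): peak about {peak_infected(0.30):,.0f} infected at once")
print(f"Contacts halved    (beta = 0.15): peak about {peak_infected(0.15):,.0f} infected at once")
```

In this toy model, halving the contact rate cuts the peak several-fold, which is exactly why a projection made under one behavioral assumption stops applying the moment behavior changes.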


Recommendation for the forecaster: Make it clear what assumptions you are making about changes in human behavior when producing the forecast.


Thus, the goal of producing or reading a forecast shouldn’t be to find the most accurate prediction, but to save lives. What ends up happening will depend heavily on the actions we take, and forecasts directly influence those actions.


As consumers of forecasts, we can hold our statisticians accountable. The role of the scientist and statistician during this pandemic is not simply to share numbers but to take responsibility for the consequences of sharing those numbers. At a time like this, when we are in a crisis and decisions carry immense consequences, this is even more urgent. Together, we can make the high forecasts wrong.


Thanks for reading, and if there is something we missed here, please let us know in the comments!



