The Imperial College Pandemic Model
There is probably no model more under attack than that of the Imperial College. I wrote about it two months ago and described it as reasonable, although it greatly overestimated the health carnage from COVID-19. Many of the model’s sharpest critics decry how it failed to account for the economic damage wrought by its recommendations. To be fair, the authors did provide a qualification.
“However, there are very large uncertainties around the transmission of this virus, the likely effectiveness of different policies and the extent to which the population spontaneously adopts risk reducing behaviours.”
This buyer-beware disclaimer, akin to “past performance does not indicate future outcomes,” was in the text of the report. But how many policymakers actually read the report, rather than acting on the PowerPoint presentation alone?
The subsequent criticism of the report began with a call for its authors to release the algorithms used in making their projections, so that outside experts, acting as peer reviewers, might examine how the model was constructed and what it assumed. It took six weeks of pressure for that information to be released, and the critics have pounced on two main issues.
First, the model itself is a series of calculations which, when bound together, can be called an algorithm. Both the equations and the binding together are done by programming the computer. The most basic rule of programming is to document, within the code itself, the meaning of the variables and what each calculation is supposed to produce. Here is the same equation without and with documentation:
D = I*P
Dividend = Interest * Principal 'determines payment to a customer
Which of the two is easier to understand, and more importantly, which of the two allows someone to evaluate the quality of the model? As you might have guessed, the programming by the Imperial College was poorly documented, making it harder for anyone other than the authors to make heads or tails of what was going on. It took what should have been a clear path and created a black box. At the very least, it shows a lack of care for a basic tenet of programming; at worst, it suggests obfuscation; in either case, it creates doubt about the inner workings.
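To see the same contrast in a modern language, here is a hypothetical sketch (not drawn from the Imperial College code) of one calculation written both ways. The function names and variables are my own illustration.

```python
# Undocumented: the reader must guess what f, i, and p mean.
def f(i, p):
    return i * p

# Documented: descriptive names and a comment let an outside
# reviewer check whether the logic matches the stated intent.
def dividend_payment(interest_rate, principal):
    """Determine the payment owed to a customer."""
    return interest_rate * principal
```

Both functions compute the identical number; only the second one can be audited by someone who did not write it.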
Second, the model employs over 450 variables. The advantage of so many variables is that they allow the model to resemble reality more closely. The disadvantage is that many of the variables are unknown, so assumptions must be made. With so many assumptions, you can pretty much dial in whatever result you want. Making assumptions about unknowns is part of modeling, and it is an area open to criticism by other experts who hold equally useful, but different, assumptions.
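A toy example shows how sensitive a projection can be to a single assumption. The sketch below is a basic textbook SIR model, not the Imperial College model, and every parameter value here is an assumption chosen for illustration; changing only the assumed reproduction number roughly doubles the projected peak.

```python
def peak_infected(r0, recovery_days=10.0, population=1_000_000, dt=0.1):
    """Euler-integrate a basic SIR epidemic model and return the
    peak number of simultaneously infected people (illustrative only)."""
    beta = r0 / recovery_days    # transmission rate implied by the assumed R0
    gamma = 1.0 / recovery_days  # recovery rate
    s, i, r = population - 1.0, 1.0, 0.0
    peak = i
    for _ in range(int(365 / dt)):          # simulate one year
        new_infections = beta * s * i / population * dt
        new_recoveries = gamma * i * dt
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        peak = max(peak, i)
    return peak

# Two plausible assumptions about R0 yield very different projections:
low = peak_infected(r0=2.0)
high = peak_infected(r0=3.0)
```

With hundreds of such unknowns instead of one, the range of defensible outputs becomes enormous, which is exactly why the assumptions need to be visible.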
The cure for both of these valid concerns is summarized as transparency, using
“...exactly the same data to see if the same result emerges from the analysis by using the same programs and statistical methodologies that were originally used to analyze the data.”
That quote comes from the administration’s proposal for scientific studies used by the EPA in regulatory decision making. Back in November, this proposal was decried by many scientists as a politicization of science. I wonder whether they share the same belief now.
Regret
The decisions to lock us down were based on the best available information we had in January and February. At the time, with the data from China, the modeling, and my own experience, I thought that social distancing was appropriate – it was, in my view, the best choice given our uncertainty. In hindsight, I might have modified my opinion, but I don’t regret my choice. Regret is about a loss we might have avoided. It is an emotion of could have, would have, should have – a feeling that never goes away and just keeps eating at you.
The decision to socially isolate was always littered with unknowns and uncertainty. Transparency about the models and their assumptions would have given us, perhaps, greater confidence in our path forward. It is a lack of confidence in our choice, not the uncertainty surrounding it, that allows regret to enter and fester. It is a lack of faith in our government to make the best choices that allows regret.
Making the science underlying regulatory decisions transparent increases confidence; it is another of COVID-19’s lessons. It improves our thinking; it allows us to identify our errors so we can try and avoid them in the future; and it keeps us from the social consequences of regret, which include anger and shame.