That is actually a straw-man argument; the reality is that we need both, and these two poles of thinking have been in contention for some time. With decisions about reopening on our minds, this is a good moment to look at how these approaches differ and blend.
Models
Models are sets of mathematical equations, algorithms that seek to approximate real-life experience; in this case, the spread of COVID-19. Much of our literature on air pollution and climate change comes from these approximations. Like any approximation, a model involves simplification, providing a broad outline, not the fine detail. Models are based on theories, hypotheses about what is actually occurring. The classic epidemiologic model uses an SIR framework: the susceptible, the infected, and the recovered. Using those three groups, one can identify what one suspects are the essential "drivers" of change and predict a variety of outcomes. Models provide scientists with a sandbox to play in, tweaking the drivers and seeing how things change.
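To make that sandbox concrete, here is a minimal SIR sketch in Python. The equations are the standard SIR system, but the parameter values (a transmission rate beta and recovery rate gamma chosen so R0 = 2) and the simple Euler integration are illustrative assumptions, not anything taken from the article.

```python
# A minimal SIR sketch: illustrative only, with made-up parameters.
# S, I, R are the susceptible, infected, and recovered fractions.

def simulate_sir(beta=0.4, gamma=0.2, days=120, i0=0.001, dt=1.0):
    """Euler-integrate dS/dt = -beta*S*I, dI/dt = beta*S*I - gamma*I,
    dR/dt = gamma*I.  Here R0 = beta / gamma = 2.0."""
    s, i, r = 1.0 - i0, i0, 0.0
    history = [(s, i, r)]
    for _ in range(int(days / dt)):
        new_infections = beta * s * i * dt
        new_recoveries = gamma * i * dt
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        history.append((s, i, r))
    return history

history = simulate_sir()
print(f"Peak infected fraction: {max(i for _, i, _ in history):.1%}")
```

Tweak beta or gamma and rerun, and you are doing exactly what the paragraph describes: playing with the suspected drivers to see how the outcome changes.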
Models that more closely reflect real-world outcomes are better than those that do not, but all models are ultimately wrong. How could they not be? At heart, they are simplifications. With the advent of computing, we have been able to improve our modeling by taking those "drivers of change," which were initially static values, say an R0 of 2, and making them dynamic, letting R0 fall from 3 to 0.5 over the course of a few weeks as we socially distance. These changes can improve or degrade a model's approximation of reality, but each added moving part also reduces our confidence in the model's predictions. For scientists and science, the strength of models is not really in their predictive capabilities but in that sandbox ability to see how the many different parts fit and interact. That being said, in the presence of little or no information, models can help inform the thinking and decisions of policymakers, the government agents and politicians who close and open our world.
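A hedged sketch of that "dynamic driver" idea: swap the constant R0 of the previous sketch for one that ramps from 3 down to 0.5 as distancing takes hold. The start day, three-week duration, and linear ramp shape are invented for illustration.

```python
# Dynamic-R0 variant: R0 decays from 3 to 0.5 over an assumed
# three-week period of social distancing beginning on day 30.

def r0_t(day, r0_start=3.0, r0_end=0.5, start=30, duration=21):
    """Linear ramp from r0_start to r0_end beginning on day `start`."""
    if day < start:
        return r0_start
    if day >= start + duration:
        return r0_end
    frac = (day - start) / duration
    return r0_start + frac * (r0_end - r0_start)

def simulate_sir_dynamic(gamma=0.2, days=180, i0=0.001, dt=1.0):
    s, i, r = 1.0 - i0, i0, 0.0
    history = [(s, i, r)]
    for step in range(int(days / dt)):
        beta = r0_t(step * dt) * gamma  # beta tracks the changing R0
        di = beta * s * i * dt
        dr = gamma * i * dt
        s, i, r = s - di, i + di - dr, r + dr
        history.append((s, i, r))
    return history

print(f"Peak: {max(i for _, i, _ in simulate_sir_dynamic()):.1%}")
```

The point of the exercise is the trade-off in the paragraph above: the ramp lets the model track reality more closely, but every assumption baked into r0_t is one more thing we cannot be sure of.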
Evidence
One has only to consider the phrase "smoking gun" to see how evidence sounds so much more substantial. But evidence, too, has its limitations. We can only find evidence where we look, and models interact with evidence by suggesting which factors drive outcomes, that is, where to look. Evidence can also change over time. For most of the past few months, the evidence "showed" that children were largely spared by COVID-19; the last few weeks are beginning to tell a different story. Evidence, in its turn, helps to improve models by providing numbers to put into those equations, resulting in greater confidence in their outputs.
As an old professor of mine put it, albeit in a slightly different context, models and evidence are like sin and confession: without the one, there is nothing to say in the other. In academia, where we find both the modelers and the evidence-based deciders, specialization allows them to work independently, occasionally sniping at one another's weaknesses. In the real world, though, they are hopelessly and critically entangled, especially when it comes to making decisions.
“It must combine theory with evidence and make use of diverse data while demanding data of increasingly higher quality. It must be liberal in its reasoning but conservative in its conclusions, pragmatic in its decision making while remaining skeptical of its own science. It must be split-brained, acting with one hand while collecting more information with the other. Only by borrowing from both ways of thinking will we have the right mind for a pandemic.”
I like that quotation, but it applies not just to pandemics; it applies to all the ways in which we use modeling and evidence. As we gain perspective and distance on COVID-19, we should keep these strengths and weaknesses in mind as we consider all the issues for which we seek a scientific, evidence-based understanding.
Source: "Models v. Evidence," Boston Review