Which came first, the misconception or its exploitation?

"Climate models, like weather models, have been continuously tested and refined for more than a half century," writes NCSE Executive Director Ann Reid as she examines misconceptions related to models.


At NCSE, we design all our curricular resources around the concept of misconception-based learning. But why do we care so much about the need to confront and correct misconceptions as part of a great science education? Here’s my answer: misconceptions are like vulnerabilities in your home security system that a burglar can exploit to break in. When you have a misconception about how science works, you’re a little less likely to recognize a bad argument and a little more likely to believe something that is not true. And when it comes to evolution and climate change — the topics NCSE focuses on — there are a lot of bad arguments out there. We strive to make sure that students get the skills they need to recognize and resist those bad arguments. In this Misconception of the Month, we’ll discuss how common misconceptions about the way models work in science can make people vulnerable to efforts to misinform and confuse them about climate change.

Models serve a lot of purposes in science, and in science education. As representations of reality, models can help us picture things that are too small or too big for us to see, explore complicated relationships among variables to test how they interact, and understand and predict the behavior of systems that cannot be directly manipulated. But because of the sometimes complicated relationship between models and the reality they are intended to represent, confusion can arise.

Like with this model, for example:

[A screenshot from the movie Zoolander]

Okay, that’s pretty funny. But before you get all smug, most of us do have some pretty inaccurate mental models of how big things are in relation to other things. To prove that to yourself, before you open the next link, draw a picture that represents how big an average human cell is in comparison to an average bacterium and an average virus.

Done?

Now open this site (use the slider below the illustration to zoom in) and see how close your mental model came to reality. I suspect that you were off by at least an order of magnitude.
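
If you want to see the numbers behind that exercise, here is a minimal sketch using rough, textbook-typical diameters (figures I’m assuming for illustration only; real cells, bacteria, and viruses vary enormously in size):

```python
# Rough, typical diameters in micrometers -- order-of-magnitude
# illustrations only, not measurements of any particular organism.
sizes_um = {
    "average human cell": 20.0,  # ~20 micrometers across
    "average bacterium": 2.0,    # ~2 micrometers
    "average virus": 0.1,        # ~100 nanometers
}

cell, bacterium, virus = sizes_um.values()
print(f"cell / bacterium: ~{cell / bacterium:.0f}x")    # ~10x
print(f"bacterium / virus: ~{bacterium / virus:.0f}x")  # ~20x
print(f"cell / virus: ~{cell / virus:.0f}x")            # ~200x
```

Each step down is roughly a factor of ten or more, which is exactly the kind of gap our mental models tend to flatten.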

We are probably all harboring not only some mistaken models (as we’ve just seen with regard to scale) but also some misconceptions about models — and, perhaps, especially about models that predict the future. You know, like weather models, or economic models, or, what’s been on all our minds lately, coronavirus pandemic models. There are a lot of models about the future, but only one future comes to pass: doesn’t that mean that models that predict the future are generally no good? That’s a common misconception about models, reinforced by the forbidding mathematical and computational complexity in areas of science where models have been employed for a long time and have become highly sophisticated.

But here’s the thing. Models indeed vary with respect to their desirable qualities — accuracy, precision, scope, and so forth — and none is perfect. But it only takes a moment’s reflection to realize that just because no model is perfect, that doesn’t mean that all models are no good.

If models vary in quality, how do you tell a good one from a bad one? One way is to confirm that the inputs being used make sense and are based in reality. Here’s an example: say you go into a bank asking for a car loan to buy a Maserati and show the banker your personal financial model. “Look,” you say, “$100,000 coming in each month and only $2,000 going out! I can pay off a $150,000 car in just a couple of months!” That’s a swell model, but the banker is going to ask for a pay stub. And that little reality check had better match your personal financial model or you’re not getting the car.
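
In code, the banker’s reality check might look something like this minimal sketch (all numbers are hypothetical, taken from the story above):

```python
# All figures hypothetical, per the loan-application story above.
claimed_income = 100_000   # dollars per month, according to the model
claimed_expenses = 2_000   # dollars per month
car_price = 150_000

monthly_surplus = claimed_income - claimed_expenses
months_to_pay_off = -(-car_price // monthly_surplus)  # ceiling division -> 2
print(f"The model says the car is paid off in {months_to_pay_off} months.")

# The banker's reality check: does an independent observation (the pay
# stub) match the model's key input? If not, the output is worthless.
pay_stub_income = 4_000    # hypothetical observed value
if pay_stub_income < claimed_income:
    print("Key input fails the reality check -- no Maserati.")
```

The arithmetic inside the model is flawless; it’s the input that sinks it. Checking inputs against independent observations is the first filter for any model.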

The same is true for the more complicated models that are used to predict phenomena like the weather, the course of the coronavirus pandemic, or the future climate of planet Earth. In all of these cases, there are hundreds, or even thousands, of variables whose connections to each other are known with varying levels of confidence. No model can predict the future with absolute precision and confidence (indeed, there is a natural trade-off between those two characteristics), but scientists have developed many techniques to determine whether a model is capturing the important aspects of the system it is trying to represent. One of the most direct is simply to run the model and compare its output to observations: scientists can use data from the present to try to predict the future (forecasting), or they can use data from the past to try to predict the present (hindcasting). Either way, if the model is able to reproduce what actually happened, that suggests that the connections between the variables reflect how the system actually works.
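
Here is a minimal sketch of the hindcasting idea, using a deliberately toy “model” (just a straight-line trend fitted with NumPy) and made-up temperature anomalies standing in for real observations:

```python
import numpy as np

# Made-up annual "temperature anomalies" (degrees C) -- a synthetic toy
# dataset for illustration, not real climate observations.
rng = np.random.default_rng(0)
years = np.arange(1960, 2020)
observed = 0.018 * (years - 1960) + rng.normal(0.0, 0.08, years.size)

# Hindcast test: calibrate the toy model (a straight-line trend)
# on 1960-1999 only...
train = years < 2000
slope, intercept = np.polyfit(years[train], observed[train], deg=1)

# ...then predict 2000-2019, a period the fitted model never saw,
# and compare against what was "actually" observed.
predicted = slope * years[~train] + intercept
rmse = np.sqrt(np.mean((predicted - observed[~train]) ** 2))
print(f"out-of-sample error (RMSE): {rmse:.3f} degrees C")
```

Real climate models are vastly more sophisticated than a fitted line, of course, but the logic of the test is the same: if a model calibrated on one period can reproduce a period it never saw, that’s evidence it has captured something real about the system.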


And, of course, scientists are constantly testing how their models perform as new data come in and trying to improve the fit of the models’ predictions to reality. The story of continuous improvement in weather modeling really drives this idea home. For most of human history, people could not travel faster than the weather. The only way to predict what might be coming was to keep careful track of past years to estimate when it might be warm enough to plant crops or dry enough to harvest wheat. Farmers’ almanacs were the best weather model available! Think about how often you check your phone to see whether the weekend will be sunny. Now imagine what it was like to have no warning of an oncoming blizzard or blistering heat wave. With the advent of the telegraph, it became possible to keep track of the weather hundreds of miles away — according to this wonderful story in The New Yorker, early telegraph operators noticed that if the lines to their west were down (suggesting wet weather), they could expect rain in a few days. These days, of course, weather models are informed by tens of thousands of measurements on the ground, in the air, and from space, continuously updated and corrected against actual conditions. Today’s three-day weather forecasts are more accurate than 24-hour forecasts were just 40 years ago.

All this to say that models can be incredibly useful and vitally important. But that requires a long-term investment in data collection and analysis (see Spencer Weart’s recent discussion of the dedicated climate researchers at Vostok Station in Antarctica) and the same kind of competitive scrutiny that characterizes all of science.

When you think about models in this way, I think you’ll have no problem seeing through some of the more superficially reasonable-sounding criticisms you’ll hear about climate modeling, some of which have even been leveled by scientists who should know better. Climate models, like weather models, have been continuously tested and refined for more than a half century. Increasingly sophisticated and detailed measurements can now be incorporated and the models’ predictive capabilities have been matched up against reality for decades, proving, sadly, that early estimates of the impact on our climate of rising carbon dioxide levels were, if anything, rather more optimistic than alarmist. In short, the climate models developed and refined by international teams of scientists are sophisticated, complex, and under constant careful scrutiny.

Nevertheless, there are plenty of people who have, for one reason or another, decided that climate change is a hoax or that climate scientists are conformist sheep. These climate deniers are fond of suggesting that climate models are just examples of “garbage in, garbage out” that are designed to support the views of environmental alarmists. For example, in 2007, Freeman Dyson, a brilliant scientist and writer who passed away last year at the age of 96, and who absolutely should have known better, complained: “[Climate models] are full of fudge factors that are fitted to the existing climate, so the models more or less agree with the observed data. But there is no reason to believe that the same fudge factors would give the right behaviour in a world with different chemistry, for example in a world with increased CO2 in the atmosphere.”

Dyson’s argument is a classic case of exploiting the misconception that all models are just a series of guesses that can be manipulated to give whatever answer you want. While some models might be quick-and-dirty estimates that should be taken with a grain of salt and assessed with a gimlet eye for possible bias, that’s not the case for all models. If someone is going to take aim at international models developed cooperatively and successively refined by hundreds of scientists from dozens of disciplines, don’t let them get away with hand-waving about fudge factors. Everybody agrees that no model is perfect, but nobody should agree that no models can be trusted, and specific criticisms, not general skepticism, are necessary to challenge a specific model.


Ann Reid is a former Executive Director of NCSE.

reid@ncse.ngo