Nate Silver's 'Chance of Winning' Election Forecasts Are Complete Nonsense

By Alex Berezow, PhD — Nov 07, 2016

Nate Silver, statistician and election forecaster, said on ABC News that election forecasts giving Hillary Clinton a 99% chance of winning don't "pass a common sense test." That is certainly true. What he leaves unsaid, possibly because it wouldn't be good for his career, is that no election forecast that provides a "chance of winning" passes the science test.

Earlier, we published an article explaining why there is no such thing as a scientific poll. In a nutshell, because polling relies on good but sometimes inaccurate assumptions, it is far more art than science. As we noted, "Tweaking [voter] turnout models is more akin to refining a cake recipe than doing a science experiment." Still, since American pollsters are good at their jobs, polls tend to be correct more often than not.

Recently, pollsters and pundits have tried to up their game. No longer content with providing polling data, they now want to try their hand at gambling, as well. It has become fashionable to report a candidate's "chance of winning." (ESPN does this, too. Last week, the network predicted that the Seattle Sounders had a 94% chance to advance to the semi-finals of the MLS Cup. I am grateful this prediction ended up being correct.)

However, these predictions are thoroughly unscientific. Why? Because it is impossible to test the model.

Let's use the soccer match as an example. The only way to know whether ESPN's prediction (that Seattle had a 94% chance of advancing to the semi-finals) is accurate would be to have Seattle and its opponent play the match 100 or more times. If Seattle advanced roughly 94 of those times, the model would be shown to be reasonably accurate. Of course, soccer doesn't work like that. There was only one game. Yes, the Sounders advanced, so the prediction was technically correct, but a sample size of one cannot test the model.
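
To make that replay argument concrete, here is a minimal Python sketch. It is purely illustrative: the 94% figure is ESPN's, but the coin-flip simulation and names such as play_match are my own assumptions, not anyone's actual model.

import random

# ESPN's stated probability that Seattle advances; the only real number here
ADVANCE_PROB = 0.94

def play_match() -> bool:
    """One hypothetical replay of the match; True means Seattle advances."""
    return random.random() < ADVANCE_PROB

# The real world gives us exactly one match: a single yes/no outcome,
# which is consistent with almost any claimed probability.
print("Single match, Seattle advanced:", play_match())

# Only by replaying the match many times could we compare the observed
# frequency of advancing with the 94% claim; that is exactly the experiment
# we can never run.
replays = 100
advances = sum(play_match() for _ in range(replays))
print(f"{replays} hypothetical replays: Seattle advanced {advances} times "
      f"({advances / replays:.0%})")

A single run of the first print tells you almost nothing about whether 0.94 was the right number; only the repeated-replay count at the bottom could, and it exists only inside the simulation.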

The exact same logic applies to elections. As of this writing, Nate Silver gives Hillary Clinton an absurdly precise 70.3% chance of winning. (No, not 70.2% or 70.4%, but exactly 70.3%.) If she does indeed win on Election Day, that does not prove the model is correct. For Mr Silver's model to be proven correct, the election would need to be repeated at least 1,000 times, and Mrs Clinton would need to win about 703 times.
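
As a rough back-of-the-envelope check on what that would take, the short Python sketch below computes the standard error of an observed win frequency after n hypothetical, independent repeats of the election. This is a textbook binomial calculation, not Mr Silver's method; the 70.3% is his published number, the rest is illustrative, and real elections are of course not independent repeats.

from math import sqrt

P = 0.703  # Silver's stated win probability; the only real number here

def frequency_std_error(p: float, n: int) -> float:
    """Standard error of the observed win frequency after n independent repeats."""
    return sqrt(p * (1 - p) / n)

for n in (1, 100, 1_000, 100_000):
    se = frequency_std_error(P, n)
    print(f"n = {n:>7,}: observed frequency = {P:.1%} +/- {se:.1%} (one standard error)")

By this arithmetic, even 1,000 reruns would pin the true frequency down only to within about 1.4 percentage points, and resolving a tenth of a percentage point would take on the order of 100,000 reruns.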

Even worse, Mr Silver's model can never be proven wrong. Even if he gave Mrs Clinton a 99.9% chance of winning and she lost, Mr Silver could reply, "We didn't say she had a 100% chance of winning."

Any model that can never be proven right or wrong is, by definition, unscientific. Just like conversations with the late Miss Cleo, such political punditry should come with the disclaimer, "For entertainment purposes only."

Alex Berezow, PhD

Former Vice President of Scientific Communications

Dr. Alex Berezow is a PhD microbiologist, science writer, and public speaker who specializes in the debunking of junk science for the American Council on Science and Health. He is also a member of the USA Today Board of Contributors and a featured speaker for The Insight Bureau. Formerly, he was the founding editor of RealClearScience.
