Why Election Polls Were Wrong, and Why RCP Is Better than Nate Silver

By Alex Berezow, PhD — Nov 09, 2016

The statement, "Statistics isn't science," is about as banal as, "The sky is blue," or, "Puppies are cute." Anyone remotely familiar with the scientific method understands that, just like a ruler or a telescope, statistics is a tool. Scientists use the tool primarily for one purpose: To answer the question, "Is my data meaningful?" Properly used, statistics is one of the most powerful tools in a scientists' tool belt. 

But improperly used, statistics can be highly misleading. If an astronomer only points his telescope at the sun, he won't be able to see the Milky Way behind it. Worse, he will arrive at a very twisted understanding of the universe; he will conclude that everything beyond Earth is orange and fiery. Similarly, if a statistician plugs the wrong numbers into an algorithm (or uses an algorithm that has faulty assumptions), it will spit out a result that gives an inaccurate picture of the data.

This seemingly obvious statement -- that statistics is a powerful tool but is not itself science -- is apparently wildly controversial. My recent article explaining why Nate Silver's "chance of winning" election forecasts were complete nonsense received pushback.

For instance, Prof Matt Moran, a biologist at Hendrix College, provided some feedback* in the comments section:

His model consistently has predicted state results (I believe he has gotten 99 of the last 100 state results correct in the presidential race). That result (plus his many other predictions) shows that his model is highly predictive.

Yes, notwithstanding Tuesday night's results, polling is often quite accurate. But Prof Moran misses the point. Other completely unscientific methodologies are every bit as (in)accurate as Mr Silver's algorithm.

It should be noted that Mr Silver is not a pollster. He aggregates other organizations' polls, applies algorithmic "magic sauce," and then produces a forecast that predicts each candidate's chance of winning. In 2012, he correctly called all 50 states.

RealClearPolitics' Simple Model Is Superior to Nate Silver's Algorithm

That sounds impressive, until one considers that RealClearPolitics (my former employer) nailed 49 states. Its prediction got only Florida wrong. RCP doesn't use magic sauce; instead, it takes a simple (though statistically incorrect) arithmetic average of polls. But that is good enough. Occam's Razor would advise us to accept the simple RCP model over Mr Silver's fancy model. If the more complex model does not yield substantially better results, then perhaps the added complexity is nothing more than smoke and mirrors.
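To see just how simple that kind of model is, here is a minimal Python sketch of an unweighted arithmetic average of poll margins. The numbers are hypothetical and purely illustrative; this shows the general idea of a plain polling average, not RCP's actual code.

```python
# Minimal sketch of an unweighted poll average (hypothetical numbers).
# Each entry is one poll's reported margin in percentage points
# (positive = Clinton lead, negative = Trump lead).
poll_margins = [4.0, 1.0, 5.0, -1.0, 3.0, 2.0]

# The entire "model" is the arithmetic mean of the margins.
average_margin = sum(poll_margins) / len(poll_margins)

print(f"Polling average: Clinton +{average_margin:.1f}")
```

No turnout adjustments, no house-effect corrections, no "chance of winning" -- just the mean of the most recent polls.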

Fast forward to 2016. Mr Silver predicted, with an absurdly precise 71.4% chance, that Mrs Clinton would take 302 electoral votes and beat Mr Trump by 3.6% in the popular vote. He was wildly incorrect. (As of this writing, Mr Trump will win 306 electoral votes but will lose the popular vote by a mere 0.2%.) Even worse, Mr Silver got several key states wrong: He predicted that Mrs Clinton would win Wisconsin, Michigan, Pennsylvania, North Carolina, and Florida. In reality, Mr Trump swept all of them.

RealClearPolitics, with its simple model, was also wrong but much closer to the truth: It predicted a 272-266 Clinton win with a national margin of 3.3%. Furthermore, it correctly predicted that Mr Trump would win North Carolina and Florida.

So Why Were the Polls Wrong?

An article in The Economist explains how pollsters got things so wrong. Basically, there are two primary sources of error. The first is under- or oversampling particular demographics, such as minorities or non-college-educated voters. The second is the turnout model, which attempts to decipher which potential voters will actually vote. Getting either or both of these wrong probably explains the systematic bias against Trump in the polls. (BuzzFeed also proffers five plausible hypotheses.)
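To make the first source of error concrete, here is a toy Python sketch of how undersampling one group skews a poll, and how weighting each group back to its share of the electorate corrects it. All of the shares and support numbers below are invented for illustration; real pollsters weight on many more variables.

```python
# Toy illustration of demographic weighting (all numbers invented).
# Suppose non-college voters are 50% of the electorate but only 35% of the sample.
population_share = {"college": 0.50, "non_college": 0.50}
sample_share = {"college": 0.65, "non_college": 0.35}

# Hypothetical share of each group supporting Candidate A.
support = {"college": 0.55, "non_college": 0.42}

# The raw estimate reflects the skewed sample...
raw_estimate = sum(sample_share[g] * support[g] for g in support)

# ...while weighting each group to its population share corrects the skew.
weighted_estimate = sum(population_share[g] * support[g] for g in support)

print(f"Raw estimate:      {raw_estimate:.1%}")       # roughly 50.5%
print(f"Weighted estimate: {weighted_estimate:.1%}")  # roughly 48.5%
```

The catch, of course, is that the weights and the turnout model are only as good as the pollster's guess about who will actually show up to vote.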

Those who embrace the notion that polling really is science will say, "Next time, they'll adjust the turnout model and get it right." But there's no way of knowing how to tweak the model properly in advance. Understanding how polls went wrong in the previous election may or may not help polling in the next one. Pretend it is 2024, and a moderate Republican senator faces off against a moderate Democratic former military officer. How will knowledge of the Trump electorate help predict the 2024 electorate? It won't. It's back to guesswork all over again.

For that reason, polling isn't science. Each iteration of the model is tested only once, after which it is tweaked again and tested only once. But that is not sufficient. Fully testing a predictive model requires hundreds or thousands of observations, which would require that we run the election hundreds or thousands of times. Obviously, we can't do that.
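To illustrate why a single outcome tells us so little, consider a hypothetical forecaster who repeatedly announces a 71% chance. The short Python simulation below (my own illustration, not anything Mr Silver actually runs) shows how many repetitions it takes for the observed win rate to settle anywhere near the forecast probability.

```python
import random

random.seed(0)
forecast = 0.71  # assume the 71% forecasts really are well calibrated

for n_elections in (1, 10, 100, 1000):
    wins = sum(random.random() < forecast for _ in range(n_elections))
    print(f"{n_elections:>4} elections: favorite won {wins / n_elections:.0%} of the time")

# With one election the observed rate is either 0% or 100% regardless of the
# model's quality; only after hundreds of repetitions does it approach 71%.
```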

The Huffington Post said that Mr Silver's analysis constituted "political punditry dressed up as sophisticated mathematical modeling." They are correct, but they have little room to talk; their forecast performed even worse than Mr Silver's.

The bottom line: When I was in graduate school, we reminded each other of an adage about the limits of models: "Garbage in, garbage out." Pollsters, statisticians, and pundits, take note.

*He also observed that I am "a poor excuse for a scientific journalist." Because Prof Moran is incapable of expressing himself like an adult, he is no longer allowed to play in our sandbox.


Dr. Alex Berezow is a PhD microbiologist, science writer, and public speaker who specializes in the debunking of junk science for the American Council on Science and Health. He is also a member of the USA Today Board of Contributors and a featured speaker for The Insight Bureau. Formerly, he was the founding editor of RealClearScience.
