Four Questions to Ask When Assessing Election 2020 Polls: One Pollster’s Advice
December 17, 2019
You know the story by now: in the summer and fall of 2016, Hillary Clinton seemed to lead in nearly all the polls, yet she did not become the President of the United States. Late 2016 headlines included, “Hillary Clinton on Track for Electoral College Landslide,” “Survey Finds Hillary Clinton Has ‘More than 99% Chance’ of Winning Election Over Donald Trump,” and “Relax, Donald Trump Can’t Win.” So, as your skeptical uncle might have opined between bites of turkey over Thanksgiving dinner, maybe after 2016 we shouldn’t trust polls anymore.
Now, as a pollster, this is the type of statement about 2016 polling that you shouldn’t utter to me if you have the misfortune of being seated next to me on a cross-country flight. I will gladly fill that time and space with the ways the “Polls in 2016 Were Wrong” argument is sort of right (declining response rates, bad assumptions in turnout models), sort of wrong (many of the “miscalled” states were within an acceptable margin of error, and Nate Silver’s probabilistic aggregation model was defensible), and outright wrong (the national polls more or less nailed the popular-vote margin; polls are a snapshot in time, and voters broke late against Clinton). So I’m going to resist the temptation to relitigate that supposed conventional wisdom about 2016. For now, let’s just agree: “it’s complicated.”
And, it is complicated. We’re in the midst of a barrage of Democratic Primary election polling, and we’ll soon be equally deluged with General Election polling. So if next August you’re looking at five polls that range from Trump +3 to Biden (or whoever ends up being the nominee) +8, how do you make sense of that data? How do you know whether that poll saying the Democrats might actually win Texas this time, or the Republicans might actually win Minnesota this time, is legit? It can be very difficult to sort out, even for someone like me who has been in the polling industry for more than a dozen years.
Here are four questions I often ask myself when a new poll is released and I’m deciding what to make of it.
Question 1) Is this poll actually intending to measure who will win the Presidential Election? Or just something close?
There are a host of polling questions that you will commonly see cited to comment on the prospects of the candidates in 2020 – yet, when you think critically about them, they are somewhat meaningless. The most common meaningless question is, simply, “Who do you plan to vote for in 2020?” when asked nationwide. In 2016, the so-called “horserace question” was asked ad nauseam on a nationwide scale, but as President Al Gore and President Hillary Clinton can tell you, we do not elect our Presidents by popular vote, so we need to stop taking this rather broad measure of who is winning nationwide so seriously. Another example of a meaningless question is what I call the Generic Democrat question. You may have heard someone say something like, “Trump trails a Generic Democrat by 6 points.” However, the Democrats aren’t allowed to run a Generic Democrat; they do in fact have to run a Specific Democrat, who may fare differently. It’s not that these questions (and others like favorability, job approval, or the share who would re-elect vs. replace) don’t give us any useful information; it’s that they shouldn’t substitute for actual analysis of who people say they will vote for.
Question 2) Is this poll representing the true electorate?
Contrary to popular belief, pollsters do not just take the first 1,000 people who respond to their polls and call that the data; that’s just the easy part. Pollsters then must weight the data in accordance with what they think the electorate will look like if they want to get an accurate picture. Very often, political pollsters will weight data by gender, race/ethnicity, education level, age, region, urbanicity, and even, depending on the pollster (it is controversial), party identification.
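To give a feel for the mechanics, here is a minimal sketch of the simplest version of that weighting, using a single variable and entirely made-up numbers (real pollsters weight on many variables at once, often with iterative techniques such as raking):

```python
# A toy, single-variable weighting example with hypothetical numbers.
# Who actually answered the poll, by age group:
sample_share = {"18-34": 0.15, "35-64": 0.55, "65+": 0.30}
# What the pollster assumes the electorate will look like:
target_share = {"18-34": 0.25, "35-64": 0.50, "65+": 0.25}

# Each respondent is weighted by (target share / sample share) for their group,
# so underrepresented groups count for more and overrepresented groups for less.
weights = {group: target_share[group] / sample_share[group] for group in sample_share}
print(weights)  # roughly {'18-34': 1.67, '35-64': 0.91, '65+': 0.83}
```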
In an experiment I love and keep coming back to, Nate Cohn of the New York Times fielded an election poll among likely voters in Florida in 2016 and gave the raw data to four different pollsters to interpret. Each of them – working off the exact same raw data – made different turnout assumptions and came up with the following results: Clinton +4, Clinton +3, Clinton +1, Trump +1. This shows the importance of having a pollster with both the methodology and the common sense to predict what electoral turnout in a given state might look like. If you see a result that looks very different from what other pollsters are getting, chances are that pollster made different assumptions about who will turn out to vote.
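To make that concrete, here is an equally stylized sketch (made-up numbers, not Cohn’s actual data) of how the same raw responses can yield different toplines once they are weighted to different assumed electorates:

```python
# Hypothetical raw poll results by education group: each group's Dem-minus-Rep margin.
raw_margin = {"college": +8, "non_college": -6}

# Two hypothetical turnout models: each group's assumed share of actual voters.
turnout_models = {
    "college-heavy electorate":     {"college": 0.50, "non_college": 0.50},
    "non-college-heavy electorate": {"college": 0.42, "non_college": 0.58},
}

for name, electorate in turnout_models.items():
    # Weighted topline = sum of (assumed share of voters x that group's margin).
    margin = sum(electorate[g] * raw_margin[g] for g in raw_margin)
    leader = "Dem" if margin >= 0 else "Rep"
    print(f"{name}: {leader} +{abs(margin):.1f}")
# college-heavy electorate:     Dem +1.0
# non-college-heavy electorate: Rep +0.1
```

Same respondents, different assumptions about who shows up, different leader.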
Other considerations: is the poll sampling all Americans? Registered voters? Likely voters? Is it reaching those who are harder to reach, those who don’t have landlines, those who live in underrepresented areas or are part of underrepresented demographic groups? If your goal is to gain insight into an election, you need to make sure the poll reflects those who will in fact cast a vote.
Question 3) Can I trust this pollster?
Are pollsters trustworthy? Pollsters are human and certainly have their own political opinions, but there isn’t a strong incentive for a pollster to simply release a phony “good” result (whatever that means). And of course, as an ethical pollster myself who knows many other ethical pollsters, I’m confident that the vast majority of us truly want to get it right for the sake of, well, getting it right.
That said, pollsters do vary in their competency, sophistication, and personal scruples, so it can be a challenge to figure out whose word to believe when the results of multiple polls conflict. I usually start with the excellent and comprehensive FiveThirtyEight Pollster Ratings, which measure each pollster’s overall accuracy and average partisan lean based on its past performance. Even outfits that are openly Democratic or Republican are not necessarily untrustworthy, and despite those who complain about the partisan leans of CNN, MSNBC, and Fox News, their polling units tend to produce fairly stable, reliable, and defensible results.
Perhaps the best shorthand for whether you can trust a pollster is that pollster’s level of transparency. Is the pollster showing you the demographic makeup of their sample, when they fielded, how they worded questions, what percent were reached on cell phones, etc.? Generally, the less a pollster reveals for others to poke and prod and challenge, the more suspicious I become.
Question 4) Is this exciting poll simply an outlier?
At some point in the election cycle, there will be a poll that challenges the conventional wisdom, like a poll showing [Specific Democrat] up in Georgia, or showing Trump up in Colorado. But how do we know whether that result reflects a change in the state of the race or is just what is known as an outlier? When polling a representative sample, statistically about one out of twenty polls will land outside its margin of error – even when everything else in the polling process has been done perfectly. However, that one errant poll is also the one most likely to be covered and gain media visibility, precisely because it deviates from the status quo and is therefore newsworthy. So when I see a surprising result, I always wait for other polls to confirm the same trend before assuming that it reflects an actual change in the state of the race.
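For context, that one-in-twenty figure comes from the 95% confidence level most public polls report. A minimal sketch of the underlying arithmetic, assuming a simple random sample (which real polls only approximate):

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate margin of error for a proportion p from a simple random
    sample of size n; z = 1.96 corresponds to 95% confidence, i.e. roughly
    one poll in twenty will land outside this band by chance alone."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical example: a candidate at 48% in a poll of 800 likely voters.
print(f"±{margin_of_error(0.48, 800) * 100:.1f} points")  # about ±3.5 points
```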
Ultimately, in an environment where many pundits misquote or misunderstand polling, it becomes important to understand the difference between a good poll and a bad poll. Asking these questions helps me quickly assess how seriously to take the data I receive. Feel free to ask them too, especially if you are planning to go toe-to-toe with your skeptical uncle any time soon.