How Did The Pollsters Get It So Wrong?
  • 7 min Read

In all sincerity, regardless of where you sit along party lines, we can all agree that no one saw this coming. Really, no one.

Not even the analysts who calculate and forecast the outcomes of these things saw this one coming. In fact, they predicted the exact opposite ending to this storyline, which likely contributed to the whiplash more than half the country felt as they watched states flip from blue to red. As the Lead Data Scientist at Accomplice, my team knows I have a thing for a strategic setup that I am all but certain to win, in both life and business. As I watched the election poll numbers roll in, I can tell you with no hesitation that about halfway through, I was looking for my bottle of whiskey. After coming to grips with this firm reality, I was forced to ask the question: So what happened here? No really, what the actual hell happened? How could all the data be... this wrong?


I’m not sure we know exactly what happened with the poll predictions, but here’s what I do know from being a researcher: methodology matters; divisive, emotionally charged topics tend to draw less-than-authentic responses; and the greater the uncertainty, the more frequently perception changes, even if each change appears minimal. The Trump vs. Hillary campaign was divisive, no doubt. Each candidate had a fair share of voters who, frankly, didn’t know who they were going to vote for until the very last hour, or, if they did, didn’t want to share it. In all of America’s history, through all of our ups and downs, we have never seen an election this exhausting or this brutal, with this much mudslinging and this much marginalizing. It’s no wonder, then, that our polls were ill-equipped to home in on so much emotion, behavior, and action (or lack thereof). Moreover, the voices this campaign mobilized are different from the voices we saw in 2012 or 2008. The target sample moved, which means that polling the same group of voters in the same way we did in previous years will likely turn up null and void. Because of that shifting target and the vitriolic nature of the campaign itself, it may be that those would-be, marginalized, undecided voters were just not ready to weigh in, and the pollsters missed them altogether.

In terms of methodology, polls typically evaluate a single response at a single moment in time. But with such an emotional campaign on both sides of the fence, a single response, apparently, doesn’t cut it. In fact, the one poll that consistently and correctly predicted the 45th presidency across the campaign used a new methodology: a time-series approach that polled the same respondents once a week. And instead of asking for their candidate of choice, it asked respondents how likely, as a percentage, they were to vote for each candidate and how likely they were to vote at all. This let respondents express some level of uncertainty, and let the researchers evaluate how the population’s choices shifted over time, across candidates (Daybreak Poll, 2016).
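To make the idea concrete, here is a minimal sketch of how a probabilistic poll like that might be aggregated. The function name, the respondent data, and the numbers are all hypothetical illustrations of the approach, not the Daybreak Poll's actual data or code: each respondent reports a likelihood of voting for each candidate and a likelihood of turning out at all, and each response is weighted by that turnout likelihood.

```python
def expected_vote_share(responses):
    """Estimate each candidate's vote share from probabilistic responses.

    Each response is a dict with:
      'turnout'    - self-reported probability of voting at all (0-100)
      'candidates' - {candidate: probability of voting for them (0-100)}
    Responses are weighted by turnout likelihood, so an unsure voter
    counts for less than a near-certain one.
    """
    totals = {}
    weight_sum = 0.0
    for r in responses:
        w = r["turnout"] / 100.0
        weight_sum += w
        for cand, p in r["candidates"].items():
            totals[cand] = totals.get(cand, 0.0) + w * (p / 100.0)
    return {cand: t / weight_sum for cand, t in totals.items()}

# One hypothetical weekly wave of three respondents:
wave = [
    {"turnout": 90, "candidates": {"A": 70, "B": 30}},
    {"turnout": 50, "candidates": {"A": 20, "B": 80}},
    {"turnout": 80, "candidates": {"A": 55, "B": 45}},
]
print(expected_vote_share(wave))
```

Running this over each weekly wave would produce the time series the researchers tracked; the point is that a respondent who is 55/45 with an 80% chance of voting contributes very differently than a forced "candidate A" answer would.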

We can discuss how the information coming in may have been misguided, and we should; there’s no doubt the polling systems should be revisited to fit today’s connected marketplace. But there’s something to be said for perception, especially in a heated battle in which perception fueled the emotion that mobilized the action. This was an election everyone had an opinion on. From the researcher to the media, anyone presenting results as polarizing as those observed on election night is subject to being interpreted by the listener in the context of their own emotions and realities.

As I reviewed the commentary on the polling industry and the media, I began to realize that the issue was multifaceted and clearly underestimated. It was likely some combination of shifting target samples, the acrimonious nature of the campaign, the methodology used to poll the sample, and the personal bias of the audience. But the more I considered the problem, the more I recognized that the challenges pollsters faced this election cycle were not far removed from the challenges every researcher comes across at one time or another.

In business, we conduct research on a number of problems. The data we present is used to direct product strategy, marketing and brand decisions, and overall business direction. But, the data is not always easy to hear. Sometimes, even after all of the controls have been put in place and the methodologies rigorously considered, the data presented is not the most popular nor does it confirm the most loudly expressed opinion. And when the most popular or loudest expressed idea is held by those in leadership, the opposing data can be a hard pill to swallow.

As observed in the election, striking research results can cause a reverberating discord in opinion and thought. Ensuring that the data is communicated in a way in which the audience understands the message and its context is the researcher’s obligation, and it is only effective with an open-minded audience. In this election process, we generally saw neither.

Here’s the thing: as a Data Scientist, I love it when I know what’s coming. It’s less about being right (which, I’m not going to lie, I love) and more about being able to prepare for an impending shift, to anticipate and recalibrate as necessary. And that goes whether I’m banking on our next president or defining your growth strategy and target market.

We find value in the details of the context and the story across time.

At Accomplice, I can promise you this: we place little value on the loudest opinion or the most popular belief. So whether I’m analyzing your election results or your business strategy, this team is less worried about offending your preconceived notions and more concerned with whether you are prepared for impending shifts, with the context required to succeed.
