Trump Won, Hillary Lost – Why The Polls Were Wrong & Why They Are Doomed  

By Rene Sotolongo:

We were told, almost on a daily basis, how Hillary was leading in the polls. We were, and still are, told that election polls are highly scientific and accurate. What a load of malarkey.

The results of “polling” are very simple to explain: garbage in, garbage out. This is why the polling industry is doomed and why its forecasts are almost always wrong. Here is how it happens:

Back in my college days, I took a course in statistics. On the first day of class, my professor had this to say: “There are three types of lies in society today: white lies, overt lies, and statistics.”

Why did he say this? Because you can manipulate statistics (read: polls) to say whatever you want them to say. You do this through “oversampling” (or “undersampling”) and “weighting.”

Entire papers have been written on these subjects, but suffice it to say, oversampling and undersampling are techniques used to “adjust” the class distribution of a given data set.

Weighting, meanwhile, is a mathematical device used when computing a sum, integral, or average to give some elements more “weight,” or influence on the result, than other elements in the same set.
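As a toy illustration (the responses and weights below are invented, not from any actual poll), here is how weighting changes a simple average of poll answers:

```python
# Illustrative only: a plain average vs. a weighted average of poll responses.
# The responses and weights are hypothetical numbers chosen to show the effect.

responses = [1, 1, 0, 0, 0]            # 1 = supports candidate A, 0 = supports candidate B
weights = [2.0, 2.0, 1.0, 1.0, 1.0]    # give the first two respondents double the influence

plain = sum(responses) / len(responses)
weighted = sum(r * w for r, w in zip(responses, weights)) / sum(weights)

print(plain)     # 0.4   -> 40% support for A in the raw sample
print(weighted)  # ~0.57 -> about 57% support once the weights are applied
```

Same five answers, two different headline numbers; everything hinges on how the weights are chosen.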

So what does any of this have to do with the polls? Everything.

You see, both techniques are used heavily in the polling industry. The reason we are given for their use is that pollsters are trying to get the most “accurate” polling information. But in reality, what they are doing is skewing the polls toward the result they want (i.e., Hillary Clinton is leading in the polls).

Now most news media outlets want you to believe that these are all normal polling practices and that there was no bias. But consider this:

Several outlets (Fox News, Zero Hedge, Pew Research Center, WikiLeaks) exposed the inherent bias in polling methodology weeks before the election… not to mention emails from Hillary’s campaign (the Podesta emails) providing a blueprint for how to rig the polls through oversampling.

In general, according to several reports, the major polls oversampled Democrats by anywhere from seven to twelve percentage points. This is significant because, historically, the Democratic advantage among registered voters has been only three to four points.

What this means is that the national polls routinely stacked the sample group with more Democrats than Republicans, which would naturally increase the number of “voters” in that sample group who would vote for Hillary.

Bottom line: given the margin of error in most polls and the oversampling, any poll that showed Hillary leading by five points should instead have been read as a statistical tie between the candidates.
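To see how much a skewed party mix can move a topline number, consider a back-of-the-envelope sketch. The within-group support rates below (90% of Democrats, 10% of Republicans, and half of independents backing the Democratic candidate) are assumptions for illustration, not measured figures:

```python
# Hypothetical arithmetic: how the party mix of a sample shifts the headline result.
# Assumed (illustrative) support for the Democratic candidate within each group:
#   Democrats 90%, Republicans 10%, independents 50%.

def topline(dem_share, rep_share):
    ind_share = 1.0 - dem_share - rep_share
    return 0.90 * dem_share + 0.10 * rep_share + 0.50 * ind_share

balanced = topline(0.36, 0.32)  # roughly the historical D+4 electorate
skewed = topline(0.42, 0.30)    # a D+12 sample

print(round(balanced * 100, 1))  # 51.6
print(round(skewed * 100, 1))    # 54.8
```

The identical electorate looks about three points more favorable to the Democratic candidate simply because the sample contains more Democrats.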

What the Democrats were counting on was the record turnout they experienced when Obama ran for office. They figured wrong. Turnout actually returned to historical levels (roughly a four-point Democratic advantage), and significantly more independents turned out to vote as well.

Just think back on the constant media barrage of “polls” during the 60 days prior to the election. It was almost nonstop. Every poll indicated Hillary with a comfortable lead. So they were lying to themselves and to the American people. The liberals and the propaganda machine (I mean news media outlets) were in fact repudiated at the voting booth. That repudiation spells the end of the polling industry.

In fact, here are three fundamental reasons why the polling industry is in a fight for its life:

Fewer and fewer voters are responding to surveys.

Pew Research has reported on this very issue:

The downward trend in response rates is driven by several factors. People are harder to contact for a survey now than in the past. That’s a consequence of busier lives and greater mobility, but also technology that makes it easier for people to ignore phone calls coming from unknown telephone numbers. The rising rate of outright refusals is likely driven by growing concerns about privacy and confidentiality, as well as perceptions that surveys are burdensome.

This creates a dilemma. Pollsters are under enormous pressure to produce results in a very short period of time, especially prior to elections. But with people not responding, it takes longer and longer to gather enough data to produce any usable statistical results. As a result:

Pollsters use risky shortcuts to get desired results.

The fact is that, in order to produce any meaningful results (and save on costs), pollsters have to use “automated” systems or are forced to use smaller sample groups, both of which lead to problems.

Pollsters realize that live contact between poll-takers and respondents is much more reliable than using an automated system.

Automated systems rely on keypad input or voice recognition software and don’t provide a means for respondents to explain or clarify their answers. So you tend to get skewed results.

Couple that with Federal Communications Commission restrictions on “robocalls,” and it becomes an almost impossible task to get enough good responses from which to extrapolate conclusions.

So as an alternative, pollsters use smaller and smaller sample groups and then “weight” these results. However, this only works by ensuring the sample group does not deviate from known population demographics.

In other words, if your demographics indicate that your population is made up of 37% registered Democrats and 32% registered Republicans, don’t use a sample group in which you are polling 42% Democrats and 25% Republicans.

Unfortunately, this is exactly what pollsters do. They are not interested in the “numbers,” only in a result that perpetuates their predisposed beliefs.
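The demographic weighting described above can be sketched in a few lines. The registration shares reuse the 37%/32% and 42%/25% examples from the text; the independent shares are filled in only so each distribution totals 100%:

```python
# Sketch of demographic weighting: rescale each respondent so the sample's
# party mix matches known population shares. Party shares come from the
# example in the text; the independent shares are hypothetical fillers
# chosen so each distribution sums to 1.

population = {"D": 0.37, "R": 0.32, "I": 0.31}  # registered-voter shares
sample = {"D": 0.42, "R": 0.25, "I": 0.33}      # shares actually reached by the poll

weights = {party: population[party] / sample[party] for party in population}
print(weights)  # Democrats get down-weighted (< 1), Republicans up-weighted (> 1)
```

Done honestly, this corrects a skewed sample; done selectively, it is exactly the lever that lets a pollster steer the result.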

Polls always have a margin of error that is underreported.

The fact is that polls are subject to many sources of error, some that are well understood, and others that may not yet be recognized. You can reduce measurement errors, but they can never be eliminated.

As a result, today’s polls rely heavily on preconceived assumptions. But these assumptions do not, and cannot, work in every case, and they can produce vastly different results depending on who is doing the polling.

But your mainstream media would have you believe that this is all “scientific.” The reality is that saying that polls are “scientific” is just a way for mainstream media to give them more legitimacy than they deserve.

In fact, people often say that something is “scientific,” or that the overwhelming majority of scientists believe something, as a way to intimidate and silence opponents. And this may explain why many individuals do not answer polls honestly, further skewing results. The fear of being labeled or attacked is a prime motivator for deception. Think about it: if you were a Trump supporter working at a job where Trump supporters were vilified and asked to quit (e.g., the Grubhub CEO asking Trump supporters to quit), would you be honest about your political affiliation?

Bottom Line: Polling is just a bunch of smoke and mirrors intended to persuade the public. Case in point: look at the fine print of any poll result. They almost always publish the margin of error. So explain how a poll can claim Hillary is in the lead by 5 points and then show a margin of error of plus or minus 4? When that error applies to each candidate’s share, a lead that small sits well within the noise: a statistical tie.
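The fine-print arithmetic can be checked directly. The 48/43 split below is a hypothetical pair of shares consistent with the 5-point lead and plus-or-minus-4-point margin of error in the example above:

```python
# Back-of-the-envelope check: a 5-point lead with a +/-4-point margin of error
# on each candidate's share. The 48/43 split is hypothetical.

hillary, trump, moe = 48.0, 43.0, 4.0

hillary_low = hillary - moe  # 44.0: the low end of Hillary's plausible range
trump_high = trump + moe     # 47.0: the high end of Trump's plausible range

# If the ranges overlap, the "lead" is within the poll's own stated uncertainty.
print(hillary_low < trump_high)  # True -> a statistical tie
```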

In short, as USA Today reported: “Donald Trump’s victory dealt a devastating blow to the credibility of the nation’s leading pollsters, calling into question their mathematical models, assumptions and survey methods.”

No one is going to pay for information that is inaccurate. Now take into consideration how wrong pollsters were about Brexit, Scottish independence, and the 2016 election, and you can start to see the cracks in the wall of an industry in its death spiral.

Rene C. Sotolongo is an OpsLens Contributor and a retired U.S. Navy Chief Petty Officer who served for over twenty years as an Information Systems official. Sotolongo also specialized in homeland security and counterterrorism.

Copyright 2016 OpsLens
