Covering Trump and the 2020 U.S. Election (Part 2): Pre-election polls: It’s not how you ask, it’s who you ask

This is the second in LISPOP's series of three blog posts examining important issues in the American presidential election. Here, Dr. Jason Roy, Associate Professor of Political Science, examines some of the important issues related to contemporary polling.

Pre-election polls: It’s not how you ask, it’s who you ask

By Jason Roy
Associate Professor, Department of Political Science, Wilfrid Laurier University

Love them or hate them, pre-election polls are a central part of modern election campaigns. In large part, their prominence reflects the increase in the number of polls being reported and the speed with which this information is gathered. But there have been more than a few instances where pre-election polls have differed from the election results. By some measures, the 2016 presidential election is one example. To understand the challenges contemporary pollsters face, it is important to consider how public opinion is measured and the importance of representative samples. Ultimately, it is this latter factor that dictates the accuracy of a poll.

At one time, face-to-face interviews were considered the gold standard for conducting public opinion surveys. However, factors such as cost, access to participants, and the time required to conduct in-home interviews have made this mode of survey administration much less common. These challenges are particularly relevant for pre-election campaign polls, where cost and speed are key considerations. With increased public access to telephones, combined with the introduction of computer-assisted telephone interviewing (CATI), administering surveys via the telephone offered an appealing alternative to the more costly and time-consuming face-to-face mode of delivery. Unlike in-person interviewing, telephone surveys allow researchers to reach large numbers of participants across a vast area in a relatively short time.

With further advances (by some accounts) in technology, Interactive Voice Response (IVR) became another option for administering surveys via the telephone relatively quickly and cheaply. In this mode, pre-recorded questions are delivered to individuals by telephone, with the respondent answering through voice response or by selecting a number on the telephone keypad (e.g., select 1 for "yes" or 9 for "no"). The cost savings from eliminating human interviewers, combined with the speed with which preferences can be measured, make this mode of surveying an attractive option, especially for pre-election polling, where collecting preferences quickly and inexpensively is key.

These factors (among others) also drive the use of internet-based surveys. As with phone surveys, the internet provides researchers a platform through which they can reach a sizeable number of citizens across a geographically dispersed area for a fraction of the cost of face-to-face interviews. With the additional benefit of supporting content-rich designs, internet-based surveys have become one of the primary modes of survey administration. In some ways, this newer mode shares characteristics with more traditional mail-back surveys, at least with regard to the visual material that may be included and the independence of respondents to complete the survey on their own. However, traditional mail-back surveys pale in comparison to their successor when it comes to the time required to collect and record responses.

Given the range of ways in which surveys can be administered, which is best for capturing vote preferences in pre-election polls? Answering this question brings to mind the maxim "Fast, Cheap, or Good? Pick two." In this case, "good" is interchangeable with "accurate." Regardless of the speed with which responses can be collected or the cost of conducting the survey, if the results are not accurate, we are just as well off (and maybe better off) without them. Elections give us a unique opportunity to compare the accuracy of the preferences collected from a sample (polls) against the actual preferences of the larger population from which the sample was drawn. And when the polls are off, the media are quick to point fingers at the pollsters who missed the mark.

Are discrepancies between pre-election polls and election results a reflection of the survey mode? The short answer is no, at least not directly. All surveys, regardless of the method by which responses are collected, are only as good as their sample. This is the key to understanding the limitations of pre-election poll results. Surveys draw upon a sample of individuals from a larger population in an effort to infer what the preferences would be if everyone in the population were surveyed. In technical terms, we attempt to infer the population parameter (e.g., actual vote preference) from the sample statistic (e.g., estimated vote preference). Statistical theory tells us, among other things, how large the sample must be for a given level of confidence in the accuracy of our statistic and for a given likelihood that our sample reflects the preferences of the population; a simple random sample of roughly 1,000 respondents, for instance, yields a margin of error of about plus or minus three percentage points, 19 times out of 20. However, a fundamental assumption of inferential statistics is that the estimates are based on a random sample. And herein lies the problem: the sample. To generate a random sample, every member of the population (e.g., all eligible voters) must have an equal chance of being selected. This is often not the case, especially when "fast and cheap" are the driving factors. Even for studies with a much longer timeframe and a larger budget than those conducted during an election campaign, obtaining a truly random sample is a challenge at best, and maybe even impossible in some cases.
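For readers curious about the arithmetic behind those figures, the short sketch below (not part of the original post) computes the margin of error for a simple random sample and the sample size needed to hit a target margin, using the standard formulas for a proportion at a 95% confidence level. Note that it assumes exactly what the paragraph above warns is often violated in practice: a truly random sample.

```python
import math

Z_95 = 1.96  # z-score for a 95% confidence level ("19 times out of 20")

def margin_of_error(n, p=0.5, z=Z_95):
    """Margin of error for a simple random sample of size n.

    p = 0.5 is the conservative worst case (it maximizes the variance
    of a proportion, so the reported margin covers any true value).
    """
    return z * math.sqrt(p * (1 - p) / n)

def required_sample_size(moe, p=0.5, z=Z_95):
    """Sample size needed to achieve a target margin of error."""
    return math.ceil((z ** 2) * p * (1 - p) / (moe ** 2))

# A typical national poll of about 1,000 respondents:
print(f"n = 1000  ->  +/- {margin_of_error(1000):.1%}")        # about +/- 3.1 points
# Halving the margin roughly quadruples the required sample:
print(f"+/- 2 pts ->  n = {required_sample_size(0.02)}")       # 2401 respondents
```

The key takeaway is that the margin shrinks only with the square root of the sample size, which is why pollsters rarely field the enormous samples that tighter margins would demand.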

That said, not all survey modes are created equal when it comes to response rates. This is one of the major critiques of IVR, for example, where the lack of human interaction may lead to a relatively high refusal rate. When only a small percentage (in some instances, less than 10 percent) of those randomly selected complete the survey, it is unlikely that those who agree to participate are representative of the larger population from which they were selected. For some, the low response rates of IVR and traditional telephone surveys have led to an increased emphasis on "representative" as opposed to random samples. This is likely to remain an issue at the forefront of survey research as pollsters adapt to the new realities of polling in the twenty-first century.
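One common way pollsters pursue a "representative" sample from non-random responses is to weight respondents so that the sample matches known population benchmarks, such as census figures. The post does not describe any particular firm's method; the sketch below is a minimal, hypothetical illustration of post-stratification weighting on a single demographic, with all groups, shares, and preferences invented for the example (real polls weight on several variables at once).

```python
# Hypothetical benchmark shares for one demographic (e.g., from a census)
population_share = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}

# Raw sample: (age_group, prefers_candidate_A). Older respondents are
# over-represented here (50% of the sample vs. 35% of the population).
sample = [
    ("55+", True), ("55+", True), ("55+", False), ("55+", True), ("55+", False),
    ("35-54", True), ("35-54", False), ("35-54", False),
    ("18-34", False), ("18-34", False),
]

n = len(sample)
sample_share = {g: sum(1 for grp, _ in sample if grp == g) / n
                for g in population_share}

# Weight each respondent so the weighted sample matches the population
weights = {g: population_share[g] / sample_share[g] for g in population_share}

unweighted = sum(1 for _, pref in sample if pref) / n
weighted = (sum(weights[g] for g, pref in sample if pref)
            / sum(weights[g] for g, _ in sample))

print(f"Unweighted support for A: {unweighted:.1%}")  # 40.0%
print(f"Weighted support for A:   {weighted:.1%}")    # 32.7%
```

Weighting corrects for known imbalances, but it cannot fix what it cannot see: if the people who refuse to answer differ from participants in ways the benchmarks do not capture, the weighted estimate can still miss the mark.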

As poll consumers, it is important to recognize that many of the polls reported during elections are based on non-random and/or unrepresentative samples. If the sample is not representative, then we should not expect the vote preferences of the sample to match those of the population. This is not to suggest that all polls are wrong; indeed, history provides us with ample examples of poll results very similar to the actual election outcomes. However, the limitations of polls as mirrors into the future must be recognized. To the credit of many polling firms, these limitations (and innovative solutions to them) are driving efforts to provide the public with accurate insight into public opinion. While such efforts are encouraging, the fact remains that voters need to be aware of the challenges of relying upon pre-election surveys to predict voter preferences, preferences that will only truly be revealed on election day.
