Polling these days is not easy
It’s no secret that I didn’t enjoy the outcome of the elections a few nights ago. However, one thing that did impress me was the success of a few polling companies at predicting the results. Ignoring Rasmussen and New England College, whose efforts are terribly slanted politically, some pollsters shone brightly. Andy Smith, who (in this longtime researcher’s opinion) was less than impressive two years ago, absolutely nailed it on November 4th.

Unless you have toiled in the field of survey work and polling, you have no idea how difficult this job is. There are so many things that can go wrong when you’re attempting to compile a statistically valid representation of what a population (voters, travelers, etc.) is thinking. First, one needs to determine the very definition of that population. Next, one needs to identify a representative, valid sample to survey. This isn’t easily accomplished. Do it improperly and you’re stuck with a biased, worthless sample. One of my favorites is the survey company that stops people in the parking lot of a supermarket, then announces that shopping is a favorite activity of people in the region (duh!).

The third element of the survey process is the most difficult: getting people to actually complete the survey. More than two decades ago, we used to expect a 90 percent participation rate when conducting an intercept survey (“Excuse me, sir, will you please answer a few questions about blah blah?”). Telephone surveys were regarded as a less expensive approach, but one that yielded a response rate in the 75 percent range. Mail surveys could expect 40-50 percent participation, depending upon how many letters were sent. Looking at the response rates today is enough to send a researcher straight to the Pepto-Bismol, as the figures are closer to 10, 3, and 1 percent (optimistically).
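To give a rough sense of why "within a few percentage points" is such a feat, here is a minimal sketch of the textbook margin-of-error calculation for a poll. The function name is mine, and the formula assumes a simple random sample where everyone responds, which (as the response rates above show) is exactly what real polling no longer gets; real-world error is larger than this figure suggests.

```python
import math

def margin_of_error(p, n, z=1.96):
    """Margin of error for an observed proportion p from a simple
    random sample of size n, at 95% confidence (z = 1.96)."""
    return z * math.sqrt(p * (1 - p) / n)

# A poll of 1,000 respondents on a roughly 50/50 race:
print(round(margin_of_error(0.5, 1000), 3))  # about +/- 3.1 points
```

Note that quadrupling the sample only halves the margin, and none of this accounts for the nonresponse bias that low participation rates introduce.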
Worse still, suppose you actually get the responses: can the polling team actually believe they are representative of the population being studied? Often the answer to that question is "not so much." This forces some research teams to throw caution (and reputation) to the wind. For others, the solution lies in panels: carefully selected groups of people who are paid to respond to a contracted number of surveys during the year. Quality research houses, such as Ohio's TNS, carefully vet their panelists and scour their responses for inconsistencies.

So, with all of this in mind, the fact that some pollsters were able to call the results of the election within a few percentage points is most impressive indeed. I may not have liked this recent message, but I do respect some of the messengers.