You carefully craft what you think is the perfect survey for the perfect target audience. You send the survey out and receive what you think is a representative sampling of your target audience.
Except it’s not. Biases have booby-trapped your survey.
Sometimes unforeseen biases can skew a survey dramatically. Here’s a quick rundown of the main biases to be mindful of when designing surveys and analyzing responses…
Coverage Bias
Basically, not reaching all the people you wanted to reach.
When the Literary Digest polled its 2.3 million readers ahead of the 1936 presidential election, the results said Republican Alf Landon would win. (Who?) In fact, Democrat Franklin D. Roosevelt won. (Ah.)
What happened?
Americans who read the Literary Digest in 1936 tended to be well-off Republicans. Of course they leaned toward the Republican candidate. In effect, the magazine had polled only the wealthier half of the population.
Coverage bias is why telephone surveys worked poorly back in the day: only wealthy people had phones. And it's why the transition from landlines to mobile phones has affected surveys. Landline polls no longer reach everyone, especially not 20-somethings or 30-somethings.
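To see the mechanics, here's a minimal simulation in Python. All the numbers are invented for illustration: a fictional electorate where 60% support candidate A, polled through a frame that covers only the wealthy minority. Even an enormous sample can't fix a skewed frame.

```python
import random

random.seed(42)

# Fictional electorate (all numbers invented for illustration):
# wealthy voters are 30% of the population, and 35% of them back
# candidate A; the other 70% back A at ~70.7%. Overall support
# for A works out to 60%.
def draw_voter():
    wealthy = random.random() < 0.30
    backs_a = random.random() < (0.35 if wealthy else 0.707)
    return wealthy, backs_a

population = [draw_voter() for _ in range(1_000_000)]
true_support = sum(backs for _, backs in population) / len(population)

# A poll whose sampling frame covers only wealthy voters:
# a huge sample drawn from a skewed frame is still skewed.
frame = [backs for wealthy, backs in population if wealthy]
poll = random.sample(frame, 100_000)

print(f"True support for A:  {true_support:.1%}")          # ~60%
print(f"The poll's estimate: {sum(poll) / len(poll):.1%}") # ~35%
```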
Nonresponse Bias
Basically, certain people not responding, for a variety of reasons.
When Britain held its 1992 general election, the incumbent Conservative party was in the midst of a decline. The economy was down and scandals had hurt public opinion of the party.
Understandably, the polls showed Labour winning control of the British government. But after the election, the Conservatives were still in power.
What happened?
It was embarrassing for Conservative supporters to openly back their party before the election, so fewer of them participated in pre-election surveys. In the voting booth, though, they voted Conservative as usual. The phenomenon is now called the Shy Tory Factor.
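The same kind of toy simulation (again in Python, with made-up response rates) shows how nonresponse bias distorts a poll even when the frame covers everyone. Here one side simply answers the survey less often:

```python
import random

random.seed(7)

# Fictional electorate, split 50/50 (rates invented): everyone
# is reachable, but "shy" Conservative supporters agree to take
# the survey less often than Labour supporters do.
RESPONSE_RATE = {"Conservative": 0.4, "Labour": 0.7}

contacted = [random.choice(["Conservative", "Labour"])
             for _ in range(100_000)]
responses = [v for v in contacted if random.random() < RESPONSE_RATE[v]]

labour_share = responses.count("Labour") / len(responses)
print("True Labour share:   50.0%")
print(f"Polled Labour share: {labour_share:.1%}")  # ~63.6%
```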
Response Bias
Basically, answers skewing the results, either intentionally or unintentionally.
In the 1982 race for California governor, the polls predicted a victory for Democrat Tom Bradley, but Republican George Deukmejian ended up winning.
What happened?
It’s thought that white survey respondents said they’d vote for Bradley, a black man, to appear politically correct, even though they intended to vote for Deukmejian, a white man.
Time Magazine looked into it and found other examples throughout the 1980s and 1990s where black candidates led in the polls right up to the election and then lost. Known as the Bradley Effect, it’s similar to the Shy Tory Factor, except that here respondents did respond; they just didn’t answer truthfully.
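One last toy sketch, with invented numbers, shows response bias of the Bradley Effect variety: everyone responds, but some respondents don't report their real intention.

```python
import random

random.seed(3)

# Invented numbers: 48% of voters actually intend to vote for
# Bradley, and 15% of the remaining voters tell pollsters they'll
# vote for Bradley anyway, because it sounds like the more
# acceptable answer.
TRUE_BRADLEY_SUPPORT = 0.48
MISREPORT_RATE = 0.15

def stated_preference():
    if random.random() < TRUE_BRADLEY_SUPPORT:
        return True  # Bradley voters answer truthfully
    return random.random() < MISREPORT_RATE  # some misreport

answers = [stated_preference() for _ in range(100_000)]
print("Actual Bradley support: 48.0%")
print(f"Polled Bradley support: {sum(answers) / len(answers):.1%}")  # ~55.8%
```

The poll shows Bradley comfortably ahead even though most voters intend to vote against him.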
Response bias can be unintentional or intentional.
Unintentional examples: A respondent begins a survey with high energy and thoughtful answers, then fades and ‘phones in’ the later ones. A respondent in a hurry rushes through their answers. A respondent misunderstands a question.
Intentional examples: A respondent tempers their responses on a controversial topic to give more acceptable answers. A cynical pollster wanting to manufacture a result deliberately uses extreme language in the questions or answer choices, nudging respondents toward tamer, watered-down responses. A cynical respondent deliberately picks extreme answers to skew the results one way or the other.