A Cautionary Tale Of How Badly Research Can Fail If We Don't Take Sample Seriously


By Rob Berger, Managing Director

“Something that is reliable is usually boring and we don’t think much of it. When the office building you are sitting in doesn’t suddenly become a splintered mass of glass, concrete and steel girders, do we send a thank-you note to the architects and engineers? Nope. We just take these things for granted and assume that we can count on the science behind them.” I wrote that, with my colleague Andrew Grenville, in our whitepaper (Still) Boringly Reliable. That paper provides evidence of the reliability of our market communities: Springboard America and Maru Voice Canada.

I was reminded last week of how far off the rails survey research can go when the sample design is flawed. Christopher Adams of the University of Manitoba, Paul Adams of Carleton University, and David Zussman of the University of Victoria released an excellent report on polling for the 2017 Calgary mayoral election. That race featured wildly erroneous polls, acrimonious accusations, and misgivings that were papered over. This widely reported research disaster gave polling in Canada a black eye. How many blows like this can the industry survive before no one wants to respond to surveys?

In the introduction to the report, the authors wrote: “Sometimes polls go wrong, even badly wrong. And when they do, they can damage the election process. Misinformation may skew the media’s depiction of the campaign and create a mistaken narrative in the minds of the voters — and even among the candidates and their campaigns. Polls like these are an impediment to democracy, not to mention an embarrassment to the public opinion industry.”

They continued, “According to these polls, contrary to the expectations of most careful observers of the Calgary political scene, Mayor Naheed Nenshi, the two-time incumbent, was trailing a relatively unknown businessman, Bill Smith, who had never before sought elected office. And Nenshi, according to these polls, was not just trailing Smith, but trailing him badly, at one point by nearly 17 percentage points.”

As it turned out, “Smith did not win by nine, or seventeen, or twelve percentage points, as [the] polls had suggested he might. In fact, Nenshi was returned by a comfortable margin of nearly eight percentage points – not just outside [the] claimed margin of error, but many multiples of it. In the meantime, campaigns had been rocked and voters misled.”

This cautionary tale should teach us a powerful lesson about the importance of using quality sample and scientific design. It should also underscore how we cannot take respondents for granted. We need to treat them like people, not sample. We need to think about why they respond, and what they dislike about surveys.

Sample is not a commodity. People do our surveys. We need to treat them with respect. In an interview for my colleague Andrew Grenville’s forthcoming book The Insights Revolution: Questioning Everything, Shawn Henry, Director of Camorra Research in New Zealand, said: “Treat your respondents for what they are; they are your entire business. We are getting paid for our respondents’ thoughts and repackaging them. It’s not our insights that we are selling. It’s the insights derived from our respondents, our participants, who have given up their time and their views.”

The past week gave us reason to think seriously about how we approach what might seem to be pedestrian things. Sample should be reliable. We need to take reliability and representativeness seriously.

Let’s learn from this disaster and remember that quality comes first.
