A Plea To Think Of Respondents As People, Not Sample
By Rob Berger, Managing Director, Market Communities
As an industry, everything we do hinges on the information we get from people who answer our questions. But we tend to take them for granted, and think of them as just "sample." And we ask a lot of them. We ask them to give us their time, put up with our obscure and often boring questions, work their way through long grids, and plow through long, mobile-unfriendly surveys. And then after we're done taking from them, we say thanks and goodbye. We don't ask "was it as good for you as it was for me?"
The way we treat the people who do our surveys has led to declining response rates. It is becoming harder to recruit people because they have had bad experiences in the past. So they vote with their feet and walk away. My colleague Andrew Grenville recently interviewed Annie Pettit, research consultant and author of People Aren’t Robots: A practical guide to the psychology and technique of questionnaire design. She said: “You can debate correlation versus causation forever, but response rates (and data quality) have plummeted because researchers failed to respect the human being…Our failures to design high-quality data collection tools eroded response rates.”
That’s a problem for us: if the average person doesn’t want to do our surveys, how good are the “insights” we get from those who are willing to put up with punishing surveys? What happens when the companies we work for make wrong decisions, based on our bad information? What does that mean for the insights industry? We should be worried, and taking action.
Of course, we’re not the only people to be concerned about this. Lisa Wilding-Brown of Innovate MR has provided a powerfully instructive analogy. She’s written “respondents are the polar ice caps of Market Research. We know they are important, but we aren’t doing much to reverse the damage of our daily abuse. Of course, we talk a LOT about the impact of long surveys, shrinking incentive budgets, hostage-taking routers, and price compression. However, at the end of the day, nothing is changing.”
We need to change. That’s why we want to share the research we’ve done with people who take our surveys. We’ll share the good, the bad and the ugly of their research experience. With their feedback, we can think about the need to change our surveys—and stop the steady shrinking of the “polar ice caps” that are the lifeblood of our business.
Names matter: they reveal how you approach things
But before we get to the voice of the people, let’s step back and think about how we approach the people who do our surveys. What you name things says a lot about your perspective, and your approach. We generally call the people who do our surveys “sample.”
Sample is defined as “a finite part of a statistical population whose properties are studied to gain information about the whole.” That’s a pretty clinical definition, but it is consistent with how the industry uses the term. We think of sample as a commodity—something you get more of when you have high drop-out rates, or a higher than expected number of disqualifications. But are the people who do our surveys just “a finite part of a statistical population”?
Sometimes we use a friendlier term: respondents. A respondent is defined as a “person who answers a request for information.” I like this definition because it calls out the fact that we are talking about people. But it is still very much a research-centric term. It is about people doing something for us. I’d propose there is a lot of value in thinking differently about this. If we call them something else, we might change the way we approach them. Let’s call them people.
When you think of people, you think of friends and family, neighbors and colleagues. It’s harder to justify writing and fielding a study in which most people will be disqualified. Would you want to ask your neighbors to do that?
It’s more difficult to force people to answer a survey so long that you’d be embarrassed to ask your mother to do it. If you think about what your friends would say if you asked them to fill out a mobile-unfriendly grid with seven brands and 32 attributes, you’d be less likely to do it. It’s much easier to do these kinds of bad surveys when you’re just getting “sample.” So, let’s start thinking of them as people.
This article is the first in a series of three in which we look at how we relate to the people who do our surveys. The series is based on a speech my colleague Andrew Grenville and I are giving at the MRIA conference in Vancouver. The next blog, by Andrew, will be about why people do surveys (hint: it’s not about the money) and what they like about them—featuring video interviews. The final blog is about what people dislike about surveys and what we should do to ensure they have a better experience. It, too, features videos of people giving direct feedback.