In the cut-throat world of competitive business, fudging public opinion is standard practice. It’s quite true that statistics can be made to substantiate any lie a business wants to tell. You’ve probably heard, until you’re blue in the face, that eight out of ten people prefer Product X, and it certainly sounds impressive. But what does it actually mean? Which eight people are we talking about? How were they selected? What frame of mind were they in when asked the question? Were these the only ten people the company ever surveyed, or was this just one of many surveys, the rest of which gave far less impressive results and mysteriously disappeared? Which product was Product X pitted against? Was it Product Y, the nation’s favourite, or Product Z, the cheapest rubbish on the market? And if, even in the absolute best-case scenario, two out of ten didn’t think much of Product X, doesn’t that mean, statistically speaking, there’s at least a 1 in 5 chance you won’t think much of it either?
Well, this of course is precisely why companies don’t like to reveal any information about their surveys, other than a simple, highly positive soundbite, which they pass off as ‘the result’. So, we’ve all seen the so-called ‘results’ of commercial surveys. How do they work?…
It’s said that if you left an infinite number of monkeys to play about with typewriters, you’d eventually get the complete works of Shakespeare, purely by chance. Such an experiment, if successful, would not prove that monkeys are more intelligent than most human beings; it would merely prove that given enough attempts, blind chance will eventually deliver an unnaturally impressive result. This is exactly what commercial surveys prove. They prove that if a business stacks the odds heavily enough in its favour, somewhere along the line an unnaturally impressive survey result becomes inevitable. But at the end of the complex process of planning, psychology, questioning, more questioning, manipulation and aggressive filtering, the business doesn’t claim to have proved that unnaturally stacked odds produce unnaturally impressive results. What the business claims to have proved is that monkeys are more intelligent than most human beings.
Here are some of the tactics which can be used to make a commercial survey present a glowing picture of public opinion, when in reality, opinion is probably indifferent at best:
1. If you don’t like the answer, keep asking the question ’til you do. There’s nothing to say that companies are tied to conducting just one survey and abiding by its results for evermore. They can conduct a number of different surveys, bin the unfavourable ones, and use the one with the most favourable outcome. In reality, they’ve done one big survey with a poor or mediocre outcome, but by essentially dividing it up into smaller surveys they can conveniently choose which bits of it they want to count. Ever wondered why the small print on survey results often indicates that just 50 or so people were questioned, when the company has the wherewithal to easily collect thousands of opinions? Now you know.
2. If you’re surveying customer satisfaction, only survey satisfied customers. Sounds ridiculous, but satisfaction surveys really can be this biased. The very nature of customer satisfaction means that in order to take part in the survey the customer must be known to the company. And by conducting the survey at particular points of customer contact, the company can massively increase the prospect of a positive outcome. For example, if a survey about customer satisfaction is implemented at or very shortly after the conclusion of each sale, satisfaction will be inordinately high. After all, how many customers will be prepared to hand over hard cash if they feel dissatisfied? If that same survey were instead conducted six months after purchase, via a cold call to the customer’s home, at an inconvenient moment, how much less impressive do you imagine the survey results might be?
3. Remove the grey. Most people will want to give a conditional answer in a survey. Life is never black and white. It’s always a shade of grey. But restricting participants to a yes / no answer in a survey removes the grey, and forces them to ‘like’ something about which they actually have significant reservations. With a two-option choice of either ‘good’ or ‘bad’, for example, customers can’t say: “It’s okay”; they have to say: “It’s good” – because if something isn’t ‘bad’, and the only other option is ‘good’, it has to be ‘good’. In truth, it could be mediocre, average, boring, vapid, bland, or hopelessly uninspiring… but if it’s not bad, in a restricted choice of only ‘good’ or ‘bad’, it has to be ‘good’. Opinion surveys are usually more sophisticated than this, of course, but by restricting the choice to a number of options which are hard for the customer to choose, plus one which is easy to choose, the company can get the precise answer it wants almost every time.
4. Offer an incentive. A company can offer an incentive for a customer to take part in a survey. For instance, the customer can be entered into a prize draw, to potentially win a highly desirable prize. Even though the incentive is in no way linked to the answers the customer gives, it can still have a profound effect on the outcome of the survey. A lot of customers will suspect that if they say negative things about the company, they’ll be excluded from the prize draw, and they’ll accordingly ensure their answers are positive. An incentive also hauls in people with no other motivation to take part in the survey. Usually, the people without any inherent motivation to give feedback will not really care about the survey itself, and can easily be led where the company wants to lead them in terms of their answers. At a company I worked for, we found that incentivising customer feedback with free entry to a prize draw improved the positivity of the feedback by over 20%.
5. Ask ‘leading’ questions. This is an extremely effective way to get ill-conceived answers from customers. The method uses short-term logic to virtually force a customer to say something they don’t really mean. Multiple questions are employed, with the key question coming at the end. For example, Question 1 might be: Are you satisfied with your purchase? The answer immediately after purchase will logically be yes. Question 2 might be: Did you feel that your goods were reasonably priced? Once again, the customer would not have paid the price if he or she regarded it as extortionate, so the logical answer is yes. Question 3: Was our service of an acceptable standard? Again, unacceptable service would probably not result in a sale, so the answer is yes. At this point comes the critical final question: Do you intend to buy from us again? This is very different from the other questions, because it’s dependent on a lot of hypothetical factors. The best and most truthful answer is: “I’ll buy from whoever offers the best deal at the time I’m ready to buy in future”. However, the build-up questions make the customer feel that any answer other than “yes” to the final question would sound irrational. The final question is of course the one the company wants to use in its marketing campaign. “95% of customers said they intend to buy from us again” sounds so much more impressive than “95% of customers said our service was of an acceptable standard”.
6. Exploit the ego. Most people like to be thought of as successful, and companies can play on this by wording questions so that the answers they want customers to give imply higher status. For example: “Did you buy this product because it’s cheaper than the competition, or because you felt it was the best solution on the market to achieve your goals?” The reality is that most people would have bought because the product was the cheapest, but are they going to admit that? Probably not. The majority will want to make it look like money was no object, and that they only bought the cheapest product because they considered it to be the best. Voilà! The company gets a high volume of customers saying they think the product is the best on the market, when all people are really trying to do is avoid looking financially challenged.
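To see how far a couple of these tactics can move a figure, here’s a minimal Python sketch. All the numbers are hypothetical, purely for illustration: it assumes a customer base in which only 30% genuinely rate the product as ‘good’, forces every answer into a good/bad binary (tactic 3), then runs twenty small surveys and publishes only the best one (tactic 1).

```python
import random

random.seed(7)  # fixed seed so the illustration is reproducible

# Hypothetical true opinion of the whole customer base (illustrative only):
# 20% think the product is bad, 50% think it's merely okay, 30% think it's good.
OPINIONS = ["bad"] * 20 + ["okay"] * 50 + ["good"] * 30

def forced_binary_survey(n):
    """Survey n random customers, but only offer 'good' or 'bad' as answers.
    Anyone who isn't actively negative gets recorded as 'good' (tactic 3)."""
    sample = [random.choice(OPINIONS) for _ in range(n)]
    return sum(opinion != "bad" for opinion in sample) / n

# Tactic 1: run lots of small surveys, bin the weak ones, publish the best.
results = [forced_binary_survey(50) for _ in range(20)]

honest_pooled = sum(results) / len(results)  # what all the data really says
published = max(results)                     # the figure that reaches the advert

print("Genuinely 'good' in the population: 30%")
print(f"Pooled forced-binary result: {honest_pooled:.0%}")
print(f"Published best-of-20 result: {published:.0%}")
```

Even the honest, pooled figure will typically land around 80% once the binary forcing is applied, and cherry-picking the best of twenty small samples pushes the published number higher still – all from a population in which most people think the product is merely okay.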
There are many other subtle tricks companies can play when gathering their survey results, right down to their choice of location. After all, outside the discount market, most people will probably think £40 is expensive for a pair of jeans, whereas outside Harrods there’ll be a high incidence of people who think £40 is cheap. The same thing, different people, different perception, and most importantly, the potential for a different answer.
The bottom line is that you can’t trust a survey until you know exactly how, where, with whom, and at what point it was conducted. And since no company is likely to divulge that data, it can only be assumed that you can’t trust a survey, full stop.
Posted by: Bob Leggitt