Surveys are top of mind for me right now. We’re in the middle of a couple of survey projects, and surveys inevitably come up whenever we talk about loyalty, satisfaction, and self-service effectiveness. Doing customer service and support surveys is a topic that deserves its own book—as a matter of fact, it already has its own book, by Dr. Fred Van Bennekom, which I strongly recommend.
In this blog post, let me share just a few points that I think every customer service and support professional should know.
Keep relationship and transactional surveys separate. Fundamentally, there are two different kinds of customer surveys with very different purposes. Relationship surveys are taken by specific customers on a periodic basis, generally once a year, and seek to understand their overall perception of a company, its products, and its services. (For a B2B business, the customers are generally the economic decision makers or key purchase influencers, not everyone in the company.) The survey is often managed by Marketing, although we’d prefer that the Services organization drive it. It’s about the company relationship as a whole, not one person, issue, or event. It’s where you get to ask the big-picture questions: Do you trust this company? Is it responsive and easy to do business with? Would you recommend this company to a friend or business colleague? (That last one is the Net Promoter Score “ultimate question.”)
Transactional surveys are about a single interaction, full stop. They need to be extremely quick and easy, and should ask questions only about that interaction. It’s not fair to ask about the company overall, or to try to calculate a Net Promoter Score. The person who had the interaction gets the survey, whether or not she is the ultimate decision maker. You should throttle transactional surveys so people don’t get hit too frequently; surveying any one person once a quarter or twice a year is generally OK.
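If your survey tool lets you automate this, the throttle can be as simple as a per-contact cooldown check. Here’s a minimal sketch in Python; the 90-day window, the in-memory store, and the function name are my assumptions, stand-ins for whatever interval and contact database you actually use.

```python
from datetime import datetime, timedelta

# Assumption: a 90-day cooldown, i.e., roughly once a quarter. Tune to taste.
SURVEY_COOLDOWN = timedelta(days=90)

# Stand-in for your contact database: contact id -> when we last surveyed them.
last_surveyed = {}

def should_send_survey(contact_id):
    """Offer a transactional survey only if this contact hasn't been
    surveyed within the cooldown window."""
    now = datetime.utcnow()
    last = last_surveyed.get(contact_id)
    if last is not None and now - last < SURVEY_COOLDOWN:
        return False  # surveyed too recently; skip this interaction
    last_surveyed[contact_id] = now  # record the send
    return True
```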
Transactional surveys let you know about your customer’s experience; relationship surveys let you know about your customer’s perceptions and intentions. Both are important, but don’t mix them up.
Take a stand against bias. “Bias” is survey jargon for the degree to which a survey’s answers don’t accurately represent the answers you’d get from all possible customers. There are many causes of bias: surveying only web users, or only people who call you, for example, since those groups may not represent your customers as a whole. Bias can also come from simply not asking enough people. But for most service organizations, the real danger is non-response bias: the fact that the only people who take your surveys are the ones who really care, generally because they’re applauding…or apoplectic. The silent majority isn’t counted.
Non-response bias is a particular concern for “did this article help you?” questions at the bottom of knowledge base articles. In our experience, these usually get response rates of 0.2% to 2.0%, and it’s easy to see that the other 99%, give or take, may be pretty different from their more opinionated brethren. Don’t extrapolate from the people who answer you.
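Some back-of-the-envelope arithmetic shows how little those few responses tell you. The numbers in this Python sketch are invented for illustration, not drawn from any real knowledge base:

```python
views = 1000     # article page views (illustrative)
responses = 10   # a 1% response rate
positive = 7     # 7 of 10 respondents said "yes, this helped"

observed_rate = positive / responses  # 70% positive... among respondents only

# Extreme-case bounds: every silent reader hated the article, or every
# silent reader loved it. The truth lies somewhere in this huge range.
worst_case = positive / views                         # 0.7%
best_case = (positive + (views - responses)) / views  # 99.7%

print(f"Respondents: {observed_rate:.0%} positive")
print(f"All readers: anywhere from {worst_case:.1%} to {best_case:.1%}")
```

A 70% “helpful” rate among respondents is consistent with almost any true rate across all readers, which is exactly why extrapolating is dangerous.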
Test your surveys. Just as you’d test software or a new web site, you need to test survey instruments before you roll them out. We’ve asked questions that seemed clear as the azure sky to us, only to get a “huh?” from customers…or, “well, maybe you mean this, but maybe you mean that.” It’s best to fix this before you launch.
Consider picking up the phone. I know, I know: this Internet thing is going to take off. And we use web surveys all the time. But there’s just no substitute for actually talking with customers, asking not only the scripted questions but open-ended ones, too. I think as an industry we don’t do nearly enough of this, and a “customer success on the web” survey is a great excuse to do it. Here’s a simple script:
- Thinking about the last time you came to our site, and thinking about the reason you came there, were you successful in accomplishing your goal?
- (If yes) If you hadn’t been successful, were you entitled to open a case with us, and would you have done so?
- (If no) Did you eventually open a case with us for that same reason?
- Is there anything else you’d like to tell us about your self-service experience with us?
For more on using surveys as part of estimating contact deflection, see Simple Techniques for Estimating Contact Deflection.
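To make the connection concrete, here’s one hedged sketch of how answers to a script like the one above might roll up into a deflection estimate: a visit counts as a deflected contact only if the customer succeeded, was entitled to open a case, and says she would have opened one otherwise. The tallies, the traffic figure, and the counting rule are my illustration, not the method from the article referenced above.

```python
# Tallies from a hypothetical batch of phone interviews (made-up numbers).
interviewed = 50
successful = 30                # answered "yes" to the first question
entitled_and_would_open = 18   # of those, "yes" to both parts of question two

# Only successful visits by customers who were entitled to open a case,
# and who say they would have, count as deflected contacts.
deflection_rate = entitled_and_would_open / interviewed  # 36%

# Scale up to your site's traffic to estimate deflected cases.
monthly_self_service_sessions = 20_000  # assumed traffic figure
estimated_deflected_cases = monthly_self_service_sessions * deflection_rate

print(f"Deflection rate: {deflection_rate:.0%}")
print(f"Estimated cases deflected per month: {estimated_deflected_cases:,.0f}")
```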
Ask fewer questions, and never ask a question if you’re not going to act on the answer. Tell people you’re doing a survey, and it’s like you’re giving away free cookies or something: everyone has their hand in the jar. “Ask about the website.” “Ask about our RMA policy.” “Ask if they liked the hold music.”
Resist.
I’d like a transactional survey to be no more than three questions and to fit easily on one page. One question would be better. Five questions and you’re pushing your luck with me; show me seven questions and two pages, and I’m outta here. Unless I’m really cranky, in which case I’ll respond, but you won’t like it. Long transactional surveys make for low response rates and high bias.
Get only what you need. If you think an answer would be interesting, but it isn’t something you’re going to act on (for example, you’d like to know how people feel about your ending support for a product, but you’re going to end support for it no matter what), don’t ask the question. Customers want to know that what they’re telling you matters, and that you’ll actually take action based on what they say.
What “aha” moments have you had in doing your surveys? Please share in the comments below.
<shameless commerce> There are still slots available at the One Day Introduction to KCS immediately following TSW in Santa Clara, May 10. Our KCS Foundations Workshop in Plano is sold out, but we have openings in the Bay Area Foundations Workshop in July. </shameless commerce>