NPS: Friend or Foe?
The above can be a very polarizing question these days, often met with both delight and despair. There are the NPS believers, who are fully invested in the theory and think it holds the keys to business success. Then there are the naysayers, who argue that NPS is inaccurate, often misused, and inherently flawed. As someone who has had some decent skin in the game with this metric over my career, I think they're both right.
The promise of NPS
NPS, short for “Net Promoter Score”, is a concept introduced back in 2003 by Fred Reichheld, a partner at Bain & Company. It was developed as a method to systematically measure customer loyalty and infer information about customer satisfaction and engagement. The sales pitch sounds great! A method that quantifies inherently qualitative feelings would greatly benefit any low-touch sales model. It provides the kind of insight traditionally only available to great sales reps with deep customer relationships. Armed with a method for gathering this type of customer insight, we can make better investment decisions and provide more appropriate support to accounts without having a large sales force.
How NPS Works
The basic premise of NPS is that you ask your customer base one simple quantitative question: "How likely is it that you would recommend us to a friend or colleague?"
The user then answers on a scale of 0 to 10, where 0 means not at all likely to recommend and 10 means very likely. Respondents are then bucketed into 3 categories based on their answer: Detractor (0-6), Passive (7-8) and Promoter (9-10).
Net Promoter Network has a great definition of these categories, which I've included below:
- Promoters (score 9-10) are loyal enthusiasts who will keep buying and refer others, fueling growth.
- Passives (score 7-8) are satisfied but unenthusiastic customers who are vulnerable to competitive offerings.
- Detractors (score 0-6) are unhappy customers who can damage your brand and impede growth through negative word-of-mouth.
The net promoter score is then just a simple calculation:
NPS = % Promoters - % Detractors
Therefore, if your respondents are all promoters your NPS will be 100, and if they are all detractors your NPS will be -100.
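The bucketing and calculation above can be sketched in a few lines of code (the function name and sample responses here are illustrative, not from any particular NPS tool):

```python
def nps(scores):
    """Compute the Net Promoter Score from a list of 0-10 responses.

    Promoters score 9-10 and detractors 0-6; passives (7-8) only
    count toward the denominator.
    """
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# Ten hypothetical responses: 4 promoters, 3 passives, 3 detractors
responses = [10, 9, 9, 10, 8, 7, 7, 5, 3, 6]
print(nps(responses))  # 40% promoters - 30% detractors -> 10.0
```

Note that the passives drop out of the numerator entirely, which is one reason the headline score can hide a lot of detail.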
To put it in perspective, Apple has an NPS of 89, backed by an intensely loyal following. The technology industry as a whole has an average NPS of 57.
The dream vs the reality
If the theory holds water, we have finally found a way to quantify the non-quantifiable. Thanks, NPS! But therein lies the issue: feelings, loyalty and satisfaction are all inherently qualitative, subjective properties. They are influenced by so many transient factors that there is no accepted scale by which to quantify them. How do you ensure accuracy when gauging a customer's loyalty? A person's loyalty will look entirely different depending on a wide variety of factors, to name a few: life cycle phase, most recent experience, circumstantial investment in the product and many more. Unrelated personal factors can also have an effect on NPS. I know I can't be the only one who has given a bad mark on an NPS survey for the sheer fact of wrong place, wrong time. Further, there are very real issues with the actual calculation of the NPS score. Emily Robinson shared a fabulous article by Jared M. Spool, "Net Promoter Score Considered Harmful (and What UX Professionals Can Do About It)", which outlines some of these problems. Please check it out.
So there are some flaws with NPS. How else can we try to measure customer feelings?
1) We could use retention or account growth as an approximation for satisfaction and loyalty. This is a good idea in theory, but it heavily depends on the product and the account's circumstances. A user could actually hate the product yet feel too trapped to leave (until an alternative comes along).
2) We could analyze their user journey. Being able to track a customer's progress is incredibly valuable: we could look at their actions and infer how they must be feeling based on what they do. But again, that involves projecting our desired user journey onto how they want to do business. For example, a user visits a documentation page over and over again with no action in between. We might assume they are confused, when perhaps they just keep getting pulled off their task by external factors.
3) We could analyze text feedback to gauge satisfaction and loyalty. But again, when we read through their written feedback, we are imposing our scale of satisfaction and loyalty on their words. Someone could give a lot of suggestions for improvement because they are invested in the company and want to help, or because they are intensely frustrated. They could provide minimal feedback because they have no issues, or because they are not invested.
Enough already! Bottom line: it's very hard to infer someone's subjective position. Inferring loyalty and satisfaction from actions ignores the factors that matter to the individual consumer. We end up making assumptions about how they feel based on how we would feel.
So if we can't approximate their feelings for them, can't they just tell us?
And just like that NPS is invited back to the party.
Harnessing the powers of NPS for good instead of evil
Okay, NPS is not perfect. Neither is any tool used to measure a person's subjective feelings. Does this mean we stop trying? No! Go ahead and use NPS if you have it; it's a good tool to have in your toolbox. It lets consumers indicate how they feel on a scale that matters to them and supplement that with qualitative feedback. That is a great thing.
Just be reasonable in your usage of the metric. Understand its limitations and try to account for them in how you use the data. I've outlined some basic guidelines below.
Guidelines for NPS use
Don't focus too much on a particular data point
For example, at a granular level we likely cannot definitively say "Sally is a detractor and likely to leave" based on one survey response. The answers are too variable. What we can do is take a page out of the crowd wisdom methodology. The theory behind crowd wisdom is that when you ask a sample of the population a question, the aggregate answer tends to be reasonably accurate. Wikipedia has an excellent explanation of crowd wisdom:
“A large group's aggregated answers to questions involving quantity estimation, general world knowledge, and spatial reasoning has generally been found to be as good as, and often better than, the answer given by any of the individuals within the group. An explanation for this phenomenon is that there is idiosyncratic noise associated with each individual judgment, and taking the average over a large number of responses will go some way toward canceling the effect of this noise.”
There was a great example of this type of crowd wisdom in a TED Talk given by Lior Zoref called “Mindsharing, the art of crowdsourcing everything”.
In this talk, he had the entire audience guess the weight of an ox. The likelihood that any one person in the crowd could accurately estimate this is low, and indeed, when he surveyed the whole crowd he received responses as low as 300 lbs and as high as 8,000 lbs. Yet the median guess was 1,792 lbs, and the actual weight was 1,795 lbs.
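A toy simulation makes the noise-canceling effect concrete. The true weight matches the ox from the talk, but the noise model and sample size are made up for illustration:

```python
import random
import statistics

random.seed(42)
true_weight = 1795  # lbs, the ox from the talk

# Each individual guess is the true value plus large idiosyncratic noise.
guesses = [true_weight + random.gauss(0, 600) for _ in range(1000)]

# Any single guess may be off by hundreds of pounds, but aggregating
# cancels much of the noise: the median lands close to 1795.
print(round(statistics.median(guesses)))
```

The individual errors are enormous, yet the aggregate is not, which is exactly the property we want when reading a pool of NPS responses.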
As such, Sally's individual answer doesn't tell you much on its own, and she is not necessarily a strong detractor. However, the overall score of the crowd should loosely reflect the reality of your user base's loyalty at the point in the user journey where it was measured.
Don't put so much weight into the actual NPS score
If you don't like the actual NPS calculation, you're not alone! There are a number of known holes in the opaque nature of the net promoter score. Why not also look at the average, the median, or the distribution of values?
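As a sketch of what that might look like, using Python's standard library and a hypothetical set of responses:

```python
from statistics import mean, median
from collections import Counter

scores = [10, 9, 9, 8, 7, 7, 6, 5, 3, 0]  # hypothetical survey responses

promoters = sum(s >= 9 for s in scores)
detractors = sum(s <= 6 for s in scores)
nps = 100 * (promoters - detractors) / len(scores)

print(f"NPS: {nps}")             # 30% promoters - 40% detractors -> -10.0
print(f"Mean: {mean(scores)}")   # 6.4
print(f"Median: {median(scores)}")  # 7.0
print(f"Distribution: {sorted(Counter(scores).items())}")
```

Here the headline NPS is negative, yet the median respondent is a passive and most scores cluster at 7 and above, context the single number hides.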
Hold all other variables equal and only measure variances.
Once you have set the logic for gathering your NPS, keep it steady. The 5 W's (who, what, when, where, and why) need to stay stable in order to assume that changes in scores reflect changes in customer feelings rather than changes in the collection method.
Be careful when comparing across products
Resist the urge to compare your scores to others'. Competitor comparisons are the most dangerous: without any knowledge of their 5 W's you are flying blind. They could be collecting their scores right after the user receives a free gift, or they could be cherry-picking customers they know to be satisfied.
Even cross-product comparisons within the same company are dangerous. The products could have entirely different use cases and customer bases.
Ask a follow up question
Asking your consumers an open-ended follow-up question can provide excellent context for their answer. A question such as "What is the most important reason for your answer?" can help pinpoint influential factors and possibly identify next steps.
Hey, we’re all figuring this out together. If you find that NPS is working like a charm for your company then by all means keep going! Just be aware of its limitations and use the metric reasonably.
For what it is worth, I have personally conducted in-depth interviews with hundreds of customers, and at the end of each one I asked the NPS question. Rarely did the answer shock me.
Written by Laura Ellis