Tech companies have taken an unhealthy liking to the Net Promoter Score® (NPS), and are using it in silly ways.
The trouble seems to stem from tech folk not knowing the details of what NPS actually is. Sure, they’ve heard the term, and that bigger is better, and they might even have looked at the Wikipedia page before today. But how many have read the original article? And how many of those have looked at the academic literature on NPS and other approaches to measuring customer satisfaction, loyalty, and predicting purchase behaviour?
For those who don’t know what Net Promoter is, it’s a trademarked method created in 2003 by Frederick F. Reichheld, Bain & Company, and Satmetrix, and publicised in a Harvard Business Review article entitled “The One Number You Need to Grow”. The article suggests that there is a positive link between a high NPS and company revenue growth. The NPS is derived by asking customers a single question: “How likely is it that you would recommend our company/product/service to a friend or colleague?” with an answer given on a scale from 0 (not at all likely) to 10 (extremely likely).
Those who answer 0-6 are called Detractors, 7-8 are Passives, and 9-10 are the Promoters from whom the score takes its name. The score is the percentage of Promoters minus the percentage of Detractors, so it can range from -100 to +100.
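The calculation described above is simple enough to sketch in a few lines of Python (the function name and sample responses are mine, purely for illustration):

```python
def nps(scores):
    """Compute a Net Promoter Score from 0-10 survey responses.

    Promoters answer 9-10, Passives 7-8, Detractors 0-6.
    Returns %Promoters minus %Detractors, so the range is -100 to +100.
    """
    if not scores:
        raise ValueError("need at least one response")
    n = len(scores)
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * promoters / n - 100 * detractors / n

# Ten responses: five Promoters, four Passives, one Detractor.
# 50% Promoters minus 10% Detractors gives an NPS of +40.
print(nps([10, 9, 9, 10, 9, 7, 8, 8, 7, 3]))  # → 40.0
```

Note that the Passives vanish from the final number entirely, which is part of what the next section pokes at.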
Here’s a simple and obvious criticism. Three companies all have an NPS of +40. If a higher score is better, these companies must all be equally good, right?
What if we know how the scores were calculated?
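To make the puzzle concrete, here are three hypothetical response distributions (the company names and percentages are my own invented numbers, not real data) that all produce exactly the same score:

```python
# Three hypothetical response distributions, each summing to 100% of
# respondents. All three yield %Promoters - %Detractors = +40.
distributions = {
    "Company A": {"promoters": 40, "passives": 60, "detractors": 0},
    "Company B": {"promoters": 55, "passives": 30, "detractors": 15},
    "Company C": {"promoters": 70, "passives": 0,  "detractors": 30},
}

for name, d in distributions.items():
    score = d["promoters"] - d["detractors"]
    print(f"{name}: NPS = +{score}")
```

Company A has no Detractors at all; Company C has nearly a third of its customers actively unhappy. The single number hides that difference completely.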
Do you still think these companies are equally good? Why, or why not?
NPS feels all science-y and analytical, and many of the summaries imply that there’s lots of rigorous research behind it and that it’s generically useful for a wide range of purposes.
But that’s not true.