Tech Companies Abuse NPS And Hope Customers Don’t Notice
Tech companies have taken an unhealthy liking to the Net Promoter Score® (NPS), and are using it in silly ways.
The trouble seems to stem from tech folk not knowing the details of what NPS actually is. Sure, they’ve heard the term, and that bigger is better, and they might even have looked at the Wikipedia page before today. But how many have read the original article? And how many of those have looked at the academic literature on NPS and other approaches to measuring customer satisfaction, loyalty, and predicting purchase behaviour?
For those who don’t know what Net Promoter is, it’s a trademarked method created in 2003 by Frederick F. Reichheld, Bain & Company, and Satmetrix, and publicised by an article in Harvard Business Review entitled “The One Number You Need to Grow”. It suggests that there is a positive link between a high NPS and company revenue growth. The NPS is derived by asking customers a single question: “How likely is it that you would recommend our company/product/service to a friend or colleague?” and an answer is given on a scale from 0 (not at all likely) to 10 (extremely likely).
Those who answer 0-6 are called Detractors, 7-8 are Passives, and 9-10 are the Promoters from whom the score takes its name. The score is the percentage of Promoters minus the percentage of Detractors, so it can range from -100 to +100.
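The arithmetic above can be sketched in a few lines. This is a minimal illustration, not anything from the NPS materials themselves, and the survey answers are made up:

```python
def nps(responses):
    """Return the Net Promoter Score for a list of 0-10 survey answers.

    Promoters answer 9-10, Detractors answer 0-6; the score is the
    percentage of Promoters minus the percentage of Detractors.
    """
    n = len(responses)
    promoters = sum(1 for r in responses if r >= 9)
    detractors = sum(1 for r in responses if r <= 6)
    return round(100 * (promoters - detractors) / n)

answers = [10, 9, 9, 8, 7, 7, 6, 5, 3, 10]  # hypothetical sample of ten
print(nps(answers))  # 4 Promoters, 3 Detractors out of 10 -> prints 10
```

Note that the 7s and 8s (Passives) count in the denominator but cancel out of the numerator, which is one reason very different customer populations can land on the same score.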
Here’s a simple and obvious criticism. Three companies all have an NPS of +40. If a higher score is better, these companies must all be equally good, right?
What if we know how the scores were calculated?
Do you still think these companies are equally good? Why, or why not?
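To make the ambiguity concrete, here are three hypothetical promoter/passive/detractor splits, all of which produce exactly the same +40 score. The percentages are invented for illustration:

```python
def nps_from_split(promoters_pct, passives_pct, detractors_pct):
    """NPS from a percentage breakdown of respondents."""
    assert promoters_pct + passives_pct + detractors_pct == 100
    return promoters_pct - detractors_pct

# Hypothetical companies: (promoters %, passives %, detractors %)
companies = {
    "A": (40, 60, 0),   # no detractors, but a majority lukewarm
    "B": (55, 30, 15),  # healthy mix with some detractors
    "C": (70, 0, 30),   # polarising: lots of fans, lots of critics
}

for name, split in companies.items():
    print(name, nps_from_split(*split))  # each prints 40
```

A company with zero detractors and one where nearly a third of customers actively disparage it collapse to the same single number, which is precisely the information the score throws away.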
NPS feels all science-ey and analytical, and many of the summaries imply that there’s lots of rigorous research behind it and that it’s generically good for a wide range of things.
But that’s not true.
Let’s start with what the original article says NPS is actually about: The authors were trying to find a simple way to measure repurchase and referral behaviour, and their work was limited to six industries: financial services, cable and telephony, personal computers, e-commerce, auto insurance, and Internet service providers. Note that these are all markets where the customer is an individual buying for themselves or their direct family, i.e. business-to-consumer (B2C) markets. Not business-to-business markets.
Importantly, the article clearly says that the NPS approach is not the best thing to use in certain cases, and they may sound familiar to regular readers:
> The “would recommend” question wasn’t the best predictor of growth in every case. In a few situations, it was simply irrelevant. In database software or computer systems, for instance, senior executives select vendors, and top managers typically didn’t appear on the public e-mail lists we used to sample customers. Asking users of the system whether they would recommend the system to a friend or colleague seemed a little abstract, as they had no choice in the matter. In these cases, we found that the “sets the standard of excellence” or “deserves your loyalty” questions were more predictive.
We haven’t even had to go past the original source material before finding some problems.
Now this NPS stuff was published in HBR, which, though often fun to read, isn’t a peer-reviewed academic journal. What does the literature actually say about the NPS approach?
Well, not a whole lot really. And those who have looked into the NPS approach haven’t found it to be the single best method that it claims to be. Researchers have generally been unable to replicate the work of Reichheld and Satmetrix. Keiningham et al found in 2007 that “the assertion that recommend intention alone will suffice as a predictor of customers’ future loyalty behavior, however, is not supported.” Their research was thorough: it clearly articulated the issues with NPS, compared it to other, well-established alternatives, and found NPS to be lacking.
NPS also doesn’t stack up for predicting firm revenues. Keiningham, Cooil, Andreassen and Aksoy also checked this in 2007, and found the assertions of Satmetrix and Reichheld wanting.
Keiningham et al also warned:
> “The consequences are the potential misallocation of resources due to flawed strategies that are guided by a myopic focus on customers’ recommend intentions.”
And that’s exactly what we’ve seen. Tech companies, and many others, become obsessed with NPS as if it’s the One True Way to measure things. And because what gets measured gets managed, these firms direct an inordinate amount of energy into improving their NPS scores.
And because NPS is clearly The Best Thing Ever, if our company NPS is bigger than yours, we must be better at satisfying our customers (even though that’s not what NPS measures, which should be obvious from the question itself).
But does NPS have any value at all?
Well yes, NPS does have some value.
It provides data on customer referral intention from the subset of existing customers who bother to respond to the survey question. This is a great place to begin, but a poor place to finish.
NPS is also fast and easy to understand, which is great for managers who are too busy to really spend time understanding their business in any depth. Complex thinking should be left to the “rockstars” hired for the engineering teams.
NPS also provides lazy marketers with a spurious point of difference to advertise because they have nothing else to make their company or its products stand out. Simple numerical comparisons, like price, make life much easier for customers.
This article first appeared on Forbes.com.