Net Promoter Score (NPS) is a survey-based metric used to measure customer loyalty. Because so many companies use it, NPS is seen as a good way to benchmark your performance against other companies.
This article is about how inconsistencies in the way NPS is implemented make accurate benchmarking impossible.
I had been meaning for some time to write about the problems with NPS. But then in December 2017, Jared M. Spool wrote an article criticising NPS pretty thoroughly. You can read it on his blog here.
I’m going to talk about an area that Jared Spool didn’t completely cover. But if you aren’t familiar with NPS, I’d recommend that you read Jared Spool’s article first.
How is it meant to be done?
Fred Reichheld, the inventor of NPS, wrote the question like this:
“How likely is it that you would recommend [company X] to a friend or colleague?”
Then you’re supposed to get the user to pick a number between 0 and 10, where 0 is labeled “not at all likely”, 5 is labeled “neutral” and 10 is labeled “extremely likely”.
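For context, the standard scoring groups those 0–10 responses into detractors (0–6), passives (7–8) and promoters (9–10), and the score is the percentage of promoters minus the percentage of detractors. A minimal sketch (the function name is mine):

```python
def nps(responses):
    """Compute a Net Promoter Score from a list of 0-10 responses.

    Standard grouping: 9-10 are promoters, 0-6 are detractors,
    7-8 are passives (counted in the total but otherwise ignored).
    NPS = % promoters - % detractors, so it ranges from -100 to +100.
    """
    if not responses:
        raise ValueError("need at least one response")
    promoters = sum(1 for r in responses if r >= 9)
    detractors = sum(1 for r in responses if r <= 6)
    return round(100 * (promoters - detractors) / len(responses))

# 4 promoters, 3 passives, 3 detractors out of 10 responses:
# 40% - 30% = 10
print(nps([10, 9, 9, 10, 8, 7, 8, 2, 5, 6]))  # -> 10
```

Notice that this grouping only makes sense on the 0-to-10 scale: responses collected on a 5-point scale (like some of the examples below) can't be mapped onto these buckets without distorting the score.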
What does it look like in practice?
Here are some examples of NPS questions I’ve been asked on websites and in emails:
This one is from 123-reg. The main thing to note here is that the question is slightly different. Instead of asking if you “would” recommend, this is more about whether you will recommend. The question is no longer hypothetical.
They also don’t have the “neutral” label for the number 5.
Typeform have chosen to use different end-labels. “Not likely” and “very likely” are less extreme than “not at all likely” and “extremely likely”.
This one from Coverwise uses one of the original end-labels, but not the other one. Also, they’ve chosen to have the options run vertically, rather than horizontally.
Twitter has “extremely unlikely” as their 0 label, instead of “not at all likely”. Most NPS implementations have the numbers increasing from left to right, but this one has it the other way around.
7Geese have split the number scale in two.
This text from my GP surgery uses a fully labeled 5-point scale. It also adds a caveat about the scenario in which you would make the recommendation.
I found a screenshot of this one from Orbitz. Again, it specifies what kind of recommendation you might make. It also combines a colour scale with the number scale.
Inconsistencies make fair comparisons impossible
Here are some of the inconsistencies that are out there:
- How the question is phrased
- The number of options that are labeled
- The labels that are used
- The size of the scale
- The direction and layout of the scale
- Other indicators of how to interpret the scale (colour)
These factors can influence the responses that you receive. For example, there’s a well-known left-side bias: whatever sits on the left side of your scale is more likely to be selected.
Admittedly, none of this makes much difference when you consider all the other flaws in NPS. A purity test for a broken system isn’t much use.
However, one of the arguments I’ve heard in favour of NPS is that, in spite of its flaws, it’s still useful for comparing the quality of your user experience with other companies, since so many companies use NPS.
These inconsistencies in how NPS is implemented show that this isn’t true. You can’t make a fair comparison when everyone is asking slightly different questions and presenting the options in different ways.
Also, there’s the issue of where you ask it
Jared Spool did cover this in his article, but it’s worth repeating briefly.
The other major inconsistency in how NPS is implemented is the circumstances in which the question is presented to users.
If you present it earlier or later on in someone’s journey, you’ll get different results. If you present it on a particular kind of page, you’ll get different results from other pages. The same goes for emails.
It comes down to who you’re asking and when you’re asking them. Where you ask people the question is a way of selecting which people will see it. For example, if you only show it after 20 page views, most of the people who are frustrated may have already left. And people will give different responses at different stages in their journey too.
These differences can come down to random variation in how companies choose to implement NPS, but they can also be deliberate. Some companies reward their employees for better NPS scores, which can lead to them manipulating their NPS implementation to improve the score.
Conclusion: don’t bother with NPS
There are lots of reasons why NPS is rubbish. If you’re clinging on to it because you hope that it will help you to benchmark your performance against other companies, you should stop.