Lesson 1: Don’t use NPS as a single metric

A KAE customer experience masterclass

As discussed in our previous blog posts, Net Promoter Score (NPS) is, and should continue to be, the start of most CX monitoring programs — but it should not be the end. The metric has many merits and is widely accepted as a good indicator of future business growth.
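For readers less familiar with the metric: NPS is calculated as the percentage of promoters (those scoring 9 or 10 on the 0–10 likelihood-to-recommend scale) minus the percentage of detractors (those scoring 0 to 6). A minimal sketch, using illustrative scores rather than real survey data:

```python
# Standard NPS calculation: % promoters (9-10) minus % detractors (0-6).
# Scores of 7-8 are "passives" and do not affect the result.
def nps(scores):
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# Illustrative responses: 5 promoters, 3 passives, 2 detractors.
responses = [10, 9, 9, 8, 7, 7, 6, 5, 9, 10]
print(nps(responses))  # (5 - 2) / 10 -> 30
```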

However, there are some important issues to note. For one, NPS is subject to cultural bias; survey-takers in some cultures are far more likely to rate 9 or 10, whilst those in others will rarely rate above 7. So, is it fair to compare NPS in Mexico, where 8/10 can signal dissatisfactory service, with Sweden, where 6/10 can signal great service[1]? Doing so risks missing potential issues in Mexico and underestimating the strength of your brand in Sweden.

NPS is also highly sensitive to the context and setting in which it is asked. Consider the following:

A strategy team was tasked with improving NPS by 10 points. One team member suggested that it was not obvious to respondents what the numbers on the NPS scale really meant, and so colour-coded the scale in the next CX survey the team fielded.

Before: [the standard, uncoloured 0–10 scale]

After: [the same scale, colour-coded]

After analysing the results, the team found that NPS had increased by 14 points. Success! The team all received their bonuses that year[2].

The example above illustrates the fallibility of NPS as a solo metric. Whilst colour-coding NPS scales may seem like an easy fix, it won't improve your customers' experiences and will result in artificially boosted figures, which are of use to no one.

So, is NPS too fallible to use? We think not. Similar biases exist for any Likert-type scale, so these problems are not exclusive to NPS. The solution is to bring in other metrics and create an index measure of customer experience. These metrics can, and should, include more than one of the following:

  • Operational data: Volume of spend, frequency of purchase
  • Customer data: Customer tenure, complaints history
  • Transactional data: Interaction with online platforms/apps, cart abandonment, use of referral codes
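How these signals combine into an index will depend on your business, but a weighted sum of normalised components is one common approach. A minimal sketch — the field names, caps and weights below are entirely hypothetical, not a recommended scheme:

```python
# Sketch: blend NPS with operational, customer and transactional signals
# into a single 0-1 CX index. All fields, caps and weights are hypothetical.
def cx_index(customer):
    # Normalise each signal onto a 0-1 range before weighting.
    nps_part = customer["nps_score"] / 10               # stated 0-10 score
    spend = min(customer["monthly_spend"] / 500, 1.0)   # capped at 500
    tenure = min(customer["tenure_years"] / 5, 1.0)     # capped at 5 years
    no_complaints = 1.0 if customer["complaints_12m"] == 0 else 0.0

    return (0.40 * nps_part
            + 0.25 * spend
            + 0.20 * tenure
            + 0.15 * no_complaints)

# A long-tenured, high-spend promoter with no complaints scores 1.0.
best = {"nps_score": 10, "monthly_spend": 500,
        "tenure_years": 5, "complaints_12m": 0}
print(cx_index(best))  # -> 1.0
```

The weighting is the contentious part: it should be derived from which signals actually predict retention and growth in your data, not picked by committee.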

Similarly, customer experience surveys should, where possible, ask more than just likelihood to recommend. In addition, ask "Have you spoken about this brand to a friend, family member or colleague in the last 12 months?" and, if they have, ask whether the conversation was positive. Did they talk about customer experience, or about rewards and benefits? These kinds of behavioural measures are less subject to bias: your memory of the last year is less fallible than your intentions for the next.

The next step is to join all these metrics together. Adjust likelihood to recommend by previous recommendation behaviour. This begins to form a picture of the point on the 0–10 scale at which people actually become a promoter or a detractor. You can then use this information to adjust for cultural biases and other over-stated intentions. The result is a more accurate view of your brand advocacy across different markets and customer groups.
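One way to sketch this joining step, assuming your survey pairs each respondent's stated score with their reported past behaviour (the data and the 50% cutoff below are illustrative assumptions, not real results):

```python
from collections import defaultdict

def promoter_threshold(responses, cutoff=0.5):
    """Lowest stated score at which at least `cutoff` of respondents
    reported actually recommending the brand in the last 12 months."""
    by_score = defaultdict(list)
    for score, recommended in responses:
        by_score[score].append(recommended)
    for score in sorted(by_score):
        rate = sum(by_score[score]) / len(by_score[score])
        if rate >= cutoff:
            return score
    return None

# (stated 0-10 score, reported recommending in the last 12 months)
survey = [(6, False), (7, False), (7, False), (7, True),
          (8, True), (8, True), (8, False), (9, True), (10, True)]
print(promoter_threshold(survey))  # in this sample, behaviour shifts at 8
```

Run per market, a threshold like this gives you an empirical, behaviour-anchored promoter line instead of the universal 9-10 cut-off, which is exactly the cultural-bias adjustment described above.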


[1] Figures for illustrative purposes only, although agreement bias does tend to be very strong in Mexico and very weak in Sweden

[2] Qualtrics X4 event 2018, CX masterclass ‘Myths Misconceptions and Miscommunications in CX’ by Nan Russell