
You know that you should be tracking the UX of your products and services. After all, if you don’t measure the UX, how do you know how it’s performing? How do you know if a design change is for the better, or for the worse? How can you justify investing in UX if you can’t measure the benefits?

The thing is, there is an awful lot you could measure. The number of potential UX metrics is ever growing. CXPartners have listed a whopping 127 in their big list of UX KPIs and metrics and, as they quite rightly point out, it would be crazy to try to measure, let alone factor in, all 127. So, which ones should you focus on? To help navigate this measurement maze, I’ve outlined the UX metrics that I believe should be at the top of your list.

1. Satisfaction

Customer satisfaction is probably the best barometer of the quality of the user experience provided by a product or service. After all, a bad experience is unlikely to lead to a satisfied customer. You can ask users how satisfied they are with particular features, with their experience today and, of course, overall. Since in the real world people are more likely to talk about their frustrations than about how satisfied they are, a good approach is to ask users to rate their experience on a 5-point or 7-point scale, from very dissatisfied to very satisfied.

Use a 5-point or 7-point scale for capturing levels of satisfaction

Surveys are a great way to capture satisfaction ratings, along with feedback prompts within an app or website. As we’ll see with most of these metrics, it’s important to capture not only the ratings, but the reasons behind them. A good way to do this is to prompt for an explanation when a user selects a particularly low or high satisfaction rating.

Capture not just the level of satisfaction, but the reasons behind it
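As a sketch of how such ratings roll up into something reportable, here’s a minimal example in plain Python. The data and the “top-2-box” name are my own illustration, not from a specific tool:

```python
from statistics import mean

# Hypothetical 5-point satisfaction ratings
# (1 = very dissatisfied, 5 = very satisfied)
ratings = [5, 4, 2, 5, 3, 4, 1, 4]

avg = mean(ratings)
# "Top-2-box": the percentage of users who answered
# satisfied (4) or very satisfied (5)
top_2_box = 100 * sum(1 for r in ratings if r >= 4) / len(ratings)

print(avg, top_2_box)
```

Reporting both the average and the top-2-box share is useful, because the average alone can hide a polarised set of responses.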

2. Recommendations

Like satisfaction, recommendations are a great barometer of UX. It goes without saying that a user who has had a great experience is more likely to recommend that product or service to someone else. It’s therefore no surprise that measuring the likelihood to recommend has become all the rage in the business world, primarily via the Net Promoter Score (NPS).

NPS is very simple. Arguably too simple, in fact, as we’ll soon see. The idea is that by asking the following single question, you can calculate a score that tells you how loyal your customers are.

The standard 10-point Net Promoter Score (NPS) question

The idea sounds very alluring, but as Jared Spool points out in his excellent article, Net promoter score considered harmful (and what UX professionals can do about it) NPS is not the silver bullet that many people think it is. In fact, it can be a very dangerous and misleading bullet.

A recommendation does not necessarily mean a customer had a good user experience, they might just be a brand devotee. On the flip side, a good experience does not always lead to a recommendation. Take Ryanair for example, the biggest budget airline in Europe. Ryanair have really improved the UX of their websites and mobile apps over the last few years, and almost made flying with them bearable. However, even though I had a good experience flying with them last year, I’d never recommend them to a friend or colleague. I can’t yet bring myself to forgive Ryanair for the crappy way they’ve treated me, and other customers over the years. I would never recommend them, no matter how much they improved.

Asking users to rate their likelihood to recommend a product or service on a 10-point scale is also asking a lot of them. What’s the difference between a rating of 2 and a rating of 3? What about a rating of 5 and a rating of 6? What if I’d recommend to a friend, but not a colleague? What if I’d recommend for one situation, but not another?

Worse still, NPS simply lumps customers into one of three groups based on their response: Detractors (0–6), Passives (7–8) or Promoters (9–10).

NPS scoring system

An NPS score is calculated by subtracting the percentage of detractors from the percentage of promoters. It seems that passives don’t even matter. The problem is that this is a gross oversimplification. A rating of 0 is much more alarming than a rating of 6, and yet NPS lumps the two together. It also means that changes to the UX are not always reflected in the NPS score. For example, moving users who previously rated their likelihood to recommend as 0 up to 6 (a massive 6-point shift) would have no effect on the score, because they’re still lumped into the detractors group.
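This lumping behaviour is easy to demonstrate in code. A minimal sketch in plain Python (the function and example data are my own, not from the article):

```python
def nps(ratings):
    """Net Promoter Score from 0-10 likelihood-to-recommend ratings.

    Promoters rate 9-10, detractors 0-6; passives (7-8) are simply
    ignored, which is exactly the over-simplification discussed above.
    """
    if not ratings:
        raise ValueError("no ratings supplied")
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

# Shifting every detractor from 0 up to 6 (a huge improvement in
# sentiment) leaves the score completely unchanged:
before = [0, 0, 0, 9, 9, 10, 7, 8]
after = [6, 6, 6, 9, 9, 10, 7, 8]
```

Both lists produce a score of 0, even though the second set of users is far less unhappy.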

For all these reasons, I’d echo Jared Spool in recommending that you don’t use the standard NPS question to track recommendations. Instead, use a simpler question, such as the following.

Rather than NPS, use a simple 5-point scale to measure the likelihood to recommend

This gives users fewer options to worry about and makes it easier to interpret the results.

If organisationally you have to report NPS, then I’d recommend using charts and data visualisations to report the likelihood to recommend across the whole scale, rather than just as a single NPS score.

For established products and services, you might also consider asking users whether they’ve previously recommended the product to a friend or colleague, as genuine recommendations are a better barometer than hypothetical ones.

Consider asking users if they’ve previously recommended a product

3. Usability

Usability might not be the differentiator that it once was, but it’s still hugely important to the UX of a product or service. Something that is hard to use isn’t going to provide a great user experience.

A good way to capture overall usability is to ask users how they would describe a product, from extremely hard to use, to extremely easy to use.

Usability is a key UX metric

The System Usability Scale (SUS) is a common way to measure usability. It consists of 10 questions that can be asked following usability testing, or of users who have used a product or service. The questions cover a range of usability areas, and their order should ideally be randomised to reduce the chance of bias.

SUS can be used to measure the usability of a product

A template, such as the SUS template from usability.gov, is best used to calculate the SUS score, as the scoring mechanism can be a little confusing. The score is on a scale from 0 to 100: the higher the score, the better the usability rating.

A SUS score is most useful for benchmarking usability. This might be historical, such as a comparison with the product prior to a change, or against similar products and services. However, be warned: because a SUS score is out of 100, people often assume that it’s a percentage, thinking that a score of 50 must mean that 50% of users can use an interface. This is a common misconception. SUS scores are not percentages; they are on a relative scale, so be very careful when you present and communicate them.
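The standard SUS scoring mechanism (responses on a 1–5 scale; odd-numbered items positively worded, even-numbered items negatively worded) can be sketched as follows. The example responses are hypothetical:

```python
def sus_score(responses):
    """System Usability Scale score from ten 1-5 responses.

    Odd-numbered items are positively worded: contribution = response - 1.
    Even-numbered items are negatively worded: contribution = 5 - response.
    The summed contributions are multiplied by 2.5 to give a 0-100 score.
    """
    if len(responses) != 10 or any(not 1 <= r <= 5 for r in responses):
        raise ValueError("expected ten responses on a 1-5 scale")
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# A fairly positive (hypothetical) respondent:
print(sus_score([5, 1, 5, 2, 4, 2, 5, 1, 4, 2]))
```

Note how the best possible set of responses (all 5s on odd items, all 1s on even items) yields exactly 100, and the worst yields 0 — the score is bounded, not a percentage of anything.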

4. Ratings

From Amazon to the Apple App Store, ratings are everywhere online. There’s a good reason for this: ratings are a great way to judge the quality of a product or service. You might ask users to provide an overall rating, along with ratings for different features or different aspects of a product or service.

You might ask users to rate different features

Stick with convention by using a 5-point scale, and if possible capture not just the ratings, but the reasons behind them.

It’s a good idea to ask users to explain their ratings

5. User tasks

Tasks are at the heart of UX, because a product that doesn’t support the user’s tasks isn’t going to provide a very good user experience. Metrics for user tasks should be captured directly after a user has attempted a task. This usually means following usability testing, or within a user’s session.

I’ll admit that I’ve cheated a little by collectively referring to ‘user tasks’ as a metric because in reality there are a number of user task metrics that you might focus on. These include:

  • Completion rate – The percentage of users who are able to successfully complete a task.
  • Error rate – The percentage of users who made an error or mistake during a task. For example, navigating to the wrong part of a website.
  • Average number of errors – The number of errors or mistakes users made on average during a task.
  • Time on task – The length of time it took users to complete a task. This is especially useful for measuring the potential impact on user productivity.
  • Ease of completion – The ease with which users were able to complete a task. The Single Ease Question (SEQ), shown below, is a good way to capture this.
The Single Ease Question is a good way to capture the ease of completing a task
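Given raw usability-test observations, the task metrics above are straightforward to compute. A minimal sketch in plain Python; the field names and data are hypothetical:

```python
from statistics import mean

# One record per usability-test participant for a single task
# (hypothetical data: success, number of errors, time in seconds)
results = [
    {"success": True, "errors": 0, "seconds": 42},
    {"success": True, "errors": 2, "seconds": 65},
    {"success": False, "errors": 3, "seconds": 90},
    {"success": True, "errors": 1, "seconds": 50},
]

n = len(results)
completion_rate = 100 * sum(r["success"] for r in results) / n
error_rate = 100 * sum(1 for r in results if r["errors"] > 0) / n
avg_errors = mean(r["errors"] for r in results)
# Time on task is usually reported for successful attempts only,
# since failed attempts end for different reasons
time_on_task = mean(r["seconds"] for r in results if r["success"])
```

Keeping the raw per-participant records around, rather than only the aggregates, means you can always recompute a metric later or segment it by user group.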

6. Product description

How would you describe a Ferrari sports car? Exciting? Thrilling? Attractive? How about a Toyota? Practical? Reliable? Dull? The words that we use to describe a product or service can be very telling, and are a great window into the user experience provided.

How people describe a product, can be very telling

A great way to capture product descriptions is to use Microsoft’s product reaction cards. This involves asking users to pick up to 5 words, from a long list of adjectives, that they feel describe a product or service. You can use product reaction cards following usability testing, and of course with existing users. Take a look at my guide, Capturing user feedback with Microsoft’s product reaction cards, for a complete how-to.

Product reaction cards are a great way to find out how users view a product
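Tallying the picked cards is a simple counting exercise. A quick sketch with hypothetical picks (the card words here are illustrative, not the full Microsoft list):

```python
from collections import Counter

# Hypothetical sessions: each participant picked up to 5 reaction cards
picks = [
    ["Useful", "Clean", "Slow"],
    ["Useful", "Confusing"],
    ["Useful", "Slow", "Frustrating"],
]

counts = Counter(word for session in picks for word in session)
print(counts.most_common(2))  # the words chosen most often
```

The most frequently chosen words give you a quick qualitative summary, and tracking how the tally shifts between rounds of testing shows whether perceptions are moving in the direction you want.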

Which UX metrics to use?

I’ve listed 6 key UX metrics to focus on, but don’t think that these are the only metrics you can use to evaluate the UX of a product or service. So, which metrics should you use? Well, as is so often the case with tricky questions, the answer, I’m afraid, is “it depends”. It depends on your goals. It depends on what you can and can’t measure. It depends on what success looks like.

Using a framework is a great way to identify the right metrics. A very good one is Google’s HEART framework. You should look to use a mixture of objective measures, such as conversion rate, sales and registrations, along with subjective measures, such as recommendations, satisfaction and ratings. Along with capturing the what, it’s also imperative that you capture the why. Knowing that user satisfaction has dipped over the last 30 days is important; knowing why it has dipped is even more important. A great way to capture this sort of qualitative information is through open text questions, and of course by going out and speaking to your users as frequently as possible.

Finally, be mindful that it’s outcomes that you should ultimately be striving for. Don’t take your eyes off the real prize by chasing UX metrics. Yes, you should be looking to improve the UX of your product or service, but ultimately a business is going to be more interested in sales, or the number of customers. As Ryan Singer of Basecamp reminds us, you should be focusing on the outcomes, the things that UX does that matter, not just the UX.

If people don’t listen to you when you advocate for UX, stop advocating for UX. Find the thing that UX does that matters and advocate for that. How does the change affect market fit, or conversion, or word of mouth, or cost, etc.


Image credits

Aircraft dials photo by Mitchel Boot on Unsplash
Netpromoter diagram from CustomerGauge





Source link http://www.uxforthemasses.com/ux-metrics/
