Should you create one user experience metric?

Tomer Sharon
5 min read · Feb 25, 2020
Assuming a typical development lifecycle involves four steps (having an idea, building it, launching it, and learning from users about it), this is how I generally organize the world of user research. This post is about metrics.

Generally speaking, the world of user research can be organized into three areas: tactical research, strategic research, and metrics.

  • Tactical: Research that happens (or is at least most effective) between designing something and building it with a goal to identify strengths and weaknesses of the thing, iterating, and improving it. For example, asking four people to complete tasks with a prototype for a new app.
  • Strategic: Research that happens at any point in the lifecycle of a product or service (yet is most meaningful prior to designing and building anything) with a goal to understand people, their behavior, and motivation, and to uncover their current needs, problems, and workflows. For example, observing salespeople’s interactions with potential clients to identify pain points and opportunities.
  • Metrics: Quantitative research and data collection that happens after a product is launched with a goal to understand what is happening when people use it. For example, measuring user retention for core functionality of a product.

Discussion about creating an experience score for a product focuses on the metrics category and refers only to research that happens after a product has launched. Important and relevant questions product teams have at that stage are:

  1. What is the experience we have created for our users? Is the experience good or bad?
  2. What are experience trends over time? Are we getting better or worse?
  3. How does the experience compare across a portfolio of products and services?

A common answer to all of these questions is creating an overall user experience metric. Since there is no single metric that provides a holistic understanding of the user experience, such a score would be generated from multiple experience metrics using an agreed-upon formula and a weight per metric.
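To make the idea concrete, here is a minimal sketch of what such a formula could look like. The metric names, scales, and weights below are hypothetical; as I describe later, agreeing on the real ones is the hard part.

```python
# A minimal sketch of a composite experience score. The metric names and
# weights are hypothetical, not ones any team has actually agreed on.

def composite_experience_score(metrics, weights):
    """Weighted average of experience metrics already normalized to a 0-100 scale."""
    total_weight = sum(weights.values())
    return sum(metrics[name] * weight for name, weight in weights.items()) / total_weight

# Hypothetical inputs, each normalized to 0-100.
metrics = {"satisfaction": 82.0, "seven_day_active": 64.0,
           "adoption": 40.0, "retention": 71.0}
weights = {"satisfaction": 0.4, "seven_day_active": 0.3,
           "adoption": 0.1, "retention": 0.2}

print(round(composite_experience_score(metrics, weights), 1))  # 70.2
```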

My experience over the years at Google, WeWork, and Goldman Sachs, and in mentoring hundreds of startups globally, has led me to a clear conclusion. So, should you create one user experience metric?

Yes

When I was Head of UX at WeWork, I created such a metric. We called it the Member Experience score, and we were eventually able to provide a consistently measured, ongoing experience score for every WeWork building, city, and region, and even for all of WeWork. It covered dozens of different aspects of what created an experience at WeWork (physical spaces, digital products, human-to-human interaction), and we applied multiple experience metrics to measure these aspects and features.

General Managers, Community Managers, and executives alike had a score at their fingertips that allowed them to become more intelligent about their customers, understand trends in the experience, evaluate planned changes, and respond to meaningful changes. The experience we created for our users was visible.

Ideal, right?

No

You cannot accuse Google of lacking usage data. When I was working there as a researcher on Google Search, we didn’t have an aggregated score. Instead, we had about 120 different metrics that told us a lot, and I mean a lot, about the user experience at any given point in time. For example, for search results we had two very telling metrics we called long and short clicks. Long clicks were clicks where search users clicked a result and stayed on that page for a long time, or even never came back to the search results page. We thought that was a good thing because we believed users found what they wanted to find. Short clicks were ones that showed more erratic behavior: people quickly went back and forth between the results page and the links we gave them. We thought of that as a bad thing, since they couldn’t find what they wanted and so kept looking. Later on, our qualitative research proved us completely wrong, but that’s for a different article.
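For illustration only (this is a rough sketch of the idea, not Google’s actual implementation), classifying clicks this way boils down to a dwell-time threshold applied to simple session logs; the 60-second cutoff and the input shape below are my assumptions.

```python
from typing import Optional

# Illustrative sketch only, not Google's actual logic. Assumes simple logs of
# (click_time, return_time) pairs in seconds, where return_time is None when
# the user never came back to the search results page.

LONG_CLICK_THRESHOLD_SECONDS = 60  # hypothetical cutoff, not a real Google value

def classify_click(click_time: float, return_time: Optional[float]) -> str:
    """Label a result click as 'long' or 'short' based on dwell time."""
    if return_time is None:
        return "long"  # never returned to the results page
    dwell = return_time - click_time
    return "long" if dwell >= LONG_CLICK_THRESHOLD_SECONDS else "short"

clicks = [(0.0, 12.0), (30.0, None), (45.0, 200.0)]
print([classify_click(c, r) for c, r in clicks])  # ['short', 'long', 'long']
```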

Long and short clicks, as well as many other numbers, fed multiple fruitful discussions and led to better product decisions. I can’t remember a single score even being considered, and one can’t argue with the success of Google and its search product.

Ideal, right?

Maybe

Nobody at WeWork cared about the member experience metric. Creating a single aggregated score that people consider valid, reliable, and meaningful meant a lot of people needed to agree on the formula that created the score. Imagine the product that is WeWork. There were hundreds of big and small features and details that created that experience. Each might have been represented by a team that cared about it. Everyone had an opinion about the weight their feature should have gotten in the formula. When their feature didn’t get that weight, teams didn’t trust the entire score. Discussions about the formula tended to be detailed and overly complicated, which contributed to distrust and simply fatigued people. Long story short, a single user experience metric sounded like a noble idea, yet reality brutally punched it in the face.

Not having such a metric didn’t bother anyone at Google either. That said, with 120 different usage metrics, anyone could pick a number that happened to be stable, increasing, or decreasing that day and make a point about it in their next meeting. Somewhat chaotic, and it definitely does not answer most of the questions I specified above.

Then again, you don’t hear about organizations developing and using a single user experience metric, do you? Do they do the same with other kinds of metrics, such as KPIs? Have you ever heard of a single business score, or maybe one operational or technical performance metric? Not really.

Where do we go from here?

My conclusion: A tight combination of a few metrics

My experience is that while there seems to be a strong need to quantitatively compare the experience across multiple products, it is extremely difficult to reach agreement on a single experience metric. Instead, and this is something I am experimenting with these days, I am working from the assumption that there are four experience metrics that matter more than the rest, and I am providing a data visualization that allows senior leadership to compare their products using these metrics. The four metrics I picked are:

  • Satisfaction score: The mean user satisfaction with a product.
  • 7-day active (or any time unit that makes sense for a product): The ratio of users who used the product at least once in the last 7 days out of the total population of users.
  • Adoption rate: The ratio of users who used the product for the first time (in a given time frame) out of the total number of users.
  • Retention rate: The ratio of users who kept using the product in a given time frame out of the total number of users.

Note that all of these metrics are expressed as percentages (ratios) to allow for a normalized cross-product comparison.
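Under the hood, these four numbers can be derived from fairly ordinary usage logs. Here is a minimal sketch of one way to compute them; the input shapes, the 7-day window, and the exact retention definition are my assumptions for illustration, not a standard.

```python
from datetime import datetime, timedelta

# Minimal sketch under assumed inputs: `events` is a list of (user_id, timestamp)
# usage events, `ratings` maps user_id to a satisfaction rating, and `total_users`
# is the size of the full user population.

def experience_metrics(events, ratings, total_users, now, window_days=7):
    window_start = now - timedelta(days=window_days)
    first_use = {}            # user_id -> timestamp of first recorded use
    active_in_window = set()  # users with at least one event in the window
    for user_id, ts in events:
        first_use[user_id] = min(ts, first_use.get(user_id, ts))
        if ts >= window_start:
            active_in_window.add(user_id)

    new_users = {u for u, ts in first_use.items() if ts >= window_start}
    existing_users = {u for u, ts in first_use.items() if ts < window_start}

    return {
        # Mean satisfaction across users who submitted a rating.
        "satisfaction": sum(ratings.values()) / len(ratings) if ratings else None,
        # Share of all users active at least once in the last `window_days`.
        "7_day_active": len(active_in_window) / total_users,
        # Share of all users who used the product for the first time in the window.
        "adoption_rate": len(new_users) / total_users,
        # Share of earlier users who were still active during the window.
        "retention_rate": (len(active_in_window & existing_users) / len(existing_users)
                           if existing_users else None),
    }

now = datetime(2020, 2, 25)
events = [("a", now - timedelta(days=1)), ("a", now - timedelta(days=30)),
          ("b", now - timedelta(days=3)), ("c", now - timedelta(days=40))]
print(experience_metrics(events, {"a": 4, "b": 5}, total_users=3, now=now))
```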

Creating a single, or quadruple, experience metric will make the experience any organization creates for its users visible. Yet in order to track it over time, compare it across products, and, most importantly, use it to drive positive change, you must invest heavily in establishing trust (in you and in the measurement process), reaching agreement, and gaining buy-in and support. Otherwise, it doesn’t really matter what, how, and when you measure.


You can also find me on Twitter if you’d like to follow more of my research thinking!

