The atomic unit of a research insight

Tomer Sharon
3 min read · Apr 8, 2016


When you conduct research with users, whether a usability test, interview, or field observation, you’re looking for answers to your research questions.

These research questions are knowledge gaps your team (or client) has identified. Research findings and insights are usually communicated through a report. The nature of research with users (and of these reports) is that you always learn more than you intended to. As a result, many reports include unrelated topics, insights, and findings that could prove useful in the future.

In an organization where multiple research studies are conducted every week, month, or year by a variety of researchers, these reports pile up quickly. Then, when a team member, executive, or client asks a question about a certain topic (e.g., what do we know about how our users book conference rooms?), the answer depends on the long-term memory of the researchers who happened to conduct related studies, or on someone mining through long reports trying to figure out which insights were meaningful. On top of that, ‘non-researchers’ make observations about users every day that often go undocumented.

My conclusion is that a report is not the atomic unit of a research insight.

A nugget, as I like to call it, is an observation gathered through research. The idea is that every time we sample the member experience, we can parse it into nuggets that are tagged for future use.

Here’s an example.

Let’s imagine a WeWork UX team member interviewed a WeWork member who decided to leave WeWork. Let’s also imagine that the interview was video recorded.

After the interview, a WeWork UX team member (not necessarily the one who conducted it) makes sense of the recording and creates nuggets.

Here is an imaginary example of what we do with a research observation and how we tag it (a rough data sketch follows the list):

  • Title: Exit interview with Primary Member, John Smith
  • Directory: drive.google.com/open?id=jhfg54hg45hg54khg (this is a fictional URL)
  • Date: April 6, 2016
  • Source name: #BenjaminGadbaw #ChristopherKennedy
  • Source type: #UX
  • Sensemaker name: #BenjaminGadbaw
  • Media type: #Video
  • Research method: #Interview
  • Nugget (the observation): #Conference room sofas make conference rooms feel silly
  • Observation Directory: youtu.be/HFDS74h_7?t=1m50s (fictional URL; notice the time stamp, which points straight to the nugget, so there’s no need to watch the entire one-hour interview)
  • Experience Vector (this is our experience “bottom-line” — did it improve our relationship with the member, worsen it, or have no effect?): #Negative
  • Magnitude: #Medium
  • Frequency: #High
  • Emotions: #Embarrassment #Amusement #Annoyance
  • Props: #ConferenceRoom #Couches #Chairs
  • Journey: #Membership
  • Characters: #PrimaryMember #Member #Client #Candidate
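
To make the structure concrete, here is a minimal sketch of how such a nugget record might be stored. The `Nugget` type and field names are my illustration for this post, not the actual Polaris schema:

```typescript
// A sketch of one nugget record, assuming the fields from the example above.
interface Nugget {
  title: string;
  directory: string;            // link to the full study materials
  date: string;                 // date of the session
  sourceNames: string[];        // who gathered the observation
  sourceType: string;           // e.g. "UX"
  sensemakerName: string;       // who parsed the session into nuggets
  mediaType: string;            // e.g. "Video"
  researchMethod: string;       // e.g. "Interview"
  nugget: string;               // the observation itself
  observationDirectory: string; // time-stamped link to the exact moment
  experienceVector: "Positive" | "Negative" | "Neutral";
  magnitude: "Low" | "Medium" | "High";
  frequency: "Low" | "Medium" | "High";
  emotions: string[];
  props: string[];
  journey: string;
  characters: string[];
}

// The conference-room example above, expressed as one nugget record.
const exampleNugget: Nugget = {
  title: "Exit interview with Primary Member, John Smith",
  directory: "drive.google.com/open?id=jhfg54hg45hg54khg", // fictional URL
  date: "April 6, 2016",
  sourceNames: ["BenjaminGadbaw", "ChristopherKennedy"],
  sourceType: "UX",
  sensemakerName: "BenjaminGadbaw",
  mediaType: "Video",
  researchMethod: "Interview",
  nugget: "Conference room sofas make conference rooms feel silly",
  observationDirectory: "youtu.be/HFDS74h_7?t=1m50s", // fictional URL
  experienceVector: "Negative",
  magnitude: "Medium",
  frequency: "High",
  emotions: ["Embarrassment", "Amusement", "Annoyance"],
  props: ["ConferenceRoom", "Couches", "Chairs"],
  journey: "Membership",
  characters: ["PrimaryMember", "Member", "Client", "Candidate"],
};
```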

Imagine 1,000 such nuggets. Properly tagged, well defined, easily searched and found. Beats any report.
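
Because every nugget carries the same tags, answering a question like “what do we know about conference rooms?” becomes a simple filter over the collection rather than a dig through reports. A hypothetical query, assuming an in-memory array `allNuggets` of the `Nugget` records sketched above:

```typescript
// Assumed to exist for this sketch: the full collection of tagged nuggets.
declare const allNuggets: Nugget[];

// Every negative observation involving conference rooms, across all studies.
const conferenceRoomNuggets = allNuggets.filter(
  (n) => n.props.includes("ConferenceRoom") && n.experienceVector === "Negative"
);
```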

Benjamin Gadbaw and I created the atomic research approach and the Polaris product and framework.

Related articles

Key Experience Indicators: How to decide what to measure?
The three most popular questions about Atomic Research
Foundations of atomic research
Continuous user research in 11.6 seconds
Measuring the WeWork Member Experience
Democratizing UX
Deciding what to work on

