
What is Atomic UX Research?

A new way to organise UX knowledge in an infinitely powerful manner

In short, Atomic Research is the concept of breaking UX knowledge down into its constituent parts:

The Atomic Research model — a funnel from data to conclusions, then around again

Breaking knowledge down like this allows for some extraordinary possibilities.

How It Started

Last year I was working for a FTSE 100 tech company. The issue we were trying to solve was how to store and distribute UX learnings in a way that everyone in the business could use and benefit from.

As it stood, the UX team, BAs and PMs would run experiments, then write up what they learned and how they used that knowledge. These write-ups were normally produced as PDFs, Google Docs or slide decks, and then filed away in Google Drive.

That was all fine until someone else came to work on a feature and needed to find out what we already knew; it was hard to reuse those findings for another project.

Sound familiar?

We asked: “What if, instead of documents gathering dust in files and folders, our UX knowledge was in a searchable and shareable format?”

Easy, right? Instead of putting our research into PDFs, we'd put it into some kind of online repository, maybe a wiki?

I started researching the repositories out there for something we could use to make our research taggable and searchable. A few systems claimed to do this, but it became obvious that they were all aimed at smaller companies doing small, self-contained projects. The categorisation and search just weren't up to dealing with large-scale projects.

"Breaking knowledge down into experiments, facts, insights and conclusions allows for some extraordinary possibilities."

Research is often very specific to the area you are researching. That seems like an obvious and pointless statement, but it is important. Say I ran some research and one of the outputs was that green was much more effective than red on the call to action. That means it's more effective in that very particular area, or for a certain persona… or both. It doesn't mean we should change the colours of the whole UI.

The repositories out there either didn't let you give proper provenance to the research, or went the other way and gave you no way to discover and utilise research outside tiny walled gardens that were no better than PDFs in a shared drive.

What we needed was the ability to:

I was talking to a colleague about this problem and — as UX designers are wont to do — we started breaking down what research is into simple parts. I have to give a lot of credit to this colleague, David Yates, as I think it was he who first talked about how you could separate data from insights.

He referred to Maslow's Hierarchy of Needs. As we talked we realised we could break an item of knowledge into three or four parts. This idea of 'lots of small signals leading to larger discoveries' made me think of Atomic Design.

As we discussed how this could work, and the benefits of breaking down research like this, I knew we had discovered something important.

So important it had been done before! Ever heard of the DIKW hierarchy (data, information, knowledge, and wisdom)? We'd accidentally reinvented an existing and well-respected scientific data model that is at least 60 years old!

Still, that just confirmed to me that this was a good way to look at UX research. Going around saying 'DIKW' (which most people seem to pronounce 'dickwee') isn't brilliant, and our model was slightly different, so I believe Atomic Research is a better term. When I draw the comparison to Atomic Design, people au fait with that method tend to get it.

I’ve been using the Atomic Research principle for nearly a year now and find it an incredibly useful way to think about product knowledge.

So what is Atomic Research?

Atomic Research in Practice

Atomic research in practice — how it looks with real knowledge.

Experiments — “We did this…”
The experiments from which we have sourced our facts.

Facts — “…and we found out this…”
From experiments, we can glean facts. Facts make no assumptions; they should never reflect your opinion, only what was discovered or the sentiment of the users.

For example: 3 in 5 users didn’t understand the button label.

Insights — “…which makes us think this…”
This is where you interpret the facts you have discovered. One or more facts can combine to create an insight, even if they come from different experiments. Some facts might disprove an insight.

For example: The language used on the buttons isn’t clear.

Conclusions — “…so we’ll do that.”
Conclusions are your recommendations for how to act on the valuable insights you have gleaned from the facts. The more insights connecting to a conclusion, the more evidence you have of its value. This helps when prioritising work.

For example: Let’s add icons to the buttons.
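The four building blocks and the links between them can be sketched as simple data records. Below is a minimal illustration in Python; the class and field names are my own assumptions for illustration, not the schema of any particular tool:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Experiment:
    description: str                 # "We did this..."

@dataclass
class Fact:
    statement: str                   # "...and we found out this..."
    source: Experiment               # every fact is tied to the experiment it came from

@dataclass
class Insight:
    interpretation: str              # "...which makes us think this..."
    supported_by: List[Fact] = field(default_factory=list)

@dataclass
class Conclusion:
    recommendation: str              # "...so we'll do that."
    supported_by: List[Insight] = field(default_factory=list)

# The running example from the article:
test = Experiment("Usability test of the checkout buttons")
fact = Fact("3 in 5 users didn't understand the button label", source=test)
insight = Insight("The language used on the buttons isn't clear",
                  supported_by=[fact])
conclusion = Conclusion("Add icons to the buttons", supported_by=[insight])
```

The key design choice is that a fact keeps a link back to its source experiment, so provenance travels with the knowledge rather than living inside a report.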

Multiple sources mean better decisions

One of the first benefits I noticed from this method is how more than one fact can support or refute an insight, and more than one insight can support or refute a conclusion.

The more facts that ultimately lead to a conclusion, the more confident you can be about that route forward.

A fact can be understood in multiple ways, and there may be several conclusions to draw from an insight. Therefore one fact can feed many insights, and an insight can feed many conclusions.

It doesn't matter, as long as we keep testing them, generating more evidence and proving which ones are correct. The more evidence we have, the more can be linked up to prove or disprove an insight.
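One way to make "more facts means more confidence" concrete is to count the distinct facts that ultimately lead to a conclusion through its insights. This is a hedged sketch using plain dictionaries; the data is hypothetical and the counting heuristic is my own, since the article does not prescribe a scoring formula:

```python
# Each insight lists the ids of facts that support it; each conclusion
# lists the insights that support it. All entries here are hypothetical.
insights = {
    "unclear-labels":      {"facts": ["f1", "f2"]},   # supported by two facts
    "users-scan-not-read": {"facts": ["f2", "f3"]},   # note f2 supports both insights
}

conclusions = {
    "add-icons": {"insights": ["unclear-labels", "users-scan-not-read"]},
}

def evidence_score(conclusion_id: str) -> int:
    """Count the distinct facts that ultimately lead to a conclusion."""
    facts = set()
    for insight_id in conclusions[conclusion_id]["insights"]:
        facts.update(insights[insight_id]["facts"])
    return len(facts)

print(evidence_score("add-icons"))  # 3 distinct facts: f1, f2, f3
```

Counting distinct facts (a set, not a sum) stops one well-connected fact from inflating a conclusion's apparent support.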

The best thing… This works across multiple experiments!

Because what we discovered is linked to, but not reliant on, how we discovered it — and that in turn is linked to, but not reliant on, what we did next — we can use facts from several experiments to support a single insight. We can take insights from anywhere to create a conclusion. We can spot patterns of results from anywhere in an organisation to guide us into the future.

It might be that the experiment that first led to an insight is long forgotten, no longer relevant. But evidence from other sources continues to support that insight, bolstering it and enabling it to remain a truth.

The results are no longer held in the little bubble of one specific piece of research, and I can give as much evidence as possible to back major decisions.

Research is no longer linear

Once we have come to a conclusion, that needs to be tested too.

Let's say we have an insight that says people don't understand our buttons. One conclusion might be to add icons to those buttons. I ran a user test that seemed to suggest the icons aided comprehension, so now I want to run a split test on the live system. The data comes in and shows that in reality this didn't work — damn!

"Holding insights separate and independent of their sources means they can be constantly re-tested and allowed to live and die by the evidence."

But the good news is I can use that data to disprove my conclusion while leaving the insights that led to it intact. In fact, the failed test's data might weaken some insights but actually strengthen others — it might prove that another insight is the correct one.

It certainly helps us get a clearer picture of how to improve our products moving forwards.

Traditional reporting methods are stuck in a moment in time. "Our research told us this…" might have been true when that document was written, but it's unlikely to have been updated when that finding was discovered to be incorrect a few quarters later.

Holding insights separate and independent of their sources means they can be constantly re-tested and allowed to live and die by the evidence.
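That re-testing loop can be sketched as an evidence log per insight: entries accumulate for and against it over time, and the insight stands or falls on the balance, independent of any single experiment. The entries and the simple tally rule below are my own illustration, not a rule from the article:

```python
# Evidence for and against one insight, gathered over time from
# different (hypothetical) experiments.
evidence = [
    {"source": "user test, Q1",  "supports": True},
    {"source": "split test, Q2", "supports": False},  # the live test that failed
    {"source": "survey, Q3",     "supports": True},
]

def insight_stands(evidence_log) -> bool:
    """An insight survives while supporting evidence outweighs refuting evidence."""
    support = sum(1 for e in evidence_log if e["supports"])
    refute = len(evidence_log) - support
    return support > refute

print(insight_stands(evidence))  # True: 2 supporting entries vs 1 refuting
```

A real tool would likely weight evidence by recency or rigour rather than a flat count, but the principle is the same: the insight's status is recomputed from its current evidence, not frozen in a report.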

This leads to what I think is the most important benefit:

Atomic research forces evidence-based thinking

I can’t create a conclusion if I don’t have insights that support it.

I can’t create insights without facts.

The more sources I have for each one, the more confident I can be about my conclusions.

Of course, I can cheat and say that a fact supports my insight (or just be misguided), but it will be obvious to anyone looking that it doesn’t.

Atomic Research gives provenance to my assertions.

Tools to practise Atomic Research

I’ve been using Atomic Research in my own work for nearly a year now.

For most of this time I've been doing it manually: literally sticky notes on whiteboards with hand-drawn lines. This is useful for playing with findings in a small way, but it's temporary and not very shareable.

A step up is to use mind-mapping tools such as draw.io. This is longer-lasting but still very time-consuming and massively limited.

A screenshot of the upcoming Glean.ly beta Atomic Research tool.

It was obvious that for this method to have real value, it needed a proper tool.

"Atomic research forces evidence-based thinking"

I started working with developer David Barker to help me build this out as a working tool, and we're hoping to release it publicly soon under the name Glean.ly.

In the meantime we’ve recruited some private beta testers to see how it works for large internal teams.

I need to give a shout out to Monzo, Moo.com and Turo for helping us. I am overjoyed that such big names and well respected UX teams see the benefit in what we are doing.

Find out more at Glean.ly

If you are interested in more detail, watch my talk about Atomic UX Research at UX Brighton (20 mins):

Further reading:

English

Atomic research in the European Commission — a UX case study
Foundations of Atomic Research, Tomer Sharon
Atomic Design and Atomic Research: could the combination be nuclear?

Español
Atomic UX Research: como armazenar e distribuir os aprendizados de UX
Video en español de Atomic UX Research

Originally posted on Daniel's Medium page.

UX Consultant, a passionate advocate of atomic UX research and Co-Founder of Glean.ly
