Cumulative Research and the “Single Source of Truth”

How to build a self-aware company

Luke Kelly
7 min read · Nov 12, 2018

“What should we do now?”

It’s the question that every single company has to ask itself. It’s the underlying agenda of everything from a board meeting to a design sprint kick-off, and the root of the bureaucratic meeting culture that can still plague big organisations. It undermines morale when left unanswered, and torpedoes innovation when the answer isn’t compelling.

Generating an answer can involve thousands of man-hours spread across dozens of disciplines. It’s also the motivation behind every activity that can (even loosely) be termed “research”: the subtext of every survey; the north star of every lab test; and the real reason companies talk to their customers.

We recently tried to answer it at Shell, in the specific context of the software used to run our stations. The starting point was obvious enough — look at what research has been done already and go from there — but we quickly ran into a very familiar problem: how to combine all of these disparate fragments into something resembling a coherent whole?

Shell’s understanding of the oil business is vast. It goes back over a century, and the company today is the most recognisable oil brand worldwide for a reason: it knows the industry. In some ways you could even say it invented the industry. So why was it so hard to access and use this knowledge?

When we took a step back, the answer was clear: all of this prior work had been conducted for a specific purpose — it was nobody’s responsibility to join it all together. We had video ethnography from boutique research agencies; personas developed by global consultancies; feedback reports from territory managers in 40+ countries; sales, marketing, survey and analytics data. The answer to our question “what should we do now?” was undoubtedly in there somewhere. The hard part was finding it. We didn’t want to add yet another source of insight to the mix by doing our own research; we wanted to build on what had gone before.

The solution we came up with was to process everything we could get hold of into a “single source of truth”: a comprehensive database capable of structuring everything the company knew about this particular area. The idea was that if we could find the right “grammar” to link together all of this knowledge from all of these sources, then we’d already have a list of the most pressing issues — as well as an indication of where the gaps in our understanding were.

Taking partial inspiration from what Tomer Sharon has called Atomic Research, we decided that each insight in the database had to be actionable. Every insight should be able to help a total newcomer to the company answer the question “what should we do now?”. If something isn’t actionable, it’s not research.

As well as functioning as a discrete invitation to action, each insight had to make sense as a small piece of a larger whole. Each entry had to be cumulative: contributing to our overall understanding rather than just adding to the noise.

We developed these two principles into 5 key criteria that defined what an insight looked like. To start with the most basic, everything had to be:

Descriptive

To be actionable an insight has to be communicated with enough detail for someone to do something. This could mean it’s a video; it could mean it’s an audio clip; or it could just be a description in an easily digestible format (As a… I need…). The key requirement is that it’s rich enough to be “self-contained”, meaning someone with the right skills could ideate an action without any other input.
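As a rough sketch of what such a self-contained record could look like (the field names and the self-containment check below are our own illustrative assumptions, not a real schema):

```python
# A minimal, hypothetical insight record — field names are assumptions,
# not an actual schema.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Insight:
    insight_id: str
    description: str                  # "As a… I need…" or a richer summary
    media_url: Optional[str] = None   # link to a video or audio clip, if any

    def is_self_contained(self) -> bool:
        # Crude proxy: either the description follows the "As a… I need…"
        # pattern, or it points at rich media someone could ideate from.
        text = self.description.lower()
        return ("as a" in text and "i need" in text) or self.media_url is not None

example = Insight(
    insight_id="INS-001",
    description="As a cashier I need the end-of-day report to total itself, "
                "so I don't have to re-count the till by hand.",
)
print(example.is_self_contained())  # True
```

Once an insight is discrete in this way, it’s ready to be: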

Contextualised

This means making sure each insight can be related to all the others, as well as to the business itself. Categorising in this way when you have 20 insights preempts the main problem of having thousands: nothing can be actionable if it can’t be found. We use a system in which each insight is coded according to both the themes which naturally emerge over time (literacy; merchandising; reporting) and the level of Shell employee it directly affects (cashier; manager; global manager).

Working for a while on a particular theme allows a squad to build momentum and expertise on a related set of problems; working at a particular level lets us focus on the subset of customers who most need our help. When you combine the two, your solutions can cascade value upwards — solve a problem for one employee in the right way, and you’ve also generated a new metric and a new data source for whoever is responsible for supervising them.
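As a sketch of how that coding might work in practice (the records, themes and levels below are invented for illustration), filtering by theme or by level then becomes trivial:

```python
# Hypothetical insight tags — themes and levels are illustrative only.
insights = [
    {"id": "INS-001", "theme": "reporting",     "level": "cashier"},
    {"id": "INS-002", "theme": "reporting",     "level": "manager"},
    {"id": "INS-003", "theme": "merchandising", "level": "cashier"},
    {"id": "INS-004", "theme": "literacy",      "level": "global manager"},
]

def by_theme(theme):
    """Everything a squad needs to build momentum on one set of problems."""
    return [i for i in insights if i["theme"] == theme]

def by_level(level):
    """Everything directly affecting one tier of employee."""
    return [i for i in insights if i["level"] == level]

# A squad working the "reporting" theme at cashier level would start here:
print([i["id"] for i in by_theme("reporting") if i["level"] == "cashier"])
# ['INS-001']
```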

By itself, however, this still isn’t enough to decide what we should be doing next. Our insights still need to be:

Prioritised

Everything a research team does should be attributable to a company’s bottom line. That’s what prioritisation really is: assigning each insight a monetary value so that they can all be meaningfully compared. This figure might come from the number of calls made about the issue to a contact centre (and the accompanying cost); the time wasted by employees who could be doing other (more valuable) things; even the revenue lost in sales pitches which fail because of the issue described. Making this figure truly rigorous and compelling involves analytics experts, data scientists, quantitative researchers and several other disciplines besides.
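As a back-of-the-envelope sketch of how such a figure might be assembled (every input and rate below is an assumption; in practice the numbers would come from contact-centre, analytics and finance data):

```python
# Hypothetical annual cost of one issue, used to rank insights against
# each other. Every number here is an assumption for illustration.
def annual_cost_of_issue(calls_per_year, cost_per_call,
                         hours_wasted_per_year, loaded_hourly_rate,
                         lost_revenue_per_year):
    return (calls_per_year * cost_per_call
            + hours_wasted_per_year * loaded_hourly_rate
            + lost_revenue_per_year)

print(annual_cost_of_issue(
    calls_per_year=1_200, cost_per_call=6.50,              # contact-centre cost
    hours_wasted_per_year=4_000, loaded_hourly_rate=18.0,  # employee time
    lost_revenue_per_year=25_000,                          # failed sales pitches
))  # 104800.0
```

Getting it right, however, is essential for making an insight: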

Trackable

Prioritising properly at the outset means we have a ready-made set of KPIs for whatever happens next. If the metrics we used to prioritise an insight are nudged in the right direction by the prototype it inspires, then we have the evidence we need to scale up a solution.

Tracking things properly is also at the heart of another principle behind our approach: “tell us once”. One of the most frequent pushbacks against user research in big organisations is the risk of upsetting customers who are tired of continually raising the same issue. This happens because the interactions with those customers are siloed off into various projects and initiatives, rather than feeding into a single source of truth that everyone can work from. By hosting our library in the cloud, we’re able not just to record, curate, and share everything we’ve learned, but also to track which insights are currently being actioned by a particular team — and which of them are moving the KPIs in the direction we want.
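To sketch what that could look like (KPI names, values and the 10% threshold below are all assumptions), the metrics used to prioritise an insight double as the success criteria for the prototype it inspires:

```python
# Hypothetical cost-style KPIs (lower is better) before and after a prototype.
baseline        = {"calls_per_week": 25, "hours_wasted_per_week": 80}
after_prototype = {"calls_per_week": 14, "hours_wasted_per_week": 62}

def kpi_movement(before, after):
    """Relative change per KPI; negative values mean the metric improved."""
    return {k: (after[k] - before[k]) / before[k] for k in before}

movement = kpi_movement(baseline, after_prototype)
print(movement)  # {'calls_per_week': -0.44, 'hours_wasted_per_week': -0.225}

# Evidence to scale up: every KPI moved down by at least 10%.
print(all(change <= -0.10 for change in movement.values()))  # True
```

This is the key to making things: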

Scalable

What works for 1 squad doesn’t necessarily work for 50 — but if you ask the 1st to work like the 50th then you can tackle that problem at the outset. Our hope is that our system can support everything from a fully-fledged dev team across a whole quarter to a single employee with an hour free on a Friday afternoon. If they’re all sourcing insights from the same shared space in which they store and track their solutions, then maybe this will open up whole new ways of working. Or maybe it will crash and burn and we’ll all get fired. One of the two.

Conclusion

Ultimately this is how we’re trying to build a truly self-aware company. The point of research is to make sure that what customers say results in a concrete action. We want the entire end-to-end process — from the first sprint kick-off to the board meeting in which successes are shared or failures learned from — to sing from the same hymn sheet.

If we get it right, we’ll have a picture of our company that gets more detailed every single time we have contact with our customers via any channel. Everyone from a territory manager to a customer service rep would become a researcher, with every customer interaction becoming a piece of research. All we need to do is make sure those interactions are recorded in a standardised manner that facilitates action.

And once we figure out how to do that, we’ll let you know…

Luke Kelly (Research), Xavier Akram (Design) and Orn Srimongkolkul (Product) work at Shell Digital Ventures in London


Luke Kelly

Founder of Filo.io, the UX Research Repository which lets you build a custom repository on top of our platform.