Learnings from using hypothesis backlogs: A follow up

Melanie Hambarsoomian
Nov 18, 2018

After I wrote about hypothesis backlogs, Ian Miller suggested doing a follow-up. Here are some of my reflections.

People might not directly contribute, but this doesn’t mean engagement is low. It needs an owner.

I had intended this backlog to be something anyone could contribute to. It turned out that I really needed to own it and add to it on others’ behalf. This wasn’t because people weren’t interested: lots of people wanted to understand what was in it, we prioritised it with Product, and some wanted guidance on setting up their own. They just didn’t create tickets in it themselves.

This was different from what I intended, but it worked perfectly well for me to act as the owner. I knew what was in the backlog, so I could fold new ideas into existing entries when something relevant already existed. The backlog needs a clear owner.

Disproven or parked hypotheses don’t need to disappear, but they can be grouped somewhere else.

Disproven hypotheses are still a great outcome. You’ve saved the business the cost of pursuing something that is either a) not a problem, b) not the right solution for a proven problem, or c) not enough of a problem compared to other opportunities. These are success stories and should be celebrated. This is what discovery is about. And while they may not be a priority now, they may become one later.

Anything you’ve done enough discovery on to determine isn’t a priority right now can go into another bucket. I used a separate column on a Jira board. Then, when people ask “why aren’t we doing that now?”, you can walk them through the data showing that other things are a higher priority, and you avoid redoing discovery for the same thing.
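One way to sketch this bucketing, whether you use Jira columns or any other tool, is a status field on each backlog item so parked and disproven hypotheses stay queryable instead of disappearing. This is purely illustrative; the names and fields here are my own, not part of any standard template:

```python
from dataclasses import dataclass, field
from enum import Enum

class Status(Enum):
    """Buckets mirroring columns on a board."""
    ACTIVE = "active"
    PARKED = "parked"        # not a priority now; may return later
    DISPROVEN = "disproven"  # discovery showed it's not worth pursuing

@dataclass
class Hypothesis:
    summary: str
    status: Status = Status.ACTIVE
    evidence: list = field(default_factory=list)  # notes/links from discovery

def bucket(backlog, status):
    """Return the hypotheses sitting in a given bucket."""
    return [h for h in backlog if h.status == status]

backlog = [
    Hypothesis("Move primary CTA to bottom of page"),
    Hypothesis("Split app in two", status=Status.PARKED,
               evidence=["Other opportunities scored higher this quarter"]),
]

parked = bucket(backlog, Status.PARKED)
```

The point of keeping `evidence` attached is exactly the “why aren’t we doing that now?” conversation: the data behind the parking decision travels with the item.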

A hypothesis backlog’s value is realised in actually trying things, learning and delivering value. If you aren’t tackling hypotheses, you’ve got some other things to tackle.

Even if everyone is keen to work on the hypotheses, if they’re not being prioritised or worked on, that’s a warning sign. A few things could be blocking you:

  1. Tech debt. I’ve come across tech debt blocking even simple design tweaks. In this case, the tech debt needs to be paid down before the hypothesis can be addressed.
  2. A design that is not scalable. You can’t address a problem about navigation if the rest of your UI needs a major overhaul. Design needs to be scalable. Multiple different band-aid fixes will lead to a Frankenstein UI and throwaway work. The UI needs to be thought through as a bigger overarching system. Yes there’ll be tweaks along the way but it has to scale. At some point, the backlog might be so big that it just points to a UI overhaul (I don’t mean reverting to a waterfall process, I mean having a new bigger vision).
  3. Inability to measure success. If, as a team, you’re unable to collect the data necessary to prove or disprove a hypothesis, then you have what I’ll call ‘data-debt’: you’re unable to measure what you want. Maybe you don’t have the right tools. Maybe the data is set up incorrectly. Maybe it’s impossible to get budget for user-testing participants. Like tech debt, you might need to fix this before you can proceed.
  4. No safe to fail plan. If you can’t take risks and don’t have room to deal with things that don’t work, you might not be in a good position to test and learn.
  5. Not enough room to experiment. Maybe you are focussing so much on delivery, firefighting, or similar problems that you don’t have time to look ahead and test out new things.
  6. Other blockers: perhaps it’s too many stakeholders that need to sign off, strong legal restrictions or offline processes that need to be changed. Perhaps culture doesn’t welcome experimentation. One way to tackle this is to build the case for change in these areas.

There is also a point where you have too many hypotheses and tackling them one by one won’t improve your product enough. There’s a tipping point where you’re just playing catch-up (which really means you’re falling behind). This can be an indicator that the product as a whole is not in good shape.

Once you have prioritised, figure out measures of success.

It doesn’t make sense to define measures of success for everything in the backlog; it makes sense to groom and refine the top-priority items. But that in itself is an important step before and during design. If you can’t figure out how you will prove or disprove a hypothesis, it needs more refinement.

For example, say I tell someone that a particular vitamin supplement is going to help my health. If I have no measures of success, then why am I even taking it? Do I know its effect is proven because of clinical studies? Or because of tangible benefits that I am expecting to see in my own health?

In my experience, engineering teammates weren’t sure about the difference between a problem vs solution hypothesis.

There are different levels of hypotheses in terms of granularity and confidence. For example:

Low level / solution: We believe the primary CTA should be at the bottom rather than the top of the page, to follow people’s natural reading pattern and therefore increase its visibility.

High level / strategic: We believe the app should be split into two to give targeted experiences (according to x research) and to increase performance (think when Foursquare split into Foursquare and Swarm).

The first is something an engineer can essentially build once it’s been tested. The second isn’t necessarily solved within the engineering realm first; it needs wider research and discovery. That research might not even involve prototyping: it might be about understanding people’s needs more deeply.

There’s a difference. And that leads me to say…

Realise when something is actually a business case.

There are some hypotheses that are BIG. They need a business case in their own right. In a previous role, I proposed that business cases should be reviewed more than once a year. They should be made at ANY time in the year, because needs emerge all the time. Discovery happens all the time. Users and the market don’t follow a once-a-year planning cycle.

Your Product vision and strategy will help you refine which hypotheses are worth actually looking at.

The team needs to refer to the vision and strategy to help with prioritisation. This is the guiding map. Without it, you run the risk of building on a product without knowing where it’s going, not maximising its USPs, and delivering incremental value but never BIG value. You want to pursue the hypotheses most aligned with that vision, OR those that might push the vision to become something bigger or something else.

One approach won’t suit all.

I had some colleagues ask me about setting up their own hypothesis backlog. I don’t think there’s a strict formula that dictates what tool to use and how to manage it. I think the approach depends on the team.

I’d love to hear about what others have learned or what hesitations they have.
