Hypothesis-Based Planning

Philipp Giese
December 21, 2022
5 min read

What are we going to do next? Lots of teams ask this question every day. I've certainly been part of planning sessions of every shape and form, be it just a quick check-in or a planning WEEK. They all had one thing in common: the plan rarely worked out.

The enemy of every plan

While planning something, we usually talk about what we want to achieve.

  • What problems have the most impact on our users?
  • Which parts of our architecture are holding us back the most?

This focus on what needs to happen is great. So why do plans so often not work out in the end? Because we're unaware of the most crucial part of every plan: all the assumptions that go into it.

We make many assumptions about the problems we want to solve.

  • The problem exists
  • The problem needs a solution
  • Our users want to use our product to solve the problem
  • The solution fits into our product strategy
  • (there are probably more)

Add to that list all of your assumptions about how your code works and how you think it will work in the future. Most plans fail because we treat our assumptions as if they were knowledge. When you treat an assumption as knowledge, you will never test it. And if we never test our assumptions, or aren't even aware of them, they become the most significant risk to our plans. If one of the assumptions in the list above does not hold, then whatever you're doing will not matter. You might be working on the wrong thing, creating a solution your users don't want, or adding bloat to your product by making it do something it shouldn't.

Treat plans as a set of hypotheses and test them

We need to accept the reality that we're going to make many guesses. That's fine. A good guess is 1000 times better than being stuck in analysis paralysis. The important part is that everyone knows we're guessing and that we take measures to get feedback as early as possible on whether a guess is correct. Working in small increments and deferring decisions can help with that.

If we continuously check whether our assumptions are correct, we are much more likely to end up with something that works. Might your initial plan change? Absolutely! And that's a good thing because we're not trying to blindly follow a plan but create value for our customers.

What and when to validate

Similar to deferring decisions, you don't want to constantly check every assumption. That's because not every assumption has the same impact. Instead, focus on the assumptions that would derail you completely. How can you do this?

Start tracking your hypotheses

The first thing you need to do is track your hypotheses. A simple template can look like this:

# Title

What is the assumption we're making, and why is that important?

## Impact

What impact would it have on the plan if this assumption turns out to be wrong?
The impact can range from "none" to "need to cancel."

## How to disprove

What can we observe that would show us that our assumption is wrong?
Is there a particular user behavior, KPIs we can track, or something else?

## Who can validate the hypothesis

Which people can validate this hypothesis?
If no one can validate a hypothesis, that is a sign that you should not be doing what you're about to do.

You might be tracking work with issues, tickets, epics, or any other format you like. It doesn't matter which style you prefer. Once you've formulated your hypotheses, link them to your planning artifacts. The important part is to do this on every level. Items representing a broad roadmap will have many hypotheses linked to them, while individual tickets might end up with one or none. That's fine. By relating your work plan to your hypotheses, you've already created transparency for everyone who wants to know. This way, people you don't regularly interact with can comment on your hypotheses and contribute valuable feedback!
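To make this a bit more tangible, here is a minimal sketch of how a hypothesis and its links to work items could be represented. The shape, all field names, and the example values are assumptions I'm making for illustration, not a prescription for any particular tool:

```typescript
// Illustrative data shape for a hypothesis, mirroring the template above.
// All names here are made up for the example.
type Impact = "none" | "minor" | "major" | "need to cancel";

interface Hypothesis {
  title: string;
  assumption: string;        // what we assume and why it matters
  impactIfWrong: Impact;     // what happens to the plan if the assumption is wrong
  howToDisprove: string;     // an observable signal that would falsify it
  validators: string[];      // people who can validate it (empty list = warning sign)
  linkedWorkItems: string[]; // ids of the epics, tickets, or issues it relates to
}

// Example: a hypothesis linked to one roadmap item and two tickets.
const exampleHypothesis: Hypothesis = {
  title: "Users want to export their data as CSV",
  assumption: "Support requests suggest users copy data into spreadsheets by hand.",
  impactIfWrong: "need to cancel",
  howToDisprove: "Hardly anyone triggers the export behind a feature flag.",
  validators: ["product manager", "support team"],
  linkedWorkItems: ["ROADMAP-12", "TICKET-341", "TICKET-342"],
};
```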

Rank hypotheses based on impact

There are a ton of ways to define impact. I choose what I believe is the easiest one: for each hypothesis, count how many work items it relates to, then sort the list in descending order. The hypothesis at the top has the highest impact because the most work depends on it. Should this hypothesis not hold, a lot needs to change, so make sure you get clarity around it as soon as possible.
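As a rough sketch of that ranking, reusing the illustrative shape from above (rankByImpact and RankableHypothesis are made-up names for the example):

```typescript
// Rank hypotheses by how many work items depend on them, highest first.
// Only the fields needed for ranking are required here.
interface RankableHypothesis {
  title: string;
  linkedWorkItems: string[];
}

function rankByImpact<T extends RankableHypothesis>(hypotheses: T[]): T[] {
  // Copy before sorting so the original list stays untouched.
  return [...hypotheses].sort(
    (a, b) => b.linkedWorkItems.length - a.linkedWorkItems.length
  );
}

// The first entry of rankByImpact(allHypotheses) is the hypothesis to
// validate first, because the most planned work depends on it.
```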

By regularly testing your top hypotheses, you can decrease the risk of your overall development process. You'll spend more time on things that matter and less time on stuff that doesn't have an impact. That's great news!

You can also use the list of hypotheses to make it clear to outside stakeholders when you're going down a precarious path, for example because you're working on something that can't be validated and therefore has a high risk of failure. Use that information to your advantage!


What do you think about this approach? While it is very similar to what engineering teams around the globe already do, it also adds more transparency in an area that is often overlooked. I'd like to hear about your experience, so tweet at @philgiese on Twitter (as long as it still exists).

About the author

You can find information about me in the about section. I also like to give talks. If you'd like me to speak at your meetup or conference please don't hesitate to reach out via DM to @philgiese on Twitter.

