Knowing when to launch: what's the balance between research and release-and-learn?

Melanie Hambarsoomian
8 min read · Nov 25, 2018

You could spend an eternity validating an offering or feature, but there's a tipping point where this becomes detrimental. So what's the balance between building confidence and getting something out there quickly? I was asked to run one of MOO's Research Guild sessions on this topic while I was on the team, and this is an extended summary. I know there's loads of info out there on this topic, from people more qualified than me! I'd love to hear your thoughts.

Examples of failures

  • No adoption / purchase
  • Adoption, but a very poor experience
  • Not performing to expected metrics (and were you using the right metrics to begin with?)
  • Feature isn’t findable
  • PR problem
  • Security issues

Keep in mind there’s a difference between:

  1. Creating something, seeing it not work, learning fast, and then being able to make changes in response.
  2. A big failure with a high cost that was avoidable, or didn't have to be as high.

The product team and I monitoring some key metrics following a feature release. I joke… Creative Commons image: https://bit.ly/2POOrOS

Potential causes of failure

  • Not solving the right problem, or there's simply no need
  • Too eager to release a feature or offering, skipping validation with actual people
  • The bigger picture no longer works, e.g. feature bloat (hence testing in context)
  • Too little, too late: your competitor is doing it better, faster or cheaper (think Microsoft Zune vs the iPod)
  • Not enough research, resulting in an avoidable problem

If research was done, perhaps it was:

  • Not tested enough
  • Tested in the wrong part of the product development lifecycle, e.g. usability testing instead of demand testing

(As an aside, there's an interesting analysis of why Starbucks failed in Australia that's worth looking up.)

Costs of failure

  • Cost of building
  • Customer dissatisfaction / losing customers or conversions
  • Brand impact
  • Loss of money, or budget targets not met
  • Cost to diagnose and fix problem
  • PR problems
  • Security or legal problems
  • Customer support costs

But there’s also a cost in launching something too late:

  • Cost of building
  • Release problems when a feature is sitting there (big releases = more risk of things going wrong and having to spend time fixing release-related issues)
  • It's no longer relevant: you've fallen behind competitors, launched too late and lost market share
  • Other dependencies waiting on that feature

There are more serious forms of failure

I've not focussed on these in this write-up. Many of us have the peace of mind of working on products where failure doesn't put anyone's safety at risk. The high-risk end of the spectrum includes emergency and medical products, where the cost of failure is simply too high.

The 2002 Überlingen mid-air collision was a huge tragedy. New anti-collision technology had been introduced, and it worked accurately in a technical sense. But it didn't work in its broader context: a number of factors (including conflicting instructions from the technology versus air traffic control, training gaps and the workload of air traffic control staff) led to two planes colliding.

I'm assuming the software itself was tested to a very high standard; the error was not with the software's accuracy. Could more research into the broader context have identified the need for more training before the crash?

Building confidence and lowering risk

PRO TIP: Add in memes and gifs to your article to identify yourself as a millennial, dilute the seriousness of your topic and contribute further to our shortening attention spans. Did I lose you?

Everything we do prior to launch is a way to build confidence, and some of these activities can only build so much. User testing, for example, is a great way to make sure things are working right, but it isn't the same as releasing a feature live: it tells you about usability, not about demand. Hopefully you're in a culture that understands that building confidence is not the same as a contract promising 100% success.

So each of these activities builds a different type of confidence (and you need to be clear on what each is for), but at some point you gotta put it out there to really see how it does (or kill it, if you have enough confidence that it's not working).

Building confidence:
More appropriate for new propositions / new offerings

  • Talk to the customers you want, not just those you have
  • Using the right methodology, e.g. demand testing rather than usability testing if you're not sure whether people will adopt the offering. This might mean not diving straight into designs and wireframes; perhaps a storyboard is all you need. You might want to use something like a fake door test to measure demand (there's a sketch of this right after the list)
  • Market sizing — is the market size big enough to warrant the investment?
  • Customer segmentation — are you validating the offering with the right segment / sub-segment?
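
As a rough illustration of the fake door idea mentioned above, here's a minimal sketch of the measurement side: the feature doesn't exist yet, and clicking the call-to-action simply records the interest. Everything here is an assumption for illustration: the /api/events endpoint, the #bulk-ordering-cta element and the event names are hypothetical placeholders, and in practice you'd wire this into whatever analytics tooling you already use.

```typescript
// A minimal fake door: the "bulk ordering" feature doesn't exist yet;
// clicking the call-to-action just records the interest.
// The endpoint, element id and event name are all hypothetical.

function trackEvent(name: string, detail: Record<string, unknown>): void {
  // Fire-and-forget; in practice, use your existing analytics client.
  void fetch("/api/events", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ name, detail, timestamp: Date.now() }),
  });
}

const cta = document.querySelector<HTMLButtonElement>("#bulk-ordering-cta");

cta?.addEventListener("click", () => {
  trackEvent("fake_door_click", { feature: "bulk-ordering" });
  // Be honest with the person who clicked: nothing is built yet.
  alert("Thanks for your interest! This feature is coming soon.");
});
```

The point is that the click count gives you a demand signal at a fraction of the cost of building the feature, before a single wireframe exists.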

Building confidence:
More appropriate for new features or products once proposition is validated

  • Knowing before launch what your success and failure measures are, e.g. thresholds on an A/B test, adoption metrics, etc. (the first sketch after this list shows one way to write this down)
  • Having proper diagnostic and data tools, e.g. Decibel or Google Analytics, plus some hypotheses about what you're expecting to see and why
  • Input from your customer services team
  • Betas / closed-group studies
  • Launching a newer version of a UI alongside the old version and allowing people to opt in to the newer version. This can be especially useful when users are really used to an interface and a sudden change could have a big impact on their tasks. It also helps you track adoption and plan when to switch off the old version (the second sketch after this list illustrates this)
  • Dashboards with KPIs, so the whole team can self-serve and is bought in: I really believe the whole team, not just the Product Manager and Designer, should have visibility of and care about the performance of the product. It's everyone's contribution that drives outcomes. This also matters because some metrics aren't feature-specific, e.g. performance.
  • Make sure the test is the right fidelity for what you need to find out (e.g. there are times when your prototype is too hi-fi, and this can actually be a detriment to collecting data)
  • Safe-to-fail plan, i.e. if something fails, what's the backup plan? Rollback? Reprioritise so there's time to tweak? This is particularly relevant when you were happy to take more of a risk to save validation time up-front, so the outcome is less predictable.
    The first time I heard about safe-to-fail plans was through the talk below by Liz Keogh. It has a huge amount of food for thought and I really recommend it. She goes into a lot of depth about how being prepared for the unexpected frees you to do more experimenting.
“In the complex domain, cause and effect are only correlated in retrospect, and we cannot predict outcomes. We can see them and understand them in retrospect, but the complex domain is the domain of discovery and innovation. Expect the unexpected! Because of this, whatever we do in this space must be safe-to-fail. In this talk, we look at some different thinking tools which can help us to create these experiments, or probes… and to help us spot when we’re not doing so and probably should!”
  • Are your stakeholders on board with the agile process? If you do learn something and need to respond to it, they should understand that this is what sometimes happens with a leaner approach, and that you can't necessarily just move on to another feature. This is a whole other topic in itself, about organisational culture and team structure…
  • Consider the entire journey of using the product. Someone might fly through a form in usability testing, but there might be repercussions later in the experience, e.g. refunds, complaints, the product itself. How do you marry up the full journey data? Make sure you're testing the full journey, not just a page in isolation. No customer thinks about products; they experience the entire organisation end-to-end. And if you uncover something that's impacting the experience but isn't directly in your control, e.g. pricing or shipping times, you should make a case to the people who can change that for the better.
  • How big is the risk of what you want to do? The risk is not the same for launching a new button vs a pen designed for women 🤦🏻🤦🏻🤦🏻
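
To make the first bullet above concrete, here's a sketch of what a pre-agreed decision rule for an A/B test might look like. All thresholds, numbers and names here are illustrative assumptions, not figures from any real release; a real experiment would also need up-front sample size planning (and discipline about not peeking early).

```typescript
// Sketch: checking a pre-registered success threshold on an A/B test.
// All numbers and thresholds below are illustrative assumptions.

interface Arm { conversions: number; visitors: number; }

// Decided BEFORE launch: success means the variant lifts conversion by
// at least 1 percentage point AND the difference is statistically
// significant at the 95% level (|z| >= 1.96, two-proportion z-test).
const MIN_LIFT = 0.01;
const Z_CRITICAL = 1.96;

function evaluate(control: Arm, variant: Arm): string {
  const p1 = control.conversions / control.visitors;
  const p2 = variant.conversions / variant.visitors;
  const pooled = (control.conversions + variant.conversions) /
                 (control.visitors + variant.visitors);
  const se = Math.sqrt(pooled * (1 - pooled) *
                       (1 / control.visitors + 1 / variant.visitors));
  const z = (p2 - p1) / se;

  if (z >= Z_CRITICAL && p2 - p1 >= MIN_LIFT) return "success: roll out";
  if (z <= -Z_CRITICAL) return "failure: roll back and diagnose";
  return "inconclusive: keep collecting data";
}

console.log(evaluate(
  { conversions: 180, visitors: 4000 },  // control: 4.5% conversion
  { conversions: 240, visitors: 4000 },  // variant: 6.0% conversion
));
```

Writing the rule down before launch is the point: it stops the team from retrofitting a definition of success to whatever the data happens to show.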
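And a second sketch, for the opt-in idea: a toy version of letting users switch to a new UI while keeping a global kill switch as part of a safe-to-fail plan. The names and in-memory state are hypothetical; a real product would use a proper feature-flag service and persistent storage.

```typescript
// Toy opt-in flag for a new UI version, with a kill switch.
// All names are hypothetical; a real product would use a
// feature-flag service rather than in-memory state.

interface UserPrefs {
  userId: string;
  optedIntoNewUi: boolean;
}

// Global kill switch: part of a safe-to-fail plan. If the new UI
// misbehaves after release, flip this to fall back for everyone.
let newUiEnabled = true;

function shouldShowNewUi(prefs: UserPrefs): boolean {
  return newUiEnabled && prefs.optedIntoNewUi;
}

function optIntoNewUi(prefs: UserPrefs): void {
  prefs.optedIntoNewUi = true;
  // Recording opt-ins gives you an adoption curve, which helps you
  // decide when it's safe to retire the old version.
  console.log(`adoption event: ${prefs.userId} opted in`);
}

// Example: a user tries the new version; the team can still rescue
// everyone by flipping the kill switch.
const alex: UserPrefs = { userId: "u-123", optedIntoNewUi: false };
optIntoNewUi(alex);
console.log(shouldShowNewUi(alex)); // true
newUiEnabled = false;               // emergency rollback
console.log(shouldShowNewUi(alex)); // false
```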

It’s all a balance

Risks of releasing too quickly:

  • Teething issues
  • If half-baked, impact on customers
  • Cost to fail and learn is higher (compared to a prototype)
  • All the types of failure mentioned above

Risks of not releasing quickly enough:

  • Falling behind market
  • Not learning enough, because nothing is out in the wild
  • Slowed momentum for the team and stakeholders, which can be demoralising

You’re never going to have 100% confidence when you launch.

What if you experience a massive fail?

I'm not experienced in this area; I've not been through, or been accountable for, a large-scale business failure. Unfortunately, the times when it's hardest to face what's happened are exactly the times when it's most important to face it.

Honest reflection on failure probably doesn't happen as often as it should, due to organisational culture, politics, sensitivity and people leaving (taking knowledge with them).

More info

  • There's a great organisation called Fuckup Nights that celebrates failure of any kind (I think the lovely Jane Austin introduced me to their work). They have meetups where people share failure stories, and a free-to-download Fuckup Book that encourages you to fail, to face the fear and learn, e.g. try and get fired from your job.
  • Sense & Respond is an amazing book about continuous learning and a great read. As described by Amazon:

In illuminating and instructive business examples, you’ll see organizations with distinctively new operating principles: shifting from managing outputs to what the authors call “outcome-focused management”; forming self-guided teams that can read and react to a fast-changing environment; creating a learning-all-the-time culture that can understand and respond to new customer behaviors and the data they generate; and finally, developing in everyone at the company the new universal skills of customer listening, assessment, and response.

In summary, some things to think about are…

  • Risks in releasing too early vs too late
  • How much confidence do you need to release?
  • How are you going to build confidence prior to release?
  • What is your safe-to-fail plan for post-release?
  • What lowers risk after release? e.g. letting users opt-in to a new beta version of your product
  • Are you using the right activities for what you need to find out?
  • What gaps are there in diagnostic tools that need to be fixed regardless of this release?
  • Does the culture around you empower you to experiment and learn, and to understand that having confidence is not a guarantee that everything will be 100% successful? Remember that chasing more and more confidence also slows down how quickly you get something out there. It's a balance
  • Also don’t forget that the research and validation shouldn’t stop just because something’s released!

IRONY: Am I ready to publish? Not 100% ready, but I realised I'm ready enough that I want to get this out there and see what feedback I get. My release plan for an article is normally: get feedback from people kind enough to read my draft via the draft link, then soft-launch the article, and then, if I feel ok about it, share it on social media.

What have I missed? What experiences have you had? It’d be great to hear from you.
