When deciding which ideas to back, we constantly see organisations clutching onto the belief that a solid business case is the best approach to demonstrate an idea’s merits.
But using a business case as the only success indicator is extremely risky, and it is actually one of the least effective approaches when it comes to breakthrough or disruptive innovation (yes, flipping a coin might be more accurate).
The thing is, a business case always stacks up! Think about it ... have you ever seen an unfavourable business case?
Secondly, the accuracy of a business case becomes questionable when innovation is on the table. This is especially true of disruptive innovation, where the 'unknowns' multiply rapidly.
A truly ground-breaking innovation may involve expanding into areas a business hasn’t had any experience in, or may target new customers whose response to the innovation is uncertain.
In reality, a business case for an innovation is just a collection of assumptions. If these assumptions are left untested and, heaven forbid, perceived as true, then you are increasing your risk of launching a failure.
The alternative to the good old (un)trusty, assumption-riddled business case happens to be my second-favourite word at work – experiments!
As you know, Inventium is passionate about science, and experimenting with prototypes is exactly how you test whether the assumptions and hypotheses behind an innovation's predicted success are in fact true.
We love Eric Ries’ website The Lean Startup, where he applies lean principles to creating prototypes to quickly and cheaply test hypotheses with customers.
His idea of the ultimate prototype is a minimum viable product (or MVP). An MVP can take the form of a product, a process, or a service that allows you to collect the maximum amount of validated learning from your customers with the least amount of effort.
Validated learning is the empirical process of testing the hypotheses you hold about why the innovation will be of value to customers and how it will achieve growth in the market, and demonstrating whether those hypotheses are correct or incorrect.
The beauty of using MVPs to uncover learnings is that, even when your hypotheses are not supported, the results don't signal failure. Rather, they provide invaluable insight into exactly how you need to iterate the innovation to increase its chances of success.
So, if you want to lower your risk of launching a failure, and do so as quickly and as cheaply as possible, conduct multiple experiments using MVPs.