Innovation risk: How to make smarter decisions

New products and services are created to enable people to perform tasks better than they previously could, or to do things that they couldn’t before. But innovations also carry risks. Just how risky an innovation proves to be depends in great measure on the choices people make in using it.

Ask yourself this: If you had to drive from Boston to New York in a snowstorm, would you feel safer in a car with four-wheel drive or two-wheel drive? Chances are you’d choose four-wheel drive. But if you were to look at accident statistics, you’d find that the advent of four-wheel drive hasn’t done much to lower the rate of passenger accidents per passenger mile on snowy days. That might lead you to conclude that the innovation hasn’t made driving in the snow any safer.

Of course, what has happened is not that the innovation has failed to make us safer but that people have changed their driving habits because they feel safer. More people are venturing out in the snow than used to be the case, and they are probably driving less carefully as well. If the riskiness of an innovation depends on the choices people make, it follows that the more informed and conscious their choices are, the lower the risk will be. But as companies and policymakers think through the consequences of an innovation – how it will change the trade-offs people make and their behaviour – they must be mindful of the limitations of the models on which people base their decisions about how to use the innovation.

The bottom line is that all innovations change the trade-off between risk and return. To minimize risk and unintended consequences, users, companies and policymakers alike need to understand how to make informed choices when it comes to new products and services. In particular, they should respect five rules of thumb.

Recognise that you need a model

When you adopt a new product or technology, your decision about risk and return is informed by what cognitive scientists call a mental model. In the case of driving to New York in the snow, you might think, I can’t control all the risks associated with making the trip, but I can choose the type of car I drive and the speed at which I drive it. A simple mental model for assessing trade-offs between risk and performance, therefore, might be represented by a graph that plots safety against type of car and speed.

Of course, this model is a gross simplification. The relationship between safety and speed will depend on other variables – the weather and road conditions, the volume of traffic, the speed of other cars on the road – many of which are out of your control. To make the right choices, you have to understand precisely the relationship among all these variables and your choice of speed.
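
To make this concrete, here is a minimal sketch in Python of the kind of toy model described above. The function accident_risk, its functional form and every coefficient are invented for illustration; they are not estimates from real accident data.

```python
# A toy mental model of snow-driving risk, purely illustrative.
# The functional form and all coefficients are invented assumptions,
# not estimates from real accident data.

def accident_risk(speed_mph: float,
                  four_wheel_drive: bool,
                  snow_intensity: float) -> float:
    """Return a rough 0-1 risk score for a single trip.

    speed_mph        : chosen cruising speed
    four_wheel_drive : whether the car has four-wheel drive
    snow_intensity   : 0.0 (clear) to 1.0 (heavy snowstorm)
    """
    base = 0.02                                           # baseline risk on a clear road
    speed_term = 0.002 * max(speed_mph - 30, 0)           # risk grows above ~30 mph
    weather_term = 0.15 * snow_intensity                  # worse weather, more risk
    traction_benefit = 0.3 if four_wheel_drive else 0.0   # 4WD offsets part of the weather risk
    risk = base + speed_term + weather_term * (1 - traction_benefit)
    return min(risk, 1.0)

# The trade-off the article describes: feeling safer invites faster driving.
cautious = accident_risk(speed_mph=40, four_wheel_drive=False, snow_intensity=0.8)
emboldened = accident_risk(speed_mph=60, four_wheel_drive=True, snow_intensity=0.8)
print(f"2WD at 40 mph in heavy snow: {cautious:.3f}")
print(f"4WD at 60 mph in heavy snow: {emboldened:.3f}")
```

Under these made-up numbers, the driver who buys four-wheel drive but responds by driving faster ends up with roughly the same trip risk as the cautious two-wheel-drive driver, which is the behavioural offset the accident statistics suggest.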

It seems reasonable, then, to suppose that the more factors your model incorporates, the better your assessment will be of the risks you incur in deciding whether and how to adopt a particular innovation. But it is precisely when you start to feel comfortable in your assessments that you need to really watch out.

Acknowledge your model’s limitations

In building and using models, it is critical to understand the difference between an incorrect model and an incomplete one. The distinction matters to scientists: as they develop models that describe our world and allow us to make predictions, they reject and stop using those they find to be incorrect, whether through formal analysis of their workings or through testing of their underlying assumptions.

Those that survive are regarded as incomplete rather than wrong, and therefore improvable. In general, until some fundamental mathematical flaw in a model is detected, or some error in the assumptions being fed into it is unearthed, the logical course is to refine the model rather than reject it.

Expect the unexpected

Even with the best effort and ingenuity, some factors that could go into a model will be overlooked. This is particularly likely when an innovation interacts with other changes in the environment that are themselves unrelated to it and thus not recognised as risk factors.

The 2007-09 financial crisis provides a good example of such unintended consequences. Innovations in the real estate mortgage market that significantly lowered transaction costs made it easy for people not only to buy houses but also to refinance or increase their mortgages. People could readily replace equity in their property with debt, freeing up money to buy other desirable goods and services. There’s nothing inherently wrong in doing this, of course; it’s a matter of personal choice.

The intended consequence of the mortgage-lending innovations was to increase the availability of this low-cost choice. But there was also an unintended consequence: Because two other, individually benign economic trends – declining interest rates and steadily rising house prices – coincided with the changes in lending, an unusually large number of homeowners were motivated to refinance their mortgages at the same time, extracting equity from their houses and replacing it with low-interest, long-term debt. Because of the convergence of the three conditions, homeowners in the United States refinanced on an enormous scale for most of the decade preceding the financial crisis. The result was that many of them faced the same exposure to the risk of a decline in house prices at the same time, creating a systemic risk.
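
To see why the timing matters, here is a minimal sketch in Python that contrasts staggered refinancing with synchronized refinancing. The number of homeowners, the house value, the loan-to-value ranges and the assumed 20% price decline are all invented for illustration, not estimates of the actual U.S. mortgage market.

```python
# Illustrative only: all numbers are invented to show how simultaneous
# refinancing concentrates exposure to a single risk factor (house prices).
import random

random.seed(0)

N_HOMEOWNERS = 10_000
HOUSE_VALUE = 300_000     # identical homes, for simplicity
PRICE_DROP = 0.20         # an assumed 20% decline in house prices

def equity_after_drop(loan_to_value: float) -> float:
    """Homeowner equity (possibly negative) after the price drop."""
    value_after = HOUSE_VALUE * (1 - PRICE_DROP)
    debt = HOUSE_VALUE * loan_to_value
    return value_after - debt

def share_underwater(loan_to_values: list) -> float:
    """Fraction of homeowners whose debt exceeds their house's value."""
    return sum(equity_after_drop(ltv) < 0 for ltv in loan_to_values) / len(loan_to_values)

# Scenario A: refinancing spread out over time, so loan-to-value ratios vary widely.
staggered = [random.uniform(0.3, 0.9) for _ in range(N_HOMEOWNERS)]

# Scenario B: cheap refinancing, falling rates and rising prices push most
# homeowners to refinance in the same window at high loan-to-value ratios.
synchronized = [random.uniform(0.8, 0.95) for _ in range(N_HOMEOWNERS)]

print(f"Underwater after a {PRICE_DROP:.0%} drop, staggered refinancing:    {share_underwater(staggered):.1%}")
print(f"Underwater after a {PRICE_DROP:.0%} drop, synchronized refinancing: {share_underwater(synchronized):.1%}")
```

The individual loans in the second scenario are no different in kind from any single high-loan-to-value mortgage; what changes is that nearly everyone holds the same exposure to the same risk factor at the same time, which is what makes the risk systemic.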

Understand the user and the use

Let’s assume that you have built a model that is fundamentally correct. Let’s also assume that it is more complete than other existing models. There is still no guarantee that it will work well for you. A model’s utility depends not just on the model itself but on who is using it and what they are using it for.

A model is also unreliable if the person using it doesn’t understand it or its limitations. When you think about who uses models and for what, you often must rethink what qualifies people for a particular job. A more-complete but more-complicated model may carry greater risks than a cruder one if the user is not qualified.

Check the infrastructure

Finally, we need to recognise that the benefits and risks of an innovation are determined in large measure not only by the choices people make about how to use it but also by the infrastructure into which it is introduced. The pace of innovation in some industries is very high, but so is the rate of failure. It is therefore infeasible to change the infrastructure to accommodate every innovation.

The reality is that changes in infrastructure usually lag changes in products and services, and that imbalance can be a major source of risk. Complicating the risks from imbalance between product and service innovation and infrastructure innovation is the fact that products and services continue evolving after they are launched, and this evolution is not independent of the infrastructure. The risk of this kind of dynamic is that it becomes very difficult to identify exactly what changes in the infrastructure are needed. Even if you could make changes to an infrastructure to coincide with a new product’s launch, you might find that within a very short time those changes have become irrelevant because the product is now being sold by different people through different channels to different users who need it for different purposes.

Robert C. Merton is a professor of finance at the MIT Sloan School of Management and a professor emeritus at Harvard University. He was a recipient of the 1997 Alfred Nobel Memorial Prize in Economic Sciences.
