To answer the most vexing innovation and research questions, crowds are becoming the partner of choice. But despite a growing list of success stories, only a few companies use crowds effectively – or much at all.
Managers remain understandably cautious. Pushing problems out to a vast group of strangers seems risky and even unnatural, particularly to organisations built on internal innovation. These concerns are all reasonable, but excluding crowdsourcing from the corporate innovation tool kit means losing an opportunity. The main reason companies resist crowds is that managers don’t clearly understand what kinds of problems a crowd can handle better than internal experts can – or how to manage the process.
Having determined that you face a challenge your company cannot or should not solve on its own, you must figure out how to actually work with the crowd. Crowdsourcing generally takes one of four distinct forms, each best suited to a specific kind of challenge.
The most straightforward way to engage a crowd is to create a contest. The sponsor identifies a specific problem, offers a cash prize and broadcasts an invitation to submit solutions.
Contests work well when it’s not obvious which combination of skills – or even which technical approach – will lead to the best solution to a problem. Running a contest is akin to running a series of independent experiments whose outcomes, ideally, vary. Therefore, of the four forms of crowdsourcing, contests are most useful for problems that would benefit from experimentation and multiple competing solutions.
We have learned that contests are most effective when the problem is complex or novel or has no established best-practice approaches. Contests are also useful for solving design problems, in which creativity and subjectivity influence the evaluation of solutions.
There are, of course, management challenges in running a crowdsourcing contest. First is identifying a problem important enough to warrant dedicated experimentation. The problem must then be “extracted” from the organisation – translated or generalised in order to be immediately understandable to large numbers of outside solvers. It must also be “abstracted” to avoid revealing company-specific details. That may involve breaking it down into multiple sub-problems and contests. And finally, the contest must be structured to yield solutions the organisation can feasibly implement.
In June 1998, IBM shocked the global software industry by announcing that it intended to abandon its internal development efforts on web server infrastructure and instead join forces with Apache, a nascent online community of webmasters and technologists.
The Apache community was aggregating diverse inputs from its global membership to rapidly deliver a full-featured – and free – product that far outperformed any commercial offering. Two years later IBM announced a three-year, $1 billion initiative to support the Linux open-source operating system and put more than 700 engineers to work with hundreds of open-source communities to jointly create a range of software products.
In teaming up with a collaborative community, IBM recognised a twofold advantage: The Apache community was made up of customers who knew the software’s deficits and who had the skills to fix them. With so many collaborators at work, each individual was free to attack his or her particular problem with the software and not worry about the rest of the components. As individuals solved their problems, their solutions were integrated into the steadily improving software. IBM reasoned that the crowd was beating it at the software game, so it would do better to join forces and reap profits through complementary assets such as hardware and services.
Like contests, collaborative communities have a long and rich history. They were critical to the development of Bessemer steel, blast furnaces, Cornish pumping engines and large-scale silk production. But whereas contests separate contributions and maximise diverse experiments, communities are organised to marshal the outputs of multiple contributors and aggregate them into a coherent and value-creating whole – much as traditional companies do. And like companies, communities must first assess what should be included in the final aggregation and then accomplish that through a combination of technology and process.
Collaborative communities work best when participants can accumulate and recombine ideas, sharing information freely.
The third type of crowd-powered innovation enables a market for goods or services to be built on your core product or technology, effectively transforming that product into a platform that generates complementary innovations. Unlike contests or communities, complementors provide solutions to many different problems rather than just one. The opportunity lies in the sheer volume of solutions.
The variety of complementary goods does more than generate revenue. It can expand demand for the product itself, by making it more useful. Increased demand, in turn, can prompt an increase in the supply of complementary innovations, and pretty soon you have a nice set of network effects. To be sure, crowds aren’t always the best way to create complementary products. They make sense only when a great number and variety of complements are important.
The first challenge to using the crowd as a complementor is providing access to the functions and information in the core product. That is accomplished through technological interfaces or hooks that enable external developers to create complementary innovations in a frictionless way. If you are exposing your technology and assets to outsiders, you must make sure they’re protected. Unlike contests, which can carefully control the exposure of assets to elicit a single, narrow solution, complementor platforms must give outsiders more-flexible access to develop a wide range of solutions.
The fourth form of crowdsourcing, the labour market, matches buyers and sellers of services and employs conventional contracting for services rendered.
These are not platforms that a company would want to build itself but, rather, third-party intermediaries. Instead of matching workers to jobs within companies for long-term employment, these highly flexible platforms serve as spot markets, matching skills to tasks. They often perform on-demand matching to give immediate support at an unprecedented scale.
Critical to the success of these flexible spot markets is the growing sophistication of their technology infrastructure and platform design, which allow transactions to be effectively governed. Spot labour markets work when you know what kind of solution you are looking for and what an appropriate solver looks like. Because spot labour markets must identify qualified workers in advance and collect meaningful performance data afterwards, they organise projects and participants into familiar categories. Such standardisation makes it easier to evaluate workers’ skills and productivity, make good matches and set expectations on all sides.
The platforms themselves go even further to help ensure high-quality matches: they measure the skills and capabilities of workers and the needs of employers, collect abundant data on performance and feedback, and feed those data into future matches. Particularly suited to labour markets are repetitive tasks that require human intelligence but for which hiring full-time employees would be difficult and expensive. Spot labour markets are less a radical departure from current hiring and outsourcing practices than an extension of them. Like outsourcing, they give companies flexibility and access to a greater variety and depth of skills.
The management challenges in exploiting spot labour markets are minor compared with those in other forms of crowdsourcing. The biggest concern may be identifying which tasks to farm out and who within your organisation should manage them.
Kevin J. Boudreau is an assistant professor of strategy and entrepreneurship at London Business School and a research fellow at Harvard’s Institute for Quantitative Social Science. Karim R. Lakhani is a professor of business administration at Harvard Business School and the principal investigator of the Harvard-NASA Tournament Lab at the Institute for Quantitative Social Science.