To pinpoint a place and time when the first glints of the management century appeared on the horizon, consider Chicago, May 1886. There, Henry R Towne, a co-founder of the Yale Lock Manufacturing Company, asserted in a speech that “the management of works has become a matter of such great and far-reaching importance as perhaps to justify its classification also as one of the modern arts.”
During the century that followed, management as we know it would come into being and shape the world in which we work. Three eras punctuate the period from the 1880s until today. In the first, the years until World War II, aspirations to scientific exactitude gave wings to the ambitions of a new, self-proclaimed managerial elite.
The second, from the late 1940s until about 1980, was managerialism’s era of good feelings, its apogee of self-confidence and widespread public support.
The third and ongoing era has been marked by a kind of retreat – into specialisation, servitude to market forces and declining moral ambitions. But it has also been an era of global triumph, measured by agreement on certain key ideas, steadily improving productivity, the worldwide march of the MBA degree and a general elevation of expectations about how workers should be treated.
The age of ‘scientific management’
In the last two decades of the 19th century, the US was shifting – uneasily – from a loosely connected world of small towns, small businesses and agriculture to an industrialised network of cities, factories and large companies linked by rail. A rising middle class was professionalising and mounting a progressive push against corrupt political bosses and the finance capitalists, who were busy consolidating industries such as oil and steel.
Progressives claimed special wisdom rooted in science and captured in processes. Frederick Taylor, who wrote that “the best management is a true science, resting upon clearly defined laws, rules, and principles”, clearly counted himself in their camp. The publication, in 1911, of Taylor’s Principles of Scientific Management set off a century-long quest for the right balance between the “things of production” and the “humanity of production”, as the Englishman Oliver Sheldon put it in 1923. Or, as some would have it, between the “numbers people” and the “people people”. It’s the key tension that has defined management thinking.
Beginning with Concept of the Corporation (1946) and continuing through The Practice of Management (1954) and Managing for Results (1964), Peter Drucker laid out a vision of the corporation as a social institution in which the capacity and potential of everyone involved were to be respected.
The overall thrust of post-war managerial thinkers was to elevate the “humanity of production”. Workers will be most productive, the reasoning went, if they’re respected and if managers rely on them to motivate themselves and solve problems on their own. Not that the old order went down without a fight. After researching General Motors, Drucker persuaded rising GM executive Charlie Wilson to propose a set of reforms including greater autonomy for plant managers and what we’d call “worker empowerment” today. Two forces killed the idea. One was the rest of GM management, including CEO Alfred P Sloan. The other was the United Auto Workers, in the person of Walter Reuther, who wanted no blurring of the line between management and labour.
More-enlightened managerial attitudes combined with other forces – a democratisation of American society following World War II, an explosion of deferred demand for economic goods – to usher in two decades of good spirits and seeming contentment with corporations and their conduct. The number of strikes and other job actions dropped precipitously from the levels seen just after the war; union membership, as a percentage of the workforce, peaked and then began the long, slow decline that continues to this day. (Managerial solicitude was probably stimulated by an unemployment rate that fell below 3% in 1953.)
In addition to its more-enlightened attitudes toward employees, the post-war period brought a heightened sense of what managers could accomplish. “Management has to manage,” Drucker wrote. “And managing is not just passive, adaptive behaviour.” Managers had to take charge; they should be “attempting to change the economic environment … constantly pushing back the limitations of economic circumstances on the enterprise’s freedom of action.”
The era of nervous globalism
After two decades without serious recession, the oil shocks of the 1970s and an accompanying economic malaise put paid to the notion of managerialism triumphant. A 1966 Harris poll had found 55% of Americans voicing “a great deal of confidence” in the leaders of large companies. By 1975 the figure had dropped to 15%.
Multiple new forces confronted American executives, unleashing heightened competition and eventually disrupting the relative amity that had prevailed among business, labour and government.
Technology, especially computer technology, steadily increased the calculating power available to the numbers people, in the form of the integrated circuit (late 1950s), the minicomputer (mid 1960s), the microprocessor (early 1970s), and then the microcomputer (mid 1970s), soon to morph into the ubiquitous PC.
During this period of intense change, the purpose of strategy, and indeed of corporate management, took on new clarity: It was to create wealth for shareholders. To be sure, that idea had always been around, dating back to the buccaneering financiers of the 19th century. But during management’s era of good feelings, a more inclusive notion had taken root in some quarters.
Management thinkers responded to the new pressures besetting corporations by sharpening their focus. As the economy quickened and the deal-making and excitement on Wall Street mounted, more people sought to join the ranks of management – or at least to obtain the entry credential, the MBA. Some 26,000 MBAs had been awarded in the United States in 1970; by 1985 the number was up to 67,000.
Across the board, both in the corporate world and in academics, the numbers people seemed to be winning, bringing greater quantitative precision to increasingly specialised domains of expertise. But they weren’t necessarily winning the hearts of the wider managerial population.
Advocates for the humanity of production, meanwhile, pursued a blurrier line. Strategy at least had a fairly clear paradigm and set of frameworks for successive generations of thinkers to build on. Champions of shareholder value gloried in their single yardstick, the stock price, as the measure of all things.
If the thinking on the human side coalesced at all, it was around two themes: leadership and innovation.
Over the last two decades of the 20th century, business schools revised their mission from “educating general managers” to “helping leaders develop”. Unfortunately, despite some inspiring writing on how leaders differ from managers, no consensus has formed on exactly what constitutes a leader or how those exalted beings come to exist.
Innovation is where satisfying the fierce demands of the market depends, as never before, on eliciting the best from the humanity of production. No one yet appears to have been able to automate the invention of the new or to come up with machine-replicable substitutes for the spark of human imagination. Perhaps the biggest managerial challenge facing the 21st-century company will be finding ways to free that spark, resident in employees, from the organisation’s tidal pull to keep doing the same old things.
Walter Kiechel III is a former editorial director of Harvard Business Publishing, a former managing editor of Fortune, and the author of ‘The Lords of Strategy’.