By the outbreak of the First World War, the big company had become a defining institution in American society: the motor of one of the most rapid periods of economic growth in history; a dominating figure in political life; and a decisive actor in transforming America from a society of “island communities” into a homogenous national community. Thanks largely to its embrace of this extraordinary institution, the American century was under way.
Different forms of company continued to sprout around the world. We have discussed Britain’s family firms and Japan’s zaibatsu; a longer book could have dwelled on the charms of France’s huge utility companies or northern Italy’s networks of small businesses. Even in America, the economy was upset by the discontinuities of war, recession, and the New Deal, not to mention continuous technological changes that provided opportunities for smaller companies to leap forward and for old giants to trip up. Who remembers Central Leather, the Nevada Consolidated Group, or Cudahy Packing?1
All the same, the most remarkable thing about the sixty years after the First World War was continuity—particularly the continued success of big American business. A list of America’s biggest companies in 1970 would have seemed fairly familiar to J. P. Morgan, who died in 1913. Yet, this very predictability, this sameness, was itself the result of one important innovation, introduced in the 1920s: the multidivisional firm.
The multidivisional firm was an important innovation in its own right, because it professionalized the big company and set its dominant structure. But it also became the template for “managerialism.” If the archetypal figure of the Gilded Age was the robber baron, his successor was the professional manager—a more tedious character, perhaps, but one who turned out to be surprisingly controversial. In the 1940s, left-wing writers like the lapsed Trotskyite James Burnham argued that a new managerial ruling class had stealthily obliterated the difference between capitalism and socialism; in the 1980s, corporate raiders said much the same thing.
In the first two decades of the twentieth century, a silent takeover began: the gradual separation of ownership from control. The robber barons may have kept the big strategic decisions in their own hands, but they couldn’t personally oversee every detail of their gigantic business empires. And they couldn’t find the management skills that they needed among their immediate families, who anyway found more amusing things to do: Digby Baltzell writes acidly about “the divorcing John Jacob Astor III (three wives), Cornelius Vanderbilt, Jr. (five wives), Tommy Manville (nine wives) or the Topping brothers (ten wives between them).”2 So the company founders turned to a new class of professional managers.
The likes of King Gillette, William Wrigley, H. J. Heinz, and John D. Rockefeller hired hordes of black-coated managers to bring order to their chaotic empires. America’s great cities were redesigned to provide these managers with a home—the new vertical filing cabinets known as skyscrapers. In 1908, the Singer Company built the world’s tallest building in New York to house some of these managers (it was 612 feet high), only to be outbuilt eighteen months later by Metropolitan Life (700 feet), which was then trumped in its turn by the Woolworth Building (792 feet).
The inhabitants of these towers began by doing the boring work of coordinating the flow of materials from suppliers to eventual customers. But soon their organizational skills—Singer’s mastery of door-to-door selling—became decisive competitive advantages in themselves. And, gradually, these “Company Men” began to make the big strategic decisions as well. Every merger required the central management staff to rationalize the acquired business. Every robber baron’s death freed their hands. Every share issue dispersed ownership: the number of ordinary shareholders rose from 2 million in 1920 to 10 million in 1930.
This was the background to the multidivisional firm that Alfred Sloan (1875–1966) pioneered at General Motors. Like many other young companies, GM was caught out by the recession of 1920. The company’s founder, William Durant (1861–1947), whom Sloan later described as “a great man with a great weakness—he could create, but not administer,” controlled almost all of the company’s activities, supported by a rudimentary staff. GM was saved by Pierre du Pont (1870–1954), who bought 37 percent of the struggling carmaker. He in turn picked Sloan, a young engineer who was then managing GM’s parts and accessories units, to redesign the organization from top to bottom.
Sloan, who became GM’s president in 1923, was the prototypical organization man, the first manager to be famous for just that. “Management has been my specialization,” he wrote flatly in his autobiography.3 Du Pont and Sloan decided that the company’s activities were too disparate to be run by a single central authority. Instead, they decided to treat its various units—its car, truck, parts, and accessory businesses—as autonomous divisions. Each division was defined by the market that it served, which in the case of cars was determined by a “price pyramid”: Cadillac for the rich, Oldsmobile for the comfortable but discreet, Buick for the striving, Pontiac for the poor but proud, and Chevrolet for the plebs. By providing a car “for every purse and purpose,” the pyramid allowed GM to retain customers for their whole lives.4 It also ameliorated the economic cycle. In boom times, like the late 1920s, GM could boost profits with high-end products; in busts, like the 1930s, it could rely on Chevys.
Yet, if Sloanism was built on decentralization, it was controlled decentralization. The divisions were marshaled together to use their joint-buying clout to secure cheaper prices for everything from steel to stationery. And Sloan and Du Pont created a powerful general office, packed full of numbers men, to oversee this elaborate structure, making sure, for example, that the divisions treated franchised salesmen correctly. Divisional managers looked after market share; the general executives monitored their performance, allocating more resources to the highest achievers. At the top, a ten-man executive committee, headed by Du Pont and Sloan, set a centralized corporate strategy.
The beauty of Sloanism was that the structure of a company could be expanded easily: if research came up with a new product, a new division could be set up. “I do not regard size as a barrier,” Sloan wrote. “To me it is only a problem of management.” Above all, the multidivisional firm was designed, in Sloan’s words, “as an objective organization, as distinguished from the type that get lost in the subjectivity of personalities.” In other words, it was not Henry Ford.
Ford’s determination to administer his huge empire himself pushed it toward disaster. He ignored both the new science of market segmentation and the wider discipline of management theory. (He let it be known that anyone found with an organization chart, however sketchily drawn, would be sacked on the spot.)5 He deliberately engineered a destructive conflict between his son and one of his most powerful lieutenants, drove many of his most talented managers out of the company, and refused to put even the most elementary management controls in place. One department calculated its costs by weighing a pile of invoices; the firm was hit with a $50 million tax surcharge for excess profits during the Second World War because no one had filed the necessary forms for war contractors.6 By 1929, Ford’s share of the market had fallen to 31 percent while General Motors’s had risen from 17 percent to 32.3 percent.7
There was an irony in the inventor of the assembly line being himself outorganized. As one historian, Thomas McCraw, puts it, “What Ford did for physical machines, Sloan did for human beings.”8 The multidivisional structure, which was progressively adopted by many of America’s marquee names, including General Electric, United States Rubber, Standard Oil, and U.S. Steel, was an ideal tool for managing growth. The Du Pont Company, for instance, initially diversified haphazardly into a succession of promising new products, including paints, dyes, film, and chemicals. But it overloaded its centralized management system—so much so that the only bit to make money was its old explosives business. Once it copied GM’s example, and began to create separate divisions to manage its various businesses, the new entities began to make money too. By 1939, explosives accounted for less than 10 percent of its income.
Du Pont also illustrated another advantage of Sloan’s system: it institutionalized innovation by making it the responsibility of specific people. Du Pont poured money into research, supporting not just specialized laboratories in its various divisions but also a central laboratory, known as “Purity Hall,” which concentrated on fundamental research. By 1947, 58 percent of Du Pont’s sales came from products that had been introduced during the previous twenty years.9
Even companies that were less directly influenced by Sloan embraced his creed of professional management. In 1927, Coca-Cola’s researchers began a three-year study of fifteen thousand places where the drink was sold in order to work out things like the exact ratio between sales volume and the flow of people past their product. Similar scientific studies, under the research-obsessed Robert Woodruff (1889–1985), led not just to bottled Coke being sold in garages, but to strict rules about the color of trucks (red) and the sort of girls to put in ads (a brunette if there was only one girl in the picture). Sales duly soared.
Over at Procter & Gamble, the company also plowed a fortune into ever more professional marketing, inadvertently ruining modern culture by creating the soap opera (as the radio dramas sponsored by the firm came to be called). On May 13, 1931, an uppity P&G recruit named Neil McElroy broke the in-house prohibition on memos of more than one page, producing a three-page suggestion for the company to appoint a specific team to manage each particular brand. “Brand management” provided a way for consumer-goods firms to mimic Sloan’s multidivisional structure.10
Such discipline became even more essential during the 1930s. By July 1932, the Dow Index, which had stood at 386.10 on September 3, 1929, had fallen to 40.56. Industrial output fell by a third. In the Depression, consumers were only willing to part with their surplus cash for genuine novelties (or apparent ones: by the late 1930s, Procter alone was spending $15 million a year on advertising). Yet, throughout this turmoil, the big Sloanist firms held on to their positions. With the barriers to entry in most businesses still high, they were rarely threatened by young upstarts; the main danger was of a neighboring giant diversifying systematically into their territory. The only way a multidivisional firm could get beaten was by another multidivisional firm.
Behind this success sat a new culture of management. In the late nineteenth century, business education consisted of little more than teaching bookkeeping and secretarial skills. Only the University of Pennsylvania’s Wharton School, founded in 1881, offered stronger stuff. But business schools began to spread. Harvard Business School opened its doors in 1908, the same year that the Model T started rolling off the assembly line. By 1914, Harvard was offering courses in marketing, corporate finance, and even business policy.
Management thinkers also began to follow the trail blazed by Frederick Taylor (1856–1915). Arthur D. Little was the first of a new class of management consultants, soon followed by James McKinsey, who set up shop in 1926, three years after the American Management Association was founded. Even politicians joined the craze: Herbert Hoover tried to apply scientific management to government.
From the very first, these management thinkers offered contradictory advice. A rival “humanist” school, including Mary Parker Follett (1868–1933) and Elton Mayo (1880–1949), challenged Taylor’s dominant “rationalist school,” arguing that the key to long-term success lay in treating workers well. In 1927, a group of behavioral scientists, including Mayo, began an epic ten-year study of Western Electric’s Hawthorne Works in Chicago (which among other things proved that turning lights on and off improved productivity).
Yet, even these softer thinkers were still apostles of the new management religion. “Management not bankers nor stockholders is the fundamental element in industry,” claimed Follett. “It is good management that draws credit, that draws workers, that draws customers. Whatever changes should come, whether industry is owned by capitalists, or by the state, or by the workers, it will always have to be managed. Management is the permanent function of business.”
Follett’s claim might be taken as a tribal manifesto for one of the unsung heroes of the twentieth century. Company Man has not had a good press. Sinclair Lewis pilloried him as Babbitt (1922), the epitome of self-satisfied philistinism. In Coming Up for Air (1939), George Orwell portrayed him as little more than a wage slave—“never free except when he’s fast asleep and dreaming that he’s got the boss down the bottom of a well and is bunging lumps of coal at him.” Yet he helped to change companies the world over.
As early as 1920, Company Man’s character had been formed by two things: professional standards and corporate loyalty. Company Man was defined by his credentials rather than by his lineage (like the upper classes) or his collective muscle (like the workers). He was part of a professional caste that adopted Frederick Taylor’s motto that there was “the one best way” for organizing work and sneered at rough-hewn entrepreneurs for not knowing it.
But this class solidarity was balanced by loyalty to his employer. The first rule at Standard Oil, according to one contemporary, was that every employee must “wear the ‘Standard Oil’ collar. This collar is riveted on to each one as he is taken into ‘the band,’ and can only be removed with the head of the wearer.”11
Thomas Watson, the salesman who created the modern IBM in 1924, built his organization out of Company Men.12 He located the firm in a small town, Endicott, in New York State, the better to lay down the law. IBM men wore a uniform of dark suit and white shirt, refrained from strong drink, sang the praises of the founder in a company song, and competed for membership in the 100 percent club, an elite club open to only the most successful salesmen. They could even listen to an IBM symphony commissioned by the founder. On IBM Day, in 1940, some ten thousand IBM-ers converged on the New York World’s Fair in special trains. Watson argued that such loyalty “saves the daily wear and tear of making daily decisions as to what is best to do.”
This paternalism went much lower than the officer class. Modern debates about shareholder capitalism often obscure the fact that many of the best Anglo-Saxon companies have happily shouldered social obligations without much prompting from government. Procter & Gamble pioneered disability and retirement pensions (in 1915), the eight-hour day (in 1918) and, most important of all, guaranteed work for at least forty-eight weeks a year (in the 1920s). During the Depression, the company kept layoffs to a minimum, and its boss, Red Deupree, cut his own salary in half and stopped his annual bonus. Henry Ford, who fumed that when he hired a pair of hands he got a human being as well, became a cult figure around the world by paying his workers $5 a day—well above the market rate. Henry Heinz paid for education in citizenship for his employees.
THREE DEBATES THAT DEFINED THE COMPANY
As the company’s role in society deepened, so did the debate about that role. Three works published in the 1930s and 1940s asked fundamental questions about this awkward institution: Why did companies exist? Whom were they run for? And what about the workers?
The most basic of these three works began as a lecture in 1932 to a group of Dundee students by a twenty-one-year-old economist just back from a tour of American industry. Five years later, Ronald Coase published his ideas in a paper in Economica called “The Nature of the Firm.” Coase tried to explain why the economy had moved beyond individuals selling goods and services to each other. The answer, he argued, lay in the imperfections of the market, and particularly in transaction costs—the costs sole traders might incur in getting the best deal and in coordinating processes such as manufacturing and marketing.
The history of the company since 1850 validated Coase’s point. General Motors, for instance, reaped enormous economies of scale by bundling together plenty of transactions that had previously been done independently. The costs of, say, trying to negotiate each bit of steel that was needed for a car would have been prohibitive. Yet, GM still dealt with independent franchisees. Coase nicely quotes an earlier British economist, Dennis Robertson (1890–1963), who talked about the relationship of “conscious” firms to the “unconscious” market as being like “lumps of butter coagulating in a pail of buttermilk.” GM might have been a huge chunk of butter, but it still floated within the liquid churn.
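Coase’s logic can be reduced to a back-of-the-envelope comparison: a firm internalizes a transaction when the cost of haggling for it on the open market exceeds the overhead of coordinating it in-house. The sketch below illustrates the idea with invented figures (the function name and every number are hypothetical, chosen only for illustration, not drawn from GM’s books):

```python
# A toy illustration of Coase's transaction-cost argument.
# All figures are hypothetical: the per-transaction cost of searching
# and bargaining on the open market versus the per-transaction overhead
# of routing the same deal through a firm's own purchasing hierarchy.

def cheaper_inside(n_transactions, market_cost_each, internal_cost_each):
    """Return True when internalizing the transactions costs less than
    negotiating each one separately on the open market."""
    market_total = n_transactions * market_cost_each
    internal_total = n_transactions * internal_cost_each
    return internal_total < market_total

# A carmaker needing 10,000 steel deliveries a year: haggling over each
# delivery (say $50 of search and bargaining cost apiece) is dearer than
# handling it through a purchasing department (say $5 of overhead apiece).
print(cheaper_inside(10_000, 50, 5))  # prints True
```

The same comparison run the other way (cheap market, expensive bureaucracy) returns False — which is Robertson’s point about the butter and the buttermilk: the firm only coagulates where the market is the costlier coordinator.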
The second book, The Modern Corporation and Private Property, by Adolf Berle and Gardiner Means, published in 1932, outlined the distribution of corporate wealth in America. Like the Pujo committee that had harried Morgan twenty years earlier, Berle and Means found plenty of evidence of great concentration: the top two hundred companies accounted for half the total assets; AT&T alone controlled more assets than the twenty poorest states. But the new oligopolies were owned not by robber barons but by 10 million ordinary shareholders. Carnegie’s gibe about “anybody’s business becoming nobody’s business” had come true.
Companies were supposed to be run in their owners’ interest. In 1916, the Michigan Supreme Court had famously ruled (in a case that two minority shareholders, the Dodge Brothers, had brought against Henry Ford) that “a business corporation is organized and carried on primarily for the profit of the stockholders.” Berle and Means argued that the passivity of these millions of shareholders had frozen “absolute power in the corporate managements.” In economic terms, the interest of the agent was separate from that of the principal. Of course, managers had often been uppity people, inclined to know best. (Asked to slow down by the onboard merchants, one Dutch East India Company captain, Jacob van Heemskerck, barked back, “When we risk our lives, the Lords of the Company may damn well risk their ships.”)13 And, of course, theorists had always considered the separation of ownership from control. But Berle and Means were the first to identify corporate governance as a practical problem.
Henceforth, rather than worrying about monopolistic entrepreneurs squeezing out smaller businesses, the authorities increasingly looked for ways to protect small investors from the power of unfettered managers. In 1933, the New York Stock Exchange finally required proper accounts for listed companies. The Securities Acts of 1933 and 1934 placed the fiduciary responsibility for reporting accurate information firmly with directors. Roosevelt created the Securities and Exchange Commission in part as a weapon against the bankers who he thought bore much of the blame for the recession. (He also established a flotilla of regulatory agencies to police companies, bringing trucking firms, airlines, and utilities under federal direction.)
The last book was about General Motors itself. In 1942, Sloan’s attention was caught by Peter Drucker’s The Future of Industrial Man, which argued that companies had a social dimension as well as an economic purpose. Sloan invited the Viennese exile, still at the time regarded as something of a misfit who didn’t know whether he was a political theorist or an economist, to analyze GM. The result was The Concept of the Corporation, published in 1946.
The book, one of the best managerial tomes ever written, roams freely, worrying, for instance, about both the percentage of Victorian Englishmen who were gentlemen (a minute fraction, in Drucker’s view) and the efficiency of Russian industrial management. The book had plenty of positive things to say about Sloan. Drucker argued that big was beautiful, and that GM’s decentralized structure was the key to its success.14 Indeed, his commendation persuaded countless firms to follow suit.
But there was a sting in the tail. The Concept of the Corporation made a passionate plea for GM to treat workers as a resource rather than just as a cost. In “the assembly-line mentality,” warned Drucker, workers were valued purely in terms of how closely they resembled machines.15 In fact, the most valuable thing about workers was not their hands, but their brains. Empowering workers became all the more important when Drucker identified a new class of “knowledge workers” (as he dubbed them in 1959). These were lessons that Japanese managers (who read Drucker’s work assiduously) learned rather more quickly than GM. The carmaker’s attempt at talking to its workers came down to suggesting they write an essay, “My Job and Why I Like It.”
One sign of the success of managerial capitalism is the way that it co-opted its state equivalent after 1945. During the Second World War, governments tightened their grip on business. In Germany, Krupp and I. G. Farben became adjuncts of the Nazi war machine. In America, the federal government bought as much as half of the products of both industry and agriculture. Wartime governments everywhere ordered management and labor to collaborate in order to boost productivity and prevent the strikes that had marred the 1930s.
This relationship continued after the war, though under different guises on each side of the Atlantic. In America, big government remained an important ally of big business, frequently drafting businesspeople (United States secretaries of defense included Neil McElroy, the P&G memo writer who later became the firm’s boss; Charles Wilson, Sloan’s successor at GM; and Bob McNamara from Ford). The Cold War saw the creation of what Dwight Eisenhower dubbed “the military-industrial complex.” Some of the biggest companies in the country—such as “the Generals”—relied heavily on the Pentagon. Even smaller companies sent lobbyists to Washington to drum up contracts and shape regulation. Nevertheless, the government remained a customer, a policeman, and an ally, not an owner.
That was not the case in Western Europe, where postwar governments systematically nationalized companies that controlled the “commanding heights” of the economy: heavy industry, communications, infrastructure. In many countries, one in five workers was employed by nationalized companies. Their founders liked to claim that they were creating a new form of socialist company. Herbert Morrison, the architect of Britain’s postwar nationalization, argued that “the public corporation must be more than a capitalist business, the be-all and end-all of which is profits and dividends. Its board and its officers must regard themselves as the high custodians of the public interest.”
Yet, the prophets of nationalization shared the Sloanist belief in managerialism and gigantism. Politicians like Morrison claimed that they could manage things better than the anarchic market, with stricter controls and forward planning. Family firms were too small to survive in a world dominated by economies of scale and scope. Nationalized companies would be big enough to capture economies of scale, to mobilize resources, and to adopt new technology. They would be run by trained professionals rather than bumbling amateurs. Instead of just making their new fiefdoms into government departments, most nationalizers stuck to the corporate model. “These are going to be public corporations, business concerns,” explained Morrison; “they will buy the necessary brains and technical skills and give them their heads.”
European and Asian governments poured resources into “national champions”—companies that were either owned by the state or closely aligned to it. Italy’s national oil company, ENI, for example, rapidly developed into a sprawling conglomerate that included some thirty-six businesses, dabbling in everything from crude oil to hotels. Even when they were not steering contracts toward these corporate pets, politicians found other ways to protect them. Heavy government regulation and intervention made it hard for newcomers to break into the status quo. Many countries saw the creation of a revolving door between big companies and big government. France’s énarques, the elite bureaucrats from the École Nationale d’Administration, glided between the “private” and public sectors with well-oiled ease.
Seen in this light, state-owned companies were less threats to Sloanism than heavy-handed compliments to it. Continuity remained the order of the day, all the more so because American firms continued to extend their domination over both America and the world. Even with the introduction of the state firms, there was little turnover in the world’s top two hundred companies until the 1970s. Obviously, Germany’s and Japan’s suicidal predilection for world domination made life much easier for America’s commercial juggernauts after the Second World War. Yet, even within the American market, big firms continued to reap the rewards of organizational innovation.16
Between 1947 and 1968, the share of American corporate assets owned by the two hundred largest industrial companies rose steadily from 47.2 percent to 60.9 percent. Banks added branches and consolidated smaller divisions. Hotels, restaurants, and rent-a-car services spread their networks across the land, helped by the national highway system. The booming information-technology sector produced several new firms (such as Xerox, Texas Instruments, and Raytheon) that made it into the super league. But older firms hung around, too, such as General Telephone and Electronics (GTE), Motorola, Clark Equipment, Honeywell, and, of course, IBM.17
As for oversight from Wall Street, the insurers, pension funds, and individual investors (whose numbers drifted down to 6 million in 1952 but then rose to 25 million by 1965) seemed happy to leave managers well enough alone. That may have been because many of the shareholders were managers themselves; it also helped that dividends were taxed more harshly than capital gains. Rather than having to return their cash to their owners, postwar managers were free to invest it—thus making their firms even bigger and themselves even less reliant on shareholders to finance investment. About two-thirds of the almost $300 billion that nonfinancial companies raised between 1945 and 1970 came from internal sources.18
Firms also became, if anything, more bureaucratic and introspective. Decentralization became a job-creation machine for managers: by the 1960s, GE had amassed 190 separate departments, each with its own budget, and 43 strategic business units. The ubiquitous Peter Drucker temporarily shelved all his humanistic ideas about empowering workers to invent “management by objectives,” an approach that dominated “strategic thinking” for decades to come. In The Practice of Management (1954), he emphasized that both companies and managers needed clear objectives, and that the best way to achieve those objectives was to translate long-term strategy into short-term goals. In particular, he believed that a firm should have an elite group of general managers determining strategy and setting objectives for more specialized managers: “Organization structure must be designed so as to make possible the achievement of the objectives of the business five, ten, fifteen years hence.”
ORGANIZATION MAN AND AMERICAN BENEVOLENCE
The security and predictability that American managers enjoyed in the 1950s and 1960s fostered something akin to German stakeholder capitalism. Not only did companies enjoy close relations with the government (“What is good for General Motors is good for America”), they also spread their spoils among their various stakeholders. “The job of management,” declared Frank Abrams, the chairman of Standard Oil of New Jersey, in a 1951 speech that was typical of the time, “is to maintain an equitable and working balance among the claims of the various directly interested groups … stockholders, employees, customers, and the public at large.”19 In The New Industrial State (1967), John Kenneth Galbraith argued that the United States was run by a quasi-benevolent oligopoly. A handful of big companies—the big three car companies and the big five steel companies, for example—planned the economy in the name of stability. They provided their blue-collar workers with lifetime employment and solid pensions, enjoyed fairly good relations with giant trade unions (around 40 percent of the workforce was unionized by 1960), and generally acted as good corporate citizens.
The most conspicuous beneficiaries were the managers. The 1950s and 1960s were the heyday of Company Man—or Organization Man, as he was then known.20 He relished the traditions of office life: the assiduous secretaries (or office wives), the water cooler chatter, the convivial Christmas parties. He spent more time in the office than at home—which might well be situated in a bedroom suburb an hour’s commute away—and often ended up leaving his wife for his secretary. He measured his life in terms of movement up the company hierarchy—a bigger office, a better parking space, a key to the executive washroom, and, finally, to cap it all, membership in the firm’s quarter-century club.
Company Man’s innate conformity began to worry a string of authors in the 1950s and 1960s. In The Lonely Crowd (1950), David Riesman noted that far too many Company Men were “other directed” rather than “inner directed”—more interested in the good opinion of their colleagues than in following their inner compass. In The Organization Man (1956), William H. Whyte worried that this emphasis on fitting in was stifling entrepreneurialism. (He quoted one IBM man proudly proclaiming that “the training makes our men interchangeable.”) In The Status Seekers (1959), Vance Packard showed how big companies devised intricate measures of status, from the size of offices to the horsepower of company cars—and how Company Men, like mice in some dismal scientific experiment, spent their lives scurrying around the treadmill and pressing the right buttons.
Yet, the mood among America’s mice was still pretty triumphant. Their creed, after all, was being accepted well beyond the confines of Main Street. At home, America’s bosses ran the government. Bob McNamara’s Whiz Kids moved all too effortlessly from managing Ford to running the Vietnam War. Abroad, American companies conquered one European market after another. In a hugely popular book, The American Challenge (1968), Jean-Jacques Servan-Schreiber argued that the European Common Market (which was then in its ninth year) was basically an American organization. For this plucky Gallic resistance fighter, the problem was not America’s financial power or technological brilliance: “on the contrary, it is something quite new and considerably more serious—the extension to Europe of an organization that is still a mystery to us.”
“Mystery” was probably not quite the right word. The Europeans were determined to learn from the Americans. By 1970, more than half of Britain’s hundred biggest industrial companies had turned to consultants from McKinsey to reorganize themselves; and a growing number of companies had adopted the multidivisional form that McKinsey and others championed. There were exceptions, of course, to accepting the American way, but they seemed only partial ones. Japanese and German firms stuck to their more formal version of cooperative capitalism, but they also imported parts of the multidivisional structure; and their domestic economies were dominated by reassuringly big businesses.
The other element that underlined the supremacy of managerial capitalism was that the most conspicuous private-sector alternative to the multidivisional firm in the 1960s—the diversified conglomerate—was actually based on a slightly warped version of two Sloanist credos. Conglomerates like Gulf & Western (“Engulf & Devour”) and ITT might have been cocky upstarts, driven by short-term stock-market gains rather than long-term planning. But, first, they believed that size mattered: that was one reason they kept on buying everything in sight. And, second, they were über-managerialists; their management skills, they believed, could master any sort of unrelated business, be it, in LTV’s case, meatpacking and steel or, in Gulf & Western’s, sugar refining and films.
The 1960s conglomerates arose partly by gobbling up the divisions that other companies did not want, and partly through hostile takeovers, often using their own highly rated shares. In both cases, they were helped by generous accounting rules and greedy investors (greedy not just for higher returns but also for something a bit more exciting than the steady growth of companies like GM). By 1973, fifteen of the top two hundred American manufacturing companies were conglomerates. But by then the bloom was off the rose. For all their frantic buying and selling, the conglomerates failed to deliver the returns shareholders expected. Shareholders consequently marked down their value, which in turn restricted their ability to take over more firms.
The Sloanist structure survived the assault fairly easily. But it should have heeded the warning. The manager-dominated company was in danger.