
Innovation and enterprise growth: it took courage, focus and commitment at all levels for IBM to develop its next-generation product lines.

During the mid-1990s, IBM faced the enormous challenge of renewing its mainframe product line for transactions processing in order to win ebusiness applications for large-scale, open-systems computing. To accomplish this renewal, IBM would have to revisit all the fundamental aspects of its business: how it segmented markets and prioritized user needs, how it designed its computers, its branding strategies, the organization of product development, and even its business models. The changes that IBM has since made in these areas are profound.

In addition to describing these changes, this first of two articles generalizes the IBM turnaround into a management framework that others can use. The framework focuses on the differences between developing breakthrough technologies with unknown uses, generational product line renewal that applies proven technologies to established uses, and bold initiatives to leverage platforms to new target users and new product uses. Understanding these bold initiatives is essential for generating organic enterprise growth.

Growing an enterprise organically and substantially is the great challenge facing most companies. What must management do to achieve such growth? How does management need to rethink its market and product strategy, and its R&D organization? Who has to champion and drive the growth forward, and can R&D itself take a leadership role in what are profoundly business issues? How does a company renew itself when many of its potential innovations are impeded by the inertia, organizational infrastructure and arrogance of many of its own managers, accustomed to being traditional market leaders?

All these questions are asked from the belief that sustained, organic enterprise growth comes best when a firm leverages its core skills and technologies to new market applications--not just once, but year after year.

Note how different this statement is from the typical focus of technological innovation: creating breakthrough technologies that make completely new things possible. Having developed a successful business with core product lines, many R&D executives think about allocating resources to the creation of new, breakthrough technologies that will serve as the basis of entirely new product lines and even new businesses within the corporation.

However, creating and commercializing a breakthrough technology can often take more than a decade. The concept of the Global Positioning System (GPS), for example, was conceived at a 1973 Department of Defense meeting as a foolproof method for satellite navigation. But that meeting would not have happened had the Aerospace Corporation not begun early GPS development in 1961, work that in turn would not have occurred had people not started building portable atomic clocks during the mid-1950s. While the first operational GPS satellite was launched in 1978, the full 24-satellite constellation was not in place until 1993 (1). Since then, GPS has been rapidly applied to trucking, commercial fishing, surveying, personal navigation on foot, on the water and in the car, and many other applications.

Technical Innovation and Enterprise Growth

Leveraging core skills and technologies to new market applications is very different from making breakthrough discoveries. As the GPS example shows, leveraging proven technology to new market applications tends to be much faster and more focused, in terms of both market research and new product development, than creating breakthrough technologies. It is this dimension of innovation that we address in this article.

Figure 1 illustrates this discussion with a diagram that we use to understand the linkage between technological innovation and enterprise growth. It shows a generic technology life cycle curve plotted against cumulative industry sales for products based on that technology (on the X axis) and the passage of time (Y axis).

[FIGURE 1 OMITTED]

Each phase of the framework offers a different set of market, technological, organizational, and financial challenges. In the first phase, Technological Discontinuities, innovators develop new technology with the potential to disrupt current product and/or service designs and solutions. Innovators also try to find a single compelling application for their new technology. This proof of concept, often done with lead users, serves to validate a vision for the future and is antecedent to building a viable business. Von Hippel's "lead user" and Leonard and Rayport's "empathic design" are two powerful concepts that apply directly to this phase of innovation (2).

As exciting as breakthrough science can be, however, potential paradigm shifts need a very long time to take hold in target markets. We noted above that GPS took 20 years to achieve widespread commercialization. It has taken PCs more than 20 years to achieve 20 percent market penetration in the United States, and cell phones about 15 years. The discovery of a new drug is still largely approached as an R&D effort that integrates both basic science and product development; for drugs introduced in the last several years, the average time and cost of development is now about 13 years and $800 million per product. One might say that the biotech industry still lives in the first phase of our model.

In the second, Market Expansion, phase, the corporation forcefully applies established core technologies to a focused set of market applications. Derek Abell suggests in a simple yet powerful way that a market application is the intersection between specific groups of users and specific uses for a particular product or service technology (3). At some point, every great manufacturing company has found itself in a position to expand its market and has risen to the challenge. To build market share, the firm develops robust product architectures, creates a stream of products based on these architectures, and scales its manufacturing and distribution assets accordingly.

Within their evolving product lines, successful firms develop common subsystems that become the enabling product or process platforms of the corporation. The common engines in Honda's cars and light trucks, the motors in Black and Decker's power tools, and the diaper conversion lines of Procter & Gamble serving young and old, are classic examples. This is the central concept of work on modularity, scalable product line architectures, and subsystem platforms (4,5). At the same time, the company continues to bring new users into the product category by powerful marketing communications and enjoys the growing momentum from the diffusion of its technology.

The third phase in the framework, Enterprise Growth, has its own distinct challenges. This is the phase where the company must adapt its core technologies for new market applications, i.e., new users and new uses. Consider, for example, a hypothetical GPS systems manufacturer that has gained leading market share in an industrial segment such as tracking fleets of trucks shipping goods across the country. To grow, management adopts a strategy of building tracking systems for fishing fleets. The company must first identify the different types of these new target users and understand the specific conditions under which its technology will be applied. It must then develop the new product line architecture and seek complementary innovators who, for example, have navigational maps that can be loaded onto the handheld devices. New manufacturing processes must also be capitalized to handle new form factors and weatherproof materials. The company must also develop new channels to the market and, perhaps, a new brand. It may also decide that it is best to incorporate its systems as part of a service for which customers pay a monthly fee, i.e., a nontraditional business model.

Each one of these aspects can be challenging and must be carefully managed. Little has been written on how to address them, which is the purpose of this article.

Corporate Choices

Two distinct choices face corporations striving for market leadership in a target market segment. One choice is to re-engage in core technology discovery mode, i.e., to go "left" on Figure 1. This requires the allocation of substantial resources to discovering breakthrough technologies and a few compelling applications for them. Many large corporations have "skunk works" R&D projects focused on commercializing technological discontinuities. The second choice is to go "right" on Figure 1, leveraging the firm's competencies to new market applications.

These choices are by no means mutually exclusive. Successful large firms tend to allocate resources to all three phases of the framework. However, we find that the third phase, enterprise growth through leveraging technology to new market applications, is often under-prioritized and poorly resourced by senior management.

The recent history of IBM demonstrates the benefits of investing in a strategy of enterprise growth. In the pages that follow, we examine what we believe corporate historians will come to see as the greatest corporate turnaround of the second half of the 20th century. It is a case study that is rich in lessons and whose outcome, at the time, was anything but certain (6).

A Company in Crisis

By 1993, IBM was falling hard. It had completely missed or ignored the client-server technology cycle of the mid-1980s, and by the early 1990s its traditional batch mainframe was suffering painfully. Engineering professionals who had worked for so many years in IBM's research and product development centers saw their careers in IBM coming to an abrupt end. Having spent decades working at the forefront of large-scale computing, they felt adrift in the sea of technological change.

Launched in the late 1950s, IBM's mainframes, operating systems and databases became the cornerstone of data processing for the world's largest corporations. Over the next 40 years, technological advances and the sweat of thousands of engineers led to successive generations of increasingly powerful, million-dollar-plus mainframes. The mainframe became the king of computing, focused on transactions processing in large corporations (7).

Breathtaking Advances

Technological advances in computing over those decades were breathtaking. By 1995, the effective price per million instructions per second (MIPS) was only 1 percent of its 1970 price per MIPS. By the turn of the millennium, the approximate cost per MIPS in a high-end mainframe had reached a tenth of one percent of its 1970 level. IBM's earliest mainframes were priced at about $256 million per MIPS in the late 1950s; by 1970, about $2 million per MIPS; by 1980, $350,000 per MIPS; by 1990, $100,000 per MIPS; and by 1995, less than $20,000 per MIPS. Today, IBM's latest systems are priced at about $2,500 per MIPS.
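To make the arithmetic behind these figures concrete, the short Python sketch below converts the approximate price points just quoted into implied compound annual rates of decline. It is purely illustrative: the dollar figures are the rounded approximations cited above, and placing the late-1950s figure at 1959 and "today" at 2005 are our assumptions.

```python
# Illustrative only: implied compound annual decline in price per MIPS,
# using the approximate price points quoted in the text. Assigning the
# late-1950s figure to 1959 and "today" to 2005 are assumptions.

price_per_mips = {
    1959: 256_000_000,   # earliest mainframes, ~$256 million per MIPS
    1970: 2_000_000,
    1980: 350_000,
    1990: 100_000,
    1995: 20_000,        # "less than $20,000 per MIPS"
    2005: 2_500,         # "today," at the time this article was written
}

years = sorted(price_per_mips)
for start, end in zip(years, years[1:]):
    ratio = price_per_mips[end] / price_per_mips[start]
    cagr = ratio ** (1.0 / (end - start)) - 1    # compound annual change
    print(f"{start}-{end}: {cagr:+.1%} per year")
```

Every interval shows a double-digit annual decline in price per MIPS, which is the economic backdrop for the pricing pressure described below.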

For most of that 40-year period, IBM led the race to increase throughput while remaining competitive on price. Competitors such as Amdahl, Hitachi and Fujitsu did not seriously challenge IBM's dominance until the early 1990s. IBM owned the transactions processing application space in industries ranging from financial services to transportation to manufacturing. By the late 1980s, IBM's S/390 division accounted for more than $10 billion in annual revenue (8). More than 30,000 employees were working on the product line in one way or another. On its own, that division would have ranked among the largest manufacturing companies in the United States. Everyone within IBM felt that the S/390 division remained the corporation's heart and soul.

By the late 1980s, however, chinks had begun to appear in IBM's armor, both in the mainframe business and for the corporation as a whole. Large customers were asking for features the company did not provide in its mainframe products, including Internet- and intranet-style networking as well as applications software for on-line business and for collaboration among individuals and groups.

Client-server computing offered a much more elegant and flexible solution for sharing programs and data between large and small computers. Programs and data could be seamlessly downloaded from central computers onto any other computer on a local area network and run there. By the early 1990s, Fortune 500 companies were adopting Unix- and Windows-based client-server solutions wholesale, directly threatening the traditional mainframe solution. Independent software companies, such as SAP and Oracle, offered client-server applications that covered nearly all of the major operating functions across the enterprise. Within just a few years, the chinks in IBM's armor became cracks, and the cracks, fissures.

There was no technological reason why IBM could not have included these newer technologies in its systems. Perhaps complacency had set in; the S/390 division had been too dominant for too long. IBM did not believe in client-server architectures and did not invest accordingly (9). Also clear in hindsight is that IBM's organization, processes and approaches to new product development were too focused on incremental innovation for existing product line architectures.

Competition from Amdahl and Hitachi

IBM had two major competitors in the mainframe arena, Amdahl and Hitachi. While Amdahl never surpassed IBM in raw performance, it was selling similar machines at a 20 percent discount relative to IBM's prices; its share hovered between 4 and 7 percent. Hitachi, on the other hand, was working hard to beat IBM in performance with a high-end bipolar mainframe. This challenge came just as IBM was abandoning its own bipolar technology for CMOS-based designs in response to the shifting development economics of the two technologies (10).

Hitachi's decision had a predictable consequence: its new bipolar mainframe was faster than prior machines, while IBM's first CMOS designs were, unfortunately, initially substantially slower. In 1993, Hitachi introduced a system called "Skyline," whose bipolar processor made it about twice as fast as IBM's current bipolar mainframe. IBM's large customers began buying Hitachi's computers for compute-intensive transactions processing applications at large airlines, financial services firms, and retailers. Within several years, Hitachi took 9 percent market share, dropping IBM's share to about 75 percent. This translated into almost a billion dollars of new annual revenue for Hitachi, and a billion dollars less for IBM. Hitachi had successfully attacked IBM "from above."

At the same time, IBM was being ravaged "from below" by Unix server vendors. Theirs was a nontraditional high-end computing solution that was scaling up into IBM's core market space. This threat was even more formidable than Hitachi's more traditional attack. Vendors were packaging multiprocessor versions of RISC (reduced instruction set computer)-based workstations into "small mainframes" for client-server applications that were becoming pervasive during the 1990s in corporations.

It was only a matter of time before the "small" became "mid-sized" and the "mid-sized," "high-end." By the end of the 1990s, Sun Microsystems was offering a RISC-based machine that many observers felt was the fastest mainframe on the market for commercial client-server applications. The most popular of these applications were the enterprise resource planning (ERP) systems offered by SAP, Oracle and Baan. While these ERP systems took full advantage of the distributed computing features of Unix, on IBM's machines they had to struggle along through "Unix emulators" running within IBM's own proprietary operating systems.

No Place To Hide

Hit from above, hit again from below, IBM had no place to hide. The company needed much more than just a new product architecture; its problems were systemic. Management had to fundamentally change how it went about the business of designing, engineering, manufacturing, and marketing its products. Moreover, it was going to have to make these changes without frightening the installed customer base.

How deep was the abyss? It is easy for outsiders to forget the extent of IBM's problems at that time. In 1990, IBM reported net earnings of approximately $6 billion. A year later, it reported a small net loss. In 1992, the loss approached $7 billion. By 1993, losses exceeded $8 billion! IBM was bleeding as much money as, if not more than, the annual revenue of many other computer manufacturers.

All this spelled disaster for IBM. If the question was asked, "What was wrong?" the answer would have to be, "Just about everything." And yet, there was a cadre of insiders, and one key newcomer, who were committed to saving the company.

The Turnaround Begins

Technological innovation required that IBM first become innovative in how it established its market strategy. This included market segmentation, the grouping of customers, customer research, and the prioritization of user needs. The obvious lesson here from IBM is that if your company's core product lines are stagnating, look first to how customer needs have changed and then observe needs in new emerging markets. Only then can you develop effective solutions.

Contrary, perhaps, to conventional wisdom, we believe that market segmentation and customer research are an R&D manager's fundamental responsibility within the context of next-generation product line development. This work is clearly essential for leveraging existing core technologies and product platforms to new market applications, because one must know the what, who and why of those applications before any leveraging, i.e., new product development, can be done effectively. IBM's market insights--developed by staff representing all major functions in the S/390 division--became the design drivers for its new hardware and software architectures.

An important first step in IBM's turnaround was how the company segmented its target users. Traditionally, management segmented customers in terms of the products they used, rather than the solutions they needed. In other words, IBM segmented its markets by customers that used mainframes versus those that used AS400s versus those that used workstations or PCs. That customers need things other than hardware, such as applications software, was largely an afterthought.

The old market strategy also ignored the reality that large customers using all of these products were running enterprise-wide software across them and needed more effective integration. The S/390 division's target segment, for example, contained Fortune 400 corporations that needed ever-larger, faster computers for high-volume transactions processing. In many ways, IBM's traditional view of its customers was a market segment of "one." This blinded developers to the subtleties of different requirements across industries and the applications occurring both within corporations and between them.

Confronting Emerging Needs

1993 was the year of the great abyss for IBM. By 1995, management had shifted its approach to a vertical industry segment orientation, including finance, manufacturing, retail, and healthcare. Further, multifunctional teams were studying and developing solution requirements for customers within these segments. In years past, IBM had assumed that on-line transactions processing was the only requirement and that the answer to this was simply to make new machines bigger and faster. Now, fighting for its very existence, IBM had to consider emerging needs. Teams studied data mining and business-to-business electronic commerce as carefully as transactions processing. They developed rich layers of detail within each vertical market pertaining to how different users in different industries could apply new server technology to solve business problems.

These market insights then drove two distinct generations of product line change. Figure 2 shows the massive configuration of the traditional mainframe, known internally as the H series. The first phase of IBM's turnaround came with the development of the G series, its first CMOS-based high-end server. The focus of the G series was to stop attacks from high-end competitors Hitachi and Amdahl, and to recapture market share in traditional transactions processing. The G was a much more cost-effective machine than H, but it still ran IBM's traditional operating systems software.

[FIGURE 2 OMITTED]

Then came the next phase of the renewal in which IBM developed and marketed the zSeries. The focus here was to take on IBM's low-end competitors, such as Sun Microsystems, with a much more powerful CMOS microprocessor, a dynamically scalable product architecture, and new, open systems software. "Open systems" represented a host of new market applications for IBM.

The old H series bipolar architecture had actually been introduced in 1990; customers knew the products based on it as the 9021 family. This machine and its predecessors commanded three-quarters of the market for transactions processing during the early 1990s. It combined complex mechanical, electrical and software engineering, with high-voltage electronics placed in immediate proximity to chilled-water cooling systems. IBM's old mainframes were massive, occupying some 1,000 square feet and delivered to customers on tractor-trailer trucks. The computer frames would be fork-lifted into the facility. Six to eight people would put the frames together, connect the cables and power supplies, and install the plumbing, which required a raised floor. The old mainframe was one of the world's most complex machines (Figure 2).

Much of this complexity was because IBM used bipolar processors in its mainframes. Bipolar chip design uses continuous electrical current, which requires circuitry large enough to handle the heat generated. Circuits that were too small could not dissipate the heat and would "fry." These bipolar chips also required far more energy than alternative technologies (such as the CMOS processors found in PCs and workstations). To deal with the heat created by these processors, IBM developed cooling systems in which a copper piston was positioned on the back of each chip, held in place with a retaining housing and cooled by chilled water flowing through a cold plate.

The Need for CMOS

By 1993, the year of the $8 billion loss, the writing was on the wall. The investment in bipolar semiconductor fabrication had been massive. Yet, if the only use of that investment was for the S/390 mainframe, the costs of advancing the bipolar processor technology would soon exceed the price bearable by even IBM's largest customers. It was a classic case of a technological discontinuity--in this case CMOS--obsoleting the dominant mainframe architecture based on the older technology (11). IBM was advancing its own bipolar processor speed by 18 percent a year in the early 1990s, whereas CMOS speed was surging ahead 50 percent per year. Management at IBM believed that it would have to double the power of its servers every few years to keep up with applications requirements.
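A rough projection, sketched in Python below, shows why this crossover logic was compelling. It is an illustration, not IBM's actual roadmap model: it starts from the 100-MIPS level of the first CMOS machine described later in this article, compounds the 50-percent-per-year CMOS improvement quoted above, and asks how long it takes to pass the 450-MIPS bipolar H6, which IBM would no longer be advancing.

```python
# Illustrative projection, not IBM's actual roadmap: compound the quoted
# 50-percent-per-year CMOS gain from the 100-MIPS level of the first CMOS
# machine and see when it passes the 450-MIPS bipolar H6, which IBM had
# stopped advancing. Starting values are taken from later in this article.

H6_MIPS = 450.0
cmos_mips = 100.0
CMOS_ANNUAL_GAIN = 1.50     # "surging ahead 50 percent per year"

year = 0
while cmos_mips < H6_MIPS:
    year += 1
    cmos_mips *= CMOS_ANNUAL_GAIN
    print(f"year {year}: projected CMOS performance ~{cmos_mips:.0f} MIPS")

print(f"Catch-up after roughly {year} years under these assumptions.")
```

Roughly four years under these assumed rates, which is consistent with the catch-up described below with the G5 in 1998.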

Therefore, the heart of the change from H to G was to replace bipolar processors with CMOS processors. As with the GPS example at the beginning of this article, CMOS technology was almost 20 years old by the time IBM applied it to its mainframes. To hold a logical state (on/off), a bipolar logic circuit has current running through it continuously. Within a CMOS circuit, a burst of current is needed to switch the logical state, but the state is maintained by a much smaller current. This difference affects power consumption, cooling and size significantly. The difference in footprint between the H and G series was dramatic.

Organizational Conflict

The transition from H to G also witnessed the organizational struggles often faced by R&D groups when confronted with new market demands and new technologies.

Within IBM, there were two competing CMOS design strategies, one proposed by the established "insiders" and the other by a distant lab. The mainframe R&D group in Poughkeepsie, New York, wanted to build a new CMOS mainframe that would provide all the power (450 MIPS) of the current H6. The alternative plan was to build a new CMOS mainframe whose development would be far less risky and take far less time; this first CMOS machine would be followed quickly by upgrades to increase performance, so that within three years the new machines would "catch up" to the H6 in raw performance. IBM's research lab in Boeblingen, Germany, which had substantial experience in CMOS circuit design, was the advocate of this second strategy. IBM's senior management went with the German plan; technical risk and time to market were the driving concerns.

The traditional engineering power brokers in the S/390 division were stunned--they would be forced to implement a design not of their own creation! The realization soon dawned on all parties, however, that there was much work to be done and little time to do it. The company had to develop entirely new processors, electronic buses instead of cables, fans instead of plumbing, new electronics and housing, and new computer diagnostics. Every subsystem, and every interface between those subsystems, was going to have to be changed. Meanwhile, downsizing was in full force across all of IBM.

The first CMOS-based mainframe, internally called the G1, came out 18 months later. It was physically much smaller. A standard configuration of the H6 had a footprint of approximately 1,000 square feet, while the G series' footprint was only 30 square feet! The G1 also used only 13 percent of the electricity of the H6. These were order-of-magnitude improvements.

Not all was positive, however. Where the "old" H6 machines hit 450 MIPS, the "new" G1 could deliver only 100 MIPS! Figure 3 provides the actual performance numbers of multiprocessor configurations across the three generations of high-end server technology described in this article, i.e., the H, G and zSeries. The bars on the chart for H6 and G1 constitute the performance gap that IBM was forced to suffer to effect a rapid transition from bipolar to CMOS microprocessor technology.

[FIGURE 3 OMITTED]

Starting Slowly

The technology roadmaps made by S/390 management back in 1993 predicted the results shown in Figure 3. Management realized that the performance improvements with bipolar architecture were leveling off, and that while the CMOS architecture would start slowly, it would quickly scale in terms of performance far beyond the capabilities of the old architecture. This is exactly what happened by 1998 with the G5.

Imagine the fortitude of IBM management during this transition period, bringing a new product to market that was four times slower than its predecessor! However unusual, it was a necessity. IBM, as a corporation, did not have the luxury of waiting for its engineers to develop a 400 MIPS G machine as the first new product line release.

Modular design helped save the day. To deal with the 100 MIPS problem, engineers in Poughkeepsie designed a method of tightly coupling and sharing loads across separate G computers. A customer could achieve the power of an old H6 with four coupled G1s. This coupling technology, called Sysplex, went a long way toward preserving the installed base during the transition from H to G.
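The coupling idea can be sketched conceptually in a few lines of Python. This is a toy, not the Sysplex protocol: it simply spreads incoming transaction work across several coupled G1-class machines so that, to the workload, the group behaves like one larger system. The node names, the round-robin dispatching and the workload figures are all illustrative assumptions.

```python
# Conceptual toy, not IBM's Sysplex protocol: spread incoming transaction
# work across several coupled machines so the group behaves like one larger
# system. Node names, MIPS ratings and workload batches are assumptions.

from itertools import cycle

class CoupledNode:
    def __init__(self, name: str, mips: int):
        self.name, self.mips, self.work_dispatched = name, mips, 0

    def run(self, work_units: int) -> None:
        self.work_dispatched += work_units

nodes = [CoupledNode(f"G1-{i}", mips=100) for i in range(1, 5)]  # four coupled G1s
dispatcher = cycle(nodes)                                        # naive round-robin coupling

for batch in [30, 45, 25, 60, 40, 50, 35, 15]:                   # arbitrary transaction batches
    next(dispatcher).run(batch)

total = sum(n.work_dispatched for n in nodes)
for n in nodes:
    print(f"{n.name} ({n.mips} MIPS): {n.work_dispatched} work units "
          f"({n.work_dispatched / total:.0%} of the load)")
```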

The risks cannot be minimized, however. Had G development been seriously delayed, or had Sysplex simply not worked, IBM would have lost many customers; Hitachi was attacking from above and Sun Microsystems from below. IBM Global Services, which was also focused on large mainframe users, would have suffered greatly as well. Today, with Global Services accounting for more than half of IBM's revenues, the development of the G series CMOS architecture can be seen as nothing less than a "bet the company" project.

It worked. While the G design was at first slow compared to the H, its architecture had a clear path to add more processors and cache memory. As shown in Figure 3, subsequent models offered accelerating performance. By 1997, IBM had reversed its loss of market share to its traditional mainframe competitors in transactions processing, Hitachi and Amdahl, and was poised to take on the Unix vendors in client-server computing (12).

Strategic Entrepreneurship

Complacency became a mindset of the past. While the G series was picking up steam, IBM had already set its sights on its new competitors and new market applications. This is the challenge presented by the third phase of our management framework, where firms leverage core technologies to new market applications. It is strategic entrepreneurship.

As the G computers were taking back market share, IBM's management had clear motivations to embark on next-generation product development. The price of high-end hardware raised a value question for customers. By 1998, IBM's large G series servers ranged in price from $250,000 to $3 million per machine; Sun Microsystems' servers, by comparison, ranged from $50,000 to $1 million. Growth rates for machines over $1 million were in the single digits; growth rates for machines under $50,000 were in the double digits.

Additionally, Sun's most powerful computers were fast, reliable and used applications and software development tools from the Unix world. IBM's own estimates were that between 1997 and 2000, 80 percent of ebusiness computing procurements by large corporations consisted of servers from Sun, storage systems from EMC, networking routers from Cisco, and database software from Oracle. In 1993, Sun Microsystems reported $4.7 billion in revenue; by 2000, its sales had more than tripled to $15.7 billion. Compaq's sales of $7.1 billion in 1993 had grown more than five-fold to $38.5 billion in 2000! Hewlett-Packard's sales grew from $20.3 billion in 1993 to $48.7 billion by 2000. Directly challenging IBM, Hewlett-Packard used its wealth to buy Compaq for $25 billion. Throughout the 1990s, these companies were reaping the rewards of Web-centric, client-server computing, while IBM, from a product standpoint, was not.

The uncompromising reality was that the business model for large systems manufacturers had been radically restructured. At the beginning of the 1990s, 80 percent of a large corporation's information technology budget was allocated to hardware procurement and maintenance, with the remaining 20 percent allocated to software and services. By the turn of the millennium, IBM found Fortune 400 customers spending only 10 percent of their IT budgets on hardware. Corporations were placing a high priority on integrating their divisions, applications and customers. Client-server computing architectures, database warehousing, and Web-centric distributed computing were the means to accomplish these ends. Centralized transactions processing was in maintenance mode.

Facing a Web-centric, Client-Server World

Senior management was committed to parallel development, building a next-generation architecture even while building products on the current-generation architecture. Just one year after introducing its first G series computer, IBM began planning its next generation of high-end servers. In 1995, executives embarked on a mission to address Web-centric, client-server computing. At that time, they understood that winning the ebusiness market would require new levels of scalability and interoperability, demands that would obsolete the G architecture and force changes in IBM's software strategy. Hardware, software and even business models would have to change.

A division executive, Linda Sanford, assembled the "best and brightest" from across the company to develop a strategy for this new world. The team she assembled was truly multifunctional, consisting of mid-level managers with proven records of accomplishment in technology and product line development. These were the thought leaders in their respective fields within IBM, yet not one had achieved executive rank in the company. In fact, an executive on that task force might have imposed dated approaches on the team. "Turf" was set aside to create a robust growth plan for IBM's server business, focused on next-generation hardware. The task force also focused on a universal Unix as the foundation for systems software, database, communications, and applications software. IBM later implemented that Unix vision as a Linux solution.

Senior management named the group the ES2000 team, for "enterprise systems in the year 2000." The ES2000 team completed its work in just three months, delivering its plan to senior management in the spring of 1995. In many ways, IBM has been executing to that plan ever since.

A New Dimension of Computing

The new hardware architecture mapped out in that plan came to market in 2000 as the z900. In striking contrast to the earlier transition from H to G, a standard z900 systems configuration delivered 2,600 MIPS, which was 1,000 MIPS faster than the last version of the G series. This speed was achieved by using an internally developed 64-bit RISC-based CMOS processor, called "Blueflame" within IBM.

The "z" in the product line name was intended to suggest a new dimension of computing. The X and Y dimensions in the old world were performance and price. The new Z dimension was focused on the dynamic management and coordination of all the subsystems in the computing architecture, a form of dynamic multiplexing between processors, cache memory and input/out channels. The subsystem that would deliver this capability within the z architecture was called the Intelligent Resource Director. This capability had been implemented earlier in the old H series mainframes during the early 1990s but had been temporarily set aside due to the time cycle demands on the G series. Now, IBM reintroduced an enhanced version of dynamic load balancing into the new zSeries architecture. This allows a brokerage firm, for example to achieve 15X peak performance over standard utilization without interruption, avoiding widescale systems failure. For large customers, this feature is critical. It delivers operational scalability.

At the same time, IBM totally changed the software architecture of its new servers. Traditionally, nearly all software running on large IBM machines was IBM-made, and the heart of this software was IBM's proprietary operating systems. As the leader in on-line transactions processing, IBM had operating systems that excelled at running "batch jobs" programmed in COBOL. The company had tried to accommodate the large client-server application vendors with a Unix emulator inside its proprietary operating systems. Management knew, however, that this was untenable because of the myriad code changes vendors would have to make every time they or IBM released a new version of software.

The demand to run Web commerce applications was even more troublesome. These applications came in two major flavors: business-to-business (B to B) supply chain ordering and fulfillment systems, and business-to-consumer (B to C) Web store systems. The languages and tools of these applications--Java and others--were foreign to the traditional mainframe world. Yet, IBM could no longer ignore that Fortune 400 accounts were spending heavily on developing ebusiness solutions on other manufacturers' machines, networks and programs.

Introducing Linux

With the ES2000 business plan in hand, division executives took the bold step of introducing Linux as a native operating system offering in 1999 as part of the G6. Even though IBM had tremendous Unix expertise in its workstation division in Austin, Texas, there were simply too many versions of Unix available during the late 1990s to allow true cross-machine portability of software. Executives felt that IBM needed to make its own mark. Linux was "new." It also offered an opportunity to excel in an arena not yet dominated by anyone else.

IBM's German lab was again hard at work, in stealth mode, making Linux run on the zSeries Blueflame processor. When the lab's senior managers made their Linux work known to others, IBM executives on both sides of the ocean were excited. Dave Carlucci, then general manager of the S/390 division, insisted that all of IBM commit itself to making Linux work across all of its servers.

IBM then made the strategic decision not to get into the business of selling and supporting Linux, even though Linux would be the cornerstone of its future success. There were a number of Linux software companies already in existence; IBM decided to help them prosper. This illustrates the kind of change in business models that a company must often consider when it goes after new market applications.

IBM made its own database system (DB2) operational on Linux, and applauded when Sybase, SAP and Oracle did the same for their own products. By the end of 2002, there were several thousand off-the-shelf commercial software products available for Linux, and therefore available for IBM's servers. At the same time, IBM invested heavily in making integrative "middleware" to connect IBM applications with non-IBM applications. The goal was to make this software, called WebSphere, the bridge between new-world applications and traditional transactions processing systems.

After the first zSeries was launched, development continued apace. IBM introduced the z900 Turbo in 2001, and with it broke the 3,000 MIPS barrier. By 2004, IBM had reached 9,000 MIPS and a 32-processor configuration with the zSeries 990. Reflecting a "Better, Best" product strategy, the company also introduced at the same time the zSeries 890, a 1,360 MIPS, 4-processor machine for less-computing-intensive applications. In addition to being faster, both of these new machines could be supplemented with special "application assist processors" (known as zAAPs) that handle Java work dispatched to them by the operating system while conventional processing is handled by the central processors. This is part of IBM's drive toward ever more powerful integration of diverse workloads.
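The zAAP routing idea can also be illustrated with a toy dispatcher, sketched below in Python. This is not the actual z/OS dispatcher: it simply steers Java-eligible work to an assist-processor queue so that the general-purpose central processors remain free for conventional work. The work items and queue structure are illustrative assumptions.

```python
# Toy illustration of the workload-routing idea behind zAAPs, not the actual
# z/OS dispatcher: Java-eligible work is steered to assist processors so the
# central processors stay free for conventional work. Items are hypothetical.

from collections import deque

central_processor_queue = deque()
zaap_queue = deque()

def dispatch(work_item: dict) -> None:
    """Route Java-eligible work to the assist-processor queue, the rest to CPs."""
    if work_item["kind"] == "java":
        zaap_queue.append(work_item)
    else:
        central_processor_queue.append(work_item)

for item in [
    {"kind": "java",  "name": "Web storefront request"},
    {"kind": "cobol", "name": "batch settlement job"},
    {"kind": "java",  "name": "supply-chain order message"},
    {"kind": "cobol", "name": "on-line transaction"},
]:
    dispatch(item)

print("zAAP queue:", [w["name"] for w in zaap_queue])
print("CP queue:  ", [w["name"] for w in central_processor_queue])
```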

Creating a Platform-Centric Organization

In many ways, IBM's turnaround is a classic example of customer-driven innovation spanning successive generations of a product line. The H series became outdated and cost the company market share; the G series stopped the bleeding with an architecture built on more efficient technology; and the zSeries expanded on this base to retake share with superior features and performance.

If the story of corporate renewal were to end only with innovation in market understanding and technological innovation, then any firm that developed a robust market strategy and employed great engineers could succeed. However, we know that is not the case. Robust strategy is for naught without strong execution. Organization and communication processes enable innovative firms to deliver new products in a timely and efficient manner. IBM also had to change its organization and development processes to survive the 1990s. These changes were dramatic, wide-ranging and difficult for many to accept. It is these structural innovations to organization and process that will be the focus of our next article in Research-Technology Management.

References and Notes

(1.) For a history of the development of the core technologies and subsequent application of GPS, see: Daniel Kleppner, Beyond Discovery: The Path from Research to Human Benefit, National Academy of Sciences, April 1997. An electronic version of this paper may be found at www.beyonddiscovery.org under the Technology Articles section, along with the histories of a number of other breakthrough technologies.

(2.) Von Hippel, E. The Sources of Innovation. Oxford, England: Oxford University Press, 1988. Leonard, D. and Rayport, J. "Spark Innovation Through Empathic Design." Harvard Business Review, Nov.-Dec. 1997, pp. 102-113.

(3.) Abell, D. Defining the Business. Englewood Cliffs: Prentice-Hall, 1980.

(4.) Meyer, M. and Lehnerd, A. The Power of Product Platforms. New York: The Free Press, 1997. This book examines platform concepts and methods for physical assembled products and software.

(5.) Meyer, M. H. and Dhaval, D. "Managing Platform Architectures and Manufacturing Processes for Non-Assembled Products." Journal of Product Innovation Management, No. 10:2002, pp. 277-293. This article examined the meaning and impact of modularity in chemicals, paper products and semiconductors.

(6.) While current and/or former IBM employees have been interviewed and have shared their personal opinions, it should be noted that they were speaking as individuals on their own behalf and that this account does not represent an official IBM view of the events and matters discussed herein.

(7.) Brooks, F. The Mythical Man-Month, Anniversary Edition: Essays on Software Engineering. Reading, MA: Addison-Wesley, 1995.

(8.) IBM, MVS, VTAM, DB2, S/390, AS/400, RS6000, and zSeries are registered trademarks of IBM. Solaris and Java are trademarks of Sun Microsystems.

(9.) Leonard, D. The Wellsprings of Knowledge: Building and Sustaining Sources of Innovation. Boston: Harvard Business School Press, 1998. This book describes how core competencies can become organizational rigidities, reflecting the earlier work of Abernathy and Wayne, "Limits of the Learning Curve," Harvard Business Review, Sept.-Oct. 1974, pp. 109-118.

(10.) CMOS integrated circuits use metal-oxide-semiconductor transistors that consume energy only during switching transitions and virtually none in the wait state. In contrast, the first microelectronic circuits used bipolar junction transistors (invented in 1947), which consume energy continuously, both while switching and while waiting to switch. First applied during the 1960s, CMOS is now, thanks to its energy efficiency, the preferred type of circuit for the vast majority of integrated circuit applications.

(11.) Utterback, J. Mastering the Dynamics of Innovation. Boston: Harvard Business School Press, 1994. This book contains many examples of firms that struggled to incorporate technological discontinuities into their core product lines.

(12.) Rao, G. S., Gregg, T. A., Price, C., Rao, C. L., and Repka, S. "IBM S/390 Parallel Enterprise Servers G3 and G4." IBM Journal of Research and Development, Volume 41, Numbers 4/5, 1997.

(13.) The authors thank Prof. John Friar of Northeastern University for helping to develop this framework.

Marc Meyer is director of the High Technology MBA Programs and Sarmanian Professor of Entrepreneurial Studies at Northeastern University in Boston, Massachusetts. He was a 2002 recipient of the Maurice Holland Award from the Industrial Research Institute for the best article published in RTM in 2001 ("Make Platform Innovation Drive Enterprise Growth," co-authored with Paul C. Mugge). Meyer is writing a new book on strategies and methods for leveraging platforms to new market applications (forthcoming from Oxford University Press). He holds an A.B. from Harvard, and an M.Sc. and Ph.D. from MIT in the fields of business and technology management. mhm@neu.edu

Mark Anzani is the vice president of zSeries Hardware Products at IBM in Poughkeepsie, New York. He is responsible for the development and launch of the hardware products within the zSeries portfolio, comprising IBM's large-scale servers. He holds a degree in electrical engineering from the University of Bath (United Kingdom).

George Walsh is the vice president of On Demand Systems Environment at IBM in Poughkeepsie, New York. He is responsible for delivering IT solutions that assist customers in creating efficient, flexible and resilient IT infrastructures. He is also responsible for managing the Advanced eBusiness Council, which IBM uses to understand future customer requirements and initiate technology and product plans.
