The meaning of monopoly: antitrust analysis in high-technology industries.

I. Introduction

The centennial of the Sherman Act passed with considerable commentary coupled with self-congratulation by antitrust lawyers and economists. However, not everyone expressed comfort that the analytic approaches of mainstream competition policy were really up to the task of properly understanding and analyzing competition and monopoly in high-technology industries. Jorde and Teece predicted that "antitrust policy in the 1990's--will be shaped more by concerns about innovation and competitiveness than in any other period in recent history"(1) and worried aloud that "the analytic lenses still commonly employed today in antitrust analysis were more suitable in a world where competition was less global, where innovation was less of a multinational phenomenon, and where time to market was less critical."(2) They recommended a retooling of the analytics to recognize the new competitive landscape, given the high risk of policy error associated with an antitrust regime that proceeds with a "highly stylized, static, and inaccurate view of the nature of competition."(3) But the call for increased scholarship(4) has gone largely unanswered, and the antitrust agencies appear to be moving forward in uncharted territory assisted only by a meager amount of scholarly research on innovation and competition.(5) Other than recent work on increasing returns and network externalities, antitrust economics and jurisprudence display at best limited recognition of the nature of competition in high-technology industries, how high-technology industries evolve, the nature and sources of economic rents, and the implications for public policy.

This is not to suggest that this article will come up with all the answers, although we will present some new conceptual approaches and analytical methods that we believe are helpful. Rather, we flag large zones of ignorance. Our focus will be primarily on monopoly and monopolization issues. While our framework is relevant to the analysis of mergers and acquisitions, we do not explicitly consider policy toward mergers and acquisitions in this article.(6)

Based on our assessment of the state of the art in antitrust economics, we respectfully suggest that the lawyers and economists in both the private and public sectors recognize the severe limitations of existing analytical tools. Unless they do so, we are extremely concerned that the antitrust agencies in particular will end up taking actions that could harm competition in the computer software industry as well as in other high-technology industries. The opportunities for the agencies to harm competition are far greater than their opportunities to improve competition in sectors where there is rapid innovation,(7) given the poor (though nevertheless improved) state of society's understanding of the economics of innovation.

II. Characteristics of industries experiencing rapid technological change

A. The episodic nature of competitive disruption

1. THE PHENOMENA Competition in high-technology industries is fierce, frequently characterized by incremental innovation punctuated by major paradigm shifts. These shifts frequently cause incumbents' positions to be completely overturned. Andy Grove, CEO of Intel, has referred to these as major inflection points. Their frequency means that business risk is high. As James Utterback has observed,(8) competition is not unlike the game of Chutes and Ladders. A player may arrive at the bottom of a ladder, and then rapidly ascend to a higher level and obtain higher stakes. The converse is also true: bad luck and special circumstances can cause one to fall suddenly. In high-technology industries, a firm not fully alert to changing circumstances can find itself in this predicament.

Sensing and then seizing the opportunities and threats afforded by these major shifts is critical to a firm's survival and subsequent prosperity. Such action requires that the firm possess considerable entrepreneurial capacity in its top management. Established firms often find it difficult to change their trajectory, as they are apt to approach discontinuities and conflicting corporate interests with compromises. "Bridging a technological discontinuity by having one foot in the past and the other in the future may be a viable solution in the short run, but the potential success of hybrid strategies is diluted from the outset compared to rivals with a single focus."(9)

The opening up of a new technological regime--what Utterback calls a "technological ensemble"--affords opportunities for new entrants. Success for the incumbent is difficult, but not impossible. It requires that the incumbent be able to monitor and respond to new customer demands and technological opportunities, forming new alliances and relationships as appropriate. Elsewhere we refer to this as the firm's dynamic capabilities; such capabilities are critical to success in rapidly changing high-technology contexts.(10)

However, even if a firm is entrepreneurial, change can be either competence-enhancing (the new regime increases the demand for the competence) or competence-destroying (the new regime diminishes the demand for the competence). Dynamic capabilities thus will not necessarily suffice to help a firm transition from the old to the new, but they can certainly help. Incumbents have to be willing and able to abandon the old and embrace the new. It is frequently a property of organizations that constituents inside the firm protect the old far too long, thereby compromising their commitment to the new. Only the best managed firms make these transitions with alacrity; the frequent failure of incumbents thus opens up considerable opportunities to newcomers. Sometimes the inability of firms in an industry to bridge a discontinuity is extreme. When Henderson and Clark studied the semiconductor photolithographic alignment equipment industry over five product generations, they found that no firm that led in one generation of product figured prominently in the next.(11) The reasons were as much organizational as technological, as innovation tends to destroy the usefulness of the information processing procedures inside the firm. Examples of discontinuities identified by Utterback (not all of them high tech) are presented in table 1. In Utterback's empirical work, in only one-fourth of the cases studied (forty-six in total) did existing competitors either introduce radical innovations or initiate them quickly and survive as major players in the market.
Table 1
Product and Process Discontinuities in Four Industries

Industry                Discontinuities

Typewriters             Manual to electric; to dedicated word
                        processors; to personal computers
Lighting                Oil lamps to gas; to incandescent lamps;
                        to fluorescent lamps
Plate glassmaking       Crown glass to cast glass through many
                        changes in process architecture; to float
                        process glass
Ice and refrigeration   Harvested natural ice to mechanically
                        made ice; to refrigeration; to aseptic
                        packaging
Imaging                 Daguerreotype to tintype; to wet plate
                        photography to dry plate; to roll film;
                        to electronic imaging; to digital
                        electronic imaging


SOURCE: JAMES UTTERBACK, MASTERING THE DYNAMICS OF INNOVATION (1996), at 201.

Discontinuities are relatively frequent in the computer industry. Indeed, as table 2 indicates, major paradigm shifts have occurred in each of the last four decades. The paradigms are more disruptive the more they require organizational linkages (internal and external) to be reconfigured. In computing, for instance, each new computing paradigm has introduced a significant new distribution channel.(12)
Table 2
Technological Discontinuities in Electronic Computing

Time Frame             1960s            1970s

Computing paradigm     Mainframe        Minicomputer

Computer platform      Mainframe        Minicomputer

Technology advance     Batch            Interactive
                       processing       processing
Computing              Single vendor    Single vendor
environment
Application source     In-house and     In-house and
                       third party(*)   third party(*)
Primary applications   Enterprise       Enterprise

Computer buyers        Corporate MIS    Corporate MIS

New distribution       Direct sales     OEMs/VARs
channel

Time Frame             1980s                1990s

Computing paradigm     Personal             Enterprise network
                       computing
Computer platform      Personal computer/   Mainframe, minicomputer
                       workstation          PC/workstation, LANs
Technology advance     Desktop              Desktop access to all
                       processing           computer resources on
                                            the network
Computing              Single vendor        Multiple vendors
environment
Application source     Shrink-wrapped       In-house and
                                            third party(*)
Primary applications   Personal             Enterprise and
                       productivity         information, E-commerce
Computer buyers        Departments/         Corporate MIS and
                       end users            departments/end users
New distribution       Retail               Systems integrators,
channel                                     internets and intranets


(*) Third Party = Independent software vendor or system integrator.

SOURCE: ROBERTSON, STEPHENS & COMPANY, ENTERPRISE SOFTWARE APPLICATIONS IN THE '90's (June 2, 1992).

We do not mean to imply that each paradigm is hermetically sealed from the others. Indeed, there may be underlying trends that accompany the transition from one paradigm to another. For instance, computers, originally designed for number crunching and applied to "computing" tasks for nearly 50 years, are increasingly being used for communicating.(13) Moreover, the Internet is transforming computing because it is, by design, a force of decentralization. As Bill Joy, a legendary computer technologist and cofounder of Sun, pointed out, "the Internet defies being controlled by any one entity, it doesn't discriminate. There are no wrong types of computers or software for the Net, as long as they follow some very basic communications rules."(14)

With the paradigm shifts come significant risks, not only for incumbents who don't recognize the significance of the shift, but also for newcomers that may have a good technological foundation in the new paradigm but lack the relevant complementary assets needed to compete. Commenting on recent paradigm shifts in computing, one analyst wrote:
   We expect to see the same type of carnage on the current software industry
   "playing field" as we did over the past few years among companies who woke
   up too late (or not at all) to the realities of the shift from legacy
   systems to enterprise network computing. We also believe this shift will
   have an even greater impact on existing software vendors than did the shift
   to enterprise network computing from legacy systems.... getting there this
   time is only part of the challenge--global network computing entails both
   new business models for software pricing and new distribution.(15)


A paradigm shift of this nature devalues the assets of incumbents, particularly distribution assets, triggering new competition. The race will be won by the intelligent and the swift.

2. IMPLICATION FOR ANTITRUST The paradigmatic nature of industrial and technological evolution, with waves of creative destruction occurring episodically, suggests an antitrust enforcement regime that is not hair-trigger in its operation. While each wave of creative destruction is by no means predictable as to timing and strength, antitrust authorities need to be cognizant of the self-corrective nature of dominance engendered by regime shifts. This is true even when there are significant network externalities and installed base effects. Except for the intelligent and the swift, market dominance is likely to be transitory, as regime shifts dramatically lower entry costs. When incumbents survive such shifts, it's usually a good indicator of their competitive fettle.

The utter destruction that can be wrought by firms embracing the new paradigm means that competition in high-technology industries is often orders of magnitude stronger than in mature industries, and the risk associated with operating in high-technology environments is correspondingly high.(16) Accordingly, one should expect to see far higher margins in high-technology businesses to compensate for this risk and the up-front investment in R&D.(17)

Moreover, antitrust intervention is likely to be rendered both unnecessary and undesirable, except in the most unusual of circumstances. One reason is that paradigm shifts periodically enable new entrants to upset the existing order--something rather rare in mature industries. A second reason is that efforts to hobble the winner in one round of innovations will be seen as diminishing the returns available from competing in such high-risk environments, thereby diverting resources to other sectors of the economy displaying less risk and affording less innovation. A third reason is that even highly expert antitrust agencies are unlikely to understand industry dynamics very well at all.

There are also significant implications with respect to the feasibility and the profitability of anticompetitive conduct. It is not unusual for economists, particularly those with well-honed theoretical capabilities, to posit complex predation strategies for which financial viability requires recoupment in some future period. These strategies, often Byzantine in their complexity, require a high degree of predictability with respect to the future structure of the industry to make them profitable in an expectational sense. The greater the likelihood of a paradigm shift, the less the expected profitability of predatory strategies relying on recoupment in a future period. While this need not establish the impossibility of predation and other forms of anticompetitive behavior, in many instances it will suggest that other factors will be better candidates to explain the conduct at hand. Furthermore, litigation lags are likely to be long compared to the speed at which competitive forces reshape the industry, possibly rendering antitrust action at best superfluous, and at worst damaging to competition.

B. Increasing returns and network externalities

1. THE PHENOMENON OF INCREASING RETURNS Contemporary textbook understandings of how markets operate and how firms compete have been derived from the work of economists such as Chamberlin, Mason, and Bain. These frameworks frequently assume diminishing returns, and assign industry participants identical production functions (implying the use of identical technologies by all competitors) where marginal costs increase. Industry equilibrium with numerous participants arises because marginal cost curves slope upward, thereby exhausting scale advantages at the level of the firm, and making room for multiple industry participants. This theory was useful for understanding 18th-century English farms and 19th-century Scottish factories, and even some 20th-century American manufacturers. However, major deficiencies in this view of the world have been apparent for some time--it is a caricature of the firm. Moreover, knowledge is certainly not shared ubiquitously and transferred at zero cost.(18)

In this century, developed economies have undergone a transformation from largely raw material processing and manufacturing activities to the processing of information and the development, application, and transfer of new knowledge. As a consequence, diminishing returns activities have increasingly been replaced by activities characterized by increasing returns. The phenomenon of increasing returns is usually paramount in knowledge-based industries. With increasing returns, that which is ahead tends to stay ahead, until interception by a major paradigm shift. Mechanisms of positive feedback reinforce the winners and challenge the losers. Whatever the reason one gets ahead--acumen, chance, clever strategy--increasing returns amplify the advantage. With increasing returns, the market, at least for a while, tilts in favor of the provider that gets out in front. Such a firm need not be the pioneering one, and need not have the best product.
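The positive-feedback dynamic described above can be illustrated with a toy adoption simulation in the spirit of Brian Arthur's path-dependence models. The feedback exponent, adopter counts, and seed below are illustrative assumptions, not empirical estimates:

```python
import random

def simulate_adoption(n_adopters=10_000, feedback=2.0, seed=7):
    """Toy model of increasing returns: each new adopter picks technology
    A or B with probability proportional to its current installed base
    raised to a 'feedback' exponent. feedback > 1 means positive feedback;
    feedback = 0 means adopters flip a fair coin. Returns final shares."""
    rng = random.Random(seed)
    base = {"A": 1, "B": 1}  # both technologies start with one adopter
    for _ in range(n_adopters):
        wa, wb = base["A"] ** feedback, base["B"] ** feedback
        choice = "A" if rng.random() < wa / (wa + wb) else "B"
        base[choice] += 1
    total = base["A"] + base["B"]
    return base["A"] / total, base["B"] / total

share_a, share_b = simulate_adoption()
print(f"A: {share_a:.1%}  B: {share_b:.1%}")
```

With a feedback exponent above one, nearly every run locks in on one technology early, regardless of which product is "better"; with the exponent set to zero (no feedback), the shares hover near fifty-fifty. Which technology wins depends on the accidents of early adoption, mirroring the text's point that the winner need not be the pioneer or the best product.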

The economics of increasing returns suggest different corporate strategies. In winner-take-all or winner-take-the-lion's-share contexts, there is a heightened payoff associated with getting the timing right (one can be too early or too late), and with organizing sufficient resources once the opportunity opens up. Very often, competition is like a high-stakes game of musical chairs. Being well positioned when standards gel is essential. The associated styles of competition are, as Brian Arthur(19) points out, more like casino gambling. Strategy involves choosing what games to play, as well as playing with skill. Multimedia, Web services, voice recognition, mobile (software) agents, and electronic commerce are all technological/market plays where the rules are not set, the identity of the players is poorly appreciated, and the payoff matrix is murky at best. Rewards go to those good at sensing and seizing opportunities.

Seizing opportunities frequently involves identifying and combining the relevant complementary assets needed to support the business. Superior technology alone is rarely enough upon which to build competitive advantage. The winners are the entrepreneurs with the cognitive and managerial skills to discern the shape of the play, and act upon it. Recognizing strategic errors and adjusting accordingly is a critical part of becoming and remaining successful.

In this environment, there is little payoff to penny pinching, and high payoff to rapidly sensing and then seizing opportunities. This is what we refer to here and elsewhere(20) as dynamic capabilities. Dynamic capabilities are most likely to be resident in firms that are highly entrepreneurial, with flat hierarchies, a clear vision, high-powered incentives, and high autonomy (to ensure responsiveness). The firm must be able to navigate quick turns effectively, as Microsoft did once Gates recognized the importance of the Internet. Cost minimization and static optimization provide only minor advantages. Plans must be made and junked with alacrity. Companies must constantly transform and retransform. A "mission critical" orientation is essential.

2. IMPLICATIONS FOR ANTITRUST If one believes in the inexorable march of increasing returns, one would predict monopoly as the eventual industry structure.(21) However, if monopolization is inevitable, then the main basis for criticizing an outcome is that the market anointed the wrong monopolist. Accordingly, even total faith in increasing returns as a governing economic principle does not necessarily lead to any clear path of antitrust intervention. It is only if one could in an ex ante sense pick the "good" potential monopolists from the "bad" ones that antitrust intervention would appear to have obvious benefit. But there is no analytical apparatus to guide the government in anointing the more benign.

There are other factors that soften the concern anyway. First, even if increasing returns do lead to the elimination of competitors who use a particular supply technology, it need not establish a monopoly if there are competing products available from suppliers who use alternative technologies. Even if the competing technologies themselves display increasing returns, the outcome is duopoly or oligopoly, not monopoly.

Furthermore, as discussed earlier, technological paradigms are eventually overturned. In some cases, it may be relevant to ask whether an incumbent's actions are designed to delay or prevent a paradigm shift that would be competence-destroying. In our view, such conduct is unlikely to be effective and could not prevent a paradigm shift that offers significant improvement. The vacuum tube manufacturers could not have stemmed the tide of the transistor, no matter how hard they might have tried. In software, access to complementary assets and an installed base could not block the newcomer if the newcomer has a truly revolutionary product. Intermediate cases are of course more difficult. In general, however, efforts by incumbent firms to block rather than embrace the new paradigm are extremely risky: failure results in annihilation, because blocking is likely to prevent the incumbent from transitioning from the old to the new. Technological change external to the industry or business at hand can thus more often than not undo dominance should it arise. Accordingly, any dominance is likely to be temporary--certainly more so than in a less technologically dynamic context. The arrival of a new technological ensemble/epoch/paradigm is of course unpredictable; but we do note that in the computer industry, new competing paradigms seem to have emerged each decade, certainly faster than the legal system can typically identify, analyze, and then litigate major antitrust issues.

Finally, if there are increasing returns because of large up-front development costs for new "add on" features, it may well be efficient to provide these to all customers, because the marginal cost of doing so will be low. This means that the "tying" and "bundling" of certain features is likely to be highly efficient, thereby upsetting specialist suppliers of such items.

C. Network effects

1. THE PHENOMENON Another characteristic often found in conjunction with increasing returns, but analytically distinct from it, is network effects.(22) Increasing returns is a production-side characteristic describing the increase in output as all inputs are scaled up. Network effects are a demand-side phenomenon associated with value to the customer. Network effects arise in markets where a customer's valuation of a particular product is enhanced when it is employed in a system. For instance, the value of a telephone network to an individual user increases with the number of other individuals who are connected to that network. Similarly, the value of a computer platform may increase with the number of users because users share files and different files may not be compatible with different platforms. Again, with more users on a given platform, any individual user is more likely to be able to share files with another user. Network effects lead to positive feedback: the more users on a system, the more valuable it is. Network effects are a form of demand-side economies of scale. With feedback, the strong get stronger and the weak weaker.

Network effects can also be indirect. Users frequently invest in complementary goods. Given economies of scale in the manufacture of complementary goods, the more users of a given platform, the more complementary goods that will likely be supplied to that platform. This will lower the cost or increase the value of the platform.

Network effects can thus lead the market to select one platform (or standard) because the benefits of having one platform are greater than the costs from less diversity in platform offerings. A major emphasis of the literature regarding network effects has been whether markets will choose the right platform (or standard) and whether new "improved" platforms will be able to supplant the existing platform.
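A minimal sketch of the direct network effect discussed above. The quality figures, installed bases, and the linear value function are all hypothetical assumptions, chosen only to show how a large installed base can outweigh a superior stand-alone product in a new customer's choice:

```python
def user_value(intrinsic_quality, installed_base, network_coeff=0.01):
    """Value a prospective user places on a platform: stand-alone product
    quality plus a term that grows with the installed base (the direct
    network effect). The linear form is an illustrative simplification."""
    return intrinsic_quality + network_coeff * installed_base

# Hypothetical numbers: the incumbent has a weaker stand-alone product but
# a large installed base; the entrant has a better product but few users.
incumbent = user_value(intrinsic_quality=10, installed_base=5_000)
entrant = user_value(intrinsic_quality=15, installed_base=100)
print(incumbent, entrant)  # 60.0 vs 16.0: a new user still picks the incumbent
```

On these assumed numbers, the entrant's superior product attracts no new users until its installed base approaches 4,500, the tipping point implied by the linear value function. This is the sense in which the market can "choose" a platform that is not the best product, and why the literature asks whether improved platforms can supplant incumbents.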

2. IMPLICATIONS FOR ANTITRUST When network effects are strong, they constitute an important dimension of industry structure, or at least the structure of demand. The degree of compatibility and the strength of network effects shape the nature of competition, industry evolution, and paths of innovation. In the presence of strong network effects, there are good theoretical reasons to believe that "optimal" platforms and "optimal" transitioning may not be achieved by the market (although it is not clear what can be done to change this result).

It is not at all obvious that antitrust intervention can improve matters. This is because antitrust policy cannot realistically aspire to produce "optimal" outcomes, where "optimality" is measured against some theoretically defined efficiency or consumer welfare criteria. Rather, antitrust can only aspire toward helping guarantee outcomes from a competitive process, even if the outcome is not the theoretically most appealing.(23) The presence of network effects may result in the incumbent firm being favored by new customers. It could eventually become dominant through positive feedback. This could last for several generations of products, although it is unlikely to survive a major paradigm shift unless the gap between the "good enough" and the "best" is quite small.

There are several reasons why the incumbent's dominance might persist. For new customers, the incumbent supplier's existing platform may provide a better price/performance ratio even if the "platform" is inferior or higher cost because the complementary products are more numerous or lower cost. For upgrades to existing users, the incumbent supplier might have an advantage (apart from switching costs) if it is easier for producers of complementary products to upgrade existing products to the incumbent supplier's new platform (rather than an alternative new platform) or if vendors believed more users would upgrade to the incumbent supplier because of switching costs. This would make the vendors more likely to invest in the existing platform or offer lower priced products. These advantages, however, simply reflect the characteristics of the market and the fact that the incumbent supplier succeeded at winning in the previous round of competition for platforms. The fact that an incumbent supplier wins the next generation even with a somewhat inferior product is not necessarily an anticompetitive outcome. Denying network externalities to produce a fragmented market is certainly no solution.

Moreover, the advantage conferred by network effects is not absolute. New platforms can and do supplant existing platforms even when network effects seem significant. If another platform is truly better and the only concern is lack of complementary goods, there is a gain from everyone switching. The platform owner can subsidize complementary goods manufacturers. A few suppliers of complementary goods are likely to see the market potential if the platform is truly superior.

D. Decoupling of information flows from the flow of goods and services

1. THE PHENOMENA New information technology and the adoption of standards are greatly assisting connectivity. Once every person and every business is connected electronically through networks, information can flow more readily. Historically, the transfer/communication of rich information has required geographic proximity and specialized channels to customers, suppliers, and distributors.

New developments are undermining traditional value chains and business models. In some cases, more "virtual" structures are viable, or shortly will be, especially in certain sectors like financial services. New information technology is facilitating specialization. Bargaining power is being reduced by an erosion in the ability to control information. Search costs and switching costs are also declining, changing industry economics. Quick, low-cost access to new mass markets is now often possible.

The concomitant expansion of markets for intermediate products (and associated vertical disintegration) is lowering entry costs into many businesses. The rapid growth of virtual corporations (which perform integration roles using markets rather than administrative process inside corporations) reflects the growth in the number of intermediate product markets for all kinds of components and inputs.

The new information technology is also dramatically assisting in the sharing of information. Learning and experience can be much more readily captured and shared. Knowledge learned in the organization can be catalogued and transferred to other applications within and across organizations and geographies. Rich exchange can take place inside the organization, obviating some of the need for formal structures.

2. IMPLICATIONS FOR ANTITRUST The decoupling of the flow of information from the flow of goods and the expansion and liberalization of markets is sharpening competition, and lowering search costs and switching costs. New entrants benefit. The premium on quick response time is increasing, favoring smaller business units. Trends such as this are sharpening competition almost everywhere. While in and of itself this does not imply that antitrust agencies can be less vigilant, the phenomenon is easing the burden on antitrust policy, as competitive response time is being shortened.

III. Scarcity rents, Schumpeterian rents, and monopoly rents

A. General

There has long been a recognition in economics that high profits may not reflect market power. There are not only serious measurement problems associated with using accounting profits, but as Demsetz(24) and Peltzman(25) pointed out decades ago, superior profitability may reflect superior efficiency.

However, it is of some interest to break the sources of rents down more finely. In the context of innovation, Ricardian (scarcity) rents reflect difficult to expand competences; Schumpeterian (entrepreneurial) rents occur because imitation does not occur instantaneously, even though imitators might well "swarm" around the innovators' key technologies and products. Both are benign sources of rent from an antitrust perspective; from a public policy perspective they are beneficial as they encourage investment in valuable knowledge assets and in innovation.

We believe these distinctions are quite fundamental; yet to our knowledge there is no literature in antitrust economics that recognizes them. This is because of the static nature of mainstream antitrust analysis, and its gross neglect of innovation. The distinctions we draw do exist in the economics literature, and they are of quite some importance in the field of strategic management.(26) The welfare implications of each are markedly different, as we will discuss.

B. Ricardian (scarcity) rents

In many contexts where knowledge and other specialized assets underpin a firm's competitive advantage, additional inputs cannot simply be purchased on the market to expand output. Hence, at least in the short run, a firm's output is limited by the available stocks of the scarce inputs. Over time, the firm can typically augment its stocks of scarce inputs. However, such replication typically involves the use of the firm's existing stock of idiosyncratic resources, because productive knowledge is not fully codified and labor inputs available on the market do not have the requisite firm-specific skills. This can be a major restraint on the firm's growth.

If the firm in question owns 100% of the world's supply of the unique input (e.g., a unique engineering skill) and if the input is necessary to produce the output, the firm could be a (transitory) "monopolist" in the output market until it is able to expand the availability of such skills. It could fully utilize the constrained input, and yet still end up with price in the final product market above cost. While the firm might be thought of as a monopolist, its profits are scarcity rents properly attributed to the scarce input. The firm has no incentive to restrict output; but output is nevertheless below the "competitive" level--a hypothetical condition in which more of the scarce input were available. However, if the scarce input (here, an engineering competence) were somehow to be broken up and distributed amongst a group of competitors, the price in the final product market would not decrease, and might well increase.(27) In this case, the scarcity rent is simply the normal return to the scarce asset, and there is no efficiency loss to monopoly. Moreover, it is of course the existence of scarcity rents that engenders expansion of output through replication of the underlying skills.
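The argument can be made concrete with a small numerical sketch. The linear demand curve, marginal cost, and capacity figures below are hypothetical; the point is only that splitting the scarce input among competitors leaves total output, and hence price, unchanged, so the margin is a return to the scarce input rather than a monopoly markup:

```python
def market_price(total_output, a=100.0, b=1.0):
    """Linear inverse demand: P = a - b * Q (illustrative parameters)."""
    return a - b * total_output

MC = 10.0        # marginal cost of production once the scarce input is in hand
CAPACITY = 30.0  # total output the scarce input (e.g., a rare skill) can support

# A single firm owning all of the scarce input: price exceeds marginal cost
# at full capacity, so the firm fully utilizes the input rather than
# restricting output, yet price still ends up above cost.
p_single = market_price(CAPACITY)
assert p_single > MC

# The same scarce input split among three competitors: each produces flat
# out, total output is unchanged, and so is the price.
p_split = market_price(3 * (CAPACITY / 3))

scarcity_rent = (p_single - MC) * CAPACITY
print(p_single, p_split, scarcity_rent)  # 70.0 70.0 1800.0
```

Under these assumptions the margin of 60 per unit is a Ricardian rent on the capacity-limited input: redistributing the input does not lower price, which is why breaking up such a "monopolist" yields no efficiency gain.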

It is somewhat puzzling that the impact of Ricardian rents has not been analyzed in the antitrust literature. Perhaps the answer lies in the fact that, historically at least, economists have associated Ricardian rents with scarce natural resources, like land or iron ore. Scarcity rents then tend to flow upstream to the owners of the scarce inputs, and profits to downstream firms get competed away. However, when the scarce input is knowledge--embedded in a team or in a small group--the rents do not all get bid away to the owners of the scarce inputs, for several rather subtle reasons. One is that the market for know-how/knowledge is rather imperfect,(28) so the rents need not accrue entirely to the owners of the scarce inputs. Secondly, the productivity of knowledge assets may depend in part on the presence of certain cospecialized assets, the services of which must be employed for the knowledge assets in question to have value. This can prevent all of the rents from accruing to the scarce inputs themselves.(29)

To summarize, an innovating firm seeking to operate on a larger scale, but temporarily constrained by its stock of idiosyncratic resources, may be highly profitable, but this in no way implies that it is exercising socially undesirable restraint over its output. It is likely that the innovator is simply collecting sufficient Ricardian rents to cover its initial investment and offer encouragement to other innovators and entrepreneurs.

C. Schumpeterian (entrepreneurial) rents

Other situations may generate supranormal returns that are also not properly thought of as monopoly rents. A firm may develop product and process innovations and/or unique business routines (knowledge assets), but these eventually are imitated by competitors. However, there may be a period of temporary excess returns enjoyed by the developer/owner of the knowledge assets in question. These returns are once again not monopoly rents, but Schumpeterian rents. "Low investment and slow imitation spell greater financial success for the innovator."(30) In the absence of imitation, or of the fear of imitation, the innovating firm has significant control over the scale at which the innovation is implemented in the long run.

There are a number of factors that prevent competitors from appropriating the rents from innovation instantaneously (see figure). An obvious one is that much of the knowledge at issue may be highly tacit, rendering the product/process difficult to imitate. Secondly, the knowledge at issue may not be observable in use, and so reverse engineering is not feasible as an imitation pathway. Furthermore, the process/product in question may enjoy a certain amount of intellectual property protection, rendering imitation more costly, and possibly impossible (in the case of a broad-scope patent), at least for a period of time.

[Figure ILLUSTRATION OMITTED]

Nevertheless, barriers to imitation such as these are almost always temporary, and afford the owner of knowledge assets a certain period of time within which to earn supernormal profits. However, these profits are the return to innovation (more specifically, a return to difficult-to-imitate knowledge assets) and are generally necessary to induce investment in the creation of such knowledge assets. Such rents are accordingly necessary and desirable, and should not be the target of antitrust action.

D. Monopoly (Porterian(31)) rents

The type of rent that ought to be the target of antitrust concern stems from the naked exercise of market power by a firm (or firms). These circumstances might arise because of exclusionary conduct lacking efficiency justifications, from predatory conduct, or from governmentally conferred privileges (e.g., licenses).

In the context of innovation, anticompetitive conduct is extremely chancy as an efficacious strategy. The payoff from such conduct is likely to be minuscule in the total scheme of things, because of the power of new technology to shape competitive outcomes. New technology can change the price/performance profile of a product by several orders of magnitude, whereas anticompetitive conduct is likely to have at most a minor impact. History shows that commercially relevant and value-enhancing technologies triumph, even in the face of considerable adversity.

IV. The hallmarks of monopoly power in high technology

A. Introduction

The competitive landscape is so different in high-technology industries that the traditional hallmarks of monopoly (reduction in output or increases in prices) are rarely seen. This is either because (1) monopoly power is so difficult to acquire in high-technology industries or (2) the traditional hallmarks of monopoly are no longer operational because the benchmarks (the competitive levels of price and output) are unobservable and very difficult to estimate, raising anew the question of how to identify monopoly, and how to measure market power. This is obviously one of the most basic questions in antitrust; but our answers to it leave much to be desired in the context of high-technology industries.

Irving Fisher defined monopoly as an "absence of competition."(32) Subsequent treatments have done little to improve the definitions. Consider modern textbooks. Pindyck and Rubinfeld define it as follows: "a monopoly is a market that has only one seller...."(33) "Firms may be able to affect price and may find it profitable to charge a price higher than marginal costs."(34) Carlton and Perloff point out that a "monopolist recognizes that the quantity it sells is affected by the price it sets."(35) The emphasis should of course be on whether the monopolist can profitably raise price.

Economists must admit that their criteria for defining monopoly are far from perfect. The issue is further complicated because lawyers and the laity all think they know the meaning of monopoly. The difficulty is amplified because economists often use words that are in everyday use for which the everyday meaning is quite different from the technical definition. Even some economists have erred in labeling as a monopoly a situation where a firm controlled 100% of its own output!(36)

In the context of innovation, the task is especially complicated. Competition is a dynamic process and takes place over time, often following the punctuated processes described earlier. The commercialization of innovation will at first generate high profits, which may be either of the Ricardian or the Schumpeterian kind described above. The presence of such profits will (in the case of Ricardian rents) cause the innovator to endeavor to expand output, or (in the case of Schumpeterian rents) lure imitators into the business. This will cause the price to come down, or the performance to improve, since the imitator/emulator will need to provide something to lure customers away from the innovator to the imitator. The innovator must respond by lowering price or improving performance to hold onto its customers.

What is clear is that profits are necessary to grease the competitive process. It is the quest for profits that encourages innovation in the first place; it is the quest for profits by imitators that spurs competition. And if the innovator responds to the imitator with lower prices it need not be predation but merely the process of competition at work.

In the high-technology context, a monopolist cannot therefore be identified by traditional (textbook) marginal cost pricing tests, such as the Lerner index. Perhaps a more meaningful approach to monopoly pricing is to ask whether consumers are paying a price higher than is needed to draw forth the products and services they desire over time. The price cannot therefore be analyzed statically; it must be viewed dynamically, and across products. If, for instance, prices are not high enough to cover R&D costs for both the successful and the unsuccessful products, innovation will wither.
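The contrast between the static and dynamic views of price can be illustrated with a small sketch. All figures are hypothetical: a static Lerner index flags near-monopoly pricing even when, viewed dynamically, the margin barely recovers R&D spent on successful and unsuccessful products alike:

```python
def lerner_index(price, marginal_cost):
    """Textbook Lerner index: L = (P - MC) / P."""
    return (price - marginal_cost) / price

# Hypothetical high-technology product: near-zero marginal cost, heavy R&D.
price, mc = 100.0, 5.0
static_markup = lerner_index(price, mc)   # 0.95: "monopoly" on the static test

# A dynamic view asks instead whether the margin covers R&D spent on
# successful *and* unsuccessful projects (illustrative figures).
units_sold = 1_000_000
rd_successful, rd_failed = 40e6, 50e6
margin = (price - mc) * units_sold        # 95e6
covers_rd = margin >= rd_successful + rd_failed
```

Here the "monopoly-level" markup of 0.95 yields a margin only slightly above the firm's total R&D outlay; pricing at marginal cost would make innovation wither.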

What then is monopoly in such an environment? It is not a situation of high market share; nor is it a situation where profits are high, or where prices are above marginal cost. Rather, a monopolist would be a firm shielded from entry, i.e., insulated from competition from other innovators and imitators. The monopolist could stay ahead without innovating or lowering prices. The crucial difference between monopoly and competition is that with competition market forces compel improvement in the product offerings available to the customer. With monopoly, there is no such compulsion from the marketplace.

B. Market power

Since a pure monopoly circumstance (100% of an economically relevant market) is rare in the absence of governmental control, as a practical matter antitrust economists tend to analyze monopoly in terms of market power. If a firm has high market power, it faces minimal compulsion from the marketplace.

The courts have defined monopoly as the "ability to price without regard to competition" or "the power to set prices and exclude competitors." Properly understood, these are good definitions, and from an operational perspective, perhaps better than what is contained in many economics textbooks.

In order to measure market power, the field of antitrust has developed a methodology (which the Supreme Court endorses) whereby one must first define a relevant antitrust market, and then assess competition within it. In a fundamental sense, of course, this is not required: one could in principle figure out whether a firm has the ability to act in an unconstrained way without defining a market. However, we agree that market definition, done correctly, is a useful aid to analysis. One must place in the relevant market those products and services, and their providers, whose presence and actions serve to check the behavior of the tentatively identified monopolist.

C. Market definition

1. GENERAL The primary question in defining a relevant market remains whether there are constraints on the tentatively designated monopolist. One must always be careful to ensure that the market definition exercise does not obscure the fundamental question of constraints on power. The principal constraints can be of two types: (1) those relating to demand and (2) those relating to supply. The concepts of demand and supply substitutability, along with some related concepts, are useful in assessing whether there are constraints on a tentatively identified monopolist.

But the analysis of supply and demand substitution possibilities and opportunities is quite complicated in regimes of rapid technological change. Simply analyzing the market from a static perspective will almost always lead to the identification of markets that are too narrow. Because market power is often quite transitory, standard entry barrier analysis--with its 1-to-2-year fuse for entry--will often find that an innovator has power over price when its position is in fact extremely fragile. Further, much of the data on which courts and antitrust regulators rely are necessarily backward-looking,(37) meaning that firms at the end of an innovation-based period of dominance are actually more likely to be the subject of antitrust scrutiny than be in a position to exercise market power.(38) The evidence presented earlier suggests that not all firms in existing product markets are well-positioned to compete in next-generation product markets. If a firm is unable to keep up with a shift in the technological basis of the market, whether because of path dependencies or problems in replicating technical success, market analysis should dramatically discount that firm's participation in the market in evaluating market power. Unfortunately, for the most part courts are content to use past success as a proxy for future viability.(39) In many cases, doing so will overstate or understate the competitive forces at work in a market.

These problems with the assessment of power in a dynamic market are compounded by difficulties in even defining what the relevant market is in high-technology industries. The rather monolithic approach that the FTC and Antitrust Division's 1992 Horizontal Merger Guidelines take to market definition and the assessment of market power risks defining high-technology markets too narrowly, especially if applied too mechanically. As explained in the Merger Guidelines, the agencies will include in the product market a group of products such that a hypothetical firm that was the only present and future seller of these products ("monopolist") could profitably impose a "small but significant and nontransitory increase in price" (SSNIP)--generally five percent lasting 1 year.(40) The implicit assumption adopted by the Guidelines is that products in a market are homogeneous and competitors compete on price. Application of the SSNIP test in an industry where competition is performance-based (almost always true when product innovation is present) rather than purely price-related is likely to create a downward bias in the definition of the size of the relevant product market, and a corresponding upward bias in the assessment of market power. This can be illustrated by looking at a couple of businesses where competition is performance-based. (See appendix A.) However, deficiencies in the standard approach can possibly be remedied with a multi-attribute SSNIPP. (See appendix B.)
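A minimal sketch of the hypothetical-monopolist arithmetic (the function name and all figures are illustrative assumptions, not drawn from the Guidelines) shows how understating substitution away from the candidate market, as a purely price-based test can do when competition is performance-based, makes a too-narrow market look like a relevant market:

```python
def ssnip_profitable(price, marginal_cost, quantity, pct_increase, share_lost):
    """Would a small but significant nontransitory price increase be
    profitable for the hypothetical monopolist of a candidate market?
    share_lost is the fraction of unit sales lost to products *outside*
    the candidate market at the higher price (an illustrative input)."""
    new_price = price * (1 + pct_increase)
    new_quantity = quantity * (1 - share_lost)
    return (new_price - marginal_cost) * new_quantity > (price - marginal_cost) * quantity

# Counting performance-based substitutes: a 5% SSNIP loses (say) 20% of
# sales and is unprofitable, so the candidate market is too narrow.
narrow_market_ok = ssnip_profitable(100.0, 60.0, 1000, 0.05, 0.20)   # False
# Ignoring performance-based substitution understates share_lost (say 5%),
# and the same candidate market now "passes" the test.
biased_result = ssnip_profitable(100.0, 60.0, 1000, 0.05, 0.05)      # True
```

The same candidate market passes or fails depending entirely on how much performance-based substitution is credited, which is the downward bias in market size (and upward bias in market power) described above.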

2. NOTE ON SWITCHING COSTS Whether using a SSNIP or a SSNIPP test (appendix B), demand-side substitution is of some importance to market definition. In the context of high-technology industries, where technical compatibility and/or interoperability issues loom large, the issue of switching costs frequently comes to the fore.

In many high-technology industries, customers purchase systems of components including a platform and complementary products. The platform is only valuable (or is more valuable) if the customer acquires or creates complementary goods, services and know-how. For instance, consumers who purchase a computer (with a particular operating system) also acquire applications, and create files and knowledge of how to use that system. Other examples include CD players and CDs, VCRs and videotapes.

Suppliers of new platforms (or new versions of the platform), be they existing or new suppliers, will sell to two groups of potential customers. The first group includes new users who do not have existing systems and thus face no switching costs. The second group includes users with existing systems who are seeking to replace the platform component of their existing system because platform technology has changed or the product has worn out. A key consideration for customers when upgrading is whether their existing investments in complementary goods, services and know-how will be usable with the new platform, or whether they will need to invest in new complementary products in addition to the platform. If new investments in complementary goods must be made, the new platform will have to provide enough improvements over the existing one, given the cost of the new system, to justify those investments. Thus, to attract this user group, suppliers of new platforms must not only provide improvements over existing products but also assess how switching costs might affect the purchase decision. Platform suppliers may be able to influence the level of switching costs through the manner in which they develop their product or through the development of complementary products that ease migration.

The importance of switching costs to the upgrade decision (1) increases as the existing investment in complementary goods increases; (2) decreases the more likely the customer will purchase new complementary goods; (3) decreases the more complementary goods that can be used with the new platform; and (4) decreases the greater the difference in functionality between new platforms and the existing platform. Whether the incumbent platform supplier has an advantage over other suppliers for next-generation products depends on whether the existing supplier can produce more easily a next-generation product with lower switching costs than could other suppliers. If so, the incumbent could make fewer improvements than other suppliers (or charge higher prices) and still retain customers. In essence, the firm can earn a rent equal to its advantage in switching costs. The size of this rent is constrained by the size of switching costs and by the extent to which other suppliers can either provide products that ease the transition of complementary goods to the new platforms or offer an increase in performance sufficient to justify investment in new complementary goods.
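The upgrade calculus just described can be summarized in a hedged sketch; the function name and all figures below are hypothetical:

```python
def switches_to_rival(perf_gain_value, rival_price, incumbent_price, switching_cost):
    """A customer adopts the rival platform only if its performance
    advantage outweighs any price premium plus the cost of replacing
    complementary goods, files, and know-how (hypothetical model)."""
    return perf_gain_value > (rival_price - incumbent_price) + switching_cost

# The incumbent can lag in performance (or charge a premium) by up to the
# switching cost and still retain the customer: the rent described above.
moderate_improvement = switches_to_rival(300.0, 500.0, 500.0, 400.0)   # False
# A large enough leap in quality overcomes even substantial switching costs:
big_leap = switches_to_rival(1000.0, 500.0, 500.0, 400.0)              # True
```

On this sketch the incumbent's rent is bounded by the switching cost itself, and anything that lowers switching costs (migration tools, compatible complements) shrinks it.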

Consider the example of CD players. When CD players were introduced, consumers had large inventories of records, and records could not be made to work on a CD player. For a CD player to be useful, therefore, consumers had to purchase a whole new inventory of CDs to replace their existing record collections. However, CD sound quality relative to records was so superior and the price for such systems dropped so rapidly that consumers switched en masse to CDs within a few years.

Similarly, many businesses have switched from proprietary closed systems to open systems running UNIX or other software for their business applications. The open systems were generally not directly compatible with the proprietary systems. However, open-system vendors frequently provided migration tools to help port existing files and applications to the new system, and customers were willing to purchase new applications that took advantage of the advanced features of the new system. In some cases, customers switched vendors; in others, they migrated to new systems from their existing vendors.

Again, it must be emphasized that the source of this advantage, to the extent it exists, is not due to anticompetitive actions taken by the existing supplier.(41) Rather this advantage is due to the nature of the products sold. The fact that new suppliers with "better" products do not supplant existing suppliers is not necessarily inefficient. Consumers will make decisions about whether to upgrade to new platforms based on the cost and advantages of switching to the new platform. A new platform should not succeed unless its advantages are greater than the full costs associated with switching to that platform.(42) Thus, it is important to emphasize that it is not necessarily reflective of anticompetitive actions if new "better" technologies do not succeed--rather it can be that the advantages of these technologies do not justify the investments in new complementary goods that allow transition of existing investments to the new platforms.

In dynamic industries, an incumbent supplier cannot rest on its laurels and expect to be able to retain its customers. To sell new products it must upgrade its products, or risk losing new and existing customers to other suppliers who offer better products. In addition, the supplier must take into account that the level of switching costs is not a given. Rivals can invest in means to reduce switching costs and thus reduce the incumbent supplier's advantage. In fact, if a rival has developed a truly superior technology, it has the incentive to do so.

The existing supplier may in fact be at a disadvantage for new-generation technology if switching costs are an issue. Implementing new technologies may cause incompatibilities for its existing customers, yet supporting multiple platforms may be costly. The existing supplier may not wish to "strand" its existing customer base and thus might be at a disadvantage in seeking to implement new technologies. All of these factors indicate that in a dynamic industry, switching costs may provide an advantage, but this advantage is likely to be limited, particularly as regards the potential for significant "leaps" in technology.

The extent of lock-in also relates to the pace of technological change. The more rapid the pace of change, the more quickly customers are likely to switch to new base and complementary products to take advantage of new advanced features. While compatibility of existing files may still be important, the need to purchase new applications or other complementary goods may be less important since the customer would probably upgrade those in any event. Thus, the more rapid the rate of change, the lower the switching costs, and the broader the market.

D. A note on barriers to entry

Entry analysis plays a major role in market definition and the assessment of market power. In terms of the apparatus of antitrust analysis, this occurs either through the identification of competitors already in the market or through an assessment of whether firms could enter the market in a timely fashion to discipline the exercise of market power by an incumbent. Where rents are monopoly rents, where entry is difficult, and where there are not already many existing competitors in the market, monopoly power can survive. The correct analysis of entry (and of expansion by firms already in the market) is thus important to the assessment of market power.

Not every factor that stands in the way of an entrant is a barrier to entry; it may simply be a cost of entry. An entry barrier exists only when entry would be socially beneficial but is somehow blocked. Unnecessarily high profits result, and society (and not just new entrants) would be better off if they were competed away.

The innovation context is most important. After a superior product has been invented, society might be better off in the short run if imitators could produce it right away. Just because they cannot does not imply the existence of economically meaningful barriers to entry. The profits earned are likely to be Schumpeterian profits, and reflect a return to investment in R&D, and to creative activity and risk taking.

E. Market share

1. GENERAL After a market has been defined, and the competitors in a market have been identified, the next step in traditional antitrust analysis is the computation of shares. Plaintiffs in antitrust cases wish to make them appear high; defendants tend to point out that they are low. If a market is defined narrowly, shares are more likely to be high, and vice versa if the market is defined broadly.

However, the meaning of market share is a function of how one has defined the market. Define it too narrowly or too broadly, and a high share does not carry much information. Not everything that is in the market need be weighed equally in terms of constraining the power of the dominant firm; not all that is excluded is irrelevant for explaining the constraints on the dominant firm.

Market share is not the end of the story, particularly in high-technology industries. Many economists, drawing on their understanding of static contexts, tend to believe that a small share shows the absence of market power while a larger share indicates its presence. This is frequently not the case where there is rapid innovation. (Our presumption here is that markets have been defined correctly.)

The more fundamental question is what happens to the firm's business when (if) monopoly profits are sought. This is traditionally analyzed through entry barriers, if not already analyzed in the market definition exercise when looking at the supply-side response. Absent entry barriers, even a high level of concentration does not convey market power. This is commonly recognized in antitrust analysis. Thus a firm with a large market share in a relevant market may simply be efficient or innovative. It could be sustaining its position through lower prices and/or superior products. One should not for a moment necessarily infer market power from such a large share.

To determine whether one is looking at a firm exercising antitrust market power (a monopolist), one has to go deeper and analyze the nature of the rents. Are they Ricardian (scarcity) rents or Schumpeterian (entrepreneurial) rents, as defined earlier and elaborated in more detail below, or are they monopoly rents? Our position is of course akin to the legal question whether the monopoly was acquired and maintained by superior skill, foresight, and efficiency. If it was, the antitrust law recognizes it as a legal monopoly; we would prefer to say that a large share and associated high profits are not a monopoly if the source of the rents is Ricardian or Schumpeterian. It is only a monopoly if it earns monopoly rents. Put differently, in our view a true monopolist is a firm that is earning monopoly rents, not Ricardian or Schumpeterian rents. Ricardian and Schumpeterian rents may be considerable, but they tend to be transitory unless renewed by continuous innovation.

2. INDUSTRIAL DYNAMICS AND CONCENTRATION LEVELS When an industry is in ferment, a proper definition of the market must include a variety of competing technologies, and concentration will then generally be quite low. But suppose the antitrust analyst is quite wooden and stubbornly adheres to a static framework, refusing to recognize the (Schumpeterian) gales of creative destruction blowing in an industry. Then, at a minimum, one must be flexible with respect to the concentration levels taken to indicate market power.

For merger analysis, the Justice Department has traditionally used Herfindahl-Hirschman Index (HHI) thresholds.(43) The Merger Guidelines select critical HHIs at 1000 and 1800.(44) But consider a snapshot of two markets with the same HHI: one in ferment because the regime is characterized by rich opportunity and weak appropriability, and where the incumbents lack complementary assets; the other stable because the technology is mature, appropriability is strong, and the incumbent owns the complementary assets. Clearly, competition circumstances in these two markets are quite different, even though the level of concentration is the same. In short, market concentration thresholds that are insensitive to industrial dynamics are likely to be somewhat misleading.(45) When the technological regime is in ferment, market power, even if it exists momentarily, is likely to be transient because of changes in enabling technologies and in demand conditions.
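The HHI is simply the sum of squared percentage market shares; a brief sketch (the share vectors are illustrative only) shows the scale of the 1000 and 1800 thresholds and how quickly imitative entry erodes an initially "monopolistic" score:

```python
def hhi(shares_pct):
    """Herfindahl-Hirschman Index: the sum of squared percentage market
    shares. A pure monopoly scores 100**2 = 10,000."""
    assert abs(sum(shares_pct) - 100.0) < 1e-6, "shares must sum to 100"
    return sum(s * s for s in shares_pct)

launch = hhi([100.0])                          # 10000.0: a lone innovator
early_entry = hhi([40.0, 30.0, 20.0, 10.0])    # 3000.0: after imitative entry
mature = hhi([10.0] * 10)                      # 1000.0: the lower threshold
```

The arithmetic makes the point in the text vivid: identical HHI numbers say nothing about whether an industry is in ferment or settled.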

Consider as an example the diagnostic imaging industry. Teece et al. documented that with the possible exception of nuclear imaging, all modalities displayed rapid reductions in HHIs over time.(46) In the case of CT scanners, concentration fell from 10,000 to 2200 within 5 years. Magnetic resonance fell from 10,000 to 2489 in 5 years. Each had fallen below 1800 within a decade.(47) An antitrust analyst endeavoring to understand competitive conditions would surely be misled by a snapshot taken early in the history of this industry. Moreover, if all identified modalities are put into the same market for purposes of market definition, the HHI is in the 928-1637 range for the period 1961-1987.(48) This vividly demonstrates the importance of static versus dynamic analysis in making a market power determination.

A second example shows that dynamic analysis in antitrust can work both ways. In the telecommunications industry, alternative technologies have historically been separated by regulatory barriers, and often given exclusive franchises within a particular territory. Cable television, local telephony, long-distance telephony, satellite communications, and broadcast were all distinct regulatory categories. That began to change in the middle of this decade as technical and regulatory barriers separating the different technologies dissolved. A market power analysis of the telecommunications industry conducted in 1992 that did not anticipate these changes would have been seriously in error. The opening of telecommunications markets to competition should result in a broader market, one in which any one firm cannot maintain significant power for long. At the same time, the broadening of this market means that antitrust regulators should rightly be concerned about mergers that would have raised no antitrust issues in 1992 if the market were viewed in static terms. If a franchised cable company buys the dominant local telephone company in its geographic region, for example, the potential for competition between the technologies may be reduced in that region.(49) A static market analysis might improperly discount such potential competition and treat the merger as merely conglomerate.(50) Here, the problem with the HHI is that it assumes a proper (static) market definition is already in place. It is next to impossible to measure the postmerger HHIs of two companies that are not yet in the same market. Of course, the fact that HHIs do not tell the whole story does not mean that they should be discarded entirely. But it does suggest that their use should be tempered by the economic learning discussed in this article.

V. Implications for conduct analysis

A. General

Anticompetitive conduct must differ from action that would be expected under competition; it is conduct that makes no sense without the monopoly profits that can be made after competition is eliminated or reduced. In the U.S., courts have been reluctant, and wisely so, to impose the penalties of section 2 of the Sherman Act on firms that have gained substantial market power without having engaged in conduct that otherwise violates the antitrust laws. The law does not penalize firms that have succeeded because of superior "skill, foresight, and industry." To find a violation of section 2, courts faced with a defendant possessing monopoly power must find that the defendant engaged in troublesome "anticompetitive" or "exclusionary" conduct.

Our earlier analysis showed that in the context of innovation, market power need not be monopoly power in the economic sense. This is not going further than the Supreme Court has so far gone. What the Court has found to be blameless we would not call monopoly. Conduct that is objectionable is thus reduced to circumstances where true monopoly power exists (as evidenced by monopoly rents rather than Ricardian or Schumpeterian rents) and the monopolist is using "bad acts" to limit the expansion of competitors.

From an economic perspective, conduct must satisfy at least three criteria to be anticompetitive:(51)

1. The conduct must not be of a sort that society seeks to encourage, such as nonpredatory price reductions. Absent this criterion, the antitrust laws could be used to discourage socially beneficial conduct.

2. The conduct must be socially inefficient, in the sense that it tends to inhibit industry innovation or otherwise create distortions inconsistent with (long-run) consumer welfare.

3. The conduct must be substantially related to the maintenance or acquisition of monopoly power, and ought to be a substantial cause of the monopoly power under scrutiny. It is not enough that the conduct exploit market power derived from other sources. The reason for this is administrative. In the absence of such a criterion, any action by the firm with substantial market power could be challenged.

The first criterion is obvious if competition is to be encouraged. The second criterion is necessary since in the context of innovation, it is not uncommon that conduct of an incumbent (especially an incumbent innovator) will tend to impair the progress of competitors (but not necessarily of competition). Given the difficulties associated with applying traditional antitrust lenses, we think it is increasingly important to break through to fundamentals and ask what is the impact on economic efficiency over time. This will assist one in arriving at more confident and more accurate answers than one would obtain from asking questions such as Does the practice increase price or reduce output? The third criterion is likewise obvious in that if the conduct is ineffectual, it is irrelevant. Failure to pass this third criterion ought to be dispositive. (This criterion is not unlike Professor Areeda's definition of exclusionary conduct, which he defined as conduct "other than competition on the merits ... that reasonably [appears] capable of making a significant contribution to creating or maintaining monopoly power."(52)) Conduct is neither anticompetitive nor exclusionary if it fails these criteria.

B. The importance of innovation

In the context of high-technology industries, the second criterion is almost equivalent to determining whether the conduct reduces innovation in the industry. This is because it is innovation that will most assuredly undo an incumbent monopolist's position, and it is innovation that is fundamental to the generation of benefits to the consumer, and to the economy more generally. It promotes both competition and economic welfare. Conduct that does not in the aggregate affect innovation negatively does not assist the fundamental ability of the firm to charge monopoly prices. Consequently, if conduct is to be subjected to antitrust scrutiny on the ground that it contributed to market power, then the critical inquiry is whether it impedes aggregate innovation in the industry.

The difficulties of implementing this test need not in most instances lessen its utility. Many practices can be shown by theoretical analysis alone to have no negative effect on industry innovation. In any case, it is always better to answer the right questions relatively well than to answer the wrong questions precisely correctly.

The focus on impacts on industry innovation as the linchpin of conduct analysis is perhaps novel, but it is not without foundation. As we argued strongly in earlier work,(53) innovation is the most powerful force animating competition. Throttle innovation, and one throttles the most fundamental factor driving competition and ensuring superior products and competitive prices for the consumer.

Our focus and our benchmark is industry innovation, by which we mean the sum of incumbents' and new entrants' innovation. Conduct is not necessarily anticompetitive, for instance, merely because it limits the freedom of clones, if the clones would have the effect of reducing industry innovation.

C. Predatory pricing

It should be noted that pricing that excludes competitors is not necessarily anticompetitive, and may in fact be procompetitive. Monopoly power is the power to keep prices artificially high, earn monopoly (as compared to Ricardian and Schumpeterian) profits, and still exclude competitors. Firms that are more efficient than their rivals always have the power to exclude competitors by setting prices low. Indeed, that is what competition is supposed to achieve.

In many high-technology contexts, prices are set low for a variety of reasons, most of which reflect competition at work. For instance, innovators may set the price of hardware low to encourage the sale of software, or vice versa. Many would agree that if a price is set so that anticipated revenue is above avoidable cost, the price in question is certainly not predatory. A firm pricing below avoidable cost might be incurring losses that it could have avoided, unless it is developing a new market or promoting a new product.
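The avoidable-cost benchmark just described can be expressed as a simple screen. The sketch below is ours (the function names and figures are illustrative, not drawn from any case or agency guideline), and, as noted above, a price below avoidable cost is not conclusive where the firm is developing a new market or promoting a new product:

```python
def avoidable_cost(variable_cost, avoidable_fixed_cost):
    # Costs the firm escapes by not making the sales at issue;
    # sunk costs are excluded by construction.
    return variable_cost + avoidable_fixed_cost

def predation_screen(anticipated_revenue, variable_cost, avoidable_fixed_cost):
    """True means the price passes the screen (certainly not predatory
    under the test described in the text); False is not conclusive,
    given the new-market and promotional exceptions noted above."""
    return anticipated_revenue >= avoidable_cost(variable_cost, avoidable_fixed_cost)
```

A price yielding anticipated revenue of 100 against avoidable costs of 80 passes; one yielding 70 does not, and would call for further inquiry rather than automatic condemnation.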

Consider innovation more generally. When the innovator has a first-mover advantage, the innovator may well price high at first ("cream skimming"), but drop prices when the "me too's" arrive and undercut the innovator's price. The innovator must respond, or possibly face a disastrous loss of market share. The innovator must lower prices precisely because it does not have market power. If the innovator's costs are lower than the imitators', the imitators may not be able to make a profit in this particular line of business.

To condemn such behavior as anticompetitive is to condemn the very process of Schumpeterian competition. To use antitrust to protect such competitors is to confound the protection of competitors with the protection of competition. Fully efficient firms will not be deterred, because they will still be able to compete so long as prices remain above the incumbent's avoidable costs. Certainly innovation is not harmed if inefficient imitators are kept out by prices too low for them to survive.

D. Tying and bundling

Tying occurs when the sale of one product (the tying product) is conditioned on the sale of another (the tied product). Examples include the sale of replacement parts by the manufacturer on the condition that the buyer also purchases repair service. Bundling occurs when two or more products are sold together as a package at a price less than the sum of their individual prices.

Welfare-enhancing motivations for tying and bundling (such as economies of scope, protection of reputation, and risk sharing) have been well developed in the literature.(54) Efficiencies of various kinds are often paramount. Some relate to performance and quality. In circumstances where the various items are used in a system, separate supply may confound the determination of systems failure. Whenever complex systems are involved--be it in telecommunications, computers, or aircraft engines--the consumer/user may be hard pressed to uncover the reasons for system failure, and the reputation of the vendor of quality parts suffers, along with the reputation of the vendor of the part or subsystem that caused the system to fail. Tying thus benefits qualified firms, and will be objected to by vendors with less salutary reputations. Tying can thus be, and frequently is, the handmaiden of economic efficiency.(55)

Bundling is likewise frequently beneficial. It results in lower prices to consumers. In the high-technology context it may also enable experience to be obtained in the use of certain goods that might not otherwise sell on a stand-alone basis. Once experience is obtained, the product in question might be viable on a stand-alone basis.

Moreover, economic analysis has shown that the standard leveraging arguments--that tying is used to lever monopoly in one market into a second--fail as a matter of logic, at least in most cases. If monopoly power exists in one market, it can generally be extracted there. While theoretical possibilities suggesting otherwise can be constructed, the assumptions required are rarely met in real-world circumstances.

E. Integration of function

In the context of high-technology industries, various plaintiffs have from time to time insisted that it is anticompetitive for an alleged monopolist to meld into a single product two or more products that were previously offered separately, or could have been offered separately. The argument is commonly made that design integration of this kind restricts entry or growth by the specialized suppliers of one or both of the separate products. It is indeed common in the computer and the software industry for new products to combine functionality, which may well have been provided by separate vendors or by separate products offered by the vendor in the past.

Such integration of function implies no threat to an incumbent's rivals unless consumers prefer products integrated in this fashion (rather than being forced to integrate the products themselves). If consumers prefer buying separate products, and integrating them themselves, then the innovator who joins the functionality in just one product will fail to gain sales. So to insist that it is anticompetitive to give consumers what they want is to once again fall into the trap of using antitrust to protect competitors rather than competition and consumers.

Determining the legitimacy of design choices by innovative firms requires more than just economic analysis. It requires an evaluation of technical choices and consumer preferences. Such data, even if available to antitrust enforcement agencies, cannot typically be adequately processed and understood by them. Accordingly, enforcement agencies are likely to make serious errors if they endeavor to second-guess design decisions of innovators in regimes of rapid technological change and/or product redefinition.

F. "Vaporware" and the premature announcement of new products

In industries like autos, computer hardware, and computer software, vendors announce new products expected to be available in the future. It is sometimes alleged that firms engage in this practice to discourage customers from switching to their competitors during the period before the new product becomes available.

It is clear, however, that consumer choice is assisted by timely information about future product availability. Such information enables consumers to plan their product acquisitions. We cannot see how this practice could have any anticompetitive effect. There are two obvious circumstances to evaluate. First, the announcement turns out to be accurate. Clearly, if the issue is simply that a vendor signals early on its future product releases, and does in fact release products consistent with its announcements, there cannot possibly be any antitrust or consumer protection issues. Consumer welfare is unambiguously enhanced. Second, the announcement turns out to be inaccurate or misleading. These cases are a little more complex. Competitors, customers, and the vendor alike could be injured by such mistakes. The injury does not, however, enhance market power if consumers are well informed and rational. This is because consumers and the marketplace will quickly calibrate the announcement/forecasting errors of the vendor, and discount its announcements accordingly. This is not unlike how investors discount a company's (and its analysts') earnings forecasts on Wall Street. A firm that misses the analysts' forecasts, even occasionally, will have its market value discounted in short order. Customers in product markets act in much the same way.

VI. Luck, incentives, and ignorance

With increasing returns and network effects, some antitrust analysts may be led to the view that the leading firm in an industry, while performing in a fully credible though possibly unexceptional (technological) manner, has been thrust ahead of its competitors merely by chance events. Market outcomes achieved by luck need not be easily undone for some period, although our discussion of paradigms earlier suggested that in high-technology industries (e.g., computers and computer software) good luck would carry one for only a short period. Indeed, the more recent literature on first movers suggests that first-mover advantages are not durable.(56) Competences built up, however, need not be easily transferred. So once obtained, dominance, if underpinned by specialized competences, can sometimes be maintained--but usually only until the next paradigm shift.

There are two classes of reasons why, absent unambiguous anticompetitive conduct by a firm with market power, the antitrust agencies should not intervene. One is that the antitrust action might produce severe disincentive effects throughout the entire economy. The possibility of success through superior skill, foresight, and acumen, or just dumb luck, induces entry, investment, and unparalleled, even maniacal, effort. To penalize success with poorly reasoned antitrust intervention is dangerous. Oliver Williamson(57) reminds us of Robin Marris'(58) comments almost three decades ago:
   ... trust busting effectively contradicts the most fundamental principle of
   capitalism. Whatever may be said of the liberty of the individual,
   capitalism insists on the liberty of the organization. That liberty
   includes the right to grow, and the system rewards, with growth, the fruits
   of both good luck and good guidance. I cannot conceive how any political or
   other mechanism can sustain that principle if it is modified to read "You
   shall continue to be rewarded for success, but for successive success you
   shall be punished."


In high-technology contexts, we do not need to suggest that one should reward the tortoise for the foibles of the hare, for the selection environments in high-technology industries simply do not allow tortoises to survive. Contemporary high-technology industries attract and utilize the best, the brightest, and the fleet footed, and display a genre of competition completely unparalleled in the history of capitalism.

In any case, as indicated earlier, the risk of error associated with the agencies and the courts endeavoring to intervene in high-technology contexts is high. As Judge Easterbrook has aptly stated:
   if any economic institution survives long enough to be studied by scholars
   and stamped out by law, it probably should be left alone, and if an
   economic institution ought to be stamped out, it is apt to vanish by the
   time the enforcers get there.(59)


Moreover, Easterbrook reminds us that the journey from social science to law should be a process of conserving on the costs of error and information:
   we wish to hold to a minimum the sum of (a) the welfare losses from
   inefficient business arrangements; (b) the welfare losses from efficient
   business arrangements condemned or discouraged by legal rules; (c) the
   costs of operating the legal system. How? The judge must decide between
   approaches before the economics profession is confident which is best, and
   in the process increases all three of these kinds of cost. A century of
   antitrust law, and the profession is still debating the merits of almost
   every practice except cartels and mergers to monopoly--and dissenting
   voices are being heard even on those subjects.(60)


Accordingly, we are rather skeptical that the "surgical" approach to antitrust intervention confidently suggested by Deputy Assistant Attorney General Rubinfeld in his article in this issue of The Antitrust Bulletin is in fact feasible. The agencies and the courts wield blunt axes, not surgeons' knives. While the agencies and their advisors may be bright and well educated, the high-technology industries they seek to regulate have arguably attracted an even brighter and more hardworking cadre of top talent.

Moreover, the real world of high technology is extremely complex. The information requirements to determine when an action is likely to be anticompetitive in dynamic, high-tech industries are high. Assessing whether a firm has market power requires an understanding of the sources of its competitive advantages as well as customer preferences. Many actions that harm competitors have valid justifications or are strongly related to competitive motivations. Distinguishing between these effects is frequently very difficult, and the politics of the process gives considerable advantage to the complainers.

The problem of incomplete information escalates dramatically when the activity at issue involves questions about how firms design their products, and what firms decide to include as features in their products. Determining the right answer likely requires mastery of significant technological detail, and assessments of consumer preferences are inherently uncertain. Enforcement agency personnel are not likely to be capable of making the technical distinctions and would have to rely on industry personnel who may have divergent views and special agendas.

This indicates the need for extreme caution. Moreover, government antitrust investigations are typically long and costly procedures, particularly since nonmerger investigations do not face the constraints imposed by the Hart-Scott-Rodino Act.

Firms may even enter into consents to minimize litigation costs and negative public opinion, even if the conduct at issue is not anticompetitive. These consents can have two effects. If the agency has not made the right call, it will stop a behavior that is procompetitive. A secondary effect can occur with other firms that may be concerned that they could face investigation and potentially litigation for similar procompetitive behavior. For instance, a ruling that limits the design choices of firms can have far-reaching effects. In many high-technology markets, suppliers need to coordinate behavior with other vendors of complementary products to have their products be valuable to customers. However, it is not likely to be optimal to always ensure compatibility with every potential complementary product. In addition, there may be times when it makes most sense to incorporate a complementary product into the platform. Any time a potential rival at one level is harmed by these decisions, the supplier could face investigation and potential litigation. This would likely affect the firm's strategies and incentives, reducing innovation by increasing the cost of investment in innovation or reducing the benefits from innovation.

VII. Conclusion

Despite the confidence displayed by some agency lawyers and economists, we do not believe the antitrust agencies anywhere in the world are at present well equipped to deal with competition policy in high-technology industries.(61) This is not because the agencies have failed to hire the requisite staff, but because the economics profession has for many years largely ignored the study of innovation. Belated attention to network effects and increasing returns, while admirable, is a sideshow. The very nature of competition, the definition of industries, the basis of competitive advantage, the effects of "restrictive" practices and the nature of economic rents are all different in the context of innovation. The costs of error are great. The agencies, whose reputations have improved considerably over the past two decades, run the risk of squandering their reputational capital and becoming viewed as clumsy-footed spoilers if they do not recognize that they are now on uncharted terrain.

The good news is that the cost of inaction is not high, and pales next to the costs and likelihood of error where innovation is rapid. In high technology, we observe competition orders of magnitude more fierce than in industries where the agencies have in the past found problems. We have little doubt about the eventual self-corrective capacities of markets in such contexts, even in the presence of networks.

As to the political economy of governmental antitrust action in high technology in the U.S., we note that the activists seem to fall into two groups: the agencies themselves and former agency employees; and competitors who would benefit if leading firms were hobbled. Consumers themselves seem relatively silent, and consumer surveys strongly disfavor agency intervention in high-technology situations.

The task ahead--learning how to apply competition policy principles to high technology--is an important one to which we remain committed. This special issue of The Antitrust Bulletin is one small step. Fortunately, there is a large literature in "innovation studies" and "innovation management," but it is neither read nor cited by antitrust lawyers and economists. It is our belief that antitrust jurisprudence can be considerably enhanced by a deeper appreciation of the economics of technological innovation and the fragility of competitive advantage in the absence of continuous innovation.

(1) THOMAS M. JORDE & DAVID J. TEECE, ANTITRUST, INNOVATION, AND COMPETITIVENESS 234 (1992).

(2) Id. at 233.

(3) Id.

(4) Id. at 234.

(5) In this regard we note that some economists believe that work on network effects yields sufficient guidance for enforcement policy, at least with respect to software. Michael Katz & Carl Shapiro, Antitrust in Software Markets (unpublished working paper, Department of Economics, UC Berkeley, March 17, 1998), state the following: "we disagree with those who say that antitrust enforcers lack the economic tools to understand software markets" (p. 1). Despite important work by Katz, Shapiro and others, the amount of serious scholarly inquiry on software markets is very limited.

(6) For a recent assessment of the antitrust treatment of mergers and acquisitions in the software industry, see id.

(7) This is the message we tried to convey a decade ago in Jorde & Teece, supra note 1. We warned that competitive regimes propelled by innovation would outclass other regimes in the benefits they could bring to the consumer, and that the agencies and the courts should be very cautious about intervening. To our dismay, the agency viewpoint, at least as expressed publicly by former Assistant Attorney General Anne Bingaman on several occasions, is that the importance of innovation requires additional vigilance and intervention on the part of the agencies. Such hubris displays a lack of appreciation for the complexity of the issues, and optimism--completely unsupported by the historical record--with respect to what regulators can accomplish with respect to improving outcomes in regimes of rapid technological change. The historical record, not just in antitrust but in other regulatory contexts, indicates that regulatory processes are fundamentally limited in the context of innovation because of complexity and regulatory lags.

(8) JAMES UTTERBACK, MASTERING THE DYNAMICS OF INNOVATION (1996).

(9) Id. at 191.

(10) Gary Pisano, Amy Shuen & David J. Teece, Dynamic Capabilities and Strategic Management, 18 STRATEGIC MGMT. J. 509 (1997).

(11) Rebecca M. Henderson & Kim B. Clark, Architectural Innovation: The Reconfiguration of Existing Product Technologies and the Failure of Established Firms, 35 ADMIN. SCI. Q. 9 (1990).

(12) With the advent of global network computing, the industry is finding that the network itself can be a distribution channel--both the Internet and corporate intranets. Electronic distribution is a new channel that is having a major impact on the software industry because there are zero barriers to firms using this as a means of distribution. Moreover, distribution costs are low, suggesting that access to distribution is unlikely to impair entry.

(13) The harbinger of this metamorphosis was the prescient statement by Sun's Scott McNealy in the mid 80s that, "the network is the computer."

(14) Bill Joy, quoted in FORTUNE, Dec. 11, 1995, at --.

(15) Robertson, Stephens & Company, Enterprise Software Applications in the 90's (June 2, 1992), at 7.

(16) Competition from firms outside the established paradigm also probably needs to be weighted more heavily than competition from firms operating within a paradigm, particularly if firms operating "outside the paradigm" have products whose price/performance attributes are superior to the incumbent's.

(17) An additional factor supporting high margins is the shorter lifetime-before-obsolescence during which the initial R&D investment must be recovered.

(18) See David J. Teece, Technology Transfer by Multinational Firms: The Resource Cost of Transferring Technological Know-How, 87 ECON. J. 242 (1977); also reprinted in THE ECONOMICS OF TECHNICAL CHANGE (Edward Mansfield & Elizabeth Mansfield eds. 1993); also reprinted in MARK CASSON (ed.) MULTINATIONAL CORPORATIONS, THE INTERNATIONAL LIBRARY OF CRITICAL WRITINGS IN ECONOMICS 1 (1990), at 185-204. See also David J. Teece, The Market for Know-How and the Efficient International Transfer of Technology, 458 ANNALS ACAD. POLIT. SOC. SCI. 81 (1981).

(19) Brian Arthur, Competing Technologies: An Overview, in TECHNICAL CHANGE AND ECONOMIC THEORY (G. Dosi et al., 1988).

(20) Pisano, Shuen & Teece, supra note 10.

(21) When there are large up-front costs, entry may be difficult because new entrants must consider postequilibrium entry and because the development costs are typically sunk and this adds to the entry risk.

(22) See Michael Katz & Carl Shapiro, Network Externalities, Competition, and Compatibility, 75 AM. ECON. REV. 424 (1985).

(23) We are not sanguine that antitrust policy can assist transitions in any meaningful way. The stronger the network externalities, the less impact antitrust action is likely to have. Antitrust action is simply not capable of "fine-tuning" industry outcomes in the face of strong network effects.

(24) Harold Demsetz, Industry Structure, Market Rivalry, and Public Policy, 16 J.L. & ECON. 1 (1973).

(25) Sam Peltzman, The Gains and Losses from Industrial Concentration, 20 J.L. & ECON. 229 (1977).

(26) See, for example, Pisano, Shuen & Teece, supra note 10.

(27) This is because valuable routines could be broken, and knowledge could be "lost." See RICHARD NELSON & SIDNEY WINTER, AN EVOLUTIONARY THEORY OF ECONOMIC CHANGE (1982) for a discussion of routines.

(28) See David J. Teece, The Market for Know-How and the Efficient International Transfer of Technology, 458 ANNALS 81 (1981) and David J. Teece, Capturing Value from Knowledge Assets: The New Economy, Markets for Know-How, and Intangible Assets, 40 CALIF. MGMT. REV. 8 (1998).

(29) See David J. Teece, Profiting from Technological Innovation: Implications for Integration, Collaboration, Licensing and Public Policy, 15 RES. POLICY 6 (1986).

(30) Sidney Winter, Four R's of Profitability: Rents, Resources, Routines, and Replication, in RESOURCE-BASED AND EVOLUTIONARY THEORIES OF THE FIRM: TOWARDS A SYNTHESIS 147-77 (Cynthia Montgomery ed., 1995).

(31) Michael E. Porter has developed a theory of strategy around conduct designed to impair competition. As Porter notes "public policy makers could use their knowledge of the sources of entry barriers to lower them, whereas business strategists could use theirs to raise barriers." Michael Porter, The Contributions of Industrial Organization to Strategic Management, 6 ACAD. MGMT. REV. 612 (1981).

(32) IRVING FISHER, ELEMENTARY PRINCIPLES OF ECONOMICS (1923).

(33) ROBERT PINDYCK & DANIEL RUBINFELD, MICROECONOMICS 327 (2d ed. 1992).

(34) Id. at 328.

(35) DENNIS CARLTON & JEFFREY PERLOFF, MODERN INDUSTRIAL ORGANIZATION 97 (1990).

(36) An economist at the University of Washington once claimed, in the context of litigation, that Chevron had a monopoly in the sale of Chevron gasoline.

(37) The fundamental test for market definition is the reasonable interchangeability of goods--the substitutability of good x for good y in response to a change in the price of good y. See United States v. E.I. duPont de Nemours & Co., 351 U.S. 377, 380, 395-400 (1956). This cross elasticity of demand is most commonly tested by looking at the historical evidence--what has happened to sales of good x in the past when the price of good y went up.
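The cross elasticity of demand described in note 37 can be computed directly from such historical evidence. A minimal sketch, using the arc (midpoint) form of the elasticity; the figures below are hypothetical:

```python
def cross_price_elasticity(qx_before, qx_after, py_before, py_after):
    """Arc cross elasticity of demand: the percentage change in sales of
    good x divided by the percentage change in the price of good y,
    using midpoints so the measure is symmetric in direction."""
    pct_dq = (qx_after - qx_before) / ((qx_after + qx_before) / 2)
    pct_dp = (py_after - py_before) / ((py_after + py_before) / 2)
    return pct_dq / pct_dp

# Hypothetical: sales of x rise from 100 to 120 units when y's price
# rises from 10 to 11. A strongly positive value suggests x substitutes
# for y, supporting their inclusion in the same relevant market.
e = cross_price_elasticity(100, 120, 10.0, 11.0)
```

A value near zero or negative would instead suggest the goods are unrelated or complementary, cutting against a shared market.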

(38) For a discussion of this problem in the context of the markets for Internet software, see Mark A. Lemley, Antitrust and the Internet Standardization Problem, 28 CONN. L. REV. -- (1996).

(39) See United States v. General Dynamics Corp., 415 U.S. 486, 501 (1974) ("In most situations, of course, the unstated assumption is that a company that has maintained a certain share of a market in the recent past will be in a position to do so in the immediate future"). On occasion, courts will take into account factors suggesting that a current market participant is unlikely to be an effective future competitor. For example, in General Dynamics, the Court allowed a merger that significantly concentrated the coal industry in Illinois, because the purchased firm was running out of coal reserves and so would not be an effective competitor for future long-term coal supply contracts. Id. at 503-04.

(40) 1992 Horizontal Merger Guidelines, Issued by the U.S. Department of Justice and the Federal Trade Commission, April 2, 1992, at [sections] 1.11.

(41) Unless the methods used by the supplier to win its existing customers were anticompetitive.

(42) Not all customers are likely to face the same switching costs or have the same valuation of the improvements in the new platforms versus the old platforms. Thus the new platform may be able to attract some but not all customers.

(43) The HHI is measured by taking the sum of the squares of the market shares of all market participants, expressed as percentages. Thus, an industry with two firms, each of which has 50% of the market, has an HHI of 50^2 + 50^2 = 5000. HHIs range from numbers approaching 0 (in perfectly competitive markets) to a theoretical maximum of 10,000 (monopoly).

(44) These numbers are thresholds--the Department is unlikely to challenge a merger if the postmerger HHI is below 1000, and is likely to challenge a merger if the postmerger HHI is above 1800 and the change in HHI is greater than 50. HHIs between 1000 and 1800 occupy a middle range in which the discretion of the Department is considerable. However, it is rare for the agencies to challenge a merger where the HHIs are below 2200-2500.
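The arithmetic and thresholds described in notes 43 and 44 can be sketched as follows (the threshold values are those stated in note 44; the classification labels and example shares are ours):

```python
def hhi(shares_pct):
    """Herfindahl-Hirschman Index: sum of squared percentage market shares."""
    return sum(s ** 2 for s in shares_pct)

def guidelines_zone(post_merger_hhi, delta_hhi):
    """Classify a merger under the thresholds stated in note 44."""
    if post_merger_hhi < 1000:
        return "challenge unlikely"
    if post_merger_hhi <= 1800:
        return "middle range: agency discretion"
    if delta_hhi > 50:
        return "challenge likely"
    return "highly concentrated, but small change in HHI"

# The two-firm example from note 43: each firm with 50% of the market.
duopoly = hhi([50, 50])  # 5000
```

As note 44 observes, actual agency practice has tolerated HHIs well above the nominal 1800 line, so the classification is a screen, not a prediction.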

(45) The same problem exists to an even greater degree with court decisions whose measurements of market share have tended to use four-firm concentration ratios (sum of the percentage share of the four largest firms in the market), a measure that is even less sensitive to industry conditions than the HHI.

(46) David J. Teece, Raymond Hartman & Will Mitchell, Assessing Market Power in Regimes of Rapid Technological Change, 2 INDUS. & CORP. CHANGE 317 (1993).

(47) Id. at 347.

(48) Id.

(49) See Laura Land Sigal, Challenging the Telco-Cable Cross-Ownership Ban: First Amendment and Antitrust Implications for the Interactive Information Superhighway, 22 FORDHAM URB. L.J. 207 (1994); Darin Donovan, Competition for All, Security for None: Antitrust and Market Definition Problems of Future Telecommunications Industries (unpublished manuscript 1994).

(50) See John M. Stevens, Antitrust Law and Open Access to the NREN, 38 VILL. L. REV. 571 (1993). Cf. Bruce A. Olcott, Will They Take Away My Video-Phone if I Get Lousy Ratings?: A Proposal for a "Video Common Carrier" Statute in Post-Merger Telecommunications, 94 COLUM. L. REV. 1558 (1994) (suggesting that mergers between different media were critical to effective competition, and therefore that media should be integrated and regulated as a whole). Subsequent events have effectively disproven the need for (and desirability of) mergers between companies in different media.

(51) We purposefully avoid the "less restrictive means" criterion sometimes advanced in antitrust analysis. This is because there is almost always a less restrictive contractual mechanism for achieving any economic outcome; the creative mind can always find an arrangement that is less restrictive; but if it is less efficient or effective from an economic perspective, then it is less desirable. Note also that while our framework endeavors to be general, our focus here is on high-technology industries.

(52) PHILIP AREEDA & DONALD F. TURNER, ANTITRUST LAW [paragraph] 626C (1978). See also PHILIP AREEDA & HERBERT HOVENKAMP, ANTITRUST LAW [paragraph] 651c (1996).

(53) See Thomas Jorde & David J. Teece, Antitrust Policy and Innovation: Taking Account of Performance Competition and Competitor Cooperation, 147 J. INSTITUTIONAL & THEORETICAL ECON. 118 (1991).

(54) See Michael Whinston, Tying, Foreclosure, and Exclusion, 80 AMER. ECON. REV. 837 (1990) and William Baxter & Daniel Kessler, Toward a Consistent Theory of the Welfare Analysis of Agreements, 47 STAN. L. REV. 615 (1995).

(55) Tying can also assist in the diffusion of new technology in circumstances where consumers are not quite sure of the value of a new product. See John Lunn, Tie-in Sales and the Diffusion of New Technology, 146 J. INSTITUTIONAL & THEORETICAL ECON. 249 (1990).

(56) Marvin Lieberman & David Montgomery, First-Mover Advantages, 9 STRATEGIC MGMT. J. 41 (1988); and David J. Teece, Profiting from Technological Innovation: Implications for Integration, Collaboration, Licensing and Public Policy, 15 RES. POLICY 285 (1986).

(57) OLIVER WILLIAMSON, MARKETS AND HIERARCHIES: ANALYSIS AND ANTITRUST IMPLICATIONS 219 (1975).

(58) Robin Marris, Is the Corporate Economy a Corporate State?, 62 AMER. ECON. REV. 103, 113 (1972).

(59) Frank Easterbrook, Ignorance and Antitrust in ANTITRUST, INNOVATION AND COMPETITIVENESS 119 (Thomas Jorde & David Teece eds., 1992).

(60) Id. at 122.

(61) See also James Langenfeld & Mary Coleman, Antitrust Analysis and Remedies in High-Tech Industries, GLOBAL COMPETITIVE REV., June/July 1998, at 42.

APPENDIX A

Examples of Performance Competition

Diagnostic imaging

In 1993, we studied the market for diagnostic imaging devices in some detail.(1) Diagnostic imaging devices are used by physicians and other health care professionals to obtain information about the internal condition of the body. Examples include x-ray machines, nuclear imaging devices, ultrasound machines, computer tomography (CT) scanners, magnetic resonance imaging (MRI), and digital radiography. The application of each is somewhat different but to varying degrees these devices provide, or could provide, alternative ways of getting the same or similar information. Each modality, however, has particular utility in certain applications; these applications overlap partially but not substantially.(2) Application of a small but significant and nontransitory increase in price (SSNIP) analysis would probably identify each modality as a separate market from the demand side. Since price is only one of three or more key demand attributes,(3) the SSNIP focus on price alone biases the assessment of competitive responses when the hypothetical monopolist raises prices. Narrow (and concentrated) markets necessarily follow, despite the strong qualitative evidence of competition between modalities.(4)

Microprocessors

The microprocessor industry has been characterized by an astounding rate of technological innovation. Competition is based not just on the speed of microprocessors (measured in millions of instructions per second, or MIPS), but on architecture, compatibility with hardware and software, reliability, power usage, and other factors. There is competition between architectures, manifested primarily in competition between Motorola and Intel in earlier generations, and more recently between RISC- and CISC-based architectures and between the Intel standard and the PowerPC. Intra-architectural competition has also been present, with AMD and Cyrix introducing clones of one of the dominant architectures (the Intel 80x86 architecture). Competition is intergenerational as well as intragenerational. New generations of microprocessors substitute for older generations, eventually replacing them entirely. This tends to occur rather quickly, as processing power has tended to double every 18 months to 3 years(5) (see figure 1).

[Figure 1 ILLUSTRATION OMITTED]
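The doubling dynamic described above can be made concrete with a back-of-the-envelope sketch. The starting performance, the 2-year doubling period, and the 3-year generation spacing below are illustrative assumptions, not figures taken from the article:

```python
# Hypothetical illustration of Moore's-Law-style growth in processing
# power (MIPS) across microprocessor generations. Starting value,
# doubling period, and generation spacing are assumed for the sketch.

def mips_after(years, start_mips=1.0, doubling_years=2.0):
    """Processing power after `years`, doubling every `doubling_years`."""
    return start_mips * 2 ** (years / doubling_years)

# A new generation every 3 years over 15 years:
for gen, year in enumerate(range(0, 16, 3), start=1):
    print(f"Generation {gen} (year {year:2d}): {mips_after(year):8.1f} MIPS")
```

With these assumptions, performance grows roughly 180-fold over five generations, which conveys why intergenerational substitution proceeds so quickly.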

Firms that fail to keep up with the pace of technological innovation in this industry will be left out.(6) Success in one generation does not guarantee success in the next, but merely offers the opportunity to compete in the next round of innovation and product launches.

Performance-based competition during multiple generations of microprocessors has in fact been quite intense. The magnitude of the price-performance improvement is simply dazzling--and is difficult to square with the idea that Intel is a monopolist that controls the industry and is free to do what it wants. If the purpose of competition is to bring benefits to the consumer, there is probably no industry that can match this one. Yet a mechanical and uninformed application of the DOJ/FTC's SSNIP might well impute market power to Intel, to AMD, to Motorola, to Sun Microsystems, and to others at certain times during the history of the semiconductor industry. These firms are clearly vigorous competitors, suggesting there is something quite wrong with the test.

Ironically, if there was anticompetitive conduct or dominance in the semiconductor industry during this period, the SSNIP approach would be unlikely to detect it. Because of the fast-paced nature of technological change in this industry, the only way one company might dominate the market would be to find a way of maintaining or leveraging its power from one generation to the next. But establishing such a leveraging claim would require the plaintiffs to prove the existence of two semiconductor markets that differed only in time--e.g., the market for generation 1 microprocessors and the market for generation 2 microprocessors. The SSNIP approach does not appear to leave room for such intertemporal market definition.(7) And while one company (Intel) has in fact managed to maintain market leadership across several product generations, at least among 80x86-architecture chips, that leadership appears to result from Intel's exploitation of its technological edge, and not from anticompetitive use of its market power. The SSNIP needs to be recast to have it perform properly in the context of innovation.

[Figure 2 ILLUSTRATION OMITTED]

(1) Raymond Hartman, David J. Teece, Will Mitchell & Thomas Jorde, Assessing Market Power in Regimes of Rapid Technological Change, 2 INDUS. & CORP. CHANGE 317 (1993).

(2) Id. at 329.

(3) Id. at 329.

(4) Id. at 325.

(5) Indeed, this rather astonishing (from a technological standpoint) tendency has become so ingrained in the consciousness of the semiconductor industry that it is referred to as Moore's Law, after Intel co-founder Gordon Moore.

(6) Figure 2 shows the generational price/performance of microprocessors, where performance is again measured in MIPS. The cost per MIPS has declined from thousands of dollars in the earlier generations to $10 or less in the later generations.

(7) A more likely leveraging claim in this industry is what one might call backward temporal leveraging--that (for a hypothetical example) Intel tied sales of its new 486 microprocessors to the purchase of its older 386 microprocessors, at a time when Intel was the only supplier of 486 chips but faced competition for 386 chips. This claim too would require that markets be defined intertemporally.

APPENDIX B

A Multi-Attribute Small but Significant and Nontransitory Increase in Price (SSNIP)

When competition proceeds as much on the basis of features as price (as it does in regimes of rapid technological progress), an equally pertinent question to ask is whether a change in the performance attributes of one product would induce substitution to or from another. If the answer is affirmative, then the differentiated products, even if based on alternative technologies, should probably be included in the relevant product market. Furthermore, when assessing such performance-induced substitutability, a 1-year or 2-year period is simply too short because enhancement of performance attributes may take a longer time to accomplish than price changes. While it is difficult to state precisely (and generally) the "right" length of time, it is clear that the time frame should be determined by technological concerns. As a result, it may be necessary to apply different time frames to different products and technologies. In the semiconductor industry, it might be tailored to each generation of the product (about 3 years for microprocessors).

When assessing performance-based competition among existing producers, the product changes considered should include those available from the reengineering of existing products using technologies currently known to existing competitors.(1) Thus, if firm A, by modifying its product X at a relatively modest cost using known product and process technology, could draw sales away from product Y of firm B, such that B would need to improve its products to avoid losing market share to A, then X and Y are at least potential competitors in the same relevant market.

When assessing potential competition and entry barriers, the 1- or 2-year, 5% price rule suggested by the SSNIP must also be modified to include variations in performance attributes of existing and potentially new technologies. In high-technology, innovative industries it is this potential competition that is often most threatening to a firm that attempts to dominate the market, and it may be the most important from a welfare standpoint. Unfortunately, this potential competition also takes the longest time to play out and is the most difficult to anticipate fully. Antitrust regulators must determine a more realistic time frame over which the new products and technologies that may enter the market will be considered. The precise length of time allowed for the entry of potential competitors must reflect technological realities. Hence, it too may vary by product and technology.

To see how this approach would work, assume that several firms offer various products with different attributes. One producer improves the performance of a certain attribute, holding price and other attributes constant. If a shift in demand away from a similar product results, there exists a performance cross elasticity between the two products. If this cross elasticity is high enough, the products are in the same market. However, if the producer were to improve the performance of a certain attribute while simultaneously raising the product's price such that no substitution occurs, this does not necessarily mean that the products are in different markets--merely that the price-performance ratio of the product has not in fact changed.
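The performance cross elasticity just described can be computed in direct analogy to an ordinary price cross elasticity. The sketch below uses invented numbers purely for illustration:

```python
# Hypothetical computation of a performance cross elasticity: the
# percentage change in demand for product Y induced by a percentage
# change in a key performance attribute of product X. All quantities
# and attribute values are invented for illustration.

def performance_cross_elasticity(q_y_before, q_y_after,
                                 perf_x_before, perf_x_after):
    pct_delta_q = (q_y_after - q_y_before) / q_y_before
    pct_delta_perf = (perf_x_after - perf_x_before) / perf_x_before
    return pct_delta_q / pct_delta_perf

# Firm A improves product X's key attribute by 25% (100 -> 125 MIPS),
# and product Y's unit sales fall from 1000 to 900:
e = performance_cross_elasticity(1000, 900, 100, 125)
print(f"performance cross elasticity: {e:.2f}")  # -0.10/0.25 = -0.40
```

A large (negative) value would indicate that demand for Y is sensitive to improvements in X, supporting the inclusion of both products in the same relevant market.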

This framework requires one to analyze and quantify both price and performance (attribute) competition. For example, one can retain the 5% price guideline while extending the DOJ approach to incorporate performance competition, perhaps by assessing percentage changes in performance attributes. However, such an extension is far from straightforward. In general, performance changes are more difficult to quantify than price changes because performance is multidimensional. As a result, quantification requires measuring both the change in an individual attribute and the relative importance of that attribute. Unlike price changes, which involve altering the value of a common base unit (dollars), performance changes often involve changing the units by which performance is measured. Nonetheless, rough quantification is often possible, based on the pooled judgments of competent observers, particularly product users.

We suggest introducing a 25% rule for a change in any key performance attribute.(2) Assume that an existing manufacturer lowers the quality of a key performance attribute of an existing product by 25%, while the price of the product and all attributes of goods produced by other suppliers remain the same. If no substitution to other products occurs, then the original product constitutes a distinct antitrust market. If substitution to other products does occur, then those other products share the market with the original product. Conversely, assume that a new product is introduced that is identical to an existing product in all ways except that it offers a 25% improvement in a key performance attribute. If there is no substitution from the existing product to the new product, then the two products do not share the same antitrust market.
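The 25% rule described above can be sketched as a simple decision procedure. How much observed substitution counts as "substitution occurring" would in practice be a matter of evidence; the 5% diversion threshold below is an assumption made for the sketch, not part of the proposed rule:

```python
# A minimal sketch of the 25% performance-attribute test. The 5%
# diversion threshold is an illustrative assumption; in practice the
# sufficiency of observed substitution is an evidentiary question.

def same_market(share_diverted, substitution_threshold=0.05):
    """Given the share of demand diverted to (or from) other products
    after a 25% change in a key performance attribute, report whether
    the products belong in the same antitrust market."""
    return share_diverted >= substitution_threshold

# 12% of buyers switch after a 25% quality degradation: same market.
print(same_market(0.12))  # True
# Essentially no one switches: the product is a distinct market.
print(same_market(0.01))  # False
```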

The criterion of 25% performance improvement for a single key performance attribute is conservative. Not only is a 25% improvement small compared with those that commonly occur in industries experiencing rapid technological change, but a 25% improvement in a single attribute is likely to imply an overall performance improvement of considerably less than 25%. Focusing on changes in a single attribute has the advantage of enhancing quantifiability. It avoids the need to determine the importance of the attribute itself in relation to the product as a whole, as would be required in a general performance enhancement test. Further, single attributes may be more readily amenable to quantification in existing terms. For example, performance in microprocessors can be measured in MIPS (millions of instructions per second), or in clock speed, or in transistors per square millimeter, or in some other respect that consumers find relevant to their purchasing decisions.

While courts and regulators could rely on past market data regarding the effects of changes in performance on market demand, they need not do so. Many product users are familiar with the key development programs of their suppliers and are able to assess the likelihood that a particular product change will emerge in the near future. One possible measurement procedure, therefore, would be to rely on the informed judgments of users of existing products. This procedure would involve identifying market experts, asking them to list key performance attributes, and then asking them to assess the substitutive effects of changes in the attributes. The sample of product users could be supplemented by a corresponding sample of commercial participants, although care would be required to avoid introducing competitive bias into the judgments. A sample of such participants could be asked whether a 25% change in the performance of any one attribute would lead to product substitution. While surveys are less exact than market data regarding past substitutions, they are forward- rather than backward-looking. In innovative industries, that tradeoff may be well worth making.(3)
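Pooling the survey judgments described above reduces, in the simplest case, to tallying the share of respondents who report that they would substitute. The respondents and answers below are invented for illustration:

```python
# Sketch of pooling survey judgments, per the procedure described
# above: each respondent reports whether a 25% improvement in a named
# attribute would lead them to switch products. Respondents and
# answers are invented for illustration.

from collections import Counter

responses = [  # (respondent, would_substitute)
    ("radiologist_1", True), ("radiologist_2", True),
    ("hospital_buyer_1", False), ("radiologist_3", True),
    ("hospital_buyer_2", True), ("clinic_1", False),
]

counts = Counter(would for _, would in responses)
share = counts[True] / len(responses)
print(f"{share:.0%} of respondents would substitute")  # 67%
```

The pooled share could then be compared against whatever substitution threshold the factfinder adopts, with appropriate weighting if some respondent classes are more representative of market demand than others.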

Determining the value of an improvement may not always be so easy, of course. A percentage assessment requires both knowledge of market conditions and an accurate evaluation of the new product. Either one may be hard to come by when there is high uncertainty. The problem is most severe in the case of quantum changes, such as the introduction of a specific new application for a device. Entirely new products or applications have no background against which to benchmark the value of the new technology. Following introduction, however, most product changes take place along a relatively steady trajectory of technological improvement. Even performance improvements based on "unpredictable" problems like software bugs have this feature.

In addition to threshold rules regarding performance changes, market definition requires an identification of a time frame for the competitive product changes--that is, the definition of the "near future." Because there is significant variation among products, no single number will be appropriate for all cases. We suggest that a 4-year period be established as a default time frame, with the option of adjusting the period if strong evidence suggests that it would be appropriate in an individual case.(4) Like the DOJ's old 1-year rule(5) or the patent law's traditional 17-year grant,(6) a fixed 4-year rule will not be optimal in all cases.(7) It could provide too broad a market definition in some cases and too narrow a definition for others; with the benefit of hindsight, it looks about right for the microprocessor industry.

(1) Product changes that depend on anticipated technologies not currently commercially viable may also be relevant to future competition, depending on the circumstances.

(2) Raymond Hartman, David J. Teece, Will Mitchell & Thomas Jorde, Assessing Market Power in Regimes of Rapid Technological Change, 2 INDUS. & CORP. CHANGE 317 (1993).

(3) A multiattribute SSNIP also has the virtue of effectively accounting for the level of appropriability in an industry. Industries in which intellectual property rights are particularly strong should be characterized by greater performance-based competition and less price competition than industries without strong intellectual property protection. The flexibility of the multiattribute SSNIP allows it to account for competition in all sorts of appropriability regimes.

(4) Teece et al., supra note 2, at 341.

(5) The 1992 Merger Guidelines abolished the 1-year rule for testing the market effects of a SSNIP (stated in the 1984 Merger Guidelines at [sections] 2.11), substituting instead market effects "for the foreseeable future." 1992 Merger Guidelines at [sections] 1.11.

(6) The term of patents beginning in 1995 was changed to run for 20 years from the date the patent application is filed, with the actual term varying in practice depending on the length of time the patent spends in prosecution. For more detail on this point, see Mark A. Lemley, An Empirical Study of the Twenty Year Patent Term, 22 AIPLA Q.J. 369 (1995).

(7) This does not mean it is not optimal as a general rule. See Louis Kaplow, The Patent-Antitrust Intersection: A Reappraisal, 97 HARV. L. REV. 1813 (1984).

DAVID J. TEECE, Director, Institute of Management, Innovation and Organization, University of California at Berkeley, and Chairman of LECG, Inc.

AUTHORS' NOTE: The authors wish to thank participants in the interdisciplinary studies workshop at U.C. Berkeley for helpful comments. Special thanks to Richard Gilbert, Douglas Kidder, Henry Kahwaty, James Langenfeld, Jeff Machler, Jackson Nickerson, James Ratliff, Carl Shapiro, Pablo Spiller, and Oliver Williamson.

MARY COLEMAN, Senior Managing Economist, LECG, Inc.
COPYRIGHT 1998 Sage Publications, Inc.

Author: Teece, David J.; Coleman, Mary
Publication: Antitrust Bulletin
Date: Sep 22, 1998