Network neutrality and quality of service: what a nondiscrimination rule should look like.
By contrast, some participants in the debate would ban all discrimination, requiring network providers to treat every packet the same. (106) The FCC's draft nondiscrimination rule in the Open Internet proceeding is an example of this type of approach. (107) A rule that required network providers to treat every packet the same would make it impossible to offer Quality of Service, which, by definition, entails the network treating packets differently. (108)
Proponents of this option are concerned that network providers may use the provision of Quality of Service as a tool to distort competition among applications or classes of applications. For example, they are concerned that a network provider may offer Quality of Service exclusively to its own applications, but not to other, competing applications, or may sell Quality of Service exclusively to one of several competing applications. (109) They also point out that network providers who offer Quality of Service and are allowed to charge for it have an incentive to reduce the quality of the baseline service below acceptable levels to motivate users to pay for better service. (110) Moreover, selling Quality of Service allows network providers to profit from bandwidth scarcity, which reduces their incentives to increase the capacity of their networks. (111) While these arguments all have merit, these problems can be solved without totally banning Quality of Service. As will be explained below, it is sufficient to constrain how Quality of Service can be offered and charged for. (112)
Supporters of banning Quality of Service also question whether Quality of Service is needed at all. (113) If there is no need for Quality of Service, then banning it creates limited social costs. (114) So far, proponents of a ban point out, the lack of Quality of Service has not prevented real-time applications from becoming successful on the public Internet. (115) For example, although Internet telephony is sensitive to delay and high variations in delay ("jitter") and may benefit from a network service that provides low delay and low jitter, Internet telephony applications such as Skype or Vonage work in the current Internet. (116) Video telephony applications like Skype or Google Video Chat function over today's broadband connections. (117) The success of real-time applications on today's best-effort Internet has two causes: First, many regions currently seem to have sufficient network capacity to prevent the lack of Quality of Service from becoming a problem. (118) If there is enough capacity so that congestion is generally low, the level of delay will be low enough most of the time to be tolerable for real-time applications. (119) Second, network engineers and application designers have developed end-host-based techniques that allow real-time applications to compensate for the lack of Quality of Service in the network. (120) Pointing to this experience, proponents of a ban argue that capacity increases, combined with end-host-based measures, are sufficient to meet the needs of applications that require low delay or low jitter. (121)
While available capacity affects the benefits of offering Quality of Service, the relationship between the two is more nuanced than is often assumed. Applications that would benefit from Quality of Service ("QoS-sensitive applications") are sensitive to the increase in delay, jitter, or loss, or to the variation in throughput that arises if queues build up in routers along the application's path, creating congestion. (122) (See Box 6: The Relationship Between Congestion, Delay, Jitter, and Loss below.) A network that offers Quality of Service can "help" these applications by providing classes of service that may offer throughput, delay, loss, or jitter that are better suited to the needs of QoS-sensitive applications than the unpredictable and potentially highly variable throughput, delay, loss, and jitter offered by the best-effort service. (123) Potential classes of service may offer throughput, loss, delay, or jitter that is relatively better than the throughput, loss, delay, or jitter provided by best-effort service during times of congestion (124) or may provide a performance that is more constant and predictable than best-effort service. (125) These services, however, can improve on the performance of best-effort service only if there is congestion. (126) If there is no congestion (i.e., if all queues are empty), congestion-related loss and queuing delay will constantly be zero, jitter will be low for all packets, and data flows will experience the maximum throughput and minimum end-to-end delay that is possible on their path. (127) No class of service can improve on that. Thus, Quality of Service is only useful if there is at least some congestion.
BOX 6 THE RELATIONSHIP BETWEEN CONGESTION, DELAY, JITTER, AND LOSS Throughout this Part, "congestion" denotes the building up of a queue for an outgoing link at a router, which may increase delay, jitter, or packet loss. (128) (This definition differs from the definition of congestion that is often used by network providers. See Box 7: Definitions of Congestion and Benefits from Quality of Service below.) Data packets travel across the Internet from router to router until they reach their final destination. At each router, packets arrive through incoming links and are transmitted through the appropriate outgoing link that leads to the next stop--which can be a router or the receiving end host--on their path to their ultimate destination. If packets arrive for transmission over an outgoing link while another packet is being transmitted across that link, they are stored in a queue (or "buffer") for that link until it is their turn to be transmitted. (129) If packets destined for a specific outgoing link arrive faster than they can be transmitted over that link, the number of packets in the queue increases. This may happen, for example, at routers that connect faster incoming links with slower outgoing links, or when different data transfers across the same link coincide. (130) As the number of packets in the queue increases, packets arriving for transmission across that link have to wait longer until they are transmitted, which increases the delay they experience. If the queue is full and cannot accommodate additional packets, the router starts dropping arriving packets, creating packet loss. The end-to-end delay (or "latency") experienced by a packet indicates how long it takes the packet to travel from its origin to its destination.
A packet's end-to-end delay consists of a number of components: how long it takes for the packet to be processed by the various routers along its path, how much time the packet spends in router queues waiting to be transmitted (or, in other words, how much congestion the packet encounters along its path), how long the various routers need to transmit the packet onto the appropriate outgoing link, and how long the packet needs to travel along the links from one router to the next. (131) The longer a packet has to wait in one or more router queues along its path, the higher its end-to-end delay. Now consider an application that sends a number of data packets from one end host to another that travel along the same path ("data flow"). If the different packets spend varying amounts of time in router queues along their way, their end-to-end delay will vary. This variation in end-to-end delay is called jitter. (132) If all packets in a data flow have a similar end-to-end delay (e.g., because they all experience no queuing delay, or because all experience a similar, higher queuing delay), jitter is low. By contrast, if the end-to-end delay experienced by packets in the flow is highly variable (e.g., because some packets experience a lot of delay while others experience little delay), jitter is high. BOX 7 DEFINITIONS OF CONGESTION AND BENEFITS FROM QUALITY OF SERVICE Throughout this Part, "congestion" denotes the building up of a queue for an outgoing link at a router, which may increase delay, jitter, or packet loss. (See Box 6: The Relationship Between Congestion, Delay, Jitter, and Loss.) This definition is derived from the definition of congestion used in queuing theory. (133) As explained in the text, Quality of Service only provides an improvement over best-effort service if this type of congestion exists.
By contrast, under a definition often used by network providers, congestion occurs if the average utilization of a link over a certain time period exceeds a certain threshold. (134) While Quality of Service is useless in a network that never experiences congestion under the definition used throughout this Part, it may still be useful in a network that is not congested under the definition used by network providers. Even in a network with low average utilization, queues will build up occasionally. (135) Thus, a network that is not congested under the definition used by network providers may experience congestion under the definition used throughout this Part and may therefore benefit from Quality of Service. As a result, the statement "Quality of Service is only useful if there is congestion" is correct only under this Part's definition of congestion, but is false if the term "congestion" is used according to the network providers' definition.
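The definitions in Box 6 can be illustrated numerically. The following sketch (with hypothetical figures chosen purely for illustration, not drawn from this Part or its sources) computes each packet's end-to-end delay as the sum of the four components Box 6 describes, and measures jitter as the spread in those delays:

```python
# Hypothetical per-packet delay components (in milliseconds) along one path,
# following Box 6: processing, queuing, transmission, and propagation delay.
# Only the queuing component varies here, reflecting varying congestion.
packets = [
    {"processing": 1.0, "queuing": 0.0,  "transmission": 0.5, "propagation": 20.0},
    {"processing": 1.0, "queuing": 15.0, "transmission": 0.5, "propagation": 20.0},
    {"processing": 1.0, "queuing": 2.0,  "transmission": 0.5, "propagation": 20.0},
]

# End-to-end delay (latency) is the sum of the four components.
delays = [sum(p.values()) for p in packets]

# Jitter is the variation in end-to-end delay across the flow; a simple
# proxy is the spread between the slowest and the fastest packet.
jitter = max(delays) - min(delays)

print(delays)  # [21.5, 36.5, 23.5]
print(jitter)  # 15.0 ms of jitter, caused entirely by queuing delay
```

If every queue along the path were empty, the queuing terms would drop to zero, every packet's delay would fall to the 21.5 ms floor, and jitter would vanish, which is Box 6's point that congestion is what drives both added delay and delay variation.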
In a network where average utilization is high, congestion will occur often and for extended periods of time. During periods of extended congestion, QoS-sensitive applications may become effectively unusable with best-effort service and may require a different class of service to function satisfactorily. (136) In such a network, users may find Quality of Service very valuable and may be very willing to pay for it. (137)
Adding capacity to reduce average utilization will reduce the amount of congestion. If average utilization is low, congestion will tend to occur less often and may cause less loss or delay. But even a network with low average utilization will experience occasional congestion. (138) For a number of reasons, queues will form temporarily even when average utilization is low, and if the resulting increase in delay, jitter, or loss exceeds the amount that a QoS-sensitive application can compensate for, the performance of that application will suffer. (139) (See Box 8: Causes of Congestion in a Network with Low Average Utilization.)
BOX 8 CAUSES OF CONGESTION IN A NETWORK WITH LOW AVERAGE UTILIZATION Congestion will occur even in a network with low average utilization. For a number of reasons, queues will form temporarily, creating congestion, even when average network utilization is low. Many Internet applications are bursty: their peak rate is much higher than their average rate. (140) Under these circumstances, focusing only on average utilization is misleading. The capacity of the links along a bursty application's path may be more than sufficient to transmit data at that application's average rate without delay. But if the application's peak rate is higher than a link's available capacity, the application will temporarily send data faster than the link can transmit, filling up the link's queue until the burst subsides. More generally, whether a specific link gets congested at a specific point in time depends on whether the actual data rates of the various applications sharing the link at that moment exceed the link's capacity, not on the average data rates of these applications. On today's Internet, bursty applications create challenges for interactive applications. For example, applications such as web browsing or streaming video send short bursts of data packets that may temporarily fill queues; when the burst ends, the queues drain quickly. This rapid building up and emptying of queues not only increases the delay experienced by other applications that are transferring data over the same link at the same time, but also increases jitter. The increase in jitter and delay harms applications such as interactive voice and video applications or online gaming applications that need low jitter or delay. (141) Recent changes to transport protocols (142) and operating systems (143) have increased the amount of data a single Transmission Control Protocol (TCP) connection may send, which increases the potential peak rate at which bursts may occur. 
In addition, today's browsers transmit data over several parallel transport-layer connections simultaneously, creating even larger bursts of data that can easily fill up a link's queue. (144) Applications that upload or download a lot of data using TCP (e.g., for uploading a video to YouTube, sending or receiving e-mails with large attachments, or backing up data to the cloud) pose challenges of a different kind. They create long-lived data flows that cause standing queues in routers for the duration of the flow, which increases delay for other applications trying to transfer data at the same time. (145) Moreover, TCP is designed to increase its transmission rate until it uses all available bandwidth and to reduce its transmission rate when it detects congestion. Thus, as long as the amount of data to be sent by an application is sufficiently large, TCP by design creates instantaneous congestion, even in a well-provisioned network. (146)
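Box 8's central point, that a link can be uncongested on average yet congested at particular moments, can be sketched with a toy queue simulation (all rates and burst sizes are hypothetical):

```python
# A bursty sender on a link: the average rate is well below link capacity,
# but bursts still build a queue, creating temporary congestion.
LINK_CAPACITY = 10  # packets the link can transmit per time slot (hypothetical)

# Two bursts of 40 packets separated by long idle periods.
arrivals = [0] * 8 + [40] + [0] * 8 + [40] + [0] * 8

queue = 0
max_queue = 0
for a in arrivals:
    queue += a                              # arriving packets join the queue
    queue = max(queue - LINK_CAPACITY, 0)   # link drains up to capacity each slot
    max_queue = max(max_queue, queue)

avg_rate = sum(arrivals) / len(arrivals)
print(round(avg_rate, 1))  # 3.1 -- average utilization is well under capacity
print(max_queue)           # 30 -- yet the queue peaks during each burst
```

A packet arriving mid-burst waits behind up to 30 queued packets, so its delay (and the flow's jitter) spikes exactly as described above, even though a measurement of average utilization would suggest an uncongested link.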
While many users may be willing to tolerate the temporary lower performance associated with occasional congestion, some users may value more reliable performance. Many users use Skype even though the quality of the call often varies over the duration of the call and calls break up occasionally. While Skype's quality will often be good enough for them, at least some of these users (or users who are not using Skype in the current Internet because Skype's performance is not good enough for them) may value (and be willing to pay for) the option of using a different class of service that would allow them to get reliably good or even excellent call quality for selected Skype calls. Hearing-impaired users who rely on sign language to communicate may value perfect picture quality in video telephony more than "normal" users. A traveler on a business trip may be willing to tolerate occasional glitches and breakups in the video chat when saying good night to her children at home, but may need high-quality, predictable performance when using the same application to give a talk at a conference. (147) Thus, the absence of classes of service that provide more reliable (or potentially better) performance than best-effort service may hurt users who would value being able to take advantage of them when needed.
In addition, allowing Quality of Service may enable the development of new applications that cannot function in today's public Internet since they have requirements that a best-effort network cannot support. For example, a best-effort network cannot provide any guarantees with respect to throughput, jitter, or delay, making it impossible to support applications that strictly need guaranteed throughput, jitter, or delay. (148) More generally, there may be applications that would benefit from the availability of services other than best-effort. Thus, it is at least possible that a total ban on Quality of Service may reduce innovation in QoS-sensitive applications, harming users who would have benefited from these applications. (149) In conversations, proponents of a ban on Quality of Service often reject this argument as hypothetical. They would like to see compelling examples of applications that require Quality of Service before they are willing to consider the possibility that Quality of Service may foster application innovation. (150) Economic theory and the history of general-purpose technologies suggest, however, that it is usually not possible to predict in advance how a general-purpose technology will be used and which potential uses will be successful. (151) Throughout the history of the Internet, most Internet applications that later became highly successful either were not envisaged by the designers of the network or were met by widespread skepticism when they first became available. This was true, for example, for e-mail, the World Wide Web, eBay, and search engines. (152) Thus, just because we cannot imagine socially beneficial applications that require Quality of Service does not mean that such applications do not exist.
Instead, the history of the Internet suggests that when a large, diverse group of innovators is allowed to innovate under the right conditions, (153) the innovators will find ways to use the Internet's functionality that those who originally designed that functionality had not necessarily thought of, and at least some of the resulting applications or uses will create significant social value. (154)
Finally, in situations in which a user's desire for bandwidth exceeds the amount of bandwidth available to him (for example, because the size of the access link is limited or the network provider limits the amount of bandwidth available to individual subscribers during peak times when average network utilization is high), allowing certain forms of Quality of Service may enable users to use that limited amount of bandwidth more efficiently. (155)
Network providers could reduce the likelihood of congestion even further by increasing capacity so that "the capacity of individual links is significantly larger than the peak average traffic of all users." (156) This solution is called "overprovisioning." (157) Provisioning links significantly above the peak average traffic of all users of the link requires considerably more capacity (and will be considerably more expensive) than ensuring low average utilization. For example, in 2006, representatives of the research network Internet2 suggested that overprovisioning residential access networks, or, as they described it, providing the "overabundance of bandwidth... [that] ensure[s] that the odds of network congestion are minimized," would require offering a 1 gigabit per second connection to residential users (where 1 gigabit per second equals 1000 megabits per second (Mbps)). (158) Since then, the demands and capabilities of end devices and applications have evolved rapidly, so the capacity required to overprovision access networks today will most likely be higher. For example, a single TCP connection on a personal computer can send data at a rate of hundreds of megabits per second, so a single user could easily create peak rates of more than a gigabit per second by opening several TCP connections simultaneously. (159) Moreover, TCP is designed to use all available bandwidth. As long as it has data to send, TCP speeds up until it detects congestion, so any network over which TCP is used will always experience some temporary congestion. (160) Finally, even in an overprovisioned network, data may travel from faster to slower links, coinciding data transfers may temporarily exceed the capacity of a link, or unexpected spikes in demand may exhaust a link's capacity, all of which create congestion as well. (161) Thus, while overprovisioning will further reduce the probability of congestion, it cannot eliminate it. 
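The arithmetic behind this point can be made explicit. The figures below are hypothetical, but in the spirit of the text's example of a single user with several parallel TCP connections:

```python
# Hypothetical illustration of why even a 1 Gbps access link may not be
# truly overprovisioned for a single user: a few parallel TCP connections,
# each at rates a modern PC can sustain, exceed the link's capacity.
link_capacity_mbps = 1000      # 1 gigabit per second = 1000 Mbps
per_connection_mbps = 300      # one fast TCP connection (hypothetical rate)
parallel_connections = 4       # e.g., a browser opening several connections

peak_demand_mbps = per_connection_mbps * parallel_connections
print(peak_demand_mbps)                        # 1200
print(peak_demand_mbps > link_capacity_mbps)   # True: the burst exceeds the link
```

Whenever peak demand exceeds the link's capacity, even briefly, a queue forms and the link is congested in this Part's sense, regardless of how generously the link is provisioned relative to average traffic.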
(162) Due to the low likelihood of congestion, a network that is truly overprovisioned will probably be able to support most QoS-sensitive applications most of the time. But even in such a network, Quality of Service may still be useful as "insurance" against the residual risk of congestion. (163)
In sum, the value of Quality of Service is not restricted to networks with high average utilization, which are often congested. While Quality of Service is only useful if there is congestion (i.e., if queues build up in routers), increasing capacity does not necessarily prevent congestion, and Quality of Service may therefore be useful in networks with more capacity as well. In networks that have low average utilization, but are not overprovisioned, Quality of Service may give users the option to improve the performance of existing applications by using classes of service that provide more reliable or potentially better performance than best-effort service if congestion occurs. (164) Quality of Service may also enable new applications that we have not yet thought of that cannot function in a best-effort Internet or that would benefit from classes of service other than best-effort. And it may allow users whose bandwidth is limited to use that limited amount of bandwidth more efficiently. While the relative value of Quality of Service is likely to decline as a network's capacity approaches the capacity required for overprovisioning, Quality of Service may provide benefits even in overprovisioned networks by allowing users to protect selected applications against the residual risk of congestion. Thus, banning Quality of Service has social costs, and these costs exist over a wide range of network capacities.
While some proponents of banning all forms of Quality of Service argue that the costs of a ban are negligible since the needs of QoS-sensitive applications can be met by increasing capacity, some supporters of a ban make a stronger claim: According to them, banning Quality of Service does not have social costs because overprovisioning is economically and technologically more efficient than offering Quality of Service, so banning Quality of Service only prohibits a technical solution that is less efficient anyway. (165) Quality of Service makes the network more complex and is more difficult to manage than a single best-effort service. Network engineers have debated for years whether the benefits of Quality of Service outweigh the added complexity and cost, or whether overprovisioning is more efficient. (166) After developing and successfully testing Quality of Service technology in the research network Internet2 for several years, Internet2 researchers suspended the effort indefinitely. (167) While they acknowledged that being able to protect important applications against the risk of congestion is valuable even in an overprovisioned network, they concluded that "the costs... are too high relative to the perceived benefits" and that overprovisioning was the more efficient solution. (168) In congressional testimony and elsewhere, representatives of Internet2 have used this experience to argue in favor of network neutrality rules that ban Quality of Service. (169)
While introducing Quality of Service creates costs, overprovisioning--which requires considerably more capacity than that needed to ensure low average utilization--is not costless, either. Routers' processing power, the administrative costs of deploying and managing Quality of Service technology, and the costs of deploying additional capacity may differ across different types of networks and may change over time. For example, backbones may be easier to overprovision than access networks because they can take advantage of statistical aggregation. Overprovisioning research networks whose users are already attached to high-speed campus networks may be less costly than overprovisioning residential access networks. The complexity and costs of deploying and running Quality of Service may be lower in enterprise networks, where the same entity controls all parts of the network infrastructure (including the end hosts), than in multiprovider networks. (170) Today, many corporate intranets use Quality of Service; large Internet service providers give business customers the option of buying different classes of service. (171) Thus, whether overprovisioning is more efficient than introducing Quality of Service may differ depending on the circumstances and may change over time.
The debate over the relative costs and benefits of overprovisioning and Quality of Service is an important one that is worth having. But whatever the merits of this debate from a technical perspective, arguments over the relative cost efficiency of alternative technical solutions should be irrelevant for the regulatory debate over network neutrality rules.
Rather, in the context of the network neutrality debate, the only relevant question is whether banning Quality of Service is necessary to protect the values that network neutrality rules are designed to protect. If the restrictions are not necessary to protect these values, they should not be imposed. (172) By contrast, whether introducing Quality of Service makes sense from a technical or business perspective is a question that should be left to network engineers and network providers. (173) If regulators adopt nondiscrimination rules that allow certain forms of Quality of Service, they do not pick winners and losers in this debate. Such nondiscrimination rules do not require network providers to introduce Quality of Service; they only allow them to do so within the constraints imposed by the rules. If network providers decide that overprovisioning offers a better cost-benefit trade-off than offering Quality of Service in line with the rules, they are free to go down that route.
In sum, while allowing Quality of Service may indeed harm competition among applications or investment in the network, these concerns can be mitigated without totally banning Quality of Service. Different forms of Quality of Service have different social benefits and social costs, so a more nuanced treatment than an all-or-nothing approach is needed. While the value of Quality of Service may decline as network capacity increases, Quality of Service may be useful over a wide range of network capacities, not just in networks with high average utilization. In networks that have low average utilization without being overprovisioned, Quality of Service may allow users to improve the performance of existing applications, enable new applications that benefit from the availability of different classes of service, and enable users whose bandwidth is limited to use that bandwidth more efficiently. Ensuring low average utilization requires considerably less capacity than overprovisioning, so many networks may belong to the category just described. In an overprovisioned network, Quality of Service offers users the option of protecting applications against the residual risk of congestion. Thus, at least some forms of Quality of Service may provide social benefits over a wide range of network capacity. At the same time, the social costs of offering Quality of Service can be limited through appropriate rules. Under these circumstances, requiring network providers to treat every packet the same would be too restrictive, constraining the evolution of the network more than absolutely necessary to protect the values that network neutrality is designed to protect.
C. Case-by-Case Approaches
A second set of approaches would determine case by case whether discriminatory behavior that falls short of blocking should be forbidden. Proponents of these approaches recognize that some forms of differential treatment will be socially harmful, while others will be socially beneficial. As a result, they reject a blanket ban on discrimination as overinclusive. At the same time, they doubt that it is possible to distinguish socially beneficial from socially harmful differential treatment in advance. According to them, this determination is best made ex post, when the facts that will allow an accurate assessment of the practice, such as the motivations for and impact of the practice, are known. (174) To support their proposals, they point to the example of antitrust law, which evaluates behavior that may be anticompetitive or procompetitive depending on the circumstances after the fact on a case-by-case basis. (175)
Approaches in this group differ along two dimensions (176): the degree to which they prescribe the standard that regulators should use to assess specific discriminatory behavior, and the extent to which the approaches are able to capture the instances of discrimination that threaten the values that network neutrality rules are designed to protect. Taken together, these two characteristics determine how likely it is that an actor who encounters discrimination that network neutrality proponents would classify as harmful will prevail in the future.
Approaches at one end of the spectrum specify the standard for separating socially harmful from socially beneficial discrimination, but the standard would not capture many instances of discrimination that threaten the values that network neutrality rules are intended to protect, classifying them as socially beneficial. Thus, these approaches would often make it impossible to successfully challenge behavior that network neutrality proponents would view as harmful. Proposals that suggest using an antitrust framework, discussed in Part II.C.1, are an example of this type of approach.
Approaches at the other end of the spectrum do not specify the standard at all. As a result, the proposed rule is consistent with interpretations that capture all relevant (from the perspective of network neutrality proponents) instances of discrimination and with interpretations that do not. Thus, under such a rule it is at least possible, but not certain, that a challenge to behavior that network neutrality proponents deem harmful will be successful. The draft Open Internet Rules circulated by FCC Chairman Genachowski in early December 2010 are an example of this type of approach. They banned "unreasonable discrimination," without specifying how this term should be interpreted, as discussed in Part II.C.3.
In all case-by-case approaches, whether certain discriminatory conduct violates the nondiscrimination rule is determined in future case-by-case adjudications. As Part II.C.4 shows, this creates considerable social costs. Rules in this category provide little certainty to the market, result in high costs of regulation, and tilt the playing field against those who do not have the resources to engage in long and costly regulatory proceedings. They are also unlikely to lead to decisions that adequately protect the values network neutrality rules are intended to protect. In spite of these costs, the strategic incentives of policymakers and big stakeholders are aligned in favor of such approaches, so it is not surprising that many negotiated compromise proposals favor this type of approach.
1. Ban discrimination that violates an antitrust framework
The first set of proposals in this group suggests using an antitrust framework to distinguish socially beneficial from socially harmful discrimination. (177) These proposals interpret the concerns raised by proponents of network neutrality regulation as concerns about anticompetitive vertical leveraging or vertical foreclosure (178) and apply the framework used to evaluate vertical leveraging and vertical foreclosure claims under U.S. antitrust laws to determine whether discriminatory conduct should be banned. (179) The term "vertical leveraging" describes a situation in which a firm that has a monopoly in one market--here, a provider of Internet access service--"abuses" or "leverages" its market power in the first market to obtain an unfair (180) advantage in a second, vertically related market--for example, in the market for a specific application. The term "vertical foreclosure" applies to situations in which a monopolist in a primary market--that is, a provider of Internet access service--uses its market power in the first market to deny firms in a second, vertically related market--that is, the market for a specific application--access to that second market. (182) Over the years, the views of U.S. antitrust scholars and courts towards these practices have evolved considerably. Today, U.S. antitrust law condemns vertical leveraging or vertical foreclosure only if the exclusionary conduct meets the criteria of section 2 of the Sherman Act, which prohibits monopolization or attempts to monopolize. (183)
This standard does not capture all instances of discrimination that threaten the values that network neutrality rules are designed to protect. Challenges to discriminatory behavior that network neutrality proponents deem socially harmful may fail for one of four reasons.
First, U.S. antitrust law only condemns a network provider's discriminatory behavior that affects the market for a specific application, content, or service if the network provider participates in that market or is affiliated with a participant in that market. As Phillip Areeda and Herbert Hovenkamp's antitrust treatise explains,
Even the most expansive formulations of 'leveraging' ... limit the concept to situations where the defendant [i.e., the primary good monopolist] actually does or intends to do business in the secondary market. Mere injury to firms in a vertically related market in which the defendant does not operate cannot be leveraging, for nothing is being leveraged. (184)
By contrast, network neutrality proponents are also concerned about discrimination in application markets in which the network provider does not participate. For example, network providers may have an incentive to block unwanted content that threatens the company's interests or does not comply with the network provider's chosen content policy. This incentive is independent of whether the network provider operates in the market for the affected content. In the examples of content-based discrimination that are often mentioned in the debate (e.g., TELUS/Voices for Change, Verizon Wireless/NARAL ProChoice America, and Apple/iSinglePayer, discussed below in Box 9: Examples of Content-Based Discrimination), none of the content providers whose content was blocked was competing with the network provider. Similarly, a network provider may have an incentive to exclude or slow down selected bandwidth-intensive applications to manage bandwidth on its network, even if the network provider does not offer a competing application itself. (185) In these cases, the resulting harm--users' inability to participate in social, cultural, or democratic discourse related to the blocked content, their inability to use the Internet in the way that is most valuable to them, or application developers' difficulty in obtaining funding for an application--is caused by the discriminatory behavior as such and is independent of whether the network provider is active in the market or not.
BOX 9
EXAMPLES OF CONTENT-BASED DISCRIMINATION (186)
In 2005, TELUS, Canada's second-largest Internet service provider, blocked access to a website that was run by a member of the Telecommunications Workers Union. At the time, TELUS and the union were engaged in a contentious labor dispute, and the website allowed union members to discuss strategies during the strike. In 2007, Verizon Wireless rejected a request by NARAL Pro-Choice America, an abortion rights group, to let them send text messages over Verizon Wireless's network using a five-digit short code. In the same year, AT&T deleted words from a webcast of a Pearl Jam concert in which the singer criticized President George W. Bush. In 2009, Apple rejected an application called iSinglePayer that advocated for a single-payer health insurance system as "politically charged." Verizon Wireless, AT&T, and Apple all argued that the rejected or deleted content violated their content policies. They later changed their view after the incidents were widely reported. While the latter three examples are not direct examples of Internet service providers restricting content on their networks (Verizon Wireless restricted a service on its wireless mobile network, not the wireless Internet; AT&T acted in its role as a content provider, not as an Internet service provider; and Apple acted as the provider of the Apple App Store), it is easy to imagine virtually identical incidents in which an Internet service provider enacts a content policy and restricts content on its network accordingly. (187)
Second, U.S. antitrust law only condemns vertical leveraging or vertical foreclosure as monopolization or attempted monopolization under section 2 of the Sherman Act if the monopolist is reasonably capable of monopolizing the primary market or the secondary market. (188) Thus, to be classified as socially harmful under an antitrust framework, a network provider's discriminatory behavior in the market for a specific application must be reasonably capable of creating, increasing, or maintaining monopoly power in the market for that application or in the market for Internet access services. (189) By contrast, network neutrality proponents may classify discriminatory behavior as socially harmful even if the behavior is unlikely to monopolize the application market or the market for Internet access services.
U.S. antitrust law generally only condemns exclusionary conduct if there is a reasonable likelihood that the behavior will harm competition, not just competitors, by worsening the structure or performance of the affected market. (190) In the case of section 2 of the Sherman Act, the behavior must be reasonably capable of creating, increasing, or maintaining a monopoly or of producing the higher prices or lower output or quality that attend monopoly. A firm's exclusionary behavior that just harms one or more competitors (e.g., by enlarging that firm's market share at the expense of its competitors) without creating or sufficiently threatening the higher prices or lower output or quality associated with monopoly is outside the scope of section 2 of the Sherman Act. (191) Thus, to be condemned as socially harmful under an antitrust framework, a network provider's discriminatory conduct in the market for a specific application would have to drive affected applications from the market for that application, prevent new entry into an application market that the network provider has already monopolized, or impair the application provider's ability to compete effectively by forcing it to operate at a less efficient scale.
This requirement may be difficult to meet (192): In many cases, the market for the application that is being discriminated against will be national in scope, while the network provider's customers only make up a part of the nation's Internet access customers. (193) For example, in the United States, the four largest broadband Internet access providers currently serve 25% (Comcast), 19% (AT&T), 14% (Time Warner), and 11% (Verizon) of the nation's broadband Internet access customers. (194) Whether a network provider's discriminatory behavior will be capable of driving the application from the market or preventing the application provider from reaching its minimum efficient scale in a way that unreasonably restrains the application's ability to compete effectively depends on (1) the number of foreclosed Internet access customers relative to the overall number of Internet access customers, (2) the size of any economies of scale in the market for the application, and (3) the size of the cost disadvantage associated with operating at a less than efficient scale. (195) While many Internet applications are subject to significant economies of scale due to large fixed costs and low marginal costs or due to network effects, (196) exclusion from access to one Internet service provider's customers may not create the type of anticompetitive harm that antitrust law is concerned about. (197) In such a case, an antitrust framework would not classify the exclusionary conduct as socially harmful.
By contrast, network neutrality proponents may classify behavior as socially harmful even if it is unlikely to monopolize the market for the affected application. In the Internet context, discrimination will often be profitable even if it does not monopolize the market for the application in question. (198) While the resulting harm may be irrelevant for antitrust law, network neutrality proposals are driven by concerns about a broader range of harms than the specific type of "harm to competition" that antitrust law is concerned with. (199) For example, exclusion allows the network provider, not the users, to choose which applications will be successful on its network. This not only distorts competition among applications on the network provider's network, but also removes an important part of the mechanism that creates innovation under uncertainty, reducing the quality of application innovation. (200) The threat of future discrimination will often reduce the incentives existing and future application providers have to innovate (not just those of the application provider that is being discriminated against) and will make it more difficult for them to get funding. (201) The resulting decline in the amount and quality of application innovation limits the Internet's value for users and its ability to contribute to economic growth. (202) Discrimination not only deprives all Internet users of the value of future applications that would have been developed but for the threat of discrimination, but also harms the network provider's Internet access customers who cannot use the application that is being discriminated against. For applications through which users interact with others (for example, Internet telephony or online gaming), the exclusion also harms other network providers' Internet access customers by preventing them from using the application to interact with users whose Internet access provider is blocking the application. 
Finally, exclusion may impair the Internet's ability to improve democratic discourse, to facilitate political organization and action, or to provide a decentralized environment for social and cultural interaction in which anyone can participate. (203) All of these harms arise even if the behavior is unlikely to monopolize the market for the application in question.
Third, U.S. antitrust law usually has very stringent requirements about the degree of market power in the primary market that is required for vertical exclusionary conduct to be considered problematic. (204) By contrast, network neutrality proponents are also concerned about a network provider's discriminatory behavior if that network provider does not have a dominant position in the local or nationwide market for Internet services. (205)
Fourth, under an antitrust framework, discriminatory conduct that is justified by a legitimate business purpose would be classified as socially beneficial. (206) While those who propose using an antitrust framework to distinguish between socially beneficial and socially harmful discrimination do not explain this criterion in detail, they seem to agree that conduct that is designed to increase the network provider's private efficiency should not be considered socially harmful. (207) For example, most proponents of an antitrust framework seem to assume that any discriminatory conduct that is adopted to manage congestion is procompetitive and should be considered socially beneficial discrimination. (208) Price discrimination that is designed to recover fixed costs of network infrastructure or network innovation is often mentioned as another example of a business justification that may legitimize discriminatory conduct. (209) For those who would evaluate discriminatory conduct by network providers under an antitrust framework, the existence of an efficiency rationale ends the inquiry. The efficiencies created by the conduct do not need to outweigh any harm to competition. Nor does it matter whether there is a less restrictive alternative that might reach the same goal with less harm to competition. (210)
By contrast, network neutrality proponents often classify discriminatory behavior as socially harmful even if it is motivated by the network provider's desire to increase its own efficiency. (211) Thus, the existence of a private efficiency rationale does not automatically legitimize the behavior.
Network neutrality proponents evaluate discriminatory conduct based on its social costs and benefits. Network providers make decisions based on the conduct's private costs and benefits. As I have explained elsewhere, these decisions often diverge. (212) From the perspective of network neutrality proponents, this divergence between the public's interests and the network providers' private interests is a key justification for regulatory intervention. According to them, network neutrality regulation is needed precisely because what is privately efficient for network providers is not necessarily socially efficient. Under these circumstances, the fact that certain behavior is privately efficient for the network provider cannot automatically excuse the behavior. (213)
The social costs of discriminatory conduct are created by the conduct as such; they do not change depending on the network provider's motivation. If an application is being blocked, it cannot reach its customers. Users will be unable to use it, and the application developer and his investors will be unable to reap its benefits, whether the network provider is blocking the application to manage congestion or to exclude a competitor. Thus, the social harm--the reduction in application developers' incentives to innovate and in investors' willingness to invest, and users' inability to use the Internet in the way that is most valuable to them or participate in social, cultural, or democratic discourse related to blocked content--is caused by the blocking as such, not by the motivations that are driving it.
Finally, the possibility that discriminatory behavior may increase efficiency by, for example, reducing costs or increasing performance, has already been factored into the fundamental trade-off underlying calls for network neutrality regulation. (214) From the perspective of network neutrality proponents, the loss of certain short-term efficiencies from discriminatory behavior is a social cost of network neutrality rules. It is, however, the price of a system that can evolve and will remain open to new applications in the future. In other words, network neutrality rules are based on the assessment that the social benefits associated with network neutrality rules are more important than the social costs, including the loss of short-term efficiencies. Since short-term efficiency gains have already been considered and rejected as a justification for discriminatory behavior on a general basis in the fundamental trade-off underlying network neutrality regulation, the fact that certain discriminatory conduct increases a network provider's efficiency cannot automatically justify individual instances of discriminatory behavior when they occur. After all, if legislators or regulators had deemed the loss of short-term efficiencies more important than the social benefits associated with an open, nondiscriminatory Internet, they would not have adopted network neutrality rules in the first place.
None of this means that proponents of network neutrality will never allow discriminatory conduct that is motivated by considerations of private efficiency. For example, there are circumstances in which discriminatory network management may be justified. For network neutrality proponents, however, the insight that the discriminatory conduct is designed to address a network management problem is only the beginning, not the end, of the inquiry. (215) As a result, discriminatory conduct may be considered socially harmful by proponents of network neutrality even if it is supported by a legitimate business justification and would therefore be allowed under an antitrust framework. (216)
In sum, a nondiscrimination rule based on an antitrust framework will not prohibit all instances of discrimination that threaten the values that network neutrality rules are designed to protect and should therefore be rejected. (217)
2. Ban discrimination that is anticompetitive or harms users
Other proposals would ban discrimination that is "anticompetitive" or "harms users." The proposed nondiscrimination rule may define certain behaviors as presumptively allowed or not allowed. For example, user-controlled prioritization may be presumptively legal, and application-provider-paid prioritization may be presumptively illegal. Whether a specific discriminatory behavior is anticompetitive or harms users and whether the presumptions should apply would be decided by the regulatory agency in case-by-case adjudications.
The proposal for a legislative framework on network neutrality put forward by Google and Verizon in August 2010 constitutes an example of such a rule. It prohibited "undue discrimination... that causes meaningful harm to competition or to users," and included the rebuttable presumption that "[p]rioritization of Internet traffic would be presumed inconsistent with the non-discrimination standard." (It included, however, an exception for reasonable network management that allowed network providers "to prioritize general classes or types of Internet traffic, based on latency.") (219) The FCC-led industry negotiations in the summer of 2010 seem to have focused on a nondiscrimination rule of this type as well. (220)
These proposals are less specific and more ambiguous than proposals based on an antitrust framework. They use criteria that are open to interpretation without indicating which theories of harm should drive the interpretation. Instead, this decision would be made by the agency in the context of a specific adjudication. Compared to an antitrust framework, which would immediately rule out many of the cases that threaten the values that network neutrality rules are designed to protect, these proposals could capture more of these cases under some but not all possible interpretations of the rule.
For proponents of a narrow scope of network neutrality rules, terms like "anticompetitive" or "harm to competition" are meant to evoke the standards used in antitrust analysis, where behavior is only anticompetitive if it harms competition, not just a competitor. (221) As explained above, antitrust standards would prohibit only a subset of cases that network neutrality proponents would classify as socially harmful. Under this narrow interpretation, exact outcomes would vary depending on whether the terms "anticompetitive" or "harm to competition" were used to import the full antitrust framework outlined above or only parts of that framework.
By contrast, proponents of network neutrality use terms like "anticompetitive" or "harm to competition" in a looser sense that is not tied to antitrust law. To them, any discriminatory behavior that singles out specific applications or classes of applications for differential treatment distorts competition among applications or classes of applications. This harms the competitive process, and thereby competition, by making it impossible for all applications to compete on a level playing field, without interference from network providers. It is unclear how far such an interpretation would go, but it would capture more of the cases that threaten the values that network neutrality rules are intended to protect than an interpretation based on antitrust law would, if not all of them. (222)
From the perspective of network neutrality proponents, the term "harm to users" resonates with the notion that network neutrality is designed to safeguard users' ability to use the applications of their choice and to access and distribute the content of their choice without interference from network providers. There is, however, considerable uncertainty regarding the interpretation of this term. Depending on how the term is interpreted, it could capture fewer instances of discrimination than network neutrality proponents would find justified.
Consider the example of Comcast's blocking of BitTorrent. Proponents of network neutrality usually agree that singling out specific applications to manage bandwidth on a network is not an acceptable form of discrimination or "reasonable network management" as long as other, application-agnostic ways of managing the network are available. (223)
An application of the rule to this case immediately raises a number of questions:
First, who is a user? Singling out a specific application to manage bandwidth on a network harms the network provider's Internet access customers who want to use the application as well as the provider of the application. It is unclear, however, whether the term "harm to users" refers only to end users or also to application and content providers.
Second, how do regulators determine whether users are harmed? Do they focus on the individual user who cannot use the Internet as she would like, or do they focus on users as a group, similar to the way antitrust law defines harm to consumers when evaluating whether certain conduct is anticompetitive? For example, a network provider may argue that slowing down peer-to-peer file sharing, while harming the file-sharing users and the provider of the file-sharing software, is done only to protect the Internet experience of all the other, non-file-sharing users. (224)
Third, does it matter that there are alternative, nondiscriminatory ways of managing the network that are not similarly harmful to the users and the providers of the file-sharing software yet maintain the quality of the Internet experience for the non-file-sharing users? Network neutrality proponents usually allow discriminatory network management only if the problem cannot be solved in a nondiscriminatory way, (225) but it is unclear whether a regulatory agency would read this requirement into the term "harm to users."
Finally, individual filmmakers often use peer-to-peer file-sharing applications to inexpensively distribute their creative works, as we know from the Canadian proceeding that reviewed the Internet traffic management practices of Internet service providers. (226) Nonprofits can use peer-to-peer file sharing to distribute their video contributions to political debates. (227) Thus, peer-to-peer file-sharing applications help foster a more decentralized environment for democratic discourse and cultural production in which anybody can participate. (228) Network neutrality proponents factor the loss of these societal benefits into their evaluation of discriminatory behavior, but it is unclear whether the term "harm to users" would permit this type of consideration.
In sum, while seemingly more specific, the rule's substantive criteria are open to interpretation and do not necessarily capture the behavior that concerns network neutrality proponents. Unlike under a nondiscrimination rule based on an antitrust framework, however, it is at least possible that challenges to discriminatory conduct that proponents of network neutrality consider harmful will succeed.
3. Ban discrimination that is unreasonable
A final set of approaches does not specify the criteria to be used in separating socially beneficial from socially harmful discrimination beyond very general terms. For example, the draft Open Internet Rules circulated by the FCC Chairman in early December 2010 banned "unreasonable" discrimination by providers of wireline broadband Internet access without specifying how the term should be interpreted. (229) The Chairman's proposal was based on a compromise bill that had been negotiated by the Chairman of the House Committee on Energy and Commerce, Representative Henry A. Waxman, and the Chairman of the House Subcommittee on Communications, Technology and the Internet, Representative Rick Boucher, with the large phone and cable network providers, Internet companies, consumer groups, and open Internet groups in the fall of 2010. (230) The bill would have banned network providers from "unjustly or unreasonably discriminat[ing] in transmitting lawful traffic over a consumer's wireline broadband Internet access service." (231)
This type of rule leaves all substantive decisions about the legality of discrimination to the regulatory agency in future case-by-case adjudications, providing future decisionmakers with maximum flexibility. Unlike nondiscrimination rules based on an antitrust framework, this type of proposal does not immediately rule out cases that concern network neutrality proponents; it makes it at least possible, though not certain, that a complaint targeting behavior that network neutrality proponents deem socially harmful will succeed.
4. Problems with case-by-case adjudication
All of the proposals in this Subpart leave the substantive decision over the legality of specific discriminatory behavior to future case-by-case adjudications. The most general proposals ban "unreasonable discrimination" but do not provide any guidance on how to distinguish socially beneficial from socially harmful discrimination, leaving both the development of substantive criteria and their application to the specific behavior under consideration to future decisionmakers. While proposals that prohibit discrimination that "causes meaningful harm to competition or to users" seem more specific, they are afflicted with the same problem. The outcome of any adjudication depends entirely on how these ambiguous terms would be interpreted, with different interpretations leading to radically different outcomes. Other nondiscrimination rules evaluate discriminatory conduct after the fact using multiple factors without specifying how the factors relate to each other. Here, the outcome of specific adjudications depends not only on how future decisionmakers interpret and apply those factors, but also on how they weigh the different factors against each other. The nondiscrimination rule proposed by the FCC in its May 2014 Notice of Proposed Rulemaking is an example of such a rule. (232)
These kinds of case-by-case approaches create considerable social costs. (233)
a. Lack of certainty and predictability
First, case-by-case approaches fail to provide much-needed certainty for industry participants.
Under the proposals discussed above, network providers do not know which forms of network management are acceptable. For example, it is unclear whether and, if so, which forms of Quality of Service would be considered socially beneficial in future applications of the rule. It seems rather unlikely that network providers would make the investment needed to introduce Quality of Service in their Internet access networks if that investment could subsequently be rendered moot by a regulator declaring the practice socially harmful in response to a complaint. (234) By contrast, the more nuanced rules described below would clearly allow certain, though not all, forms of Quality of Service. Thus, under a case-by-case approach, network providers may refrain from deploying network technology that would have been clearly legal under one of the more nuanced rules discussed below. The resulting lack of evolution of the network infrastructure harms innovation in applications that need Quality of Service and deprives users of the benefits associated with the emergence of these applications.
More generally, some research and anecdotal evidence suggest that in the broadband context, certainty regarding the regulatory framework and its stability over time may be more important for network investment than the substance of the regulatory decision. (235)
In a network that can identify applications and control their execution, application developers who must decide whether to realize their innovative ideas and investors who consider funding them face the fundamental risk that the network may discriminate against the application at any time, which would reduce the affected application provider's ability to reap the benefits associated with her innovation. Thus, the threat of discrimination reduces application developers' incentives to innovate and their ability to get funding. (236) Network neutrality rules aim at mitigating that problem by providing application developers and their investors with certainty that they will not be discriminated against. A case-by-case approach falls short of this goal. Innovators and their investors will not know in advance if and against which network provider conduct they are protected because this decision will only be made after discriminatory conduct has occurred. If the application is discriminated against, its chances with users are harmed immediately, and this harm persists while the application provider goes through a long and costly process to reach a regulatory decision on the discriminatory behavior in question. In markets in which first-mover advantages are important, the temporary disadvantage may be sufficient to tip the competition against the affected application. Moreover, venture capitalists and other investors fund start-ups so that these companies can build their products and better meet the needs of their users. Paying lawyers and economists to clarify how to interpret an ambiguous nondiscrimination rule in order to allow the application to reach its customers is not how investors would like their funds to be used. 
Thus, this type of nondiscrimination rule does not sufficiently protect users and application developers against actual discrimination and fails to remove the threat of discrimination as a factor that affects application developers' and innovators' decisions about innovation. (237)
While individual adjudications may reduce the amount of uncertainty over time, it is unclear whether and how fast useful precedents will emerge.
Over time, individual adjudications may clarify the interpretation of the standard and its application to specific behavior, reducing uncertainty. (238) Whether future adjudications manage to reduce uncertainty in a meaningful way depends on a variety of factors: First, network providers need to be willing to engage in discriminatory conduct and take the risk of being faced with a complaint and having the behavior declared socially harmful. If network providers do not engage in a particular practice (e.g., if they do not deploy Quality of Service in their networks), there is no basis for a complaint, and the legality of the practice will never be determined. Second, contrary to a rule that clearly specifies which behavior is and is not allowed, an adjudicatory regime puts the burden on a particular party to bring a complaint that will allow the uncertainty to be resolved. Third, future adjudicators may not be any more willing than the current legislator or regulator to do more than absolutely necessary to resolve the case under consideration. Narrow decisions that are deliberately tied to the facts of the specific case and refuse to elaborate broader principles may not provide meaningful guidance for future cases. (239) Thus, it is unclear whether and how quickly useful precedents will emerge. In the meantime, the costs associated with the uncertainty persist. (240) Moreover, as set out in more detail below, the substantive principles emerging from case-by-case adjudications are less likely to adequately protect the values and actors that network neutrality rules are designed to protect.
b. High costs of regulation
Second, case-by-case approaches create high costs of regulation. (241) Each adjudication requires detailed investigations into the facts of the case and invites protracted and resource-intensive fights over the interpretation of the rule. Precedents established through adjudication may not necessarily be binding on other industry actors. (242) Their applicability may also be limited by the facts of the case. (243) As a result, subsequent cases may need to be fully adjudicated even if they are based on similar facts, with network providers arguing that the facts of their case differ from the precedent in relevant ways. For example, when the FCC ordered Comcast to stop interfering with BitTorrent and adopt application-agnostic ways of managing congestion, (244) the Commission based its decision on three different rationales: First, the specific practice used by Comcast--sending RST packets to terminate BitTorrent connections--was quite questionable and violated the Internet Engineering Task Force (IETF) standards for the operation of TCP. (245) Second, the discriminatory practice, which singled out BitTorrent and other peer-to-peer file-sharing applications for differential treatment, was not narrowly tailored to Comcast's stated goal of managing congestion. (246) Third, Comcast had not disclosed the use of the practice to its Internet access customers. (247) The order did not explain whether each of these factors alone would have made the network management "unreasonable" or whether the Commission's decision was based on the confluence of these factors, providing ample room for network providers to distinguish their case on the basis that their behavior violated only one, but not all, of the criteria used in the Comcast case. (248)
c. Limited ability to protect values and actors that network neutrality rules are designed to protect
Finally, in the context of network neutrality, case-by-case approaches are less likely than rule-based approaches to adequately protect the values and actors that network neutrality rules aim to protect.
Case-by-case approaches provide an advantage to well-financed actors and tilt the playing field against those--end users, low-cost application developers, start-ups, nonprofits, independent artists, and members of underserved communities--who do not have the resources necessary to engage in extended fights over the legality of specific instances of discrimination in the future. (249) Network providers and large application providers can conduct fact-intensive investigations, pay lawyers, economists, and other experts to engage in the fight over the correct interpretation and application of the rule at the regulatory agency and, later, in the courts, and employ lobbyists to organize support for their position in Congress or at the White House. End users, low-cost application developers, and start-ups lack these resources. Thus, adjudications will likely be systematically biased against their interests. They are, however, some of the key groups that network neutrality rules are intended to protect. (250)
Decisions in individual adjudications will often be driven by the specific facts of the case. A sympathetic party or a limited fact pattern that does not illuminate all relevant aspects of the underlying problem may distort the decisionmaker's view of the underlying policy issues in a way that a more general analysis of the issues in the context of a rulemaking proceeding may not. (251) For example, as in the FCC's investigation of Comcast's blocking of BitTorrent, debates over the reasonableness of network management practices arose first in the context of discriminatory treatment of peer-to-peer file-sharing applications. Most people know BitTorrent and other peer-to-peer file-sharing applications only as tools for illegal file sharing; they do not know that these applications have many legal and socially valuable uses. For example, at the time of Comcast's blocking of BitTorrent, established content providers such as the BBC, Showtime, the History Channel, MTV Networks, 20th Century Fox, and Paramount were distributing their video content online through services that utilized the BitTorrent protocol. (252) Developers of open source applications such as the Linux operating system or OpenOffice and game providers such as Blizzard Entertainment, the company behind World of Warcraft, employ peer-to-peer file-sharing applications to distribute their software or software updates. (253)
Peer-to-peer file-sharing applications foster a more decentralized environment for the creation and distribution of creative works by allowing independent filmmakers to sidestep traditional, more centralized distribution channels and distribute their films directly to the public. (254) Internet video applications based on peer-to-peer protocols like the Miro video player let a diverse set of actors distribute their videos on a wide range of subjects, providing an important outlet for free speech. (255) Still, based on the inaccurate perception that applications like BitTorrent are primarily used for illegal file sharing, regulators and members of Congress or the White House may be more reluctant to side with complaints against network management practices that single out these applications. After all, who wants to side with "pirates"?
More generally, the question at the core of the debate over reasonable congestion management--who should prioritize among competing uses at times when people most want to use the network--may receive more attention and a more balanced assessment in a general rulemaking than in an adjudication involving peer-to-peer file-sharing applications. Adjudications focused solely on peer-to-peer file-sharing applications foster the general perception that network providers engage in congestion management to protect socially valuable applications from the bandwidth demands of applications that have little social value, providing little reason to question network providers' role as benevolent stewards of the platform. By contrast, a more general analysis of network management practices would broaden the focus to include attempts to limit the use of other applications, for example of streaming video applications, during times of congestion. In 2009, for example, BT restricted the bandwidth available to the BBC iPlayer and other streaming video applications to 896 kilobits per second in a particular version of BT's broadband service. (256) Many people like to use streaming video applications like Hulu or Netflix in the evening, when the network is most congested. In North America, Netflix traffic now makes up thirty-four percent of downstream traffic on fixed broadband networks during peak times. (257) As a result, in a generalized rulemaking that also considers limits on applications other than peer-to-peer file-sharing applications, the sympathy of decisionmakers and observers will be more evenly distributed among restricted and unrestricted uses of the network. 
At the same time, streaming applications, which compete with network providers' traditional video offerings, bring the potential gap between network providers' and users' interests into sharp relief, (258) making the argument more convincing that users, not network providers, are in the best position to decide how the network should be used, whether there is congestion or not. For all these reasons, an individual adjudication focused on network management practices singling out peer-to-peer file sharing is more likely than a general rulemaking to result in a decision that grants network providers broad discretion in managing congestion. Moreover, the precedent set by the adjudicatory decision may make it more difficult to limit network providers' discretion when congestion management practices arise that target other uses of the network.
More generally, adjudicators who need to decide whether a certain discriminatory behavior should be allowed as part of an adjudication will be less likely to have access to the full set of relevant facts and arguments than public actors trying to distinguish socially beneficial from socially harmful discrimination as part of a rulemaking. (259) In contrast to rulemakings, adjudications are adversarial proceedings, with procedural rules that make it more difficult for other interested actors to participate. This limits the range of actors from which the adjudicator will receive input. (260) This is particularly problematic in the context of network neutrality rules, where any decision over the legality of discriminatory behavior is likely to have far-reaching implications for users, application providers, their investors, and network providers who are not directly subject to the discriminatory practice under consideration.
Moreover, network neutrality rules are designed to protect, among others, the interests of users as well as of current and future innovators and entrepreneurs. As large groups with diffuse interests, they face well-documented challenges in organizing and representing their interests, which makes it more difficult for them to participate and be heard in any type of legislative or regulatory proceeding. (261) Adversarial proceedings increase these challenges. (262) For example, entrepreneurs are often reluctant to speak out on network neutrality because they fear retaliation by network providers. (263) They may be even more reluctant to do so in the context of an adjudication that is directed against a specific network provider. Also, it may be easier to mobilize users and entrepreneurs once, in the context of a rulemaking, than again and again for individual adjudications. Users or entrepreneurs may not only find it difficult to understand how a specific adjudication may affect them; like public decisionmakers, they may also be subject to biases or intuitive reactions resulting from an adjudication's specific fact patterns. (264) For example, a user who does not use BitTorrent and does not engage in illegal file sharing may fail to grasp the importance of an adjudication focused on network management practices targeting peer-to-peer file sharing. Entrepreneurs offering streaming video applications that do not use peer-to-peer protocols may have the same reaction. For all these reasons, users and entrepreneurs may be less willing to get involved in specific adjudications than in a general rulemaking, depriving the decisionmaker of input from important stakeholders.
In addition, an ex ante regime is better suited than case-by-case adjudication to the consideration of the fundamental values at stake. Network neutrality rules are based on very general trade-offs among competing values. (265) Network neutrality rules foster application innovation, protect user choice, and preserve, among other things, the Internet's ability to foster democratic discourse, all of which create social value. They limit the evolution of the network's core to some extent, limit network providers' ability to realize all potential efficiency gains or optimize the network in favor of the applications of the day, reduce network providers' profits, and, like all regulation, need to be administered and enforced, all of which create social costs. Thus, there is a trade-off that regulators need to resolve. An ex ante rule that specifies what behavior is and is not allowed resolves this trade-off for all future cases at once, in favor of the social benefits. If the legality of discriminatory behavior is decided case by case instead, it is more likely that decisions will deviate from this general trade-off and allow discriminatory behavior than under a rule that makes this decision ex ante. This is because the adjudicator's decision will be affected by several well-known cognitive limitations and biases. (266)
While the costs of banning the practice will be immediately apparent (e.g., the network provider cannot manage its network in a certain discriminatory way), the current and future benefits associated with a ban will be less clear. While the discriminatory practice immediately harms the provider and the existing users of the affected application, the value of a specific application often only becomes apparent over time. Thus, the immediate cost of the discriminatory practice (or the immediate benefit of banning it) may be difficult to quantify. Determining the future benefits of banning the discriminatory practice is even more difficult. We do not know which applications will never be developed because innovators and investors are concerned about the threat of discrimination, so their social value cannot be determined, either. (267)
Moreover, an adjudicator is likely to underestimate other negative consequences of allowing a deviation from the general nondiscrimination rule in the particular case under consideration. Often, it takes a while to recognize the negative consequences of a specific discriminatory practice (beyond any reduction in incentives to innovate due to the threat of discrimination). This problem may be particularly pronounced for an adjudicator who lacks technical expertise. (268) For example, network management practices that single out specific applications or classes of applications for negative treatment may motivate the designers of the affected applications to adopt techniques to evade detection. (269) Applications that are the target of discriminatory network management practices and others that want to avoid being targeted in the future often choose to encrypt their communications across the network. (270) The increase in encryption has motivated some operators to slow down all encrypted traffic, which in turn hurts legitimate traffic that is encrypted for security reasons. (271) Widespread use of encryption also complicates network analysis, planning, and security. (272)
Similarly, Comcast's old, discriminatory method of managing congestion--sending spoofed RST packets to terminate certain peer-to-peer file-sharing connections--used certain types of TCP packets in a nonstandard way. Once such a practice emerges, programmers can no longer rely on standards to determine how their software should respond to an RST packet, which considerably complicates protocol and application design. (273) Thus, allowing even a single discriminatory network management practice (e.g., one targeting peer-to-peer file-sharing applications) may have significant unintended negative consequences.
Beyond that, several small deviations may quickly add up to create big roadblocks for innovation. (274) For example, while application developers may be able to adapt their application to one network provider's idiosyncratic discriminatory network management practice, the costs of adapting their application to the network management practices of more than a few providers will quickly become prohibitive. (275) Thus, an adjudicator's focus on a single practice whose exact effects may yet be unknown is likely to lead him to underestimate both the isolated effect of the practice and its interactions with other current or future deviations from nondiscriminatory network management. By contrast, decisionmakers in a general rulemaking can take a broader view that takes account of cumulative effects and generalizes from past experiences. (276)
Finally, research in behavioral economics suggests that individuals tend to systematically undervalue future benefits, discounting them more than rational discounting would suggest. (277) Uncertainty about future benefits aggravates this bias. (278) Thus, in weighing the immediate benefits of allowing the discriminatory practice against the future, uncertain benefits of a ban, an adjudicator will disproportionately discount the future benefits.
For all these reasons, deciding whether to allow discrimination on a case-by-case basis makes it more likely that discrimination will be allowed than under an ex ante rule that resolves the above trade-off for all future cases at once.
So far, the discussion of the social costs of case-by-case proposals in this Subpart has focused on the costs associated with general or ambiguous nondiscrimination standards. Although case-by-case approaches based on an antitrust framework provide considerably more guidance on how to evaluate discriminatory behavior, the outcome of specific cases under an antitrust framework still depends on the exact interpretation of the framework and on its application to the facts. In addition, the results of cases under an antitrust framework turn on facts (e.g., the network provider's market share in the nationwide market for Internet access services, the existence and size of economies of scale, and the cost disadvantage associated with operating at a less than efficient scale) that are highly specific to individual cases and that are often difficult and costly to prove. (279) As a result, an antitrust framework is afflicted with the same social costs as case-by-case proposals based on more general or ambiguous standards. In particular, the uncertainty about the legality of specific discriminatory conduct is not resolved until after the discrimination has occurred. In addition, since the outcome of an adjudication depends on the specific facts of the case, the same practice may be legal for some providers, but not others, or with respect to some applications, but not others. Thus, prior adjudications will not necessarily remove the uncertainty. Finally, like the general or ambiguous nondiscrimination rules discussed above, a nondiscrimination standard based on an antitrust framework creates high costs of regulation, tilts the playing field against those who do not have the resources to engage in lengthy and costly fights over the legality of discrimination, and usually limits the ability of interested third parties to participate in the adjudication.
Author: Barbara van Schewick
Publication: Stanford Law Review
Date: January 1, 2015