
Arrested development: how policy failure impairs internet progress.

Executive Summary

The Internet and related networking technologies have fueled unprecedented, disruptive change across the entire global economy. These technologies have enabled entrepreneurs to reinvent the creation and sale of news, entertainment, professional services, shopping, and a host of other cultural and economic activities.

While the Internet has ushered in remarkable changes, it has so far left many activities relatively untouched. Experiments are underway in telemedicine, remote learning, and remote group and individual conferencing, but progress in these fields has been slow despite jaw-dropping increases in network speed and device power brought about by Moore's Law--driven technology advances. The Internet has yet to upend interpersonal communication in the same way that it has disrupted content distribution.

The Internet technical community has engaged in developing the means for communications applications to connect to richer network services for a very long time; such means were actually incorporated (in a basic way) in the Internet's original design. In fact, Internet teleconferencing standards play a vital role in today's LTE mobile networks. Network engineers actually realized advanced networking capabilities were important application enablers long before application developers did.

The technical work that enabled the Internet to support applications traditionally enabled by the telephone, cable television, and mobile communications networks was reasonably mature by the time the Telecommunications Act was enacted in 1996 and has improved since then. The migration of these discrete networks to a common Internet is known as "Internet convergence," and its essential elements are Internet standards known as Integrated Services and Differentiated Services, as well as Quality of Service mechanisms in the mobile and fixed broadband networks that underpin the Internet.

While the technical elements of convergence have been well developed for nearly 20 years, policy, law, and regulation have failed to keep pace with technology. Unwinding the regulatory apparatus established for the traditional networks--especially the public switched telephone network--has proved to be a more substantial challenge than developing the technology.

The refusal of regulators to embrace the opportunities provided by Internet convergence is a peculiar development. The low-water mark of the regulatory obstruction of convergence is the Federal Communications Commission's (FCC) 2015 Open Internet order, a remarkable departure from the regulatory consensus that prevailed in the mid-1990s. The collected papers from the 1995 and 1996 Telecommunications Research Policy Conference clearly show regulators, scholars, and policy analysts of all stripes embracing a consensus that the Internet must be a deregulated space in which competition rather than regulation would provide market discipline.

While the emphasis on competition remains a strong feature of intellectual discourse on Internet policy, other voices dominate the wider political and social debate on Internet policy. Progress toward a converged Internet cannot continue until regulators balance the positives that can come from convergence against the worst-case scenarios touted by advocates who seem to prey on public ignorance, fear, and animosity.

The Internet has reached an impasse because of inappropriate regulation. Restoring the Internet's dynamic character will require innovation on the part of regulators that parallels the innovation produced by the Internet engineering community in the wake of the 1996 Telecommunications Act.

The paper consists of three main sections. The first examines technical drivers of innovation, primarily those related to Moore's Law. The second section examines convergence technologies in network engineering through an overview and five case studies. Finally, the third section examines the arc of policy reactions from the innovation-friendly pro-competition consensus of the 1990s to the interventionist and traditionalist spirit evident in the FCC's recent order. It concludes with suggestions for restoring a more optimistic spirit to Internet policy.

Some portions of the paper delve more deeply into the inner workings of Internet technology than may be customary in policy discourse. The nature of the subject matter makes technology discussion unavoidable, but those whose interests lie exclusively in politics or law may safely treat the paper's technology exposition as evidentiary rather than explanatory.

The Science of Network Innovation

Innovation in networked applications and services is most visible at the point of use: when we use Facebook, Google, YouTube, Instagram, Amazon, Skype, Pandora, Twitter, or iTunes, we are immediately aware of the cleverness of the entrepreneurs who created these novel applications. But entrepreneurs do not work in a vacuum, and innovation is not magic. Individual feats of creativity are enabled by the human character and a host of cultural, legal, economic, and technical factors.

Technical factors are among the least understood by the general public, but they are not difficult to grasp at a high level. The bedrock of innovation in information technology (IT) is Moore's Law, an observation made by Intel Cofounder Gordon Moore 50 years ago. Technology advances in networks, devices, and applications may be rightly viewed as side effects of Moore's Law.

Moore's Law

Moore's Law is simply a prediction about the rate of improvement in integrated circuit electronics. Improvement is the essence of innovation, but it does not happen at the same rate in all fields. For example:

* Average yields of corn have increased by 2 percent a year since 1950.

* The generation of electricity from steam improved by 1.5 percent annually in the 20th century.

* Outdoor lighting efficiency has improved by 3.1 percent annually over the past 135 years.

* The speed of intercontinental travel improved by 5.6 percent per year from the 1900 ocean liner to the 1958 Boeing 707 but has been flat since.

* Between 1973 and 2014, passenger-car fuel efficiency has improved by 2.5 percent annually.

* The energy cost of steel declined by 1.7 percent per year between 1950 and 2010.(1)

In most fields, performance gains and cost reductions range from 1.5 to 3 percent a year, but in electronics these figures have improved by a whopping 50 percent a year for the past 50 years. (2) Something special has been going on in the electronics industry, and that phenomenon is known as Moore's Law.

This law--which is more a conjecture or a prediction based on past experience than an actual scientific theory or law--works in different ways. As Chris Mack explains, in its first generation (the 1960s and 1970s), progress in integrated circuit electronics consisted of adding more components to chips, or "scaling up." This produced high-capacity, dynamic memory chips and high-powered microprocessors.

More recently, Moore's Law 2.0 has been about "scaling down," or decreasing the size and cost of electronic components and improving their power efficiency. Because present materials leave little room to scale down much beyond the next decade, we may be entering a Moore's Law 3.0 phase in which analog components, such as sensors and cameras, join their digital relatives in a new generation of integrated circuits. (3)

Moore's Law operates within the sphere of integrated circuits, which leapt from concept to commercial reality on the strength of a pair of inventions in 1958: Jack Kilby's "flying wire" integrated circuit and Robert Noyce's tidy interconnection technique. By 1960, the integrated circuit was a commercial reality in the form of the Texas Instruments Type 502 Solid Circuit, a bistable multivibrator (figure 1).

Integrated circuits are manufactured by printing wires and electronic components such as transistors, diodes, resistors, and capacitors on a silicon wafer treated with chemicals to isolate the components and to connect them where needed. They become more efficient as the purity of the silicon wafer improves, as photolithography becomes more precise, and as the size and scale of the components and their interconnections improves.

Generations of integrated circuits in the Moore's Law 1.0 phase were identified by the capacity of dynamic memories, such as 64 bits, 64 kilobits, or 64 megabits. In the 2.0 phase, they are distinguished by the size of the logic gates or features printed on the wafer; in the current generation, this is 22 nanometers (nm). By contrast, a human hair is 100,000 nm in diameter, a strand of DNA is 2.5 nm, and the previous integrated-circuit feature size was 32 nm.

If the electron itself has a size--a debated question in physics--it is at most one-millionth of 22 nm, but controlling electronics in a solid depends on molecules no more than 1,000 times smaller than today's feature size. Hence, Moore's Law as we understand it today has limits that will probably arrive within 10 to 20 years, barring new discoveries in materials science. Until then, and perhaps even after, integrated circuits will continue to grow more powerful, more energy efficient, and more economical according to design requirements. IBM announced one such advance in early October 2015: a novel use of carbon nanotubes. (4)

Moore's Law as we have known it will stop definitively only when a single molecule can form an electronic device. The first single-molecule diode capable of rectifying a nontrivial load has already been created in a lab (figure 2), but we are far from exploiting any of its capabilities, let alone all of them. (5)

The most recent advance in semiconductor process paves the way to 7 nm gates, half the size of today's leading-edge 14 nm chips. The 7 nm process relies on an advanced material--silicon germanium--capable of transporting more electricity through tiny gates than pure silicon can, and on a new method of photolithography, the extreme ultraviolet laser (figure 3). (6) Each plays a vital role, and each can theoretically enable further advances. Of course, 7 nm chips are far from production, and 10 nm--the intermediate stage between 14 nm and 7 nm--has encountered production problems. But Taiwan Semiconductor Manufacturing Company has said it plans to produce the new 7 nm chips by 2017. (7)

On the networking side, a new approach to signal processing in optical fiber promises a two to four times increase in range, which has been an intractable problem for years, and the next generation of mobile broadband, 5G, promises to increase data rates from 1 gigabit per second (Gbps) to 10 Gbps or more within five years. (8) Sometimes networking technology advances more slowly than Moore's Law, and sometimes it advances more quickly, but it always advances.

Slowing down the rate of improvement in networking slows down the rate of innovation in technologies that depend on networks: network applications. Hence, improper regulation of networks harms innovation overall.

Integrated circuits are the foundational building blocks of networks, computers, and all other forms of commercial electronics. As circuits improve in speed and power, systems improve as well, regardless of their function.

The dynamics of ingenuity, risk, and innovation are largely the same across all electronic-device markets, but they manifest differently in networks and applications for reasons that will be explained shortly. The technology base is built the same way in both spheres:

* Moore's Law improvements in semiconductor processes enable engineers to design more efficient and powerful computation and networking platforms;

* More productive platforms give rise to more complex, better-targeted service platforms;

* More effective service platforms give rise to more useful applications; and

* Users benefit.

Network innovation requires a higher degree of coordination and cooperation than does application development. This is true because the value of networks is utterly dependent on broad adoption. While every application benefits from broad adoption, the value curve for adoption of many applications tends to be linear, while the corresponding curve for networks is more commonly exponential: value can be derived from a single use of an application, but network value depends on large numbers of users. (9) Consequently, visible network innovation is less frequent than application innovation but dramatically more meaningful.
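The contrast between linear application value and network value that compounds with the user base can be sketched in a few lines. A minimal model, assuming a Metcalfe-style pairwise-connection count for network value (one common formalization, used here as an illustrative assumption rather than taken from the paper):

```python
def app_value(users: int, value_per_use: float = 1.0) -> float:
    # Linear: each user derives value from the application independently,
    # so even a single user gets something out of it.
    return users * value_per_use

def network_value(users: int) -> float:
    # Metcalfe-style: value tracks the number of possible pairwise
    # connections, n(n - 1) / 2. A lone user gets nothing from a network.
    return users * (users - 1) / 2
```

With 100 users the linear model yields 100 units of value while the pairwise model yields 4,950, which is why network innovations pay off only after broad adoption.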

For a more complete examination of Moore's Law, see Moore's Law at 50: The Performance and Prospects of the Exponential Economy by AEI Visiting Fellow Brett Swanson. (10)

Network Innovation

Mobile broadband networks tend to be redesigned approximately every 10 years: the analog voice systems of the 1980s were replaced by second-generation 2G digital systems in the 1990s; 3G came online in the 2000s; 4G/LTE has been deployed since 2010; and 5G will begin its rollout by 2018, if not sooner. Each wireless generation increases data rates by about 10 times, thus enabling a new collection of more powerful applications.

Similar dynamics are afoot with wired networks. In the late 1990s, first-generation broadband networks offered speeds from 350 kilobits per second to a few megabits; within 10 years, speeds increased 20 times. By the early 2010s, vectored very high-speed digital subscriber line (VDSL) was pushing 80 megabits per second (Mbps), and cable modem was up to hundreds of Mbps in many areas. The next round of DSL innovation will push data rates to several hundred Mbps over short distances, and DOCSIS 3.1 can push cable modem speeds to multiple gigabits per second. Fiber-optic broadband progresses along a different line, with speeds doubling twice as fast as Moore's Law.

Device Innovation

The systems that make use of networks progress along a similar path. The first computers that were entirely based on the microprocessor were the personal computers of 1975: the Altair 8800 and IMSAI 8080. These computers used the Intel 8080 microprocessor with 4,500 transistors and a clock speed of 2 megahertz (MHz). Today's 15-core Intel Xeon processors have as many as 4.31 billion transistors and clock speeds as high as 2.8 gigahertz (GHz), 1,400 times faster than the 8080.

The performance rating of the 8080 was 0.29 million instructions per second (MIPS), while the 15-core Xeon is rated at more than 300,000 MIPS. Overall, this is an improvement of a million times in 40 years, just as Moore's Law predicted. It should be noted that Xeon processors are more likely to be used in data-center servers than in ordinary desktop computers, because such intense processing power is not needed for common tasks such as word processing and web surfing.
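The million-fold figure is easy to check. A short calculation, using only the MIPS numbers quoted above, recovers the implied compound annual improvement rate:

```python
# Implied annual improvement from the 8080 (0.29 MIPS, 1975) to a
# 15-core Xeon (roughly 300,000 MIPS, 40 years later).
start_mips = 0.29
end_mips = 300_000
years = 40

total_gain = end_mips / start_mips            # just over a million-fold
annual_rate = total_gain ** (1 / years) - 1   # compound annual growth rate
```

An annual rate of roughly 41 percent, compounded over four decades, is what turns a 0.29 MIPS chip into a 300,000 MIPS one.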

Video streams are commonly formatted, compressed, and streamed by Xeon-based systems. Streaming accounts for a large proportion of the traffic on modern broadband networks; compression allows streaming applications to use network capacity more efficiently.

Within network devices, the Moore's Law dynamics that produced the Xeon processor also provide bandwidth, routing, and network management and, under ideal conditions, would also fully ensure that video streams do not produce harmful side effects on other applications or among each other; this point will be developed further in the sections to follow.

Application Innovation

Compared to innovation in networks and devices, application innovation is easy.

As noted above, network innovations such as packet switching, fiber optics, local area networks (LAN), mobile broadband, and residential broadband cannot take off until they have been accepted by the broad group of stakeholders who participate in network standards bodies, are designed into infrastructure components such as network switches, and are deployed in real networks. Each phase or generation of network evolution requires this high degree of collaboration, so it does not happen overnight.

Similarly, when Intel or ARM designs a new microprocessor, it does not have economic impact until it can be manufactured, the company scores design wins, clients build it into devices, and consumers buy and use the new devices. A key obstacle in this process is the mammoth investment in semiconductor factories, or "fabs," which are required to reach new semiconductor generations. A 2012 Gartner Group report predicted 2016 minimum capital expenditures of $8-10 billion for logic fabs, $3.5-4.5 billion for dynamic memory (DRAM) fabs, and $6-7 billion for persistent memory (NAND flash) fabs. (11) This is not the sort of development that takes place on a whim.

By comparison, developing a new application for a smartphone or laptop computer is almost trivially easy, even for network applications. Pierre Omidyar coded the initial version of eBay all by himself in the summer of 1995 (by some accounts, over the Labor Day weekend). (12) Bill Gates and Paul Allen wrote Microsoft's initial program in their free time on a university computer. (13) Mark Zuckerberg effectively wrote the first version of Facebook on a laptop computer in his free time, with a little help from some friends. (14)

In each case, these landmark applications required negligible capital investment, minimal planning, and essentially no coordination outside the inventors' circle of friends. The processes of network and device innovation on the one side and application innovation on the other are so different that parties on each side have little to say about the dynamics that characterize the other.

The Technology of Network Convergence

A well-functioning network tends to be invisible to the user. When our networks connect us to the resources of our choice and carry traffic without incident, we take them for granted; nobody ever called her Internet service provider (ISP) to congratulate the company for its excellent service.

But we tend to blame network operators for every failure we experience. If Netflix, Facebook, or Google has a service interruption for any reason, the consumer's first instinct is often to blame the ISP. Even sophisticated users fall into this trap: PC World Contributing Editor Rick Broida recently described a problem he experienced that involved Wi-Fi drivers, Google Chrome, and a Samsung smartphone, admitting he usually blames Comcast. (15)

Application developers commit a similar error, sometimes characterizing broadband networks as simply "dumb, fat pipes" incapable of providing bespoke services tailored to application needs. (16) Dumb, fat pipes are indeed fine for today's most popular applications--websites and video-streaming services--but they leave much to be desired for emerging applications such as virtual reality, telemedicine, and high-definition voice. (17)

Technical Constraints on Network Innovation

Much of the confusion about network innovation stems from the present state of the Internet. From an engineering perspective, networks are organized in layers, corresponding to the scope of data-transfer interactions. The layered architecture of the Internet is depicted below (table 1).

The bottom layers deal with interactions that take place in a small area, such as the representation of information by electromagnetic signals and the communication of information packets across LANs such as Ethernet and Wi-Fi. (18) The higher layers deal with interactions across global networks, encompassing the flow and pacing of web pages between servers and browsers or the delivery of motion pictures from video servers to screens. Internet architecture thereby tolerates diversity in both transmission technologies and applications.

Structural Issues

However, the Internet enables this diversity only by exacting a heavy toll. While the upper and lower layers of the Internet are diverse, the middle layer is one-dimensional, consisting in many cases solely of an incomplete implementation of the Internet Protocol (IP). This uniformity is illustrated by Internet engineer Steve Deering's famous hourglass diagram (figure 4).

The narrow waist of the hourglass represents the tradeoff between diversity and performance embodied in the IP layer. Networks are capable of supporting a broader range of applications than current IP interconnection norms allow them to support. These norms reduce IP to a single class of service--best efforts--which is harmful to real-time communications such as Voice over IP. In other words, the narrow waist is harmful to innovation, because diverse applications have diverse requirements from networks. Networks are generally capable of meeting these needs, but IP interconnection norms prevent applications from communicating their needs to the underlying network.

The narrow waist can be a barrier to innovation because it forces all applications to operate within the constraints of traditional implementations of the IP layer. But all applications are not alike when it comes to their use of network resources. Consider just four examples:

* Web browsing is an episodic activity. Web browsers consume all available network capacity while loading pages but impose virtually no load while the user reads a fresh page. Video streaming is similar to web browsing in terms of the on/off duty cycle, although it cycles more quickly and moves more data while active.

* Audio conferencing applications such as Skype send information at very regular intervals and cannot tolerate delays of more than one-tenth of a second without suffering noticeable service degradation.

* Video conferencing applications such as Cisco TelePresence are similar to audio conferencing but with greater sensitivity to packet loss and much greater bandwidth requirement.

* Background activities, such as software updates and file system backups, typically take place at night and are not at all sensitive to performance. Because they frequently transmit very large amounts of data, they do tend to be price sensitive.

Forcing these diverse applications to share a common service class imposes a bias in favor of one type of application and against the others. The favored class in today's Internet is episodic, noninteractive applications like web browsing and video streaming. Overcoming this bias is key to accelerating progress in other types of applications that require different, more sophisticated network functionality.
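The bias described above can be made concrete with a toy queuing comparison. A minimal sketch, assuming a bottleneck link that takes 1 millisecond to transmit each packet (an illustrative number, not taken from the paper), shows what happens to a voice packet that arrives behind a burst of bulk traffic:

```python
SERVICE_MS = 1.0  # assumed transmission time per packet on the bottleneck

def voice_wait_fifo(bulk_backlog: int) -> float:
    # Single best-efforts class: the voice packet waits behind every
    # bulk packet already queued ahead of it.
    return bulk_backlog * SERVICE_MS

def voice_wait_priority(bulk_backlog: int) -> float:
    # Strict priority: the voice packet waits at most for the one
    # packet already in transmission, regardless of the backlog.
    return SERVICE_MS

# A 150-packet bulk burst pushes the shared-queue delay well past the
# roughly 100 ms budget that audio conferencing can tolerate.
fifo_ms = voice_wait_fifo(150)
priority_ms = voice_wait_priority(150)
```

The shared queue delays the voice packet 150 ms while the priority queue holds it to 1 ms, which is the whole case for service differentiation in miniature.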

Overcoming Interconnection Bias

Networks are generally engineered with the goal of providing each application class with the particular type of service it requires. For example, the Wi-Fi Quality of Service standard, IEEE 802.11e, defines four service classes: Best effort (ordinary service), Voice, Video, and Background (table 2).

The versions of IP in wide use today were designed to allow applications to select a service class from the networks in use. The mechanisms to do this are the "Type of Service" identifiers in the IP header as originally designed, as well as subsequent elaborations such as the Differentiated Services (DiffServ) and Integrated Services (IntServ) protocols. (19)

However, each of the Internet standards for connecting application requirements to network services is flawed. The primary flaw is that Quality of Service (QoS) protocols presuppose a trust relationship between the application that specifies a particular QoS level and the network that carries it out, but in today's Internet the necessary trust relationships exist only within the boundaries of particular networks. For example, broadband carriers such as AT&T and CenturyLink use DiffServ within their own networks to ensure that voice and video streams are received with the desired quality. But DiffServ is not generally operational across network boundaries, because Open Internet norms and regulations invite the abuse of such mechanisms.
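The marking mechanism itself is simple to exercise. A minimal sketch using Python's standard socket API on Linux sets the Expedited Forwarding code point, the DiffServ class carriers commonly use for voice, on a UDP socket; whether any network beyond the sender's own honors the marking is precisely the trust problem just described:

```python
import socket

# The DSCP occupies the upper six bits of the former IP Type of Service
# byte (RFC 2474). EF (Expedited Forwarding) is code point 46.
DSCP_EF = 46

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Shift the DSCP into position within the ToS byte and apply it.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_EF << 2)
tos = sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS)
sock.close()
```

On the wire, EF appears as a ToS byte of 184 (46 shifted left two bits); within a carrier's network this marking steers the packet into the voice queue, but at interconnection boundaries it is typically ignored or reset.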

Moreover, this problem is not easily solved. For example, if DiffServ markings were respected at network boundaries without careful restrictions, it would be possible for digital pirates to mark peer-to-peer file-sharing transactions with urgent priority to make them run faster, reducing the ability of service providers to detect transactions involving piracy and creating delays for other applications. Even more importantly, the Internet undergoes hundreds of Denial of Service (DoS) attacks (in which a target website is flooded with data) every day; if attackers were able to access higher-priority transmission classes, they could effectively shield their attacks from correction. (20)

DiffServ is not unique in this respect, as many classical Internet protocols have trust issues that are subjects of ongoing efforts to improve Internet security, including the Internet's routing protocol, Border Gateway Protocol (BGP), as well as the name and address resolution protocol, Domain Name System (DNS). (21)

Signs of Progress

As a result of these shortcomings, there are ongoing efforts in the engineering community to develop improved methods for defining, identifying, and monetizing bespoke network services. These activities largely take place in network standards organizations. Three such endeavors show particular promise:

1. Pre-Congestion Notification, a means of bidding for priority on congested links;

2. BGP Extended Community for QoS Marking, a means of passing application requirements to routers; and

3. DiffServ Interconnection Classes and Practice, a means of passing application requirements to Multiprotocol Label Switching (MPLS) Traffic Engineering. (22)

None of these is an official Internet standard, but each is the subject of ongoing work. While this work continues, IntServ has been adopted by 3GPP as a key element of LTE, where it serves a vital role in ensuring voice quality on mobile broadband networks. (23)

Freeing the Untapped Potential

To recap, the Internet is a work in progress and probably always will be, right up to the day it is decommissioned. One of its many shortcomings is a poor ability to connect applications--especially real-time applications such as conferencing--with network services in an optimal way.

Perhaps because this capability is underdeveloped, many advocates mistakenly believe that what they see as the status quo--in which virtually all transmissions on the public Internet receive the same default service level--is an ideal state. In fact, it is neither ideal nor, as the advocates are wont to suggest, intended by the Internet's original designers. On the contrary, allowing applications to select differing service classes based on their diverse needs was part of the Internet's original design, and for good reason.

In an ideal world, engineers would be free to continue the work needed to allow future applications to gain maximum benefit from future networks. This might seem like a simple thing to ask, but regulators have expressed fears that opening the door to innovative combinations of applications and network services invites abuse. For example, the FCC has argued that:

Although there are arguments that some forms of paid prioritization could be beneficial, the practical difficulty is this: the threat of harm is overwhelming, case-by-case enforcement can be cumbersome for individual consumers or edge providers, and there is no practical means to measure the extent to which edge innovation and investment would be chilled. And, given the dangers, there is no room for a blanket exception for instances where consumer permission is buried in a service plan--the threats of consumer deception and confusion are simply too great. (24)

In fact, a blanket exception from the FCC's ban on paid prioritization (or de-prioritization in return for lower pricing or greater data-volume allowances) could easily be granted for services that serve the needs of real-time applications. Such an exception would incentivize the engineering work that needs to be done to place a greater range of services at the disposal of application and service developers and to encourage network operators to make the necessary investments and agreements to operationalize such capabilities.

As I show in the next section, networks are able to deliver application data in a much more powerful, effective, and efficient way than they have in the past. The capabilities of networks are constantly expanding, and the technology limits that made default treatment attractive in the past are receding.

Case studies

This section provides examples of mechanisms designed into converged networks that permit them to carry diverse applications in optimal ways. Of necessity, this section contains technical information that may not be of interest to all readers. A historical narrative, diagrams, and illustrations are included to make the section accessible to serious readers who lack technical knowledge.

The main takeaways from these case studies are that network differentiation has been regarded as essential to networking standards since the 1970s and that standards bodies agree substantially regarding its implementation. (This is by no means an exhaustive treatment of network-differentiation facilities; for a more complete treatment, I recommend "Differentiated Treatment of Internet Traffic" by the Broadband Internet Technical Advisory Group. (25))

Ethernet. Ethernet began as an experiment performed at the Xerox Corporation's Palo Alto Research Center (PARC) by a pair of young engineers, Bob Metcalfe and David Boggs. Metcalfe applied the name "Ether Network" to an enhancement to a network under development at PARC to support Xerox's Alto workstations. (26)

Ethernet underwent two iterations at PARC, the first a 1 Mbps, shared coaxial cable network and the second a 2.94 Mbps network with an added feature, a working collision detector. (27) It was inspired by ALOHANET, a wireless network invented by Norm Abramson in the late 1960s at the University of Hawaii:

ALOHANET consisted of a number of remote terminal sites all connected by radio channels to a host computer at the University of Hawaii. It was a centralized, star topology with no channels having multiple hops. All users transmitted on one frequency and received on another frequency that precluded users from ever communicating with each other--users expected to receive transmissions on a different frequency than the one other users transmitted on. At its peak, ALOHANET supported forty users at several locations on the islands of Oahu and Maui. (28)

ALOHANET was funded by ARPA Information Processing Techniques Office Director Bob Taylor, who was the assistant director of PARC when the Alto Aloha network project was initiated. (29) Taylor, one of two program directors for ARPANET, hired Metcalfe at PARC.

Metcalfe realized that for Ethernet to be successful, it needed support from other firms lest it remain a proprietary system, so he recruited Digital Equipment Corporation and Intel to develop an open standard incorporating broader expertise. Digital had already produced a very powerful, wide-area network known as DECNET, and Intel's expertise in chip development was clearly established.

The multivendor standard, memorialized in the 1980 Ethernet "Blue Book," called for a 10 Mbps network considerably more advanced than ALOHANET. (30) With one major change to the frame format, Blue Book Ethernet became IEEE 802.3 standard 10BASE5, one of the first LAN standards created by the IEEE Standards Association in 1983. By comparison with the PARC Ethernet, Blue Book Ethernet was faster, used a larger and more resilient cable, and was capable of transmitting longer messages (or "frames").

10BASE5 and all prior versions of Ethernet lacked an upgrade path, as they were passive systems wedded to a shared cable. In fact, each of the three revisions from the Alto Aloha network to 10BASE5 used a different type of cable, so upgrading meant replacing the entire cable plant and all the associated electronics, such as transceivers and interfaces.

With Moore's Law driving the advances from 1 Mbps to 2.94 Mbps to 10 Mbps, it was reasonably clear by 1984 that Ethernet needed to be redesigned. (31) Hence, the IEEE 802.3 Working Group chartered a task force in 1984 to devise a "low-cost LAN" to deal with the upgrade problem and the high cost of Ethernet installation. This task force ultimately produced the 1BASE5 standard known as "StarLAN" that enabled Ethernet to run over telephone wire in office settings. (32)

Following the topology of telephone wiring, StarLAN cables terminated at an electronic hub in an office telephone service closet. The network hub was upgradeable to support higher speeds and alternate forms of cabling, such as category 5 unshielded twisted pair (with higher noise immunity than telephone wire) and fiber optics. Switches devised for 100 Mbps and higher speeds are backward compatible with speeds as low as 10 Mbps. Current Ethernet standards scale up to 400 Gbps. The Ethernet standards that followed StarLAN include:

* 10BASE-T: 10 Mbps over category 3 copper cable;

* 10BASE-F, -FP, -FB, and -FL: 10 Mbps over various types of fiber optic cable;

* 100BASE-T and -TX: 100 Mbps over category 5 copper cable;

* 100BASE-FX: 100 Mbps over fiber optic cable;

* 1000BASE-LX: 1,000 Mbps over single-mode and multimode fiber-optic cable;

* 1000BASE-CX and -T: 1,000 Mbps over category 5 copper cable;

* 10GBASE-W and -EPON: 10 Gbps wide-area networks over fiber optics;

* 10GBASE-S, -L, and -E: 10 Gbps local area networks over fiber optics;

* 10GBASE-T and -X standards for 10 Gbps over twisted pair copper;

* 40GBASE-R: 40 Gbps over fiber optics;

* 100GBASE-R: 100 Gbps over fiber optics; and

* 400GBASE-SRx: 400 Gbps over fiber optics.

The next generation of Ethernet will allow 1 terabit per second over fiber optics.

Along the way to higher speeds, Ethernet standards also developed the ability to prioritize selected packets. While the various generations of Ethernet offer higher speeds, they do this through enhancements at the physical layer, layer one in the standards hierarchy. As layer two, the data-link layer, is undisturbed by these upgrades, it is capable of separate development. The IEEE 802.1p task group added QoS enhancements to the Ethernet data-link layer's "Virtual LAN" feature in the late 1990s. These enhancements consisted of eight service classes (table 3). (33)

These service classes are operationalized by the Ethernet Media Access Control (MAC) sublayer of the data-link layer and also by the MAC sublayers of other IEEE 802 standards such as Wi-Fi. Increased capacity does not guarantee low latency, and low latency is an absolute requirement for some applications. The Ethernet QoS enhancements were designed to be compatible with the Internet Engineering Task Force (IETF) DiffServ standards developed at the same time. (34)
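The 802.1p service classes travel in the 3-bit Priority Code Point (PCP) field of the 802.1Q VLAN tag. A minimal sketch of the encoding follows; the constants and function names are my own, for illustration only:

```python
import struct

TPID = 0x8100  # 802.1Q tag protocol identifier

# Illustrative subset of the eight 802.1p classes (0 = best effort,
# 7 = network control); names follow common usage, not a normative table.
PCP_BEST_EFFORT = 0
PCP_VOICE = 6
PCP_NETWORK_CONTROL = 7

def encode_vlan_tag(pcp: int, vlan_id: int, dei: int = 0) -> bytes:
    """Pack a 4-byte 802.1Q tag: TPID, then PCP | DEI | VLAN ID."""
    if not (0 <= pcp <= 7 and 0 <= vlan_id <= 0xFFF):
        raise ValueError("PCP is 3 bits, VLAN ID is 12 bits")
    tci = (pcp << 13) | (dei << 12) | vlan_id
    return struct.pack("!HH", TPID, tci)

tag = encode_vlan_tag(PCP_VOICE, vlan_id=42)
print(tag.hex())  # 8100c02a
```

The switch reads the three PCP bits from the tag control field and queues the frame accordingly, which is all the "service class" machinery amounts to at this layer.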

DOCSIS: Internet Service for the Cable TV Network. Cable companies formed a research and development consortium known as Cable Labs in 1988 to develop standards for new telecommunications capabilities for their networks. In 1994, Cable Labs issued a $2 billion Request for Proposals (RFP) for telephone service equipment for cable networks, which ultimately led to cable's standing as a major provider of residential phone service. Less than a year later, it issued an RFP for devices and operational support for high-speed data services, which ultimately led to the creation of the Data over Cable Service Interface Specification (DOCSIS), better known as "cable modem."

By 1996, five modem companies were bidding for the cable modem business. (35) In March 1997, Cable Labs announced the standards for the first version of DOCSIS, and cable companies were free to replace proprietary cable modems with standards-compliant upgrades. (36) The transformation of the cable TV network from a one-way system only capable of broadcasting analog television programs to a two-way digital system capable of providing Internet service took place at lightning speed, considering the scope of the enterprise.

From the outset, cable modem was capable of providing QoS despite the preliminary state of Internet work on the subject. This happened because the first bidirectional application, telephone service, required it. Also, contributors to the DOCSIS standard were aware of, and in some cases involved in, the development of IETF's first real QoS standard, Integrated Services, which began in 1994. (37)

DOCSIS 1.0 was an isochronous MAC protocol running over a 36 Mbps physical signaling sublayer. (38) The system was shared among dozens or hundreds of users, but its mechanisms allowed each application used by each subscriber to obtain the type of service it required despite what others were doing.

Isochronous MAC Protocols. While the original Ethernet was an asynchronous system--one designed around datagrams that appeared at random intervals--modern MAC protocols for Radio Frequency (RF) environments tend to be isochronous systems, managed to support network streams that produce traffic at regular intervals and random streams. (39) Cable TV was originally a shared antenna for TV reception, and it retains RF features. To support video on demand and telephony, DOCSIS followed an isochronous path similar to Wi-Max, Wi-Fi Scheduled Access, and LTE.

Asynchrony and isochrony are correctly viewed as service requirements of network applications: person-to-person applications such as telephony and videoconferencing are inherently isochronous applications that produce long-lasting streams of datagrams spaced at regular intervals. Some human-to-machine applications, such as video streaming, are also isochronous, while more interactive ones, such as web surfing, are more asynchronous. Machine-to-machine communication spans a wide range of temporal requirements and can't be neatly categorized.

Wired LANs can accommodate a wide range of applications with relatively simple service capabilities because of their special properties: The user populations on many LANs are extremely small (on home networks especially), packet loss due to noise is negligible, end-to-end latency is minimal, bandwidth is extremely cheap and abundant, and users have dedicated channels to a shared, intelligent device (the network switch) that can exercise considerable control over packet streams from a privileged viewpoint, as the Ethernet switch can see the bandwidth requirements of all active applications and manage them accordingly. (Cable's DOCSIS has a unique mechanism for making "unsolicited grants" of bandwidth to streams that exhibit regular patterns.) None of these things is true of the typical RF system, however, so engineering for RF networks employs isochrony in order to maximize system utility. (40)

DOCSIS Implementation. DOCSIS suffers from the burden of retaining backward compatibility with earlier usage models for the cable network while adding new features. DOCSIS was designed to coexist with the digital cable MPEG Transport (MPT) system that is still very deeply embedded in the cable system. The overriding design assumption in cable networks is still MPEG oriented, with limited convergence between the IP-based Packet-Streaming Protocol (PSP) and MPT.

In its original form, the DOCSIS switch, known as the Modular Cable Modem Termination System (M-CMTS), encapsulates IP into MPT packets, time stamps them twice, and moves them to the user's cable modem as MPT, where they are de-encapsulated and reconstructed into native IP packets. This round-about path adds cost and decreases throughput by creating a dependency on specialized MPT equipment for generic IP datagrams, but it preserves compatibility and makes for happy customers.

The overall scenario reflects the history of the cable system and neither malice nor engineering ineptitude. We can find similar inefficiencies in mobile cellular networks that were designed before the onset of IP hegemony and in Wi-Fi. Network technology is the product of a long design cycle and accommodates major paradigm shifts only incrementally, as previously mentioned.

As the current role of DOCSIS is to carry IP datagrams from both the Internet and proprietary video servers to the end user, some analysts have long proposed bypassing the traditional DOCSIS M-CMTS with a more direct path to the Internet through an Integrated CMTS (I-CMTS). (41) This has happened in DOCSIS 3.1, the most recent standard, with an alternate path.

The development of DOCSIS has featured incremental increases in speed:

* DOCSIS 1.0: 36 Mbps

* DOCSIS 2.0: 40 Mbps

* DOCSIS 3.0: 160-1,080 Mbps

* DOCSIS 3.1: 10 Gbps

Speeds above 10 Gbps will require the full replacement of coaxial cable with Fiber to the Home (FTTH).

In summary, the addition of video on demand and telephone and data service to cable TV networks required a massive reconfiguration of the cable network. This redesign was essentially the equivalent of transforming a propeller-driven airplane into a jet while it was in flight. The new services are continuing to grow in importance and bandwidth, while linear TV is declining. The DOCSIS data service continues to offer QoS capabilities compatible with IETF standards.

Mobile Broadband. Cellular networks began as a Bell Labs idea for car phone service in the 1940s that was constrained by the FCC's refusal to make significant spectrum assignments; the agency's experts saw no value in mobile telephony during the 1950s and 1960s. In the late 1960s, AT&T Bell Labs developed the 1G Advanced Mobile Phone System (AMPS), which became the first US standard for cellular telephony. Motorola's pioneering work with cell phones in the late 1960s and early 1970s led to the creation of handsets for the AT&T AMPS network. (42)

While AMPS was an analog system, it was able to provide limited data-transfer service by acting as a modem. This system was known as Cellular Digital Packet Data (CDPD), and it provided a maximum speed of 19.2 kilobits per second (Kbps), a respectable speed for a modem in the early 1990s. CDPD development took place in parallel with the initial work on Wi-Fi, and the two fields had cross-pollination.

The next step forward, the Global System for Mobile communication (GSM) 2G cellular service, was an all-digital system, and its initial data option was a QoS-controlled, circuit-switched offering meant to provide fax, Short Message System (SMS) text messaging, and general data service. This implementation was followed by packet-mode service in the mid-1990s. SMS was created by accident, as it used a portion of the network initially reserved for network-management messages. After it was discovered that the network rarely used it for its intended purpose, the facility was made available for consumer use.

2G GSM's General Packet Radio Service (GPRS) was the first general-purpose packet data service offered over cellular. It provided download speeds as high as 80 Kbps and upload speeds of 20 Kbps, which made it a considerable advance over CDPD. While GPRS operated in packet mode, its implementation simply allocated portions of the network's time division multiple access (TDMA) circuits on demand. It was thus a hybrid of statistical multiplexing over a QoS-controlled network, just as commercial data service over T1 lines was. In other words, QoS-controlled isochronous networks are capable of providing asynchronous access, while purely asynchronous networks can provide only limited isochronous services.

While GPRS provided limited access to the Internet, speeds were too low to make smartphones a very interesting proposition. Subsequently, the 3G upgrade to both CDMA and GSM networks provided considerably higher data rates, well beyond the speeds associated with dial-up modems. Before 3G was completed, hybrid 2.5G systems such as Enhanced Data Rates for GSM Evolution (EDGE or EGPRS) and CDMA2000 were placed into service. EDGE provided data rates up to 240 Kbps (60 Kbps per slot) on the download side, and CDMA2000 1X went up to 153 Kbps. 2.5G data service was implemented over isochronous circuits compatible with 2G norms; it was regarded as a bolt-on upgrade.

3G was another generational upgrade that did not fundamentally redesign the cellular circuit switched network, even though it did require wholesale replacement of existing switches and handsets. The distinction between 2G and 3G is less clear than the analog/digital distinction between 1G and 2G.

3G networks exist in two forms: an evolutionary mode that uses the same spectrum as 2G and a revolutionary form that requires additional spectrum. (43) Evolutionary 3G was the same as 2.5G: CDMA2000 and EDGE. But revolutionary 3G (or "real 3G") utilized new standards for wider (5 MHz) channels: CDMA2000 1xEV-DO Release 0 and UMTS (also known as "TD-SCDMA/UTRA TDD"). (44) These new standards and spectrum assignments supported data rates from 2.5 to 3 Mbps with wide channels under ideal conditions.

CDMA2000 in particular began the transition from circuit switching to packet switching at the network's back-end, where the wireless network hands packets off to wired backhaul. The notion of bearers defined for particular service types emerged in 3G and became extremely important in 4G. For our purposes, a bearer is a QoS class, effectively a replacement for the QoS circuit switching features dropped as the cellular network became purely packet oriented in the transition from 3G to LTE.

Current cellular-data networks follow the standard known as Long Term Evolution (LTE). LTE is either 4G or 3.9G, depending on marketing whim. It is significant for two major reasons: LTE converges the previously separate CDMA and GSM standards, and it fully replaces circuit switching with IP as an architectural design choice. As LTE includes support for wider channels, it can scale up to 1 Gbps with sufficient spectrum, or 40-100 Mbps in more realistic configurations.

QoS plays a crucial and central role in modern LTE networks. Circuit-switched QoS is replaced with bearers in LTE. Bearers are best thought of as bundles of IETF Integrated Service (IntServ) parameters. IntServ is a very rich system ranging from default, nonguaranteed services to guaranteed delivery of specified volumes of data with specified delay and loss rates. Such systems are too complex to be practical in full, so 3GPP has reduced the complexity to a manageable level by predefining nine bearer classes. Each LTE connection begins on a default bearer without a guaranteed bit rate (GBR) and graduates from there depending on application needs. The precise levels of delay and loss for these nine bearers are listed in figure 5.

The acronym QCI stands for "QoS Class Identifier." Note the similarity between 3GPP QCIs and the IEEE 802.1p Priority Code Points shown in table 4. In both schemes, the highest priority is reserved for network control or signaling, the next highest is for voice, and the next highest after voice is effectively for gaming. Video calls get higher priority than video streaming, and common Internet applications contend for resources at the lowest or second-lowest priority.

Hence, LTE mobile broadband relies on IETF QoS standards created 20 years ago.
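The nine bearer classes described above can be sketched as a lookup table. The values below follow the standardized 3GPP QCI characteristics (TS 23.203, Release 8) to the best of my knowledge and should be checked against the specification before any serious reuse; the field and function names are my own:

```python
from collections import namedtuple

# gbr: guaranteed bit rate bearer? priority: 1 is highest.
# delay_ms: packet delay budget; loss: packet error loss rate target.
QCI = namedtuple("QCI", "gbr priority delay_ms loss example")

QCI_TABLE = {
    1: QCI(True,  2, 100, 1e-2, "conversational voice"),
    2: QCI(True,  4, 150, 1e-3, "conversational video"),
    3: QCI(True,  3,  50, 1e-3, "real-time gaming"),
    4: QCI(True,  5, 300, 1e-6, "buffered video streaming"),
    5: QCI(False, 1, 100, 1e-6, "IMS signaling"),
    6: QCI(False, 6, 300, 1e-6, "buffered video, TCP applications"),
    7: QCI(False, 7, 100, 1e-3, "voice, video, interactive gaming"),
    8: QCI(False, 8, 300, 1e-6, "TCP applications (premium)"),
    9: QCI(False, 9, 300, 1e-6, "TCP applications (default bearer)"),
}

def default_bearer():
    # Every LTE connection starts on QCI 9, a non-GBR bearer,
    # and graduates to other QCIs as applications require.
    return 9, QCI_TABLE[9]
```

Note how the table mirrors the 802.1p scheme the text compares it to: signaling sits at the top priority, voice just below it, and ordinary Internet traffic at the bottom.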

Wi-Fi. Initial design work for Wi-Fi began in 1989, at roughly the same time that CDPD was designed. The first version of the Wi-Fi standards, IEEE 802.11 (with no qualifier), was not approved until 1997, because so many problems had to be solved. But the overall system architecture was in place by 1992: it called for devices to be connected to a central access point, similar in function to the StarLAN hub, which connected to devices over shared RF spectrum or pervasive infrared light.

The access point connected to a wired Ethernet backhaul, and the system was initially known as wireless Ethernet. Veterans of StarLAN participated in the early definition of Wi-Fi, which is evident in the similarities of the StarLAN and Wi-Fi designs.

IEEE 802.11 was a 1-2 Mbps system with access to 75 MHz of spectrum in most countries, a considerably more generous allotment than the initial allocations for individual cellular networks. Each Wi-Fi radio channel was 25 MHz wide at a time in which cellular channels spanned a mere 1.25 MHz. Wi-Fi was specified for coverage areas of hundreds of feet, while cellular covered miles. It is unsurprising that Wi-Fi is a faster system in most settings.

The most recent iteration of Wi-Fi, 802.11ac, supports data rates up to 1.3 Gbps. The increase from 1-2 Mbps to 1.3 Gbps is highly dependent on wider radio channels, but some significant engineering enhancements have occurred along the way, such as the frame-aggregation feature added in 802.11n and more efficient means of modulation and bit coding, such as Orthogonal Frequency Division Multiplexing.

Initial Wi-Fi prototypes were developed on Ethernet controller chips such as the Intel 82593. These controller chips allowed for programmable inter-frame spacing, which inspired a QoS mechanism for Wi-Fi that was formally standardized in IEEE 802.11e in 2005 (after 10 years of work). 802.11e supports two QoS modes: HCF Controlled Channel Access (HCCA) is based on IntServ and strongly resembles LTE-Unlicensed; the more common Enhanced Distributed Channel Access (EDCA) is based on DiffServ and simply provides priority access for network control, voice, and video over common Internet use. EDCA provides four service levels, which Wi-Fi harmonizes with the IEEE 802.1p priority levels by collapsing 802.1p's eight levels into Wi-Fi's four (table 5).
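The collapse of 802.1p's eight user priorities into EDCA's four access categories can be sketched as a simple mapping. The assignment below follows the conventional 802.11 user-priority-to-access-category table as I understand it; the category names are informal:

```python
# The four EDCA access categories, lowest to highest priority.
AC_BK, AC_BE, AC_VI, AC_VO = "background", "best effort", "video", "voice"

# 802.1p user priority (0-7) -> EDCA access category.
# Note that UP 1 and 2 map *below* the default UP 0: background
# traffic yields to ordinary best-effort traffic.
UP_TO_AC = {
    1: AC_BK, 2: AC_BK,   # background
    0: AC_BE, 3: AC_BE,   # best effort (UP 0 is the default)
    4: AC_VI, 5: AC_VI,   # video
    6: AC_VO, 7: AC_VO,   # voice
}

def access_category(user_priority: int) -> str:
    return UP_TO_AC[user_priority]

print(access_category(6))  # voice
```

Each access category gets its own transmit queue and contention parameters in the station, which is how the priority actually takes effect on the air.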

EDCA includes an admission control feature in which stations desiring access to higher-priority levels must ask the access point for permission. Stations send an Add Transmission Specification (ADDTS) request to access points, which the access point may approve, deny, or ignore. The Transmission Specification (TSPEC) informs the access point of desired characteristics per the following format (table 6). In the table, S means specified, X means unspecified, and DC means "do not care."

Consequently, the notion that Wi-Fi is a permissionless system is only partially true. Access to the default priority is permissionless, as it is in LTE, but access to higher priorities for bidirectional flows requires explicit approval by the access point. Wi-Fi QoS levels are important to Wi-Fi ISPs (WISPs).

Internet Architecture and QoS. The original specification for IP lacked the robust QoS mechanisms later specified for IntServ, DiffServ, and MPLS. (45) The Type of Service capability simply passed the preferences of its higher-layer user to the data-link layer without prejudice. (46)

The most widely used data-link layer for the original implementation of IP was ARPANET, a system that honored two priority levels--one suitable for facilitating transfers and the other for interrupting them. Similar priority capabilities existed in the other data-link layers supported by IP: PRNET and SATNET. (47)

IntServ was added to the Internet canon in 1994 and is widely used in LTE. DiffServ was added in 1998, in part to overcome IntServ's deployment complexities. DiffServ provides incremental benefits from incremental deployments. It is widely used inside management domains, for Internet Protocol Television (IPTV) and VoIP, and occasionally between them.
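On Linux-style socket APIs, an application can request DiffServ treatment by setting the DSCP bits of its outgoing packets. A minimal sketch, assuming the Expedited Forwarding code point (DSCP 46) commonly used for voice; whether any router actually honors the mark depends on domain policy, as the text notes:

```python
import socket

DSCP_EF = 46            # Expedited Forwarding per-hop behavior
TOS_EF = DSCP_EF << 2   # DSCP occupies the upper 6 bits of the old ToS byte

# Mark a UDP socket so its datagrams carry the EF code point.
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_EF)
print(s.getsockopt(socket.IPPROTO_IP, socket.IP_TOS))  # typically 184
s.close()
```

This is the whole of DiffServ from the endpoint's perspective: a six-bit label in every packet header, with all the actual queuing behavior living in the routers that interpret it.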

MPLS is an IP sublayer that provides traffic engineering and expedited routing by shortcutting the route lookups that are otherwise performed packet by packet. Once an IP/MPLS router has determined the route to a given destination IP address, there is no reason to search the database (currently more than 300,000 entries) again and again for subsequent packets going to the same network. Like DiffServ, MPLS is primarily used within routing domains but could be extended in principle across consenting domains with appropriate fail-safes and security mechanisms.

One of the more interesting developments in MPLS is optical label switching, a battery of techniques that would extend the use of MPLS into the management of particular light frequencies (lambdas) in DWDM optical systems. (48) In principle, MPLS can be used to manage any channelized system, whether optical, coax, or wireless; MPLS-managed DOCSIS is not out of the question and may have considerable benefits.

Another aspect of Internet QoS innovation is the development of different forms of interconnection. While the Internet core once consisted of the National Science Foundation Network (NSFNET) backbone and a set of tributaries to it, it now consists of a mesh of commercial networks that exchange traffic between one another according to commercial agreements. These agreements are generally of three kinds: settlement-free peering, paid peering, and transit.

Settlement-free peering agreements, also known as settlement-free interconnection, are agreements to exchange traffic between networks of comparable size, scope, capacity, and traffic mix without charge. Paid peering agreements, which date back to the heyday of America Online (AOL), provide direct connection between two networks for a fee based on traffic imbalance, where the network that transmits more than it receives pays.

Transit agreements are subscriber/service-provider agreements between small and large networks where the smaller network pays. Transit agreements typically include Service Level Agreements (SLA), theoretically binding service providers to certain performance parameters and subscribers to volume parameters. The typical SLA is volume based at the peak hour of usage, the so-called 95th percentile.
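The 95th-percentile ("burstable") billing rule can be sketched in a few lines: collect 5-minute utilization samples over the billing period, discard the top 5 percent, and bill the highest remaining sample. The sample data below are invented for illustration:

```python
def ninety_fifth_percentile(samples_mbps):
    """Billable rate under a 95th-percentile SLA."""
    ordered = sorted(samples_mbps)
    # Drop the top 5% of samples, then take the highest remaining one.
    cutoff = int(len(ordered) * 0.95) - 1
    return ordered[cutoff]

# One day of 5-minute samples: 288 readings, mostly 100 Mbps with bursts.
day = [100] * 274 + [900] * 14   # 14 bursts is just under 5% of 288
print(ninety_fifth_percentile(day))  # 100
```

The design choice is deliberate: a subscriber can burst well above its committed rate for up to about 36 minutes a day (5 percent of the samples) without affecting the bill, which tolerates short peaks while still charging for sustained load.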

Pathways through the Internet mesh have always been less uniform and more controlled than is commonly believed. They are distinguished from one another by both SLAs and peering agreements. SLAs are carried out in practice by provisioning links relative to usage: traffic with stringent latency and packet-loss requirements (as specified by contract) is routed through links that are more lightly loaded.

In some cases, high QoS core pathways are implemented as overlay networks that guarantee QoS by admission control. For example, video-conferencing overlay networks are currently sold by specialized providers such as WebEx, Avaya, and Cisco. In some European internet exchange points (IXP), inter-domain QoS is provided by carriers who honor each other's QoS requirements by recognizing agreed-upon DiffServ markings or BGP Community Attributes. Korea Telecom has built a premium backbone network for the firm's IPTV products and has engaged in negotiations to open it to third-party IPTV services for a fee.

While it was once regarded as a truism that congestion does not occur on the Internet core, the deployment of extremely high-speed edge networks such as Korea Telecom's Gigabit Ethernet system makes congestion more evenly distributed than it has been in the past. Congestion can now occur in last mile and middle mile networks before or after they connect to congestion-free core networks. Consequently, extensive traffic management is as important as ever. The uptake of high-bandwidth applications such as 3D Ultra HDTV video streaming, latency-sensitive applications such as immersive video conferencing, and gaming by residential and business users with "fat pipes" in and out of the Internet core creates a need for end-to-end QoS management. It also creates a need for a set of robust and well-integrated protocols from the physical layer to the application layer that are capable of making the Internet's core TCP/UDP/IP protocols work better across a wide range of applications.

Congestion is also quite common at the boundary of directly connected networks with no intermediate backbone. Content delivery networks (CDN) routinely congest ISP networks, because they shift traffic according to server load independent of geographic efficiency at the receiving end. A CDN with excess network capacity but constrained server capacity may serve an end user in New York from a server in California, leaving the ISP to carry packets over long distances. In such instances, ISP users across the country may experience delay induced by congestion. (49)

QoS has applications beyond congestion mitigation, such as service definition. The network-engineering community would not have invested so much time and effort in it if it were not incredibly valuable.

Summary. Data-link layer standards Ethernet, cable modem, mobile broadband, and Wi-Fi incorporate QoS mechanisms that can be operationalized at the network layer by IETF QoS mechanisms IntServ and DiffServ. In one sense, this is remarkable because of the diverse origins and purposes of the data-link layer standards: some are wired and others are wireless; some were created in public fora and others by closed membership groups; some rely on private resources and others on commons; but all share compatibility with IETF standards that were created in advance of or in concert with data-link layer QoS standards.

Despite these differences, designers came to substantially the same conclusions about how to organize network resources to meet application needs. This is no accident: both the Internet and data-link taxonomies reflect the conclusion that efficient network design in a world with diverse applications requires some form of QoS.

In a network of diverse applications, "first come first served" is not the appropriate rule; "the greatest good for the greatest number" is superior. This insight is the fundamental basis of network quality of service.

Network Innovation Policy

Despite the intricate interplay of Moore's Law, network technology, and applications, it has become common in regulatory circles to regard networks as hostile to innovation. The FCC's 2015 Open Internet order is a profoundly divisive document that puts a white hat on edge services firms and a black hat on network service providers. It attributes nefarious motives to network service providers by stressing their incentives, both real and imaginary, for raising prices and blocking access to edge services without considering their counterincentives to keep prices low and utility high to attract and retain customers. (50)

The FCC's order posits a looming monopoly in wired networks without considering the shift that has already taken place from wired to wireless networks. (51) We will soon transport more edge data over wireless technologies than wired ones, if we do not already. (52) All in all, the Open Internet order displays a deep suspicion of network innovation.

The disparate treatment of networks and applications threatens network convergence. If a variety of applications is going to operate on a common network infrastructure, the network needs to adapt to a changing mix of applications just as applications leverage new and more powerful network services. And just as the relationship of networks and applications in the real world is cooperative, the relationship of regulation to both networks and applications needs to be more even-handed and consistent than it is today.

How We Got Here

Network policy has not always been this way. Selected papers delivered at the Telecommunications Policy Research Conference (TPRC), the leading international networking policy conference, reveal that a strong consensus prevailed in the mid-1990s around the proposition that competition was a better way to discipline broadband markets than regulation. This was not simply a political consensus but the collective view of a very distinguished group of scholars, including academics, regulators, and industry figures.

The 1995 TPRC proceedings recognized the Internet's significance in both technology and policy:

The Internet is of great importance: It represents a new integrated approach to the telecommunications industry that raises fundamental policy problems. Steady sustained technological advances in computers and electronics have caused a shift from traditional analog methods of providing communications to digital techniques with extensive computer intelligence. The Internet was created as a system of interconnected networks of communications lines controlling computers. Formally, the Internet is an "enhanced service" under the Federal Communications Commission (FCC) categories and is exempted from regulation. (53)

Thus, the TPRC scholars recognized that the challenges the Internet brings to telecom policy arise from its integrated and competitive nature:

First, the Internet is created by the integration of multiple networks provided by independent entities with no overall control other than a standard for interconnection protocols. The Internet represents the fullest expression to date of the unregulated "network of networks" and is widely expected to be a model for future communications....

Second, the Internet includes the integration of multiple types of service with substantially different technical characteristics onto a single network. The Internet is used to transmit short e-mail messages, graphics, large data files, audio files, and (very slowly) video files....

Third, the Internet and commercial networks connected to it include the integration of the provision of transmission capacity with various degrees of the provision of information. Past telecommunication policies have sharply distinguished between providers of communication capacity and providers of information content.... In the Internet, and in the expected communications industry, those dividing lines are blurred as individual companies provide capacity to transmit communications for others and also provide their own content....

Under the [Telecommunications Act of 1996] it is likely that the entire telecommunications industry will move more toward the competitive network of integrated services that are already observed in the Internet. (54)

The 1996 TPRC proceedings reflected a continuation of the view of the Internet as an integrated, unregulated marketplace for diverse network and content services. One section of the conference examined bilateral payments to support network infrastructure. One paper in this section proposed a "zone-based cost-sharing plan" in which both senders and receivers contribute to the cost of transmission. (55) Others proposed alternate charging plans.

A paper by Camp and Riley addressed the impact of the Internet on First Amendment jurisprudence, which has long recognized four distinct media types (broadcaster, publisher, distributor, and common carrier). The paper argues that network newsgroups combine functions of all four media types and therefore require a novel approach to policy and regulation. (56)

The notion that the Internet requires a different policy response than the single-purpose, geographic telecom monopolies of the past continued through the early 2000s, but the theme hinted at in the 1996 TPRC--that the Internet required a new policy framework--gathered strength. But even then, policy choices ranged from radical deregulation of the Internet to a preference for light regulation that recognized the Internet's unique character. The idea of subjecting the Internet to a modified form of telephone network or cable network regulation was not seriously considered in the 1990s.

At least one paper in the 1996 conference forecast the regulatory arbitrage to come: a group of telephone companies had petitioned the FCC to regulate Internet telephony on the same terms as traditional telephony, so this threat to Internet freedom, made in the name of preserving a legacy industry, had to be addressed. (57)

In the early 2000s, the deregulatory consensus began to fracture. The 2002 TPRC proceedings emphasized institutional adaptation to the Internet and Internet adaptation to institutions. As the introduction to the proceedings points out: "Because technologies are embedded in social systems and are understood in this context, responses to new technologies may be as varied and complex as the social systems that incorporate (or reject) them." (58)

Rather than accepting the opportunities the Internet offered in terms of communication and publishing, institutional guardians were prepared to pick and choose, to name winners and losers, and to accept the Internet with conditions. The advocates for an unregulated or lightly regulated Internet still existed, but they were opposed by regulatory hawks who seemed to fear the impact the Internet might have on vested interests threatened by disruption, including those of telecom regulators. Tim Wu devised his network neutrality notion in 2002 against the backdrop of institutional worries about the Internet's social impact. (59)

While the policy status quo of the mid-1990s was closely aligned with technical developments in the Internet engineering space such as DiffServ and IntServ, net neutrality advocated a retreat to a more primitive Internet that was less powerful, less socially disruptive, and, most especially, less challenging to regulate. The policy retreat was motivated by a host of factors, which would require an additional paper to demonstrate fully. The following sections touch on the highlights of the retreat.

Excusing Policy Inequity

The belief that networks are less innovative than applications drives a divide-and-conquer strategy, which gives regulators extraordinary control over the business models and practices of network service providers in return for lax oversight of similar models and practices by application providers. One example of such inconsistent regulatory approaches is differential privacy regulation, in which carriers' harvesting of information about users' web habits is severely restricted but similar practices by edge providers such as Google are not. This view is severely biased.

Networks and applications are different, of course, just as there are differences within the networks and applications categories. Wired and mobile networks are significantly different and so are content-oriented applications such as video streaming and communication-oriented applications such as video conferencing. Some applications provide services to other applications through Application Programming Interfaces (APIs), and other applications use these APIs to serve end users. Facebook and Google Maps are both end-user applications and API platforms for other end-user applications, such as Angry Birds and Zillow.

But the dynamics of ingenuity, risk, and innovation are largely the same inside networks as they are in the application platforms and services that rely on networks. As previously noted, the technology base is built in substantially the same way in both spheres: technology advances in one sector drive advances in all other sectors.

The divide-and-conquer strategy has harmful implications for innovation. If networks and applications are fundamentally different, it makes sense to create legal firewalls between them, such as the open Internet regulations that are premised on the fear of tacit collusion (or at least parallel behavior) among networks. But if networks and applications are fundamentally complementary and similar, each can influence the other to develop in a more cooperative and constructive fashion as long as they are not unreasonably restricted.

Upon discovering an orchid with a foot-long nectar spur in 1862, naturalist Charles Darwin predicted a gigantic moth would be found with the ability to pollinate it; the prediction was finally confirmed in 1992. Just as moths and orchids coevolve, so do networks and applications. (60)

But network/application coevolution is not one-dimensional. Networks become not only faster but also more reliable, pervasive, and adaptable to particular application needs such as low cost and controlled latency. The coevolution perspective allows us to view the development of new network-service models as potentially beneficial, while the firewalled perspective sees them as only harmful.

Analysis of the operation of Moore's Law provides policy insight. Because Moore's Law drives innovation in networks and applications, these policy insights are equally valid in both spheres.

Gordon Moore decomposed the magic of Moore's Law into three elements: "decreasing component size, increasing chip area, and 'device cleverness,' which referred to how much engineers could reduce the unused area between transistors." The analogies in networks are increasing the number of bits the network can carry each second (decreasing bit size), increasing wire capacity by channel bonding or fiber upgrades, and harvesting bandwidth that would otherwise go to waste.

By comparison with the circuit-switched telephone network, the packet-switched Internet excels at cleverness. Instead of allowing unused capacity to go to waste, it allows each application to use the network's entire capacity each time it transmits a unit of information, known as a packet.

This is an important feature that enables Internet technology to serve a wide variety of applications better than a circuit-switched network can. The telephone network is better in some respects than the Internet for carrying telephone calls, but the Internet is better at everything else--from email to web browsing to video streaming. This property motivated the development of IETF QoS. As the initial design document for DiffServ suggests, differential pricing may be necessary to fully implement QoS services: "Service differentiation is desired to accommodate heterogeneous application requirements and user expectations, and to permit differentiated pricing of Internet service." (61) It is difficult to see how the goals of DiffServ can be achieved under current FCC regulations, however.
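DSCP marking of the kind DiffServ defines can be requested at the host level today; the code points live in the former Type of Service byte of each IP header. A minimal sketch follows, assuming a Linux-style sockets API; the EF value and the `IP_TOS` option are standard, but whether any given network honors the mark is up to each operator:

```python
import socket

# Expedited Forwarding (EF), the DiffServ class typically used for voice,
# is DSCP 46. The six DSCP bits occupy the top of the old ToS byte, so
# the byte value is 46 << 2 = 184.
EF_DSCP = 46
TOS_BYTE = EF_DSCP << 2

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_BYTE)

# Every datagram this socket sends now carries DSCP 46; DiffServ-aware
# routers may queue it ahead of best-effort traffic, or ignore the mark.
print(sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS))  # 184
sock.close()
```

Whether the mark is honored end to end depends on interconnection agreements between networks, which is precisely the gap that DiffServ's differentiated-pricing language anticipated.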

The application analogy to "decreased component size" is the windfall to computation requirements that comes from faster processors. "Increased chip area" corresponds to large memory, and "cleverness" comes about from the ability of applications to marshal extraordinary computation capabilities from internal and external resources. Policy is wisely unconcerned about these application features.

The ultimate rationale for the FCC's ban on QoS in the Open Internet order is the agency's fear that a QoS market would inherently advantage networks and disadvantage edge services, especially small ones. The agency has no fear of diverse applications, but it refuses to tolerate Internet standards and industry practices that would monetize QoS. To understand these fears, it is necessary to examine arguments for differential regulation of networks and applications generally.

Arguments for Differential Regulation

A number of arguments have been made to justify heavy-handed regulation of the networking sector and light-touch regulation of the applications sector. This section is a high-level survey of these claims.

The Monopoly Argument. Proponents of restricting the ability of ISPs to allocate resources more efficiently generally argue that the near-monopoly condition that prevails in wired residential broadband-service markets justifies aggressive regulatory intervention. Law professor Susan Crawford claims the broadband market is effectively a monopoly:

It may be time for yet another label to enter the lists: "the looming cable monopoly." It is gaining strength, and it is not terribly interested in the future of the Internet. This is the central crisis of our communications era. (62)

Indeed, cable companies control 60--65 percent of the market for residential broadband services (if wireless is excluded), while traditional telecom firms and new entrants control the other 35--40 percent. (63) The FCC exaggerated this market-share disparity by cleverly redefining "broadband" to mean speeds at or above the 25 Mbps level. This move converted a 60/40 market into one in which 61 percent of consumers have either no choice or only one choice for true broadband service. (64)

However, substantial market shares are nothing special in the Internet ecosystem. The 60/40 split between cable companies and telecoms is more equal than the market-share division for smartphones, in which 83 percent use Google's Android, 14 percent use Apple's iOS, and the remainder are split among several options. (65) Still, no one is talking about a dangerous smartphone monopoly.

Similarly, even if the FCC's new definition is valid and only 39 percent of Americans have broadband choice, the broadband market is substantially less concentrated than the market for desktop operating systems: Microsoft has an 88 percent share, and the nearest competitor, Apple, has only 9 percent. (66) This is not a perfect comparison, because every American with the money could buy an Apple computer if she wanted; but the fact is she does not.

In any event, it takes more than a temporary market-share disparity to make a dangerous monopoly that harms consumers and innovation. The reason is that in dynamic and highly innovative markets, market share does not automatically translate to market power. Take Microsoft, for example. Microsoft dominates the market for common office applications, but it has not been able to leverage its desktop dominance into control of browsers or mobile operating systems. In reality, Apple is gaining desktop and laptop share, and Microsoft is opening its operating systems in unprecedented ways:

Microsoft's software empire rests on Windows, the computer operating system that runs so many of the world's desktop PCs, laptops, phones, and servers. Along with the Office franchise, it generates the majority of the company's revenues. But one day, the company could "open source" the code that underpins the OS--giving it away for free. So says Mark Russinovich, one of the company's top engineers. (67)

Abusive monopolists do not generally give products away for free. Network services firms utilize this strategy as well: Google offers free use of its networks at 5 Mbps, and T-Mobile provides free use of its mobile broadband service for up to 200 megabytes of data per month. (68) These strategies maintain market share at the expense of market power, because they do not increase profit.

Technologies and platforms are fiercely competitive. Cable's high share of the 25 Mbps networking segment has not prevented all major telecoms (AT&T, Verizon, and CenturyLink) from upgrading DSL networks to speeds four times faster than the old ones or extending fiber service at speeds 40 or 50 times faster than the FCC's 25 Mbps benchmark. Nor is cable standing still: Comcast has announced Atlanta will soon have the option of purchasing 2 Gbps residential broadband services, the fastest in the world. (69)

The wired broadband market also faces potential competition from advanced wireless networks that fall just short of the FCC's arbitrary standard today but that are certain to exceed it in the near future if the FCC permits operators access to the necessary spectrum. Many user groups around the world are trending toward the exclusive use of mobile networks, and 15 percent of Americans today access the Internet predominantly from mobile networks at home. (70)

While it is certainly true that the small screens on smartphones do not provide convenient access to the whole web, most can serve as mobile hot spots to connect full-size computers. Data limits prevent hot spots from serving as portals to streaming video services, but the web has much more than TV reruns, and data limits are becoming more expansive. (71) In the near future, we might very well see wireless networks challenge the dominance of today's wired networks at very high data rates.

So, as much as regulation advocates love to invoke the "cable monopoly" narrative, it is neither true in the present nor likely to be true in the future.

The Falling-Behind Argument. ISP critics also argue that US broadband service is falling behind the international standard for speed and quality. This argument does not square with regulatory tactics that not only discourage investment but also fail to encourage competition; its appeal seems to be a general indictment of the traditional light-touch regulatory consensus.

The falling-behind argument also does not square with the facts. As I pointed out in a previous paper, broadband service is faster and more heavily used in the United States than in comparable nations. (72) Bret Swanson, Christopher Yoo, Roger Entner, Roslyn Layton, and Michael Horney have made similar observations, because the data is unequivocal. (73)

Some, including the New America Foundation's Open Technology Institute (OTI), have chosen to look at very different facts, however. Its Cost of Connectivity report series ignores national data on actual speeds and prices in favor of advertising claims by small ISPs who serve limited portions of large cities. These data lead OTI to conclude that Americans are getting a raw deal: "Overall, the data that we have collected in the past three years demonstrates [sic] that the majority of U.S. cities surveyed lag behind their international peers, paying more money for slower Internet access." (74)

This assertion is transparently false. Ookla's Net Index has consistently reported that average download speeds in the US are higher than the averages for the G8, OECD, European Union, and APEC (table 7). (75)

Because broad surveys do not support the falling-behind claim, some advocates of heavy regulation have turned to price or adoption figures to support their claim. As I discussed in G7 Broadband Dynamics, the quest for gloom among these other datasets also fails. (76)

The Incentives Argument. As previously noted, the FCC offers a very general argument to the effect that network service providers have incentives to abuse customers and edge providers, but this argument fails in most cases because it ignores the effects of competition and emerging technologies--and because there is simply no evidence of such behavior occurring in the real world. It is true but trivial that competitive firms have the incentive, and in the abstract may also have the ability, to extract rents from rivals and partners alike. The question is whether they are constrained by competitive forces from doing so. All the evidence indicates the answer to that question is "yes."

An Example of Differential Regulation

The FCC's Open Internet order bans paid prioritization, a means by which real-time applications can reliably work around the congestion caused by other applications on broadband networks. Regardless of the capacity (or speed) of broadband networks, moments of congestion in the last mile and middle mile are utterly unavoidable.

It is not obvious to policy analysts why this should be the case, because many assume that adding capacity makes congestion permanently disappear: "The net neutrality debate is technically a choice about how to respond to congestion and packet loss. One solution is to increase capacity in the network to accommodate an increase in traffic flow." (77)

But this is a misunderstanding of congestion, which can occur moment to moment even on networks that maintain low average levels of packet loss and delay. One reason for momentary congestion is the bandwidth-seeking behavior of the Internet's TCP protocol, but there are others.
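TCP's bandwidth-seeking behavior can be illustrated with a toy additive-increase/multiplicative-decrease (AIMD) model. The capacity figure and units below are illustrative, not drawn from any real network:

```python
# Toy AIMD sketch of TCP's bandwidth-seeking behavior: the congestion
# window grows by one unit per round until it overshoots capacity and
# triggers loss, at which point it is cut in half.
def aimd(rounds, capacity=100):
    cwnd = 1
    history = []
    for _ in range(rounds):
        history.append(cwnd)
        if cwnd > capacity:      # queue overflows: packet loss
            cwnd = cwnd // 2     # multiplicative decrease
        else:
            cwnd += 1            # additive increase, probing for headroom
    return history

h = aimd(300)
# The window repeatedly climbs past capacity and is halved, producing
# the sawtooth that creates momentary congestion even when average
# utilization looks moderate.
print(max(h), min(h[150:]))
```

Every TCP flow probes for spare capacity this way, so a link shared by many flows experiences brief overloads even when its average load is well below capacity.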

Expert analysis by the Broadband Internet Technical Advisory Group has shown that video streaming services typically deliver information to networks in clumps, where packets are delivered back-to-back for 10 seconds or so. (78) Such periods of activity allow servers to optimize disk access, but they produce moments of congestion for networks. Following packet clumps, video servers are silent for a period that can be longer than the clump.

Each clump of packets is stored in memory on the end-user device--a PC, Xbox, Amazon Fire TV, or TiVo--until it is needed. Video rendering, the process of converting network packets to video images, is a strictly time-controlled process. Rendering devices typically display a sequence of still pictures 30 or 60 times a second to create the illusion of motion, no more and no less. These systems work well as long as they have the next picture on hand while displaying the current one. While having more pictures on hand provides an insurance margin, the user does not see the difference.

Clumping makes hundreds of pictures available to the rendering device before they are actually needed, and it does this at the expense of other activities taking place on the network connection. A Skype video call, for example, also relies on the rendering of pictures in a series, but video-call pictures are not transmitted in clumps for a very good reason: video-call pictures literally do not exist until a fraction of a second before they are transmitted.

Movies are stored on servers all around the web, but video-call pictures are created by a camera and transmitted as soon as they are captured. Video-call pictures need to synchronize with the sounds captured by the microphone. They also need to be received by the other party within one-tenth to one-fifth of a second of when they are created.

When a household's broadband connection carries a video stream and a video call at the same time, the video stream inevitably degrades the video call. This problem can be overcome by the ISP without degrading the video stream, provided the ISP scans the packets that make up the stream and the call and reorders them appropriately. In other words, the video call should have higher priority than the video stream. Network devices have the power to do this today, because they include circuits that can scan and reorder packets in real time.
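The reordering described above amounts to a two-class priority queue. The sketch below is a simplified software analogue of what router hardware does in real time; the class names and packet labels are hypothetical:

```python
import heapq

class PriorityScheduler:
    """Dequeue latency-sensitive packets ahead of bulk packets,
    preserving arrival order within each class."""
    CALL, STREAM = 0, 1  # lower number = higher priority

    def __init__(self):
        self._heap = []
        self._arrivals = 0

    def enqueue(self, priority, packet):
        heapq.heappush(self._heap, (priority, self._arrivals, packet))
        self._arrivals += 1

    def dequeue(self):
        return heapq.heappop(self._heap)[2]

sched = PriorityScheduler()
sched.enqueue(PriorityScheduler.STREAM, "stream-pkt-1")
sched.enqueue(PriorityScheduler.STREAM, "stream-pkt-2")
sched.enqueue(PriorityScheduler.CALL, "call-pkt-1")  # arrives last

# The call packet jumps the queue; the stream packets still go out in
# their original order, merely a moment later.
print(sched.dequeue())  # call-pkt-1
print(sched.dequeue())  # stream-pkt-1
```

Note that the stream loses almost nothing: its packets are delayed by microseconds, well within the margin its playout buffer absorbs.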

Recognizing and prioritizing activities also takes place within end-user computer systems: Microsoft Windows and Apple OS X routinely perform these activities on behalf of the various applications that multitask within our computers. It is not controversial when operating systems allocate resources to applications according to their requirements, but it is tremendously controversial (indeed, it is forbidden under the Open Internet order) when ISPs use the same tools to perform the same tasks within network elements on behalf of network applications. This peculiar disconnect is at the heart of the net neutrality controversy.

The Firewall Model of Internet Regulation

Net neutrality is an example of the firewall model of network regulation; the primary proponents of this theory are Harvard Law School Professor Larry Lessig and his protégés and followers. Lessig's seminal statement of the ideal state of affairs was spelled out in a now-famous passage in his first book on the laws of cyberspace:

Like a daydreaming postal worker, the network simply moves the data and leaves interpretation of the data to the applications at either end. This minimalism in design is intentional. It reflects both a political decision about disabling control and a technological decision about the optimal network design. (79)

Hence, Lessig's aspiration was to enforce a single law on ISPs that would later be named "network neutrality" by his student Tim Wu. (80) The major shortcoming of Lessig's law is highlighted in Wu's seminal net neutrality paper:

Proponents of open access have generally overlooked the fact that, to the extent an open access rule inhibits vertical relationships, it can help maintain the Internet's greatest deviation from network neutrality. That deviation is favoritism of data applications, as a class, over latency-sensitive applications involving voice or video. There is also reason to believe that open access alone can be an insufficient remedy for many of the likely instances of network discrimination. (81)

Wu is saying that treating all packets of information the same harms latency-sensitive applications such as gaming and conferencing and helps volume-intensive applications such as large file transfer, video streaming, and web browsing. Hence, a pure neutrality rule only succeeds in promoting application development and use for the "right kind of applications," those that move large amounts of data and are insensitive to the delay and loss of individual packets.

But Wu's issue was addressed by the designers of the Internet protocols (Vinton Cerf, Robert Kahn, Jon Postel, Bob Metcalfe, Yogen Dalal, Gerard Le Lann, and Alex McKenzie), who realized application bias should be avoided. This realization was the reason why IP permits applications to specify a desired "Type of Service" in every packet header. (82) The original Internet was a collection of three networks--ARPANET, SATNET, and PRNET--each of which permitted applications to identify their desired level of service from the network; this fact is memorialized in the early Internet RFCs. (83) The IP specification spelled out eight Type of Service precedence options from most urgent to least, for example. (84)

As explained much more fully above, the design of IP also includes options for delay, throughput, and reliability. These options were subsequently redefined by the DiffServ protocol but never abandoned. (85) Consequently, Lessig's law and Wu's reservations about application bias are indicative of faulty understandings of the Internet architecture.
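The RFC 791 header layout is easy to render concretely: three precedence bits followed by the delay, throughput, and reliability flags. The helper function below is illustrative, but the bit positions and precedence names come from RFC 791 itself:

```python
# RFC 791 precedence levels, most urgent first, occupy the top three
# bits of the IPv4 Type of Service byte.
PRECEDENCE = {
    7: "Network Control", 6: "Internetwork Control", 5: "CRITIC/ECP",
    4: "Flash Override", 3: "Flash", 2: "Immediate",
    1: "Priority", 0: "Routine",
}

def tos_byte(precedence, low_delay=False, high_throughput=False,
             high_reliability=False):
    """Assemble a ToS byte: 3 precedence bits, then the D, T, and R
    flags defined in RFC 791 (the remaining bits were reserved)."""
    tos = precedence << 5
    if low_delay:
        tos |= 0b00010000
    if high_throughput:
        tos |= 0b00001000
    if high_reliability:
        tos |= 0b00000100
    return tos

# A voice packet might ask for Priority precedence with low delay.
print(tos_byte(1, low_delay=True))  # 48
```

The point is simply that service differentiation was designed into the first byte of the very first IP header, not bolted on later.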

The Integrated Model of Internet Policy

Lessig's error also reflects a simple but profound misunderstanding of the functions of layers in network architecture and of the means by which the interactions among those layers traditionally have been defined. As depicted in table 1, the task of creating standards for the Internet has traditionally been divided among committees that operate in parallel. Parallel design supports standards modularity, which is generally beneficial to standards upgrades and application diversity.

Modular standards permit subgroups to work on discrete problems in parallel, speeding up the standards development process. The Internet includes one transport layer standard for elastic applications, TCP, and another for real-time applications, RTP. (86) It also includes one network layer standard for internetworking with small addresses, IPv4, and another for large addresses, IPv6.

Below the IP layer are many data-link protocols for wired, wireless, and satellite networks. Above the transport layer we find the myriad applications such as the Web, video streaming, and conferencing.

While these layers are distinct, they are meant to communicate with each other. RTP, for example, is a provider of services to applications but a consumer of the services provided at lower layers. Consumers in layered protocol models specify options and request services, and providers do their best to perform accordingly. This is not unlike the way the real postal network behaves: consumers request a service level and pay for it, and the system does its best to deliver the package (without the daydreaming Lessig imagines).

Hence, the integrated model of Internet policy does not concern itself with simply permitting or banning particular services without regard to their utility. Rather, it examines the conditions of sale of network services and the veracity of provider claims, as regulatory bodies do in most markets. This policy model is more consistent with the Internet's actual architecture and history than the "mother-may-I" firewall model.

Conclusion

All technology is dynamic, and none changes faster than information technology. Moore's Law drives advances in networks, devices, and applications, and innovators develop clever combinations of IT systems and software into novel applications. The Internet and its constituent networks are a work in progress and always will be.

Throughout the 1990s, the shared view among network technologists, policy analysts, and regulators promoted Internet convergence: the migration of diverse applications such as the World Wide Web, telephone service, cable TV, and mobile applications to a common communication platform, the deregulated Internet. This consensus shattered because of market concentration fears and the difficulty of creating novel regulatory paradigms for the convergence economy. The current legal status quo, as promulgated in the Open Internet order, maintains deregulation for applications but regulates communication networks with traditional telephone network rules and constructs.

The new status quo is a counterproductive expression of a failure to comprehend the Internet's dynamic character. The technical struggle to develop QoS models for the multiservice Internet has been ongoing for 40 years and will probably never be finished.

QoS models such as DiffServ are vital parts of service provider networks today, and IntServ is an essential component of LTE mobile networks. The refusal of regulators to permit the full implementation of Internet standards is a barrier to innovation.

The FCC's 2015 Open Internet order is a policy blunder of epic proportions, echoing the agency's refusal to allocate spectrum to cellular networks in the 1950s and 1960s. Rather than reaching back to its history as an imperious telephone regulator, the FCC would be wise to adopt a more humble role that is more respectful of the Internet's own history and architecture. An expert analysis of Internet standards cannot support the FCC's rash action.

The Internet is a vital element of the economy and of modern liberal democracy generally. It cannot develop its full potential to enhance social welfare and quality of life if it remains crippled by arbitrary constraints. The integrated model of Internet policy recognizes the revolutionary nature of the Internet and will facilitate Internet convergence. The task of conceptualizing goals and expectations for the Internet is more challenging than simply force-fitting Internet services into legacy models, but it will ultimately yield greater rewards.

We need not fear the new applications made possible by evolving models of Internet service; to the contrary, the greatest risk to the continued development of the Internet economy is stagnation. If the Internet is not free, all of us shall suffer from lost opportunities for innovation because we will have romanticized the past and grown too comfortable with the status quo.

Notes

(1.) Vaclav Smil, "Moore's Curse," IEEE Spectrum 52, no. 4 (April 2015): 26.

(2.) Ibid.

(3.) Chris Mack, "The Multiple Lives of Moore's Law," IEEE Spectrum 52, no. 4 (April 2015): 30-33.

(4.) Cade Metz, "IBM's New Carbon Nanotubes Could Move Chips beyond Silicon," Wired Magazine, October 10, 2015,

(5.) "Researchers First to Create a Single-Molecule Diode," May 25, 2015,

(6.) Sebastian Anthony, "Beyond Silicon: IBM Unveils World's First 7nm Chip," Ars Technica UK, July 9, 2015,

(7.) Martin Blanc, "Taiwan Semiconductor Confirms Release of 7nm Processors by 2017," Bidness Etc, May 31, 2015,

(8.) Robert F. Service, "Breaking the Light Barrier," Science 348, no. 6242 (June 26, 2015): 1409-10; and Anne Morris, "ITU Outlines 5G Roadmap towards 'IMT-2020,'" FierceWireless.Europe, June 22, 2015,

(9.) This observation has exceptions: some applications, such as search, domain name service, the web, and social networks, are both application and infrastructure because they offer services to other applications through application programming interfaces (APIs). Such "infrastructure applications" are enablers of additional applications: Periscope would not exist without a Twitter to run on, for example.

(10.) Bret Swanson, Moore's Law at 50: The Performance and Prospects of the Exponential Economy, American Enterprise Institute, November 2015,

(11.) Nicolas Mokhoff, "Semi Industry Fab Costs Limit Industry Growth," EE Times, October 3, 2012,

(12.) Adam Cohen, The Perfect Store: Inside eBay (Boston: Little, Brown and Co., 2002); and Evan Carmichael, "Business Ideas: 3 Business Lessons from Pierre Omidyar,"

(13.) Katharine Gammon, "What We'll Miss about Bill Gates--A Very Long Good-Bye," Wired Magazine, May 19, 2008,

(14.) Nicholas Carlson, "At Last--The Full Story of How Facebook Was Founded," Business Insider, March 5, 2010,

(15.) Rick Broida, "Use SpeedTest to Help Diagnose Internet Problems," PC World, April 23, 2013,

(16.) Trevor Gilbert, "The Problem with Dumb Pipes," Pando Daily, February 27, 2012,

(17.) Hal Singer, Three Ways the FCC's Open Internet Order Will Harm Innovation, Progressive Policy Institute, May 19, 2015,

(18.) Richard Bennett, Designed for Change: End-to-End Arguments, Internet Innovation, and the Net Neutrality Debate, Information Technology and Innovation Foundation, September 2009,; and Richard Bennett, Remaking the Internet: Taking Network Architecture to the Next Level, Research Program on Digital Communications, Time Warner Cable, Summer 2011,

(19.) Jon Postel, "RFC 791: Internet Protocol," DARPA Internet Program, September 1981,; S. Blake et al., "RFC 2475: An Architecture for Differentiated Services," Internet RFC, December 1998,; and R. Braden, D. Clark, and S. Shenker, "RFC 1633: Integrated Services in the Internet Architecture: An Overview," Internet RFC, June 1994,

(20.) Akamai Technologies, Akamai's State of the Internet: Security Report, May 19, 2015,

(21.) S. Bellovin, "Security Requirements for BGP Path Validation," RFC Editor, August 2014,; and P. Mockapetris, "RFC 882: Domain Names: Concepts and Facilities," Network Working Group, November 1983,

(22.) P. Eardley, "RFC 5559: Pre-Congestion Notification (PCN) Architecture," Network Working Group, June 2009,; Th. Knoll, "BGP Extended Community for QoS Marking," Inter-Domain Routing Working Group, January 22, 2015,; and R. Geib and D. Black, "DiffServ Interconnection Classes and Practice," TSVWG, March 9, 2015,

(23.) Syed Ahson and Mohammad Ilyas, IP Multimedia Subsystem (IMS) Handbook (Boca Raton: CRC Press, 2009),

(24.) Federal Communications Commission, Report and Order on Remand, Declaratory Ruling, and Order in the Matter of Protecting and Promoting the Open Internet, February 26, 2015,

(25.) Broadband Internet Technical Advisory Group Inc., "Differentiated Treatment of Internet Traffic," October 2015,

(26.) Bob Metcalfe, "Ether Acquisition" (memo, Palo Alto, California, May 22, 1973),

(27.) Ross McIlroy, "History," Ethernet, 2003,

(28.) James Pelkey, "4.10 ALOHANET and Norm Abramson: 1966-1972," in Entrepreneurial Capitalism and Innovation: A History of Computer Communications 1968--1988,

(29.) Butler W. Lampson, in A History of Personal Workstations, ed. Adele Goldberg (Addison-Wesley, 1988), 291-344.

(30.) Digital Equipment Corporation, Intel Corporation, and Xerox Corporation, "The Ethernet: A Local Area Network; Data Link Layer and Physical Layer Specifications," 1980,

(31.) I was the vice chair of the StarLAN task force in 1984--85.

(32.) Urs von Burg, The Triumph of Ethernet: Technological Communities and the Battle for the LAN Standard (Stanford, California: Stanford University Press, 2001).

(33.) Institute of Electrical and Electronics Engineers et al., 802.1D-2004: IEEE Standard for Local and Metropolitan Area Networks: Media Access Control (MAC) Bridges, Institute of Electrical and Electronics Engineers, 2004,

(34.) K. Nichols et al., "RFC 2474: Definition of the Differentiated Services Field (DS Field) in the IPv4 and IPv6 Headers," Internet RFC, December 1998,

(35.) Cable Television Laboratories Inc., "Five Modem Makers' Systems Considered for Cable Data Specifications," September 23, 1996, https://web.archive.org/web/20021021205140/

(36.) Cable Television Laboratories Inc., "Cable RF Specification for High-Speed Data Finalized," March 16, 1997,

(37.) Braden, Clark, and Shenker, "RFC 1633: Integrated Services in the Internet Architecture: An Overview."

(38.) Cable Television Laboratories Inc., "Data-Over-Cable Service Interface Specifications: Converged Cable Access Platform: Converged Cable Access Platform Architecture Technical Report," June 14, 2011,

(39.) "Isochronous" roughly means "at the same time." The name refers to applications whose traffic loads are regular and predictable while running but that may commence or terminate at any time; telephone calls have this property.

(40.) Bennett, Remaking the Internet.

(41.) Michael Cookish, "Video over DOCSIS," Communications Technology, November 1, 2008,

(42.) AT&T, "Testing the First Public Cell Phone Network," AT&T Archives, June 13, 2011,

(43.) International Telecommunication Union, "What Really Is a Third Generation (3G) Mobile Technology,"

(44.) Ibid.

(45.) Braden, Clark, and Shenker, "RFC 1633: Integrated Services in the Internet Architecture: An Overview"; Blake et al., "RFC 2475: An Architecture for Differentiated Services"; and E. Rosen, A. Viswanathan, and R. Callon, "RFC 3031: Multiprotocol Label Switching Architecture," Internet RFC, January 2001,

(46.) J. Postel, "RFC 795: Service Mappings," Internet RFC, September 1981; and Nichols et al., "RFC 2474: Definition of the Differentiated Services Field (DS Field) in the IPv4 and IPv6 Headers."

(47.) Postel, "RFC 795: Service Mappings."

(48.) See, for example, Zuqing Zhu et al., "RF Photonics Signal Processing in Subcarrier Multiplexed Optical-Label Switching Communication Systems," Journal of Lightwave Technology 21, no. 12 (December 2003): 3155-66.

(49.) This section repeated from Bennett, Remaking the Internet.

(50.) Federal Communications Commission, "2015 Open Internet Order."

(51.) Ibid.

(52.) Cisco Systems, "The Zettabyte Era--Trends and Analysis," May 2015.

(53.) Gregory L. Rosston and David Waterman, eds., The Internet and Telecommunications Policy: Selected Papers from the 1995 Telecommunications Policy Research Conference (Mahwah, New Jersey: Lawrence Erlbaum Associates, 1996), 1.

(54.) Ibid., 3-4.

(55.) David D. Clark, "Combining Sender and Receiver Payments in the Internet," in Gregory L. Rosston and David Waterman, eds., Interconnection and the Internet: Selected Papers from the 1996 Telecommunications Policy Research Conference (Mahwah, New Jersey: L. Erlbaum Associates, Publishers, 1997).

(56.) L. Jean Camp and Donna M. Riley, "Bedrooms, Barrooms, and Boardrooms on the Internet," in Gregory L. Rosston and David Waterman, eds., Interconnection and the Internet: Selected Papers from the 1996 Telecommunications Policy Research Conference (Mahwah, New Jersey: L. Erlbaum Associates, Publishers, 1997).

(57.) Robert M. Frieden, "Can and Should the FCC Regulate Internet Telephony?," in Gregory L. Rosston and David Waterman, eds., Interconnection and the Internet: Selected Papers from the 1996 Telecommunications Policy Research Conference (Mahwah, New Jersey: L. Erlbaum Associates, Publishers, 1997).

(58.) Lorrie Faith Cranor and Steven S. Wildman, eds., Rethinking Rights and Regulations: Institutional Responses to New Communication Technologies (Cambridge, Massachusetts: MIT Press, 2003).

(59.) Tim Wu, "Network Neutrality, Broadband Discrimination," Journal of Telecommunications and High Technology Law 2 (2003): 141,

(60.) Edward J. Valauskas, "Darwin's Orchids, Moths, and Evolutionary Serendipity," Chicago Botanic Garden, February 2014,

(61.) Blake et al., "RFC 2475: An Architecture for Differentiated Services."

(62.) Susan P. Crawford, "The Looming Cable Monopoly," Yale Law & Policy Review, Inter Alia, June 1, 2010,

(63.) Leichtman Research Group, "About 385,000 Add Broadband in the Second Quarter of 2014," August 15, 2014,

(64.) Federal Communications Commission, "2015 Broadband Progress Report and Notice of Inquiry on Immediate Action to Accelerate Deployment," February 4, 2015,

(65.) IDC, "Smartphone OS Market Share 2015, 2014, 2013, and 2012,"

(66.) StatCounter, "Top 7 Desktop OSs from Oct 2014 to Oct 2015," StatCounter Global Stats, October 2015, http://gs.statcounter.com/#desktop-os-ww-monthly-201410-201510.

(67.) Cade Metz, "Microsoft: An Open Source Windows Is 'Definitely Possible,'" Wired, April 3, 2015,

(68.) Emil Protalinski, "Google Reveals Fiber Plans for Provo, $30 Construction Fee," Next Web, August 15, 2013; and Cameron Summerson, "T-Mobile Announces Uncarrier for Tablets, 200MB of Free Monthly LTE Data for All Compatible Devices," Android Police, October 23, 2013,

(69.) Comcast, "Comcast Begins Rollout of Residential 2 Gig Service in Atlanta Metro Area," April 2, 2015,

(70.) Aaron Smith, "U.S. Smartphone Use in 2015," Pew Research Center, April 1, 2015,

(71.) Sprint, "Cell Phones, Mobile Phones & Wireless Calling Plans from Sprint," accessed July 15, 2015.

(72.) Richard Bennett, G7 Broadband Dynamics: How Policy Affects Broadband Quality in Powerhouse Nations, American Enterprise Institute, November 2014.

(73.) Bret Swanson, Internet Traffic as a Basic Measure of Broadband Health, American Enterprise Institute, November 20, 2014; Christopher S. Yoo, U.S. vs. European Broadband Deployment: What Do the Data Say?, Center for Technology, Innovation and Competition, June 2014; Roger Entner, Spectrum Fuels Speed and Prosperity, Recon Analytics LLC, September 25, 2014; and Roslyn Layton and Michael Horney, Innovation, Investment, and Competition in Broadband and the Impact on America's Digital Economy, Mercatus Center, August 12, 2014,

(74.) Danielle Kehl et al., The Cost of Connectivity 2014, New America Foundation, October 30, 2014,

(75.) Ookla, "Global Download Speed," July 2015,

(76.) Bennett, G7 Broadband Dynamics, 7.

(77.) Ben Scott, Stefan Heumann, and Jan-Peter Kleinhans, Landmark EU and US Net Neutrality Decisions: How Might Pending Decisions Impact Internet Fragmentation?, Global Commission on Internet Governance, July 2015,

(78.) Broadband Internet Technical Advisory Group, Differentiated Treatment of Internet Traffic, October 2015,

(79.) Lawrence Lessig, Code: And Other Laws of Cyberspace (New York: Basic Books, 1999).

(80.) Wu, "Network Neutrality, Broadband Discrimination."

(81.) Ibid.

(82.) Alex McKenzie, "INWG and the Conception of the Internet: An Eyewitness Account," IEEE Annals of the History of Computing 33, no. 1 (January 2011): 66-71; Louis Pouzin, The Cyclades Computer Network: Towards Layered Network Architectures (New York: North-Holland, 1982); and James Pelkey, Entrepreneurial Capitalism & Innovation: A History of Computer Communications 1968--1988, 2007,

(83.) Postel, "RFC 795: Service Mappings"; and Postel, "RFC 791: Internet Protocol." The exemplary mappings are Network Control, Internetwork Control, CRITIC/ECP, Flash Override, Flash, Immediate, Priority, and Routine.

(84.) Postel, "RFC 795: Service Mappings."

(85.) Blake et al., "RFC 2475: An Architecture for Differentiated Services."

(86.) H. Schulzrinne et al., "RFC 3550: RTP: A Transport Protocol for Real-Time Applications," Network Working Group, July 2003, https://tools.ietf.org/html/rfc3550#section-6.4.


Executive Summary                              1
The Science of Network Innovation              3
 Moore's Law                                   3
 Network Innovation                            6
 Device Innovation                             6
 Application Innovation                        6
The Technology of Network Convergence          8
 Technical Constraints on Network Innovation   8
 Structural Issues                             9
 Overcoming Interconnection Bias              10
 Signs of Progress                            10
 Freeing the Untapped Potential               11
 Case Studies                                 11
Network Innovation Policy                     22
 How We Got Here                              22
 Excusing Policy Inequity                     24
 Arguments for Differential Regulation        25
 An Example of Differential Regulation        27
 The Firewall Model of Internet Regulation    28
 The Integrated Model of Internet Policy      29
Conclusions                                   30
Notes                                         31
About the Author                              36

Table 1. Layered Architecture and Standards

Layer  Name          Function                      Standards Organization

7.     Application   Program-to-Program            W3C, Broadband Forum
6.     Presentation  Data Formats                  W3C/IETF
5.     Session       User to Internetwork          IETF/3GPP
4.     Transport     End-to-End Interactions       IETF
3.     Network       Routing and Addressing        IETF
2.     Data Link     Single Network Behavior       IEEE, 3GPP, Cable Labs, Broadband Forum
1.     Physical      Coding and Signal Processing  IEEE, 3GPP, Cable Labs, Broadband Forum

Source: Author.

Table 2. Wi-Fi Service Classes

ACI  AC     Description

00   AC_BE  Best Effort
01   AC_BK  Background
10   AC_VI  Video
11   AC_VO  Voice

Source: IEEE Standards Association, "IEEE Standard 802.11e-2005," 2005.

Table 3. 802.1p Priority Levels

Priority Code Point  Priority Level  Identifier  Usage

         1            0 (lowest)        BK       Background
         0            1                 BE       Best Effort (Default)
         2            2                 EE       Excellent Effort
         3            3                 CA       Critical Applications
         4            4                 VI       Video, Less Than 100 ms Latency
         5            5                 VO       Voice, Less Than 10 ms Latency
         6            6                 IC       Internetwork Control
         7            7 (highest)       NC       Network Control

Source: IEEE Standards Association, "IEEE Standard 802.1D-2004," 2004.
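The point worth noting in Table 3 is that 802.1p priority code points do not sort numerically: code point 1 (Background) ranks below code point 0 (Best Effort). A small illustrative sketch (the Python names are mine, not from the standard) encodes the table and compares two code points by priority level:

```python
# Illustrative encoding of the IEEE 802.1p priority code point (PCP) table.
# Maps PCP -> (priority level, identifier, usage); level 0 is the lowest.
PCP_TABLE = {
    1: (0, "BK", "Background"),
    0: (1, "BE", "Best Effort (Default)"),
    2: (2, "EE", "Excellent Effort"),
    3: (3, "CA", "Critical Applications"),
    4: (4, "VI", "Video, <100 ms latency"),
    5: (5, "VO", "Voice, <10 ms latency"),
    6: (6, "IC", "Internetwork Control"),
    7: (7, "NC", "Network Control"),
}

def higher_priority(pcp_a: int, pcp_b: int) -> int:
    """Return the PCP whose traffic class takes precedence,
    comparing by priority level rather than code point value."""
    return pcp_a if PCP_TABLE[pcp_a][0] > PCP_TABLE[pcp_b][0] else pcp_b
```

Here higher_priority(0, 1) returns 0: Best Effort outranks Background even though its code point is numerically smaller.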

Table 4. Priority, Delay, and Loss for LTE Bearers

QCI  Bearer Type  Priority  Packet Delay  Packet Loss  Example

1      GBR           2        100 ms       10^(-2)     VoIP call
2      GBR           4        150 ms       10^(-3)     Video call
3      GBR           3         50 ms       10^(-3)     Online gaming (real time)
4      GBR           5        300 ms       10^(-6)     Video streaming
5      Non-GBR       1        100 ms       10^(-6)     IMS signaling
6      Non-GBR       6        300 ms       10^(-6)     Video, TCP-based services (e.g., email, chat, FTP)
7      Non-GBR       7        100 ms       10^(-3)     Voice, video, interactive gaming
8      Non-GBR       8        300 ms       10^(-6)     Video, TCP-based services (e.g., email, chat, FTP)
9      Non-GBR       9        300 ms       10^(-6)     Video, TCP-based services (e.g., email, chat, FTP)

Source: Adrian Basir, "Quality of Service (QoS) in LTE," 3GPP Long Term Evolution (LTE), January 31, 2013.
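Table 4's bearer characteristics amount to a static policy table keyed by QoS Class Identifier (QCI). A minimal illustrative sketch (in Python, not from the paper; the two entries shown use the standardized 3GPP values for QCI 1 and QCI 5) of how software might consult such a table:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class QciProfile:
    gbr: bool       # guaranteed-bit-rate bearer?
    priority: int   # 1 = highest
    delay_ms: int   # packet delay budget
    loss: float     # packet loss rate target
    example: str

# Two entries from the LTE QCI table (hypothetical data structure).
QCI = {
    1: QciProfile(gbr=True,  priority=2, delay_ms=100, loss=1e-2, example="VoIP call"),
    5: QciProfile(gbr=False, priority=1, delay_ms=100, loss=1e-6, example="IMS signaling"),
}

def meets_budget(qci: int, measured_delay_ms: int) -> bool:
    """Check a measured one-way delay against the bearer's delay budget."""
    return measured_delay_ms <= QCI[qci].delay_ms
```

For example, a VoIP packet (QCI 1) measured at 80 ms is within budget, while an IMS signaling packet (QCI 5) at 150 ms is not.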

Table 5. Mapping 802.1D to 802.11e Access Classes

Priority   UP (Same as 802.1D   802.1D         AC      Designation
           User Priority)       Designation            (Informative)

Lowest            1                BK          AC_BK   Background
                  2                -           AC_BK   Background
                  0                BE          AC_BE   Best Effort
                  3                EE          AC_BE   Best Effort
                  4                CL          AC_VI   Video
                  5                VI          AC_VI   Video
                  6                VO          AC_VO   Voice
Highest           7                NC          AC_VO   Voice

Source: IEEE Standards Association, "IEEE Standard 802.11e-2005," 2005.
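In practice, Table 5's eight-row mapping reduces to a simple lookup. The following is a hypothetical Python rendering (the function name and structure are illustrative, not taken from any driver implementation):

```python
def up_to_access_class(user_priority: int) -> str:
    """Map an 802.1D user priority (0-7) to an 802.11e access class,
    following the Table 5 mapping. Illustrative sketch only."""
    mapping = {
        1: "AC_BK", 2: "AC_BK",   # Background
        0: "AC_BE", 3: "AC_BE",   # Best Effort (note: UP 0 outranks UP 1)
        4: "AC_VI", 5: "AC_VI",   # Video
        6: "AC_VO", 7: "AC_VO",   # Voice
    }
    return mapping[user_priority]
```

The four access classes are what the Wi-Fi contention machinery actually arbitrates among, which is why eight user priorities collapse to four queues.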

Table 6. IEEE 802.11e TSPEC

TSPEC        Continuous-   Controlled-     Bursty          Non-QoS   Contention-
parameter    time QoS      access CBR      traffic         traffic   based CBR
             traffic       traffic         (HCCA)          (HCCA)    traffic
             (HCCA)        (HCCA)                                    (EDCA)

Nominal      S             S               DC              DC        S
MSDU Size
Service      S             Nominal MSDU    Mean data       DC        DC
Interval                   size / mean     rate /
                           data rate, if   nominal MSDU
                           specified       size, if mean
                           (VoIP           data rate
                           typically       specified
                           uses this)
Maximum      S             Delay bound /   Delay bound /   DC        DC
Service                    number of       number of
Interval                   retries (AV     retries, if
                           typically       delay bound
                           uses this)      specified
Inactivity   Always specified              DC              DC        DC
Interval
Minimum      Must be       Equal to        X               DC        DC
Data Rate    specified     mean data
             if peak       rate
             data rate
             is specified
Mean Data    S             S               DC              DC        S
Rate
Burst Size   X             X               S               DC        DC
Minimum      Always specified
PHY Rate
Peak Data    Must be       Equal to        DC              DC        DC
Rate         specified     mean data
             if minimum    rate
             data rate
             is specified
Delay Bound  S             S               DC              X         X
Bandwidth    Must be specified if the delay bound          DC        S
Allowance    is present
Medium Time  Not specified by a non-AP QSTA; only an output from the HC

Source: IEEE Standards Association, "IEEE Standard 802.11e-2005," 2005.

Table 7. Net Index from Ookla Download Speeds, July 14, 2015

Region          Average Download
                 Speed, Mbps

United States         37.4
G8                    33.4
European Union        31.8
OECD                  31.4
APEC                  27.6

Source: Net Index from Ookla.
COPYRIGHT 2015 The American Enterprise Institute

Article Details
Author: Bennett, Richard
Publication: AEI Paper & Studies
Article Type: Report
Geographic Code: 1USA
Date: Dec 1, 2015
