Broadband Edge Services Power New Internet.
This article is the first in a two-part series. The second part will appear in the July issue of CTR.
Much of the trade (and, increasingly, mainstream) press coverage of the Internet tends to focus on two distinct growth areas: broadband access and broadband content. The reasons for this coverage are plain, even a bit vanilla. First, new multimedia content is driving--and being driven by--high-speed access at the curb: DSL and cable modems are moving in, dial-up is moving out. Music, Webcasting, and, increasingly, video are becoming common applications that users expect.
Second, the big backbone providers are moving to--or in some cases upgrading existing--fibre, adding new technologies like DWDM to increase bandwidth for the expected spike in demand for broadband services and to deal with exponentially increasing network traffic. And finally, the media sees glowing projections like this one from Forrester Research: the firm predicts that the U.S. market for broadband Internet access will reach $33 billion as soon as 2003.
But many expert observers take a wider view of the meaning of these changes. This view contends that, rather than expanding the existing infrastructure of the Internet vertically (via ever-fatter bandwidth), the underlying architecture needs to go horizontal, via so-called edge services. Of course, this model does not discount the need for more bandwidth; it simply puts more of that bandwidth in a different place. There are a number of industry heavyweights behind the push for broadband edge services, from networking vendors to router and chip makers; these will be discussed later. First, it's worth exploring how such a significant change in the Internet infrastructure might come about.
The Internet is basically a wheel--a "hub-and-spoke" network. Servers at the center of the wheel (the hub) serve requests coming from the edges via the spokes, which are the Internet backbones. This design is both incredibly clever and horribly overutilized. The cleverness is in the redundancy: if one path (spoke) is down or busy, the edge request can move to another edge server and use its spoke to connect to the server at the hub. The problem is that the access points where the backbones (spokes) connect to one another are incredibly congested. For example, a machine serving video content to end users of one backbone may have to send that data across other backbones, depending on the 'net's traffic at any given time.
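The redundancy described above can be reduced to a toy model: a request prefers one spoke into the hub, and fails over to any other live spoke if its first choice is down or busy. The backbone names and up/down states below are purely illustrative, not real topology.

```python
# Toy model of hub-and-spoke failover: an edge request tries its
# primary backbone (spoke) first, then any other live spoke.
SPOKES = {
    "backbone_a": {"up": False},   # primary path is down
    "backbone_b": {"up": True},
    "backbone_c": {"up": True},
}

def route_request(preferred: str) -> str:
    """Return the spoke used to reach the hub, preferring `preferred`."""
    if SPOKES.get(preferred, {}).get("up"):
        return preferred
    # Redundancy: any other live spoke can carry the request to the hub.
    for name, state in SPOKES.items():
        if state["up"]:
            return name
    raise RuntimeError("no path to hub")

print(route_request("backbone_a"))  # primary is down, so a live spoke is chosen
```

The congestion problem the article describes lives precisely in that failover step: every rerouted request crosses from one backbone to another at a peering point.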
The major backbone providers have thus far tried to deal with this problem by making the backbones bigger, to accommodate more packets. This solution, however, is nothing but a stopgap measure: the fatter the pipe gets, the more packets it has to carry. The Internet is akin to a pie that keeps getting bigger while more and more people keep demanding more and more slices, ad infinitum. When you factor in that most of the world is still not connected to the Internet, you begin to see that scalability is a serious problem. But the major backbone providers (MCI/UUNet, Sprint, AT&T) have been relatively good at keeping ahead of traffic thus far. The real problem, many observers contend, is in "peering points," the places where the backbones connect to one another.
"Peering is the Achilles' heel of the whole system," contends Lloyd Taylor, VP of Operations at Internet traffic monitoring company Keynote Systems. "New technologies like DWDM are really staying ahead of the Web's traffic, and new routers are able to handle the loads for the foreseeable future. The bottleneck is in how the backbones connect to one another," Taylor says. "Like many things in the industry, it's unfortunately a business versus technology issue: the technology is there, but the business case is not." Taylor notes that the backbone providers are competing for the same customers; they simply do not have the financial incentive to make it easier for their customers to connect to their competition.
Is there evidence to support the existence of a peering problem? Actually, there is. While monitoring high-level, backbone-to-backbone traffic has historically been difficult, in May Keynote introduced a real-time reporting system on peering-point and backbone latency. Figures 1 and 2 show one of the earliest reports, from May 8, 2000 (see www.keynote.com for the latest statistics). Note that the red (critical latency) problem points occur primarily for throughput over backbone-to-backbone (peering) connections, rather than for throughput within a single backbone.
The other problem with the current structure of the Internet is economic: as the pipes get bigger, broadband access expands, and prices go down. This is a serious problem for Internet service providers, who are seeing their margins shrink at an increasingly unacceptable rate. For example, Jim Metzler, widely recognized as an expert on Internet infrastructure, says that the cost of providing network access to the Internet is plummeting. As recently as two years ago, Metzler contends, corporations were paying as much as $700 a month for T1 access to a POP. Today, they pay around $200. As margins for physical Internet access disappear, providers have only one real choice: expand service offerings, where value-adds and QoS can create opportunities for wider margins and continued profits.
Metzler notes that large service providers are caught in another bind: broadband access equipment like DSL access concentrators is increasingly being supplied by ISPs at low or no cost. "Access is becoming a commodity. How much can you really charge for DSL when T1 access is $200? The issue is what services you are running on top of that access," Metzler says. Another worry for service providers is that access equipment itself is moving away from centralized control (at the telco CO, for DSL) and onto the customer premises (see related story on DSL CPE).
Commoditization And Regionalization
The nature of technology tells us that Internet access will get both faster and cheaper, to the point that service providers, like TV and cable providers, will have to make their profits on content and customization, not equipment (Metzler calls this concept "Personal Content Tunnels"). Just as local sports channels on cable and local news programming bring in big revenue, Internet service providers will need to tailor and customize information for local audiences. Some have called this the "regionalization" of the Internet, where providers offer very specific services to their customers, rather than today's pipe into the vastness with a browser on top.
How will this affect the infrastructure of the Internet? That depends on the application. In the short term (two to three years), the importance of edge services will increase, with static content becoming decentralized and moving closer to the consumer. For example, static content like movies may reside on the ISP's local network, giving local users inexpensive rentals with guaranteed throughput. (If you don't think the location of such services is important, try watching video over the Internet, even with a broadband connection.) Other service providers like ASPs may also move closer to the customer to provide service and performance guarantees.
Metzler argues that this change is virtually inevitable, because it follows computing's historical pattern of centralization to decentralization to centralization again (mainframe to client server to Web server and datacenter). We are nearing the next decentralization stage, because we are approaching the limits of centralization under current technology. However, decentralization of Internet services will work only with particular forms of data, at least initially.
Decentralization And Replication
The problem with moving content to edge networks is in the nature of the content itself. Today's Internet demonstrates a key benefit of centralized content servers: database synchronization. It's relatively easy to keep track of and replicate one, or two, or even ten Web servers when their databases change. But replication of database stores is incredibly complex when the servers are spread all over the country (or the world) and the data needs constant updating. Technologies like server caching and load balancing are today's answer to such problems, because they only need to query a few servers to see when data has changed. If all content services move to the edge of the network, the traffic among caching servers would be astronomical.
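The revalidation idea behind today's server caching can be sketched as follows: rather than pushing every database change out to every edge server, an edge cache keeps a version tag alongside each object and makes one small query to the origin to learn whether the object has changed before serving it. This is a minimal illustration of the concept, not any vendor's implementation; the paths, tags, and content strings are invented.

```python
# Minimal sketch of edge-cache revalidation: the cache stores a
# (version_tag, body) pair and asks the origin for the current tag
# instead of replicating every change outward.
origin = {"/movies/catalog": ("v2", "catalog-contents-v2")}  # path -> (tag, body)

class EdgeCache:
    def __init__(self):
        self.store = {}  # path -> (tag, body)

    def get(self, path: str) -> str:
        cached = self.store.get(path)
        tag, body = origin[path]          # one small query to the origin
        if cached and cached[0] == tag:   # unchanged: serve the local copy
            return cached[1]
        self.store[path] = (tag, body)    # cold or stale: refresh the copy
        return body

cache = EdgeCache()
cache.get("/movies/catalog")   # cold miss: fetched from the origin
cache.get("/movies/catalog")   # revalidated hit: served from the edge
```

The scaling concern in the paragraph above follows directly from this sketch: each edge cache still needs that small origin query, so pushing all content to thousands of edge networks multiplies the revalidation traffic.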
So, the solution will probably be a combination of relatively static content in regionalized networks and highly changeable data residing closer to the hub, as it does today. One additional difference may be that providers begin to apportion services and charge customers based on usage, a model more like today's phone network. While consumers may believe that unlimited dial-up access is a right, not a privilege, this model is likely to become less and less viable as more demanding content services are added to the network. Further, as the Internet moves into the wireless space, service-level agreements among network and content providers are likely to be important, as users will pay for specific content and ignore that which is not valuable to them.
But even with the proper equipment on the network edge, Keynote's Taylor argues that COs and cable head ends are not ready for the influx of new bandwidth. "All of these places have very little room for additional equipment, so what we may see is edge-services co-located in buildings close to the CO or head end, with fibre running between them," Taylor says.
Thus far, the last mile continues to have serious service and congestion problems. RBOCs have been plagued with outages--some lasting days--which can also bring down phone service, depending on the DSL configuration. Taylor contends that the RBOCs are struggling to find qualified technical people to install and troubleshoot DSL service, which is adding to the problem.
Edge-based solutions inevitably will be based not on fatter bandwidth but on traffic prioritization via QoS (Quality of Service) intelligence. However, QoS (that is, packet-based) intelligence adds tremendous cost and complexity to routers, a problem currently being addressed by the IETF. According to officials at Sitara Networks, a maker of QoS software, evolving QoS standards like Differentiated Services (Diff-Serv) and MPLS, while potentially useful, assume that some entity besides the router is prioritizing packets. Today's edge-based QoS solutions come in the form of specific boxes built to handle particular tasks: traffic shaping, policy management, queuing, caching, and so on. Networks of the future are likely to have more integrated solutions at the edge, whereby a single piece of software provides end-to-end traffic prioritization, without the need for standalone QoS modules.
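The division of labor Diff-Serv assumes--some entity marks each packet with a class, and the edge device merely honors the marks when queuing--can be illustrated with a strict-priority scheduler. The class names, priority values, and packets below are invented for illustration; real DSCP code points and queuing disciplines are considerably more involved.

```python
# Hedged sketch of class-based packet prioritization: higher-priority
# classes are dequeued first, FIFO order preserved within a class.
import heapq

PRIORITY = {"voice": 0, "video": 1, "best_effort": 2}  # lower = served first

class PriorityScheduler:
    def __init__(self):
        self._heap = []
        self._seq = 0  # tie-breaker preserving arrival order within a class

    def enqueue(self, packet: str, traffic_class: str):
        heapq.heappush(self._heap, (PRIORITY[traffic_class], self._seq, packet))
        self._seq += 1

    def dequeue(self) -> str:
        return heapq.heappop(self._heap)[2]

sched = PriorityScheduler()
sched.enqueue("web page", "best_effort")
sched.enqueue("movie frame", "video")
sched.enqueue("voip sample", "voice")
print(sched.dequeue())  # the voice packet leaves first despite arriving last
```

Note that the scheduler never decides what counts as "voice"--classification happens upstream, which is exactly the assumption Sitara's officials point to in Diff-Serv and MPLS.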
Next month in Part Two: Satellites, Lasers, and Regulation. The companies, technologies, and business models driving broadband Internet.
Computer Technology Review, June 1, 2000