
ATM concepts, architectures, and protocols.

Asynchronous transfer mode (ATM) is often described as the future computer networking paradigm that will bring high-speed communications to the desktop. What is ATM? How is it different from today's networking technologies? This article is intended to acquaint readers with this emerging technology and describe some of the concepts embodied within it. In order to understand why ATM was created and how it works, we first need to review a bit of computer networking history.

In the 1960s, a worldwide effort began to upgrade public switched telephone systems from all-analog systems to systems supporting a combination of analog and digital signals. The North American effort resulted in a system in which twenty-four 64-Kbps voice signals(1) are multiplexed (with framing) into a single 1.544-Mbps signal. In Europe, a system emerged that multiplexed 30 voice channels (plus signaling and framing channels) for a total rate of 32 x 64 Kbps = 2.048 Mbps. Table 1 depicts the digital signal hierarchies used in North America, Europe, and Japan.

With the introduction of low-loss fibers in the 1970s, optical fiber became the physical transmission medium of choice for high-speed wide-area communication systems. This development, along with research on fast-packet networks, made it possible to transmit digital data faster and more reliably (with fewer bit errors) than ever before. In 1984, an international effort began to standardize worldwide optical signal levels. The proposal that resulted from this effort suggested a hierarchical family of digital signals whose rates are multiples of some basic signal rate. This proposal eventually led to a draft standard of optical rates and formats [2]. The base optical rate and format chosen were a compromise between existing digital hierarchies in use throughout the world. The history of this standardization effort is quite interesting, since the international standards bodies had to develop a universal hierarchy of signal rates that would support existing digital rates (e.g., T1 rates of 1.544 Mbps in the U.S. and 2.048 Mbps in Europe) as well as support higher optical rates in the future. A discussion of how this was accomplished can be found in [2, 21].

The internationalization of signal levels by the standards bodies of the International Telecommunications Union (ITU) resulted in a series of recommendations for a broadband integrated services digital network (B-ISDN). The B-ISDN efforts were driven by the emerging needs for high-speed communications and enabling technologies to support new services in an integrated fashion. The optical data rates, synchronization, and framing format chosen for B-ISDN are called the synchronous digital hierarchy (SDH) in Europe and the synchronous optical network (SONET) in North America. Table 2 shows the rates chosen.

Once the transmission hierarchy for optical signal levels was established as a worldwide standard, work began on a universal multiplexing and switching mechanism to support integrated transport of multirate traffic. The major objective was to support the diverse requirements of multiple-bit-rate traffic sources and provide flexible transport and switching services in an efficient and cost-effective way [10]. In 1988, ATM was chosen as the switching and multiplexing technique for B-ISDN. The ATM standard is designed to efficiently support high-speed digital voice and data communications. The expectation is that by the next decade, most of the voice and data traffic generated in the world will be transmitted by ATM technology [5, 7, 10, 17].
Table 1. Digital Signal Hierarchies (in Mbps)

Level    North America     Europe     Japan

1           1.544 (DS1)     2.048     1.544
2           6.312 (DS2)     8.448     6.312
3          44.736 (DS3)    34.368    32.064

The motivation for ATM is based on the following observation:

In order to provide real-time transport capabilities necessary for future multimedia applications incorporating voice, video and high speed data, networks with high bandwidth and low latency are required. For an application requiring the transmission of large amounts of data in real-time, new network architectures and protocols must be designed which support multiple service classes of data in an efficient and cost effective way.

The ATM paradigm has many objectives. First, ATM must be cost-effective and scalable. It must support applications with diverse traffic characteristics (e.g., multimedia applications) and be able to support multiple data streams with acceptable (guaranteed) delay bounds. Also, ATM must be able to perform a multicast operation efficiently, as many collaborative applications will require frequent use of this kind of operation. Finally, ATM must be interoperable with existing local-, metropolitan-, and wide-area networks (LANs, MANs, and WANs), and should use existing standards and protocols whenever possible.

Many application areas will benefit from the flexibility in switching and high speed that ATM provides. For example, digital medical imaging may be one of the first applications to utilize such a network [4]. A medical imaging application, such as the rendering and transmission of a diagnostic x-ray, often involves 10 to 50 images, amounting to between 2 and 10 gigabits (Gb) of information. In many cases, real-time access is required to enable collaborative discussions between physicians separated by physical distance. Transfers of this volume in such a short time can be carried out only by networks running at multimegabit speeds.

A second application area that will benefit from ATM networks is that of applications normally carried out in a supercomputer center. Massive amounts of computer data are generated (and stored) by supercomputers during the visualization of scientific data. Scientific visualization allows a user to interact with high-definition three-dimensional images in real time. For example, a user may wish to rotate, scale, or change other parameters of the image while simultaneously viewing the effects of the change on the screen. Providing this interactive capability will require a network with very high data rates and low latency.

A third potential application for an ATM network is in the area of distributed network computing. Computers that are geographically distributed (either locally or across a wide area) can be used together to cooperatively solve problems previously requiring large and costly supercomputers. An application program can be partitioned and distributed across a network of computers using the specialized capabilities of each machine (e.g., a vector machine to perform vector operations). In fact, the virtual path concept of ATM will allow virtual connections to be set up between the various processing nodes in the computational pattern that best suits the application (e.g., in the form of a mesh-connected or hypercube configuration). This will reduce the time required to route messages between nodes and allow a constant pipelined data flow to exist between application programs running on different computing nodes.

High-bandwidth applications, such as those described previously, will generate a heterogeneous mix of network traffic. The diversity of traffic generated by these applications is very difficult to support efficiently with existing networks, which simply do not provide the necessary transport facilities for these increasingly important user applications. Conceptually, ATM is capable of supporting data traffic with widely varying service requirements. In contrast to other networking techniques, ATM requires minimal functionality in network nodes and thus allows very high network speeds and low delay to be attained. Furthermore, ATM is a universal methodology that is being specifically designed for this purpose and will be supported internationally.

Small Cells, Virtual Paths, and Virtual Channels

ATM is based on a fixed-size virtual circuit-oriented packet (or cell)-switching methodology. A cell consists of five bytes of header information and a 48-byte information field.(2) The header field contains control information for the cell (such as identification, cell loss priority, and routing and switching information). ATM breaks all traffic into these 53-byte cells. The proper size of a cell was the subject of much debate in the standards committees [16]. The telephone companies wanted a small cell (to reduce delay for voice packets), while the data communications people wanted a big cell (to minimize the amount of segmentation and reassembly that had to be carried out). The committees, after much discussion, narrowed the cell size debate into two choices: 32-byte cells or 64-byte cells. As a compromise, a 48-byte ATM payload was chosen [16].
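The overhead implied by a fixed 48-byte payload can be illustrated with a short sketch (in Python, since the article contains no code of its own). The cell and payload sizes come from the article; the function name and the zero-padding of the final cell are illustrative assumptions:

```python
CELL_SIZE = 53
HEADER_SIZE = 5
PAYLOAD_SIZE = CELL_SIZE - HEADER_SIZE  # 48-byte information field

def segment(data: bytes) -> list:
    """Split a message into 48-byte payloads, padding the last cell with zeros."""
    cells = []
    for i in range(0, len(data), PAYLOAD_SIZE):
        chunk = data[i:i + PAYLOAD_SIZE]
        cells.append(chunk.ljust(PAYLOAD_SIZE, b"\x00"))
    return cells

cells = segment(b"x" * 100)  # a 100-byte message becomes 3 cells (48 + 48 + 4 padded)
```

The 5-byte header carried by every cell, plus the padding in the last cell, is the price paid for fixed-size switching, which is why the data communications camp argued for a larger cell.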
Table 2. SONET Optical Signal Level Hierarchy

Level                     Line Rate (Mbps)

OC-1                          51.84
OC-3                         155.52
OC-9                         466.56
OC-12                        622.08
OC-18                        933.12
OC-24                       1244.16
OC-36                       1866.24
OC-48                       2488.32

Two parts of the header field are of particular interest: the virtual path identifier (VPI) and the virtual channel identifier (VCI). These identifiers are used to determine which cells belong to any given connection. The VPI and VCI values are used by the routing protocol to determine the path(s) and channel(s) a cell will traverse. On each incoming link of an ATM switch, an arriving cell's VPI and VCI values uniquely determine the new virtual identifier to be placed in the cell header and the outgoing link over which to transmit the cell. Two hosts can use a virtual path to multiplex many individual application streams together, using the VCI to distinguish between these streams. The concept of a virtual path was introduced to provide the capability to manipulate a set of ATM connections as one unique channel [6].

Figure 1 illustrates how ATM carries out cell switching. When a connection is set up between two or more hosts on the network, a virtual path is defined between the source and the destination. The connection establishment procedure initializes internal routing tables in the switches. Upon entering an ATM switch, a cell's VPI field is used to select an entry from the routing table that determines which output port the cell should be forwarded to. At the same time, a new VPI value may be placed in the cell and the cell forwarded to the next switch. Over a single virtual path, two hosts may multiplex many individual application streams, using the VCI fields in cells to distinguish among these streams. Thus, virtual paths are essentially bundles of virtual channels that can be multiplexed together. The VPIs are used to establish virtual paths on a semipermanent basis between network endpoints, and VCIs are used to establish virtual links over a given virtual path connection. The network does not interpret or modify the VCI fields of cells on virtual path connections, so the hosts can set up new virtual channels on an established virtual path without having to request them from the network.
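The table-driven VPI translation just described can be sketched as follows. The table contents and the dictionary representation are hypothetical, but the per-hop rewrite of the VPI, with the VCI carried transparently, is the mechanism the article describes:

```python
# Per-switch routing table: (input port, incoming VPI) -> (output port, outgoing VPI).
# VPIs have link-local scope, so the same VPI value can recur on different ports.
routing_table = {
    (0, 17): (2, 42),
    (1, 17): (3, 8),
}

def switch_cell(in_port, header):
    """Forward one cell: look up the output port and rewrite the VPI.
    The VCI is not interpreted on a virtual path connection."""
    out_port, out_vpi = routing_table[(in_port, header["vpi"])]
    new_header = dict(header, vpi=out_vpi)
    return out_port, new_header

port, hdr = switch_cell(0, {"vpi": 17, "vci": 5})  # forwarded to port 2 with VPI 42
```

Because the lookup key is small and fixed-size, this operation is simple enough to implement directly in switch hardware.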

It is worthwhile to note that cells belonging to a connection do not always have to appear after one another in the data stream. Rather, the cells are statistically multiplexed with the amount of bandwidth allocated to a connection determined by the traffic requirement of the connection. ATM allows very efficient utilization of network bandwidth, as statistical multiplexing allows the total bandwidth available to be dynamically distributed among a variety of user applications. This is achieved by selecting virtual channel paths according to the anticipated traffic and allocating the network resources needed. For guaranteed bandwidth applications, users must specify (before the virtual connection is set up) the amount of network resources they require. Users may specify their peak and average data rates, as well as maximum burst size. Using this information, it is up to the network to allocate resources in such a way that almost all information bursts are received intact. (It is possible that some traffic may not be serviced and the packets will be either discarded or retransmitted by the network.) In theory, ATM should be able to ensure consistent performance to users in the presence of stochastically varying traffic.
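How a network might police a declared rate and burst size can be sketched with a simplified token bucket. This is an illustrative stand-in, not the standardized ATM policing mechanism (the generic cell rate algorithm), and the rate and burst values chosen are arbitrary:

```python
class TokenBucket:
    """Simplified policer: admits a burst of up to `burst` cells,
    replenished at `rate` cells per second."""

    def __init__(self, rate, burst):
        self.rate, self.burst = rate, burst
        self.tokens, self.last = float(burst), 0.0

    def conforms(self, t):
        """Return True if a cell arriving at time t conforms to the contract.
        Nonconforming cells could be tagged (CLP set) or discarded."""
        self.tokens = min(self.burst, self.tokens + (t - self.last) * self.rate)
        self.last = t
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

policer = TokenBucket(rate=1000.0, burst=3)
# Three back-to-back cells fit within the declared burst; the fourth does not.
results = [policer.conforms(0.0) for _ in range(4)]
```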

ATM Protocol Reference Model

The ATM protocol reference model is based on standards developed by the ITU. The protocol reference model for ATM is divided into three layers: the physical layer, the ATM layer, and the ATM Adaptation Layer (AAL). The physical layer defines a transport method for ATM cells between two ATM entities. It has a medium-dependent sublayer (responsible for the correct transmission and reception of bits on the physical medium) and a transmission convergence sublayer (responsible for mapping of the ATM cells to the transmission system used). The ATM layer is where transparent transfer of fixed-size 53-byte cells or ATM-layer service data units (ATM SDUs) between communicating upper-layer entities is defined. The ATM layer mainly performs switching and multiplexing functions. The AAL defines a set of service classes to fit the needs of different user requests and converts incoming user requests for services into ATM cells for transport. The ATM-layered network architecture is depicted in Figure 2. These three layers will now be described in more detail.

Physical Layer

The physical layer encodes and decodes the data into suitable electrical/optical waveforms for transmission and reception on the communication medium used. The physical layer also provides cell delineation functions, header error check (HEC) generation and processing, performance monitoring, and payload rate matching of the different transport formats used at this layer.

The physical layer can transfer ATM cells from one user to another in two ways. At the user-network interface (UNI), ATM cells may be carried in an externally framed synchronous transmission structure or in a cell-based asynchronous transmission structure. In North America, SONET, a synchronous transmission structure, is often used for framing and synchronization at the physical layer. The basic time unit of a SONET frame is 125 microseconds.(3) The SONET frame structure is depicted in Figure 3.

The SONET standard defines the optical signal levels, a synchronous frame structure for multiplexed digital traffic, and the operations procedures for the physical layer interface for use in optical networks [2, 5, 10]. The SONET format is currently supported by single-mode fiber, multi-mode fiber, and twisted pair. The basic building block of SONET is synchronous transport signal level 1 (STS-1) with a bit rate of 51.84 Mbps. The STS-1 frame structure can be drawn as 90 columns and 9 rows of 8-bit bytes. The order of transmission of the bytes is row by row, from left to right, with one entire frame transmitted every 125 microseconds. The first three columns of STS-1 contain section and line overhead bytes used for error monitoring, system maintenance functions, synchronization, and identification of payload type. The remaining 87 columns and 9 rows are used to carry the STS-1 synchronous payload envelope. The payload area can carry a DS3/T3 signal, a variety of lower-rate signals (such as several T1 signals), or, alternatively, several ATM virtual circuits.
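The STS-1 numbers quoted above can be checked directly: 9 rows of 90 byte columns every 125 microseconds yields exactly 51.84 Mbps. The resulting 50.112-Mbps envelope rate follows from the same arithmetic, though the article does not quote that figure:

```python
ROWS, COLS, OVERHEAD_COLS = 9, 90, 3
FRAME_PERIOD = 125e-6  # seconds, i.e., 8,000 frames per second

# 9 rows x 90 columns x 8 bits, transmitted every 125 microseconds
sts1_rate = ROWS * COLS * 8 / FRAME_PERIOD                         # 51.84 Mbps
# The 87 columns remaining after section/line overhead carry the payload envelope
envelope_rate = ROWS * (COLS - OVERHEAD_COLS) * 8 / FRAME_PERIOD   # 50.112 Mbps
```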


Higher-rate SONET signals are obtained by byte-interleaving n frame-aligned STS-1's to form an STS-n (e.g., STS-3 has a bit rate of 155.52 Mbps). Optical carrier (OC) levels are obtained from STS levels after scrambling (to avoid long strings of 1s and 0s and allow clock recovery at the receivers) and electrical-to-optical conversion (i.e., STS-n is scrambled and converted to OC-n). The SONET transmission convergence sublayer for an ATM network with a 155.52-Mbps interface is based on the SONET STS-3 structure. An STS-3 carries a single 149.76-Mbps payload in a 155.52-Mbps stream.
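The STS-3 figures also follow from the frame arithmetic. The breakdown of the concatenated STS-3c frame into 270 columns, 10 of which are overhead (9 of transport overhead plus 1 of path overhead), is standard SONET structure assumed here rather than stated in the article:

```python
FRAME_PERIOD = 125e-6  # seconds per SONET frame

sts3_line = 3 * 51.84e6                              # 155.52 Mbps: three interleaved STS-1's
sts3c_payload = 9 * (270 - 10) * 8 / FRAME_PERIOD    # 149.76 Mbps usable for ATM cells
```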

ATM Layer

The ATM layer is a unique layer that carries all the different classes of services supported by B-ISDN within a 53-byte cell [5]. The ATM layer is responsible for cell relaying between ATM-layer entities, cell multiplexing of individual connections into composite flows of cells, cell demultiplexing of composite flows into individual connections, cell rate decoupling or unassigned cell insertion and deletion, priority processing and scheduling of cells, cell loss priority marking and reduction, cell rate pacing and peak rate enforcement, explicit forward congestion marking and indication, cell payload type marking and differentiation, and generic flow control access.

The functionality of the ATM layer is defined by the fields present in the ATM cell header. The cell header contains a generic flow control (GFC) field, the VCI/VPI fields, a payload type indicator (PTI) field, a cell loss priority (CLP) field, and a header checksum field.(4)


The GFC field is used by the UNI to control the amount of traffic entering the network. This allows the UNI to limit the amount of data entering the network during periods of congestion. The VCI/VPI fields are used for channel identification and simplification of the multiplexing process. The PTI field is used to distinguish between user cells and control cells. This allows control and signaling data to be transmitted on a different sub-channel from user data (i.e., separation of user and control data). The CLP field is used to indicate whether a cell may be discarded during periods of network congestion. For example, voice data may be able to suffer lost cells without the need for retransmission, whereas text data cannot. In this case, an application may assign the CLP field for voice traffic a higher cell loss priority. The header checksum field is used to protect the header field from transmission errors.
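The header layout can be made concrete by packing the fields into bytes. The field widths assumed here (GFC 4 bits, VPI 8, VCI 16, PTI 3, CLP 1 at the UNI) and the HEC computation (a CRC-8 with generator x^8 + x^2 + x + 1, XORed with 0x55) follow ITU-T I.361/I.432; the article does not spell them out:

```python
def pack_uni_header(gfc, vpi, vci, pti, clp):
    """Pack the first four bytes of a UNI cell header:
    GFC(4) | VPI(8) | VCI(16) | PTI(3) | CLP(1)."""
    word = (gfc << 28) | (vpi << 20) | (vci << 4) | (pti << 1) | clp
    return word.to_bytes(4, "big")

def hec(header4: bytes) -> int:
    """Header error check: CRC-8 over the four header bytes,
    generator x^8 + x^2 + x + 1, result XORed with 0x55."""
    crc = 0
    for byte in header4:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ 0x07) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc ^ 0x55

hdr4 = pack_uni_header(gfc=0, vpi=1, vci=5, pti=0, clp=0)
cell_header = hdr4 + bytes([hec(hdr4)])  # the full 5-byte header
```

Because the HEC covers only the 4 preceding header bytes, a receiver can verify (and even correct single-bit errors in) the header without touching the 48-byte payload.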

It is important to realize that the functions performed by the ATM layer are designed to be carried out in hardware at very high data rates. At Gbps speeds, cells will have to be processed at fractions of a microsecond. Whether this can be achieved with today's electronics at reasonable cost and high reliability is yet to be determined.

In order for several ATM channels to be supported in a single SONET STS-n frame, the rate of valid cells must be adapted to the capacity of the transmission payload. To achieve proper alignment, idle cells are inserted into and extracted from the synchronous frame structure at the endpoints of the network. A pointer (carried in the overhead bytes of the STS header) is used to indicate the position of the cell within the payload frame. Thus, cells do not have to be strictly frame-aligned with the underlying payload signal. This is precisely the meaning of the "A" in ATM. It also explains why each cell must carry its own identification in the header, the VPI and VCI fields, rather than being identified by its time-slot position.

ATM Adaptation Layer

The purpose of the AAL is to provide a link between the services required by higher network layers and the generic ATM cells used by the ATM layer. Four service classes have been defined. The classification is performed according to three parameters: time relation between the source and the destination, constant or variable bit rate, and connection mode. The service classes are:


1. Class A - a time relation exists between the source and the destination, the bit rate is constant, and the service is connection-oriented (e.g., a voice channel).

2. Class B - a time relation exists between the source and the destination, the bit rate is variable, and the service is connection-oriented (e.g., a video or audio channel).

3. Class C - no time relation exists between the source and the destination, the bit rate is variable, and the service is connection-oriented (e.g., a connection-oriented file transfer).

4. Class D - no time relation exists between the source and the destination, the bit rate is variable, and the service is connectionless (e.g., LAN interconnection and electronic mail).

Initially, the ITU recommended four types of AAL protocols to support the four service classes defined, called types 1, 2, 3, and 4. Types 3 and 4 were later merged into a single type (called AAL type 3/4), since the differences between them are minor. A fifth AAL type has also been proposed, due to the high complexity of AAL type 3/4 [7, 22]. The AAL type 5 protocol is sometimes called the simple and efficient adaptation layer (SEAL). Thus, class A traffic will use the AAL type 1 protocol, class B traffic the AAL 2 protocol, and class C and D traffic either AAL 3/4 or AAL 5.
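The three-parameter classification above can be written down directly; the function name and boolean encoding are illustrative:

```python
def classify(timing_relation, constant_bit_rate, connection_oriented):
    """Map the three AAL classification parameters to a B-ISDN service class."""
    if timing_relation and connection_oriented:
        return "A" if constant_bit_rate else "B"
    if not timing_relation and not constant_bit_rate:
        return "C" if connection_oriented else "D"
    raise ValueError("combination not covered by the four defined classes")

# Class A (e.g., voice) maps to AAL 1; class B to AAL 2;
# classes C and D to either AAL 3/4 or AAL 5.
service_class = classify(timing_relation=True,
                         constant_bit_rate=True,
                         connection_oriented=True)  # a voice channel
```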

The AAL consists of a sublayer that provides cell segmentation and reassembly to interface to the ATM layer and also a more service-specific convergence function to interface to the bearer services being carried [5]. For example, to multiplex voice at 64 Kbps, data from a workstation at 10 Mbps, and video at 45 Mbps onto an STS-3 ATM network means that approximately one-third of the bandwidth contains video information, 1/15 contains data from the workstation, and 1/2,340 is allocated for voice. The AAL also plays a key role in the internetworking of different networks and services. To interconnect LANs and WANs, an AAL type 3/4 protocol, which supports connectionless data services, might be used.
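The bandwidth fractions quoted above can be reproduced with simple division against the 149.76-Mbps usable payload of an STS-3:

```python
payload = 149.76e6  # usable STS-3 payload rate, in bits per second

shares = {
    "video": 45e6 / payload,   # roughly 0.30, i.e., about one-third
    "data": 10e6 / payload,    # roughly 1/15
    "voice": 64e3 / payload,   # roughly 1/2,340
}
```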


An ATM LAN consists of a set of switches connected in a local area. There are several components of an ATM LAN [1]:

* hosts,

* ATM switches,

* internetworking devices, such as routers and gateways, and

* interfaces to the public network.

Each of these components has several interfaces, allowing them to be interconnected in a variety of ways. For example, a host may be connected to one or more ATM switches (provided it has more than one ATM interface) and at the same time can be interfaced with a public ATM network. Similarly, an ATM switch has three interfaces:

1. a host computer interface,

2. a point-to-point link interface allowing the switch to be connected to other ATM switches, and

3. a public ATM network interface.

A typical ATM LAN would use a mesh or hierarchical topology, high-speed cell switching, and standard ATM protocols. Deploying ATM technology and standards in a local networking environment will have a significant impact on the way we view LANs. The bandwidth of traditional LANs is usually on the order of tens of megabits per second, while ATM LANs will support Gbps speeds. Today's LANs also lack scalability. Tomorrow's LANs must operate in an environment in which computing devices are so inexpensive and readily available that there are hundreds or even thousands in a typical office. With such a large number of devices, any attempt to interconnect them with traditional shared-media LANs would be impossible. The limitations of existing bus and ring LANs, the demand for higher bandwidths, and larger user populations are the major reasons for the growing interest in ATM LANs. ATM LANs will also have protocol support for a mixture of high-level communication services (e.g., TCP/IP, UDP/IP, BSD Sockets, and RPC) and may be used as "backbone networks" to interconnect existing networks.

The physical architecture of a local ATM network is a mesh-star architecture. This mesh-connected network provides data-transport ATM connections between one transmitter and one or more receivers. It also provides signaled ATM connections between one transmitter and one receiver with a single associated adaptation layer. This requires the ATM layer to provide both point-to-point and point-to-multipoint connections. With its virtual connections and meshed architecture, ATM is an excellent technology that can overcome the limited scalability of today's LANs.

Motivation and Goals for an ATM LAN

The motivation and goals for an ATM LAN are based on the following observations [1, 3, 4, 15]:

* In order to provide the real-time transport capabilities necessary for future multimedia applications incorporating voice, video, and high-speed data, we require LANs with high bandwidth and low latency.

* An ATM LAN and its associated software should be easier to implement in a local area than in a wide area. Clearly, congestion control and network management will be easier to handle when all traffic is under local control. Likewise, transmission latency and its effect on the underlying communication protocol will not be as big a problem in a local-area environment. Also, a local ATM network will have relatively few switching elements, or shallow switching hierarchies, allowing simpler routing algorithms to be used.

* Many applications originally conceived for wide-area ATM-based networks are likely to be developed in a local-area environment first anyway, then extended to WANs later on.

* Any new network architecture should take maximum advantage of existing standards whenever possible. An ATM LAN should be based on standards proposed by both national and international standard bodies.

* The total aggregate throughput capability of an ATM LAN should reach several Gbps, thus far exceeding the total throughput capability of current shared-media LANs, such as Ethernet and fiber distributed data interface (FDDI).


Several contrasts can be drawn between LANs and WANs based on ATM. Most notably, an ATM LAN will consist of a small number of ATM packet switches with only a few ports per switch. On the other hand, an ATM WAN will most likely contain a large number of ATM packet switches with many ports per switch. Clearly, this will be one of the primary factors responsible for determining the cost of these networks. Switches that contain only a few ports should be cheaper and easier to build than switches containing many ports. Table 3 summarizes the characteristics of ATM LANs and WANs.

LAN Emulation over ATM

In order to use the installed base of existing LANs and the application software that runs on them, it is important that the ATM paradigm be able to emulate the services of existing LANs [13, 19]. The transition to ATM networks will require that the services provided by today's LANs be supported in this new environment. A set of ATM services, called LAN emulation over ATM, is currently being defined to achieve interoperability with existing LANs. LAN emulation over ATM will enable existing applications to run over ATM networks in the same manner (interacting with a similar set of service primitives) as they do over traditional LANs. These services will also support the interconnection of ATM networks with traditional LANs, as customers expect to continue to use existing LAN applications as they migrate to ATM [13].
Table 3. Characteristics of ATM LANs and WANs

ATM LANs                                 ATM WANs

Small, cheap switches                    Large, expensive switches
(10-256 ports)                           (>1,000 ports)

Need not be ultrareliable                Reliability and redundancy
and fully redundant                      a must

Traffic policing is unnecessary,         Traffic policing required
as the traffic sources are
under local control

Transmission latency is                  Transmission latency is a
not a major issue                        major issue

Can have many slower links               Must have gigabit links
(e.g., 155 or 622 Mbps); every           to handle aggregate traffic
link need not operate at
gigabit speeds

The aggregate traffic is a               The aggregate traffic is a
very bursty arrival process              nonbursty arrival process

To emulate LAN services, different types of emulation can be defined. Emulation can be performed at the medium access control (MAC) layer by providing all of the service primitives that exist in today's 802.x LANs (e.g., IEEE 802.3 Ethernet and IEEE 802.5 token ring), up to emulating the services of the network and transport layers. The dominant ATM LAN emulation model is based on the ATM Forum's emerging LAN emulation (LE) specification [13]. This specification defines a MAC service emulation that supports a large number of existing applications.

LAN emulation defines two major software components: the LAN emulation client (LEC), which acts as a proxy ATM end-station for LAN stations, and the LAN emulation server (LES), which resolves MAC addresses to ATM addresses. LECs are assigned an ATM address for each attached LAN, and the MAC addresses of locally attached LAN stations are registered with a LES. When a LEC wants to forward a LAN frame over the ATM network to a target LAN station, it sends the LES a MAC-to-ATM address resolution query containing the target station's MAC address. The LES responds with the ATM address of the LEC that is attached to the target station. The originating LEC then sets up an ATM switched virtual circuit, converts the MAC frames to ATM cells, and transmits the cells over the network. At the receiving LEC, the ATM cells are converted back to MAC frames and forwarded to the appropriate host.
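The resolution step at the heart of this exchange can be sketched minimally; the class names and addresses below are hypothetical, and a real LAN emulation service involves additional components (such as handling of broadcast and unregistered addresses) omitted here:

```python
class LES:
    """LAN emulation server sketch: maps registered MAC addresses to the
    ATM addresses of the LECs that serve them."""

    def __init__(self):
        self.table = {}

    def register(self, mac, atm_addr):
        """A LEC registers a locally attached station's MAC address."""
        self.table[mac] = atm_addr

    def resolve(self, mac):
        """Answer a MAC-to-ATM address resolution query (None if unknown)."""
        return self.table.get(mac)

les = LES()
les.register("00:a0:c9:01:02:03", "atm-addr-of-lec-B")  # LEC B registers its station
target = les.resolve("00:a0:c9:01:02:03")
# The originating LEC would now set up a switched VC to `target`
# and forward the frame as ATM cells.
```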

The LEC and LES allow services provided by today's IEEE 802.x LANs to be supported in an ATM environment. These services include:

1. the ability to send data without previously establishing a connection,

2. support for LAN multicast and broadcast MAC addresses,

3. standardized MAC driver service primitives,

4. the ability to define a group of devices logically analogous to a LAN segment (called a virtual LAN), and

5. the interconnection of existing LANs (i.e., LAN-to-LAN bridging over ATM).(5)

Problems and Challenges

ATM networks have become the focus of intense research over the past few years. The high speed associated with ATM offers a whole new set of problems and challenges to network designers and users. The problems surrounding the ATM paradigm, and high-speed networking in general, must be better understood before applications are developed to take full advantage of these networks. In the following subsections, we review some of the problems that need to be addressed. This is not a comprehensive list but serves to highlight some of the important challenges that are under investigation.

SONET and ATM Layer Issues. The B-ISDN COMPASS/Mercuri field trial involved a study of the interconnection of ATM equipment at the physical and data link layers of the OSI reference model [18]. This field trial involved the interconnection of ATM equipment from three different vendors. The interface among the various pieces of equipment was SONET OC-3, via single-mode fiber. It was discovered that the process of interconnecting ATM equipment from different vendors is not an easy one. Several technical issues must be addressed to achieve ATM interoperability with different vendor equipment. The findings of this field trial are summarized here:

* Many SONET devices have different optical transmitter strengths and receiver sensitivities. It is important to compare the minimum and maximum optical component specifications on both ends, so the optical signal is strong enough to be detected but not so strong it saturates the receiver. It is also important to consider the loss characteristics of fiber connectors. Several types of connectors can be used to terminate the fiber, and each may potentially have a different loss characteristic.

* Cell delineation turned out to be one of the most troublesome areas during the field trial. This was due primarily to the fact that the standard for ATM cell delineation changed after vendors had already begun to manufacture SONET equipment. Vendors who provided earlier equipment used the original standard for cell delineation, while most equipment today uses the new approach.

* The standard is ambiguous regarding the format of idle cells. In the ATM Forum, two types of cells were defined, and either could be regarded as the correct idle cell. The difference between the two cell formats can prevent equipment from distinguishing idle cells from normal cells, which will most likely overflow buffers and/or crash the system.

* Some ATM equipment is virtual circuit (VC) based, some is virtual path (VP) based, and some handles both. It is important for equipment that is only VC based to carry the VPI field transparently and for equipment that is only VP based to carry the VCI field transparently.

* The ATM standard defines a set of operations, administration, and maintenance (OAM) functions for exchanging alarms and status information between different pieces of equipment. Not all equipment on the market supports these functions. This led to some difficulties during the field trial, and OAM functions had to be disabled on some equipment so that they would not cause alarms on other equipment.

* A common signaling protocol is required for switched virtual channel applications. The ATM Forum calls for the use of Q.93B, but several vendors use their own proprietary signaling protocols.
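The VP/VC transparency requirement above can be made concrete by looking at the 5-byte ATM cell header itself. The following is an illustrative sketch (not vendor code) of packing and unpacking the UNI header fields, showing why a VP-only switch, which rewrites the VPI, must carry the VCI bits through untouched, and vice versa; the HEC byte is omitted for brevity.

```python
# ATM UNI header (first 32 bits, HEC omitted):
#   GFC (4 bits) | VPI (8 bits) | VCI (16 bits) | PT (3 bits) | CLP (1 bit)

def pack_header(gfc: int, vpi: int, vci: int, pt: int, clp: int) -> bytes:
    """Pack the UNI header fields into 4 bytes (big-endian)."""
    word = (gfc << 28) | (vpi << 20) | (vci << 4) | (pt << 1) | clp
    return word.to_bytes(4, "big")

def unpack_header(hdr: bytes) -> dict:
    """Recover the individual fields from a packed 4-byte header."""
    word = int.from_bytes(hdr, "big")
    return {
        "gfc": (word >> 28) & 0xF,
        "vpi": (word >> 20) & 0xFF,
        "vci": (word >> 4) & 0xFFFF,
        "pt":  (word >> 1) & 0x7,
        "clp": word & 0x1,
    }

# A VP switch rewrites only the VPI; the VCI must pass through unchanged.
h = pack_header(gfc=0, vpi=5, vci=1001, pt=0, clp=0)
f = unpack_header(h)
assert f["vpi"] == 5 and f["vci"] == 1001
```

Equipment that parses only the VPI field still forwards the full 53-byte cell, so the VCI bits survive end to end as long as the switch does not zero or rewrite them.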

Achieving a Desired Quality of Service. ATM is based on the assumption that a network may carry many different kinds of traffic (synchronous, asynchronous, and isochronous). An important issue that must be resolved is how application programs determine and indicate their expected bandwidth requirements to the network. Applications may specify parameters such as peak and average data rate, maximum delay, and cell-loss probability. It is then up to the network to allocate its resources to satisfy the application's requests. It is still unclear how this will be achieved.

Network and Switch Complexity. Lea [14] questions whether many of the performance goals currently set for ATM may make the system too complex to implement in a high-speed environment. He points out that supporting statistical multiplexing and continuous-bit-rate traffic requires elaborate congestion- and rate-control schemes that must be executed on a real-time basis inside the network. At a speed of 1 Gbps, the cell processing time is less than 0.5 microseconds. Special hardware is required to carry out these functions at such high speeds, to ensure that cell processing and buffering at the switches are carried out quickly; otherwise the switches will become the bottleneck of the entire network. Whether all of the design goals set for ATM can be realized in practice is still an issue for debate [14].
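The sub-half-microsecond figure follows directly from the cell size and link rate, as this back-of-the-envelope check shows (assuming 53-byte cells on a 1 Gbps link):

```python
# Per-cell processing budget at 1 Gbps: the time to clock one 53-byte
# cell onto the link is the longest a switch can spend on it without
# falling behind.

CELL_BITS = 53 * 8          # 424 bits per ATM cell
LINK_RATE = 1e9             # 1 Gbps, in bits per second

cell_time_us = CELL_BITS / LINK_RATE * 1e6
print(f"{cell_time_us:.3f} microseconds per cell")  # prints 0.424
```

At 0.424 microseconds per cell, even a single cache miss or memory access in the forwarding path consumes a large fraction of the budget, which is why Lea argues for hardware implementations.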

Bandwidth Management and Congestion Control. ATM is based on the premise of a homogeneous network in which all traffic is transformed into a uniform 53-byte packet, or cell. This allows the network to carry a wide variety of traffic types, a good match for the heterogeneous user data expected in these networks. Many papers have been published describing the problems involved in allocating bandwidth and controlling congestion in ATM networks [8, 9, 20, 23-25]. Although a complete survey of these approaches is beyond the scope of this article, we summarize three approaches described in a recent article by Turner [24].

The first approach is peak rate allocation. In this approach, the user simply specifies the maximum rate at which cells are to be sent to the network. The network must assign VCs so that on every link the sum of the rates on the VCs is no more than the link's maximum cell rate. If the traffic exceeds the specified rate, cells are simply discarded. Peak rate allocation offers a strong performance guarantee and is easy to implement, but it may make poor use of the network bandwidth in the case of bursty traffic.
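Peak rate allocation can be enforced with a very simple policing rule: a cell conforms only if at least 1/peak_rate seconds have elapsed since the previous conforming cell. The sketch below (function and variable names are illustrative, not from a standard) shows why the scheme is easy to implement in hardware: it needs only one stored timestamp per VC.

```python
# Peak-rate policing sketch: discard any cell that arrives before the
# minimum inter-cell spacing implied by the declared peak rate.

def police_peak_rate(arrival_times, peak_rate):
    """Return the arrival times (seconds) of cells conforming to peak_rate (cells/s)."""
    min_gap = 1.0 / peak_rate        # minimum spacing between cells
    conforming = []
    next_allowed = 0.0               # earliest time the next cell may arrive
    for t in arrival_times:
        if t >= next_allowed:        # cell is on time or late: accept it
            conforming.append(t)
            next_allowed = t + min_gap
        # else: cell arrived too early for the contract; discard it
    return conforming

# Under a 10-cells-per-second contract, cells spaced closer than 0.1 s
# are dropped:
print(police_peak_rate([0.0, 0.05, 0.2, 0.21], peak_rate=10))  # [0.0, 0.2]
```

The weakness Turner notes is visible here: a bursty source must declare a peak rate high enough for its bursts, and the network must then reserve that rate on every link even though the source is usually idle.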

In a second approach, called minimum throughput allocation, the user specifies the throughput that is needed when the network is congested and the network guarantees the specified throughput. This approach can provide high efficiency, but the performance guarantee is weak.

The third approach, called bursty traffic specification, allows the user to specify the peak cell rate, the average cell rate, and the maximum burst size. These parameters are then used to configure the network to ensure that the specified allocation can be met. The main drawback of this approach is that it may take a long time to compute when a new VC can be safely multiplexed with other VCs (i.e., the procedure is computationally intensive).
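One common way the three bursty-traffic parameters might be enforced at the network edge is a token bucket: tokens accrue at the average rate, the bucket depth caps the burst size, and cells are additionally spaced at the peak rate. The following is a simplified sketch under those assumptions; the names and structure are illustrative, not taken from an ATM standard.

```python
# Token-bucket conformance sketch for a (peak_rate, avg_rate, max_burst)
# traffic contract. One token admits one cell; tokens refill at avg_rate
# up to a depth of max_burst, and successive cells must also respect the
# peak-rate spacing.

def conforms(arrival_times, peak_rate, avg_rate, max_burst):
    """Return a per-cell list of booleans: does each arrival conform?"""
    results = []
    tokens = float(max_burst)        # bucket starts full
    last = 0.0                       # time of previous arrival
    next_peak = 0.0                  # earliest arrival allowed by peak rate
    for t in arrival_times:
        tokens = min(max_burst, tokens + (t - last) * avg_rate)
        last = t
        if tokens >= 1.0 and t >= next_peak:
            tokens -= 1.0
            next_peak = t + 1.0 / peak_rate
            results.append(True)
        else:
            results.append(False)
    return results

# A 2-cell burst at the peak rate conforms; the third cell exhausts the
# bucket before the average rate can refill it:
print(conforms([0.0, 0.01, 0.02], peak_rate=100, avg_rate=10, max_burst=2))
# [True, True, False]
```

Per-cell conformance checking like this is cheap; the computationally intensive part Turner identifies is deciding, at call setup, whether a new VC with a given (peak, average, burst) triple can be multiplexed onto a link without violating the guarantees already made to the existing VCs.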

There are two additional limitations inherent in all three approaches. First, since end-to-end protocols normally operate on data units comprising several cells, the loss or discard of a single cell can cause the retransmission of the entire end-to-end data unit, resulting in lower protocol throughput. The second limitation is that these approaches do not adequately handle multicast virtual circuits. Additional mechanisms must be developed to handle these problems.

Speed of Light Limitation. Kleinrock [11] observes that communication latency is going to be a major design issue for WANs running at Gbps speeds. The speed-of-light propagation delay over a wide area is several orders of magnitude greater than the time it takes to transmit an ATM cell. Consider, for example, the transmission of ATM cells across the U.S. The propagation delay across the U.S. is roughly 15 milliseconds. At 1 Gbps, this time is more than 35,000 times greater than the time required to transmit a single ATM cell onto the link. This means that thousands of cells can be sent before the first bit even arrives at the other end! Up to this point, networks have been primarily bandwidth limited. In the era of gigabit networks, networks will be limited by the propagation delay of the channel - a physical limitation due to the speed of light. Thus, many network issues will require re-examination in this new environment. Areas that require particular attention include flow control, buffering, and congestion control.
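The "35,000 times" figure is just the bandwidth-delay product expressed in cells, as the arithmetic below reproduces (assuming a 15 ms one-way coast-to-coast delay and a 1 Gbps link):

```python
# Cells "in flight" on a coast-to-coast gigabit link: one-way
# propagation delay divided by the per-cell transmission time.

PROP_DELAY = 15e-3                  # seconds, across the U.S.
LINK_RATE  = 1e9                    # 1 Gbps, in bits per second
CELL_BITS  = 53 * 8                 # one 53-byte ATM cell

cells_in_flight = PROP_DELAY * LINK_RATE / CELL_BITS
print(round(cells_in_flight))       # prints 35377
```

A window-based flow-control scheme would therefore need buffering for tens of thousands of cells per connection just to keep such a link full, which is why the article singles out flow control and buffering for re-examination.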

Interoperability. Continued attention must be paid to the interconnection of ATM networks with existing and emerging network protocols and infrastructure. To facilitate interoperability, all traffic offered to an ATM network must go through a conversion by the adaptation layer at the boundary of the ATM network. This means that large information units must be divided into 53-byte cells at the source and then reassembled back to the original size at the destination. This will require considerable processing and buffering at the interconnection points. The ATM Forum is working on these issues and has recently drafted specifications for LAN emulation over ATM (as discussed earlier). The LAN emulation service will be important to the acceptance of ATM, since it provides a simple and easy means of running existing LAN applications in an ATM environment [13, 19].
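The segmentation-and-reassembly work described above can be sketched as follows. This is a deliberately simplified illustration of what an adaptation layer must do at an ATM boundary - splitting a large data unit into 48-byte payloads (48 bytes of payload plus a 5-byte header gives the 53-byte cell) and padding the last one - and it ignores real AAL framing details such as length fields and checksums.

```python
# Simplified segmentation/reassembly sketch: 48-byte cell payloads,
# zero-padded final cell, original length supplied out of band.

PAYLOAD = 48   # 53-byte cell minus the 5-byte header

def segment(data: bytes) -> list:
    """Split data into 48-byte payloads, zero-padding the last one."""
    cells = [data[i:i + PAYLOAD] for i in range(0, len(data), PAYLOAD)]
    cells[-1] = cells[-1].ljust(PAYLOAD, b"\x00")
    return cells

def reassemble(cells: list, length: int) -> bytes:
    """Concatenate the payloads and strip the padding."""
    return b"".join(cells)[:length]

msg = b"x" * 100                       # a 100-byte data unit
cells = segment(msg)                   # becomes 3 cells (48 + 48 + 4 padded)
assert len(cells) == 3
assert reassemble(cells, len(msg)) == msg
```

Even this toy version hints at the cost: every byte of every information unit is copied, sliced, and buffered at the boundary, which is the "considerable processing and buffering" the text refers to.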


Asynchronous transfer mode has received a lot of attention recently. Will ATM live up to its promise as a universal switching and multiplexing methodology? Will it be the technology that finally brings networked multimedia capabilities to our desktops? Much effort has already been spent on developing ATM. ATM's success depends, in part, on its ability to provide high-speed networking solutions that surpass (in terms of price/performance) today's technologies. Over the next few years, you will no doubt hear much more about ATM as the national information infrastructure unfolds.

1 A 64 Kbps voice channel results from pulse code modulation sampling of 8,000 8-bit samples every second.

2 The reason for choosing a fixed-size packet (cell) was to ensure that the switching and multiplexing functions could be carried out quickly and easily.

3 This time corresponds to the voice digitizing rate of one 8-bit sample 8,000 times per second.

4 The checksum does not protect the data but only the header.

5 Readers interested in learning more about LAN emulation can refer to [13, 19]. Since LAN emulation over ATM is currently under development, these specifications are subject to change.


1. Apple Computer, Bellcore, Sun Microsystems, and Xerox. Network compatible ATM for local network applications. FTP Document pub/nclatm/

2. Ballart, R., and Ching, Y-C. SONET: Now it's the standard optical network. IEEE Commun. Mag. 27, 3 (Mar. 1989), 8-15.

3. Boudec, J-Y., Port, E., and Truong, H. Flight of the FALCON. IEEE Commun. Mag. 31, 2 (Feb. 1993), 50-56.

4. Chipman, K., Holzworth, P., Loop, J., et al. Medical applications in a B-ISDN field trial. IEEE J. Sel. Areas Commun. 10, 7 (Sept. 1992), 1173-1187.

5. Day, A. International standardization of B-ISDN. IEEE Lightwave Transm. Syst. 2, 3 (Aug. 1991), 13-20.

6. Delisle, D., and Pelamourgues, L. B-ISDN and how it works. IEEE Spectrum 28, 8 (Aug. 1991), 39-42.

7. DePrycker, M., Peschi, R., and Landegem, T. B-ISDN and the OSI protocol reference model. IEEE Network 7, 2 (Mar. 1993), 10-18.

8. Eckberg, A. B-ISDN/ATM traffic and congestion control. IEEE Network 6, 5 (Sept. 1992), 28-37.

9. Hong, D., and Suda, T. Congestion control and prevention in ATM networks. IEEE Network 5, 4 (July 1991), 10-16.

10. Kawarasaki, M., and Jabbari, B. B-ISDN architecture and protocol. IEEE J. Sel. Areas Commun. 9, 9 (Dec. 1991), 1405-1415.

11. Kleinrock, L. The latency/bandwidth tradeoff in gigabit networks. IEEE Commun. Mag. 30, 4 (Apr. 1992), 36-40.

12. Kung, H.T. Gigabit local area networks: A systems perspective. IEEE Commun. Mag. 30, 4 (Apr. 1992), 79-89.

13. LAN emulation over ATM. draft specification-Revision 5 (ATM FORUM 94-0035R5), LAN Emulation Sub-Working Group of the ATM Forum Technical Committee, August 16, 1994.

14. Lea, C-T. What should be the goal for ATM. IEEE Network 6, 5 (Sept. 1992), 60-66.

15. Lyles, J., and Swinehart, D. The emerging gigabit environment and the role of local ATM. IEEE Commun. Mag. 30, 4 (Apr. 1992), 52-58.

16. Malamud, C. STACKS: Interoperability in Today's Computer Networks. Prentice-Hall, Englewood Cliffs, N.J., 1992.

17. McKinney, R., and Gordon, T. ATM for narrowband services. IEEE Commun. Mag. 32, 4 (Apr. 1994), 64-72.

18. Midani, M.T., Guha, A., Cavanaugh, J.D., and Pugaczewski, J.T. Interoperability considerations between ATM equipment based on the B-ISDN COMPASS/Mercuri trial. In Proceedings of the Third International Conference on Computer Communications and Networks (Sept. 1994, San Francisco, Calif.), pp. 190-194.

19. Newman, P. ATM local area networks. IEEE Commun. Mag. 32, 3 (Mar. 1994), 86-98.

20. Okada, T., Ohnishi, H., and Morita, N. Traffic control in asynchronous transfer mode. IEEE Commun. Mag. 29, 9 (Sept. 1991), 58-62.

21. Omidyar, C., and Aldridge, A. Introduction to SDH/SONET. IEEE Commun. Mag. 31, 9 (Sept. 1993), 30-33.

22. Suzuki, T. ATM adaptation layer protocol. IEEE Commun. Mag. 32, 4 (Apr. 1994), 80-83.

23. Trajkovic, L., and Golestani, S. Congestion control for multimedia services. IEEE Network 6, 5 (Sept. 1992), 20-26.

24. Turner, J. Managing bandwidth in ATM networks with bursty traffic. IEEE Network 6, 5 (Sept. 1992), 50-58.

25. Yazid, S., and Mouftah, H.T. Congestion control methods for B-ISDN. IEEE Commun. Mag. 30, 7 (July 1992), 42-47.

About the Author:

RONALD J. VETTER is an assistant professor of computer science at North Dakota State University. Current research interests include high-performance computing and communications, multimedia systems, and high-speed optical networks.

Author's Present Address: North Dakota State University, IACC Building, Room 258, Fargo, ND 58105; email: rvetter
COPYRIGHT 1995 Association for Computing Machinery, Inc.

Article Details
Title Annotation: Issues and Challenges in ATM Networks; asynchronous transfer mode
Author: Vetter, Ronald J.
Publication: Communications of the ACM
Date: Feb 1, 1995
