ATM connection and traffic management schemes for multimedia internetworking.

The rapid advances being made in microprocessor technology have stimulated significant interest in distributed multimedia applications supported by high-speed desktop computers and high-speed networks. Examples of distributed multimedia applications include multimedia database retrieval, distributed multimedia documents, and video mail. Due to the large amount of multimedia traffic for audio/visual applications, these applications require high-speed networks to retrieve data in real time, instead of huge local disk storage.

Because of the real-time characteristics of audio/visual applications, networks must guarantee stringent quality of service (QOS), such as for throughput, delay, and jitter, as well as conventional QOS for connectionless data transfer. A star/mesh topology asynchronous transfer mode local-area network (ATM LAN) is expected to be more efficient at supporting bandwidth allocation for each terminal than shared media networks. With this star/mesh topology, the total network capacity can be increased in a scalable manner by interconnecting multiple switches, while the speed and cost for an individual terminal interface remain the same. The basic functions for priority control and bandwidth reservation have already been provided by ATM switches. Therefore, ATM LANs offer big advantages when used in multimedia service integration.

However, several technical issues must be considered for the realization of multimedia applications that fully utilize such ATM capabilities.

(1) how to provide seamless ATM connections with QOS guarantee, without any intermediate routers even over multiple ATM subnetworks, while maintaining compatibility with conventional network layer protocols,

(2) how to guarantee QOS with appropriate ATM traffic management functions for a wide variety of multimedia applications, and

(3) how to integrate actual multimedia applications with an effective ATM application interface.

One problem deals with ATM connectivity, which is based on ATM addressing and routing. Currently, the Internet Engineering Task Force (IETF) [11] and ATM Forum [1] specify a standard ATM addressing model, based on the "subnet model" concept [6], in which ATM subnetworks are separated by network-layer multiprotocol routers. With this model, an ATM LAN looks like an appropriate replacement for a data-link-level network, such as Ethernet or FDDI LANs. Regarding the IP protocol, this model is called a "classical IP model" [6]. When multimedia communications are handled over multiple ATM subnetworks, a severe performance bottleneck and service quality degradation are possible at the routers, even though ATM subnetwork segments can support guaranteed QOS. This is because intermediate routers carry out network-layer protocol processing through the use of packet-by-packet routing between high-speed ATM subnetworks. To solve this problem, a new addressing/routing model is necessary: one that can provide seamless end-to-end ATM connections (VCCs) even between ATM subnetworks, in order to guarantee high throughput and appropriate service quality. The signaling/routing scheme must then also be extended according to the new addressing concept.

Another key issue for managing ATM connections in multimedia application environments is how to guarantee the QOS. ATM traffic management control is responsible for this function. Traffic service classification, bandwidth reservation, and congestion control strategies for each class are the main problems that need to be solved in order to guarantee the multiple QOSs required for multimedia applications over ATM. Considering the simplicity and the cost-effectiveness of traffic control in the LAN environment, a simple bandwidth reservation control, using fewer traffic parameters while achieving high network utilization, would be preferable. One approach is to utilize adaptive/reactive control schemes, which enable each user to dynamically adjust the cell transmission rate according to the network congestion status. These schemes should be developed so as to eliminate the difficulty of having individual applications predict many source traffic parameters, which has been required for most conventional ATM traffic control protocols.

The final problem we address in this article is the question of what kinds of applications can fully utilize ATM's capabilities? Few applications have yet been developed that can make good use of ATM's broadband, seamless and multiple QOS functionality. Integration of multimedia applications and high-performance ATM protocols is one of the authors' main goals. From this point of view, we decided to develop a multimedia application network testbed based on the proposed ATM protocol architecture. The application interface over ATM is a core part of the integration problems, and should be constructed so that multimedia applications are capable of fully and flexibly utilizing high-performance ATM functionality. For example, if an application requires reserving bandwidth, the application itself has to recognize the required bandwidth and declare the traffic parameter to the ATM interface. The prototype system should be developed so as not only to verify the proposed ATM protocol architecture, but also to provide an attractive multimedia application development platform.

In this article, we present a new ATM LAN connection and traffic management scheme and the current ATM LAN prototype system with a highly effective multimedia application, which supports multimedia data retrieval, called "multimedia on demand." In the following sections, we describe an overview of the proposed schemes; propose a connection management scheme that includes a new addressing model and a new signaling/routing scheme, and compare it with the existing addressing model under consideration by the ATM Forum and IETF; propose a traffic management scheme that can support effective bandwidth management for individual QOSs, and compare it with the current proposed scheme under consideration by the ATM Forum; present the developed ATM LAN prototype system; and make a concluding assessment.

Overview of ATM-Oriented Multimedia Network Control

In order to solve the problems we have described, we employ a new connection management scheme and a new traffic management scheme. The proposed schemes are validated by the ATM testbed system, called "multimedia on demand over ATM," implemented with a new multimedia application interface.

Connection Management Scheme

ATM is a connection-oriented protocol that has two different phases: a connection setup phase and a data transfer phase. The connection management scheme concerns both the routing and the internetworking performance achievable during the connection setup phase. Figure 1 shows the current ATM LAN basic model, called the "classical IP model." In this model, the ATM subnetworks (LIS A, LIS B, and LIS C) are separated by routers that terminate VCCs. During the data transfer phase, the routers reassemble ATM cells into packets and then perform packet-by-packet routing. That is, ATM address-based routing is performed by ATM switches during the connection setup phase, and network-layer address-based routing is carried out by routers during the data transfer phase. Therefore, the current model cannot provide end-to-end ATM connections over multiple ATM subnetworks. It cannot provide high-throughput data transmission and at the same time guarantee an appropriate QOS.

On the other hand, the proposed connection management scheme aims to provide VCCs without terminating them at any intermediate router, even over multiple ATM subnetworks. In order to achieve this, a new addressing model, the "gateway model" [20, 21], is proposed instead of the classical IP model, WATM subnet model, and peer model [6], which are described in the next section. The proposed scheme is intended to achieve intrasubnetwork/intersubnetwork routing by simultaneously using both the ATM address and the network layer address during the connection setup phase. During the data transfer phase, data can be sent directly through a VCC in an end-to-end manner, without any routing. Therefore, the proposed scheme requires extending the current user-to-network interface/network-to-network interface (UNI/NNI) signaling scheme to handle both addresses for connection setup, and also extending the NNI routing scheme to exchange routing information based on both addresses between switches.

To allow easy UNI/NNI signaling and routing, the proposed gateway model architecture uses two kinds of switches: a local ATM switch, located inside the ATM subnetwork; and a gateway ATM switch, located at the border of the ATM subnetwork. Local ATM switches play a role in intrasubnetwork routing, based on the ATM address, and gateway ATM switches play a role in intersubnetwork routing, based on the network layer address, such as IP and OSI protocol addresses. In the extended NNI routing scheme, both kinds of ATM switches exchange routing information based on the ATM address, while gateway ATM switches additionally exchange network-layer address-based routing information. In the extended UNI/NNI signaling scheme, the local ATM switch forwards the connection setup signaling message "SETUP" by referring to the ATM address-based link-state information, and the gateway ATM switch does so by referring to both the network-layer and ATM-layer address-based link-state information.


Traffic Management Scheme

An ATM LAN is expected to offer not only existing LAN data services but also multimedia services, such as image transfer, scientific visualization, and real-time audio and video. In order to provide such a variety of services, an ATM LAN has to support multiple traffic classes with different QOSs and different traffic controls for individual classes. Regarding traffic characteristics for the LAN application services, the authors propose three traffic classes: best effort class, guaranteed burst class, and guaranteed stream class.

Regarding the traffic control policies for LAN and Internet environments, using only preventive congestion controls, such as call admission control (CAC) and usage parameter control (UPC), would not result in good performance, because it would be difficult for the network to predict QOSs using only source traffic parameters declared by the users, such as average rate and burst duration. Therefore, to simplify traffic management, the authors propose reactive congestion control based on the three different classes named previously.

The guaranteed stream class handles stream-type data traffic, such as video and audio. This class is simply controlled by a peak rate bandwidth, allocated only at call setup. However, since the best effort class and the guaranteed burst class traffic are statistically multiplexed at the cell level and the burst level, respectively, reactive congestion control is required for both classes. The best effort class handles conventional computer communication services, such as the current TCP/IP. In this class, we propose a scheme that combines adaptive end-to-end peak rate control [23] with link-by-link backpressure control [3]. Rate control with feedback control information, one of the most promising means of adaptive congestion control, is now being studied intensively in the ATM Forum. Backpressure control is combined with rate-based control to solve the problem of cell losses in heavily loaded situations, which arises when rate-based control is used alone.
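The combined best-effort control described above can be sketched as follows. This is an illustrative sketch under our own assumptions (class names, the queue threshold, and the halving/incrementing policy are not taken from the authors' implementation): the source adapts its rate to end-to-end congestion feedback, while a switch port's backpressure signal can stop the upstream sender entirely.

```python
class SwitchPort:
    """A switch output port that asserts backpressure above a queue threshold."""
    def __init__(self, threshold=100):
        self.queue = 0
        self.threshold = threshold

    def enqueue(self, cells):
        self.queue += cells

    def backpressure(self):
        # Link-by-link stop signal: the upstream node pauses while True.
        return self.queue >= self.threshold


class BestEffortSource:
    """End-to-end adaptive rate control: cut the rate on congestion
    feedback, raise it gently toward the peak otherwise (in the spirit
    of the rate-based schemes studied at the ATM Forum)."""
    def __init__(self, peak_rate, min_rate=1.0):
        self.rate = peak_rate
        self.peak = peak_rate
        self.min = min_rate

    def on_feedback(self, congested):
        if congested:
            self.rate = max(self.min, self.rate * 0.5)   # multiplicative decrease
        else:
            self.rate = min(self.peak, self.rate + 1.0)  # additive increase

    def cells_to_send(self, port):
        # Backpressure overrides the rate: send nothing while asserted,
        # which prevents cell loss that rate control alone cannot avoid.
        return 0 if port.backpressure() else int(self.rate)
```

The point of the combination is visible in `cells_to_send`: rate control shapes the long-term load, while backpressure gives a hard, lossless stop during transient overload.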

The guaranteed burst class, on the other hand, handles high-speed file transfer applications. In this class, we propose a method to reserve only a peak bandwidth with fast reservation protocol (FRP) [4, 19] in sending each burst. Once a peak bandwidth is reserved in the network, no cell loss can occur during transmission of the entire burst. The authors enhanced the FRP ("adaptive FRP") so that users reserve the bandwidth for each burst by adaptively adjusting the peak bandwidth to the degree of network congestion.
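The adaptive FRP idea can be sketched as follows. The backoff policy (halving the requested peak until the reservation succeeds) is our own illustrative assumption; the protocol property it demonstrates is the one stated above: once a peak bandwidth is reserved, the whole burst is carried without cell loss.

```python
class Link:
    """A link that admits FRP-style peak-bandwidth reservations."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.reserved = 0.0

    def reserve(self, bw):
        # Admit the burst only if its peak bandwidth fits the link.
        if self.reserved + bw <= self.capacity:
            self.reserved += bw
            return True
        return False

    def release(self, bw):
        self.reserved -= bw


def send_burst(link, desired_peak, min_peak=1.0, backoff=0.5):
    """Adaptive FRP sketch: lower the requested peak bandwidth until the
    reservation succeeds, adapting to the degree of network congestion.
    Returns the reserved peak rate, or None if no reservation fits."""
    bw = desired_peak
    while bw >= min_peak:
        if link.reserve(bw):
            return bw          # burst is transmitted at this peak rate
        bw *= backoff          # congested: retry with a smaller request
    return None
```

A congested link thus degrades the burst's peak rate rather than dropping cells, which is the trade the guaranteed burst class makes.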

Multimedia on Demand over ATM

The prototype system was developed to realize the integration of both the proposed control architecture and multimedia-on-demand applications. The multimedia-on-demand application is intended to show the following usage scenarios: distributed multimedia document, multimedia database retrieval, video on demand for entertainment, and home shopping. The basic concept of this application design is that multimedia documents located at a user workstation can be linked to video or image media data located at remote servers. In order to make good use of ATM functionality, a flexible object-oriented approach is supported. Each media category, such as image, video, audio, and text, can be described by means of different objects, which are associated with appropriate QOSs and flow control parameters at the transport and network layer (shown later in Figure 14). In addition, a remote media object can be linked to user documents with user-friendly buttons and windows. Therefore, it is easy to develop a multimedia-on-demand application, which fully utilizes ATM functionalities with a simple ATM application interface.

Connection Management

As described previously, multimedia data networking requires a connection that can provide high throughput data transmission and guarantee appropriate QOS. For this purpose, VCCs should be established in an end-to-end manner without terminating at the intermediate routers, even between different ATM subnetworks. In this section, we discuss problems in the existing addressing model and propose a new addressing model, the gateway model [20, 21]. We also describe the proposed scheme's extended routing information exchange procedure (NNI routing procedure) and extended connection setup procedure (UNI/NNI signaling procedure) and compare them with the NNI routing [18] and UNI/NNI signaling [1] discussed at the current ATM Forum.

Problems in the Existing Addressing Model

Addressing plays an important role in defining network structure and greatly affects data transfer performance. When considering how to map conventional LAN protocols onto ATM, it is very important to choose an appropriate addressing model. Several addressing models - classical IP, WATM subnet, and peer models - have been proposed to realize the IP protocol, in particular, over ATM (IP-over-ATM). However, some issues remain to be solved for individual addressing models. We will discuss what kinds of problems arise with each method, especially for IP-over-ATM.

Note that a closed logical ATM subnetwork, within which a separate IP protocol administrative entity configures its hosts and routers, is defined as a logical IP subnetwork (LIS) [11]. Each LIS operates and communicates independently from other LISs on the same ATM network. The LIS concept can be easily extended to any network layer protocols, such as IPX, Appletalk, and OSI.

Classical IP Model. Figure 1 shows the classical IP model [6, 11] ATM LAN architecture. As shown, this model simply consists of cascading LISs with network layer protocol routers. An ATM LAN appears to be an appropriate replacement for a data-link-level network, such as Ethernet or FDDI. In this model, since VCCs are terminated at each router, every transmitted ATM cell always has to be reassembled into packets at the transit router. Then packet-by-packet processing must be carried out to forward packets to the next LIS by the network-layer routing function in this transit router. Therefore, throughput is limited by the router's processing capacity.

Figure 2 shows the data flow, including the address resolution protocol (ARP), connection setup procedure, and data transfer procedure. IP protocol entities for terminals (ES1, ES2) and routers (IS1, IS2) periodically exchange IP routing information with one another. By using this IP routing information, the source (ES1) recognizes the next hop router's IP address (IS1-IP) toward the destination (ES2) and resolves its ATM address (IS1-ATM) from its IP address (IS1-IP) via an address resolution protocol (ARP). The source (ES1) sets up a connection to the next hop router (IS1) and then begins to send data to it through the established connection. Once the router receives the data, it resolves the next hop router's ATM address (IS2-ATM) again, sets up a connection to the next hop router (IS2), and begins to send data to it. The same kind of procedure is repeated successively until the destination terminal is reached.
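The hop-by-hop behavior of Figure 2 can be sketched as follows. The routing and ARP tables are illustrative assumptions matching the figure's names (ES1, IS1, IS2, ES2), not actual protocol data; the sketch only shows why each hop terminates its own VCC.

```python
ip_routes = {            # next-hop IP toward ES2, per node (assumed topology)
    "ES1": "IS1-IP", "IS1": "IS2-IP", "IS2": "ES2-IP",
}
arp = {                  # ARP: resolve an IP address to an ATM address
    "IS1-IP": "IS1-ATM", "IS2-IP": "IS2-ATM", "ES2-IP": "ES2-ATM",
}
names = {"IS1-IP": "IS1", "IS2-IP": "IS2", "ES2-IP": "ES2"}


def classical_ip_path(src, dst="ES2"):
    """Return the per-hop VCCs set up toward the destination. Each entry
    is one VCC, terminated at a router that reassembles cells into
    packets and repeats routing + ARP + setup for the next hop."""
    hops, node = [], src
    while node != dst:
        next_ip = ip_routes[node]        # network-layer routing decision
        next_atm = arp[next_ip]          # ARP: IP -> ATM address
        hops.append((node, next_atm))    # one VCC per hop
        node = names[next_ip]
    return hops
```

Three separate VCCs result for a two-router path, which is exactly the throughput bottleneck the gateway model removes.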

Therefore, when the transit router (IS1) does not have a connection to the next hop router (IS2) or terminal, or when it doesn't know the ATM addresses, a number of succeeding packets have to be stored at the transit router (IS1) until the connection has been established to the next hop router (IS2)/terminal. As the distance to the next hop router/terminal gets longer, the source router needs more buffer capacity to hold incoming packets. This problem becomes more serious in the building of wide-area ATM networks with high-speed links. Therefore, other models (to be described), which achieve VCCs through several LISs, are required.

WATM Subnet Model. Figure 3 shows the wide-area ATM (WATM) subnet model [6] and its data flow. This model can separate several LISs without routers and can provide a VCC through several LISs by using a next hop resolution protocol (NHRP) [8]. The NHRP protocol depends on next hop servers (NHSs). NHSs, located in each ATM LIS, cooperatively resolve the next hop ATM address in the whole ATM network from any destination network layer address. Each NHS serves a set of destination terminals, which are directly connected to the same LIS or are located on conventional LANs behind a router directly connected to the same LIS. An NHS has a function similar to that of the domain name system (DNS) in the Internet.

For example, when the source terminal (ES1) tries to send data to the destination terminal (ES2) in another LIS, the source terminal (ES1) sends an NHRP request packet to the neighbor NHS (NHS1) to resolve the ATM address for the destination terminal (ES2). Since the NHS (NHS1) doesn't know the destination terminal (ES2) information, it forwards the request packet to the destination's NHS (NHS2), which resolves the proper ATM address (ES2-ATM) of the destination terminal and returns it to the source terminal (ES1), while the acquired ATM address (ES2-ATM) is cached on intermediate NHSs (NHS1 and NHS2) on the way back. The source terminal (ES1) then sets up a VCC with the acquired ATM address (ES2-ATM) and begins to send data to the destination terminal (ES2) through it.

On the other hand, when the source terminal (ES1) tries to send data to another destination terminal (ES3) behind a router (Router 1), NHSs (NHS1 and NHS2) resolve the router (Router1)'s ATM address as the next hop ATM address. Then the source terminal (ES1) sets up a connection to the router (Router1) and begins to send data through it. In both cases, each terminal can establish a VCC even between different ATM LISs, by specifying the required ATM address and the desired QOS.
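The NHRP resolution and caching behavior described above can be sketched as follows. The data and the simple one-hop NHS chain are assumptions for illustration: each NHS answers for its own served terminals, forwards unknown requests toward the destination's NHS, and caches the answer on the way back.

```python
class NHS:
    """A next hop server: resolves a destination name to the next hop
    ATM address, forwarding to a peer NHS when it does not serve the
    destination itself."""
    def __init__(self, name, served, next_nhs=None):
        self.name = name
        self.served = served          # destination -> next-hop ATM address
        self.next_nhs = next_nhs      # where to forward unknown requests
        self.cache = {}

    def resolve(self, dst):
        if dst in self.cache:
            return self.cache[dst]
        if dst in self.served:
            return self.served[dst]
        answer = self.next_nhs.resolve(dst)   # forward toward dst's NHS
        self.cache[dst] = answer              # cache on the way back
        return answer


# ES2 is directly on NHS2's LIS; ES3 sits behind Router1, so NHS2
# answers with the router's ATM address as the next hop.
nhs2 = NHS("NHS2", {"ES2": "ES2-ATM", "ES3": "Router1-ATM"})
nhs1 = NHS("NHS1", {"ES1": "ES1-ATM"}, next_nhs=nhs2)
```

The forwarding step in `resolve` is also where the model's extra round-trip delay comes from when the destination is in a distant LIS.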

Note that the address resolution data, administered by an NHS, is collected through periodic address registration by each terminal or router directly connected to the same LIS. No terminals on conventional LANs behind the router need to register their addresses. Instead, the router (Router1) needs to register aggregated addresses, or a group of addresses, that are representative of every terminal behind it. However, NHRP may not work well in configurations where the rest of the network behind the router (such as Router1) is very large; when more than two such routers are connected to the ATM network, it becomes too difficult to aggregate terminal addresses. Another problem is the additional round-trip delay incurred when the NHRP request is forwarded to the destination LIS in order to resolve the next hop ATM address. This delay causes significant latency for delay-sensitive interactive applications. Additionally, NHRP has a security problem: it cannot provide perfect screening/filtering capability for the higher layer service access points, such as the network layer and the transport layer.

Peer Model. Figure 4 shows the peer model [6, 12, 15] and its data flow. In this model, the ATM address format is completely different from the standard one, which consists of a hierarchical network prefix part and a host address part described by each terminal's physical medium access control (MAC) address. Since the peer model directly maps multiple network layer protocol addresses, such as an IPng address or an IP address, algorithmically into an ATM address [15], it can simply set up a connection with this new ATM address without going through the ARP. Therefore, each ATM switch (P-SW) has to handle the multiple-network-layer routing protocol in order to route signaling messages.

When the source terminal (ES1) tries to send data to the destination terminal (ES2), it simply sets up a connection without ARP and begins to send data. Therefore, this model can eliminate the ARP delay. Furthermore, since each ATM switch establishes a connection based on the network-layer address, this model can provide security control by using call screening based on the network-layer address. That is, this model can solve two problems that are drawbacks for the WATM subnet model.

However, several issues remain to be solved. The peer model requires terminals to support IP-over-ATM and ARP schemes completely different from the current standard defined in [11]. Additionally, since each P-SW has to process routing for multiple network layer protocols, the cost of each ATM switch increases, and the routing calculation also takes a long time. Furthermore, within an actual administrative network consisting of multiple physical switches, whenever a terminal moves to another switch on the same administrative network, it has to update its network layer address.

Therefore, in order to solve problems for both the WATM subnet model and the peer model, we propose the following gateway model, regarded as a middle model between the two [20, 21].

Proposed Gateway Model

This section explains the proposed addressing model and compares it with the three models already discussed. Figure 5 shows the proposed ATM LAN architecture based on the gateway model [20, 21]. The proposed model simply replaces the router function of the classical IP model with the gateway switch function. The gateway ATM switches play a major role in inter-LIS routing of connection setup messages based on the network-layer addresses, while local ATM switches handle intra-LIS routing based only on the ATM address. The local ATM switches are the same as the ATM switches in the classical IP model except for the distributed ARP function, as will be described.

Therefore, in terms of the extended NNI routing scheme, the local ATM switch has a routing protocol capability based on the ATM address. The gateway ATM switch has a routing protocol capability based on both the network layer address and the ATM address. A terminal directly connected to the LIS runs only the same network layer routing protocol as the gateway ATM switch. As shown in Figure 6, terminals (ES1, ES2, ES3, ES4) and gateway ATM switches (GW-SW1, GW-SW2) exchange the network layer routing information with one another, and local ATM switches (SW1, SW2) and gateway ATM switches (GW-SW1) exchange the ATM-address-based routing information with one another in LIS A.

In terms of the UNI/NNI signaling scheme, the signaling message is extended to include both the ATM address and the network layer address. Local ATM switches (SW1, SW2, SW3, SW4) forward a signaling message "SETUP" by referring only to its ATM address field. On the other hand, the source terminal (ES1, ES2) or gateway ATM switches (GW-SW1, GW-SW2) send or forward the signaling message "SETUP" as follows: First they check its network layer address field and recognize the network layer address of the next hop gateway/terminal toward the destination terminal, using the network layer routing information described above. They then find its ATM address by ARP carried out locally within each LIS. Next, they replace the value of the ATM address field in the signaling message "SETUP" with the acquired next hop ATM address. Finally, they forward the signaling message "SETUP."
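These two forwarding roles can be sketched as follows. The dictionary-based routing and ARP tables and the field names in the SETUP message are illustrative assumptions, not the actual switch implementation; the sketch shows only the division of labor: local switches look at the ATM address field, gateways rewrite it.

```python
def local_switch_forward(setup, atm_routes):
    """Local ATM switch: forward SETUP purely on its ATM address field."""
    return atm_routes[setup["atm_addr"]]          # next switch/port


def gateway_forward(setup, nl_routes, local_arp):
    """Gateway ATM switch: pick the next hop from the network-layer
    address field, resolve its ATM address via the LIS-local ARP table,
    and rewrite the ATM address field before forwarding."""
    next_nl = nl_routes[setup["nl_addr"]]         # network-layer routing
    setup["atm_addr"] = local_arp[next_nl]        # ARP within this LIS
    return setup
```

Because only the ATM address field is rewritten, the same SETUP message can traverse local switches of the next LIS unchanged until it reaches the next gateway.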

In terms of ARP, both local ATM switches and gateway ATM switches have a distributed ARP server function, as shown in Figure 6. They have the same ARP table, including the relationship between the network layer address and the ATM address for every terminal within each LIS. This ARP table is synchronized periodically by flooding the ATM switches within each LIS (LIS A, LIS B, LIS C). Therefore, when the source terminal (ES1) sends an ARP request, it can get the ARP reply from the local ATM switch (SW1) within a short time (as shown in Figure 6). Additionally, when the gateway ATM switch (GW-SW1, GW-SW2) has to perform an ARP request, it needs only to look at the local ARP table, without sending any ARP request to the network. Therefore, in the proposed scheme, ARP duration is almost negligible.

In this way, once the end-to-end connection has been established between the source and the destination through several LISs, the source can send data at high throughput with the assured QOS, using the hardware cell relay technique.

Therefore, the proposed scheme has the following features:

1. End-to-end connection capability. The proposed scheme can provide high throughput data transfer through an end-to-end connection between source and destination, like the WATM subnet model and the peer model.

2. Short connection setup delay. The proposed scheme can reduce ARP delay by using the distributed ARP function. The delay is almost the same as that of the peer model. It is therefore appropriate for delay-sensitive interactive applications.

3. UNI compatibility. The proposed scheme has almost the same compatibility as the current standard UNI classical IP model, except that the connection setup message is extended to include network layer protocol address. Therefore, it does not considerably affect the current standard UNI.

4. Reasonable routing processing time. In the proposed scheme, two kinds of routing protocols have to work cooperatively during the connection setup phase, as is done in the classical IP model. Since multiprotocol functions exist only at the gateway ATM switch, the processing load of the proposed scheme is almost the same as that of the classical IP model.

5. Easy host configuration within the same LIS. In the proposed scheme, since the current standard ATM address is used within the same LIS, each host's address does not depend on its location within the LIS.

6. Good security capability. In the proposed scheme, call screening can be achieved with the network-layer address as well as with the ATM address, only at the connection setup phase at gateway ATM switches, like the peer model. For enhanced security control, an upper-layer service access point, such as a transport-layer service ID, may also be described as a call screening discriminator in the signaling message.
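The call screening of feature 6 can be sketched as follows. The rule format and field names are assumptions for illustration only; the point is that screening happens once, at connection setup at the gateway, on network-layer addresses and optionally an upper-layer service ID.

```python
def screen_call(setup, deny_rules):
    """Return True if the SETUP message passes screening at the gateway.
    A deny rule matches when each of its present fields (src, dst,
    service) equals the corresponding field of the SETUP; absent rule
    fields act as wildcards."""
    for rule in deny_rules:
        if (rule.get("src") in (None, setup["src_nl"]) and
                rule.get("dst") in (None, setup["dst_nl"]) and
                rule.get("service") in (None, setup.get("service"))):
            return False        # a deny rule matched: reject at setup
    return True                 # no rule matched: admit the connection
```

Since rejected connections never get a VCC, no per-cell filtering is needed during the data transfer phase.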

In the following sections, both the NNI routing scheme and the UNI/NNI signaling scheme for the gateway addressing model are described in more detail.

Enhanced NNI Routing Scheme

The private network-to-network interface (P-NNI) routing issue deals with how to exchange routing information among multiple ATM switches. It basically recognizes the topology of the interconnected ATM switches through a link-state routing protocol for both ATM addresses and network-layer addresses, and builds a routing table describing connectivity and link cost. The UNI/NNI signaling scheme, described in the next section, finds the best path for the signaling message by looking up this routing table with the required destination terminal address and the required QOS during a connection setup phase. That is, the NNI routing scheme is involved with the QOS routing issue as well as the connectivity issue.

Figure 7 shows two-layered NNI routing for both ATM addresses and network-layer addresses between several LISs. ATM address-based routing is achieved within the same LIS, such as A, B, and C. Network-layer address-based routing is achieved between different LISs. Since the individual routing scheme supports different kinds of QOSs, it is important to combine both schemes effectively. In this section, we discuss the connectivity issue and the QOS routing issue.

The proposed routing scheme consists of the following procedures:

1. LIS region discovery, ATM switch topology discovery within each LIS, and terminal address registration;

2. network-layer routing information exchange between different LISs;

3. ARP for setting up a relationship between both addresses.

In procedure 1, each individual ATM switch, including both local ATM switches and gateway ATM switches, exchanges "hello packets" with immediate neighboring switches and knows adjacent switch IDs, LIS region IDs, and authentication IDs. By referring to the LIS region ID and the authentication ID, each ATM switch recognizes which neighboring switches are in the same LIS. LIS A, B, and C regions can be determined after this LIS region discovery.

Then, each ATM switch periodically distributes its link state routing information only to the same LIS region ATM switches with flooding. The link state routing information is gradually synchronized at every switch in the same LIS. In the end the topology and link state routing information within each LIS can be discovered independently. This procedure helps to reduce any complicated location management for ATM switches and ATM terminals within each LIS. The topology data for ES1, ES2, SW1, SW2, and GW-SW1 are discovered for LIS A; data for ES3, ES4, SW3, SW4, and GW-SW2 are discovered for LIS B; and data for SW5, SW6, GW-SW1, and GW-SW2 are discovered for LIS C.
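The LIS region discovery step of procedure 1 can be sketched as follows. The hello-packet field names are assumptions; the logic is exactly what the text states: a neighbor belongs to the same LIS only when both its LIS region ID and its authentication ID match.

```python
def same_lis_neighbors(my_hello, neighbor_hellos):
    """Given this switch's hello data and the hellos received from
    immediate neighbors, return the switch IDs that belong to the same
    LIS region (matching LIS region ID and authentication ID)."""
    return [h["switch_id"] for h in neighbor_hellos
            if h["lis_id"] == my_hello["lis_id"]
            and h["auth_id"] == my_hello["auth_id"]]
```

Link-state flooding is then restricted to exactly this neighbor set, which is why each LIS's topology is discovered independently.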

Once the ATM switch topology is discovered within each LIS, an individual ATM terminal (ES1, ES2, . . .) tries to register its ATM address (ES1L, ES2L, . . .) through the standard interim local management interface (ILMI) protocol [1]. With the ILMI address registration, an individual ATM switch can detect the location (switch-port number) of the directly connected ATM terminals. Then ATM-address-based-routing can be achieved within the same ATM LIS, comparing the desired QOS with the link state information.

Once the ATM switch topology is discovered within each LIS (A, B, C), every gateway ATM switch (GW-SW1H, GW-SW2H) exchanges network-layer link-state routing information with neighbor gateway ATM switches, routers, and terminals within the same LIS (C) through a point-to-multipoint connection, a multicast server, or flooding. In particular, in the case of network-layer multicast routing, every gateway ATM switch supports multicast link state routing protocols, such as multicast open shortest path first (MOSPF), and group management control protocols, such as the Internet group management protocol (IGMP). The routing information is distributed in the same manner as for unicast routing. Multicast connections within the same LIS are established using the ATM addresses of all members joining the same multicast group, and multicast connections between different LISs are established using the network-layer multicast address.

In procedure 3, an ATM switch makes a local ARP table that indicates the relationship between the network layer address and the ATM address for terminals directly connected to it. This local ARP table is distributed to every other ATM switch within the same LIS in a flooding manner, so every ATM switch can share the ARP table within the same LIS. Since this approach is not a server approach, it is not seriously affected by network trouble, such as a server failure. Figure 6 shows the ARP sequences. In the case of the source terminal, it sends the ARP request to the ingress ATM switch. Since every ATM switch shares the ARP table related to every terminal within the same LIS, the ingress switch responds with the transit gateway switch's ATM address directly to the source terminal. In the case of the gateway ATM switch, since it already shares the ARP table, it can determine the ATM address without sending any ARP request to any other ATM switch. Therefore, the ARP delay can be limited to a sufficiently short value.
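Procedure 3's serverless, flooded ARP table can be sketched as follows. The data structures are assumptions for illustration; what the sketch shows is the property the text claims: after registration is flooded, any switch in the LIS can answer an ARP request locally, with no server round trip.

```python
class LISSwitch:
    """An ATM switch holding a replicated ARP table for its LIS."""
    def __init__(self, name):
        self.name = name
        self.arp = {}                 # network-layer addr -> ATM addr

    def register_terminal(self, nl_addr, atm_addr, lis_switches):
        """Record a directly attached terminal and flood the new entry
        to every other switch in the same LIS."""
        self.arp[nl_addr] = atm_addr
        for sw in lis_switches:
            if sw is not self:
                sw.arp[nl_addr] = atm_addr

    def arp_reply(self, nl_addr):
        # Any ingress switch answers from its own replica directly.
        return self.arp.get(nl_addr)
```

A server failure cannot take the table down, since every switch holds a full replica; the cost is the flooding traffic on each registration.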

On the other hand, regarding the QOS routing issue, since the link-state routing information based on the ATM address can describe information such as the individual link bandwidth and delay in the same LIS, it can provide QOS routing at least within the same LIS. However, since the current network layer routing protocol cannot guarantee QOS, the gateway ATM switch also supports Resource ReSerVation Protocol (RSVP) [24] and ST-II as QOS-guaranteeing network layer routing protocols. Bandwidth can be reserved between individual gateway ATM switches, and delay is guaranteed end to end. Therefore, the gateway ATM switch has to make a metric abstraction of link bandwidth, delay, and other information within each LIS and make such information visible outside the LIS. Devising a reasonable abstraction technique is very important for the coherence of the two-layered routing protocol.

Enhanced UNI/NNI Signaling Scheme for Connection Management

Signaling is the control procedure for establishing connections that ensure the various desired QOS, such as bandwidth, delay, and cell loss, for each VCC. The proposed signaling for connection management extends the ATM Forum standard UNI/NNI signaling scheme in two respects: extended addressing and QOS parameters, and an extended filtering key for security enhancement.

In terms of extended addressing, the ATM Forum decided to use the 20-byte OSI network service access point (NSAP) address to specify a destination terminal for private ATM networks. The upper 13 bytes, called the network prefix part, include the administrative organization and routing hierarchy. The lower seven bytes consist of the system identifier (SID) part (six bytes), which specifies the terminal address itself, and the selector part (one byte). The ATM Forum agreement contains two schemes. One is to use the IEEE 48-bit MAC layer address for the SID. The other is to use a network layer address, such as IP, CLNP, and AppleTalk, by directly mapping it to the SID [12] with its protocol discriminator in a portion of the network prefix part [20, 21]. It was agreed that the former would be mandatory and the latter optional. However, the proposed signaling scheme uses both addresses for the connection setup at the same time. The first is employed for specifying the ATM-level attachment point in the same LIS and the second for specifying the IP level attachment point among several LISs.
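
The 20-byte address layout above (13-byte network prefix, 6-byte SID, 1-byte selector) can be split as a minimal sketch; the function name and return format are illustrative.

```python
def split_nsap(nsap: bytes):
    """Split a 20-byte ATM NSAP address into the parts described in the text."""
    if len(nsap) != 20:
        raise ValueError("ATM NSAP address must be exactly 20 bytes")
    return {
        "network_prefix": nsap[:13],  # administrative organization + routing hierarchy
        "sid": nsap[13:19],           # e.g., the IEEE 48-bit MAC address of the terminal
        "selector": nsap[19],         # one-byte selector
    }
```

For example, when the SID carries a MAC address, `split_nsap(addr)["sid"]` yields the six MAC bytes directly, which is what the mandatory scheme above relies on.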

In order to set up a connection to the destination terminal, the source sends the message "SETUP" to the destination through gateway ATM switches and local ATM switches. This SETUP message includes the connectivity information and the QOS information field. The connectivity information field consists of the destination network-layer address and the ATM address for the next hop gateway ATM switch within the same LIS. The latter is filled in the "called party number" information element (IE) as in the ATM Forum standard, and the former is filled in the "broadband higher layer information (B-HLI)" IE [1], which is an extension of the ATM Forum standard. The QOS information field also consists of ATM-layer-level QOS and network-layer-level QOS. The ATM-layer-level QOS is filled in the "ATM user cell rate," "broadband bearer capability," and "quality of service parameter" IEs, as in the ATM Forum standard.

As Figure 7 shows, during the connection setup, the ATM address in the SETUP message needs to be translated into a new next hop gateway ATM address at every gateway ATM switch, while the network layer address and its QOS remain the same. That is, the ATM address and its QOS are effective for routing within the same LIS, but the network layer address and its QOS are effective for end-to-end routing. Each ATM switch decides the best path routing according to the specified QOS.
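
The translation step above can be sketched as follows. The message layout, the subnet extraction, and the routing-table format are simplified stand-ins; only the rewrite rule (the ATM next-hop address changes per LIS, the network-layer destination and its QOS do not) follows the text.

```python
def forward_setup(setup, gateway_routes):
    """At a gateway switch, rewrite the called-party ATM address toward the
    next-hop gateway for the destination subnetwork, leaving the B-HLI
    network-layer address and its QOS untouched."""
    dest_subnet = setup["b_hli_ip"].rsplit(".", 1)[0]  # naive subnet extraction
    next_hop = gateway_routes[dest_subnet]
    forwarded = dict(setup)                 # copy, so the received message is kept
    forwarded["called_party_atm"] = next_hop
    return forwarded
```

A gateway applying this at each hop reproduces the behavior in Figure 7: the "called party number" IE is refreshed per LIS while the B-HLI IE rides through end to end.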

The multicast signaling control procedure depends on the location of multicast group hosts, considering whether they are located inside or outside of the same LIS. Within the LIS, the source terminal recognizes each multicast group host ATM address and simply adds each ATM address into a multicast connection by the point-to-multipoint setup signaling message "ADD_PARTY" [1]. On the other hand, if multicast group hosts are located outside of the LIS, the source terminal adds the multicast network layer protocol address into a multicast connection, by extending the "ADD_PARTY." This "ADD_PARTY" message is processed at the gateway ATM switch, and the succeeding procedure is performed according to the location of other multicast group hosts.

In terms of the extended filtering key for security enhancement, since VCCs are used over multiple LISs, call screening has to be accomplished for security enhancement. Unlike the case with conventional LANs, call screening is carried out by the gateway ATM switch at the connection setup phase, using information at various levels, such as the source and destination address pair and the service access point identifier pair for the higher layer, included in the SETUP signaling message. In order to describe this kind of transport layer information, the proposed signaling scheme adds a transport-layer protocol description into the B-HLI IE. The gateway ATM switch decides whether the connection should be accepted or not by comparing the B-HLI IE with its filtering table. However, during the data transfer the source terminal might possibly send different kinds of packets which cannot be admitted for transmission through the established VCC. To prevent this, the receiver terminal or router also requires a filtering function to drop any unadmitted packets, by analyzing every packet and consulting the relationship table between the established VCC and the upper-layer protocol attributes.
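
The B-HLI-based call screening above can be sketched as a table lookup. The filtering-table format and field names are assumptions; the paper specifies only that the gateway compares the B-HLI IE (including a transport-layer descriptor) against a filtering table at setup time.

```python
def screen_call(setup, filter_table):
    """Return True if the SETUP message passes the gateway's filtering table.
    The table is keyed by (source, destination, transport descriptor);
    anything not explicitly permitted is rejected."""
    key = (setup["src_ip"], setup["dst_ip"], setup["b_hli_transport"])
    return filter_table.get(key, "deny") == "permit"
```

A default of "deny" mirrors the screening intent: connections are rejected at setup unless the filtering table admits that address/protocol combination.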

Traffic Management Scheme

Another key issue for managing ATM connections in a multimedia application environment is how to effectively guarantee the QOS for a wide variety of multimedia applications. In this section, we define the service classes for a simple connection/traffic control and present an overview for individual traffic control schemes for these classes. Then we discuss the proposed reactive congestion control scheme in detail and compare it with the current ATM Forum congestion control discussion.

Service Classes

An ATM LAN is expected to offer not only existing LAN data services but also various multimedia services. To provide such a variety of services, an ATM LAN has to support multiple traffic classes with different service qualities and different traffic controls for individual classes. Considering the traffic control simplicity and the cost-effectiveness of the control in the LAN environments, a simple bandwidth reservation control, based on peak rate shaping, would be appropriate.

Three traffic classes - best effort service class, guaranteed burst class, and guaranteed stream class - are considered to be supported in ATM LAN. Figure 8 shows the individual control schemes, characteristics, and suitable services.

Best Effort Service Class. This class is for conventional computer communications, such as in the current TCP/IP. For this class, no bandwidth is required to be reserved in the network, and data is transmitted through the network with lower priority than that of the guaranteed classes. When the network is lightly loaded, bursts can be transmitted with low latency, as in conventional LANs. Once the network becomes congested, however, cells may be lost through buffer overflow at the ATM switch. In order to prevent such cell losses, it is necessary for ATM switches to provide a high-capacity buffer [7]. However, since even such a buffer cannot completely avoid cell losses in overload situations, reactive congestion control (described later) is required.

Guaranteed Burst Class. This class is for high-speed file-transfer-type applications. Since high-speed, long-burst transmission easily causes overload in the network, such traffic should be transferred with bandwidth reservation. In this class, the VCC is initially established without bandwidth reservation. When a source has a burst to be transmitted on an established VCC, only the peak bandwidth for each burst is declared to the network, to be reserved on a burst-by-burst basis with FRP [4, 19]. Once the peak bandwidth is reserved in the network, the entire burst is transferred without cell loss. In this class, when the network is heavily loaded, the reservation blocking rate is large and throughput/delay degradation occurs, as in the best effort class. Therefore, reactive congestion control is required.

Guaranteed Stream Class. This class is for stream-type data traffic, such as in ST-II [22] and RSVP [24]. Real-time services, such as video and audio services, belong to this class. When a VCC is established, the route and peak rate for the call are set up at the same time. Once the VCC is set up, low latency and transmission with no cell loss are guaranteed. In exchange, the VCC setup delay becomes longer than that for the best effort and guaranteed burst services. Regarding a variable bit rate source, at least peak rate multiplexing can be simply implemented, but the pertinent statistical multiplexing is currently under study.

On the other hand, in the ATM Forum, the four traffic classes, constant bit rate (CBR), variable bit rate (VBR), available bit rate (ABR), and unspecified bit rate (UBR), have been proposed. CBR and VBR can be mapped into the guaranteed stream class. ABR can be mapped into the best effort and guaranteed burst classes. We consider the suitable traffic control method to be dependent on the burst length. For short bursts, whose transmission time is shorter than the round-trip time (RTT), simple feedback control may be enough. However, we believe a large file transfer, whose transmission time is longer than the RTT, requires the application of an explicit rate control to reduce cell losses and also to guarantee transmission time. Based on such reasoning, we propose two different classes for ABR class data transmission.
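
The burst-length reasoning above reduces to a simple rule of thumb, sketched below. The function and parameter names are illustrative; the rule compares a burst's transmission time at its peak rate against the round-trip time, as argued in the text.

```python
def choose_abr_class(burst_bits, peak_rate_bps, rtt_s):
    """Pick the proposed ABR sub-class for a burst: simple feedback control
    suffices for short bursts, while long bursts (transmission time > RTT)
    warrant explicit peak-rate reservation (guaranteed burst class)."""
    transmission_time = burst_bits / peak_rate_bps
    return "guaranteed_burst" if transmission_time > rtt_s else "best_effort"
```

For instance, a 10-MB file sent at 100 Mbps occupies the link far longer than a LAN round-trip, so it falls on the reservation side of the rule, whereas a single short packet train does not.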

Reactive Congestion Control

Best effort class traffic and guaranteed burst class traffic are statistically multiplexed at the cell level and the burst level, respectively. Therefore, if no congestion controls are applied to these classes, cell losses or burst reservation blockings become more frequent during overload conditions. This causes many packet retransmissions from source terminals. When such traffic increases, especially in multi-hop networks, severe congestion and serious throughput degradation appear, to the point where no burst can get through the network. These results are more likely to occur when individual source terminals send data at a high peak rate.

Regarding control policies for ATM LANs, preventive congestion controls, such as CAC and UPC, which have been investigated for public ATM networks, are not preferred in the LAN environments, since it would be difficult to predict QOSs in a network using source traffic parameters declared by the users, such as average rate and burst duration. As in conventional congestion control schemes in LANs, such as TCP's window control and source quench control, traffic control in ATM LANs should be accomplished by individual terminals reactively, rather than preventively. In this section, we propose reactive congestion controls for both classes.

In the ATM Forum, two kinds of congestion control schemes for ABR services have been proposed: closed-loop rate-based control [2] and link-by-link credit control [10]. Rate-based control is a feedback control mechanism that modulates the volume of traffic admitted to the network according to the status of the network. This control method, as proposed in the ATM Forum, is now called the "enhanced proportional rate control algorithm" (EPRCA). In EPRCA, not only the binary information concerning the congestion status but also the rate information, which indicates the maximum rate at which the source can transmit without overload, is carried in a control cell, or resource management (RM) cell, to the source terminal by the network.

In the credit control method, a receiver site of the network (ATM switch) indicates the available buffer space to a sender by credit. No cell can be transmitted unless the sender knows in advance that the ATM switch has room to buffer cells. With this approach, cells are never lost, and ideal performance may be obtained. However, the scheme requires a large buffer space and high-speed buffer management for each virtual connection in individual ATM switches. This means the switch hardware would become large and expensive, which is not preferable for LAN switches.

In the ATM Forum's September 1994 meeting, rate-based control was chosen for ABR services, for the following reasons. The credit control scheme requires extremely large buffers in each switch in the wide-area network (WAN) environment, where the number of VCs and the propagation delay for the link are large. On the other hand, the rate-based control scheme can operate not only in LANs but also in WANs, without such per-VC queuing. In addition, rate-based control defines only end-system behavior and leaves flexibility in the switch architecture and traffic management strategy.

Although the rate-based control scheme has been examined very carefully, some problems remain to be solved. Since the source rate is changed when the source receives the RM cells that are sent back from the destination, the rate change may not be carried out fast enough, especially in a large network. If many terminals start to send data at the same time, the sizes of the queues in the switches suddenly increase. However, at steady state, the queue sizes stay small, because rate-based control works effectively. In order to solve such problems, we propose a congestion control scheme for each traffic class, as described in the following section.

Congestion Controls for the Best Effort Class. For the best effort class, simple rate-based control is carried out. However, rate-based control alone cannot avoid cell losses due to sudden growth in queue sizes when many terminals start to send data at the same time, because the sender transmission rate cannot be reduced within such a short period. To avoid cell losses caused by such a sudden queue increase, we propose combining link-by-link backpressure control with rate-based control.

It has been pointed out that link-by-link backpressure control alone may introduce head-of-line (HOL) blocking among switch nodes, which may block traffic that does not pass through the congested nodes. However, this problem can be solved by combining rate-based control with link-by-link backpressure control. When the network is heavily loaded, the transmission rate can be reduced by rate-based control, while link-by-link backpressure control avoids cell losses. At slower rates, the number of cells stored in the buffers is reduced, and backpressure control is not needed so frequently. Therefore, the two types of control complement each other. In the following discussions, simple FECN is used for rate-based control instead of EPRCA, because EPRCA has not yet been completely specified.

Figure 9 shows an example of flow control for this combination method. Congestion is detected by monitoring the queue size at an individual switch. Two kinds of thresholds are specified, one for FECN control and the other for backpressure control. When a cell passes the switch, if the queue length is above the FECN threshold, the switch sets the explicit forward congestion indication (EFCI) state in the payload type field of the cell header to "congestion experienced" and forwards the cell to the destination. At the destination terminal, when a cell set to "congestion experienced" arrives, the congestion information is sent back to the source terminal. A resource management (RM) cell (with PT = 110) is used for encoding this control. If the queue length reaches the backpressure threshold, transmission from previous switches or terminals is stopped to prevent buffer overflow. The GFC bit in the cell header or a special control cell is used to deliver the backpressure signal. Note that backpressure control should be carried out only for best effort service connections. The FECN threshold should be chosen smaller than the backpressure threshold, so that FECN-based rate control is activated before the queue length reaches the backpressure threshold. This keeps the backpressure-controlled periods short and should reduce HOL blocking among the nodes. At each terminal, the peak rate is changed according to the RM cell information. The rate change policy is multiplicative decrease and additive increase, since this combination gives the best performance with regard to fairness and efficiency [5].
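
The two-threshold queue monitoring above can be modeled in a few lines. The threshold values and the return labels are illustrative; the essential property, taken from the text, is that the FECN threshold sits below the backpressure threshold so rate control kicks in first.

```python
FECN_THRESHOLD = 100           # cells; mark EFCI above this (illustrative value)
BACKPRESSURE_THRESHOLD = 400   # cells; halt upstream senders above this

def switch_action(queue_len):
    """Decide what the switch does for a transiting best-effort cell,
    given the current output-queue length."""
    if queue_len >= BACKPRESSURE_THRESHOLD:
        return "assert_backpressure"  # upstream transmission stopped; no cell loss
    if queue_len >= FECN_THRESHOLD:
        return "mark_efci"            # destination will return an RM cell
    return "forward"
```

Because the FECN band is entered first, sources start multiplicative decrease before the queue ever reaches the backpressure band, which is what keeps backpressure periods short.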

Figure 10 shows the throughput comparisons among no control, FECN-based rate control alone, and the proposed FECN + backpressure scheme. In this simulation, FECN + backpressure control obtains the best throughput. The details of the simulation model and results were presented in [9].

Congestion Control for the Guaranteed Burst Class. For the guaranteed burst class, FRP is carried out. Figure 11 shows the procedure. At the call setup, only the VCC path is set up, and no bandwidth is reserved. Before each burst transmission, a bandwidth reservation control cell is sent along the VCC to reserve the peak bandwidth. Each intermediate node checks to see whether or not the residual amount of the bandwidth in the attached link is sufficient to handle the requested peak bandwidth. If it is sufficient, the bandwidth is reserved, and an ACK cell is sent back to the source terminal.

Once the peak bandwidth is reserved, burst transmission without cell loss is guaranteed. Otherwise, a NACK cell is sent back to the source terminal and the reserved bandwidth for the intermediate links is released. The blocked source tries another reservation after some back-off interval. For bandwidth reservation control cells and ACK/NACK cells, the fast resource management (FRM) cells are used. In the LAN, the terminals tend to transmit bursts with as high a peak rate as possible. Sending bursts with a high peak rate, however, causes a high blocking ratio, and high network throughput cannot be obtained. Bandwidth reservation blocking probability rapidly becomes higher as the peak rate for the terminal and the number of hops in the network increase [19]. For example, with peak rate/link speeds of 1.0 and 0.1 for the one-hop model, the maximum throughput/link capacities to satisfy 0.1 blocking probabilities are only 0.1 and 0.7, respectively.
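
The hop-by-hop reservation logic above can be sketched as follows. Links are modeled as a table of residual capacities, and all names are illustrative; the behavior (reserve on every link or release any partial reservation and return NACK) follows the FRP description in the text.

```python
def reserve_burst(path_links, residual, peak_bw):
    """Try to reserve peak_bw on every link of the VCC path.
    Returns True (ACK) on success; on blocking, releases any partial
    reservation and returns False (NACK)."""
    reserved = []
    for link in path_links:
        if residual[link] >= peak_bw:
            residual[link] -= peak_bw     # hold the bandwidth on this hop
            reserved.append(link)
        else:
            for done in reserved:         # blocked: release intermediate links
                residual[done] += peak_bw
            return False
    return True
```

A blocked source would then back off and retry, possibly at a lower peak rate, as discussed next.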

In order to obtain high throughput, it is better to reserve bandwidth with a lower peak rate. On the other hand, transmission of bursts with smaller peak rates results in longer transmission times. Thus, we propose a method of adaptively controlling the peak rate to be reserved for each burst, corresponding to the network status. ACK/NACK cells are used to detect the degree of congestion. In the proposed adaptive rate control scheme, the source reduces or increases the peak rate to be reserved for each burst on receipt of the bandwidth control cell (ACK/NACK), as in the ECN method. The rate change policy is multiplicative decrease and additive increase, as in the proposed scheme for best effort service. Figure 12 shows the end-to-end delay performance with adaptive rate control. In this simulation, adaptive rate control achieves low end-to-end delay at both high and low link utilizations. Details of this simulation model and results were presented in [9].
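
The source-side adaptation above is a plain additive-increase/multiplicative-decrease (AIMD) update on the declared peak rate. The constants and bounds below are illustrative assumptions, not values from the paper.

```python
def adapt_peak_rate(rate, ack, *, min_rate=1.0, max_rate=100.0,
                    decrease=0.5, increase=5.0):
    """Update the peak rate (in Mbps) declared for the next burst:
    additive increase on ACK (reservation succeeded),
    multiplicative decrease on NACK (reservation blocked)."""
    if ack:
        return min(max_rate, rate + increase)
    return max(min_rate, rate * decrease)
```

Repeated NACKs thus halve the requested peak rate quickly, lowering the blocking probability, while a run of ACKs creeps the rate back up toward the link speed.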

Multimedia on Demand over ATM

It is important to consider real-world applications that can fully utilize ATM's capabilities. Few applications have yet been developed that can make good use of ATM's broadband/seamless and multiple-QOS functionality. This section focuses on how to effectively integrate actual multimedia applications with several ATM protocols described earlier. The distributed server and client multimedia system was developed as a prototype system, called "multimedia on demand over ATM." It can be used to demonstrate both next-phase ATM LAN features and multimedia applications, such as workstation-based distributed multimedia document processing and home terminal/digital TV-based entertainment or information retrieval. In this section, we present an overview of the current prototype system implementation and explain the key modules in this system. The multimedia application interface issue for ATM networks, which manages ATM connections and controls the desired ATM service quality for that application, is included as a key module, but is still at the discussion and experimentation stage.

Prototype System Overview

A diagram showing the hardware and software architecture for the experimental multimedia-on-demand system is shown in Figure 13 [13]. The basic concept of the application software design is to permit any user's multimedia documents to be linked to video or image media files located at remote distributed servers connected to the high-speed network. Therefore, it allows users to avoid transmission and storage of large multimedia files, retrieving only what is required for actual viewing or processing of a document. That is, the users can retrieve the video, image, and text with audio on demand from several remote multimedia servers. This application is also intended to be built in a flexible, object-oriented manner. This object-oriented approach provides considerable flexibility in constructing multimedia-on-demand application interfaces with user-friendly buttons and windows for media display. In addition, this approach promotes efficient, high-quality network delivery of multimedia information, since media types may be identified and associated with appropriate QOS and flow-control parameters at transport and network layers.

The experimental prototype system employs a high-speed (2.4 Gbps) ATM switch with appropriate addressing/routing/signaling and congestion control, and application software to enable fast and reliable workstation access to video/image information stored in several multimedia servers. The current system implementation consists of the following key modules:

1. XATOM architecture ATM switch [7];

2. Unix workstations with a JPEG compression board and an ATM interface;

3. ATM addressing/routing/signaling and congestion control;

4. application software for distributed multimedia documents and video on demand; and

5. multimedia application interface for setting up a relationship between the other modules.

We will describe each module in detail.

ATM LAN Switch

The prototype system uses several NEC ATM switches, based on expandable ATOM (XATOM) switch architecture [7]. It is a combination of an output buffer switch and an expandable input buffer. To avoid cell loss at output buffers and simplify the implementation of cell transmission scheduling, an internal backpressure mechanism is implemented between the input and output buffers. The switch has 16 x 100 (or 155) Mbps ports and a total throughput capacity of 2.4 Gbps. It adheres to ATM Forum standards for cell format, etc. It has a control port with a Sparc processor for implementing signaling/routing/congestion control functions.

Workstation Platform

Standard Sun Sparc-10 workstations are used as platforms for this demonstration, at both the terminal and the multimedia file server ends. Each of these stations is equipped with a prototype ATM interface card [16], which supports AAL5. To achieve high throughput (almost 100 Mbps) end to end, this card supports a copyless architecture that can transfer ATM cells directly to the main memory space in the operating system. In addition, the client terminal stations are equipped with Parallax XVideo cards (JPEG cards) for video/image encoding and decoding. Both workstations operate under a standard Sun Solaris 2.3 OS and X-windows software environment.

ATM Addressing/Routing/Signaling and Congestion Control

This prototype system supports IP protocol over ATM as one of the network layer multiprotocols. As described in the section on the connection management scheme, in order to support the gateway model addressing scheme, signaling and routing are extended to handle both ATM address and IP address. As indicated in Figure 13, it is assumed that every ATM switch behaves as the gateway ATM switch, to which a different IP subnetwork number is allocated. In the current implementation, each gateway ATM switch recognizes its subnetwork region by neighboring ATM switch discovery and detects the ATM terminal location by ILMI address registration. The ARP tables on the gateway ATM switches are configured independently by this ILMI address table.

In terms of the NNI routing scheme, although an ATM-address-based link-state routing protocol has already been implemented in these ATM switches, only a simple distance-vector IP routing protocol has been implemented, instead of a link-state IP routing protocol. Therefore, each ATM terminal can establish a connection with the required QOS hop by hop, according to the proposed signaling scheme, specifying the routing path by both the next-hop transit gateway ATM address and the destination IP address.

Regarding ATM congestion control, as described in the section on the traffic management scheme, the prototype system already supports adaptive rate control for FRP in the guaranteed burst class, because ATM traffic in the present multimedia-on-demand scenario is characterized by large bursts of information from server to client terminal. This control is implemented as a kind of signaling message, rather than as the special ATM-layer management ("FRM") cell defined for FRP. The bandwidth allocation algorithm assigns an available peak bit rate to bursts on an FCFS basis, blocking those that would exceed the switch port capacity. An experimental syntax, consisting of a burst "request-to-send" message (terminal to switch controller) and a "clear-to-send" allocation message (switch controller to terminal), has been defined. On the other hand, congestion control for best effort service is under development.

The control function implemented in the prototype was verified in the presence of interfering traffic produced by two load-generator workstations (Sun SPARCstation 2). Each load generator is programmed to produce up to 40 Mbps of traffic with selectable burst length and interarrival parameters. An additional CBR traffic load of up to 75 Mbps is provided by injecting cells through the control port and then using the loop-back mechanism available in the switch.

Multimedia Application Software

Figure 14 shows the multimedia server-client approach. At the client terminal, a multimedia document is parsed into media elements (objects), each of which is handled by its own presentation and control object ("widget"). Each object is implemented by extending the TCL/TK script language [14] to handle video and audio. A control path and a data path are established separately between the server and the client. The control path associates a location index in the multimedia server with each media element, and retrieval is accomplished via appropriate conversions of the index to storage address, memory location, etc. Upon locating a media element, a separate data path is set up with appropriate QOS parameters, and the information is transferred to the client terminal displaying the multimedia document. Furthermore, a simple groupware function is also added to the TCL/TK script, so that several terminals can share the same multimedia-on-demand view by passing messages among group hosts. This groupware function is effective for use as an education and training tool.

In the present implementation, compressed National Television Systems Committee (U.S. standard) quality (640 x 480 pixels) JPEG video segments are stored on the server disk and may be accessed by either a multimedia document or a video-on-demand session. Multiresolution uncompressed image files (typically 4K x 4K maximum resolution), also stored on the same disk, are retrieved for the multimedia document application. When the multimedia server is accessed from six Sun workstations, no quality degradation is observed, and the frame rate remains at almost 30 frames per second, comparable to normal TV quality. Figure 15 is a photograph of the actual prototype system.

Multimedia Application Interface

Current transport protocols, such as TCP, may be used for some multimedia applications, but are known to suffer from serious limitations in a general high-speed network environment. These include lack of media-specific QOS support, window-based flow control, and slow implementation speed. When dealing with applications handling voice, video, and image data, it is preferable to provide QOS and flow-control features tailored to the requirements of stream-type sources, while also providing a faster implementation than TCP. The current implementation includes a prototype multimedia transport protocol (MTP) with selectable QOS options, such as peak bandwidth, for different media. In the MTP layer, ATM service managers are being developed to recognize the type of traffic (video/audio) and its QOS for each application automatically, without any declaration from the application; they can monitor traffic, select traffic classes, and renegotiate parameters dynamically. Therefore, the applications will not need to know about traffic classes and required bandwidth. Note that the prototype includes only a partial MTP implementation, since the QOS-mapping and flow-control features are still at the discussion and experimentation stage.

At the ATM network interface, the signaling protocol obtains the destination terminal address for each application and sets up a connection if there are no related connections. Then it acquires the traffic parameters for each application, declared by the MTP protocol, on a burst-by-burst basis. It can renegotiate the bandwidth on the related connection. The source then sends packets directly, or in IEEE 802.2 LLC/SNAP encapsulation [11], through the VCC.

The current implementation has some limited cell-loss-handling features for video and image data, which are not retransmitted by the MTP protocol. As mentioned, the prototype application software is still evolving in terms of functionality and performance. Planned improvements include throughput enhancement, media synchronization, quantitative QOS support and rate adaptation, and better error recovery/concealment.


Conclusion

We have presented new connection and traffic management schemes for ATM multimedia internetworking and have shown a current prototype system with a multimedia-on-demand application. We proposed a new connection management scheme, based on gateway model addressing. The proposed addressing model specifies the routing path by both the transit gateway switch's ATM address and the destination network layer address. It can provide VCCs that overcome the disadvantages of router interconnection over multiple ATM subnetworks, in terms of both throughput and QOS. In order to achieve this addressing model, the standard signaling/routing is extended. We then proposed a new traffic management scheme, which effectively guarantees QOS according to three different classes: best effort class, guaranteed burst class, and guaranteed stream class. Since preventive congestion controls, such as CAC and UPC, are difficult to use for predicting QOS for most LAN services, adaptive/reactive congestion controls are employed for these classes: rate-based control combined with backpressure for the best effort class, and adaptive rate control for FRP for the guaranteed burst class. Finally, we presented the current prototype system, which is being developed to integrate with actual multimedia applications. This system has a multimedia application interface, which helps to automatically indicate the desired QOS for each application.

As a result, the proposed architecture can easily support high-speed multimedia services as well as conventional connectionless services, and will lead to a distributed multimedia computing environment. In the future, an effective multicast routing/signaling and congestion control protocol will be considered in detail. As for integration of a multimedia application with an ATM control scheme, further improvements to the multimedia transport protocol and multimedia application interface are planned. Additionally, the performance of both the proposed ATM control scheme and the multimedia application is being evaluated and will be reported in future work.


We are very grateful to M. Yamamoto, T. Takeuchi, and T. Nishida, of NEC Corporation, and to K. Watanabe, D. Raychaudhuri, and G. Ramamurthy, of NEC USA Inc., for furnishing invaluable suggestions and for participating in intensive discussions.


1. ATM Forum. ATM user-network interface specification Ver. 3.0. ATM Forum (Sept. 1993).

2. ATM Forum. Closed-loop rate-based traffic management. ATM Forum/94-0438R2, Sept. 1994.

3. Boudec, J.L., et al. Flight of FALCON. IEEE Network Mag. (Feb. 1993).

4. Boyer, P.E. et al. A reservation principle with applications to the ATM traffic control. Computer Networks and ISDN Systems, 24 (1992), 321-334.

5. Chiu, D.M., and Jain, R. Analysis of the increase and decrease algorithms for congestion avoidance in computer networks. Computer Networks and ISDN Systems 17, 1 (June 1989).

6. Cole, R.G. IP over ATM: A framework document. Internet draft, draft-ietf-atm-framework-doc-00.ps, Jan. 1994.

7. Fan, R., et al. Expandable ATOM switch architecture (XATOM) for ATM LANs. In Proceedings of IEEE Supercomm/ICC '94 1 (May 1994), 402-409.

8. Heinanen, J. NBMA next hop resolution protocol (NHRP). Internet draft, draft-ietf-rolc-nhrp-02.txt, Aug. 1994.

9. Ikeda, C., et al. Adaptive congestion control schemes for ATM LANs. In Proceedings of IEEE Infocom '94 2, 6d.4 (June 1994), 829-838.

10. Kung, H.T., et al. The FCVC (flow controlled virtual channels) proposal for ATM networks. In Proceedings of the International Conferences on Network Protocols (Oct. 1993).

11. Laubach, M. Classical IP and ARP over ATM. RFC1577, Jan. 1994.

12. Lyon, T. Network layer architecture for ATM networks. ATM Forum/92-119, July 20-21, 1992.

13. Ott, M., et al. A prototype ATM network based system for multimedia-on-demand. In Proceedings of the 5th IEEE Comsoc Multimedia '94 3-2, (May 1994).

14. Ousterhout, J.K. An Introduction to Tcl and Tk.

15. Perkins, D. Beyond classical IP - integrated and ATM protocol specification. ATM Forum/94-0936, Sept. 1994.

16. Sakamoto, H., et al. Device driver for ATM host interface. In Proceedings of the Fall Conference of IEICE, Sept. 1994.

17. Saleh, A. Message set for the P-NNI call control protocol. ATM Forum/94-0269, Mar. 1994.

18. Sullivan, E., and Gouguen, M. PNNI draft specification. ATM Forum/94-0471, Apr. 1994.

19. Suzuki, H., and Tobagi, F.A. Fast bandwidth reservation scheme with multi-link and multi-path routing in ATM networks. In Proceedings of Infocom '92 10A.2 (May 1992).

20. Suzuki, H. Addressing and routing for private and public ATM networks. ATM Forum/92-302, Dec. 9-11, 1992.

21. Suzuki, H. et al. Routing schemes for multiple network address types. ATM Forum/93-396 (Jan. 8, 1993).

22. Topolcic, C. Experimental internet stream protocol, Version 2 (ST-II). RFC1190, October 1990.

23. Wernik, M. et al. Traffic Management for B-ISDN Services. IEEE Network Mag. (Sept. 1992).

24. Zhang, L., et al. Resource ReSerVation Protocol (RSVP) Version 1 functional specification. Internet draft, May 1994.

About the Authors:

A. IWATA is a member of the technical staff in the C&C Research Laboratories at the NEC Corporation. Current research interests include ATM LAN architecture, ATM LAN protocols, and computer communication protocol issues.

N. MORI is a member of the technical staff in the C&C Research Laboratories at the NEC Corporation. Current research interests include ATM LAN architecture, ATM LAN signaling protocols and switch control firmware.

C. IKEDA is a member of the technical staff in the C&C Research Laboratories at the NEC Corporation. Current research interests include traffic control for ATM networks.

H. SUZUKI is a member of the technical staff in the C&C Research Laboratories at the NEC Corporation. Current research interests include switch architecture, traffic control, signaling and routing, network management, and internetworking architectures for ATM networks.

Authors' Present Address: C&C Research Laboratories, NEC Corporation, 4-1-1 Miyazaki, Miyamae-ku, Kawasaki, Kanagawa, Japan 216; email: {iwata, mori, ikeda, hiroshi}

M. OTT is a member of the technical staff in the C&C Research Laboratories at the NEC Corporation. Current research interests include multimedia-friendly hardware architectures, real-time operating systems, and new user-centered approaches to navigating large "chaotic" data spaces. Author's Present Address: C&C Research Laboratories, NEC USA Inc., 4 Independence Way, Princeton, NJ 08540; email: max
COPYRIGHT 1995 Association for Computing Machinery, Inc.
No portion of this article can be reproduced without the express written permission from the copyright holder.

Article Details
Title Annotation: Issues and Challenges in ATM Networks; asynchronous transfer mode
Author: Iwata, A.; Mori, N.; Ikeda, C.; Suzuki, H.; Ott, M.
Publication: Communications of the ACM
Date: Feb 1, 1995