Some of my pictures!  

Welcome to the world of ATM!


I am Mrityunjay and I am your host on this page, which I put together after shrugging off much of my laziness, and after finishing a similar page at my company!

I would like to talk a little more about myself, if you don't mind! :)

I have been working for the past two and a half years at Network Machines Inc in Noida, India. The company works in the telecommunications field, and we have built switch servers, set-top boxes, and edge switches for our Japanese and US clients. Since the start of my career, I have mainly been involved in ATM protocol stack design and development, and in implementing related network drivers on various platforms, including Windows NT, 95, Linux, and a proprietary embedded OS. For more details, you are welcome to look at my resume in text or in html.

It was a nice, long and rich experience at NMI. If any of you are interested in doing good work in India, NMI is the place. Write to me and I will give you more details about the advantages of working at NMI.

After a short stint at Accelerated Networks, Inc. (ANI for short), a company based in Moorpark, California, I moved on to Microsoft, where I now work in the Windows networking group. I hope to do some exciting stuff here that changes the world!

Also, thank you for your responses to the little quizlet I put on this page! It was heartening to learn that you think ATM to the desktop is viable; I think the same!

What follows is a brief tour of ATM topics, which I hope will prove useful to some of you.

Basic Concepts of ATM

What is Hot in ATM

A small tutorial on ATM

Some cool links to ATM stuff


Basic Concepts of ATM

Asynchronous Transfer Mode, or ATM, is a communication architecture based on the switching of small fixed-length packets called cells. Confusing? Initially yes, but only because it is a new networking technology. ATM has the potential to carry all kinds of traffic, like voice, video and data, without distinguishing between the traffic types, which is not the case with current networks. ATM can also provide guarantees on the Quality of Service you are going to get, and the network ensures that you get the same quality throughout the life of your traffic flow, something the current networks can't even dream of providing. ATM is a high-speed networking technology, which means it adds much smaller delays to your traffic than existing networks do. The main point about current networks is that they are shared networks, and many networking protocols were developed with this in mind. ATM, by contrast, is not a shared network: it is a point-to-point, connection-oriented network, which means stations need to set up a (virtual) connection before communicating. So, to deploy ATM, we need to develop new protocols or modify existing ones.

To bring about fast deployment of ATM, and to keep the different vendor offerings in sync so that they can interoperate, the ATM Forum was formed by a large group of ATM vendors and other concerned entities, including universities, government institutions and software companies. The ATM Forum has come out with a large number of interoperability agreements, which are being used by vendors all over the world to deploy ATM.

Back to top

What's hot in ATM

Everything about ATM is hot! But, as is the case with most technologies, some aspects are crucial to the working of ATM, while there are others we can live without. The important constituents of a networking technology are: physical media supported, signaling protocols, traffic mix and data models supported, flow control protocols, routing protocols, testing mechanisms and methodologies, security considerations and protocols, network management, and most of all, interoperability concerns.

Traditionally, signaling protocols have been hot. To set up a connection between two entities (which you must do for any kind of communication), the two of them (and the intermediate entities, if any) must exchange control messages on a pre-set channel. This is what signaling protocols are all about. Standards and interoperability agreements have been finalized, and currently optional features, like Calling Party Identification, third-party call setup, etc., are being added.

Another hot area is the set of data models currently being proposed and deployed. Classical IP over ATM has been used for carrying IP traffic over ATM, while LAN Emulation is used for carrying all kinds of traffic over ATM, including IPX, AppleTalk, etc. Recently, the ATM Forum has come out with Multi-Protocol over ATM (MPOA), since it was felt that IPoA and LANE didn't utilize ATM's advantages judiciously. There have been other proposals too, especially for multicast and broadcast requirements. This is currently the hottest field, in my opinion.

Traffic Management is another area that requires attention from a lot of vendors and research communities, because of the complexities of congestion and flow control protocols in ATM. Special consideration goes to the Available Bit Rate (ABR, see below) service, since it is the service that will require the most extensive flow control and traffic management techniques.

Security is still in its infancy in the ATM Forum, as is Wireless ATM, but both have the potential to be very hot in about a year.

Back to top

A Small Tutorial on ATM

ATM stands for Asynchronous Transfer Mode. To understand what ATM is all about, a brief introduction to STM is in order. "ATM is the complement of STM, which stands for Synchronous Transfer Mode. STM is a circuit switched networking mechanism, where a connection is established between two end points before data transfer commences, and torn down when the two end points are done. Thus the end points allocate and reserve the connection bandwidth for the entire duration, even when they may not actually be transmitting the data. The way data is transported across an STM network is to divide the bandwidth of the STM links (familiar to most people as T1 and T3 links) into a fundamental unit of transmission called time-slots or buckets. These buckets are organized into a train containing a fixed number of buckets and are labeled from 1 to N. The train repeats periodically every T time-period, with the buckets in the train always in the same position with the same label. There can be up to M different trains labeled from 1 to M, all repeating with the time period T, and all arriving within the time period T. The parameters N, T, and M are determined by standards committees, and are different for Europe and America. T is a historic legacy of the classic Nyquist sampling criteria for information recovery which translates to a value of 125 usec. This is the most fundamental unit in almost all of telecommunications today." [from Zahir Ebrahim]
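To make the numbers concrete, here is a quick back-of-the-envelope calculation (a sketch using the familiar North American T1 framing: 24 eight-bit time-slots plus one framing bit per 125-microsecond frame):

```python
# T1 framing: 24 time-slots x 8 bits + 1 framing bit = 193 bits,
# repeating every 125 microseconds (8000 frames per second).
SLOTS = 24
BITS_PER_SLOT = 8
FRAMING_BITS = 1
FRAME_PERIOD = 125e-6  # seconds -- the Nyquist-derived T from the quote above

bits_per_frame = SLOTS * BITS_PER_SLOT + FRAMING_BITS   # 193 bits per train
line_rate = bits_per_frame / FRAME_PERIOD               # bits per second

print(bits_per_frame)   # 193
print(int(line_rate))   # 1544000 -> the familiar 1.544 Mbps T1 rate
```

Note how the rate falls straight out of N, the bits per bucket, and T; the bucket structure, not the traffic, fixes the capacity.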

The important thing to note here is that buckets are pre-assigned to endpoints, and each train belongs to one circuit. This means that even if a train is going empty, another endpoint can't use it to carry its data. Also, the number of simultaneous connections is constrained by the number of trains, not by the amount of traffic going through. And then came ATM, the asynchronous version of STM.

ATM solves the above two problems, in addition to solving the problems of traditional networks. Rather than identifying a connection by the train it rides in, we keep the connection identifier in the packet itself. Thus any empty bucket can be used by any connection. This also solves the problem of the limited number of connections: now one can have as many connections as desired, as long as the total traffic doesn't exceed the link capacity. One of the most significant and interesting decisions in ATM is to keep the packet size small and fixed. A variable-size packet involves more queuing overhead and switching delay than a fixed-size packet, so ATM switching can be much faster than traditional switching. A small packet enables fast recovery in case of cell loss or error, since only a small amount of data is lost, and it also incurs less queuing delay for the waiting packets. This scores over traditional networks in another way too: the same network can carry delay-sensitive traffic like voice and video along with data. Other features include end-to-end error detection, in contrast to hop-by-hop error correction in existing networks; fast table lookup for switching, thanks to the small identification tags; and Quality of Service guarantees, unavailable in existing networks.

In ATM, all data is transferred in 53-byte cells, of which 5 bytes are header and 48 bytes are payload.

[Figure: layout of the ATM cell header - GFC, VPI, VCI, PTI, CLP (Cell Loss Priority), HEC.]

The header consists of 5 bytes. Following is a brief description of its fields:

  1. Generic Flow Control: GFC occupies the first four bits of the cell header; it is reserved and must be filled with zeros. Potential uses for GFC are flow control, or building a multiple-access, shared-medium ATM network on the local access facilities.
  2. Virtual Path Identifier: VPI is used for identifying the connection between two end points. Along with the Virtual Channel Identifier, it uniquely identifies a connection at a node within the network. See below for an explanation of the role of the VPI/VCI pair. The point to note is that a VPI value is significant only at a given interface; the same connection can have a different VPI value at a different interface on the path to the end system. The field is 8 bits, so there can be 256 different VPIs at an interface.
  3. Virtual Channel Identifier: VCI, like VPI, is used for identifying the connection. Also like VPI, it has only local significance, and a connection may have different VCIs while passing through different switches on the path. Its size is 16 bits, so 64K different VCIs are possible at an interface.
  4. Payload Type Identifier: PTI consists of 3 bits and is used for passing some control information between the hops, like whether it is a data cell or not, and whether the cell experienced congestion on the path. In the case of AAL5, it is also used to identify the end of a packet. Of the 3 bits, the first distinguishes data cells from operations-and-maintenance cells. The second is the "congestion experienced" bit. The third is currently passed transparently across the network, and its only defined use is by AAL5.
  5. Cell Loss Priority: CLP is perhaps the most important bit in the header. It distinguishes cells with high loss priority (dropped if need be, marked by setting the bit) from those which should be carried as far as possible (the bit unset). An end system can mark its own cells with CLP=1 if the carrier gives it a discount for such "expendable" data. Mainly, though, the network will use it to mark cells that would otherwise have been dropped but are now given a chance to survive: if another node needs to drop cells, it will choose a marked cell rather than an unmarked one.
  6. Header Error Check: HEC protects the contents of the header from bit errors. It contains a CRC-8, calculated over the header bytes, and can be used to detect multiple bit errors. In another mode, it can also be used to correct single-bit errors, which is useful on media where errors are isolated. HEC is also used by some media, like SONET, to synchronize the incoming data stream on cell boundaries. Typically, the device calculates and matches the CRC-8 over the incoming 5 bytes; if it matches, it assumes it has a cell. Once a cell is identified, the system goes into Pre-SYNC mode. If n successive cells match the check, it goes into SYNC mode, and can deliver the cells to the upper layers.
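The field layout above can be sketched in code. Below is a minimal Python illustration (the function names are my own, not from any standard library) of packing and parsing a UNI cell header, with the HEC computed as a CRC-8 over the first four bytes using the polynomial x^8 + x^2 + x + 1 and XORed with 0x55, per ITU-T I.432:

```python
def crc8(data: bytes, poly: int = 0x07) -> int:
    """Bitwise CRC-8 with polynomial x^8 + x^2 + x + 1 (0x07)."""
    crc = 0
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ poly) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc

def build_header(gfc: int, vpi: int, vci: int, pti: int, clp: int) -> bytes:
    """Pack a 5-byte UNI cell header: GFC(4) VPI(8) VCI(16) PTI(3) CLP(1) HEC(8)."""
    b = bytes([
        (gfc << 4) | (vpi >> 4),
        ((vpi & 0x0F) << 4) | (vci >> 12),
        (vci >> 4) & 0xFF,
        ((vci & 0x0F) << 4) | (pti << 1) | clp,
    ])
    return b + bytes([crc8(b) ^ 0x55])  # I.432 adds the coset 01010101

def parse_header(h: bytes):
    """Unpack the five fields and check the HEC."""
    gfc = h[0] >> 4
    vpi = ((h[0] & 0x0F) << 4) | (h[1] >> 4)
    vci = ((h[1] & 0x0F) << 12) | (h[2] << 4) | (h[3] >> 4)
    pti = (h[3] >> 1) & 0x07
    clp = h[3] & 0x01
    hec_ok = h[4] == (crc8(h[:4]) ^ 0x55)
    return gfc, vpi, vci, pti, clp, hec_ok

hdr = build_header(gfc=0, vpi=5, vci=42, pti=0, clp=0)
print(parse_header(hdr))   # (0, 5, 42, 0, 0, True)
```

Flip any bit in `hdr` and `hec_ok` comes back False, which is exactly the per-cell check a receiver runs while hunting for cell boundaries.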

VPI and VCI together identify a connection uniquely at an interface, and are used by the network to switch the cells to the correct end system. VPI and VCI are allocated to the end system by its immediate switch, and a switch in turn gets the VPI/VCI values from the next switch in the network. The switch that allocates these values makes an entry in its switching table corresponding to the connection. An entry in the switching table consists of the 6-tuple <inport, invpi, invci, outport, outvpi, outvci>. The entry is made at the time of signaling in the case of a switched virtual connection (SVC), or by the management entity for a permanent virtual connection (PVC). In the case of signaling, the switch uses its routing protocols to find the out port, chooses a free VPI/VCI pair to give to the previous node, gets the VPI and VCI from the next node, and creates the entry <inport, invpi, invci, outport, outvpi, outvci> for the switching table. Now, when data flows over this connection, the switch looks up this table and switches each cell to the specified outport, putting outvpi and outvci in as the new VPI/VCI values.
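The lookup just described can be modeled as a map from the incoming half of the 6-tuple to the outgoing half (purely my own illustration, not any vendor's data structure):

```python
# Switching table: (inport, invpi, invci) -> (outport, outvpi, outvci).
# Entries would be installed by signaling (SVC) or by management (PVC).
table = {
    (1, 0, 100): (3, 0, 200),
    (2, 5, 42):  (3, 7, 77),
}

def switch_cell(inport: int, vpi: int, vci: int):
    """Return (outport, new_vpi, new_vci) for an arriving cell, or None."""
    return table.get((inport, vpi, vci))

print(switch_cell(2, 5, 42))   # (3, 7, 77): leaves port 3, relabeled 7/77
print(switch_cell(1, 9, 9))    # None: no such connection, cell is dropped
```

Note that the VPI/VCI rewrite on the way out is what makes the identifiers purely local to each interface.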

The requirement for two identifiers, VPI and VCI, rather than a single 24-bit identifier, arises because we wish to ease the task of switching and of maintaining the switching table. Note that a 24-bit identifier could require more than 16 million entries in the switching table. To make this task easier, the concept of Virtual Path switching (VP-switching) was introduced. A virtual path is a bundle of connections, all of which have the same VPI but different VCIs. If all the VCIs need to travel to the same destination, the switch can save switching time, and table space, by switching on the VPI values alone. Effectively, the switch then handles only thick bundles and doesn't care what goes inside. Switching is fast because only 8 bits are needed to do the complete switching, and switching tables are small because even if the switch keeps all the VPIs in the table, it requires only 256 entries.
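A VP-switch is the same lookup keyed on the VPI alone, with the VCI carried through untouched; a sketch (again purely illustrative names):

```python
# VP-switching table: (inport, vpi) -> (outport, outvpi). The whole bundle
# of VCIs inside the path is switched together; VCIs pass through unchanged.
vp_table = {(1, 5): (2, 9)}

def vp_switch(inport: int, vpi: int, vci: int):
    """Switch a whole virtual path; the VCI is never examined or rewritten."""
    entry = vp_table.get((inport, vpi))
    if entry is None:
        return None
    outport, outvpi = entry
    return outport, outvpi, vci

print(vp_switch(1, 5, 1000))  # (2, 9, 1000)
print(vp_switch(1, 5, 2000))  # (2, 9, 2000) -- same bundle, different channel
```

One table entry here carries every channel in the bundle, which is exactly the time-and-space saving the text describes.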

ATM claims to cater to all kinds of varying traffic requirements of the user. A number of traffic types are defined by the ATM Forum:

  1. Constant Bit Rate, CBR: The user gets a guarantee of the peak cell rate at which it can send its data, and the network allocates resources such that the user gets that rate at all times during the life of the connection. It is suited to delay-sensitive applications, like voice and uncompressed video.
  2. Variable Bit Rate, VBR: The traffic is of variable rate. The user gets a guarantee of the peak cell rate (PCR) it will get, and of the maximum duration for which it can send data at the peak cell rate, termed the burst size. It also gets a guarantee on the cell rate at which it will normally send data, the sustainable cell rate (SCR). User applications include compressed audio and video. There are two flavors of this, rt-VBR and nrt-VBR, where rt stands for real-time and nrt for non-real-time.
  3. Unspecified Bit Rate, UBR: The best-effort service. The user gets no guarantee of any sort, and the network is allowed to be unfair to an end system and deny it bandwidth even while giving the same to another similar user (see ABR). As such, it is a junk service, but it is defined so that ATM can be deployed early and early applications can run over it transparently. Classical IP over ATM (RFC 1577) and LANE 1.0 (ATM Forum LAN Emulation) use UBR.
  4. Available Bit Rate, ABR: Also a best-effort service, but with some guarantees. The user is guaranteed a minimum cell rate (MCR), and the network is also guaranteed to be fair, in the sense that all the available resources are shared equally among the ABR service users. This service is the most complex to implement, as it has the greatest potential to cause congestion in the network, and hence extensive flow control and feedback mechanisms are required. ABR is expected to be the most widely used service category, though it is not yet widely deployed.

Depending on its requirements from the network and the traffic type negotiated, the end system has to adapt its data in different ways. Therefore a number of adaptation layers (ATM Adaptation Layer, AAL) have been defined:

  1. AAL1: Defined to be used for CBR traffic, it contains synchronization and clock recovery mechanisms.
  2. AAL2: Defined to be used for VBR traffic. It has been finalized very recently by ITU-T, and accepted by ATM Forum.
  3. AAL3/4: Defined for connection-oriented and connectionless data traffic, it has the ability to multiplex several users' data over a single channel, i.e. the same VPI and VCI are used to carry the data of multiple users.
  4. AAL5: Defined as a lightweight adaptation layer, so that it can be used in place of all the other AALs above, which add a lot of overhead without much justification; the trend now is to carry all traffic over AAL5. Signaling protocols also run over AAL5 (with reliable, connection-oriented delivery provided by a separate protocol, SSCOP, on top of it). Since all entities in an ATM network will be signaling-capable (if not now, then in the future), AAL5 will be on every system. This gives a strong case for using AAL5 for all traffic types.
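To illustrate the adaptation idea, here is a rough sketch of AAL5 segmentation: the PDU is padded so that payload plus an 8-byte trailer (CPCS-UU, CPI, 16-bit length, CRC-32) fills a whole number of 48-byte cell payloads, and the last cell is flagged via the PTI bit. Note `zlib.crc32` is only a stand-in here; the exact AAL5 CRC-32 bit conventions are in ITU-T I.363.5, so this is an illustration, not an interoperable encoder.

```python
import struct
import zlib

CELL_PAYLOAD = 48
TRAILER_LEN = 8  # CPCS-UU(1) + CPI(1) + Length(2) + CRC-32(4)

def aal5_segment(pdu: bytes):
    """Split a PDU into 48-byte cell payloads with an AAL5-style trailer."""
    pad_len = (-(len(pdu) + TRAILER_LEN)) % CELL_PAYLOAD
    padded = pdu + b"\x00" * pad_len
    crc = zlib.crc32(padded + struct.pack("!BBH", 0, 0, len(pdu)))  # stand-in
    frame = padded + struct.pack("!BBHI", 0, 0, len(pdu), crc & 0xFFFFFFFF)
    cells = [frame[i:i + CELL_PAYLOAD] for i in range(0, len(frame), CELL_PAYLOAD)]
    # A real sender would set the "end of packet" PTI bit only on cells[-1];
    # the receiver reassembles until it sees that bit, then checks the CRC.
    return cells

cells = aal5_segment(b"x" * 100)
print(len(cells))                        # 3 cells: 100 data + 36 pad + 8 trailer
print(all(len(c) == 48 for c in cells))  # True
```

Because the end-of-packet marker rides in the PTI bit of the cell header, AAL5 needs no per-cell header of its own, which is where its "lightweight" reputation comes from.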

ATM is being touted as a network that can give Quality of Service guarantees. This guarantee is given at connection setup, at signaling time. The QoS parameters are negotiated by sending Information Elements (IEs) containing the parameters relevant to the traffic class being negotiated. While signaling procedures are going on, any node can release the connection if the parameters don't suit it. A successful connection setup will have the parameters negotiated for the specified QoS. The network keeps policing the connection based on this QoS, and if the source misbehaves by violating the contract, the network may drop its cells or mark the CLP bit (see CLP above). For details on signaling procedures, see here.
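The policing just mentioned is typically specified as the Generic Cell Rate Algorithm (GCRA) in the ATM Forum Traffic Management spec; below is a sketch of its virtual-scheduling form, with the spec's parameters (increment I, the expected inter-cell time, and limit L, the tolerance):

```python
class GCRA:
    """Virtual-scheduling GCRA: a cell arriving at time t conforms if it is
    no earlier than TAT - L; each conforming cell pushes TAT forward by I."""
    def __init__(self, increment: float, limit: float):
        self.I = increment    # expected inter-cell time (1 / contracted rate)
        self.L = limit        # tolerance for cell delay variation
        self.tat = 0.0        # theoretical arrival time of the next cell

    def conforms(self, t: float) -> bool:
        if t < self.tat - self.L:
            return False      # too early: non-conforming (drop, or tag CLP=1)
        self.tat = max(t, self.tat) + self.I
        return True

# Contract: one cell per 10 time units, with a tolerance of 2.
g = GCRA(increment=10, limit=2)
print([g.conforms(t) for t in [0, 10, 15, 21]])  # [True, True, False, True]
```

The cell at t=15 arrives too far ahead of its theoretical arrival time of 20 and gets policed; the source's later cells are not punished for it, which is the point of keeping TAT rather than a simple counter.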

This concludes a very crude and primitive treatment of ATM. For more details, please refer to the links given below.

Back to top

Some Cool Links Related to ATM

Here are some links to information regarding ATM and related aspects:

ATM Forum Home page

Cell Relay(excellent site)

Anthony Alles' paper(best I have seen so far)

Raj Jain's page(very comprehensive collection)

My bookmarks(contains links on a variety of topics, including ATM!)

Misc. Stuff(web sites I am going through recently)

Back to top

Last Revised: 19th Dec 1999.

Contact me for any queries, suggestions, criticisms.