
Multiplexing

Computer Communications Computer Engineering

© Dr. Dirk H Pesch, Electronics Dept., CIT, 2002/2003


Introduction

· The OSI model makes provisions for multiplexing in every layer of the layered network architecture
· OSI terminology: many service users in layer N+1 use the same service of layer N (except where layer N+1 is the physical layer)
· Traditionally, multiplexing resides in the physical layer
· Types of Multiplexing

­ Frequency Division Multiplexing
­ Time Division Multiplexing
­ Code Division Multiplexing
­ Space Division Multiplexing
­ Polarisation Division Multiplexing

Economies of scale play an important role in telecommunications. It costs essentially the same amount of money to install and maintain a high-bandwidth cable (trunk) as a low-bandwidth cable between two nodes (switches) in a network. Consequently, telecommunication companies have developed elaborate schemes to allow many connections to share a single physical cable or trunk. Multiplexing is the process of aggregating multiple low-speed channels into a single high-speed channel. Traditionally, multiplexing resides in the physical layer. However, multiplexing is also used in the data link layer and the transport layer. Multiplexing at higher layers differs from multiplexing at the physical layer and is often called logical multiplexing. Examples of logical multiplexing will be presented later in the course.


[Figure: six calls entering a multiplexer (MUX), carried over one channel, and separated again at the far end by a demultiplexer (DEMUX).]

Multiplexing schemes can be divided into three basic categories, Frequency Division Multiplexing (FDM), Time Division Multiplexing (TDM), and Code Division Multiplexing (CDM), depending on how users share the available transmission medium. In FDM the available frequency spectrum is divided among the logical channels, with each user having exclusive possession of some frequency band. In TDM the users take turns (in a round-robin fashion), each one periodically getting the entire bandwidth for a little burst of time. TDM requires digital transmission.

TDM can be further divided into synchronous and asynchronous TDM. Synchronous TDM is used in most digital telecommunication systems, in particular ISDN. Asynchronous TDM, also called statistical time division multiplexing, makes better use of the available bandwidth than synchronous TDM in those cases where a user's data rate requirements vary over time.

CDM is also a digital multiplexing technique. Here, the entire bandwidth is used by all users at the same time. Users are separated by orthogonal or quasi-orthogonal signals. CDM is used in wireless and fibre-optic networks.
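As a concrete illustration of separating users by orthogonal signals, the following sketch (not from the original notes) spreads two users' data bits over 4-chip Walsh codes, sums the simultaneous transmissions on the shared channel, and recovers each bit by correlation. The user names and chip length are illustrative assumptions.

```python
# 4-chip Walsh codes: mutually orthogonal (dot product = 0),
# so simultaneous transmissions can be separated at the receiver.
codes = {
    "A": [1, 1, 1, 1],
    "B": [1, -1, 1, -1],
}

bits = {"A": 1, "B": -1}  # one data bit (+1/-1) per user

# Each user spreads its bit over its code; the channel simply sums
# the simultaneous transmissions chip by chip.
channel = [sum(bits[u] * codes[u][i] for u in codes) for i in range(4)]

def despread(signal, code):
    """Correlate with a user's code to recover that user's bit."""
    corr = sum(s * c for s, c in zip(signal, code))
    return 1 if corr > 0 else -1

print(despread(channel, codes["A"]))  # 1
print(despread(channel, codes["B"]))  # -1
```

Real CDM systems use much longer codes and must also handle timing offsets and noise; the orthogonality shown here is the core idea.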


Frequency Division Multiplexing

· Frequency Division Multiplexing (FDM) is an analog technique
· Transmission bandwidth is divided in frequency
· FDM uses analog modulation and filtering to multiplex narrow-band signals into a broadband channel

Frequency division multiplexing is an analog technique that can be applied when the bandwidth of a link is greater than the combined bandwidths of the signals to be transmitted. In FDM, signals generated by each sending device modulate different carrier frequencies. These modulated signals are then combined into a single composite signal that can be transported by the physical link. Carrier frequencies are separated by enough bandwidth to accommodate the modulated signals. Individual channels must be separated by strips of unused bandwidth (guard bands) to prevent signals from overlapping in the frequency domain.

Example: Cable Television
A familiar application of FDM is cable television. The coaxial cable used in a cable television system has a bandwidth of approx. 500MHz. An individual television channel requires approx. 6MHz of bandwidth. The coaxial cable, therefore, can carry many multiplexed channels (theoretically 83, but actually fewer because of guard bands between adjacent channels). A demultiplexer at the television set allows the viewer to select which of those channels to watch.

A modification of FDM, called Wavelength Division Multiplexing (WDM), is used over fibre-optic cables. It operates essentially the same way as FDM, but modulates carriers in the optical part of the spectrum.
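The cable-television arithmetic above can be checked with a small sketch. The 500MHz and 6MHz figures come from the text; the 0.25MHz guard band in the second call is an illustrative assumption.

```python
def fdm_channels(link_bw_hz, channel_bw_hz, guard_bw_hz):
    """Number of FDM channels that fit in a link, with one guard band
    between adjacent channels (n channels need n-1 guard bands)."""
    n = 0
    while (n + 1) * channel_bw_hz + n * guard_bw_hz <= link_bw_hz:
        n += 1
    return n

# Cable-TV figures from the text: 500 MHz coax, 6 MHz per channel.
print(fdm_channels(500e6, 6e6, 0))       # 83 - the theoretical maximum
print(fdm_channels(500e6, 6e6, 0.25e6))  # 80 - fewer once guard bands are added
```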


Frequency-domain representation of the FDM multiplexing process

[Figure: three baseband signals are each modulated (Mod) onto a different carrier frequency and combined by the multiplexer into one composite spectrum.]

The figure above depicts the FDM process in the frequency domain. Signals coming from individual telephones are modulated onto separate carrier frequencies using either AM or FM modulation. The modulation process results in a signal of at least twice the original bandwidth. In order to reduce the bandwidth of an individual modulated signal, the lower sideband is usually suppressed. In the figure above, the bandwidth of the composite signal is more than three times the bandwidth of each input signal: three times the bandwidth to accommodate the three channels, plus extra bandwidth for the guard bands.


Frequency-domain representation of the FDM demultiplexing process

[Figure: the composite signal passes through three bandpass (BP) filters; each filtered signal is then demodulated (Demod) to recover the original baseband signal.]

The FDM demultiplexer uses a series of bandpass filters to decompose the multiplexed signal into its constituent component signals. The individual signals are then passed to demodulators that separate them from their carriers and pass them to the receivers. FDM is widely used in analog voice telephony and for data transmission over analog voice circuits. In the following, an example of the frequency division multiplexing hierarchy of ordinary analog telephony networks is presented.


FDM Hierarchy

[Figure: each telephony channel occupies 300Hz - 3400Hz. Channels are multiplexed in four pre-groups of three channels each; within a pre-group, the three channels modulate carriers of 12kHz, 16kHz and 20kHz. The four pre-groups in turn modulate pre-group carriers (84kHz, 96kHz, 108kHz and 120kHz in the figure) to form a primary group of 12 channels occupying 60kHz - 108kHz.]

Structure of primary groups in FDM carrier system

Voice telephony requires a narrow frequency spectrum of between 300Hz and 3400Hz. In order to utilise the available channel capacity, FDM is employed to transmit many such narrow frequency bands at the same time. This is achieved by the frequency division multiplexing principles described above. Usually, amplitude modulation with upper sideband suppression is employed as the modulation scheme. The carrier frequencies are chosen such that the individual telephony bands line up in the frequency spectrum.

Initially, three telephony channels modulate carrier frequencies of 12kHz, 16kHz, and 20kHz. This results in a pre-group. Four such pre-groups each modulate a pre-group carrier, and sidebands are suppressed to form a primary group of 48kHz, between 60kHz and 108kHz in the frequency domain. Each primary group contains 12 telephony channels.

Alternatively, a slightly different system is employed in the US. Twelve telephony channels can be combined directly into a group (equivalent to a primary group) of 48kHz bandwidth. Individual carriers are taken from the 112kHz to 156kHz range. In order to avoid overlapping in the frequency domain, individual channels occupy 4kHz bandwidth rather than the required 3.1kHz: 3000Hz are used for voice transmission, with a 500Hz guard band at either side of the channel.
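As a sketch of how the pre-group carriers line up, assuming amplitude modulation with the upper sideband suppressed (as stated above), each voice channel lands in the lower sideband of its carrier:

```python
# Assumption: AM with the upper sideband suppressed, so a voice channel
# (300 - 3400 Hz) modulated onto carrier fc occupies fc-3400 .. fc-300
# (its lower sideband).
VOICE_LOW, VOICE_HIGH = 300, 3400

def lower_sideband(fc_hz):
    """Frequency band occupied by one channel's lower sideband."""
    return (fc_hz - VOICE_HIGH, fc_hz - VOICE_LOW)

for fc in (12_000, 16_000, 20_000):  # pre-group carriers from the text
    lo, hi = lower_sideband(fc)
    print(f"carrier {fc/1000:g} kHz -> {lo/1000:g}-{hi/1000:g} kHz")
```

The output shows the three channels falling at a 4kHz spacing (8.6 - 11.7, 12.6 - 15.7, 16.6 - 19.7 kHz) without overlap, which is why the carriers are chosen 4kHz apart.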


FDM Hierarchy

[Figure: formation of a secondary group. Five primary groups (channels 1 - 12 each) modulate primary-group carriers of 420kHz, 468kHz, 516kHz, 564kHz and 612kHz; with sidebands suppressed, the result is a secondary group of 60 telephony channels.]

The multiplexing hierarchy is extended into secondary, tertiary and quaternary groups as depicted above. Five primary groups modulate carrier frequencies between 420kHz and 612kHz and are frequency multiplexed into a secondary group with 60 telephony channels (see figure above). As outlined before, the upper sidebands of the modulated carriers are suppressed. The multiplexing scheme is continued into tertiary and quaternary groups. Five secondary groups constitute a tertiary group with 300 telephony channels (812kHz - 2044kHz) and three tertiary groups constitute a quaternary group with 900 telephony channels (8516kHz - 12388kHz). Individual channels are demultiplexed at the receiver by filtering and demodulation. Standards exist that specify multiplexing of groups for up to 230,000 voice telephony channels.

The US system combines five groups into a supergroup (equivalent to a secondary group), 10 supergroups into a mastergroup, and six mastergroups into a jumbo group with 3600 voice channels and a bandwidth of 16.984MHz (incl. guard bands).

In order to transmit data over analog telephony networks, a group can be used with 48, 56, 64, and 72kbit/s modems according to ITU-T recommendations V.35 and V.36. ITU-T recommendation X.40 specifies how the frequency band of a primary group (60 - 108kHz) is to be divided into individual FDM data channels. Parameters of multiplexing schemes for international interfaces between synchronous data networks are specified in ITU-T recommendations X.50/X.50bis and X.51/X.51bis.


Time Division Multiplexing

· Time division multiplexing (TDM) is a digital technique
· The available bandwidth is shared on a time-slot basis in a round-robin fashion
· TDM can be implemented in two ways

­ Synchronous TDM
­ Asynchronous TDM

Time division multiplexing (TDM) is a digital process that can be applied when the data rate capacity of a transmission medium is much greater than the data rate required by the individual sending and receiving devices. In such a case, multiple transmissions can occupy a single link by subdividing them and interleaving the portions. This subdivision can vary from one bit for bit-interleaved multiplexers, through a few bits for character-interleaved multiplexers, to a few thousand bits in the latest types of high bit-rate multiplexers, the Synchronous Time Division Multiplexers (STDM) designed for the synchronous transfer mode. TDM has become a cost-effective method that is not only used on trunk circuits between digital switches but is today even starting to be used on local circuits to end customers. The basic rate interface of ISDN is one such example. All of the TDMs mentioned above are fixed-slot time division multiplexers, in that they assign a fixed slot to each channel in a cyclic scan.


Synchronous TDM

[Figure: synchronous TDM. Five input lines feed a multiplexer (MUX); the output is a sequence of frames (frame 1, frame 2, ..., frame n), each containing one slot per input. Number of inputs: 5; number of slots per frame: 5.]

In TDM the term synchronous has a different meaning from that used in other areas of telecommunications. Here synchronous means that the multiplexer allocates exactly the same time slot to each device at all times, whether or not a device has anything to transmit. Each time a device's allocated time slot comes up, the device has the opportunity to send a portion of its queued data. If a device is unable to transmit or does not have data to send, its time slot remains empty.

Time slots are grouped into frames. A frame consists of one complete cycle (in round-robin fashion) of time slots, including one or more slots dedicated to each sending device, plus framing bits, which are used for frame synchronisation and alignment. In a system with n input lines, a frame has at least n slots, with each slot allocated to carry data from a specific input line. If all the input devices sharing a link are transmitting at the same data rate, each device has one slot per frame. However, it is possible to accommodate varying data rates by allocating more than one slot to a specific device. The time slots dedicated to a given device occupy the same location in each frame and constitute that device's channel. In the figure above, five input lines are multiplexed onto a single path using synchronous TDM; in this example all input lines have the same data rate.

However, if the data generated by users is not a continuous, constant-rate stream, but rather intermittent and bursty in nature, a fixed slot assignment per station can be wasteful of transmission resources. Therefore, the concept of statistical multiplexing was introduced in order to improve utilisation of transmission bandwidth in the case of bursty user traffic. The principle of statistical time-division multiplexing will be outlined below.
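The fixed round-robin slot assignment described above can be sketched as follows; the function name and `None` marker for an empty slot are illustrative choices, not part of any standard:

```python
# A minimal sketch of synchronous TDM: each input line owns a fixed
# slot in every frame, whether or not it has data to send.
def tdm_frames(inputs, n_frames):
    """inputs: one queue (list of data units) per input line."""
    queues = [list(q) for q in inputs]
    frames = []
    for _ in range(n_frames):
        frame = []
        for q in queues:  # round-robin scan of the lines
            # None marks an empty slot: the line had nothing to send,
            # but its slot is transmitted anyway.
            frame.append(q.pop(0) if q else None)
        frames.append(frame)
    return frames

lines = [["A1", "A2"], ["B1"], ["C1", "C2"]]
for f in tdm_frames(lines, 2):
    print(f)
# ['A1', 'B1', 'C1']
# ['A2', None, 'C2']
```

The wasted `None` slot in the second frame is exactly the inefficiency that statistical (asynchronous) TDM removes.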


Types of Synchronous Multiplexers

· Access Multiplexers
· Network Multiplexers
· Aggregator Multiplexers
· Add and Drop Multiplexers

Access Multiplexer
Access, or channel bank, multiplexers provide the first level of user access to the multiplexer network. These devices typically reside on the user or customer premises, or at the first concentration point in public networks. Access multiplexers are characterised by:
· First level of access to the multiplexer network
· Devices reside on customer premises
· Provide one or more T1/E1 lines/trunks
· Can handle a variety of interfaces: X.25, frame relay, etc.
· Interface speeds include DS0, T1, DS1, E1
· Variants: Fractional Multiplexer, SubRate Data Multiplexer

Network Multiplexer
Network multiplexers accept the output rates of access multiplexers, typically supporting T1/E1 lines on the access side and higher-rate lines such as T3/E3 and above on the network side. Their trunk capacity is also much larger than that of access multiplexers. Network multiplexers provide the additional functionality of network management systems and have local and remote provisioning and configuration capability. They also provide some routing functionality in software through routing tables at each node.

Aggregate Multiplexer
Aggregate multiplexers combine multiple T1/E1 channels into higher-bandwidth pipes for transmission. These multiplexers are also sometimes called hubs (not to be confused with LAN hubs). Aggregate multiplexers are used extensively in PDH networks.

Add and Drop Multiplexer
Add and drop multiplexers are special-purpose multiplexers designed to add and drop low-speed channels into and out of a high-speed multiplexed channel. This type of multiplexer is used extensively in PDH and SDH multiplexing networks to insert and extract channels from aggregate channels.


Selection criteria for multiplexer

· Access speed
· Protocols and interfaces supported
· Voice quantisation schemes supported (e.g. PCM 64kbps, ADPCM 32kbps)
· Transmission media supported (e.g. copper, fibre)
· Physical interface standards supported (RS-232, V.35, ISDN, G.703)
· Types of framing (D4, ESF, B8ZS)
· Topologies supported
· Timing control
· Degree of dynamic bandwidth allocation

Since many options are available in multiplexers, each requirement must be analysed to determine which type is the best fit for current and future applications. Some of the major decision criteria for all types of multiplexers are listed above. Two major factors exert pressure on vendors to modify their traditional support of low-speed asynchronous and synchronous traffic. The importance of public network interoperability in network standards and signalling influences multiplexer selection and design. Another important factor influencing multiplexer survival is the carrier pricing of switched services such as frame relay and SMDS.


Plesiochronous Digital Network

· First digital multiplexing network
· Basis of the first digital PSTN
· ITU-T recommendations G.702/G.703
· Based on 64kbps PCM-encoded speech
· Transmission lines with 24 or 32 time slots
· 32-slot transmission system based on ITU-T G.703
· 24-slot transmission system provides DS1 services on T1 lines in North America and Japan

The plesiochronous network has served telecommunications since the advent of digital switching in the 1970s. However, as demand for higher and higher bit rates grows, the limitations of the system become apparent. PDH is built around PCM-encoded digital voice at a 64kbps data rate. PCM samples the analogue voice signal every 125µs (8000 times/sec) and generates an 8 bit representation of the voice signal's amplitude. It is essential that the voice samples from a particular telephone arrive at precisely 125µs intervals. The 64kbps data rate is fixed for the standard PSTN.

There are two transmission systems used worldwide, one with a 24 channel format and the other based on 32 channels. The former is used in North America and Japan and the latter in the rest of the world. The larger format operates with a frame of 32 time slots, each one equivalent to a speech channel, although only 30 of the slots actually carry speech. The other two are used for synchronisation and signalling. All 32 time slots fit into a 125µs frame. A time slot is therefore approximately 3.9µs long. Sixteen frames are combined to form a multiframe, which takes 2ms to transmit.

The network uses the signalling information to ensure that the speech channels are sent to the correct destination and that release, cleardown and metering are carried out at the appropriate times. Two modes of signalling are used in digital systems: channel associated and common channel signalling. Common channel signalling relates to signalling between exchanges and will be described later. Channel associated signalling is used on the PCM transmission path when traffic is arriving from different sources, such as analogue connections, or a multiplexer.
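The PCM arithmetic in this paragraph can be verified directly:

```python
# PCM/E1 arithmetic from the text.
sample_rate = 8000           # samples per second (one every 125 us)
bits_per_sample = 8
voice_rate = sample_rate * bits_per_sample
print(voice_rate)            # 64000 bit/s per voice channel

slots_per_frame = 32         # 30 speech + 2 for sync and signalling
e1_rate = voice_rate * slots_per_frame
print(e1_rate)               # 2048000 bit/s gross E1 rate

frame_us = 1e6 / sample_rate
print(frame_us)              # 125.0 us per frame
print(round(frame_us / slots_per_frame, 2))  # 3.91 us per time slot
```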


European PCM Multiplex Format - E1 Line

[Figure: European PCM multiplex format (E1 line). One multiframe, repeated every 2ms, consists of 16 frames numbered 0 - 15. One frame (125µs) consists of 32 time slots of 8 bits each, carrying 30 speech channels; the gross data rate is 2.048Mbit/s. Time slot 0 carries the frame alignment word in alternate frames and a service word in the others. Time slot 16 of frame 0 carries the multiframe alignment word in its first 4 digits; in frames 1 - 15 it carries signalling information, digits 1 - 4 for channels 1 - 15 and digits 5 - 8 for channels 16 - 30.]

The figure above depicts the 32-slot PCM multiplex structure. In the alignment and service words: x denotes digits not allocated to any particular function and set to one; y is reserved for international use (normally set to one); the alarm digit is normally zero but changed to one when loss of frame alignment and/or a system-fail alarm occurs (timeslot 0 only) or when loss of multiframe alignment occurs (timeslot 16 only).

In the case of channel associated signalling, signals are associated with each channel by allocating, in each multiframe, one signalling half-word in time slot 16 to each channel; thus the 8 bit signalling slot contains 4 bits of signalling information for each of two channels in every frame. The figure above shows that in all but the first frame (frame 0), timeslot 16 is used to carry signalling. Hence in frame 1, timeslot 16 carries the signals related to speech channels 1 and 16, in frame 2 it carries those for channels 2 and 17, and so on until frame 15, when it carries the signals for channels 15 and 30. The next frame is the first of the following multiframe and the sequence is repeated. Timeslot 0 in each frame, and timeslot 16 in frame 0, carry synchronisation and alignment words to ensure that transmission and reception are synchronous.

When common channel signalling is used, the above arrangement does not apply. Instead, the capacity of timeslot 16 is made available to the signalling packets as required, and it is likely that some of the signalling information is not related to the traffic being carried on the voice channels.
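The timeslot-16 mapping described above can be expressed as a tiny helper (an illustrative sketch, not part of the standard):

```python
# Channel-associated signalling in the E1 multiframe: in frame n
# (n = 1..15), timeslot 16 carries the 4-bit signalling half-words
# for speech channels n and n+15; frame 0 carries alignment instead.
def ts16_channels(frame):
    """Which speech channels' signalling rides in timeslot 16 of a frame."""
    if frame == 0:
        return "multiframe alignment word"
    return (frame, frame + 15)

print(ts16_channels(1))   # (1, 16)
print(ts16_channels(2))   # (2, 17)
print(ts16_channels(15))  # (15, 30)
```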


E1 Line Application


PDH Hierarchy

· PDH hierarchy based on 2.048Mbps (E1) or 1.544Mbps (T1) PCM transmission lines

Multiplexing stage    24 channels                     32 channels
                      Data rate      x basic rate     Data rate      x basic rate
Primary rate          1.544 Mbps     1                2.048 Mbps     1
Secondary rate        6.312 Mbps     4                8.448 Mbps     4
Tertiary rate         44.736 Mbps    28               34.368 Mbps    16
Quaternary rate       139.264 Mbps   84               139.264 Mbps   64

In order to provide the required transmission capacity, and to exploit the bandwidth available on cable or fibre, channels are multiplexed. The basic multiplexed element is the 2Mbps multiframe described above. Multiframes can be multiplexed into superframes, and superframes into hyperframes, and this sequence can continue until there is sufficient capacity, or the bandwidth of the transmission medium is exceeded. The table above shows the plesiochronous multiplexing hierarchy used in the digital PSTN, based on 24 and 32 channel frames.

However, this multiplexing structure has limitations. The nature of the PCM multiframe structure is that the timeslots and frames have to be maintained in synchronism by constant reference between exchange and network clocks. Signal propagation conditions can cause a drift in synchronisation between clocks in the network, which has to be corrected by inserting bits in order to adjust the timing of a bit stream so that it regains network synchronisation. Although this process of inserting bits avoids slippage at the transmitter end, it causes other serious problems with regard to demultiplexing. For many applications it is desirable to be able to extract one of the component traffic streams without having to demultiplex all of the streams that make up the high-speed channel. In a plesiochronous system that is not possible, because the individual streams cannot be identified easily due to the effect of bit insertion to maintain synchronisation. In order to isolate one 2Mbps stream it is necessary to completely demultiplex the hierarchy and then multiplex up again.

For POTS (plain old telephone service) this structure did not cause much of a problem, but in modern networks, where different traffic types such as voice, video, and several data services are linked onto the network, there is a frequent need to be able to isolate a particular channel, or to insert a new one along the transmission path. The expense of having multiplexing equipment at each access point is prohibitive with regard to complexity and cost. SONET/SDH offers a cheaper, more convenient and flexible approach.
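A short sketch makes the bit-insertion point concrete using the 32-channel (E-hierarchy) rates from the table above: each aggregate rate is slightly higher than four times the tributary rate, and the surplus is what carries frame alignment and the justification ("stuffing") bits that absorb clock drift between tributaries.

```python
# E-hierarchy rates from the table above, in Mbit/s.
e_hierarchy = [2.048, 8.448, 34.368, 139.264]
factor = 4  # four tributaries are combined at each stage

for lower, upper in zip(e_hierarchy, e_hierarchy[1:]):
    # The aggregate exceeds factor x tributary: the difference is
    # alignment and justification overhead, not user data.
    overhead = upper - factor * lower
    print(f"{factor} x {lower} = {factor * lower:.3f} -> {upper} "
          f"(+{overhead:.3f} Mbit/s overhead)")
```

For example, 4 x 2.048 = 8.192 Mbit/s, yet the secondary rate is 8.448 Mbit/s; the extra 0.256 Mbit/s is exactly the overhead that makes a tributary impossible to extract without full demultiplexing.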


North American PCM Multiplexing Format - T1 System

[Figure: North American PCM multiplex format (T1 system). A multiframe consists of 12 frames. Each frame carries one framing bit (F) followed by 24 time slots of 8 bits each. The F bits of successive frames form the alignment pattern 100011011100. In ordinary frames, bits 1 - 8 of a time slot carry speech coding; in the signalling frames, bits 1 - 7 carry speech coding and bit 8 (SIG) carries signalling.]


SONET/SDH

· Synchronous Optical NETwork (SONET) is the North American version of the ITU Synchronous Digital Hierarchy (SDH)
· Transport interface and method of transmission
· Devised as a high-speed, low error-rate, fibre-optic, multiplexed transmission system for interfacing between operators, IXCs and LECs
· SONET/SDH are used as transmission systems for SMDS and ATM

Synchronous Optical NETwork (SONET) is a Bellcore standard for the North American version of the ITU-T Synchronous Digital Hierarchy (SDH). SONET was conceived as a method of providing a high-speed, low error-rate, international, fibre-optic, multiplexed standard for interfacing between telecom operators, IXCs and LECs. SONET is a transport interface and method of transmission only; it is NOT a network in itself, but rather network infrastructures are built using SONET technology. SONET and SDH are eliminating the different transmission schemes and rates between North America, Japan and Europe through a common rate structure.

SONET uses a transfer mode that defines the switching and multiplexing aspects of a digital transmission protocol. The switching technology comprises synchronous and asynchronous transfer modes: STM defines circuit switching, whereas ATM defines cell relay. SONET supports both modes through the use of a fixed data transfer frame format including user data, management, maintenance, and overhead.

SONET has a structure that is based on Optical Carriers (OC-N), which map existing electrical hierarchies, such as DSn, into an optical hierarchy. These OC-N levels are then multiplexed to form higher-speed transport circuits that range into the gigabit range and provide an alternative to aggregating multiple DSn transmission facilities. SONET solves many of the network management problems of previous digital transmission systems through a layered architecture similar to the OSI RM, with specifications for management and maintenance functions. The basic or primary structure of SONET is built around Synchronous Transport Signal level 1 (STS-1) transport through an OC-N signal over fibre-optic cables. The aggregate 51.84Mbps STS-1 bit stream, when converted from electrical to optical form, is called Optical Carrier-1 (OC-1), and is composed of a transmission of 810 byte frames sent every 125µs.
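The 51.84Mbps STS-1 rate quoted above follows directly from the frame parameters:

```python
# STS-1/OC-1 line rate from the frame parameters in the text:
# an 810-octet frame transmitted every 125 us (8000 frames/s).
frame_octets = 9 * 90        # 810 octets per STS-1 frame
frames_per_sec = 8000
rate = frame_octets * 8 * frames_per_sec
print(rate)                  # 51840000 bit/s = 51.84 Mbit/s
```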


SONET System

[Figure: a SONET link. An STS MUX and an STS DEMUX terminate the link, with regenerators and an add/drop multiplexer between them. Each optical hop between neighbouring devices is a section; the span between two multiplexers is a line; the end-to-end connection between the STS MUX and the STS DEMUX is the path.]

SONET transmission relies on three basic devices: STS multiplexers, regenerators, and add/drop multiplexers. STS multiplexers mark the beginning and end points of a SONET link. They provide the interface between a tributary network and SONET. Regenerators extend the length of the links possible between sender and receiver. Add/drop multiplexers allow insertion and extraction of SONET paths.

STS Multiplexer: An STS multiplexer has a double function: it converts electronic signals to optical and at the same time multiplexes the incoming signals to create a single STS-N signal.

Regenerator: An STS regenerator is a repeater that takes a received optical signal and regenerates, i.e. amplifies, it. Regenerators in a SONET system differ from those used in other physical layers: a SONET regenerator replaces some of the existing overhead (header) information with new information, which is a layer 2 functionality.

Add/drop multiplexer: A SONET add/drop multiplexer works in much the same way as described earlier when add/drop multiplexers were introduced.

In the figure above, the various levels of SONET connections are called section, line, and path. A section is the optical link connecting two neighbouring devices. A line is the portion of the SONET between two multiplexers. A path is the end-to-end connection of the network between two STS multiplexers. In a simple SONET of two STS multiplexers linked directly to each other, section, line and path are the same.


SONET Hierarchy

STS Level   Optical Carrier (OC)   Data Rate (Mbps)
STS-1       OC-1                   51.84
STS-3       OC-3                   155.52
STS-9       OC-9                   466.56
STS-12      OC-12                  622.08
STS-18      OC-18                  933.12
STS-24      OC-24                  1,244.16
STS-36      OC-36                  1,866.24
STS-48      OC-48                  2,488.32

The table above shows the SONET speed hierarchy by STS-level. SONET provides direct multiplexing of both SONET speeds and current asynchronous and synchronous services into the STS-N payload. Payload types range from DS1 and DS3 to OC-3c and OC-12c ATM and SDH/PDH payloads. For example, STS-1 supports direct multiplexing of DS1, DS2, and DS3 channels into single or multiple STS-1 envelopes, which are called tributaries. Another advantage of SONET is that each individual signal down to the DS1 level can be accessed without the need to demultiplex and remultiplex the entire OC-N level signal. This is commonly accomplished through a SONET Digital Cross-Connect (DXC) switch or multiplexer. It is important to note that SONET multiplexing requires an extremely stable clocking source and the frequency of every clock in the network must be the same or synchronous with one another.
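Unlike the PDH hierarchy, every OC-N rate in the table is an exact multiple of the 51.84Mbit/s STS-1 rate, which is what makes direct multiplexing and drop-and-insert access possible. This sketch confirms the rates:

```python
# Every OC-N rate is exactly N x the 51.84 Mbit/s STS-1 rate -
# SONET multiplexing adds no per-stage stuffing surplus.
STS1_MBPS = 51.84
for n in (1, 3, 9, 12, 18, 24, 36, 48):
    print(f"OC-{n}: {n * STS1_MBPS:.2f} Mbit/s")
```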


SONET Layers

[Figure: SONET layers. The path, line and section layers together correspond to the OSI data link layer; the photonic layer corresponds to the OSI physical layer.]

SONET defines four layers. The photonic layer is the lowest and corresponds to the OSI physical layer. The section, line and path layers correspond to the OSI data link layer.

Photonic Layer: The photonic layer includes specifications for the optical fibre link, the sensitivity of the receiver, multiplexing functions, etc. SONET uses NRZ encoding, with the presence of light representing 1 and the absence of light representing 0.

Section Layer: The section layer is responsible for the movement of a signal across a physical section. It handles framing, scrambling, and error control.

Line Layer: The line layer manages the signal movement across a physical line. STS multiplexers and add/drop multiplexers provide line layer functions.

Path Layer: The path layer is responsible for the movement of a signal from its optical source to its optical destination. At the optical source the signal is changed from an electronic form into an optical form, multiplexed with other signals, and encapsulated in a frame. At the optical destination the signal is demultiplexed, and the individual optical signals are changed back into electronic signals. Path layer overhead is added at this layer. STS multiplexers provide path layer functions.


SONET STS-N Frame Format

[Figure: STS-N frame format. The frame is a matrix of 9 rows by N x 90 octet columns. The first 3 x N columns carry the section overhead (SOH) and line overhead (LOH). The remaining columns form the STS-1 Synchronous Payload Envelope, containing N columns of path overhead (POH) and 86 x N columns of payload, shown here divided into seven VT groups.]

The basic SONET building block is the STS-1 frame, which consists of 810 octets and is transmitted once every 125µs, for an overall data rate of 51.84Mbit/s. The frame can logically be viewed as a matrix of 9 rows of 90 octets each, with transmission being one row at a time, from left to right and top to bottom. The first three columns (3 octets x 9 rows = 27 octets) of the frame are devoted to overhead octets: nine octets are used for section-related overhead and 18 octets for line overhead. The figure above shows the logical frame format and the arrangement of overhead octets based on the STS-N signal.

The remainder of the frame is payload, which is provided by the path layer. The payload includes a column of path overhead, which is not necessarily in the first available column position; the line overhead contains a pointer that indicates where the path overhead starts. This concept is the same as for SDH and will be explained below.

The user part of the STS-N payload is the Synchronous Payload Envelope (SPE). This payload can take forms such as typical T-carrier channels (DS1, DS3, etc.), FDDI, SMDS, ATM or Virtual Tributaries (VTs) of various sizes. Virtual tributaries are the building blocks of the SPE. The label VTxx designates a virtual tributary of xx Mbit/s. The table below shows the VT classes and their electrical channel equivalents.

VTxx    Data Rate (Mbit/s)   Electrical Channel Equivalent
VT1.5   1.544                DS1
VT2     2.048                E1 (CCITT G.703)
VT3     3.152                DS1C
VT4     Open                 Open
VT5     Open                 Open
VT6     6.312                DS2
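The octet accounting described above can be checked as follows; the 86 payload columns per STS-1 follow from 90 total columns minus 3 overhead columns minus 1 path-overhead column:

```python
# Octet accounting for one STS-1 frame (9 rows x 90 columns,
# transmitted every 125 us).
rows, cols = 9, 90
total = rows * cols          # 810 octets per frame
section_line_oh = rows * 3   # first 3 columns: 27 octets
                             # (9 section + 18 line overhead)
path_oh = rows * 1           # one POH column inside the SPE
payload = total - section_line_oh - path_oh
print(total, section_line_oh, payload)  # 810 27 774
```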


Synchronous Digital Hierarchy (SDH)

· International equivalent to North American SONET · ITU-T recommendations G.707, G.708, G.709 · SDH multiplexing hierarchy

SDH Level   Signal designation   Composite bit rate (Mbit/s)   Comparable SONET level
1           STM-1                155.52                        STS-3
4           STM-4                622.08                        STS-12
16          STM-16               2488.32                       STS-48

In modern trunk multiplexing systems two trends can be noted. The first is a trend towards a high degree of flexibility: various bit rates can be combined in one trunk circuit. The second is a trend towards more truly worldwide standards. The first trend is clearly visible in the North American SONET standard and the ITU-T series of standards for the Synchronous Digital Hierarchy (SDH). The ITU-T recommendations G.707, G.708, and G.709 specify the SDH multiplexing hierarchy, its frame formats and its multiplexing operation. The ITU-T SDH recommendations also achieve a high degree of compatibility with the North American SONET standard. Although SONET offers a much larger range of composite bit rates, as seen earlier, the SDH composite bit rates are chosen to be identical to three of the eight lower SONET bit rates. These bit rates have become the most widely used ones worldwide. In many respects the SONET and SDH standards are fully compatible, but they are certainly not identical. A problem remains that, in Europe, there is a need for 2Mbit/s channels and virtually no need for 1.5Mbit/s channels, whereas the reverse is true in the USA and Japan. Therefore the subdivision of the composite bit rate into channels differs between SONET and SDH. Due to these circumstances, 1.5Mbit/s channels crossing the boundary between the North American and European networks are carried in 2Mbit/s channels in Europe. The aim of the SDH recommendations was to reach worldwide compatibility of equipment produced by different manufacturers in different countries. To achieve this goal, the range of basic SDH recommendations (G.707-709) is extended with recommendations that specify details of equipment parts such as the optical interfaces (G.957) and SDH management (G.784). These recommendations leave less freedom for manufacturers than recommendations for more conventional multiplexing systems (G.702/703) have done in the past.
The result should be full compatibility between systems of different manufacturers. This may seem quite restrictive and counterproductive to competition but experience has shown that it improves service to users and reduces prices because having one global system results in larger unit sizes for manufacturing and thus reduced cost per unit.


STM-1 Frame Format

[Figure: STM-1 frame format. A matrix of 9 rows by 270 columns of octets; the first 9 columns carry the RSOH, the AU pointer and the MSOH (with frame sync bytes F at the start), and the remaining 261 columns form the Administrative Unit AU-4, which carries the Virtual Container VC-4 (payload) with its POH column.]

Built around a basic 125µs frame, SDH can support both 64kbit/s PCM channels and asynchronous cell transmission for the Asynchronous Transfer Mode (ATM). It is therefore an attractive transmission medium for a wide range of applications, from the POTS to broadband multimedia networks. The 125µs frame is known as STM-1 and consists of 2430 bytes. The STM-1 frame format is depicted in the figure above. It consists of a matrix of 9 rows and 270 columns of octets (bytes). Transmission of bits in the frame is one row at a time, from left to right and top to bottom. The frame payload is restricted to 261 x 9 bytes, the first nine columns being an overhead used for control. The payload area is called the virtual container (VC). Both synchronous and asynchronous operation can be used; in synchronous mode the byte in column 1 and row 1 is a start-of-frame byte, thus providing a reference for all other bytes. The overhead part of the frame is similar to SONET and contains the information needed at regenerators and multiplexers in a Regenerator Section OverHead (RSOH) field and a Multiplexer Section OverHead (MSOH) field. The virtual container itself contains a Path OverHead (POH) field, which is only analysed by the equipment at the end of a path through the network (see the SONET definitions of section, line and path), where demultiplexing of the virtual container may be required. A virtual container of the type VC-4 fits into the Administrative Unit (AU-4) of the STM-1 frame. The VC-4, which represents the payload of an STM-1 frame, provides a channel capacity of 155.52 x 261/270 = 150.34Mbit/s. However, this 150.34Mbit/s capacity is not all available to the user. In order to ensure the integrity of the VC-4 across the many links between transmitting and receiving nodes, the path overhead is included in the VC-4, occupying the first byte of each row. Thus the payload capacity available to the user is reduced to 149.76Mbit/s.
Evidently, the VC-4 can accommodate without difficulty the current PDH multiplex rate of 139.26Mbit/s. The VC-4 container can also be filled with smaller types of containers, for instance with three containers of the type VC-3, each consisting of 9 rows and 85 columns of octets. In that case the VC-4 carries some additional administrative information regarding its content, in the form of a Tributary Unit Group (TUG-3). This process can continue with smaller containers of the type VC-2, and finally VC-1. The VC-1 has two versions: the VC-11 for 1.5Mbit/s signals and the VC-12, mainly intended for 2Mbit/s signals. This SDH concept allows the transfer of a large range of bit rates, including the bit rate that is chosen for handling ATM, 155.52Mbit/s.
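The VC-4 capacity figures quoted above follow directly from the column counts; a short sketch of the arithmetic (variable names are illustrative):

```python
# Sketch: VC-4 and user-payload capacities of an STM-1 frame.
STM1_MBPS = 155.52
COLUMNS = 270       # octet columns per STM-1 frame
VC4_COLUMNS = 261   # columns left after the 9 overhead columns
POH_COLUMNS = 1     # path overhead occupies the first VC-4 column

vc4_mbps = STM1_MBPS * VC4_COLUMNS / COLUMNS
user_mbps = STM1_MBPS * (VC4_COLUMNS - POH_COLUMNS) / COLUMNS
print(round(vc4_mbps, 2), round(user_mbps, 2))  # 150.34 149.76
```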


[Figure: Two consecutive STM-1 frames (n and n+1), each with RSOH, AU pointer and MSOH overhead; a single Virtual Container VC-4, starting at byte J1, floats across the boundary between the two frames.]

VC-4 spread over two STM-1 frames. J1 is the first byte of the VC-4; its location is indicated by the AU pointer in the overhead section of frame n.

One of the advantages of SDH/SONET over PDH is the use of pointers that describe the position of the VCs in their respective superstructure. By using a pointer within the STM-1 frame overhead field the start of the VC-4 can be indicated and, provided that this pointer is updated, the VC-4 can float within the STM-1 frame. Indeed, the VC-4 does not have to be completely contained within one STM-1 frame but can spread over frame boundaries, as shown in the figure above. By using this facility the timing between the STM-1 frame and the VC-4 can be adjusted to accommodate transmission delay at various points in the network. The pointers can also be used to provide timing adjustment at a synchronising element, to modify the timing of several incoming SDH links so that they are properly aligned at the output.
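The floating-payload idea can be sketched with a simplified flat model of the payload area; the real AU-4 pointer counts in 3-octet units with its own H1/H2 encoding, so the function below and its units are illustrative only:

```python
# Sketch (simplified): a pointer gives the offset of J1, the first VC-4
# byte, inside the payload area of frame n; bytes past the end of that
# area spill into frame n+1.
PAYLOAD_BYTES = 261 * 9   # payload area of one STM-1 frame

def vc4_byte_position(pointer: int, byte_index: int):
    """Return (frame offset, position) of VC-4 byte `byte_index`
    when J1 sits at payload offset `pointer` in frame n."""
    absolute = pointer + byte_index
    return absolute // PAYLOAD_BYTES, absolute % PAYLOAD_BYTES

print(vc4_byte_position(0, 0))       # (0, 0): J1 at the start of frame n
print(vc4_byte_position(2000, 500))  # (1, 151): spills into frame n+1
```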


[Figure: Four STM-1 frames R, S, T and U entering a byte-interleaving multiplexer. The resulting STM-4 frame carries the 4 x 9 overhead columns interleaved, followed by the four VC-4 payloads (1044 columns) interleaved byte by byte.]

STM-1 is the basic SDH unit, as shown in the table of the SDH hierarchy above, but as we have seen, it has a maximum nominal payload capacity of about 140Mbit/s synchronous and 149.76Mbit/s asynchronous. Higher rates will be required for some applications, and in principle n x STM-1 can be provided by byte-interleaving n STM-1 frames. In practice only two have been defined, STM-4 and STM-16. An STM-4 frame consists of 4 byte-interleaved STM-1 frames, as shown in the figure above. The result is a frame having an overhead field of 36 x 9 bytes and a payload of 1044 x 9 bytes. The payload is therefore equivalent to 1044 x 9 x 8000 x 8 = 601.344Mbit/s of the total 622.08Mbit/s bit rate of the STM-4 signal.
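Byte interleaving itself is easy to sketch; here it is modelled on plain byte strings (a toy illustration, not a real framer):

```python
# Sketch: byte-interleaving four STM-1 frames into one STM-4 frame,
# modelled as interleaving four equal-length byte strings. The real
# STM-4 payload is 1044 x 9 x 8000 x 8 = 601.344 Mbit/s.
def byte_interleave(frames):
    """Take one byte from each frame in turn (R, S, T, U, R, S, ...)."""
    assert len({len(f) for f in frames}) == 1, "frames must be equal length"
    return bytes(b for group in zip(*frames) for b in group)

r, s, t, u = b"RRR", b"SSS", b"TTT", b"UUU"
print(byte_interleave([r, s, t, u]))   # b'RSTURSTURSTU'
```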


SDH Ring Network

[Figure: SDH ring network. Three Add/Drop Multiplexers (ADM) and a Digital Cross-Connect system (DXC) connected in a ring, with the DXC also linking the ring to a trunk exchange.]

The use of pointers as described above makes it easy to locate the bytes of a particular bit stream and to extract this particular part of the bit stream. This property is used in Add/Drop Multiplexers (ADM). This type of equipment makes it attractive to use a ring topology to connect switching centres: the SDH ring shown in the figure above. In an ADM one or more complete VCs are periodically extracted from and/or inserted into the main bit stream. This is in contrast to many older fixed-slot PDH multiplexers, which handle the same information not in the form of containers but as time-scattered bits (bit interleaved) or bytes (character interleaved). This last method (fixed slot) leaves no flexibility in the handling of the main bit stream. The main disadvantage of PDH compared to SDH is that the entire hierarchy has to be traversed for every multiplexing or demultiplexing operation.


Asynchronous TDM

· Also called address or label multiplexing
· Effectively statistical multiplexing
· Often combined with packet switching
· Main applications in data communications
· Early applications: SNA, DECNET, X.25, packet radio networks, satellite communications
· Modern applications: Frame Relay, ATM

As indicated above, synchronous TDM does not guarantee that the full transmission capacity of a link is used at all times. In fact, in data communications it is more likely that only a portion of the time slots is in use at a given instant. Because the slots are pre-assigned and fixed, whenever a connected station is not transmitting the corresponding slot is empty and that much of the bandwidth is wasted. For example, suppose we have to multiplex the output of 20 stations onto a single transmission line. Using synchronous TDM, the speed of the line must be at least 20 times the data generation rate of an individual station. What if only 10 stations are transmitting at any one time? Half the capacity of the line will be wasted. Asynchronous TDM, or statistical TDM, is designed to avoid this kind of waste. The term statistical multiplexing refers to the fact that the multiplexing mechanism adapts to the statistical nature of the data generated by the stations connected to the multiplexer. Like synchronous TDM, asynchronous TDM allows a number of lower-speed input lines to be multiplexed onto a single higher-speed line. Unlike synchronous TDM, however, in asynchronous TDM the aggregate data rate of the input lines can be greater than the data rate of the single line. In synchronous TDM we have a fixed number of n slots per frame for n input lines. In an asynchronous system, however, the number of slots per frame can be less than the number of input lines. The multiplexer scans all input lines and accepts portions of the data queued at each station until the frame is filled or no more data is queued. Thus the full link capacity may not be used all the time. However, if the data generated by the stations exceeds the link capacity for some time, the queues at the stations grow and the data packets experience delay. There is always a trade-off between optimum utilisation of the single link and reasonable queuing delays. 
The optimisation process has to take requirements in terms of link utilisation and delay distribution into account.
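The 20-station example above can be illustrated with a small simulation; the activity probability and frame count below are made-up illustrative numbers, not from the text:

```python
# Sketch: synchronous TDM wastes slots when stations are idle. Each of
# 20 stations gets a fixed slot per frame but transmits in a frame only
# with probability 0.5, so roughly half the slots carry no data.
import random

random.seed(1)
STATIONS, SLOTS_SYNC, FRAMES = 20, 20, 1000
ACTIVE_PROB = 0.5   # hypothetical per-frame activity of each station

used = sum(1 for _ in range(FRAMES) for _ in range(STATIONS)
           if random.random() < ACTIVE_PROB)
print(f"synchronous TDM slot utilisation: {used / (FRAMES * SLOTS_SYNC):.0%}")
# With ~10 of 20 stations active at a time, about half the capacity is wasted.
```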


Concept of Statistical Multiplexing

[Figure: Users 1 to 5 feed a multiplexer. Synchronous TDM reserves one slot per user per frame; statistical TDM packs only the waiting packets into three-slot frames, labelled by user number (1-4-5, 2-3-5, 1-2-4, 5-2-3).]

The figure above illustrates the difference between synchronous and asynchronous TDM. Users 1 to 5 generate data packets in an intermittent fashion. In the case of synchronous TDM a frame requires 5 slots in order to multiplex the data of the five users onto the single line. This leads to the situation where some of the slots in each frame remain empty most of the time and transmission bandwidth is wasted. In the case of asynchronous TDM the generated data packets of users 1 to 5 are statistically multiplexed into a frame with only 3 time slots rather than 5. The figure illustrates how this is done. The multiplexer takes waiting data packets from the stations' queues in a round-robin fashion and transmits them in the time slots. In the first two rounds, only three users generate data packets and the transmission line provides sufficient capacity to transmit them. In the third round, four users have generated data packets and the required bandwidth exceeds that of the single line. The packets in excess of the available data rate remain in the stations' queues and are transmitted in subsequent frames (see figure). There is one major weakness of asynchronous TDM: how does the demultiplexer know which slot belongs to which output line? In synchronous TDM each output line is associated with a particular slot position in the frame. But in asynchronous TDM data from a given input station is transmitted in whichever time slot is next according to the round-robin method. This means that in one frame the data can be transmitted in slot one and in the next frame in slot three. In the absence of a fixed time slot position, each slot must carry additional information which tells the demultiplexer to which output line the data in a particular slot belongs. This address, for local use only, is attached by the multiplexer and discarded by the demultiplexer once it has been read. 
Adding address bits to each time slot increases the overhead of an asynchronous multiplexing system and somewhat limits its potential efficiency. However, in most cases the address can be kept short, and the overhead is outweighed by the efficiency gained through statistical multiplexing. The need for addressing makes asynchronous TDM inefficient for bit or byte interleaving: imagine bit interleaving with each bit carrying an address. For this reason asynchronous TDM is efficient only when the size of the time slots is kept relatively large. A further advantage of asynchronous TDM is that it can accommodate variable-length time slots. Stations transmitting at a faster data rate can be given longer slots. Managing variable-length fields requires that control bits be appended to the beginning of each time slot to indicate the length of the coming data portion. This increases the overhead further and is only efficient with larger time slots. This method is used in data networks such as X.25.
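A toy sketch of the addressed-slot scheme described above: the multiplexer scans station queues round-robin and tags each filled slot with the originating station's address. All names, and the simplification of restarting the scan at station 0 each frame, are illustrative:

```python
# Sketch: a statistical multiplexer producing frames of (address, packet)
# slots; unused slots are simply absent from a frame.
from collections import deque

def stat_mux(queues, slots_per_frame):
    """Yield frames built by a round-robin scan over the station queues."""
    qs = [deque(q) for q in queues]
    while any(qs):                    # data still queued somewhere
        frame = []
        for addr, q in enumerate(qs):  # simplified scan, restarts at 0
            if q and len(frame) < slots_per_frame:
                frame.append((addr, q.popleft()))
        yield frame

frames = list(stat_mux([["a1", "a2"], [], ["c1"], ["d1"]], 3))
print(frames)  # [[(0, 'a1'), (2, 'c1'), (3, 'd1')], [(0, 'a2')]]
```

The address attached to each slot is exactly what lets the demultiplexer route `a1` and `a2` to output line 0 even though they arrive in different slot positions.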


Statistical Multiplexing

[Figure: Data rate requirements of users 1 to 3 over time, comparing the capacity needed under synchronous TDM with that needed under statistical multiplexing.]

The figure above shows a further example of the advantages of statistical multiplexing. The data rate requirements of three users are displayed over time. Users 1 and 2 have variable bit rate requirements and user 3 is a constant bit rate user. Rather than requiring a data rate of 10 units/sec for synchronous TDM, statistical multiplexing requires only 7 units/sec. This property also makes it possible to multiplex traffic from several users with a variety of service requirements. As indicated above, two users with variable bit rate requirements, such as for video traffic or general data traffic, can be multiplexed onto the same single transmission line together with a constant bit rate user such as user 3, who may have a voice telephony service requirement. Statistical multiplexing thus not only reduces the bandwidth required of the single transmission line but also allows traffic from users with different types of communication services, such as voice, video and computer data, to be multiplexed together. We will elaborate on this property further in the part of the course that deals with the network layer and packet switching. Weaknesses of this approach in terms of congestion and transmission delay will be highlighted in the section on congestion control. The major application of this multiplexing approach is the Asynchronous Transfer Mode technology.


Problems associated with TDM

· General Problems

­ Synchronisation
­ Inter-symbol interference
­ Crosstalk
­ Multipath fading in radio-based transmission

· Statistical TDM

­ capacity dimensioning
­ buffer space
­ admission control


Synchronisation

· Time Division Multiplexing requires precise bit synchronisation between transmitter and receiver for correct clock recovery
· Drifts in bit synchronisation result in bit errors or loss of connection
· Two options to maintain synchronisation

­ separate line carrying clock signal
­ synchronisation (clock) information embedded in electrical representation of digital information


Example of Statistical TDM

Asynchronous Transfer Mode (ATM)
· Multiplexing and switching technology
· Often referred to as cell relay
· Transmission of fixed-length data packets (cells)
· Provides statistical multiplexing
· Intended to provide a platform for the integration of computer and telecommunication services
· Underlying transport platform for B-ISDN

The Asynchronous Transfer Mode (ATM) will have an important role in the future B-ISDN. Although the highest levels of multiplexing in the future infrastructure for ISDN and B-ISDN will be based on synchronous TDM within the range of bit rates specified in the SDH, ATM is the preferred switching and multiplexing principle for B-ISDN. The reason behind this preference is the suitability and efficiency of ATM for all types of digital traffic services. The traffic types can be classified as
· Constant Bit Rate (CBR) services and
· Variable Bit Rate (VBR) services
Examples of CBR traffic are the 64kbit/s based services such as digital voice telephony based on PCM, videophony and telefax group 4, and higher continuous bit stream based services such as digital TV transmission. A good example of VBR traffic is interactive data, text, and image transfer. During short periods a constant bit-rate transfer is required for this type of traffic, but during the longer intervening periods no information has to be transferred at all. The advantage of a transfer system like ATM that can handle all types of information flows is also very significant for the simplicity of future users' connections. Users at home or in small offices will want to be connected to low-speed data networks (Internet) for data and text handling, to the telephone network, and to broadband networks for digital TV, and maybe also facsimile. Instead of a number of separate connections, ATM makes it possible to use a single connection by multiplexing the different types of data streams onto it. For this purpose the data is split into fixed-length packets (ATM cells) of 53 bytes each, with a 5-byte header of address and other information and 48 bytes of user data. The fixed-length cells require a fixed transmission time on the single link, and the performance of such a system can be approximated by an M/D/1 queuing system. 
It can be shown that the delay in information transfer from one node to the next, consisting of queuing delay and transmission delay, is a lower bound on all other multiplexing systems with variable length packets. The address field in the 5-byte header is also used for switching purposes, as will be outlined later.
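The M/D/1 approximation mentioned above has a closed-form mean delay (the Pollaczek-Khinchine result with zero service-time variance); a sketch, using an illustrative 155.52Mbit/s link at 80% load:

```python
# Sketch: mean delay (queueing + transmission) of an M/D/1 queue, the
# model used above for fixed-length ATM cell multiplexing.
CELL_BITS = 53 * 8   # one ATM cell

def md1_mean_delay(link_bps: float, arrival_cells_per_s: float) -> float:
    """Mean queueing plus transmission delay in seconds."""
    service = CELL_BITS / link_bps            # deterministic service time
    rho = arrival_cells_per_s * service       # utilisation, must be < 1
    assert rho < 1, "queue is unstable"
    wait = rho * service / (2 * (1 - rho))    # M/D/1 mean waiting time
    return wait + service

# An illustrative 155.52 Mbit/s link at 80% load: delay of a few microseconds.
print(md1_mean_delay(155.52e6, 0.8 * 155.52e6 / CELL_BITS))
```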


ATM Concepts

· Transmission path
· Virtual Path, Virtual Path Connection (VPC)
· Virtual Circuit, Virtual Circuit Connection (VCC)

[Figure: A transmission path carrying several Virtual Paths (VPs), each of which bundles several Virtual Channels (VCs).]

The figure above depicts the relationship between the physical transmission path, a Virtual Path (VP) and a Virtual Channel (VC). A transmission path contains one or more VPs, while each VP contains one or more VCs. Switching in ATM networks can be performed at either the transmission path, VP or VC level. At the ATM layer, which sits above the physical layer in an ATM system, users are provided a choice of either a VPC or a VCC. Logical connections in ATM are referred to as virtual channel connections (VCCs). A VCC is analogous to the virtual circuit concept in X.25. Advantages of using the VP and VC concepts:
· Simplified network architecture - network transport functions can be separated into those related to an individual logical connection (virtual channel) and those related to a group of logical connections (virtual path).
· Increased network performance - the network deals with fewer, aggregated entities.
· Reduced processing and short connection setup time - much of the work is done when the virtual path is set up. By reserving capacity on a virtual path connection in anticipation of later call arrivals, new virtual channel connections can be established by executing simple control functions at the endpoints of a virtual path connection; no call processing is required at transit nodes. Thus, the addition of new virtual channels to an existing virtual path involves minimal processing.
· Enhanced network services - the virtual path is used internal to the network but is also visible to the end user. As a result, the user may define closed user groups or closed networks of virtual channel bundles.


Virtual Path/Virtual Channel Characteristics

· Quality of service
· Switched and semi-permanent virtual channel connections
· Cell sequence integrity
· Traffic parameter negotiation and monitoring
· Virtual channel identifier restriction within a VPC (virtual paths only)

ITU-T recommendation I.150 lists the following as characteristics of virtual channel connections:
· Quality of service - a user of a VCC is provided with a Quality of Service (QoS) specified by parameters such as cell loss ratio (ratio of cells lost to cells transmitted) and cell delay variation
· Switched and semi-permanent virtual channel connections - both switched connections, which require call-control signalling, and dedicated (semi-permanent) channels can be provided
· Cell sequence integrity - the sequence of cells transmitted within a VCC is preserved
· Traffic parameter negotiation and usage monitoring - traffic parameters can be negotiated between a user and the network for each VCC. The input of cells to the VCC is monitored by the network to ensure that the negotiated parameters are not violated.
· Virtual channel identifier restriction within a VPC - one or more virtual channel identifiers, or numbers, may not be available to the user of the VPC, but may be reserved for network use. Examples include VCCs used for network management.


ATM Network

[Figure: An ATM network of ATM switches interconnected by SDH lines, with users attached at the User-Network Interface (UNI).]


ATM Cell

[Figure: ATM cells, each a header (H) plus payload (P), carried over a virtual circuit within a transmission path. UNI cell header fields:

GFC   Generic Flow Control         4 bits
VPI   Virtual Path Identifier      8 bits
VCI   Virtual Circuit Identifier   16 bits
PT    Payload Type                 3 bits
CLP   Cell Loss Priority           1 bit
HEC   Header Error Check           8 bits]

The ATM standard defines a fixed-size cell with a length of 53 octets (or bytes), comprising a 5-octet header and a 48-octet payload. The bits in each cell are transmitted over the physical circuit from left to right in a continuous stream. Cells are mapped into a physical transmission path, such as the North American DS1, DS3, or SONET; the European E1, E3 and E4; the ITU-T STM standards; and various other local fibre and electrical transmission systems. All information in an ATM network is switched and multiplexed in these fixed-length cells. The cell header identifies the destination, cell type, and priority. The VPI and VCI hold local significance only, between any two switches, and identify the destination. The GFC field allows the multiplexer to control the cell generation rate of an ATM terminal. The PT indicates whether the cell payload contains user data, signalling data, or maintenance information. The CLP bit indicates the relative priority of the cell. Lower priority cells may be discarded before higher priority cells by the Usage Parameter Control (UPC) at the user-to-network interface (UNI) if the cell rate violates the agreed user contract, or by the network if congestion occurs. The cell HEC detects and corrects errors in the header. The payload field is not protected against errors by the ATM layer; higher layer protocols must perform error checking and correction. The fixed cell size simplifies the implementation of ATM switches and multiplexers. It also guarantees that longer packets cannot delay the transmission of shorter packets, as in other systems, because data streams are always split into ATM cells. This enables ATM to carry real-time traffic such as voice and video alongside non-real-time traffic such as data, without degrading the real-time traffic.
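The UNI header layout above (GFC 4 bits, VPI 8, VCI 16, PT 3, CLP 1, HEC 8) can be captured in a few lines of bit manipulation; a sketch that packs and unpacks the fields, with the HEC left as a placeholder zero and field values assumed to be in range:

```python
# Sketch: the 5-octet ATM UNI cell header as a 40-bit big-endian word.
def pack_header(gfc, vpi, vci, pt, clp, hec=0):
    """Pack header fields (assumed in range) into 5 octets."""
    word = (gfc << 36) | (vpi << 28) | (vci << 12) | (pt << 9) | (clp << 8) | hec
    return word.to_bytes(5, "big")

def unpack_header(octets):
    """Recover (gfc, vpi, vci, pt, clp, hec) from 5 octets."""
    w = int.from_bytes(octets, "big")
    return ((w >> 36) & 0xF, (w >> 28) & 0xFF, (w >> 12) & 0xFFFF,
            (w >> 9) & 0x7, (w >> 8) & 0x1, w & 0xFF)

hdr = pack_header(gfc=0, vpi=10, vci=2, pt=0, clp=0)
print(unpack_header(hdr))  # (0, 10, 2, 0, 0, 0)
```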


ATM Protocol Reference Model

[Figure: ATM protocol reference model alongside the ISO/OSI reference model. The ATM model comprises user, control and management planes (with layer and plane management), higher layers, the ATM Adaptation Layers (AALs), the ATM layer and the physical layer; the OSI side shows the physical, data link, network and higher layers.]


ATM Physical Layer

[Figure: The ATM reference model with the physical layer expanded into its two sublayers: the Transmission Convergence (TC) sublayer above the Physical Medium Dependent (PMD) sublayer.]

The Physical (PHY) layer provides for transmission of ATM cells over a physical medium that connects two ATM devices. The PHY layer is divided into two sublayers: the Physical Medium Dependent (PMD) sublayer and the Transmission Convergence (TC) sublayer. The TC sublayer transforms the flow of cells into a steady flow of bits and bytes for transmission over the physical medium. The PMD sublayer provides the actual clocking of bit transmission over the physical medium.

Physical Medium Dependent (PMD) Sublayer

Three standards bodies have defined the physical layer in support of ATM: ANSI, CCITT/ITU-T, and the ATM Forum. The standardised interfaces are summarised in terms of interface clocking speed and physical medium as follows.

ANSI standard T1.624 currently defines three single-mode optical SONET-based interfaces for the ATM UNI:
· STS-1 at 51.84Mbps
· STS-3c at 155.52Mbps
· STS-12c at 622.08Mbps
ANSI T1.624 also defines operation at the DS3 rate of 44.736Mbps using the Physical Layer Convergence Procedure defined for IEEE 802.6 DQDB.

CCITT/ITU-T recommendation I.432 defines two optical Synchronous Digital Hierarchy (SDH) based physical interfaces for ATM:
· STM-1 at 155.52Mbps
· STM-4 at 622.08Mbps

The ATM Forum has defined four physical layer interface rates. Two of those are the same as the ANSI DS3 and STS-3c and the ITU-T STM-1 rates. FDDI, Fibre Channel and shielded twisted pair type interfaces are also available.


SDH-based ATM Cell Transmission

[Figure: SDH-based ATM cell transmission. 53-octet ATM cells are mapped into the Administrative Unit AU-4 (VC-4) payload of an SDH STM-1 frame, alongside the section overhead, AU pointer, line overhead and path overhead.]

Transmission Convergence (TC) Sublayer

The TC sublayer converts between the bit stream clocked to the physical medium and ATM cells. Upon transmission, TC maps the cells into the TDM frame format. On reception, it must perform "cell delineation" on the individual cells in the received bit stream, either from the TDM frame directly or via the HEC in the ATM cell header. Generating the HEC on transmission and using it to detect and correct errors on reception are also important TC functions. Another important function that the TC sublayer performs is cell rate decoupling: it sends idle cells when the ATM layer has not provided a cell. This is a critical function that allows the ATM layer to operate with a wide range of physical interfaces of different speeds. The TC sublayer employs several methods of mapping ATM cells into the bit stream of the physical transmission system; the most notable are direct mapping and mapping through a convergence procedure. Error detection and correction is performed using the Header Error Check (HEC), a 1-octet code computed over the other four octets of the 5-octet ATM cell header. The HEC code is capable of correcting a single bit error in the header and of detecting many patterns of multiple bit errors. The TC sublayer generates the HEC upon transmission. If errors are detected in the header, the received cell is discarded: since the header tells the ATM layer what to do with the cell, a corrupted header could cause the cell to be delivered to the wrong user or invoke an undesired operation of the ATM layer. The TC also uses the HEC to locate cells when they are directly mapped into a TDM payload; this is used for synchronisation and the process is called HEC-based cell delineation in the standards. The TC sublayer also performs cell-rate decoupling, or speed matching. Physical media that have synchronous cell time slots (e.g. DS3, SONET, SDH, STP, and the Fibre Channel based method) require this function, while asynchronous media such as the FDDI PMD do not.
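The HEC itself is specified (in ITU-T I.432) as a CRC-8 over the first four header octets with generator x^8 + x^2 + x + 1, the result XORed with the fixed pattern 01010101 (0x55); a minimal bit-by-bit sketch:

```python
# Sketch: ATM Header Error Check over the first four header octets.
def hec(header4: bytes) -> int:
    """CRC-8 with generator 0x07 (x^8 + x^2 + x + 1), then XOR 0x55."""
    crc = 0
    for byte in header4:
        crc ^= byte
        for _ in range(8):   # shift out one bit at a time, MSB first
            crc = ((crc << 1) ^ 0x07) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc ^ 0x55

# An all-zero header has CRC 0, so its HEC is just the 0x55 coset:
print(hex(hec(bytes(4))))  # 0x55
```

Because the CRC is linear, any single-bit error in the four covered octets changes the HEC, which is what makes single-error correction and HEC-based cell delineation possible.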


ATM Layer

· ATM Layer consists of two parts

­ Virtual Path (VP) level
­ Virtual Circuit (VC) level

· A virtual path is identified by the Virtual Path Identifier (VPI)
· A virtual circuit is identified by the Virtual Circuit Identifier (VCI)
· Switching at the VC level involves both VPI and VCI
· Switching at the VP level involves only the VPI
· A Virtual Channel Connection is a list of VC links
· A Virtual Path Connection is a list of VP links

An ATM device may be either an end-point or a connecting point for a virtual path or virtual channel. A Virtual Path Connection (VPC) or Virtual Channel Connection (VCC) exists only between end-points. A VP link or VC link can exist between an end-point and a connecting point or between two connecting points. A VPC or VCC is an ordered list of VP links or VC links, respectively. The Virtual Channel Identifier (VCI) in the cell header identifies a single VC on a particular VP. Switching at VC connecting points is done based upon the combination of VPI and VCI. A VC link is defined as a unidirectional flow of ATM cells with the same VCI between a VC connecting point and a VC end-point, or between two VC connecting points. A Virtual Channel Connection (VCC) is defined as a concatenated list of VC links. Virtual paths define an aggregate bundle of VCs between VP end-points. The Virtual Path Identifier (VPI) in the cell header identifies a bundle of one or more VCs. Switching at the VP level is done based upon the VPI alone; the VCI is ignored. A VP link is analogous to a VC link at the path level. A Virtual Path Connection is defined as a concatenated list of VP links.
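A VC switch's translation step can be sketched as a table lookup keyed on the incoming port and (VPI, VCI) pair. The VPI/VCI values below mirror the switching figure that follows (10/2 translated to 8/3, 12/2 to 10/2); the port numbers are invented for illustration:

```python
# Sketch: per-cell VPI/VCI translation at a VC switch. A pure VP switch
# would key on (port, VPI) alone and leave the VCI untouched.
SWITCH_TABLE = {
    # (in_port, in_vpi, in_vci): (out_port, out_vpi, out_vci)
    (1, 10, 2): (3, 8, 3),
    (1, 12, 2): (3, 10, 2),
}

def switch_cell(in_port, vpi, vci):
    """Look up the outgoing port and rewrite the cell's VPI/VCI."""
    out_port, new_vpi, new_vci = SWITCH_TABLE[(in_port, vpi, vci)]
    return out_port, new_vpi, new_vci

print(switch_cell(1, 10, 2))  # (3, 8, 3)
```

The local significance of VPI/VCI values follows directly from this structure: each switch only needs its own table to be consistent with its neighbours' tables, link by link.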


ATM Switching

[Figure: Two ATM switches translating VPI/VCI values along VP/VC links. At the first switch, incoming 10/2 and 12/2 become 8/3 and 10/2; at the second switch, 8/3 and 10/2 become 10/3 and 1/2. The concatenated VP/VC links form the Virtual Path Connection/Virtual Channel Connection.]


Traffic Contract and Quality of Service

· Agreement between user and network regarding the QoS
· Principal QoS parameters (ITU-T I.350)
­ Average delay
­ Delay variation
­ Loss ratio (CLP = 0, CLP = 1)
­ Error rate
· Traffic parameters
­ Peak Cell Rate (PCR)
­ Sustainable Cell Rate (SCR)
­ Maximum Burst Size (MBS)
­ Cell Delay Variation Tolerance (CDVT)

A traffic contract is an agreement between a user and a network regarding the Quality of Service (QoS) that a cell is guaranteed. The principal QoS parameters are: average delay, delay variation, loss ratio, and error rate. The traffic parameters define at least the Peak Cell Rate (PCR), and may optionally define a Sustainable Cell Rate (SCR) and Maximum Burst Size (MBS). A Cell Delay Variation Tolerance (CDVT) parameter is also associated with the peak rate. A leaky bucket algorithm in the network checks the conformance of the cell flow from the user. The leaky bucket principle can be illustrated by pouring a cup of fluid for each cell into a set of buckets leaking at rates corresponding to the PCR, and optionally to the SCR. A cell that would cause a bucket to overflow is considered non-conforming, and its fluid (the cell) is not added to the bucket (the buffer in the network). ITU-T I.371 and the ATM Forum UNI Specification v3.1 define the formal concept of a traffic contract. A separate contract exists for every VPC and VCC. The traffic contract covers the following aspects:
· the QoS that a network is expected to provide
· the traffic parameters that specify characteristics of the cell flow
· the conformance checking rules used to interpret the traffic parameters
· the network definition of a compliant connection
Quality of Service (QoS) is defined by specifying parameters for cells that conform to the traffic contract. QoS is defined on an end-to-end basis - a perspective that is meaningful to an end user. QoS classes are defined in terms of the following parameters, defined by ITU-T I.350 and ATM Forum UNI v3.1 for each ATM VPC and VCC:
· Average delay
· Cell delay variation
· Loss on CLP = 0 cells
· Loss on CLP = 1 cells
· Error rate
For those connections that do not specify traffic parameters and a QoS class, there is a capability defined by the ATM Forum as best effort, where no QoS guarantees are made.
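The leaky-bucket conformance check described above can be sketched in its continuous-state form (the basis of the GCRA in ITU-T I.371); times are in arbitrary units and the parameter values in the example are invented:

```python
# Sketch: continuous-state leaky bucket. Each conforming cell adds
# `increment` (the cell spacing at the policed rate, 1/PCR) of fluid;
# the bucket drains linearly with time; a cell arriving when the bucket
# level exceeds `limit` (the tolerance, e.g. CDVT) is non-conforming.
def leaky_bucket(arrival_times, increment, limit):
    bucket, last = 0.0, 0.0
    verdicts = []
    for t in arrival_times:
        bucket = max(0.0, bucket - (t - last))  # drain since last arrival
        if bucket <= limit:                     # conforming: add its fluid
            bucket += increment
            verdicts.append(True)
        else:                                   # non-conforming: no fill
            verdicts.append(False)
        last = t
    return verdicts

# Cells at the policed spacing conform; a tight burst overruns the tolerance:
print(leaky_bucket([0, 1, 2, 3], increment=1.0, limit=0.5))        # all True
print(leaky_bucket([0, 0.1, 0.2, 0.3], increment=1.0, limit=0.5))  # burst tagged
```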

© Dr. Dirk H Pesch, Electronics Dept., CIT, 2002/2003


QoS Classes

ATM Forum QoS Classes

QoS Class   QoS Parameters   Application
0           Unspecified      "Best effort", "At risk"
1           Specified        Circuit emulation, CBR
2           Specified        VBR video/audio
3           Specified        Connection-oriented data
4           Specified        Connectionless data

ITU-T I.362 Service Classes Service Class A: circuit emulation, constant bit rate video Service Class B: variable bit rate audio and video Service Class C: connection-oriented data transfer Service Class D: Connectionless data transfer

In order to make quality of service aspects easier for a user, a small number of predefined QoS classes are defined, with particular values of parameters, such as average delay, cell delay variation, etc., prespecified by a network in each of a few QoS classes. The ATM Forum UNI v3.1 specification defines five numbered QoS classes and associated applications, as listed above. A QoS class is defined by at least the following parameters:
· Cell loss ratio for the CLP = 0 flow
· Cell loss ratio for the CLP = 1 flow
· Cell delay variation for the aggregate CLP = 0 + 1 flow
· Average delay for the aggregate CLP = 0 + 1 flow
A specified QoS class provides performance to an ATM virtual connection (VPC or VCC) as specified by a subset of the ATM performance parameters. For each QoS class, there is one objective value for each performance parameter. Initially, each ATM network provider should define performance parameters for the four service classes, shown above, defined in ITU-T I.362. The ATM Forum has linked each numbered QoS class to a respective service class: QoS class 1 supports a QoS that meets service class A performance requirements, QoS class 2 supports service class B, and so on. There is also an unspecified QoS class, where no objective is specified for any service parameter. An example application of the unspecified QoS class is the support of a best-effort service; a typical example is the Internet. One component of this best-effort type of service, however, is that the user application is expected to adapt to the time-variable, available network resources. The current name for this type of service is Unspecified Bit Rate (UBR). An adaptive, flow-controlled service is currently being defined by the ITU and the ATM Forum as Available Bit Rate (ABR).


ATM Adaptation Layer

· AAL provides means to support applications of existing networks or connect existing networks to ATM networks · AAL protocols map data from upper layers into ATM cells · Four data types are supported by AAL protocols

­ ­ ­ ­ Constant bit-rate data Variable bit-rate data Connection-oriented packet data Connectionless packet data

· AAL handles transmission errors, lost and misinserted cells · AAL performs convergence and segmentation/reassembly functions · AAL provides flow and timing control

The ATM Adaptation Layer (AAL) provides a means to support different applications and services of existing networks in an ATM network. There are many different existing networks, ranging from circuit-switched networks such as the PSTN for voice and simple data communications, over packet-switched networks such as X.25, X.75, frame relay and SMDS, to LANs and MANs for computer communications. The characteristics of the services offered by such networks vary greatly, and in order to provide all these services in an integrated network, the transmission and switching platform has to provide a flexible means to accommodate the range of requirements. ITU-T recommendation I.362 lists the following general examples of services provided by AAL:
· Handling of transmission errors (transmission errors do not occur very often when the physical layer is provided by SDH over fibre-optic cables)
· Segmentation and reassembly, to enable larger blocks of data to be carried in the information field of ATM cells
· Handling of lost and misinserted cell conditions caused by adaptive routing and flow control protocols
· Flow and timing control to adapt the bit rate of a source to the bit rates offered by the ATM layer
By providing the AAL, an ATM-based network can provide services such as voice and video telephony, video on-demand, and all kinds of data communication, ranging from X.25, frame relay and SMDS to TCP/IP over ATM, and more recently LAN emulation.


Service Classification for AAL

                                       Class A        Class B        Class C            Class D
Timing relation between
source and destination                 Required       Required       Not required       Not required
Bit rate                               Constant       Variable       Variable           Variable
Connection mode                        Connection-    Connection-    Connection-        Connectionless
                                       oriented       oriented       oriented
AAL protocol                           Type 1         Type 2         Type 3/4, Type 5   Type 3/4

In order to minimise the number of different AAL protocols that must be specified to meet a variety of needs, the four classes of service specified earlier were considered by ITU-T when the requirements for the AAL were drawn up. Class A, providing circuit emulation for CBR services, requires the maintenance of a timing relation, and the transfer is connection-oriented. Class B services, which would be used in a video teleconference, are also connection-oriented and require a timing relation, but allow for a variable bit rate at the source. Classes C and D would be used for all kinds of packet data applications. Class C provides a connection-oriented service and class D a connectionless service. No particular timing relations are required by either class. A typical application for class D would be LAN emulation or TCP/IP over ATM.


AAL Sublayers

[Figure: B-ISDN protocol reference model - the ATM Adaptation Layer (AAL), split into the Convergence Sublayer (CS) and the Segmentation and Reassembly Sublayer (SAR), sits between the higher layers of the user and control planes and the ATM layer, with the physical layer below and the management plane (plane management and layer management) alongside.]

To support the various classes of service, a set of protocols at the AAL level have been defined. The AAL layer is organised into two logical sublayers, the Convergence Sublayer (CS) and the Segmentation and Reassembly Sublayer (SAR). The convergence sublayer provides the functions needed to support specific applications using AAL. Each AAL user attaches to AAL at a service access point (SAP), which is simply the address of the application; more generally, a SAP is a software interface. This sublayer is therefore service dependent. The segmentation and reassembly sublayer is responsible for packaging information received from the CS into cells for transmission and unpacking the information at the other end. As described earlier, at the ATM layer each cell consists of a 5 octet header and a 48 octet information payload. Thus, SAR must pack any SAR headers and trailers, plus CS information, into 48 octet blocks. Initially, ITU-T defined one protocol type for each service class, named Type 1 through Type 4. Actually, each protocol type consists of two protocols, one at the CS sublayer and one at the SAR sublayer. More recently, Types 3 and 4 were merged into Type 3/4 because most of their functionality was the same, and a new type, Type 5, was defined. The figure in the previous slide shows which services are supported by which type. In all cases, a block of data from a higher layer is encapsulated into a protocol data unit (PDU) at the CS sublayer. Often this sublayer is also referred to as the common-part convergence sublayer (CPCS), leaving open the possibility that additional, specialised functions may be performed at the CS level. The CPCS PDU is then passed to the SAR, where it is broken up into payload blocks. Each payload block fits into a SAR PDU, which has a total length of 48 octets. Each 48 octet SAR PDU fits into a single ATM cell.


AAL 1

· AAL1 supports applications at constant bit rates · Allows ATM to connect to DS1/T1 or E1 (G.703)

The CS divides the constant bit rate stream from the upper layer into 47 octet segments; the SAR sublayer prepends a 1 octet header to each:

1 octet header                    47 octet payload
CSI  | SC     | CRC    | P
1 bit  3 bits   3 bits   1 bit

CSI: Convergence sublayer identifier   SC: Sequence count
CRC: Cyclic redundancy check           P: Parity

Convergence Sublayer: divides the constant bit stream received from the upper layer into 47 octet segments and passes them on to the SAR sublayer.
Segmentation and Reassembly: The figure above shows the format of an AAL1 SAR PDU. The SAR PDU consists of a 47 octet payload, which contains the data received from the CS, to which the SAR sublayer adds a one octet header. The result is a 48 octet PDU that is then passed on to the ATM layer and embedded in an ATM cell. The SAR PDU header consists of four fields:
Convergence sublayer identifier (CSI): The one bit CSI field will be used for signalling purposes that are still under study.
Sequence count (SC): The three bit SC field is a modulo 8 sequence number used for ordering and identifying the payloads for end-to-end error and flow control.
Cyclic redundancy check (CRC): The three bit CRC field is calculated over the first four bits of the header using the four bit divisor x^3 + x + 1. Three bits may look like a lot of redundancy; however, they are intended not only to detect single or multiple bit errors, but also to correct single bit errors. In non-real-time applications, an error in a cell is less consequential because the error control functions of the upper layer would attempt a retransmission of the erroneous information. In real-time applications, however, retransmission is not an option due to the stringent delay requirements. With no retransmission, the quality of the received data deteriorates, and with erroneous cells being discarded, the loss of information may be audible or visible. The ability to correct single bit header errors dramatically reduces the number of lost cells and can therefore maintain a good quality of service.
Parity (P): The one bit P field is a standard parity bit calculated over the first seven bits of the header. A parity bit can detect an odd number of errors but not an even number. This feature can also assist error correction in the first four bits of the header. However, the CRC check provides more comprehensive error protection; the parity bit is a bonus in terms of error detection and correction.
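The header construction described above can be made concrete with a short sketch: it computes the 3-bit CRC with generator x^3 + x + 1 over the 4-bit CSI+SC field and appends an even parity bit over the first seven bits. The function name is assumed for illustration, and even parity is taken as the convention.

```python
def aal1_sar_header(csi, sc):
    """Build the one-octet AAL1 SAR PDU header: CSI(1) | SC(3) | CRC(3) | P(1).

    CRC-3 with generator x^3 + x + 1 (bit pattern 1011) over the 4-bit
    CSI+SC field, then an even parity bit over the first seven bits.
    """
    assert csi in (0, 1) and 0 <= sc <= 7
    snf = (csi << 3) | sc                  # 4-bit sequence number field
    reg = snf << 3                         # append three zero bits for division
    for shift in range(6, 2, -1):          # long division modulo 2, bits 6..3
        if reg & (1 << shift):
            reg ^= 0b1011 << (shift - 3)   # subtract (XOR) the aligned divisor
    crc = reg & 0b111                      # 3-bit remainder
    first7 = (snf << 3) | crc
    parity = bin(first7).count("1") & 1    # even parity over the first 7 bits
    return (first7 << 1) | parity
```

By construction, the 7-bit codeword (CSI, SC, CRC) divides exactly by 1011, and the full octet always has an even number of one bits, which is what allows the receiver to detect and correct single bit header errors.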


AAL 2

· Supports real-time variable bit rate (VBR) applications · Video conferencing and video on-demand services are supported

The CS reorders the variable bit rate stream into 45 octet segments; the SAR sublayer adds a 1 octet header and a 2 octet trailer to each:

1 octet header             45 octet payload        2 octet trailer
CSI  | SC     | IT                                 LI     | CRC
1 bit  3 bits   4 bits                             6 bits   10 bits

CSI: Convergence sublayer identifier   SC: Sequence count   IT: Information type
LI: Length indicator                   CRC: Cyclic redundancy check

Convergence Sublayer: The format for reordering the received bit stream and adding overhead is not defined here; different applications may use different formats.
Segmentation and Reassembly: The figure above shows the format of an AAL2 data unit at the SAR sublayer. Functions at this layer accept a 45 octet payload from the CS and add a one octet header and a two octet trailer. The result is again a 48 octet data unit, which is passed on to the ATM layer and encapsulated in an ATM cell. The overhead at this layer consists of three fields in the header and two fields in the trailer.
Convergence sublayer identifier (CSI): The one bit CSI field will be used for signalling purposes that are still under study.
Sequence count (SC): The three bit SC field is a modulo 8 sequence number used for ordering and identifying the payloads for end-to-end error and flow control.
Information Type (IT): The four IT bits identify the data segment as falling at the beginning, middle or end of the message.
Length Indicator (LI): The first six bits of the trailer are used with the final segment of the message (when the IT field indicates the end of the message) to indicate how much of the final cell is data and how much is padding; the field gives the octet position at which the padding starts.
CRC: The last 10 bits of the trailer are a CRC over the entire data unit.


AAL 3/4

· Supports connection-oriented and connectionless data services · Allows services such as TCP/IP, X.25 and frame relay to connect to ATM networks

CS PDU:
Header                  User data           PAD        Trailer
T | BT | BA             <= 65535 octets     0 - 43     AL | ET | L
1   1    2 octets                           octets     1    1    2 octets

T: Type   BT: Begin tag   BA: Buffer allocation
AL: Alignment   ET: End tag   L: Length

The CS PDU is passed to the SAR sublayer in 44 octet segments; the SAR sublayer adds a 2 octet header and a 2 octet trailer to each:

2 octet header                       44 octet payload        2 octet trailer
ST     | CSI   | SC     | MID                                LI     | CRC
2 bits   1 bit   3 bits   10 bits                            6 bits   10 bits

ST: Segment type   CSI: Convergence sublayer identifier   SC: Sequence count   MID: Message ID
LI: Length indicator   CRC: Cyclic redundancy check

Initially, AAL3 was intended to support connection-oriented data services and AAL4 to support connectionless data services. As they evolved, it became evident that the fundamental issues of the two protocols were the same, and they were therefore combined into the single AAL3/4 protocol.
Convergence sublayer: The convergence sublayer accepts data packets of no more than 65535 (2^16 - 1) octets from the upper layer service and adds a header and trailer. The header and trailer indicate the start and end of the packet for reassembly purposes, as well as how much of the final frame is data and how much is padding. Once that is done, the CS passes the data packet in 44 octet segments to the SAR sublayer. The AAL3/4 CS header and trailer fields are:
Type (T): This field is a legacy from the original AAL3 and is set to zero here.
Begin Tag (BT): This one octet field indicates the first segment of the packet and provides synchronisation for the receiving end.
Buffer Allocation (BA): This two octet field tells the receiver how much buffer space is required for this data packet.
PAD: Padding is added to fill the payload of the final cell.
Alignment (AL): A one octet field to make the rest of the trailer four octets long.
Ending Tag (ET): This one octet field serves as synchronisation for the receiver.
Length (L): This two octet field indicates the length of the data unit.


Segmentation and Reassembly sublayer: The figure above shows the format of the AAL3/4 data unit. Functions at this layer accept a 44 octet payload from the CS and add a 2 octet header and a 2 octet trailer. The resulting 48 octet data unit is passed on to the ATM layer. The fields in the header and trailer are:
Segment type (ST): The two bit ST field tells whether the segment belongs to the start, middle or end of the data packet, or is a single segment data packet.
Convergence sublayer identifier (CSI): The one bit CSI field will be used for signalling purposes that are still under study.
Sequence count (SC): The three bit SC field is a modulo 8 sequence number used for ordering and identifying the payloads for end-to-end error and flow control.
Multiplexing identification (MID): This 10 bit field identifies cells coming from different data sources that are multiplexed into the same virtual connection.
Length indicator (LI): The first six bits of the trailer are used in conjunction with the ST field to indicate how much of the last segment is data and how much is padding.
CRC: The last 10 bits of the trailer are a CRC over the entire data unit.
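The segmentation step above can be sketched in a few lines. The function below is an illustrative simplification: it cuts a padded CS PDU into 44 octet payloads and assigns each a segment type, using the common ST encodings (BOM = 10, COM = 00, EOM = 01, SSM = 11); the function name and tuple representation are assumptions, and the header is not packed into its real bit layout.

```python
def aal34_segment(cpcs_pdu, mid):
    """Split a CS PDU (already padded to a multiple of 44 octets) into
    44 octet SAR payloads, each tagged with (ST, SC, MID, payload)."""
    assert len(cpcs_pdu) % 44 == 0 and cpcs_pdu
    chunks = [cpcs_pdu[i:i + 44] for i in range(0, len(cpcs_pdu), 44)]
    segments = []
    for n, payload in enumerate(chunks):
        if len(chunks) == 1:
            st = 0b11                      # SSM: single segment message
        elif n == 0:
            st = 0b10                      # BOM: beginning of message
        elif n == len(chunks) - 1:
            st = 0b01                      # EOM: end of message
        else:
            st = 0b00                      # COM: continuation of message
        segments.append((st, n % 8, mid, payload))   # SC is a modulo 8 count
    return segments
```

The MID value is carried unchanged in every segment, which is what lets a receiver reassemble several interleaved messages arriving on the same virtual connection.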


AAL 5

· Simple and Efficient Adaptation Layer (SEAL): no sequencing, addressing or error protection per cell · Used for LAN emulation or ATM backbones

CS PDU:
User data           PAD        Trailer
<= 65535 octets     0 - 47     UU | T | L | CRC
                    octets     1    1   2   4 octets

UU: User-to-user ID   T: Type   L: Length   CRC: Cyclic redundancy check

The SAR sublayer cuts the CS PDU into 48 octet payloads, each filling one ATM cell; no SAR header or trailer is added.

AAL3/4 provides comprehensive sequencing and error control functions that are not necessary for every application. When transmissions are not routed through multiple nodes or multiplexed with other transmissions, sequencing and elaborate error correction mechanisms are an unnecessary overhead. ATM backbones and LANs are applications that do not need this overhead, and for these applications AAL5 was introduced. AAL5 assumes that all segments belonging to a single data packet travel sequentially, and that the remaining functions provided by the CS and SAR in AAL3/4 are provided by the upper layers.
Convergence sublayer: The convergence sublayer accepts data packets of no more than 65535 octets from the upper layer service and adds an 8 octet trailer, as well as any padding required to ensure that the position of the trailer falls where the receiving equipment expects it (in the last 8 octets of the final segment). The message is then passed on to the SAR sublayer in 48 octet segments. The fields added at the end of the data packet are:
PAD: The rules of padding are the same as for AAL3/4.
User-to-User ID (UU): This one octet field is left to the discretion of the user.
Type (T): Reserved for future use.
Length (L): This two octet field indicates how much of the PDU is data and how much is padding.
CRC: The last four octets of the trailer are a CRC over the entire data unit.
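The padding rule above can be sketched directly: choose the pad length so that data, pad and the 8 octet trailer together fill a whole number of 48 octet cells. The sketch below is illustrative - the function name is assumed, and zlib's CRC-32 is used as a stand-in (AAL5 specifies a 32-bit CRC, but its exact computation procedure differs from zlib's).

```python
import struct
import zlib

def aal5_cpcs_pdu(user_data, uu=0):
    """Wrap user data in an AAL5 CS PDU: data | PAD | UU(1) T(1) L(2) CRC(4).

    The pad length (0..47 octets) makes the whole PDU a multiple of
    48 octets, so the SAR sublayer can cut it into whole cell payloads.
    """
    if not 0 <= len(user_data) <= 65535:
        raise ValueError("user data must fit in the 16-bit length field")
    pad_len = (-(len(user_data) + 8)) % 48
    body = user_data + b"\x00" * pad_len + struct.pack("!BBH", uu, 0, len(user_data))
    crc = zlib.crc32(body) & 0xFFFFFFFF   # stand-in for the AAL5 CRC-32
    return body + struct.pack("!I", crc)
```

For example, a 5 octet packet gets 35 octets of padding (5 + 35 + 8 = 48, one cell), while a 41 octet packet gets 47 octets of padding and spans two cells.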


Traffic Contract Reference Model

[Figure: equivalent terminal reference model - traffic sources with VPC/VCC connection endpoints feed a multiplexer and a traffic shaper inside the ATM layer of an equivalent terminal; physical layer functions and other CPE functions generating traffic deviations lie between the PHY-SAP, the private UNI (SB reference point) and the public UNI (TB reference point), where the network applies UPC.]

The basis of the traffic contract is a reference configuration, which in the ATM standards is called an equivalent terminal reference model (as illustrated above). ATM cell traffic is generated by a number of sources, for example a number of workstations, which each have either a VPC or VCC connection endpoint. These are all connected to a cell multiplexer. Associated with the multiplexing function is a traffic shaper, which ensures that the cell stream conforms to the set of traffic parameters defined by a particular conformance-checking algorithm. The output of the shaper is the physical layer service access point (PHY-SAP) in the layered model of ATM. After the shaper function, some physical layer functions may change the actual cell flow emitted over the private ATM UNI (or SB reference point) so that it no longer conforms to the traffic parameters. This ATM cell stream may then be switched through other Customer Premises Equipment (CPE), such as an ATM backbone, before it is delivered to the public UNI (or TB reference point).


End-to-end QoS Reference Model

[Figure: end-to-end QoS reference model - a sending terminal connects through Network A (nodes 1 to N, each contributing its node QoS) and Network B to a receiving terminal; the Network A and Network B QoS contributions accumulate into the end-to-end QoS.]

The end-to-end QoS reference model, depicted above, may contain one or more intervening networks, each with multiple nodes. Each of these networks may introduce additional fluctuations in the cell flow due to multiplexing and switching, thereby impacting on QoS.


Traffic Descriptor

· A traffic descriptor is a list of parameters which captures the source characteristics

PCR = 1/T, where T is the minimum intercell spacing
Maximum number of back-to-back cells = tau/T + 1, where tau is the CDV tolerance
SCR = MBS/Ti, where Ti is the burst repetition period
Tb = (MBS - 1)·T, the duration of a maximal burst at the peak cell rate

ATM traffic descriptors include:
· A mandatory Peak Cell Rate (PCR) in cells/second
· A Cell Delay Variation (CDV) tolerance tau in seconds
· An optional Sustainable Cell Rate (SCR) in cells/second; note SCR < PCR
· A Maximum Burst Size (MBS) in cells
The figure above illustrates the key traffic contract parameters of a traffic descriptor:
· Peak Cell Rate (PCR) = 1/T in cells/second, where T is the minimum intercell spacing in seconds, i.e. the time interval from the first bit of one cell to the first bit of the next cell.
· Cell Delay Variation (CDV) tolerance tau in seconds. This traffic parameter normally cannot be specified by the user, but is set instead by the network. The number of cells that can be sent back-to-back at the access line rate is tau/T + 1.
· Sustainable Cell Rate (SCR), the maximum average rate at which a bursty source may send cells.
· Maximum Burst Size (MBS), the maximum number of cells in a burst sent at the peak cell rate.
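The relations between the descriptor parameters can be worked through with illustrative numbers (the helper function and the figures below are assumptions for the example, not values from the slides): a peak rate of 10,000 cells/s gives T = 100 microseconds; a CDV tolerance of 330 microseconds then admits tau/T + 1 = 4 back-to-back cells.

```python
def descriptor_relations(pcr, tau, mbs, ti):
    """Quantities derived from an ATM traffic descriptor (hypothetical helper).

    pcr in cells/s; tau (CDV tolerance) and ti (burst repetition period)
    in seconds; mbs in cells.
    """
    T = 1.0 / pcr                     # minimum intercell spacing in seconds
    back_to_back = int(tau / T) + 1   # cells admissible back to back
    tb = (mbs - 1) * T                # duration of a maximal burst at PCR
    scr = mbs / ti                    # sustainable rate: one MBS burst per Ti
    return back_to_back, tb, scr
```

With MBS = 100 cells, a maximal burst lasts Tb = 99 x 100 us = 9.9 ms, and one such burst every Ti = 100 ms corresponds to an SCR of 1,000 cells/s, well below the 10,000 cells/s peak.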


Traffic and Congestion Control

· Statistical multiplexing properties of ATM require traffic and congestion control · Requirements for ATM traffic and congestion control (I.371)

­ ATM layer traffic and congestion control should support a set of ATM layer QoS classes sufficient for all network services
­ ATM layer traffic and congestion control should not rely on AAL protocols that are network specific, or on higher layer protocols that are application specific
­ The design of an optimum set of ATM layer traffic and congestion controls should minimise network and end-system complexity while maximising network utilisation


Traffic and Congestion Control Functions

Response time                 Traffic control functions                     Congestion control functions
Long term                     · Network resource management
Connection duration           · Connection admission control
Roundtrip propagation time    · Fast resource assignment                    · Explicit notification
Cell insertion time           · Usage parameter control                     · Selective cell discarding
                              · Priority control

In order to meet the traffic and congestion control objectives, ITU-T has defined a collection of traffic and congestion control functions that operate across a spectrum of timing intervals. The table above lists the functions with respect to their expected response times. Four levels of timing are considered:
· Cell insertion time - functions at this level react immediately to cells as they are transmitted
· Roundtrip propagation time - at this level, the network responds within the lifetime of a cell in the network, and may provide feedback indications to the source
· Connection duration - at this level, the network determines whether a new connection at a given QoS level can be accommodated and what traffic contract will be agreed to
· Long term - these control functions affect more than one ATM connection and are established for long-term use
Traffic control functions
· Network resource management - allocates network resources in such a way as to separate traffic flows according to service characteristics. The only traffic control function based on network resource management defined by the ATM Forum deals with virtual paths. There are two options for resource allocation to virtual path connections:
1. Aggregate peak demand - allocate capacity equal to the aggregate peak data rate demand of all virtual channel connections
2. Statistical multiplexing - allocate capacity greater than or equal to the average data rate demand but less than the aggregate peak rate demand. In most cases the resource allocation would be based on the aggregate sustainable rate demand.
· Connection admission control - connection admission control is the first line of defence for the network to protect itself from excessive loads. At every admission request by a user, the network and the user agree on a traffic contract and its parameters, as specified earlier. Once the connection is admitted, the network will provide the agreed QoS as long as the user complies with the traffic contract. Connection admission control is a very complex function and currently the subject of intense research. The main problem lies in the fact that it is extremely difficult to estimate proper values for the traffic parameters such that the required QoS can be provided.


· Usage parameter control - Once a connection has been accepted by the connection admission control function, the usage parameter control (UPC) function of the network monitors the connection to determine whether the traffic conforms to the traffic contract. The main purpose of UPC is to protect network resources from an overload on one connection that would adversely affect the QoS on other connections, by detecting violations of assigned parameters and taking appropriate actions. Usage parameter control can be performed at both the virtual path and virtual channel levels. More important is the control at the VPC level, as network resources are allocated per virtual path. UPC encompasses two separate functions:
· Control of the peak cell rate and the associated cell delay variation (CDV)
· Control of the sustainable cell rate and the associated burst tolerance
A form of leaky bucket algorithm is used in the control of peak and sustainable cell rates. The leaky bucket algorithm discards cells that are not compliant with the traffic contract. Alternatively, non-compliant cells may be tagged with a CLP of 1 and are then subject to discard at a later stage in the network.
· Priority control - Priority control comes into the picture when the network, at some point beyond UPC, discards CLP = 1 cells. The objective is to discard lower priority cells in order to protect the performance of higher priority cells.
· Fast resource management - Fast resource management functions operate on the time scale of the roundtrip propagation delay of the ATM connection. The current version of ITU-T I.371 lists fast resource management as a potential tool for traffic control that is for further study. One example of such a function is the ability of the network to allow users to temporarily exceed the data rate agreed in the traffic contract in order to send a burst of data. If the network determines that the required resources are available on the VPC/VCC, it may grant the request. After the burst has been sent, the connection resumes its original resource allocation.
Congestion control
Congestion control is a set of actions taken by the network to minimise the intensity, spread and duration of congestion in the network.
· Selective cell discarding - Selective cell discarding is similar to priority control. In the priority control function only excess (CLP = 1) cells are discarded to avoid congestion. Once congestion has actually occurred, the network is no longer bound to meet agreed performance criteria and can discard any CLP = 1 cell; it may even discard CLP = 0 cells on ATM connections that are not complying with their traffic contract.
· Explicit Forward Congestion Indication - Any ATM node that is experiencing congestion may set an EFCI in the payload type field of the cell header of cells on connections passing through the node. The indication notifies the user that congestion avoidance procedures should be initiated for traffic in the same direction as the received cell: it indicates that this cell, on this ATM connection, has encountered congested resources. The user may then invoke higher-layer protocols to adaptively lower the cell rate of the connection. The network issues the indication by setting the first two bits of the payload type field to 01.
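The tagging option mentioned for UPC can be sketched as a variant of the leaky bucket: cells that fail the conformance test are not dropped at the policing point but marked CLP = 1, leaving them first in line for discard downstream. The function name and the (time, CLP) output representation below are illustrative assumptions.

```python
def upc_police(arrivals, increment, limit):
    """UPC sketch: cells failing the leaky bucket test are tagged CLP = 1
    rather than dropped; conforming cells keep CLP = 0.

    arrivals  -- cell arrival times in seconds
    increment -- bucket fill per conforming cell (1/PCR)
    limit     -- bucket capacity (the CDV tolerance)
    Returns a list of (arrival_time, clp) pairs.
    """
    x, lct = 0.0, None
    cells = []
    for ta in arrivals:
        xp = x if lct is None else max(0.0, x - (ta - lct))  # drain bucket
        if xp > limit:
            cells.append((ta, 1))          # non-conforming: tag CLP = 1
        else:
            cells.append((ta, 0))          # conforming: CLP = 0
            x, lct = xp + increment, ta    # tagged cells add no fluid
    return cells
```

Tagged cells still reach their destination when the network is lightly loaded; only under congestion does the selective discard function remove them first.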


Code Division Multiplexing

· Spread-spectrum communication technique · Common communication link shared through

­ a combination of frequency and time multiplexing (Frequency Hopping)
­ application (multiplication) of a pseudo-random sequence (code) to distinguish users (Direct Sequence)

· Transmitted signal has much wider bandwidth than information signal · Applications in mobile radio systems, wireless LANs, and high-speed optical fibre communication systems


Basic Concept of Spread Spectrum Communications

· Frequency band spread of information signal is achieved by application of Pseudo Noise (PN) sequence · Application of PN Sequence through

­ Multiplication (Direct Sequence) ­ Fast Carrier Changes (Frequency Hopping)

In a spread spectrum communication system, a narrowband source signal such as human voice is transformed by the system into a wideband signal. For example, the speech encoder of many mobile radio systems creates a digital signal with a bandwidth of around 8kHz out of a human voice signal. A spread spectrum system would transform this into a signal with a bandwidth of 1MHz. In order to achieve the band spread from 8kHz to 1MHz, the spread spectrum system applies a pseudo noise sequence to the original signal. In the digital domain, which is considered here, a pseudo noise signal is a quasi-random sequence of 0s and 1s. The pseudo noise sequence is applied either through multiplication or through fast changes in the carrier frequency at which the signal is transmitted. Multiplication is performed through a modulo 2 addition of the original digital source signal with the pseudo noise sequence. This method is the basis of direct sequence spread spectrum (DSSS) systems. The second common method is achieved by jumping from narrowband carrier to narrowband carrier, each of which has the bandwidth of the source signal. These changes (jumps) in carrier frequency take place at a high rate, the rate of the spreading signal, e.g. 1MHz.


Model of Spread-Spectrum Communication System

[Figure: block diagram - the information sequence (e.g. 010111001) passes through a channel encoder and a modulator, where a pseudo-random pattern generator supplies the spreading sequence; after the channel, a demodulator driven by an identical pseudo-random pattern generator and a channel decoder recover the output sequence.]

The figure above depicts the model of a spread spectrum communication system. The source information sequence is passed into the channel encoder, which protects it from transmission errors encountered on the radio link by adding redundancy to the sequence. The information sequence, including redundancy, is then passed on to the modulator. The modulator transforms the information signal into a radio signal. A pseudo-random pattern generator generates the pseudo-noise sequence, which is applied in the modulator in order to achieve the band spread from, say, 8kHz to 1MHz. The spread spectrum radio signal is transmitted through the radio channel and received at the destination, where the reverse process takes place. The demodulator transforms the received radio signal back into an information sequence. Again, a pseudo-noise sequence is applied in order to de-spread the signal from 1MHz back to 8kHz. The channel decoder detects and possibly corrects errors encountered in the radio channel, so that the correct information signal is output at the end of the channel decoder.


Spread-Spectrum Signal Transmission

Transmitted signal:  s(k) = sum_{i=0}^{M-1} x(k) g(k - i)
Received signal:     s(k) + n(k)
After de-spreading:  x(k) + sum_{i=0}^{M-1} n(k) g(k - i)

[Figure: four spectra (a-d) of spectral power density S versus frequency f - (a) the narrowband data signal x(k), (b) the spread signal s(k) with reduced density, (c) the received spread signal plus noise, (d) the de-spread narrowband signal standing above the still-spread noise.]

Spread spectrum signal transmission is shown in the slide above. Assume x(k) is the binary data sequence of a voice signal. Again, the voice signal is assumed to have a bandwidth of 8kHz. Figure a shows the frequency bandwidth occupied by the data sequence (voice signal). S is the spectral power density, which is essentially the amplitude of the voice signal. Figure b shows what happens to the data sequence x(k) when a pseudo-noise sequence g(k) is applied to it: the bandwidth of the original signal x(k) is spread into the spread spectrum signal s(k). At the same time, the spectral power density (amplitude) of the signal is greatly reduced. However, the total power of the data signal, which is the spectral density S times the bandwidth f, remains the same. When the spread spectrum signal s(k) is transmitted through the radio channel it experiences the influence of noise. At the receiver, the desired signal s(k) is received together with the noise signal n(k). Both signals occupy the same bandwidth, and at this stage a user would not be able to detect the original signal. However, as seen in the previous slide, the received signal s(k) + n(k) is passed through the demodulator, where the same spreading sequence that was used at the transmitter is applied again. The de-spreading has the effect of gathering the power of the 1MHz wideband signal s(k) back into the 8kHz narrowband signal x(k). At the same time, the de-spreading has a spreading effect on the noise signal, which remains spread over the wide bandwidth of 1MHz. Since the power spectral density times the bandwidth remains constant, the ratio between the reconstructed narrowband signal x(k) and the noise n(k) is so large that the original data sequence can be reconstructed.
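The spreading and de-spreading described above can be demonstrated with a toy direct sequence example: each data bit is modulo-2 added (XORed) with every chip of a pseudo-noise code, and the receiver XORs with the same code and majority-votes each chip group, so a few chip errors introduced by the channel do not corrupt the recovered bits. The 7-chip code below is an arbitrary illustrative choice, not a real PN sequence.

```python
def spread(bits, code):
    """Direct-sequence spreading: XOR each data bit with every chip of the
    PN code, multiplying the chip rate (bandwidth) by len(code)."""
    return [b ^ c for b in bits for c in code]

def despread(chips, code):
    """Recover the data bits: XOR with the same code and majority-vote each
    group of len(code) chips, tolerating a few chip errors."""
    m = len(code)
    out = []
    for i in range(0, len(chips), m):
        votes = [chips[i + j] ^ code[j] for j in range(m)]
        out.append(1 if sum(votes) * 2 > m else 0)
    return out

code = [1, 0, 1, 1, 0, 0, 1]   # illustrative 7-chip spreading code
data = [0, 1, 1, 0]
tx = spread(data, code)        # 28 chips for 4 data bits
tx[3] ^= 1                     # flip one chip to mimic channel noise
assert despread(tx, code) == data
```

The majority vote is the discrete analogue of the de-spreading gain: the data bit's energy is gathered back from all chips, while an isolated chip error stays "spread" and is outvoted.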


Character of Spread-Spectrum

· Transmission bandwidth is much larger than the information bandwidth
· The resulting radio frequency bandwidth is determined by a function independent of the information signal
· The ratio of transmitted bandwidth to information bandwidth is the processing gain

G_P = B_t / B_i

As we have seen earlier, the transmission bandwidth of a spread spectrum signal is much larger than that of the information signal; this is the reason why we talk about spread spectrum signals. It is important to note that the resulting frequency bandwidth of the radio signal is independent of the information signal. This is in contrast to, for example, FM modulation (used for stereo radio broadcasting), where the bandwidth of the radio signal is in fact determined by the information signal. An important figure in spread spectrum systems is the processing gain, which is the ratio of transmitted bandwidth to information bandwidth. The processing gain influences the efficiency of spread-spectrum systems as well as the possible capacity of multiple access systems using direct sequence spread spectrum.
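The processing gain for the running 8 kHz / 1 MHz example can be computed directly. A minimal sketch (the function name is illustrative, not from the notes):

```python
# Processing gain G_P = B_t / B_i for the running example: an 8 kHz
# information signal spread to a 1 MHz transmission bandwidth.
import math

def processing_gain(b_t_hz, b_i_hz):
    """Return G_P as a linear ratio and in decibels."""
    gp = b_t_hz / b_i_hz
    return gp, 10 * math.log10(gp)

gp, gp_db = processing_gain(1e6, 8e3)
print(gp, round(gp_db, 1))   # 125.0 21.0
```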


Properties of SS Signals

· Protection against multipath interference
· Interference rejection
· Anti-jamming capability
· Low probability of interception
· Privacy, although limited in real systems
· Multiple access capability
· Frequency reuse in every cell

Spread-spectrum signals exhibit a number of interesting properties that make them an attractive candidate technology for mobile radio systems. The first property is protection against multi-path interference. Multi-path interference occurs when a radio signal arrives at the receiver over a number of different paths from the transmitter. This happens because a radio signal is reflected and scattered at natural and man-made structures such as trees, buildings and vehicles, and therefore takes many paths rather than just one. Interference rejection and anti-jamming work in the same way as the noise rejection we have seen on an earlier slide. The low probability of interception results from the low power spectral density of a SS signal: the power per Hertz is very low because the total power of the signal is spread over a wide bandwidth, which makes it very difficult for a listening device to detect the signal. SS signals provide privacy because a signal can only be reconstructed at the receiver if the spreading code is known. In real systems, the number of spreading codes is limited and usually known, so the privacy is only limited. The property of multiple access capability will be explained below. SS systems allow frequency reuse in every cell due to power control and the fact that not all available spreading codes are used in every cell. This concept will be elaborated upon later on.


Interference Rejection

[Figure: spectral power density S versus frequency f, before and after de-spreading. Before de-spreading the desired signal s is wideband and the interferer i is narrowband; after de-spreading s is gathered into a narrow band while i is spread across the full bandwidth.]


Multiple Access Capability

[Figure: spectral power density S versus frequency f for several users; each user's de-spread narrowband signal rises above the combined wideband signals of the remaining users.]

BER = f(E_b / N_0)

E_b / N_0 = (E_b · B_t) / (N_0 · B_t) = (E_b · G_P · B_i) / (N_0 · B_t) = G_P · S / N,  with S = E_b · B_i and N = N_0 · B_t


Code Division Multiplexing

[Figure: code versus frequency diagram. Users 1, 2, 3, …, N are stacked along the code axis, each occupying the entire frequency band at the same time.]


Types of Spread Spectrum based Multiplexing

· Direct Sequence
· Frequency Hopping
· Time Hopping

The multiplexing and multiple access capabilities of spread-spectrum signals have led to the development of code division multiple access (CDMA) technology. CDMA divides the available radio bandwidth in terms of codes (spreading signals) rather than carrier frequencies, as FDMA systems do, or time slots, as TDMA systems do. If a CDMA system uses more than one carrier frequency, it is a hybrid FDMA/CDMA system. CDMA systems can be divided into three basic classes: direct sequence (DS), frequency hopping (FH) and time hopping (TH) spread spectrum systems. Time hopping is a conceptual possibility and is currently not used much in real-world applications. DS-CDMA is the basis of most CDMA-based cellular mobile systems as well as some wireless LAN systems. Frequency hopping is used in a number of LAN and personal area network systems such as IEEE 802.11 and Bluetooth.


Direct Sequence Spread Spectrum

DS CDMA transmitter: Binary data → Wideband modulator → radio channel. The wideband modulator is driven by a code generator (spreading code) and a carrier generator.

DS CDMA receiver: received signal → Despreading → Data demodulator → Binary data. The despreading stage is driven by a code generator with code synchronisation; the data demodulator is driven by a carrier generator.


Direct Sequence Spread Spectrum

· Sequence of information symbols is modulated with a spreading code
· Spreading code signal consists of a number of code bits called chips, either +1 or -1
· Chip rate usually much higher than information symbol rate, which leads to the processing gain Gp
· Chip rate determines bandwidth of transmitted radio signal

Direct Sequence (DS) spread spectrum is the most common form of spread-spectrum communications. The sequence of information symbols (a symbol can be one, two or more bits, depending on the modulation technique employed) is modulated with a spreading code. The modulation is usually achieved at the bit level by modulo-2 summing (XOR). The spreading signal is a sequence of code bits called chips, either +1 or -1. In order to achieve the band spread, the chip rate of the spreading code is much higher than the information rate. If we come back to our example of the human speaker who creates a digital signal with a data rate of 8 kbit/s, then a spreading code with a chip rate of 1 Mchip/s would produce a processing gain of Gp = 125. The data signal after spreading has the same chip rate as the spreading code, that is 1 Mchip/s in our example. Therefore, the chip rate determines the bandwidth of the transmitted radio signal.
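The modulo-2 spreading and de-spreading described above can be sketched in a few lines. This is a minimal illustration, not the lecturer's implementation; `spread` and `despread` are hypothetical helper names, and the data word and 40-chip code are taken from the example slides that follow (spreading factor 8).

```python
# Minimal direct-sequence spread/de-spread sketch: each data bit is
# repeated once per chip and modulo-2 added (XOR-ed) to the spreading
# code; de-spreading XORs with the same synchronised code and takes a
# majority decision per bit period.

def spread(data_bits, code_chips):
    """Spread data_bits with code_chips (whose length must be a
    multiple of the number of data bits); returns the chip-rate signal."""
    sf = len(code_chips) // len(data_bits)   # spreading factor (chips/bit)
    repeated = (b for b in data_bits for _ in range(sf))
    return [b ^ c for b, c in zip(repeated, code_chips)]

def despread(rx_chips, code_chips, n_bits):
    """Recover data bits: XOR with the code, then majority-decide
    over each group of sf chips."""
    sf = len(code_chips) // n_bits
    raw = [r ^ c for r, c in zip(rx_chips, code_chips)]
    return [1 if sum(raw[i * sf:(i + 1) * sf]) > sf // 2 else 0
            for i in range(n_bits)]

# The 5-bit data word and 40-chip code from the example slides (SF = 8):
data = [1, 0, 1, 1, 0]
code = [1,0,0,1,1,0,0,1, 1,0,0,0,1,1,1,1, 0,1,1,0,0,1,1,0,
        1,0,0,1,1,0,0,1, 1,0,0,0,1,1,1,0]
tx = spread(data, code)
assert despread(tx, code, len(data)) == data
```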


Example of DS Spreading

Baseband information signal:      1        0        1        1        0

Pseudo-random spreading code:     10011001 10001111 01100110 10011001 10001110

Resulting spread-spectrum signal: 01100110 10001111 10011001 01100110 10001110

(each data bit is XOR-ed with the eight chips of its bit period)


Example of DS De-Spreading

Received spread-spectrum signal:          01100110 10001111 10011001 01100110 10001110

Synchronous pseudo-random spreading code: 10011001 10001111 01100110 10011001 10001110

Original baseband information signal:     1        0        1        1        0


De-spreading of Interfering Signal

Interfering spread-spectrum signal:       11011001 10001111 00001110 00011000 10001110

Synchronous pseudo-random spreading code: 10011001 10001111 01100110 10011001 10001110

Resulting interfering signal: remains spread over the full bandwidth; the XOR with a non-matching code does not collapse the chip groups into clean data bits.
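The contrast between de-spreading with the right and the wrong code can be shown numerically. This is an illustrative sketch with made-up chip values, not the slide's exact sequences:

```python
# Sketch of de-spreading with the wrong code: XOR against a code other
# than the one used at the transmitter leaves the chips varying within
# each bit period, so no clean data bit emerges and the interference
# stays spread. Code values here are illustrative only.

def chip_groups(rx_chips, code_chips, sf):
    """XOR the received chips with a code and split into bit periods."""
    raw = [r ^ c for r, c in zip(rx_chips, code_chips)]
    return [raw[i:i + sf] for i in range(0, len(raw), sf)]

own_code   = [1, 0, 0, 1, 1, 0, 0, 1]   # the code the bit was spread with
other_code = [1, 1, 1, 1, 0, 0, 0, 0]   # a different (hypothetical) user's code
rx         = [0, 1, 1, 0, 0, 1, 1, 0]   # data bit 1 spread with own_code

print(chip_groups(rx, own_code, 8))     # [[1, 1, 1, 1, 1, 1, 1, 1]] -> bit 1
print(chip_groups(rx, other_code, 8))   # [[1, 0, 0, 1, 0, 1, 1, 0]] -> still wideband
```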


Time-Frequency Occupation of DS and FH Spread Spectrum

[Figure: frequency versus time. A DS signal occupies the entire bandwidth all the time; an FH signal occupies one narrow frequency channel at a time and hops between channels.]


Frequency Hopping Spread Spectrum

FH transmitter: Data → Baseband modulator → Up converter → radio channel. The up converter is driven by a frequency synthesiser controlled by a code generator.

FH receiver: received signal → Down converter → Data demodulator → Data. The down converter's frequency synthesiser is controlled by a code generator with synchronisation tracking.


Spreading Waveforms (Code)

Properties

­ Good auto-correlation properties

· Synchronisation and Detection
· Self-Interference

­ Good cross-correlation properties

· Multiple Access Interference

Problem: both good auto-correlation and good cross-correlation properties cannot be achieved at the same time (Utopian codes)

The key aspect of every spread-spectrum system is the right selection of spreading codes or waveforms. Two properties are essential for good spreading waveforms: good auto-correlation properties and good cross-correlation properties. The auto-correlation of a signal is a measure of the self-similarity of the signal. When a signal is shifted in time, the auto-correlation gives an indication of how similar the time-shifted version of the signal is to its original version. Ideally, the time-shifted version of the signal should be as different as possible from the original. This is because the receiver has to find the start of the periodic spreading sequence. In order to do so, the spreading code must be very different from its time-shifted version for any shift greater than zero; otherwise the receiver may determine some point in time other than zero as the start of the sequence. The cross-correlation is an indication of how different (or similar) two signals are. If the cross-correlation of two signals, in this case spreading codes, has many large values, the two signals are very similar. Good auto-correlation properties are essential for synchronisation and detection of SS signals at the receiver and to minimise self-interference. Good cross-correlation properties are essential to minimise multiple access interference between the signals of two different transmitters. The information content of a SS signal is reconstructed at the receiver by multiplying the received SS signal with the spreading code used at the transmitter. If another spreading code is very similar to the one used at the transmitter, the interfering signal that used this code will not be completely de-spread by the first code and will cause multiple access interference.
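The correlation definitions can be tried out directly. A minimal sketch, assuming ±1 chip sequences and periodic (cyclic) shifts; the length-7 m-sequence used as the example is standard, but the function name is illustrative:

```python
# Periodic correlation of ±1 spreading sequences, following the
# definition phi_ij(tau) = sum_k c_i(k) * c_j(k - tau).

def correlation(ci, cj, tau):
    """Periodic (cyclic) correlation of two equal-length ±1 sequences."""
    n = len(ci)
    return sum(ci[k] * cj[(k - tau) % n] for k in range(n))

# A length-7 m-sequence (bits 1110100 mapped 1 -> +1, 0 -> -1) has the
# ideal two-valued auto-correlation: N at tau = 0 and -1 elsewhere.
m_seq = [+1, +1, +1, -1, +1, -1, -1]
print([correlation(m_seq, m_seq, t) for t in range(7)])
# [7, -1, -1, -1, -1, -1, -1]
```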


Correlation Properties

Auto-correlation:

    φ_ii(τ) = Σ_{k=0..N-1} c_i(k) · c_i(k - τ)

Cross-correlation:

    φ_ij(τ) = Σ_{k=0..N-1} c_i(k) · c_j(k - τ),  i ≠ j


Useful Spreading Codes

· PN sequences

­ Maximal length or m-sequences ­ Gold sequences ­ Kasami sequences

· Orthogonal codes

­ Walsh-Hadamard codes ­ Variable length orthogonal codes (tree structured codes)

Useful spreading codes are pseudo-noise sequences generated by feedback shift registers, or orthogonal codes. PN sequences generated by feedback shift registers are also called maximal length or m-sequences, according to the length of the code that can be generated by a shift register. M-sequences have excellent auto-correlation properties but poor cross-correlation properties, which makes them suitable for SS systems that do not need multiple access capability. Intensive research, however, has discovered PN sequences that have better cross-correlation properties than m-sequences. These are called Gold or Kasami sequences after their inventors. The second class of spreading codes are orthogonal codes. It is possible to create codes that have perfect cross-correlation properties. The term orthogonal with respect to spreading codes can be explained as follows. A spreading code can be viewed as a generalised vector, and the cross-correlation function can then be interpreted as the inner or scalar product of two generalised vectors. Two vectors are said to be orthogonal if their scalar product is zero, so two orthogonal spreading codes have zero cross-correlation. This explains why orthogonal codes have perfect cross-correlation properties. However, orthogonal codes, such as Walsh-Hadamard codes, have poor auto-correlation properties. The use of those codes alone would minimise multiple access interference but would make it very difficult for a receiver to synchronise to the SS signal. It would also lead to much self-interference due to multi-path propagation.
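The orthogonality of Walsh-Hadamard codes is easy to verify with the standard Sylvester construction. A minimal sketch (the function name is illustrative):

```python
# Sylvester construction of Walsh-Hadamard codes, H_2N = [[H_N, H_N],
# [H_N, -H_N]], plus a check that distinct rows are orthogonal
# (zero inner product, i.e. zero cross-correlation at zero shift).

def hadamard(n):
    """Return the n x n Hadamard matrix as nested lists (n a power of 2)."""
    h = [[1]]
    while len(h) < n:
        h = ([row + row for row in h] +
             [row + [-x for x in row] for row in h])
    return h

h4 = hadamard(4)
assert all(sum(a * b for a, b in zip(h4[i], h4[j])) == 0
           for i in range(4) for j in range(4) if i != j)
print(h4)  # [[1, 1, 1, 1], [1, -1, 1, -1], [1, 1, -1, -1], [1, -1, -1, 1]]
```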


PN sequences

· PN sequences are generated with a linear feedback shift register - maximal length or m-sequences

[Diagram: shift register with stages 1, 2, 3, …, N and a linear feedback path from selected stages back to the input.]

· m-sequences have excellent auto-correlation properties but poor cross-correlation properties
· Improved PN sequences with better cross-correlation properties are Gold and Kasami sequences
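The shift-register generation of an m-sequence can be sketched as follows. This assumes a Fibonacci-style register and the tap convention shown in the comments; the specific taps and polynomial are a standard textbook example, not taken from the slides:

```python
# Sketch of a Fibonacci-style linear feedback shift register. Taps [3, 1]
# (1-based stage numbers, one common convention for the primitive
# polynomial x^3 + x + 1) give a maximal-length sequence of period
# 2^3 - 1 = 7.

def lfsr(taps, state, length):
    """Generate `length` output bits; the feedback bit is the XOR of the
    tapped stages, shifted in at the front of the register."""
    out = []
    for _ in range(length):
        out.append(state[-1])            # output bit from the last stage
        fb = 0
        for t in taps:
            fb ^= state[t - 1]
        state = [fb] + state[:-1]        # shift, feedback at the front
    return out

seq = lfsr([3, 1], [1, 0, 0], 14)
assert seq[:7] == seq[7:]                # period 7: the pattern repeats
assert sum(seq[:7]) == 4                 # m-sequence: 2^(N-1) ones per period
```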


Orthogonal codes

· Perfect cross-correlation but very poor auto-correlation properties
· Walsh-Hadamard codes
· Property:

    Σ_{k=0..N-1} c_i(k) · c_j(k) = 0,  i ≠ j

· Construction:

    H_1 = [1]

    H_2 = | 1  1 |
          | 1 -1 |

    H_4 = | 1  1  1  1 |
          | 1 -1  1 -1 |
          | 1  1 -1 -1 |
          | 1 -1 -1  1 |

    H_2N = | H_N  H_N |
           | H_N -H_N |


Variable-Length Orthogonal Codes

SF = 1:  C1(1) = {1}
SF = 2:  C2(1) = {1, 1},  C2(2) = {1, -1}
SF = 4:  C4(1) = {1, 1, 1, 1},  C4(2) = {1, 1, -1, -1},  C4(3) = {1, -1, 1, -1},  C4(4) = {1, -1, -1, 1}
SF = 8:  C8(1), C8(2), …, C8(8)

(tree structure: each code of spreading factor 2·SF is derived from a code of spreading factor SF)


Power Control

· Power control is vital for proper operation of DS Spread Spectrum due to near-far effect
· Two main power control algorithms

­ Open loop power control ­ Closed loop power control

As indicated earlier, power control is one of the most crucial aspects of DSSS design. It is used to overcome the near-far problem, which has a profound effect on multiple access and interference. Power can be considered the common resource in a DSSS system: the system capacity is limited by the combined total power level, from all stations in a cell, received at the base station. If the power is shared carefully, such that all mobile stations are received with the same minimum power level at the base station, capacity is maximised. However, if one station is received with a much stronger power level at the base station, it steals capacity from the other stations. The two main power control algorithms used in DSSS are open-loop and closed-loop power control.


Near-Far Problem

[Figure: MS1 far from the base station BTS, MS2 close to it. At the antenna, MS2's signal is received much more strongly than MS1's; after de-spreading, MS1's narrowband signal barely rises above MS2's residual wideband signal.]

The figure above illustrates the near-far problem. Consider two mobile transmitters, MS1 and MS2, where MS1 is much farther away from the base station (BTS) than MS2. If both MSs transmitted with the same power, MS2's radio signal would be received at the base station much more strongly than MS1's signal (see figure above). When the receiver de-spreads MS1's signal, gathering the power that was distributed across a much larger bandwidth into a narrower band, the power spectral density of MS1's narrowband signal may not be much larger than the power density of MS2's signal, even though the latter is still spread across a wide bandwidth. This means that the ratio of MS1's signal power to MS2's interfering power is small. A signal is deemed of sufficient quality only if this signal-to-interference ratio exceeds a certain threshold, which would not be the case in the example shown above. Referring back to the analogy of the total power at the base station, MS2's signal consumes much more of the available power than necessary and thus steals capacity and signal quality from other MSs.
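The effect can be put into numbers. This is a hedged sketch under the usual simplification that de-spreading improves the wanted signal's signal-to-interference ratio by the processing gain; the function name and example dB values are illustrative:

```python
# Numeric sketch of the near-far effect: de-spreading raises the wanted
# signal's SIR by the processing gain G_P (in dB: +10 log10 G_P). If the
# interferer is received far more strongly, the post-despreading SIR can
# still be too low, which is what power control prevents.
import math

def sir_after_despreading_db(sir_before_db, gp):
    """SIR after de-spreading, given the SIR at the antenna and G_P."""
    return sir_before_db + 10 * math.log10(gp)

gp = 125   # 8 kHz voice signal spread to 1 MHz, as in the running example
print(round(sir_after_despreading_db(-10, gp), 1))  # 11.0 dB: likely usable
print(round(sir_after_despreading_db(-30, gp), 1))  # -9.0 dB: lost without power control
```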


Space Division Multiplexing

· Spatial division of simultaneously transmitted signals by use of directional antennae
· Used mainly in satellite communications
· More recently proposed for fixed wireless communications (wireless local loop)


Concept of Space Division Multiplexing

