Packet Switching Networks: Traffic Management

Traffic Management

  • The main objectives of traffic management are the efficient use of network resources and the delivery of QoS.
  • Traffic management is carried out at three levels: the packet level, the flow level, and the flow-aggregate level.
Traffic Management at Packet Level

o Queueing & scheduling at switches, routers and multiplexers.

[Figure: the path of a packet modeled as a sequence of queueing systems]

  • The path traversed by a packet through a network can be modeled as a sequence of queueing systems, as shown in the figure above.
  • A packet traversing the network encounters delay and possible loss at the various multiplexing points.
  • End-to-end performance is the sum of the delays experienced at the individual systems; in particular, the average end-to-end delay is the sum of the individual average delays.
  • To meet the QoS requirements of multiple services, a queueing system must implement strategies for controlling the transmission bit rates.

The different strategies for queue scheduling are:

1. FIFO QUEUEING

2. PRIORITY QUEUEING

3. FAIR QUEUEING

4. WEIGHTED FAIR QUEUEING

1) FIFO QUEUEING

· Transmission discipline: First-In, First-Out.

· All packets are transmitted in the order of their arrival.

· Buffering discipline: arriving packets are discarded if the buffer is full.

· FIFO cannot provide differential QoS to different packet flows.

· It is difficult to determine the performance delivered.

· The finite buffer determines the maximum possible queueing delay.

· The buffer size determines the loss probability, which also depends on the arrival and packet-length statistics.

FIFO Queueing with Discard Priority

FIFO queue management can be modified to provide different packet-loss performance to different classes of traffic.

· Figure 7.42(b) below shows an example with two classes of traffic.

· When the number of packets in the buffer reaches a certain threshold, arrivals of lower access priority (class 2) are no longer admitted into the system.

· Arrivals of higher access priority (class 1) are admitted as long as the buffer is not full. A sketch of this admission rule follows below.

[Figure 7.42(b): FIFO queueing with discard priority for two traffic classes]
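A minimal Python sketch of the discard-priority admission rule follows; the buffer size and threshold values are illustrative assumptions, not values from the text.

```python
# Sketch of FIFO admission with a discard threshold for two traffic classes.
from collections import deque

BUFFER_SIZE = 10   # total buffer capacity in packets (assumed)
THRESHOLD = 7      # occupancy beyond which class-2 arrivals are rejected (assumed)

queue = deque()

def admit(packet, access_priority):
    """Apply the discard-priority rule; return True if the packet is admitted."""
    if access_priority == 2 and len(queue) >= THRESHOLD:
        return False          # class 2 is blocked once the threshold is reached
    if len(queue) >= BUFFER_SIZE:
        return False          # class 1 is blocked only when the buffer is full
    queue.append(packet)      # admitted packets join the single FIFO queue
    return True
```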

2) Head of Line (HOL) Priority Queueing

[Figure: HOL priority queueing with a separate buffer per priority class]

· This second queue-scheduling approach defines a number of priority classes.

· A separate buffer is maintained for each priority class.

· The high-priority queue is serviced until it is empty, so high-priority packets experience lower waiting times.

· Buffers can be dimensioned for different loss probabilities.

· HOL priority queueing provides differential QoS.

· However, a surge of high-priority traffic can hog all of the bandwidth and starve the lower-priority classes.

· Some isolation between classes is therefore needed. A sketch of the scheduling rule follows below.
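As a rough illustration, HOL priority scheduling can be sketched in Python as follows; the number of classes is an assumption.

```python
# Sketch of HOL priority scheduling: one buffer per priority class, and the
# scheduler always serves the highest-priority nonempty buffer first.
from collections import deque

NUM_CLASSES = 2                                  # assumed number of classes
buffers = [deque() for _ in range(NUM_CLASSES)]  # index 0 = highest priority

def next_packet():
    """Return the next packet to transmit, or None if all buffers are empty."""
    for buf in buffers:        # scan from highest to lowest priority
        if buf:
            return buf.popleft()
    return None    # note: lower classes starve while higher ones stay busy
```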

Sorting Packets According to Priority Tags / Earliest Due Date Scheduling

[Figure: packets sorted by priority tag and served in tag order]

· This is the third approach to queue scheduling.

· Packets are sorted according to priority tags that reflect the urgency with which each packet needs to be transmitted.

· A priority tag, consisting of the priority class followed by the packet's arrival time, is added to each packet.

· Packets in the queue are sorted by tag and served as in an HOL priority system.

· Packets are queued in order of their "due date".

· Packets that require low delay get earlier due dates, while packets without delay requirements get indefinite or very long due dates. A sketch follows below.
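A minimal sketch of earliest-due-date scheduling using Python's heapq; the due-date assignment rule is an assumption for illustration.

```python
# Sketch of earliest-due-date (EDD) scheduling with a binary heap.
import heapq
import itertools

_seq = itertools.count()   # tie-breaker so packets with equal due dates stay FIFO
edd_queue = []

def enqueue(packet, arrival_time, delay_budget):
    """Tag the packet with a due date: arrival time plus its allowed delay."""
    due_date = arrival_time + delay_budget   # low-delay traffic gets earlier due dates
    heapq.heappush(edd_queue, (due_date, next(_seq), packet))

def next_packet():
    """Serve the packet with the earliest due date, as in an HOL system."""
    return heapq.heappop(edd_queue)[2] if edd_queue else None
```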

3) Fair Queueing / Generalized Processor Sharing
  • Fair queueing provides equal access to transmission bandwidth.
  • Each user flow has its own logical queue which prevents hogging and allows differential loss probabilities
  • C bits/sec is allocated equally among non-empty queues.
  • Each nonempty queue then transmits at C / n bits/second, where n is the number of nonempty queues and C is the transmission bandwidth.
  • Fairness: fair queueing protects well-behaved sources from misbehaving sources.
  • Aggregation:

o Per-flow buffers protect flows from misbehaving flows

o Full aggregation provides no protection

o Aggregation into classes provides intermediate protection

  • Drop priorities:

o Drop packets from buffer according to priorities

o Maximizes network utilization & application QoS

o Examples: layered video, policing at network edge.

[Figure: fluid-flow vs. packet-by-packet fair queueing for two buffers, each holding one L-bit packet at t = 0]

· The idealized system assumes fluid flow from the queues, with the transmission bandwidth divided equally among all nonempty buffers.

· The figure assumes that buffer 1 and buffer 2 each hold a single L-bit packet at t = 0 and that no subsequent packets arrive.

· Assume a capacity of C = L bits/second, i.e., 1 packet/second.

· The fluid-flow system transmits each packet at a rate of 1/2 packet/second and completes the transmission of both packets exactly at time t = 2 seconds.

· A packet-by-packet fair queueing system transmits the packet from buffer 1 first and then the packet from buffer 2, so the packet completion times are 1 and 2 seconds.

[Figure 7.48: fluid-flow vs. packet-by-packet fair queueing with variable-length packets]

Figure 7.48 above illustrates the difference between ideal (fluid-flow) and packet-by-packet fair queueing for packets of variable length.

  • Fluid-flow fair queueing cannot be implemented directly when packets have variable lengths.
  • If the different user buffers are serviced one packet at a time in round-robin fashion, we do not obtain a fair allocation of transmission bandwidth.
  • Each packet is therefore assigned a finish tag, and the packet with the smallest finish tag is served first. The finish tag is computed as follows.
  • Finish tags serve as priorities in the packet-by-packet system.

Consider Bit-by-Bit Fair Queueing

  • Assume n flows, n queues
  • 1 round = 1 cycle serving all n queues
  • If each queue gets 1 bit per cycle, then 1 round is the number of opportunities that each buffer has had to transmit a bit.
  • Round number = number of cycles of service that have been completed

[Figure: the round number R(t) in bit-by-bit fair queueing]

  • If packet arrives to idle queue:

Finishing time = round number + packet size in bits

  • If packet arrives to active queue:

Finishing time = finishing time of last packet in queue + packet size

Computing the Finishing Time

  •  F(i,k,t) = finish time of kth packet that arrives at time t to flow i
  •  P(i,k,t) = size of kth packet that arrives at time t to flow i
  •  R(t) = round number at time t
  • Fair Queueing:

F(i,k,t) = max{F(i,k-1,t), R(t)} + P(i,k,t)
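A minimal sketch of this computation in Python; it assumes the round number R(t) is tracked elsewhere and supplied by the caller.

```python
# Sketch of fair-queueing finish tags: F(i,k,t) = max{F(i,k-1,t), R(t)} + P(i,k,t).
last_finish = {}   # flow id -> finish tag of that flow's most recent packet

def finish_tag(flow, packet_bits, round_number):
    """Compute the finish tag of a packet of `packet_bits` arriving at round R(t)."""
    f = max(last_finish.get(flow, 0.0), round_number) + packet_bits
    last_finish[flow] = f
    return f

# The packet-by-packet scheduler always transmits the queued packet
# with the smallest finish tag.
```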

4) Weighted Fair Queueing (WFQ)

· WFQ addresses the situation in which different users have different requirements.

· Each user flow has its own buffer, and each flow is assigned a weight.

· The weight determines the flow's relative share of the bandwidth.

· If buffer 1 has weight 1 and buffer 2 has weight 3, then when both buffers are nonempty, buffer 1 will receive 1/(1+3)=1/4 of the bandwidth and buffer 2 will receive ¾ of the bandwidth.

[Figure: weighted fair queueing with buffer 1 (weight 1) and buffer 2 (weight 3)]

In the figure above,

· In the fluid-flow system, the transmission of the packet from buffer 2 is completed at time t = 4/3 seconds, and the packet from buffer 1 is completed at t = 2 seconds.

· In each round, buffer 1 receives 1 bit and buffer 2 receives 3 bits.

· Packet-by-packet weighted fair queueing computes its finish tags as F(i,k,t) = max{F(i,k-1,t), R(t)} + P(i,k,t)/wi.

· The figure also shows the completion times for packet-by-packet weighted fair queueing.

· The finish tag for buffer 1 is F(1,1) = R(0) + 1/1 = 1, and the finish tag for buffer 2 is F(2,1) = R(0) + 1/3 = 1/3.

· Therefore the packet from buffer 2 is served first, followed by the packet from buffer 1. A sketch reproducing this computation follows below.
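The weighted finish tag, and the example above, can be sketched as follows in Python.

```python
# Sketch of WFQ finish tags: F(i,k,t) = max{F(i,k-1,t), R(t)} + P(i,k,t)/wi.
last_finish = {}

def wfq_finish_tag(flow, packet_size, round_number, weight):
    f = max(last_finish.get(flow, 0.0), round_number) + packet_size / weight
    last_finish[flow] = f
    return f

# Reproducing the example: one unit-size packet in each buffer at t = 0, R(0) = 0.
print(wfq_finish_tag(1, 1, 0, 1))   # F(1,1) = 1.0
print(wfq_finish_tag(2, 1, 0, 3))   # F(2,1) = 1/3, so buffer 2 is served first
```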

Buffer Management: Random Early Detection (RED)

· RED is an approach to preventing unfair buffer hogging: it detects congestion when a buffer begins to reach a certain level and notifies sources to reduce the rate at which they send packets.

· TCP sources reduce their input rate in response to network congestion.

· RED is a buffer management technique that attempts to provide equal access to a FIFO system by randomly dropping arriving packets before the buffer overflows.

· A dropped packet provides feedback information to the source, informing it to reduce its transmission rate.

· Early drop: packets are discarded before the buffer is full.

· Random dropping causes some sources to reduce their rates before others, producing a gradual reduction in the aggregate input rate.

· Two thresholds, Minth and Maxth, are defined.

· The RED algorithm uses the average queue length. When the average queue length is below Minth, RED does not drop any arriving packets.

· When the average queue length is between Minth and Maxth, RED drops an arriving packet with a probability that increases as the average queue length increases.

· The packet-drop probability increases linearly with the average queue length between the two thresholds.

· RED improves the performance of cooperating TCP sources.

· RED increases the loss probability of misbehaving sources.

Algorithm:

  • Maintain a running average of the queue length.
  • If Qavg < Minth, do nothing.
  • If Qavg > Maxth, drop the packet.
  • If Qavg is in between, drop the packet with a probability that grows with the average queue length (see the sketch below).
  • Flows that send more packets are more likely to have packets dropped.
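A minimal sketch of the RED drop decision in Python; the thresholds, maximum drop probability, and averaging weight are assumed values, not prescribed by the text.

```python
# Sketch of RED: maintain an average queue length and drop arrivals with a
# probability that rises linearly between the two thresholds.
import random

MIN_TH, MAX_TH = 5.0, 15.0   # Minth and Maxth in packets (assumed)
MAX_P = 0.1                  # drop probability reached at MAX_TH (assumed)
W = 0.002                    # weight of the exponential running average (assumed)

q_avg = 0.0

def should_drop(current_queue_len):
    """Update the running average, then apply the three-region drop rule."""
    global q_avg
    q_avg = (1 - W) * q_avg + W * current_queue_len
    if q_avg < MIN_TH:
        return False                                   # no congestion: admit
    if q_avg >= MAX_TH:
        return True                                    # heavy congestion: drop
    p = MAX_P * (q_avg - MIN_TH) / (MAX_TH - MIN_TH)   # linear ramp in between
    return random.random() < p
```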

[Figure: RED packet-drop probability as a function of the average queue length]

Traffic Management at the Flow Level

· Traffic management at the flow level is concerned with managing individual traffic flows and allocating resources to ensure that the QoS requested by the user (e.g., delay, jitter, loss) is satisfied.

· It operates on time scales of milliseconds to seconds.

· Its purpose is to control the flows of traffic and maintain performance even in the presence of traffic overload.

· The process of managing traffic flows in order to control congestion is called congestion control.

· Congestion occurs when a surge of traffic overloads network resources.

[Figure: congestion caused by a surge of traffic converging on a network node]

Approaches to Congestion Control:

• Preventive Approaches: Scheduling & Reservations

• Reactive Approaches: Detect & Throttle/Discard

Ideal effect of congestion control:

Resources used efficiently up to capacity available

[Figure: ideal effect of congestion control — throughput follows the offered load up to the available capacity]

Open-Loop Control

· It prevents congestion from occurring.

· It does not depend on feedback information to react to congestion.

· Network performance is guaranteed to all traffic flows that have been admitted into the network

· It depends on three key mechanisms:

o Admission Control

o Policing

o Traffic Shaping

Admission Control

· Admission control is a network function that computes the resource requirements (bandwidth and buffers) of a new flow and determines whether the resources along the path to be followed are available.

· Before sending packets, the source must obtain permission from admission control.

· Admission control decides whether to accept the flow or not.

· A flow is accepted if the QoS of the new flow can be met without violating the QoS of the existing flows.

· QoS can be expressed in terms of maximum delay, loss probability, delay variance, or other performance measures.

· QoS requirements:

o Peak, Avg., Min Bit rate

o Maximum burst size

o Delay, Loss requirement

· Network computes resources needed

o “Effective” bandwidth

· If flow accepted, network allocates resources to ensure QoS delivered as long as source conforms to contract

[Figure: admission control of a new flow]

Policing

· Network monitors traffic flows continuously to ensure they meet their traffic contract.

· The process of monitoring and enforcing the traffic flow is called policing.

· When a packet violates the contract, network can discard or tag the packet giving it lower priority

· If congestion occurs, tagged packets are discarded first

· Leaky Bucket Algorithm is the most commonly used policing mechanism

o Bucket has specified leak rate for average contracted rate

o Bucket has specified depth to accommodate variations in arrival rate

o Arriving packet is conforming if it does not result in overflow

Leaky Bucket algorithm can be used to police arrival rate of a packet stream

clip_image020

Let X be the bucket content at the arrival of the last conforming packet, and let LCT be that packet's arrival time. The bucket drains at a constant rate, so the depletion of the bucket since the last conforming arrival equals the elapsed time ta(k) − LCT.

Leaky Bucket Algorithm

[Figure: flowchart of the leaky bucket policing algorithm]

· The figure above shows the leaky bucket algorithm that can be used to police a traffic flow.

· At the arrival of the first packet, the content of the bucket is set to zero and the last conformance time (LCT) is set to the arrival time of the first packet.

· The depth of the bucket is L + I, where I is the amount added per conforming packet and L accommodates the burstiness of the traffic.

· At the arrival of the kth packet, the auxiliary variable X′ records the difference between the bucket content at the arrival of the last conforming packet and the amount drained during the interarrival time since that packet: X′ = X − (ta(k) − LCT).

· If the auxiliary variable X′ is greater than L, the packet is considered nonconforming; otherwise the packet is conforming, and the bucket content and the arrival time of the packet are then updated (X = X′ + I, LCT = ta(k)). A sketch of this algorithm follows below.
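A minimal sketch of this policing algorithm in Python, with X, X′, LCT, I, and L as defined above.

```python
# Sketch of the leaky bucket policing algorithm from the figure above.
class LeakyBucketPolicer:
    def __init__(self, increment, limit):
        self.I = increment   # amount added to the bucket per conforming packet
        self.L = limit       # tolerance for burstiness (bucket depth is L + I)
        self.X = 0.0         # bucket content at the last conforming arrival
        self.LCT = None      # last conformance time

    def conforms(self, ta):
        """Return True if the packet arriving at time `ta` is conforming."""
        if self.LCT is None:              # first packet: bucket starts at zero
            self.X, self.LCT = self.I, ta
            return True
        x_prime = max(0.0, self.X - (ta - self.LCT))   # drain since the LCT
        if x_prime > self.L:
            return False                  # nonconforming: state is unchanged
        self.X = x_prime + self.I         # conforming: update content and LCT
        self.LCT = ta
        return True
```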

Leaky Bucket Example: The operation of the leaky bucket algorithm is illustrated in the figure below.

· Here the value of I is 4 packet times and the value of L is 6 packet times, so the bucket depth L + I is 10.

· The arrival of the first packet increases the bucket content by 4 (packet times).

· At the second arrival the content has drained down to 3, but 4 more are added, resulting in a total of 7.

· The fifth packet is declared nonconforming, since it would increase the content to 11, which would exceed L + I (10).

· Packets 7, 8, 9, and 10 arrive back to back after the bucket becomes empty. Packets 7, 8, and 9 are conforming, and the last one is nonconforming.

· Nonconforming packets are not allowed into the bucket and hence are not included in the calculations.

[Figure: leaky bucket example with I = 4 and L = 6 packet times]

Dual Leaky Bucket

· A dual leaky bucket is used to police multiple traffic parameters such as PCR (peak rate), SCR (sustainable rate), and MBS (maximum burst size):

· Traffic is first checked for SCR at first leaky bucket.

· Nonconforming packets at first bucket are dropped or tagged.

· Conforming (untagged) packets from first bucket are then checked for PCR at second bucket.

· Nonconforming packets at second bucket are dropped or tagged.

[Figure: dual leaky bucket policer]
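A dual leaky bucket can be sketched by placing two such policers in series; the parameter values below are illustrative assumptions, and LeakyBucketPolicer is the sketch given earlier.

```python
# Sketch of a dual leaky bucket: the first bucket polices the sustainable
# rate; untagged conforming packets are then checked against the peak rate.
scr_bucket = LeakyBucketPolicer(increment=4, limit=6)   # sustainable-rate check
pcr_bucket = LeakyBucketPolicer(increment=1, limit=2)   # peak-rate check

def police(ta):
    """Return the disposition of a packet arriving at time `ta`."""
    if not scr_bucket.conforms(ta):
        return "dropped or tagged at the first (SCR) bucket"
    if not pcr_bucket.conforms(ta):
        return "dropped or tagged at the second (PCR) bucket"
    return "conforming"
```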

Traffic Shaping

[Figure: traffic shaping at the egress of one network and policing at the ingress of the next]

· Networks police the incoming traffic flow

· Traffic shaping is used to ensure that a packet stream conforms to specific parameters

· Networks can shape their traffic prior to passing it to another network

· In the above figure, the traffic shaping device is located at the node just before the traffic flow leaves a network, while the policing device is located at the node that receives the traffic flow from another network.

Leaky Bucket Traffic Shaper

[Figure: leaky bucket traffic shaper — buffered packets served at a constant rate]

  • Incoming packets are first stored in a buffer.
  • Packets are served periodically so that the stream of packets at the output is smooth.
  • A traffic shaping device needs to delay packets that arrive earlier than their scheduled departures, and it therefore requires a buffer to store these packets.
  • The leaky bucket traffic shaper is too restrictive, since the output rate is constant whenever the buffer is nonempty.

Token Bucket Traffic Shaper

· The token bucket is a simple extension of the leaky bucket traffic shaper.

· Tokens are generated periodically at a constant rate and are stored in the token bucket.

· The token rate regulates the transfer of packets.

· If the token bucket is full, arriving tokens are discarded.

· A packet can be taken out of the buffer only if a token can be drawn from the token bucket.

· If sufficient tokens are available, packets enter the network without delay.

· If the token bucket is empty, arriving packets have to wait in the packet buffer.

· The bucket size K determines how much burstiness is allowed into the network. A sketch follows the figure below.

[Figure: token bucket traffic shaper]
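A minimal sketch of a token bucket shaper in Python; the token rate and bucket size K are assumed values.

```python
# Sketch of a token bucket shaper: tokens accumulate at a constant rate up to
# the bucket size K, and a buffered packet departs only if a token is available.
from collections import deque

TOKEN_RATE = 1.0        # tokens generated per second (assumed)
K = 5.0                 # bucket size: burstiness allowed into the network (assumed)

tokens = 0.0
packet_buffer = deque()
last_update = 0.0

def release_packets(now):
    """Refill tokens for the elapsed time, then release packets while tokens last."""
    global tokens, last_update
    tokens = min(K, tokens + TOKEN_RATE * (now - last_update))  # excess tokens discarded
    last_update = now
    released = []
    while packet_buffer and tokens >= 1.0:
        released.append(packet_buffer.popleft())   # packet enters the network
        tokens -= 1.0                              # one token consumed per packet
    return released                                # remaining packets keep waiting
```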

Closed-Loop Flow Control

· Congestion control

o Feedback information is used to regulate the flow from sources into the network, based on buffer content, link utilization, etc.

o Examples: TCP at transport layer; congestion control at ATM level

· Feedback information may be sent end-to-end or hop-by-hop.

End-to-end closed loop control

· Feedback information about the state of the network is propagated back to the source, which regulates its packet flow rate.

· Feedback information may be forwarded directly by a node that detects congestion, or it may be forwarded to the destination first, which then relays the information back to the source.

· The transmission of feedback information introduces propagation delay, so the information may no longer be accurate by the time the source receives it.

Hop-by-hop control

· Hop-by-hop control reacts faster than its end-to-end counterpart because of the shorter propagation delay.

· The state of the network is propagated to the upstream node, as shown in the figure below.

· When a node detects congestion, it tells its upstream neighbor to slow down its transmission rate.

· The backpressure created from one downstream node to the next upstream node may propagate all the way back to the source.

[Figure: hop-by-hop backpressure propagating upstream toward the source]

Implicit vs. Explicit Feedback: the feedback information can be implicit or explicit.

Explicit Feedback

  • The node detecting congestion initiates an explicit message to notify the source about the congestion in the network.
  • The explicit message can be sent as a separate packet (often called a choke packet) or piggybacked on a data packet.
  • The explicit message may carry as little as a single bit of information, or it may contain a richer description of the congestion state.

Implicit Feedback

  • With implicit feedback, no explicit messages are sent between the nodes.
  • Instead, congestion is inferred using timeouts based on missing acknowledgments from the destination.
  • TCP congestion control is one example: it regulates the transmission rate using the implicit feedback derived from missing acknowledgments.
Traffic Management at the Flow-Aggregate Level / Traffic Engineering
  • This level is concerned with routing aggregate traffic flows across the network for efficient utilization of resources and meeting of service levels.
  • Traffic management at the flow-aggregate level is called "traffic engineering".
  • The goal is a distribution of flows in the network that achieves efficient utilization of resources (bandwidth).
  • A shortest-path algorithm alone is not enough to route a given flow:
  • it does not take into account the requirements of a flow, e.g., its bandwidth requirement;
  • it does not take into account the interplay between different flows.
  • Traffic engineering must take into account the aggregate demand from all flows.
  • Refer to Figure 7.63 and pages 560-561 for more information.

Why Internetworking?

· To build a “network of networks” or internet

o operating over multiple, coexisting, different network technologies

o providing ubiquitous (universal) connectivity through IP packet transfer

o achieving huge economies of scale

· To provide universal communication services

o independent of underlying network technologies

o providing common interface to user applications

· To provide distributed applications

o Rapid deployment of new applications

  • Email, WWW, Peer-to-peer

o Application independent of network technologies

  • New networks can be introduced

TCP/IP Architecture

[Figure: the TCP/IP protocol suite]

  • The TCP/IP protocol suite usually refers not only to the two most well-known protocols called TCP and IP but also to other related protocols such as UDP, ICMP, HTTP, TELNET and FTP.
  • Basic structure of TCP/IP protocol suite is shown in above figure.
  • The protocol data unit (PDU) exchanged between peer TCP entities is called a segment.
  • The PDU exchanged between peer UDP entities is called a datagram.
  • The PDU exchanged between peer IP entities is called a packet.

[Figure: encapsulation of an HTTP GET message in a TCP segment, an IP packet, and an Ethernet frame]

  • In the figure above, an HTTP GET command is passed to the TCP layer, which encapsulates the message into a TCP segment.
  • The segment header contains an ephemeral port number for the client process and the well-known port 80 for the HTTP server process.
  • The TCP segment is passed to the IP layer, where it is encapsulated in an IP packet.
  • The IP packet contains the source and destination network addresses.
  • The IP packet is then passed to the network interface and encapsulated into the PDU of the underlying network.
  • In this example the network interface encapsulates the IP packet into an Ethernet frame, which contains the physical addresses identifying the physical endpoints of the Ethernet sender and receiver. A sketch of this encapsulation follows below.
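As a rough illustration of this encapsulation chain, the following sketch uses the scapy library (assumed to be installed); all addresses and port numbers are made up for the example.

```python
# Sketch of the encapsulation: HTTP GET -> TCP segment -> IP packet
# -> Ethernet frame, built with scapy. All addresses/ports are illustrative.
from scapy.all import Ether, IP, TCP

frame = (
    Ether(src="00:90:27:96:68:07", dst="00:e0:f9:23:a8:20")  # physical (MAC) addresses
    / IP(src="128.100.10.13", dst="128.100.10.21")           # network addresses
    / TCP(sport=49152, dport=80)     # ephemeral client port, well-known HTTP port
    / b"GET / HTTP/1.1\r\nHost: example.com\r\n\r\n"         # application message
)
print(bytes(frame).hex())            # the frame as it would appear on the wire
```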

[Figure: IP packets traversing a sequence of routers from Host A to Host B]

· IP packets transfer information across the Internet.

· Host A (IP) → router → router → … → router → Host B (IP)

· The IP layer in each router determines the next hop (router).

· Network interfaces transfer IP packets across networks

· Internet Names

· Each host has a unique name

o Independent of physical location

o Domain names facilitate memorization by humans

· Host Name

o Name given to host computer

· User Name

o Name assigned to user

Internet Addresses

  • Each host has a globally unique, logical 32-bit IP address.
  • There is a separate address for each physical connection to a network.
  • Routing decisions are based on the destination IP address.
  • An IP address has two parts:
    • the netid and the hostid;
    • the netid is globally unique;
    • the netid facilitates routing.
  • Dotted-decimal notation is used for representation:

Example: int1.int2.int3.int4, e.g., 128.100.10.13

DNS (Domain Name System) resolves a host name to an IP address. (A conversion sketch follows below.)
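A small sketch of the dotted-decimal representation: each of the four bytes of the 32-bit address is printed as a decimal integer.

```python
# Sketch of dotted-decimal notation for a 32-bit IPv4 address.
def to_dotted_decimal(addr32):
    return ".".join(str((addr32 >> shift) & 0xFF) for shift in (24, 16, 8, 0))

print(to_dotted_decimal(0x80640A0D))   # -> "128.100.10.13", the example above
```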

Physical Addresses

· LANs (and other networks) assign physical addresses to the physical attachment to the network

· The network uses its own addresses to transfer packets or frames to the appropriate destination.

· IP address needs to be resolved to physical address at each IP network interface

· Example: Ethernet uses 48-bit addresses

o Each Ethernet network interface card (NIC) has globally unique Medium Access Control (MAC) or physical address

o First 24 bits identify NIC manufacturer; second 24 bits are serial number

o Example: 00:90:27:96:68:07 (12 hexadecimal digits)
