...

Introduction to Congestion Control

In this lesson, we'll look at congestion control!

What Is Congestion?

When more packets are sent through the network than it has bandwidth for, some of them start getting dropped and others get delayed. This leads to an overall drop in performance and is called congestion.
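The idea can be made concrete with a toy model (not from the lesson itself): treat a congested link as a finite queue with a fixed forwarding rate. When packets arrive faster than the link can forward them, the queue fills up, so later packets wait longer, and once the buffer is full, new arrivals are dropped. The function name and parameters below are illustrative.

```python
def simulate(arrival_per_tick, service_per_tick, buffer_size, ticks):
    """Toy bottleneck link: returns (final queue length, total drops)."""
    queue = 0
    dropped = 0
    for _ in range(ticks):
        # New packets arrive; anything beyond the buffer is dropped.
        space = buffer_size - queue
        accepted = min(arrival_per_tick, space)
        dropped += arrival_per_tick - accepted
        queue += accepted
        # The link can only forward `service_per_tick` packets per tick.
        queue -= min(service_per_tick, queue)
    return queue, dropped

# Offered load (5 packets/tick) exceeds capacity (3 packets/tick):
# the buffer fills up (queued packets = delay) and drops begin.
print(simulate(arrival_per_tick=5, service_per_tick=3, buffer_size=20, ticks=50))
```

The takeaway matches the definition above: both symptoms of congestion (delay and loss) appear as soon as the sending rate exceeds what the network can carry.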

This is analogous to vehicle traffic congestion when too many vehicles drive on the same road at the same time. This slows the overall traffic down.

How Do We Fix It?

Congestion physically occurs at the network layer (i.e., in routers); however, it's mainly caused by the transport layer sending too much data at once. That means it has to be dealt with, or 'controlled', at the transport layer as well.

Note: Congestion control also occurs in the network layer, but we're skipping over that detail for now since the focus of this chapter is the transport layer. So congestion control with TCP is end-to-end: it exists on the end systems, not in the network. Also note that in this lesson, the term delay means end-to-end message delay.

Congestion control is really just congestion avoidance. Here’s how the transport layer controls congestion:

  1. It sends packets at a slower rate in response to congestion.
  2. The slower rate is still fast enough to make efficient use of the available capacity.
  3. It keeps track of changes in traffic and adjusts the rate accordingly.
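The steps above can be sketched with an additive-increase/multiplicative-decrease (AIMD) rule, the rate-adjustment idea that TCP's congestion control is built on. The function name and constants here are illustrative, not taken from any real TCP implementation:

```python
def next_rate(rate, congestion_detected,
              increase=1.0,       # additive step while the network is fine
              decrease=0.5):      # multiplicative cut when congestion is seen
    """One round of additive-increase/multiplicative-decrease (AIMD)."""
    if congestion_detected:       # e.g., a packet was dropped or delayed
        return rate * decrease    # back off sharply to relieve congestion
    return rate + increase        # otherwise, gradually probe for spare capacity

rate = 10.0
history = []
# Hypothetical scenario: congestion is detected every 5th round.
for round_no in range(1, 11):
    rate = next_rate(rate, congestion_detected=(round_no % 5 == 0))
    history.append(rate)
print(history)
```

Notice how this realizes the three steps: the sender slows down sharply on congestion (step 1), otherwise keeps increasing to use the available capacity (step 2), and reacts round by round as traffic conditions change (step 3).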

Congestion control algorithms are based on these general ideas and are built into transport layer protocols like TCP. Let’s also look at a few principles of bandwidth allocation before moving on.

Bandwidth Allocation Principles

Should Allocation Be on a per Host or per Connection Basis?

...