Monday, September 8, 2008

A second look at Congestion Avoidance

In his paper Congestion Avoidance and Control, Van Jacobson presents the story of most of TCP Tahoe (everything except Karn's clamped retransmit backoff and fast retransmit).

To justify the seven new algorithms he introduces, he identifies three things that cause instability in a network:

  1. not reaching equilibrium

  2. more packets entering than leaving

  3. equilibrium not reached because of data path limitations

The first can be handled with the ironically named slow start mechanism, which takes approximately R*log_2(W) time, where R is the round-trip time and W the window size.
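That R*log_2(W) figure falls out of the window doubling each round trip; a minimal sketch (my own illustration, not code from the paper) of counting those round trips:

```python
import math

def slow_start_rtts(target_window):
    """RTTs for slow start to grow the congestion window from 1 to target_window.

    The window doubles each ack-clocked round trip, so this is ceil(log2(W)).
    """
    rtts = 0
    cwnd = 1
    while cwnd < target_window:
        cwnd *= 2  # each round trip of acks doubles the window
        rtts += 1
    return rtts

print(slow_start_rtts(64))  # matches math.ceil(math.log2(64)) = 6
```

So reaching even a large window costs only a logarithmic number of round trips, which is why "slow" start is ironic.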

The second can be handled by better estimating the round-trip time used in the retransmit policy.
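The paper's key improvement here is tracking the mean deviation of the RTT alongside its mean, rather than using a fixed multiple of a smoothed average. A floating-point paraphrase of that estimator (the paper itself gives a fixed-point integer version; the gains 1/8 and 1/4 and the factor of 4 follow its appendix):

```python
def update_rto(srtt, rttvar, sample, alpha=0.125, beta=0.25):
    """One update of a Jacobson-style mean/deviation RTT estimator.

    srtt   -- smoothed RTT estimate
    rttvar -- smoothed mean deviation of the RTT
    sample -- newly measured round-trip time
    Returns the updated (srtt, rttvar, rto).
    """
    err = sample - srtt
    srtt = srtt + alpha * err                  # move the mean toward the sample
    rttvar = rttvar + beta * (abs(err) - rttvar)  # track how noisy samples are
    rto = srtt + 4 * rttvar                    # timeout scales with variability
    return srtt, rttvar, rto

# A stable link shrinks rttvar, so the timeout tightens toward srtt.
srtt, rttvar, rto = update_rto(100.0, 10.0, 100.0)
print(srtt, rttvar, rto)  # 100.0 7.5 130.0
```

Because the timeout adapts to measured variance, it stays tight on quiet links but backs off when delay becomes erratic, avoiding the spurious retransmits that feed congestion.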

The third item I find a bit confusing. I think what he means by "resource limits along the path" is network congestion, which he addresses with congestion avoidance (i.e., additive increase/multiplicative decrease, which he is quick to point out he did not invent, nor even bother to justify in this paper).
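The AIMD rule itself is tiny; a sketch of the window update (my own simplification, ignoring the slow-start phase and segment-size units):

```python
def aimd_update(cwnd, loss, increase=1.0, decrease=0.5):
    """Additive increase / multiplicative decrease of a congestion window.

    loss -- whether a congestion signal (e.g. a timeout) was seen this round
    """
    if loss:
        return max(1.0, cwnd * decrease)  # cut the window multiplicatively
    return cwnd + increase                # otherwise probe linearly for bandwidth

print(aimd_update(10.0, False))  # 11.0
print(aimd_update(10.0, True))   # 5.0
```

The asymmetry is the point: backing off multiplicatively drains the queue quickly when congestion appears, while the linear probe recovers bandwidth gently enough that flows converge to a fair share.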

He points out that dropped packets, as detected by timeouts, are enough for congestion detection, so explicit notification of congestion (as is described in today's other paper) is not strictly necessary.

Like most of the papers we have read so far, this work is worth reading because it shaped the Internet, which in turn continues to shape the world as we know it.
