Sunday, September 14, 2008

Random Early Detection (RED)

The next stop on our tour of the history of the Internet is another Congestion Avoidance protocol, this one living at the gateway level. Random Early Detection sits somewhere between endpoint congestion avoidance and Fair Queuing: it runs on routers, but not at the fine granularity of individual flows. Instead it operates on the average queue size aggregated across flows.

This work does not introduce Random Early Detection; rather, it builds upon existing research on RED, contributing algorithms and simulations of more intelligent parameterizations than those used in earlier RED schemes.

Highlights:
  • Flexible, works with existing transport protocols
  • Focuses on average queue size (computed with a low-pass filter), allowing the actual queue size to fluctuate so the gateway can absorb bursty traffic; see the sketch after this list.
  • Avoids global synchronization by randomly choosing connections to notify of congestion (via dropped packets or Explicit Congestion Notification), with probability roughly proportional to each connection's share of the router's bandwidth.
  • Nice discussion of parameter selection, but (as Mate and I have learned) real numbers from a sensitivity analysis showing the tradeoffs would have been welcome.
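
To make the first two highlights concrete, here is a minimal Python sketch of the per-packet RED decision: an exponentially weighted moving average as the low-pass filter, plus probabilistic marking between two thresholds. The parameter values are my own illustrative picks, not the paper's recommended settings.

```python
import random

W_Q = 0.002   # low-pass filter weight (illustrative)
MIN_TH = 5    # min average-queue threshold, in packets (illustrative)
MAX_TH = 15   # max average-queue threshold, in packets (illustrative)
MAX_P = 0.02  # marking probability as avg reaches MAX_TH (illustrative)

avg = 0.0     # filtered (average) queue size
count = 0     # packets accepted since the last marked packet

def on_packet_arrival(queue_len):
    """Return True if the arriving packet should be marked/dropped."""
    global avg, count
    # Low-pass filter: the average tracks long-term congestion while
    # the instantaneous queue is free to spike during bursts.
    avg = (1 - W_Q) * avg + W_Q * queue_len

    if avg < MIN_TH:
        count = 0
        return False          # no persistent congestion: accept
    if avg >= MAX_TH:
        count = 0
        return True           # mark/drop everything

    # Between the thresholds, the base probability grows linearly
    # with the average queue size...
    p_b = MAX_P * (avg - MIN_TH) / (MAX_TH - MIN_TH)
    # ...then is spread so marks land roughly uniformly rather than
    # in clusters, which is what breaks global synchronization.
    count += 1
    p_a = p_b / max(1 - count * p_b, 1e-9)
    if random.random() < p_a:
        count = 0
        return True
    return False
```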

Thoughts:
  • "Short packets are just a likely to be dropped as long ones"? I wonder, what would be the implications of adding packet length into the random weighting algorithm?

Why it should stay in the syllabus:
Because it fits so nicely into our historical tour de networking.

1 comment:

Randy H. Katz said...

There is a little industry on revisiting algorithms so they operate on bytes rather than packets to get behavior like "fair dropping"--it does seem that sending more bytes should increase your probability of experiencing a drop. It is not the same thing as sending a lot of packets.