This work does not introduce Random Early Detection; rather, it builds on existing research on RED, contributing algorithms and simulations of more intelligent parameterizations than earlier RED schemes used.
Highlights:
- Flexible, works with existing transport protocols
- Focuses on the average queue size (computed with a low-pass filter), allowing the actual queue size to fluctuate so that bursty traffic can be absorbed.
- Avoids global synchronization by randomly choosing connections to notify of congestion (via dropped packets or Explicit Congestion Notification), weighted by each connection's share of the router's bandwidth (see the sketch after this list).
- Nice discussion of parameter selection, but (as Mate and I have learned) some real numbers as part of a sensitivity analysis showing the tradeoffs would have been welcome.
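
Here is a minimal Python sketch of the mechanism the first two bullets describe: an exponentially weighted moving average of the queue size, plus probabilistic marking between two thresholds. The class and method names and the parameter values are my own illustrative choices, not the paper's recommended settings, and the paper's idle-queue correction is omitted for brevity.

```python
import random

class REDQueue:
    """Sketch of a RED gateway's per-packet marking decision."""

    def __init__(self, w_q=0.002, min_th=5, max_th=15, max_p=0.02):
        self.w_q = w_q        # EWMA weight: the low-pass filter
        self.min_th = min_th  # below this average, never mark
        self.max_th = max_th  # at or above this average, always mark
        self.max_p = max_p    # marking probability as avg reaches max_th
        self.avg = 0.0        # filtered average queue size
        self.count = 0        # packets since the last mark

    def on_arrival(self, queue_len):
        """Return True if the arriving packet should be marked/dropped."""
        # Low-pass filter: avg tracks sustained congestion while the
        # instantaneous queue is free to absorb short bursts.
        self.avg = (1 - self.w_q) * self.avg + self.w_q * queue_len

        if self.avg < self.min_th:
            self.count = 0
            return False
        if self.avg >= self.max_th:
            self.count = 0
            return True

        # Base probability grows linearly between the thresholds...
        p_b = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
        # ...then is corrected so marks are spread out in time rather
        # than clustered, which is what avoids global synchronization.
        p_a = p_b / max(1.0 - self.count * p_b, 1e-9)
        self.count += 1
        if random.random() < p_a:
            self.count = 0
            return True
        return False
```

Because every arriving packet faces the same marking probability, a connection's chance of being notified ends up roughly proportional to its share of the packets (and hence bandwidth) arriving at the router.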
Thoughts:
- "Short packets are just a likely to be dropped as long ones"? I wonder, what would be the implications of adding packet length into the random weighting algorithm?
Why it should stay in the syllabus:
Because it fits so nicely into our historical tour de networking.
1 comment:
There is a little industry on revisiting algorithms so they operate on bytes rather than packets to get behavior like "fair dropping"--it does seem that sending more bytes should increase your probability of experiencing a drop. That is not the same thing as sending a lot of packets.