Thursday, September 10, 2009

Core-Stateless Fair Queueing

I. Stoica, S. Shenker, and H. Zhang, "Core-Stateless Fair Queueing: Achieving Approximately Fair Bandwidth Allocations in High Speed Networks," ACM SIGCOMM, August 1998.

This paper observes that although fair queueing does a pretty good job of managing bandwidth and buffer space and ensuring promptness, it requires routers to maintain state, manage buffers, and perform packet scheduling on a per-flow basis. This increases their complexity and decreases their cost-effectiveness. The paper proposes a different architecture to tackle this complexity. The idea is that this architecture may not be as good as the strict FQ algorithm, but it has a very good cost-benefit ratio.

The architecture consists of an "island" of routers forming a contiguous region of the network (the definition is quite vague: it may refer to a collection of routers owned by one ISP, or may even spread across many ISPs if there are no trust issues). The authors distinguish between edge routers and core routers. The idea is that the edge routers maintain per-flow state and insert into each packet a label based on their estimate of the flow's rate (a sketch of this labeling step follows the list below). The core routers, which are essentially stateless, implement FIFO packet scheduling and a probabilistic dropping algorithm that takes the packet labels into account. Two important assumptions made by the authors were:
  • fair allocation mechanisms play an important role in congestion control
  • the complexity of the current fair allocation algorithms is a substantial hindrance to their adoption.
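To make the edge-router side concrete, here is a minimal Python sketch of the exponential-averaging rate estimate that edge routers use to label packets. On each arrival, the flow's rate estimate is updated as r = (1 - e^(-T/K)) * l/T + e^(-T/K) * r, where T is the time since the flow's previous packet, l is the packet length, and K is an averaging constant. The class and variable names are hypothetical, the value of K is illustrative, and the initialization of a brand-new flow is my own assumption rather than something the paper pins down:

```python
import math

K = 0.1  # averaging constant in seconds (illustrative value, a tunable parameter)

class EdgeFlowState:
    """Per-flow state kept only at edge routers: last arrival time and rate estimate."""
    def __init__(self):
        self.last_arrival = None
        self.rate = 0.0  # estimated arrival rate in bytes/sec

    def label(self, pkt_len, now):
        """Update the exponential average of the flow's rate; return it as the packet label."""
        if self.last_arrival is not None:
            T = max(now - self.last_arrival, 1e-9)  # inter-arrival time, guarded against zero
            w = math.exp(-T / K)
            self.rate = (1.0 - w) * (pkt_len / T) + w * self.rate
        # else: first packet of the flow; leaving rate at 0 is an assumption on my part
        self.last_arrival = now
        return self.rate
```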
The probabilistic dropping algorithm used by the core routers doesn't give nearly-perfect fairness; however, it does achieve a reasonable approximation of fair bandwidth allocation (a sketch of the drop decision appears below). The authors compared this scheme with other queueing schemes (FIFO, RED, FRED, and DRR) and found that CSFQ performed better than FIFO and RED and was comparable to FRED. In terms of complexity, its edge routers were comparable to FRED and its core routers to RED. DRR performed better, but at a considerably higher cost.
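The core-router drop rule itself is simple enough to state in a few lines. Here is a minimal sketch, assuming the router already maintains an estimate alpha of the fair share rate on the outgoing link (the paper spends considerable care on estimating alpha; that logic is elided here): a packet labeled with rate r is dropped with probability max(0, 1 - alpha/r), and if forwarded, its label is rewritten to min(r, alpha) so downstream routers see the flow's post-drop rate. The function name is hypothetical:

```python
import random

def core_forward(label, alpha):
    """Probabilistic drop decision at a stateless core router.

    label: the flow's arrival-rate estimate carried in the packet header.
    alpha: the router's current fair-share estimate (estimation elided; see the paper).
    Returns the new label if the packet is forwarded, or None if it is dropped.
    """
    # Flows sending at or below the fair share are never dropped by this rule.
    drop_prob = max(0.0, 1.0 - alpha / label) if label > 0 else 0.0
    if random.random() < drop_prob:
        return None  # drop the packet
    # Relabel so downstream routers see the flow's rate after drops upstream.
    return min(label, alpha)
```

Note that this keeps the core router stateless: the only per-flow information it needs arrives inside the packet itself.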

Overall, this paper was very straightforward and honest in its approach. It freely questioned its own assumptions, and the authors even admitted that they were unsure how to interpret some of the final results! Moreover, even though the paper skipped some mathematical proofs and only gave the final results, it did provide an intuitive understanding of the concepts, which was really great.

1 comment:

  1. To me the main contribution is the separation of processing between the edge and the core, which was a hot topic at the time. Today, core routers actually can do a lot of per flow processing even for huge numbers of simultaneous flows, so the separation is not so significant. And in the current threat model, it is hard to trust those edge routers -- of which we all have one at home most likely -- to "do the right thing".
