Thursday, September 10, 2009

Analysis and Simulation of a Fair Queueing Algorithm

A. Demers, S. Keshav, S. Shenker, "Analysis and Simulation of a Fair Queueing Algorithm," Internetworking: Research and Experience, 1 (1990), pp. 3-26.

This paper presents the design of a fair queueing algorithm that can independently control the allocation of bandwidth (which packets get transmitted), promptness (how soon they get transmitted), and buffer space (which packets get discarded, and when). It shows that fair queueing provides several important advantages over the standard FCFS algorithm used by most routers. An important one is that FCFS implicitly assumes there are no ill-behaved sources in the network, while fair queueing provides protection from ill-behaved sources by appropriately penalizing them.

The paper starts off by highlighting the work done by Nagle on fair queueing, in which gateways maintained a separate queue of packets for each source and serviced the queues in round-robin fashion. However, this simple algorithm did not achieve fair bandwidth allocation when packet sizes differed: connections sending large packets effectively ended up with higher bandwidth. Moreover, due to the lack of simulation data and comparison metrics for the algorithm, it was necessary to know exactly how much improvement fair queueing offers over standard FCFS. This led the authors to design a better FQ algorithm and to back it up with adequate simulation and comparison data. The max-min criterion of fairness was that an allocation is fair if:
  • no user receives more than its request
  • no other allocation scheme satisfying the above condition has a higher minimum allocation
  • the above condition remains recursively true as the minimal user is removed and the total resource is reduced accordingly (a small sketch of this computation follows the list).
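To make the criterion concrete, here is a minimal Python sketch of the standard water-filling computation that yields a max-min fair allocation. The function name and data shapes are my own illustration; the paper states the criterion but does not spell out this routine.

```python
def max_min_fair(capacity, demands):
    """Water-filling: repeatedly split the remaining capacity evenly
    among unsatisfied users; any user whose leftover demand fits within
    the equal share gets its full demand and drops out."""
    alloc = {user: 0.0 for user in demands}
    active = set(demands)
    remaining = capacity
    while active and remaining > 0:
        share = remaining / len(active)
        satisfied = {u for u in active if demands[u] - alloc[u] <= share}
        if satisfied:
            for u in satisfied:
                remaining -= demands[u] - alloc[u]
                alloc[u] = demands[u]
            active -= satisfied
        else:
            # every remaining user can absorb a full equal share
            for u in active:
                alloc[u] += share
            remaining = 0.0
    return alloc

# Three conversations competing for 10 units of bandwidth:
print(max_min_fair(10.0, {"A": 8.0, "B": 2.0, "C": 4.0}))
# {'A': 4.0, 'B': 2.0, 'C': 4.0} -- B's small request is met in full,
# and no other feasible allocation has a higher minimum.
```

Note how the loop mirrors the recursive structure of the criterion: once the minimal user is satisfied and removed, the same equal-split rule applies to the reduced capacity.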
The allocation unit used in this paper was the conversation -- a source-destination pair. A hypothetical implementation of this fairness criterion would be a bit-by-bit round-robin (BR) scheme at the gateways, in which the gateway forwards one bit of data for each flow per round. The authors quite elegantly showed that this hypothetical scheme can be emulated in the realistic packet-by-packet setting by a simple rule: whenever a packet finishes transmission, the next packet sent is the one with the smallest value of F, its finishing time under BR. The authors also introduced a parameter delta, a kind of history term that controls promptness: it rewards connections that have been using less than their share of bandwidth (see the sketch after this paragraph). Moreover, dropped packets from a host were also counted against its throughput, which rightly penalizes ill-behaved senders. The authors did a really good job of simulating the generic, JK, and DEC flow-control algorithms under both the FCFS and FQ queueing algorithms in a variety of synthetic scenarios -- essentially border cases of evaluation -- which nicely backed up the arguments above.
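To illustrate the packet-by-packet emulation of bit-by-bit round robin, here is a minimal Python sketch of the finish-time and bid bookkeeping. It is a simplification under stated assumptions, not the paper's full algorithm: the paper's round-number function R(t), which depends on how many conversations are active at each instant, is replaced here by a crude virtual clock, and the class and method names are my own.

```python
import heapq

class FairQueueing:
    """Sketch of packet-by-packet fair queueing (Demers/Keshav/Shenker).

    Simplification: the paper's round-number function R(t) is stood in
    for by a virtual clock that advances to the finish time of each
    packet as it is transmitted."""

    def __init__(self, delta=0.0):
        self.delta = delta        # promptness parameter from the paper
        self.vtime = 0.0          # crude stand-in for R(t)
        self.last_finish = {}     # F_{i-1} for each conversation
        self.heap = []            # pending packets, ordered by bid
        self.seq = 0              # tie-breaker for equal bids

    def enqueue(self, flow, size):
        prev = self.last_finish.get(flow, 0.0)
        finish = max(prev, self.vtime) + size   # F_i = S_i + P_i
        # Bid B_i = P_i + max(F_{i-1}, R(t) - delta): an idle flow
        # (F_{i-1} < R(t)) bids lower, so it is served more promptly.
        bid = size + max(prev, self.vtime - self.delta)
        self.last_finish[flow] = finish
        heapq.heappush(self.heap, (bid, self.seq, flow, size, finish))
        self.seq += 1

    def dequeue(self):
        # Transmit the pending packet with the smallest bid.
        bid, _, flow, size, finish = heapq.heappop(self.heap)
        self.vtime = max(self.vtime, finish)
        return flow, size

# Two back-to-back large packets from A compete with one small packet
# from B; B's packet would finish first under bit-by-bit round robin,
# so FQ sends it first even though it arrived last.
fq = FairQueueing()
fq.enqueue("A", 1000)
fq.enqueue("A", 1000)
fq.enqueue("B", 500)
print([fq.dequeue()[0] for _ in range(3)])   # ['B', 'A', 'A']
```

With delta = 0 the bid equals the finish time F, so packets go out in smallest-F order; a positive delta lowers the bid of a conversation whose last finish time lags the virtual clock, i.e. one that has recently been using less than its share.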

Overall, the paper was very well presented and highlighted the authors' deep insight into this area. The experimental data adequately backed up all of the claims made. One thing that could have been taken into account is how the protocol would behave with multiple connections from the same host: since the whole FQ scheme operates on a per-conversation basis, a single ill-behaved host could very well establish a series of connections to grab extra bandwidth. Moreover, as a young student in network research, an interesting question that comes to my mind is: how do researchers choose evaluation scenarios? Why is it apt to claim that the whole protocol works if it works for six boundary-case scenarios? Couldn't there have been a seventh scenario that would have uncovered an interesting insight into this problem, or flaws in the algorithm?
