This paper presents the design of a fair queueing algorithm that lets allocation of bandwidth (which packets get transmitted), promptness (how soon they get transmitted), and buffer space (which packets are discarded, and when) be controlled independently. It shows that fair queueing provides several important advantages over the standard FCFS algorithm used by most routers. One of the most important is that FCFS implicitly assumes there are no ill-behaved sources in the network, whereas fair queueing protects well-behaved sources by appropriately penalizing ill-behaved ones.
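For concreteness, here is a minimal Python sketch of the paper's core bookkeeping: each arriving packet is stamped with a finish number F = max(F_prev, R) + P, where F_prev is the finish number of the flow's previous packet, R is the bit-by-bit round number, and P is the packet size, and packets are transmitted in increasing order of F. The class name and the crude round-number approximation are my own illustration; the paper computes R exactly from the number of active conversations.

```python
import heapq
from collections import defaultdict

class FairQueueSketch:
    """Illustrative packet-by-packet fair queueing (not the authors'
    exact bookkeeping): packets are sent in increasing finish-number
    order, F = max(last_finish[flow], R) + packet_size."""

    def __init__(self):
        self.last_finish = defaultdict(float)  # per-flow finish number
        self.round_estimate = 0.0              # crude stand-in for R(t)
        self.heap = []                         # (finish, seq, flow, size)
        self.seq = 0                           # tie-breaker for equal finish numbers

    def enqueue(self, flow, size):
        start = max(self.last_finish[flow], self.round_estimate)
        finish = start + size
        self.last_finish[flow] = finish
        heapq.heappush(self.heap, (finish, self.seq, flow, size))
        self.seq += 1

    def dequeue(self):
        if not self.heap:
            return None
        finish, _, flow, size = heapq.heappop(self.heap)
        # Advance the round approximation to the served packet's finish number.
        self.round_estimate = finish
        return flow, size
```

Feeding this queue one flow of large packets interleaved with one flow of small packets drains each at roughly half the link rate, which is exactly the imbalance in Nagle's per-packet round robin that the finish-number rule repairs.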
The paper starts off by highlighting Nagle's earlier work on fair queueing, in which gateways maintain a separate packet queue for each source and service the queues in round-robin fashion. However, this simple algorithm does not allocate bandwidth fairly when packet sizes differ: connections sending large packets effectively receive more bandwidth. Moreover, because Nagle's proposal came with no simulation data or comparison metrics, it was unclear exactly how much improvement fair queueing offers over standard FCFS. This led the authors to design a better FQ algorithm and back it up with adequate simulation and comparison data. The max-min criterion of fairness they adopt states that an allocation is fair if (a worked sketch follows the list):
- no user receives more than its request
- no other allocation scheme satisfying the above condition has a higher minimum allocation
- the above condition remains recursively true as the minimal user is removed and the total resource is reduced accordingly.
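To make the criterion concrete, here is a small progressive-filling sketch, the standard way to compute a max-min fair allocation; the function name and the demand figures are my own illustration, not from the paper.

```python
def max_min_allocation(capacity, demands):
    """Progressive filling: repeatedly offer every unsatisfied user an
    equal share of the remaining capacity; users whose demand is below
    the share keep only their demand, and the leftover is redivided."""
    alloc = {user: 0.0 for user in demands}
    remaining = dict(demands)
    cap = float(capacity)
    while remaining and cap > 1e-9:
        share = cap / len(remaining)
        satisfied = {u for u, d in remaining.items() if d <= share}
        if not satisfied:
            # Every remaining demand exceeds the equal share: split evenly.
            for u in remaining:
                alloc[u] += share
            break
        for u in satisfied:
            alloc[u] += remaining[u]
            cap -= remaining[u]
            del remaining[u]
    return alloc

# Capacity 10, demands 2, 2.6, 4, 5 -> A: 2, B: 2.6, C: 2.7, D: 2.7
# (A and B get their full requests; C and D split what remains evenly)
print(max_min_allocation(10, {"A": 2, "B": 2.6, "C": 4, "D": 5}))
```

Note how the example satisfies all three conditions: nobody exceeds their request, and no rearrangement could raise the smallest allocation without violating that.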
Overall, the paper is very well presented and reflects the authors' deep insight into this area. The experimental data adequately backs up the claims made. One thing that could have been taken into account is how the protocol behaves when multiple connections originate from the same host: since fair queueing is done on a per-conversation basis, a single ill-behaved host could well establish a series of connections to claim more than its fair share. Moreover, as a young student in network research, an interesting question that comes to my mind is how researchers choose evaluation scenarios. Why is it apt to claim that the whole protocol works when it works for six boundary-case scenarios? Couldn't there have been a seventh scenario that uncovered an interesting insight into the problem, or a flaw in the algorithm?