Thursday, October 29, 2009

Active Network Vision and Reality: Lessons from a Capsule-Based System

D. Wetherall, "Active Network Vision and Reality: Lessons from a Capsule-Based System," 17th Symposium on Operating Systems Principles, (December 1999).
 Active Networks are a novel approach to network architecture in which customized programs are executed within the network at the various network elements along the path to the destination. The source node is empowered to run routines on the active nodes along its path to the destination, which not only aids in experimenting with new Internet services and facilitates their easy deployment, but also enables dynamic routing on a per-application basis. The performance and security issues that may result from executing untrusted, mobile code are dealt with by ANTS as follows:
  • Security Issues: Constraints are placed on the forwarding routine, disallowing arbitrary changes in path, and certification by a third-party CA is required for additional security. Further, hashes are used to perform integrity checks on the code.
  • Performance Issues: Upper limits are placed on the size (16 KB) and running time of the forwarding routines, for both security and performance reasons. Further, the ANTS active network toolkit caches code at nodes and loads it on demand before execution (a minimal sketch of this idea follows below).
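To make the capsule idea concrete, here is a rough Python sketch (my own toy code, with made-up class and method names, not the actual ANTS/Java implementation) of a node that demand-loads a forwarding routine identified by the hash carried in a capsule, enforces the 16 KB size limit, and checks the fingerprint before caching and running it:

```python
import hashlib
from dataclasses import dataclass
from typing import Callable, Dict, Optional

CODE_SIZE_LIMIT = 16 * 1024  # ANTS caps a code group at 16 KB


@dataclass
class CodeGroup:
    source: bytes        # the routine's code, as shipped between nodes
    routine: Callable    # for brevity, a Python callable stands in for it


@dataclass
class Capsule:
    code_hash: str       # fingerprint naming the forwarding routine
    dest: str
    payload: bytes


class ActiveNode:
    def __init__(self, name: str):
        self.name = name
        self.code_cache: Dict[str, CodeGroup] = {}

    def publish(self, group: CodeGroup) -> str:
        """Register a routine locally and return its fingerprint."""
        h = hashlib.md5(group.source).hexdigest()
        self.code_cache[h] = group
        return h

    def fetch_code(self, code_hash: str) -> Optional[CodeGroup]:
        return self.code_cache.get(code_hash)

    def receive(self, capsule: Capsule, prev_node: "ActiveNode") -> None:
        group = self.code_cache.get(capsule.code_hash)
        if group is None:
            # Demand-load the code from the node that sent us the capsule.
            group = prev_node.fetch_code(capsule.code_hash)
            if group is None or len(group.source) > CODE_SIZE_LIMIT:
                return  # drop rather than run missing or oversized code
            # Integrity check: a capsule can only invoke code whose hash
            # matches its fingerprint, so it cannot name someone else's routine.
            if hashlib.md5(group.source).hexdigest() != capsule.code_hash:
                return
            self.code_cache[capsule.code_hash] = group
        group.routine(capsule, self)  # run the (resource-limited) routine


if __name__ == "__main__":
    code = b"def forward(capsule, node): route toward capsule.dest"
    group = CodeGroup(code, lambda c, node: print(node.name, "forwarding to", c.dest))
    a, b = ActiveNode("A"), ActiveNode("B")
    h = a.publish(group)
    b.receive(Capsule(h, "D", b"data"), prev_node=a)  # B demand-loads the code from A
```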
Overall, this seems like a great idea. It gives the source node the ability to dynamically choose the best-suited forwarding routine on a per-application basis. No doubt, this service would need to be deployed incrementally in the Internet, something that the author did not address explicitly. However, despite being a good clean-slate design idea, it raises quite a few issues:
  1. Violation of strict end-to-end principle. Does the performance increase justify this?
  2. Limitations on the type of services due to size and runtime limitations on code execution.
  3. Security issues arising from the execution of third-party code remain a concern. I personally felt that the reliance on a third-party certifying authority was not very clearly explained. Moreover, even though limits are placed on the size and runtime of code, the system may still be vulnerable to coordinated, DDoS-like attacks.
  4. Maybe I missed it, but I failed to see if there would be any incentive to the ISP to allow active node routers in the path.

Resilient Overlay Networks

D. Andersen, H. Balakrishnan, F. Kaashoek, R. Morris, "Resilient Overlay Networks," 18th Symposium on Operating Systems Principles, (December 2001).
 Resilient Overlay Networks (RON) is an architecture that allows distributed Internet applications to detect and recover from path outages and periods of degraded performance within a matter of seconds. The idea is that by building an application-level overlay network, we can achieve an increase in both performance and reliability. The overall concept of RON involves a small group of nodes spread across the network in multiple ISPs. They keep probing each other to build a global view of 'link health' (similar to the link-state concept), and decide to forward packets between hosts either directly or through intermediate host(s) based on this global link-health information (a simple sketch of this path choice follows). This helps in circumventing policy restrictions imposed by ISPs, in adapting to network conditions, and in customizing routing to applications such as video or voice.
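To illustrate the basic path selection, here is a toy Python sketch (my own, not the RON code) that picks either the direct Internet path or a one-intermediate-hop overlay path, whichever looks best according to the probed 'link health' table; RON's real policies also weigh loss rate and throughput, not just latency:

```python
def best_ron_path(src, dst, nodes, latency):
    """Pick the direct path or a single-intermediate overlay path, whichever
    has the lowest probed latency (RON rarely needs more than one extra hop)."""
    best_path, best_cost = (src, dst), latency[(src, dst)]
    for mid in nodes:
        if mid in (src, dst):
            continue
        cost = latency[(src, mid)] + latency[(mid, dst)]
        if cost < best_cost:
            best_path, best_cost = (src, mid, dst), cost
    return best_path, best_cost


# Toy 'link health' table built from pairwise probes (latency in ms);
# the A-B path is degraded, so RON should detour through C.
latency = {("A", "B"): 300, ("A", "C"): 40, ("C", "B"): 50}
latency.update({(y, x): v for (x, y), v in list(latency.items())})  # symmetric

print(best_ron_path("A", "B", ["A", "B", "C"], latency))  # (('A', 'C', 'B'), 90)
```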

Overall, the analysis was definitely encouraging. It was observed that the overlay network reacted to failures in a matter of seconds, as opposed to BGP, which can take many minutes before restoring the original flow. An interesting phenomenon observed was that RON paths were actually shorter than BGP paths. My only point of critique/concern with this paper would be how feasible its wide deployment is. Wouldn't ISPs be concerned about the fact that overlay networks interfere with their traffic engineering? Wouldn't this make them take steps against it? Further, issues involving large-scale deployment (and the congestion that the resulting control-plane traffic could cause) aren't really tackled in the paper. It would be interesting to discuss these aspects in class.

Tuesday, October 27, 2009

On Inferring Autonomous System Relationships in the Internet

L. Gao, "On Inferring Autonomous System Relationships in the Internet," IEEE/ACM Transactions on Networking, V. 9, N. 6, (December 2001), pp. 733-745.
 This paper talked about how the author classified the various types of routes that appear in BGP routing tables based on AS relationships, and presented a heuristic algorithm that infers these relationships from the routing tables. The underlying theme of the paper was that connectivity does not imply reachability. So, in order to study the structure of the Internet, it is not sufficient to study graph topologies alone, as reachability also depends heavily on the inter-domain policies between the different ASes managed by different ISPs.

An AS may act as a Provider, Customer, Peer or Sibling of another AS. Based on this, the export rules are roughly as follows:
  • Exporting to a customer: an AS exports its own routes, its other customers' routes, as well as its provider and peer routes.
  • Exporting to a provider: an AS exports its own routes and its customers' routes, but usually not its provider or peer routes.
  • Exporting to a peer: an AS exports its own routes and its customers' routes, but usually not its provider or peer routes.
  • Exporting to a sibling: an AS exports its own routes, its customers' routes, as well as its provider and peer routes.
The essential theme of this paper is the 'valley-free property' that holds between different ASes in the Internet. Intuitively, considering a Customer --> Provider edge as "going upwards", a Provider --> Customer edge as "going downwards", and Peer --> Peer and Sibling --> Sibling edges as staying at the "same level", Gao showed that a valid Internet AS path can never contain a "valley". This helped her propose an algorithm that could identify quite accurately how the various ASes are related. The first phase of the algorithm walks each AS path, computing for each AS its set of neighbors and hence its degree, takes the highest-degree AS on a path as its 'top provider', and thereby classifies AS pairs into provider-customer or sibling relationships. The second phase then identifies AS pairs that cannot have a peering relationship, and the third phase assigns peering relationships to the remaining candidate AS pairs.
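A toy Python sketch of the first phase (my own simplification, with made-up AS numbers) may help: each routing-table path is assumed valley-free, its highest-degree AS is taken as the 'top provider', and the transit observations on either side of the top are turned into customer/provider or sibling labels. Peering inference (the later phases) is omitted:

```python
from collections import defaultdict


def infer_relationships(as_paths):
    """Simplified phases 1-2 of Gao's heuristic: treat every routing-table
    path as valley-free, take its highest-degree AS as the 'top provider',
    and record who appears to provide transit for whom on either side."""
    neighbors = defaultdict(set)
    for path in as_paths:
        for u, v in zip(path, path[1:]):
            neighbors[u].add(v)
            neighbors[v].add(u)

    transit = defaultdict(bool)  # transit[(u, v)] == True: v carries u's traffic
    for path in as_paths:
        top = max(range(len(path)), key=lambda i: len(neighbors[path[i]]))
        for i in range(len(path) - 1):
            u, v = path[i], path[i + 1]
            if i < top:
                transit[(u, v)] = True   # uphill edge: u looks like v's customer
            else:
                transit[(v, u)] = True   # downhill edge: v looks like u's customer

    rel = {}
    for u, v in {tuple(sorted(e)) for e in transit}:
        if transit[(u, v)] and transit[(v, u)]:
            rel[(u, v)] = "sibling-sibling"
        elif transit[(u, v)]:
            rel[(u, v)] = f"AS{u} is a customer of AS{v}"
        else:
            rel[(u, v)] = f"AS{v} is a customer of AS{u}"
    return rel


# Made-up AS paths: three stubs (1, 2, 3) reach each other via provider 10.
print(infer_relationships([(1, 10, 2), (2, 10, 3), (3, 10, 1)]))
```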


Comments

The paper was well written and clear in its explanation, and the incremental way the algorithm was presented made it particularly easy to understand. My only concern would be how accurate this inference of relationships can be. As highlighted, the repositories of BGP routing tables are not always accurate, and peering relationships may well be protected by contractual agreements. Doesn't this put an open question on the results?

On Power-Law Relationships of the Internet Topology

M. Faloutsos, P. Faloutsos, C. Faloutsos, "On Power-Law Relationships of the Internet Topology," ACM SIGCOMM Conference, (September 1999).
This paper talks about some very interesting power laws that exist in the Internet. Despite the apparent randomness of the Internet, the authors discover some surprisingly simple power laws in its topology. These were discovered from extensive real-world measurement data, using least-squares fits on the resulting log-log plots. Correlation coefficients of 96% or higher imply that the power laws are more than just a coincidental discovery.

Power laws are expressions of the form y = k * x^a, where k and a are constants and x and y are the measures of interest. The entire work is based on extensive measurements of the Internet. The authors had at their disposal 4 datasets, 3 consisting of inter-domain topology data from Nov 1997 to Dec 1998 and the 4th consisting of router-level topology data from 1995. The following were the 3 power laws discovered:

Power-Law 1 (rank exponent): The outdegree, dv, of a node v, is proportional to the rank of the node, rv, to the power of a constant R.

Power-Law 2 (outdegree exponent): The frequency fd of an outdegree d is proportional to the outdegree to the power of a constant O.

Power-Law 3 (eigen exponent): The eigenvalues of a graph are proportional to their order (rank) to the power of a constant E. (A small fitting sketch follows.)
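As a small illustration of how such an exponent is estimated (my own sketch on synthetic data, not the paper's datasets), the rank exponent is just the slope of a least-squares fit in log-log space:

```python
import numpy as np


def rank_exponent(outdegrees):
    """Sort nodes by decreasing outdegree, assign ranks r_v = 1, 2, ..., and
    least-squares fit log(d_v) against log(r_v); the slope is the rank
    exponent R (negative) and the correlation tells how well the law fits."""
    d = np.sort(np.asarray(outdegrees, dtype=float))[::-1]
    r = np.arange(1, len(d) + 1)
    slope, _intercept = np.polyfit(np.log(r), np.log(d), 1)
    corr = np.corrcoef(np.log(r), np.log(d))[0, 1]
    return slope, abs(corr)


# Synthetic degrees from a heavy-tailed distribution, just for illustration.
rng = np.random.default_rng(0)
degrees = np.round(rng.pareto(1.2, size=2000)) + 1
R, corr = rank_exponent(degrees)
print(f"rank exponent ~ {R:.2f}, correlation ~ {corr:.3f}")
```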

Comments

An important take-away from this paper is that most of the distributions of interest are skewed (as the power laws illustrate), which makes the use of 'average values' misleading when studying the Internet. Power laws are no doubt really helpful in predicting the future of the Internet as well as in studying the performance of new protocols being created for it. However, my only point of critique of this otherwise very well written paper would be the inadequate analysis of why exactly these power laws hold in the Internet. As the authors highlight at the beginning, in case of some drastic changes in policies or topologies these power laws may fail to hold. I think it would have been really great had the authors talked a bit more about what these 'drastic changes' may be -- specifically, what the underlying assumptions are that make the power laws hold. This would have been particularly important in the current scenario, in which the whole Internet is divided into various ASes whose policy decisions can have a considerable effect on Internet routing.

Monday, October 19, 2009

White Space Networking with Wi-Fi like Connectivity

P. Bahl, R. Chandra, T. Moscibroda, R. Murty, M. Welsh, "White Space Networking with Wi-Fi like Connectivity", ACM SIGCOMM Conference, (August 2009).
This paper presented WhiteFi, the first WiFi-like system constructed on top of UHF white spaces. WhiteFi builds on a technique called SIFT (Signal Interpretation before Fourier Transform), which reduces the time to detect transmissions in variable-width systems by analyzing raw signals in the time domain. To elaborate on the problem: the FCC recently permitted the use of unlicensed devices in the UHF band of roughly 512 MHz to 698 MHz, comprising 30 TV channels (21 to 51, with the exception of 37) of 6 MHz each. This opened up a great opportunity to have a WiFi-like system in this white space instead of the conventional ISM bands. However, the FCC directive requires that such communication must not interfere with existing communication on the channel from TV stations or wireless microphones. This makes the problem very interesting due to 3 characteristics:
  • Spatial Variation: 'Channel use'  is dependent on the location of sender and receiver. Hence there should be a way of communicating as to which channels are busy and which of them are available.
  • Spectrum Fragmentation: The UHF white spaces are often fragmented due to the presence of incumbents. The size of each fragment can vary from 1 channel to several. A bigger contiguous channel = more throughput! So it is the responsibility of the protocol to look for contiguous channels.
  • Temporal Variation: UHF band also suffers from temporal variation due to widespread use of microphones. Hence the protocol was designed to be robust to these changes in band availability.
The WhiteFi protocol takes all the above characteristics into account and was developed on the KNOWS hardware prototype. The spectrum assignment policy involves the exchange of a spectrum map and an airtime utilization vector between the AP and the client, to find out which channels are free for use at both ends; this tackles the problem of spatial variation. Once the AP has the utilization vectors, it considers candidate UHF channel ranges, computes MCham (a multichannel airtime metric) for each, and chooses the set of channels with the maximum MCham (a simplified sketch of this selection follows). Further, SIFT is used for efficient variable-bandwidth signal detection; the authors propose 2 algorithmic choices for this -- the Linear SIFT-Discovery Algorithm (L-SIFT) and the Jump SIFT-Discovery Algorithm (J-SIFT). J-SIFT outperforms L-SIFT when the number of channels is greater than 10.
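Here is a toy Python sketch of that channel selection step (my own simplification; the scoring function is a stand-in for the paper's MCham formula, and the candidate channel widths are illustrative):

```python
def choose_channels(free_at_ap, free_at_client, utilization):
    """Toy version of WhiteFi's channel selection: intersect the AP's and
    the client's spectrum maps (this handles spatial variation), then score
    every contiguous block of 1, 2 or 4 free channels by its idle airtime
    and keep the best block. The scoring is a simplified stand-in for the
    paper's MCham metric, not the exact formula."""
    n = len(free_at_ap)
    usable = [free_at_ap[i] and free_at_client[i] for i in range(n)]
    best, best_score = None, -1.0
    for start in range(n):
        for width in (1, 2, 4):          # candidate widths (illustrative)
            if start + width > n:
                continue
            block = range(start, start + width)
            if not all(usable[i] for i in block):
                continue                 # an incumbent occupies part of this block
            score = sum(1.0 - utilization[i] for i in block)
            if score > best_score:
                best, best_score = list(block), score
    return best, best_score


# Channels 0-7: channel 3 is busy only at the client (spatial variation),
# channel 4 is busy at both ends.
free_ap     = [1, 1, 1, 1, 0, 1, 1, 1]
free_client = [1, 1, 1, 0, 0, 1, 1, 1]
util        = [0.6, 0.1, 0.2, 0.0, 0.9, 0.3, 0.2, 0.1]
print(choose_channels(free_ap, free_client, util))   # best block: channels [1, 2]
```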

The evaluation was done using a combination of simulations and experiments, and the results pretty much matched expectations. J-SIFT clearly outperformed both the baseline non-SIFT approach and Linear SIFT in the time taken to discover the AP. Overall, WhiteFi, due to its adaptive nature, was successful in asserting its supremacy over any static optimal implementation of a channel selection policy. In particular, figure 14 was a treat to watch, highlighting how well the protocol adapted to temporal changes in the spectrum.

Critiques

This is definitely one of the very good papers we have read as part of this course. Not only does it tackle a very real problem, it presents an efficient and optimal solution which works despite the stringent constraints laid down by the FCC. However, I have a few comments on this paper:
  1. I thought the paper lacked some analysis of how the protocol behaves when multiple microphone users interfere on a regular basis (that is, how efficient the J-SIFT based channel hopping really is). Though Fig. 14 provided some insight, it would have been great to have a deeper analysis of throughput/overhead over a relatively long period of time.
  2. Theoretically, all the channels could be busy when someone wants to communicate. Isn't that a big usability issue -- 'my Internet may or may not work'! Is it really the case? Can all frequencies between 512-698 MHz be assigned to TV/microphone transmissions?

Interactive WiFi Connectivity For Moving Vehicles

A. Balasubramanian, R. Mahajan, A. Venkataramani, B. N. Levine, J. Zahorjan, "Interactive WiFi Connectivity For Moving Vehicles", ACM SIGCOMM Conference, (August 2008).
This paper talked about ViFi (vehicular WiFi), a protocol designed to provide cheap WiFi connectivity from moving vehicles for common applications such as Web browsing (short TCP transfers) and VoIP. It was motivated by the fact that clients can overcome many disruptions by communicating with multiple base stations (BSes) simultaneously (a concept the authors refer to as macrodiversity). The idea is that base stations that opportunistically overhear a packet but not its ACK probabilistically relay the packet to the next intended hop, thereby minimizing wasted transmissions. This called for designing a new handoff policy, and hence required the evaluation of 6 (4 practical and 2 theoretical) handoff policies, viz. RSSI, BRR, Sticky, History, BestBS and AllBSes. ViFi was aimed to sit somewhere between BRR and AllBSes, choosing BSes with a high exponentially averaged beacon reception ratio without limiting itself to only one BS. Using multiple BSes was considered a good strategy mainly for 2 reasons:
  • The vehicles were often in range of multiple BSes.
  • Packet losses were bursty and roughly independent across senders and receivers, i.e., bursts were path dependent (due to multipath fading) rather than receiver dependent.
Protocol Design
  1. Each vehicle designates one of the nearby BSes as its anchor (chosen using BRR).
  2. All other BSes that the vehicle can hear are designated as auxiliaries.
  3. Each vehicle periodically broadcasts its anchor and auxiliary BSes in beacon packets.
  4. Whenever there is a conversation between the vehicle and its anchor BS, an auxiliary BS that overhears a packet which is not ACKed within a short time probabilistically relays the overheard packet with a relaying probability p (see the sketch after this list).
  5. If the vehicle moves out of range of the current anchor before it can deliver packets from the Internet, ViFi enables the newly designated anchor to salvage those packets by contacting previous anchors over a backplane.
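A small Python sketch of the relaying decision (my own simplified formulation, not the exact expression in the paper) shows the 'expected number of relays equals one' idea that the coordination mechanism is built around:

```python
def relay_probability(me, auxiliaries, p_heard_src, p_deliver_dst):
    """A simplified take on ViFi's relaying decision: every auxiliary that
    overheard a packet (but not its ACK) picks a relay probability such
    that the *expected* number of relayed copies, summed over all
    auxiliaries, is about 1, with better-connected auxiliaries taking a
    larger share. The exact weighting in the paper differs; this only
    illustrates the expectation-of-one coordination idea."""
    # Aggregate 'usefulness' of all contenders: how likely each one was to
    # overhear the packet, weighted by its delivery probability onward.
    total = sum(p_heard_src[b] * p_deliver_dst[b] for b in auxiliaries)
    if total == 0:
        return 1.0
    # Dividing my delivery probability by the aggregate makes
    # sum_b p_heard_src[b] * p_b come out to exactly 1 (before clipping).
    return min(1.0, p_deliver_dst[me] / total)


aux = ["B1", "B2", "B3"]
p_heard   = {"B1": 0.9, "B2": 0.5, "B3": 0.2}  # prob. each aux overheard the packet
p_deliver = {"B1": 0.8, "B2": 0.6, "B3": 0.3}  # prob. each aux reaches the next hop
probs = {b: relay_probability(b, aux, p_heard, p_deliver) for b in aux}
print({b: round(p, 2) for b, p in probs.items()})
print("expected relays:", round(sum(p_heard[b] * probs[b] for b in aux), 2))  # ~1.0
```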
The authors evaluated this over 2 experimental platforms: VanLan (11 BSes and 2 vehicles in Redmond, WA) and DieselNet (10 BSes and (?) vehicles in Amherst, MA). Short TCP transfers and VoIP were evaluated, since the authors argued that web browsing and VoIP are the two most common uses of WiFi. ViFi undoubtedly performed better than BRR on all fronts in both cases. However, comparing it only with the worst-case BRR leaves many questions about further optimization of this protocol unanswered.

Critiques

Overall, the paper was very interesting. It tackled a real problem and had an appropriate implementation to back up the proposal. However, I am a bit skeptical about some of the following points:

  1. I might have misunderstood something, but since the vehicle is moving at 40 km/h (~11 metres/second), I find it a bit intriguing that exchanging so much control information about delivery probabilities between the vehicle and BSes was not a problem. Computing the relaying probabilities uses these values continuously to make decisions. Isn't that supposed to result in a large overhead of periodic beacons?
  2. The authors didn't really consider the effects of packet reordering and claimed that it would be easy to order packets using a sequencing buffer at the anchor BSes and vehicles; however, how this affects the auxiliary BSes (whose behavior in turn depends on whether the buffered packets are ACKed or not) remains unanswered.
  3. ViFi's coordination mechanism is designed to make the expected number of packets relayed equal to 1; however, it would make more sense to use a formulation that makes the expected number of packets received by the destination equal to 1.

Thursday, October 15, 2009

XORs in the Air: Practical Wireless Network Coding

S. Katti, H. Rahul, W. Hu, D. Katabi, M. Medard, J. Crowcroft, "XORs in the Air: Practical Wireless Network Coding," ACM SIGCOMM Conference, (September 2006).
This paper talks about COPE, a new architecture for wireless mesh networks. The idea is to cleverly use network coding to mix packets from different sources in order to increase the information content of each transmission. COPE's design is based on 3 main principles:
  • It effectively utilizes the broadcast nature of the wireless channel for opportunistic listening: all nodes are kept in promiscuous mode and passively listen to broadcast packets.
  • It uses network coding in a practical scenario: the sender XORs several packets together and relies on the neighboring nodes to decode the individual packets (see the sketch after this list).
  • It uses reception reports, by which adjacent nodes learn which packets their neighbors currently hold.
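The canonical Alice-router-Bob example makes the coding step obvious; the sketch below (my own toy code) shows how one XOR-ed broadcast replaces two unicast transmissions when each neighbor already holds one of the native packets:

```python
def xor_bytes(*packets):
    """Bitwise XOR of equal-length packets (real COPE pads shorter ones)."""
    out = bytearray(len(packets[0]))
    for p in packets:
        for i, byte in enumerate(p):
            out[i] ^= byte
    return bytes(out)


# Alice and Bob exchange packets through router R. R holds p_a (for Bob)
# and p_b (for Alice); each endpoint already has its own native packet,
# so one coded broadcast replaces two separate unicast transmissions.
p_a = b"hello bob "
p_b = b"hi alice!!"
coded = xor_bytes(p_a, p_b)       # R broadcasts p_a XOR p_b once

print(xor_bytes(coded, p_a))      # Alice XORs out her own packet -> b'hi alice!!'
print(xor_bytes(coded, p_b))      # Bob XORs out his own packet   -> b'hello bob '
```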
COPE definitely performed well with UDP, resulting in a throughput increase of up to 3-4 times; however, it did not result in any significant improvement with TCP. This difference in COPE's behavior across transport protocols and topologies raises questions about the practicality of its actual deployment. However, due credit goes to the authors for cleverly using network coding to increase throughput. It would be interesting to know about techniques that may have attempted to combine COPE with ExOR in a more conclusive manner.

ExOR: Opportunistic Multi-Hop Routing for Wireless Networks

S. Biswas, R. Morris, "ExOR: Opportunistic Multi-Hop Routing for Wireless Networks," ACM SIGCOMM Conference, (August 2005).
This paper talked about ExOR (Ex-Opportunistic Routing -- what does the 'Ex' stand for?), an efficient integrated routing and MAC protocol. The idea is that ExOR chooses each hop of a packet's route after the transmission for that hop, so that the choice can reflect which intermediate nodes actually received the transmission. Basically, the paper questioned the very premise of routing in wireless networks. Traditional routing protocols choose the best sequence of nodes between source and destination and forward each packet through that sequence. The authors argued that such a scheme doesn't work well in wireless networks, since transmission happens over lossy links and has a probabilistic nature. To tackle this, they propose a cooperative diversity scheme which operates as follows:
  • The source broadcasts a batch of packets, and the neighboring nodes try to 'hear' and buffer as many of them as they can.
  • Next, the node A that is closest to the destination (identified by the forwarder list in the packets) transmits the buffered packets from the batch that the destination has not yet received.
  • Further, the adjacent node B that overhears A knows which packets A has not transmitted, broadcasts those, and this process continues back toward the sender.
  • The forwarders continue to cycle through the priority list until the destination has 90% of the packets; the remaining packets are transferred with traditional routing (a simplified sketch of the forwarding rule follows).
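A toy Python sketch of the forwarding rule (my own simplification; real ExOR gossips 'batch maps' inside packet headers and schedules transmissions by priority) illustrates how a forwarder avoids re-sending packets that higher-priority nodes already have:

```python
def packets_to_forward(node, priority, heard, batch_map):
    """Toy ExOR forwarding rule: a forwarder only retransmits buffered
    packets that, according to its batch map, no higher-priority node
    (one closer to the destination) has already received."""
    my_rank = priority.index(node)
    already_covered = set()
    for other in priority[:my_rank]:
        already_covered |= batch_map.get(other, set())
    return sorted(heard[node] - already_covered)


# Forwarder list in priority order, destination first (as in ExOR).
priority = ["DST", "C", "B", "SRC"]
heard = {
    "DST": {1, 4},
    "C":   {1, 2, 4, 5},
    "B":   {1, 2, 3, 4, 5, 6},
    "SRC": {1, 2, 3, 4, 5, 6, 7, 8},
}
batch_map = dict(heard)  # in ExOR this state is gossiped inside packet headers

print(packets_to_forward("C", priority, heard, batch_map))  # [2, 5]
print(packets_to_forward("B", priority, heard, batch_map))  # [3, 6]
```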
Comments

This is definitely a great idea! The authors brilliantly utilize the inherently broadcast nature of the wireless medium and come up with a smart scheme of probabilistically forwarding packets. They do assume that receptions at different nodes are independent of each other and that there are not many collisions (which may or may not hold in general), and further that delivery probability decreases gradually with distance (which is generally true, though the presence of intermittent noise sources may affect this claim). However, the paper was definitely novel in its approach and was successful in putting forth its point that ExOR-style multihop cooperative routing is definitely better than traditional routing.

Thursday, October 8, 2009

A Performance Comparison of Multi-Hop Wireless Ad Hoc Network Routing Protocols

J. Broch, D. Maltz, D. Johnson, Y-C Hu, J. Jetcheva, "A Performance Comparison of Multi-Hop Wireless Ad Hoc Network Routing Protocols," ACM Mobicom Conference, (October 1998).
This paper did a detailed simulation-based performance comparison of 4 multi-hop wireless ad hoc network routing protocols: DSDV, TORA, DSR and AODV. To simulate these protocols over ns-2, the authors extended the simulator to include the following:

  • Node mobility
  • Realistic physical layer (with support for propagation delay, capture effects and carrier sense)
  • Radio Network Interfaces
  • IEEE 802.11 MAC protocol using DCF
The following routing protocols were simulated and studied as part of this paper:
  • DSDV: This protocol keeps, for each destination, a tuple of the form < destination, next hop, seq. no, metric >. It requires each node to periodically broadcast routing updates so that adjacent nodes can learn the best available routes. The authors evaluated 2 variants -- DSDV-SQ and plain DSDV -- which differ in that, in the former, receipt of a new sequence number for a destination causes a triggered update, while in the latter, receipt of a new metric does (a toy sketch of the table update appears after this list).
  • TORA: This is a distributed routing protocol based on a 'link reversal' algorithm, and it discovers routes on demand. The idea is that whenever node A needs a route to B, it sends a QUERY, and a node C on the path sends an UPDATE back to A describing the path as well as its 'height' with respect to B. When A gets the packet, it adjusts its own height to be greater than that of C. These 'heights' help in differentiating between good and bad routes.
  • DSR: It uses source routing rather than hop-by-hop routing, with each packet to be routed carrying the complete ordered list of nodes that it must pass in its header. It consists of ROUTE DISCOVERY and ROUTE MAINTENANCE phases. 
  • AODV: This is essentially a combination of DSR (on-demand route discovery) and DSDV (hop-by-hop routing, sequence numbers and periodic beacons).
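To make the DSDV/DSDV-SQ distinction concrete, here is a toy Python sketch of a table update (my own approximation, not the paper's ns-2 code):

```python
from dataclasses import dataclass


@dataclass
class Route:
    next_hop: str
    metric: int   # hop count
    seq_no: int   # destination-originated sequence number


class DsdvNode:
    """Toy DSDV table: an advertisement replaces the stored route if it
    carries a newer sequence number, or the same sequence number with a
    lower metric. The return value says whether a triggered update would
    be sent; DSDV-SQ triggers on any accepted update (new sequence number),
    plain DSDV only when the metric changes."""

    def __init__(self, variant="DSDV-SQ"):
        self.table = {}        # destination -> Route
        self.variant = variant

    def on_advertisement(self, dest, next_hop, metric, seq_no):
        current = self.table.get(dest)
        newer = current is None or seq_no > current.seq_no
        better = (current is not None and seq_no == current.seq_no
                  and metric < current.metric)
        if not (newer or better):
            return False       # stale or worse route: ignore it
        self.table[dest] = Route(next_hop, metric, seq_no)
        if self.variant == "DSDV-SQ":
            return True
        return current is None or metric != current.metric


n = DsdvNode()
print(n.on_advertisement("D", next_hop="B", metric=3, seq_no=10))  # True  (new route)
print(n.on_advertisement("D", next_hop="C", metric=2, seq_no=10))  # True  (better metric)
print(n.on_advertisement("D", next_hop="B", metric=4, seq_no=8))   # False (stale seq. no)
```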
The evaluation of these protocols was done in a simulated environment of 50 wireless nodes forming an ad hoc network, moving about over a rectangular (1500m x 300m) flat space for 900 seconds of simulated time. The evaluation found that DSR was a clear winner overall, based on both the packet delivery ratio and routing overhead. DSDV had an intrinsic problem in that it was unable to deliver packets to nodes whose paths contained broken links, because DSDV keeps only one route per destination. As far as routing overhead was concerned, DSR did the best (probably because it promiscuously overhears and caches routes), followed by DSDV-SQ, while TORA and AODV had considerable routing overhead. With higher node mobility, pretty much all the protocols behaved similarly as far as delivery ratios were concerned, while DSR clearly outperformed the others on the routing overhead metric.

A High Throughput Path Metric for Multi-Hop Wireless Routing

D. De Couto, D. Aguayo, J. Bicket, R. Morris, "A High Throughput Path Metric for Multi-Hop Wireless Routing," ACM Mobicom Conference, (September 2003).
This paper presented the ETX (Expected Transmission Count) metric, which finds high-throughput paths in multi-hop wireless networks and is much superior to the minimum hop-count metric that was standard before this paper. The authors implemented the metric in the DSDV and DSR routing protocols and measured actual data on a 29-node 802.11b test-bed.

The authors started off by highlighting the fact that metrics like minimum hop count assume black-and-white links, that is, a given link either 'works' or it doesn't. While this is reasonable to assume in wired networks, wireless links have considerable intermediate loss rates, which can make some links far more useful than others in ways the existing metric ignores. This led the authors to design a more realistic metric, which must account for the following issues:
  • Link loss ratios.
  • Asymmetric link ratios.
  • Successive hop interference.
Though an end-to-end delay metric pretty much accounts for all of the above, the authors claimed that a congestion-dependent metric can result in wide oscillations in path routes; moreover, they argued that load balancing/congestion control can be performed by separate algorithms anyway. This led to the proposal of the ETX metric. Simply put, the ETX of a link is the predicted number of data transmissions required to send a packet over that link, including retransmissions. Mathematically, ETX = 1/(df * dr), where df and dr are the forward and reverse delivery ratios respectively. These are measured by each node periodically broadcasting probe packets, with neighbors keeping count of the probes received versus the probes that should have been received during a window of w seconds; the route metric is the sum of the link ETX values along the path (a small numeric example follows). The authors implemented this metric in the DSDV and DSR algorithms and observed that ETX provided considerably better routes than the minimum hop-count metric. However, they observed that for a run with 1,386-byte packets, though ETX offered an improvement over minimum hop count, the gain was not as large as for small packets. This was because ETX uses small probes to estimate the link metric, and their delivery counts do not accurately reflect the actual loss rate of larger data packets.
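A tiny numeric example (my own) shows both the per-link formula and why ETX can prefer a two-hop path over a single lossy hop that minimum hop count would choose:

```python
def link_etx(df, dr):
    """ETX of one link: 1 / (df * dr), where df and dr are the forward and
    reverse delivery ratios measured from broadcast probes."""
    return float("inf") if df * dr == 0 else 1.0 / (df * dr)


def path_etx(links):
    """The route metric is simply the sum of the per-link ETX values."""
    return sum(link_etx(df, dr) for df, dr in links)


# A two-hop path over good links vs. a one-hop path over a lossy link.
two_hop = [(0.95, 0.90), (0.90, 0.95)]
one_hop = [(0.45, 0.80)]
print(round(path_etx(two_hop), 2))  # ~2.34 expected transmissions
print(round(path_etx(one_hop), 2))  # ~2.78 -- min hop count would pick this worse path
```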

Critique

My criticism of this paper mostly has to do with the question of how much actual improvement it yields over the minimum hop-count metric. Given the practical packet-size issue, we saw that ETX doesn't always work as nicely as it is theoretically expected to. Not to mention that computing (and, most importantly, maintaining) the ETX metric involves considerable overhead compared to the simple minimum hop-count metric. Moreover, I would assume that minimum hop count is not a very good metric and is more of a workaround that performs badly for lossy links, as the authors clearly show. If this is indeed the case, why didn't the authors compare their work to more realistic metrics (like the product approximation of per-link delivery ratios proposed by Yarvis et al., which the authors did mention in their related work!)?

Monday, October 5, 2009

Architecture and Evaluation of an Unplanned 802.11b Mesh Network

J. Bicket, D. Aguayo, S. Biswas, R. Morris, "Architecture and Evaluation of an Unplanned 802.11b Mesh Network," ACM Mobicom Conference, (September 2005).
This paper evaluated Roofnet, an unplanned wireless mesh architecture built from omni-directional antennas and multi-hop routing. The design was driven by two goals: high throughput, and making the entire system very easy to deploy without any constraints imposed by node-placement planning.

In brief, Roofnet was deployed over a 4 sq km area in Cambridge, MA, consisting of 37 nodes, each with an omni-directional antenna. It assumes that a small fraction of nodes will voluntarily share their wired Internet access; these are called gateways (the authors' deployment had 4 of them). Each node further allocates a 192.168.1.x IP address block to the user hosts attached to its Ethernet port and uses NAT. Roofnet's routing protocol, Srcr, maintains a partial dynamic database of link metrics between pairs of nodes and runs Dijkstra's algorithm on this database to find routes (a minimal sketch follows); DSR-style flooding is used when a node has to route a packet for which it has no link metric. The link metric is the Estimated Transmission Time (ETT), which predicts the total amount of time it would take to send a data packet along a multi-hop route.
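A minimal sketch of this route computation (my own; the real ETT estimation uses per-bit-rate probes, and the actual Srcr code differs) looks like this:

```python
import heapq


def ett(df, dr, packet_bits, bitrate_bps):
    """Estimated transmission time for one link: an ETX-style expected
    number of transmissions times the airtime of one packet at the link's
    bit-rate (a simplified form of Roofnet's ETT metric)."""
    etx = 1.0 / (df * dr)
    return etx * packet_bits / bitrate_bps


def srcr_route(graph, src, dst):
    """Dijkstra over per-link ETT values, as Srcr runs over its partial
    link database; graph[u] is a list of (v, link_ett) pairs."""
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[dst]


# Toy link database (ETT in seconds for a 1500-byte packet at various rates).
g = {
    "A": [("B", ett(0.9, 0.9, 12000, 5.5e6)), ("C", ett(0.5, 0.6, 12000, 11e6))],
    "B": [("GW", ett(0.95, 0.9, 12000, 11e6))],
    "C": [("GW", ett(0.9, 0.9, 12000, 2e6))],
}
print(srcr_route(g, "A", "GW"))  # picks A -> B -> GW
```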

The authors presented an extensive evaluation of the real system and showed that, in a random setting, paths to the gateways averaged 2.9 hops, with an average throughput of 627 kbit/s and an average latency of 39 ms. The authors also simulated a denser network and showed that it had a higher hop count (possibly because more short, high-quality links become eligible) as well as higher throughput (possibly because of those shorter hops). One important gap between practice and simulation was that in practice the authors observed inter-hop collisions, which lowered throughput below the value each node computed for routing; however, they argued that this is not so bad, and that avoiding it with an RTS/CTS scheme would degrade performance even further due to the additional overhead.

Critique

Overall, I really liked the Roofnet technology. It is definitely simple, easy to implement and robust to failures, which makes it an ideal candidate for deployment. Moreover, the best thing about it was that the majority of the claims made by the authors were based on an actual deployment of a working system. However, I had 2 questions in mind which the paper left unanswered. Firstly, how often do the link metrics change? In a continuously running system, I would assume that the metrics would eventually converge to static values... am I missing something here (probably dynamic congestion considerably affects throughput)? Secondly, assuming that this system is a candidate for commercial deployment, I would expect a single antenna per building, with all the apartments getting Internet access through the node's Ethernet ports. In view of such a scenario, I would have liked to see the effect of varying the number of end hosts on a particular node. It may well be that a node is more congested during some parts of the day and less during others. It would be interesting to discuss these aspects in class.

Modeling Wireless Links for Transport Protocols

Andrei Gurtov, Sally Floyd, "Modeling Wireless Links for Transport Protocols," ACM SIGCOMM Computer Communications Review, Volume 34, Number 2, (April 2004).
This paper reviews various models for cellular, WLAN and satellite links that are used in the design of transport protocols over wireless links and attempts to strike a balance between realism, generality and detail. The purpose of the models discussed in this paper was to evaluate the effect of link-level mechanisms on end-to-end transport protocols and higher layers.

The authors started off by highlighting the following essential aspect of models:
  • Type of Wireless Links: Cellular, wireless LAN or satellite? Each of these link types has different bandwidth and latency characteristics and must be modeled accordingly.
  • Topologies: It deals with questions like whether a single wireless link is located at the end or in the middle of a path, or multiple wireless links constitute the path. Further, the number and location of wireless links in the path greatly affects the performance and is an important modeling parameter.
  • Wireless Traffic: It deals with questions like whether the traffic is one way or two way? What is the intrinsic nature of data being transferred over wireless links (web browsing data, GPRS traffic etc.)?
  • Performance Metrics: This deals with the basic questions of what defines a good performance? Various performance metrics such as throughput, delay, fairness, dynamics or goodput may or may not be taken into account.
 Further, the authors presented ways to model specific link characteristics as follows:
  • Error Losses and Corruption: Errors may result from the fact that the link layer only performs 3 retransmissions per data block. Further, they may be due to handovers or mobility of nodes.
    • Model: These can be modeled by dropping packets on a per-packet or per-bit basis, or by having a time-based loss probability. Bursty losses should be modeled with a Gilbert model having error and error-free states (see the sketch at the end of this section). Another important distinction is whether errors are due to data corruption or due to handovers.
  • Delay Variation: This refers to the link delay (or delay spike) in the network. Apart from the inherent nature of the link, delay spikes can result from link-layer recovery, handovers, or scheduling. 
    • Model: These can be modelled simply by suspending the data transmission on a simulated link. 
  • Packet Reordering: This refers to packets being delivered out of the order in which they were sent. While sometimes beneficial (in the case of multiple wireless links in the path), reordering can also incorrectly trigger packet retransmissions and, consequently, TCP's congestion control responses.
    • Model: These can be modeled by swapping 2 packets in a queue at any given time or delaying one packet and letting others pass it.
  • On-Demand Resource Allocation: Wireless links often allocate channels based on availability of user traffic. This introduces allocation delays which must be accurately modeled.
    • Model: On demand allocation delay can be modeled by introducing an additional delay when a packet arrives to a queue that has been empty longer than the channel hold time. 
  • Bandwidth Variation:  Bandwidth variations may occur when a wireless scheduler assigns a high speed channel to a user for a limited time. 
    • Model: This can be modeled by changing the bandwidth of a link between periods of low and high bandwidth, or by having a sine-pattern variation.
  • Asymmetry in Bandwidth and Latency: There is usually asymmetry between the uplink and downlink directions in cellular links and essentially none in WLAN links. Satellite links have significant asymmetry in bandwidth and moderate asymmetry in latency.
    • Model: This can be modeled simply by having different parameters for bandwidth and latency for uplink and downlinks. 
Further, there are also the issues of modeling queue management (accommodating bursty traffic while controlling queueing delay) and the effects of mobility. Queue management in cellular and WLAN links can be modeled by a drop-tail buffer with a configurable maximum size in packets, while mobility is modeled by adequately defining what the protocol designer considers as mobility (intersystem vs. intrasystem) and subsequently modeling it through mobility models such as linear or ping-pong movement. Further, the authors highlight that there is an ongoing tussle over the merits of changing wireless links versus changing transport protocols; because of this, there is no clear answer as to which layer should handle features such as bit errors, reordering or delay variations.
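Finally, here is a minimal sketch of the two-state Gilbert loss model mentioned above (my own toy parameters, purely illustrative):

```python
import random


def gilbert_losses(n_packets, p_good_to_bad=0.01, p_bad_to_good=0.3,
                   loss_in_bad=0.8, seed=0):
    """Two-state Gilbert model for bursty wireless losses: a 'good' state
    with no loss and a 'bad' state where packets are dropped with high
    probability; the state transitions make losses bursty rather than
    independent. All parameter values here are illustrative."""
    rng = random.Random(seed)
    state, dropped = "good", []
    for _ in range(n_packets):
        if state == "good" and rng.random() < p_good_to_bad:
            state = "bad"
        elif state == "bad" and rng.random() < p_bad_to_good:
            state = "good"
        dropped.append(state == "bad" and rng.random() < loss_in_bad)
    return dropped


losses = gilbert_losses(10_000)
print(f"loss rate: {sum(losses) / len(losses):.3f}")  # low average, but bursty
```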

Critique

I found this paper to be more like '101 ways not to model your wireless network'. The majority of the paper consists of identifying various flaws in existing models, which may or may not accurately depict the shortcomings of those models; it may very well be that a given model worked fine in the specific scenario it was built for, which was the broader point its authors were trying to make. This reminded me of something Prof. Joe Hellerstein said in our systems class: many system designers make the mistake of exposing so many gears to the user that in the end the user has no idea which gear to turn to do a specific job. I think that having such a generic model may create exactly this scenario, with so many parameters to manipulate that it ends up making modeling much harder for researchers who do not have a complete and exhaustive knowledge of the system (which may be good or bad...).