Tuesday, November 24, 2009

Not-a-Bot: Improving Service Availability in the Face of Botnet Attacks

R. Gummadi, H. Balakrishnan, P. Maniatis, S. Ratnasamy, "Not-a-Bot: Improving Service Availability in the Face of Botnet Attacks," NSDI'09, (April 2009).
This paper addresses the problem of identifying traffic sent from the compromised machines that form botnets. It aims to separate human-generated traffic from botnet-generated traffic based on the observation that human-generated traffic is generally accompanied by keyboard or mouse input. The authors designed a system called NAB ("Not-A-Bot") to approximately identify human-generated activity. It uses a trusted software component called an 'attester', which runs on the client machine alongside an untrusted OS. The authors implemented their attester in the Xen hypervisor but claimed that other implementations, such as building the attester into trusted hardware or running it in software without virtualization, are also possible. (The last option actually looks very promising, and I wonder why it was not done.)


Design

The authors used the TPM and built the attester on top of it. A TPM provides many security services, among them the ability to measure and attest to the integrity of trusted software running on the computer at boot time. The attester relies on two key primitives provided by TPMs: direct anonymous attestation (DAA) and sealed storage. The basis of the design is that whenever an application sends a message, it has to obtain an attestation from the attester. The attestation contains the hash of the message, a nonce, and range values, signed with the attester's private key, along with its certificate. The attestation is granted only if the request is generated within some interval (delta-m and delta-k) of keyboard and mouse activity (with certain exceptions, such as scripted mails, which use the PID). On the other end there is a verifier, which can implement various policies based on the attestation. It could drop non-attested packets altogether, but practically the authors talk about giving low priority to non-attested packets, or rather making sure that attested packets always get through. Based on this, the paper describes implementations of effective spam mitigation, DDoS prevention, and click-fraud mitigation.
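To make the attest/verify flow concrete, here is a minimal Python sketch of the idea. It is not the paper's implementation: an HMAC with a shared key stands in for the TPM-backed DAA signature and certificate, the single `DELTA` window stands in for the paper's delta-m/delta-k parameters, and all names and values are hypothetical.

```python
import hashlib
import hmac
import os

# Hypothetical symmetric key, for illustration only; the real attester signs
# with a TPM-protected private key and ships a certificate with the attestation.
ATTESTER_KEY = os.urandom(32)

DELTA = 1.0  # seconds: request must follow keyboard/mouse activity this closely


def attest(message: bytes, last_input_time: float, now: float):
    """Attester side: attest a message only if it follows recent input activity."""
    if now - last_input_time > DELTA:
        return None  # no recent human input: refuse to attest
    nonce = os.urandom(16)  # lets the verifier reject replayed attestations
    digest = hashlib.sha256(message).digest()
    tag = hmac.new(ATTESTER_KEY, digest + nonce, hashlib.sha256).digest()
    return {"hash": digest, "nonce": nonce, "tag": tag}


def verify(message: bytes, att) -> bool:
    """Verifier side: recompute the tag (a real verifier would also keep a
    nonce cache to catch replays, and could deprioritize rather than drop)."""
    if att is None:
        return False
    digest = hashlib.sha256(message).digest()
    expected = hmac.new(ATTESTER_KEY, digest + att["nonce"], hashlib.sha256).digest()
    return hmac.compare_digest(expected, att["tag"])
```

A request issued 0.5 s after a keystroke would get an attestation and verify; one issued 10 s after the last input would get `None` and fall into the low-priority path.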

Comments

Overall, this was an excellent paper. It tackled a very practical problem and came up with an at least partially practical solution. However, I have some comments on the effectiveness of the protocol:

  1. It may well be possible to build adaptive botnets that send data only when there is mouse or keyboard activity. In the worst case this would mean the botnet sends its data in bursts, which might at least get the sender throttled.
  2. The current design and implementation involve modifying applications and protocols (to accommodate the attester data). This raises concerns about deployability.
  3. Although the authors mention that they do not intend to 'punish' applications that do not use attesters, I felt that giving priority to attested traffic works against this very goal.
  4. The evaluations showed that 0.08% of human-generated mail was still misclassified as spam. Shouldn't it ideally be zero now? Given that they already handle script-generated mail, what could be the cause of these emails?
  5. It was a bit of a surprise to me when the authors claimed that 600 GB of disk overhead for storing nonces is not much for an ISP. Is that scalable?
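To put point 1 in rougher quantitative terms, here is a back-of-envelope simulation (all parameters hypothetical: a one-second attestation window, a user who alternates 10-minute active and idle periods, ~3 keystrokes/s while active) of what fraction of the time an adaptive bot piggybacking on real keystrokes could obtain attestations:

```python
import random

random.seed(0)

DELTA = 1.0        # hypothetical attester window, seconds
SESSION = 3600.0   # simulate one hour
TYPING_RATE = 3.0  # keystrokes per second while the user is active

# Generate keystroke timestamps: the user alternates 10-minute
# active/idle periods and types at TYPING_RATE while active.
keystrokes = []
t = 0.0
while t < SESSION:
    active = (int(t) // 600) % 2 == 0
    t += random.expovariate(TYPING_RATE) if active else 1.0
    if active and t < SESSION:
        keystrokes.append(t)

# Total time covered by the union of [k, k + DELTA] windows: an adaptive
# bot timing its requests to keystrokes can be attested anywhere in here.
covered = 0.0
last_end = 0.0
for k in keystrokes:
    start = max(k, last_end)
    end = k + DELTA
    if end > start:
        covered += end - start
        last_end = end

print(f"fraction of the hour a bot could obtain attestations: {covered / SESSION:.0%}")
```

With dense typing, the windows nearly tile every active period, so the bot is only shut out while the user is idle; the attester throttles the bot into bursts rather than stopping it, which is what the worst case in point 1 amounts to.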

1 comment:

  1. I agree with your concern that NAB is punishing applications that don't use attesters. It seems like requests that were not attested will be treated like non-human generated requests and given the lowest priority, which clearly goes against one of their key requirements.
