Automatic Discovery of Protocol Manipulation Attacks in Large Scale Distributed Systems Implementations

Most distributed systems are designed to meet application-prescribed metrics that ensure availability and high performance. However, attacks can significantly degrade performance, limiting the practical utility of these systems in adversarial environments. In particular, compromised participants can manipulate protocol semantics through attacks that target the messages exchanged with honest participants.

Finding performance attacks in distributed system implementations is a very challenging task due to (1) the state-space explosion that occurs as attackers are modeled more realistically; (2) the diversity of programming languages, software, and operating systems, and the subtle interactions between software components; (3) the diversity of communication channels (wired or wireless, TCP or UDP, encrypted or unencrypted); (4) the difficulty of expressing performance as a system invariant; and (5) the difficulty of capturing real-world performance reproducibly, including not only the system's performance but also the network conditions under which that performance was obtained.

This project aims to build an easy-to-use, easy-to-maintain, low-cost platform for finding reproducible, real-world, high-impact performance attacks on distributed system implementations in realistic environments.
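As a toy illustration of the kind of attack such a platform targets (all names, timings, and parameters below are invented for illustration, not taken from the project), consider a compromised participant that follows the protocol but merely delays its acknowledgements. In a stop-and-wait exchange this alone sharply degrades an honest sender's throughput:

```python
# Toy simulation of a protocol manipulation attack: a compromised peer
# delays its acknowledgements, degrading an honest sender's throughput.
# All timings and parameters are illustrative.

def run_exchange(num_messages, ack_delay):
    """Simulate a stop-and-wait exchange; ack_delay is the receiver's
    added per-ACK delay in seconds (0.0 for an honest receiver)."""
    network_rtt = 0.01  # nominal round-trip time per message (seconds)
    total_time = 0.0
    for _ in range(num_messages):
        total_time += network_rtt + ack_delay  # sender waits for each ACK
    return num_messages / total_time  # throughput in messages per second

honest_tput = run_exchange(1000, ack_delay=0.0)
attacked_tput = run_exchange(1000, ack_delay=0.09)  # attacker delays every ACK

print(f"honest:   {honest_tput:.0f} msg/s")
print(f"attacked: {attacked_tput:.0f} msg/s")
```

The attack is hard to detect as a correctness violation because every message is well-formed; only the end-to-end performance, measured under known network conditions, reveals it — which is why the platform emphasizes reproducible performance measurement.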

Towards Intrusion Tolerant Clouds


The cloud computing paradigm has made the global IT infrastructure dependent on a relatively small number of very large distributed systems managed as clouds. To achieve adequate scale and availability, cloud computing systems need two distributed system capabilities: consistent global state replicated across the network, and a distributed messaging system that connects cloud components, transforming them into a cohesive system that largely manages itself autonomously. However, both capabilities are vulnerable to intrusions, and the algorithms and tools necessary to build them at cloud scale while guaranteeing their integrity and performance under intrusion attacks do not exist in practice. The goal of this project is to create and develop the replication and overlay messaging engines necessary to make public and private clouds resilient to intrusion attacks. Beyond supporting cloud infrastructure builders, the same technologies will also enable application builders to make their cloud applications more resilient to intrusions.
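As a simplified sketch of why replicated state helps tolerate intrusions (the code and values are illustrative, not the project's actual protocol), a client that reads from 3f+1 replicas and accepts only a value reported by a majority cannot be misled by up to f compromised replicas:

```python
# Toy illustration of intrusion-tolerant replication: a client queries
# 3f+1 replicas and accepts the value reported by a majority, so up to
# f compromised replicas cannot corrupt the result. Purely illustrative.
from collections import Counter

def majority_read(replies):
    """Return the value reported by more than half of the replicas,
    or None if no value has a strict majority."""
    value, count = Counter(replies).most_common(1)[0]
    return value if count > len(replies) // 2 else None

# f = 1, so 4 replicas; one is compromised and lies about the state.
honest_value = "balance=100"
replies = [honest_value, honest_value, honest_value, "balance=0"]
print(majority_read(replies))
```

Real intrusion-tolerant systems need far more than voting reads — Byzantine fault-tolerant agreement is required to keep the replicas' states consistent under attack in the first place — but the sketch shows the basic masking property replication provides.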

A Unifying Framework For Theoretical and Empirical Analysis of Secure Communication Protocols


Many networking protocols have been designed without security in mind, and many cryptographic schemes have been designed without practical deployments in mind. Moreover, most security-enhanced communication protocols still lack a provable-security treatment and hence formal security guarantees. This project aims to bridge the gap between protocol design, implementation, deployment, and security guarantees by developing a novel, general security framework that facilitates provable-security analyses of practical networking protocols. The project takes an interdisciplinary approach, combining concepts from applied cryptography and algorithms with implementation and empirical analyses to provide a unifying framework for studying and developing secure communication protocols. This joint design effort yields both new cryptographic foundations and fundamentally secure networking protocols.

Adversarial Machine Learning


Machine learning has numerous applications in security and privacy. For example, intrusion detection systems use signatures to detect known attacks, email systems use Bayesian filters to detect spam, and several systems use spatio-temporal outlier detection, decision trees, or support vector machines (SVMs) to filter malicious data or detect attacks. While machine learning techniques have been very useful in defending against adversaries who do not know the intricacies of the defense techniques themselves, they are less effective against adaptive, knowledgeable adversaries who exploit the specifics of machine learning defenses to bypass them or to render them less useful by triggering a high number of false alarms. In this project we focus on machine learning techniques that must operate in the presence of a diverse class of adversaries with sophisticated capabilities. The overarching goal of the project is to understand the attack-defense space when machine learning is used for security and privacy applications, to identify the vulnerabilities and limitations of different machine learning approaches, and to propose solutions that address them.
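A minimal sketch of the adaptive-adversary problem described above, using a made-up linear spam filter (the weights, words, and threshold are invented for illustration): an attacker who knows the model can append benign-looking words to push a spam message under the decision threshold without removing the spam payload.

```python
# Toy evasion attack on a linear "spam score" classifier, illustrating how
# an adaptive adversary can exploit a fixed ML defense. Weights and words
# are made up for illustration.

WEIGHTS = {"winner": 1.2, "free": 0.9, "prize": 1.0,        # spammy words
           "meeting": -1.0, "report": -0.8, "thanks": -0.7}  # benign words
THRESHOLD = 1.0  # score above this => classified as spam

def score(words):
    return sum(WEIGHTS.get(w, 0.0) for w in words)

def is_spam(words):
    return score(words) > THRESHOLD

def evade(words, budget=3):
    """Greedy attack: append the most negatively weighted words the
    classifier knows, without removing the spam payload."""
    benign = sorted((w for w, v in WEIGHTS.items() if v < 0),
                    key=lambda w: WEIGHTS[w])
    return words + benign[:budget]

msg = ["winner", "free", "prize"]
evaded = evade(msg)
print(is_spam(msg), is_spam(evaded))
```

Hardening the filter (e.g., retraining on evaded messages) invites the next round of adaptation — exactly the attack-defense arms race this project studies.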