Adversarial Machine Learning



Machine learning has numerous applications in security and privacy. For example, intrusion detection systems use signatures to detect known attacks, email systems use Bayesian filters to detect spam, and several protocols use spatio-temporal outlier detection, decision trees, or support vector machines (SVMs) to filter malicious data or detect attacks. While machine learning techniques have been very effective against adversaries who are unaware of the intricacies of the defense itself, they are far less effective against adaptive, knowledgeable adversaries who exploit the specifics of a machine learning defense to bypass it or to degrade it by inducing a high rate of false alarms. In this project we focus on machine learning techniques that must operate in the presence of a diverse class of adversaries with sophisticated capabilities. The overarching goal of the project is to understand the attack-defense space when machine learning is used for security and privacy applications, to identify the vulnerabilities and limitations of different machine learning approaches, and to propose solutions that address them.
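To make the threat concrete, the sketch below trains a minimal Laplace-smoothed naive Bayes spam filter (the kind of Bayesian filter mentioned above) on toy token lists, then shows a classic evasion tactic sometimes called good-word injection: the attacker pads a spam message with words common in legitimate mail until the classifier's decision flips. The corpora, tokens, and messages are invented for illustration and are not drawn from the project's papers.

```python
import math
from collections import Counter

# Toy labeled corpora; in practice these would come from real email data.
spam_docs = [
    "win free money now", "free prize claim now",
    "win cash prize free", "claim free money",
]
ham_docs = [
    "meeting schedule for monday", "project report attached",
    "lunch meeting on monday", "schedule the project review",
]

def train(docs):
    """Return per-token counts and the total token count for one class."""
    counts = Counter(tok for d in docs for tok in d.split())
    return counts, sum(counts.values())

spam_counts, spam_total = train(spam_docs)
ham_counts, ham_total = train(ham_docs)
vocab = set(spam_counts) | set(ham_counts)

def log_posterior(tokens, counts, total):
    # Uniform class prior; add-one (Laplace) smoothing over the shared vocab.
    return sum(
        math.log((counts[t] + 1) / (total + len(vocab)))
        for t in tokens
    )

def is_spam(message):
    tokens = message.split()
    return (log_posterior(tokens, spam_counts, spam_total)
            > log_posterior(tokens, ham_counts, ham_total))

original = "win free prize now"
evasive = original + " meeting schedule project report monday review"

print(is_spam(original))  # True: the unmodified spam is flagged
print(is_spam(evasive))   # False: padded with ham-like words, it slips through
```

The injected words never remove the spammy tokens; they simply shift the likelihood ratio toward the ham class, illustrating why a defense tuned against non-adaptive senders can fail once the adversary knows how the filter scores messages.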


    Publications

    On the Practicality of Integrity Attacks on Document-Level Sentiment Analysis. A. Newell, R. Potharaju, L. Xiang, and C. Nita-Rotaru. In the 7th ACM Workshop on Artificial Intelligence and Security (AISec), co-located with the 21st ACM CCS, Nov. 2014.
    Securing Application-Level Topology Estimation Networks: Facing the Frog-Boiling Attack. S. Becker, J. Seibert, C. Nita-Rotaru, and R. State. In International Symposium on Recent Advances in Intrusion Detection (RAID) 2011. [PDF] [BIBTEX]
    Applying Game Theory to Analyze Attacks and Defenses in Virtual Coordinate Systems. S. Becker, J. Seibert, D. Zage, C. Nita-Rotaru, and R. State. In International Conference on Dependable Systems and Networks (DSN) 2011. [PDF] [BIBTEX]


    Current Members

    • Wei Kong, Undergraduate student

    Previous Members

    • Sheila Becker, Ph.D. Dec. 2011, University of Luxembourg
    • Andrew Newell, Ph.D. Aug. 2014
    • Rahul Potharaju, Ph.D. May 2014
    • Jeffrey Seibert, Ph.D. May 2012
    • David Zage, Ph.D. May 2010
    • Luojie Xiang, M.S. May 2014


This project was partially funded by Verisign. Collaborators: Radu State, University of Luxembourg.