I am a graduate student in Computer Science and Engineering at The Pennsylvania State University. I work with Professor Patrick McDaniel on the security of deep learning. I am also a Google PhD Fellow in Security. Previously, I received my M.S. and B.S. in Engineering Sciences from the Ecole Centrale de Lyon in France. This website covers some of my background as well as projects and publications I have worked on. Feel free to contact me directly for more information.
Adversarial examples that affect one model often affect another model, even if the two models have different architectures or were trained on different training sets, so long as both models were trained to perform the same task. We introduce new transferability attacks between previously unexplored (substitute, victim) pairs of machine learning model classes, most notably SVMs and decision trees. We demonstrate our attacks on two commercial machine learning classification systems from Amazon (96.19% misclassification rate) and Google (88.94%) using only 800 queries of the victim model, thereby showing that existing machine learning approaches are in general vulnerable to systematic black-box attacks regardless of their structure. This paper is under peer review.
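To make the transfer phenomenon concrete, the sketch below trains two models of different classes on disjoint subsets of the same synthetic task, crafts perturbations against the linear SVM, and measures how many of them also fool the decision tree. It is only an illustration with scikit-learn stand-ins and an arbitrary perturbation budget; the models, crafting algorithms, datasets, and query budgets in the paper differ.

```python
# Illustrative sketch of cross-technique transferability (not the paper's exact experiments).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

# Two models trained for the same task, with different techniques and different training sets.
svm = LinearSVC(max_iter=5000).fit(X[:800], y[:800])
tree = DecisionTreeClassifier(random_state=0).fit(X[800:1600], y[800:1600])

# Craft perturbations against the SVM: for a linear model, the input gradient is its weight vector.
X_test = X[1600:]
w = svm.coef_[0]
eps = 0.5
direction = np.where(svm.predict(X_test) == 1, -1.0, 1.0)[:, None]  # push each point across the boundary
X_adv = X_test + eps * direction * np.sign(w)

fooled_svm = svm.predict(X_adv) != svm.predict(X_test)
fooled_tree = tree.predict(X_adv) != tree.predict(X_test)
print(f"SVM misclassification on its own adversarial examples: {fooled_svm.mean():.2%}")
print(f"Transfer to the decision tree: {fooled_tree.mean():.2%}")
```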
Machine learning models, including DNNs, have been shown to be vulnerable to adversarial samples: subtly modified malicious inputs, often indistinguishable from the originals to humans, crafted to compromise the integrity of the model's outputs. Adversarial examples are known to transfer from one model to another, even if the second model has a different architecture or was trained on a different training set. We introduce the first practical demonstration that this cross-model transfer phenomenon enables attackers to control a remotely hosted DNN with no access to the model, its parameters, or its training data. In one experiment, we force a DNN hosted by MetaMind (an online API for DNN classifiers) to misclassify inputs at a rate of 84.24%. This paper is under peer review.
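The black-box attack rests on the same transferability recipe: label a small query set with the remote model, fit a local substitute, craft adversarial inputs against the substitute, and send them back to the remote model. The sketch below is hypothetical, with a scikit-learn MLP standing in for the hosted API and the substitute attacked with the same fast-gradient-style step as in the previous sketch; it is not the paper's substitute training or crafting procedure.

```python
# Hypothetical sketch of a black-box substitute attack; the "remote" model stands in for a hosted API.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=3000, n_features=20, random_state=1)

# The remotely hosted classifier: the attacker can query predictions but sees no parameters.
remote = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=1).fit(X[:2000], y[:2000])

# Step 1: label a small query set using only the remote model's answers.
X_query = X[2000:2600]
y_query = remote.predict(X_query)

# Step 2: train a local substitute that the attacker fully controls.
substitute = LogisticRegression(max_iter=1000).fit(X_query, y_query)

# Step 3: craft adversarial inputs against the substitute (fast-gradient-style for a linear model).
X_test = X[2600:]
w = substitute.coef_[0]
eps = 0.5
direction = np.where(substitute.predict(X_test) == 1, -1.0, 1.0)[:, None]
X_adv = X_test + eps * direction * np.sign(w)

# Step 4: the attack succeeds when the *remote* model changes its answer.
success = np.mean(remote.predict(X_adv) != remote.predict(X_test))
print(f"Remote misclassification rate on transferred examples: {success:.2%}")
```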
I introduced a defensive mechanism called defensive distillation to reduce the effectiveness of adversarial samples, inputs crafted by adversaries with the intent of causing deep neural networks to misclassify. This work combines an analytical and empirical investigation of the generalizability and robustness properties granted by the use of defensive distillation during training. The study shows that defensive distillation can reduce the effectiveness of adversarial sample creation from 95% to less than 0.5% on a studied deep neural network. This work was accepted to the 37th IEEE Symposium on Security and Privacy.
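The core of the defense is distillation at a high softmax temperature: train an initial network at temperature T, use its softened probability vectors as labels to train a second network at the same temperature, and deploy that network at temperature 1. The sketch below illustrates this recipe with a softmax regression standing in for the DNN; the toy data, temperature, and learning rate are arbitrary, and the paper evaluates the defense on deep convolutional networks.

```python
# Minimal sketch of the defensive-distillation training recipe on a softmax-regression stand-in.
import numpy as np

rng = np.random.RandomState(0)

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def train(X, targets, T, epochs=300, lr=0.5):
    """Fit W so that softmax(X @ W / T) matches the target distributions (cross-entropy)."""
    W = np.zeros((X.shape[1], targets.shape[1]))
    for _ in range(epochs):
        probs = softmax(X @ W, T)
        grad = X.T @ (probs - targets) / (len(X) * T)  # cross-entropy gradient at temperature T
        W -= lr * grad
    return W

# Toy 3-class data with class-dependent means, plus one-hot ("hard") labels.
y = rng.randint(0, 3, size=600)
X = rng.randn(600, 10) + y[:, None]
hard = np.eye(3)[y]

T = 20.0  # distillation temperature

# Step 1: train the initial network at temperature T on hard labels.
W_teacher = train(X, hard, T)

# Step 2: the teacher's softened predictions become the new training labels.
soft = softmax(X @ W_teacher, T)

# Step 3: train the distilled network on the soft labels, still at temperature T.
W_distilled = train(X, soft, T)

# Step 4: at test time the distilled model runs back at temperature 1.
preds = softmax(X @ W_distilled, 1.0).argmax(axis=1)
print(f"Training accuracy of the distilled model: {(preds == y).mean():.2%}")
```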
I studied imperfections in the training phase of deep neural networks that make them vulnerable to adversarial samples: inputs crafted by adversaries with the intent of causing deep neural networks to misclassify. I formalized the space of adversaries against deep neural networks (DNNs) and introduced a novel class of algorithms to craft adversarial samples based on a precise understanding of the mapping between inputs and outputs of DNNs. This work was accepted at the 1st IEEE European Symposium on Security and Privacy.
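The underlying idea is to use the model's input-output mapping directly: compute how each input feature influences the class probabilities (the Jacobian) and perturb the few most influential features until the prediction flips. The snippet below only illustrates that idea on a softmax-regression toy model with made-up weights and an arbitrary step size; it is not the crafting algorithm introduced in the paper.

```python
# Illustrative sketch of Jacobian-guided adversarial crafting on a softmax-regression stand-in.
import numpy as np

rng = np.random.RandomState(0)

# A toy "trained model": fixed weights mapping 10 features to 3 classes.
W = rng.randn(10, 3)

def probs(x):
    z = x @ W
    e = np.exp(z - z.max())
    return e / e.sum()

def grad_target(x, target):
    # Analytic Jacobian row: d p_target / d x for a softmax-regression model.
    p = probs(x)
    return p[target] * (W[:, target] - W @ p)

def craft(x, target, eps=0.1, max_steps=200):
    """Greedily perturb the features that most increase the target class probability."""
    x_adv = x.copy()
    for _ in range(max_steps):
        if probs(x_adv).argmax() == target:
            break  # misclassification achieved
        g = grad_target(x_adv, target)
        i = np.argmax(np.abs(g))          # most influential feature per the Jacobian
        x_adv[i] += eps * np.sign(g[i])   # nudge it in the direction that helps the target
    return x_adv

x = rng.randn(10)
source = probs(x).argmax()
target = (source + 1) % 3
x_adv = craft(x, target)
print("source class:", source, "-> adversarial class:", probs(x_adv).argmax())
print("perturbation size (L1):", np.abs(x_adv - x).sum())
```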
I designed a fully automated and agile approach to access control enforcement in relational databases. My algorithms enforce any policy expressible in the high-level syntax of the Authorization Specification Language, including complex policies involving information flow control or dependencies on user history. This work was accepted at the 2015 Military Communications Conference.
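As a flavor of what a history-dependent policy looks like, the toy sketch below denies a write once a conflicting read has occurred for the same user. The rule, table names, and enforcement style are entirely hypothetical; they do not reflect the Authorization Specification Language syntax or the database-level enforcement mechanism developed in this work.

```python
# Purely illustrative sketch of a history-dependent access control rule.
from collections import defaultdict

history = defaultdict(set)  # user -> set of (action, table) pairs already performed

def authorized(user, action, table):
    # Hypothetical information-flow style rule: a user who has read "salaries"
    # may no longer write to "public_reports".
    if action == "write" and table == "public_reports" and ("read", "salaries") in history[user]:
        return False
    return True

def perform(user, action, table):
    if not authorized(user, action, table):
        raise PermissionError(f"{user} may not {action} {table}")
    history[user].add((action, table))
    print(f"{user} performs {action} on {table}")

perform("alice", "read", "salaries")
try:
    perform("alice", "write", "public_reports")
except PermissionError as e:
    print("denied:", e)
```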
I am currently contributing to this 10-year project aimed at developing the science of security. My work first focused on developing agile reconfigurations that improve security, which involved modeling system behavior to predict which reconfigurations would optimize security. I am now investigating deep learning in adversarial settings. My insights into computer security were recognized with a $3,000 prize from Microsoft.
During my summer internship in 2014, I developed a knowledge base application using Zend Framework and a relational database. I managed the software project from conception to production. This was a great opportunity to learn more about indexing and search techniques, as I developed my own search engine from scratch.
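The heart of such a search engine is an inverted index that maps each term to the documents containing it. The snippet below sketches that idea in a few lines; the actual project was built on Zend Framework and a relational database, so this Python code and its toy documents are purely illustrative.

```python
# Minimal illustration of the inverted-index idea behind a from-scratch search engine.
from collections import defaultdict

index = defaultdict(set)          # term -> set of document ids
documents = {
    1: "access control for relational databases",
    2: "deep learning in adversarial settings",
    3: "relational databases and deep learning",
}

# Indexing: record, for every term, which documents contain it.
for doc_id, text in documents.items():
    for term in text.lower().split():
        index[term].add(doc_id)

def search(query):
    """Return ids of documents containing every query term (boolean AND)."""
    terms = query.lower().split()
    if not terms:
        return set()
    results = index.get(terms[0], set()).copy()
    for term in terms[1:]:
        results &= index.get(term, set())
    return results

print(search("relational databases"))   # {1, 3}
print(search("deep learning"))          # {2, 3}
```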
I managed this one-year project for the Atomic Energy and Alternative Energies Commission and worked with 4 other students from Ecole Centrale de Lyon. The project is confidential, so I cannot give many details about its content. We programmed an algorithm in MATLAB to identify bacteria.
I built a 3D scanner with 3 other students from Ecole Centrale de Lyon. This exciting project was an excellent way to apply skills in computer science, mechanics, and electronics. I discovered a lot about 3D modeling software and laser cutting techniques while working in the FabLab of our school.
While studying advanced mathematics and computer science at Lycée Louis-le-Grand in Paris, I did some research on Elliptic Curve Cryptography. I learned a lot about public-key cryptography and managed to design and implement my own Elliptic Curve Cryptography algorithm!
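For readers curious about what that involves, the toy sketch below shows the elliptic-curve group operations at the heart of ECC (point addition and double-and-add scalar multiplication over a small prime field) and uses them for a Diffie-Hellman style key agreement. The curve, base point, and secrets are illustrative only and far too small to be secure; this is not the implementation I wrote at the time.

```python
# Toy sketch of the elliptic-curve arithmetic underlying ECC; parameters are far too small for real use.
p, a, b = 97, 2, 3            # curve y^2 = x^3 + 2x + 3 over F_97
O = None                      # point at infinity (group identity)

def add(P, Q):
    """Add two points on the curve."""
    if P is O:
        return Q
    if Q is O:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return O                                           # P + (-P) = O
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p   # tangent slope (doubling)
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p          # chord slope
    x3 = (lam * lam - x1 - x2) % p
    y3 = (lam * (x1 - x3) - y1) % p
    return (x3, y3)

def mul(k, P):
    """Scalar multiplication k*P by double-and-add."""
    R = O
    while k:
        if k & 1:
            R = add(R, P)
        P = add(P, P)
        k >>= 1
    return R

G = (3, 6)   # a point on the curve: 6^2 = 36 and 3^3 + 2*3 + 3 = 36 (mod 97)

# Toy Diffie-Hellman style exchange: both sides derive the same shared point.
alice_secret, bob_secret = 11, 19
shared_1 = mul(bob_secret, mul(alice_secret, G))
shared_2 = mul(alice_secret, mul(bob_secret, G))
print(shared_1, shared_2, shared_1 == shared_2)
```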
As a graduate student at Penn State, I am extending my knowledge of Computer Science and Engineering. I take a special interest in issues related to computer security and machine learning.