Welcome!

I am a graduate student in Computer Science and Engineering at The Pennsylvania State University. I work with Professor Patrick McDaniel on the security of deep learning. I am also a Google PhD Fellow in Security. Previously, I received my M.S. and B.S. in Engineering Sciences from the Ecole Centrale de Lyon in France. This website covers some of my background as well as publications I have worked on. Feel free to contact me directly for more information.

Download resume Contact Me Twitter LinkedIn GitHub Google Scholar


Machine Learning Privacy

Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data

Some machine learning applications involve training data that is sensitive, such as the medical histories of patients in a clinical trial. A model may inadvertently and implicitly store some of its training data; careful analysis of the model may therefore reveal sensitive information. To address this problem, we demonstrate a generally applicable approach to providing strong privacy guarantees for training data. The approach combines, in a black-box fashion, multiple "teacher" models trained on disjoint datasets, such as records from different subsets of users. Compared with previous work, the approach imposes only weak assumptions on how teachers are trained: it applies to any model, including non-convex models like DNNs. We achieve state-of-the-art privacy/utility trade-offs on MNIST and SVHN thanks to an improved privacy analysis and semi-supervised learning.

Download Paper GitHub Repository
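For intuition, the label aggregation at the heart of the approach can be sketched in a few lines: each teacher votes on an unlabeled query, Laplace noise is added to the vote counts, and the class with the highest noisy count becomes the label used to train the student. This is a minimal sketch with illustrative parameter values, not code from the paper's repository.

    import numpy as np

    def noisy_aggregate(teacher_preds, num_classes, gamma=0.05, rng=None):
        """Turn per-teacher predicted labels into a single noisy label.

        teacher_preds: 1-D array of class indices, one entry per teacher.
        gamma: inverse scale of the Laplace noise added to the vote counts;
               a smaller gamma means more noise and stronger privacy.
        """
        rng = rng or np.random.default_rng()
        votes = np.bincount(teacher_preds, minlength=num_classes).astype(float)
        votes += rng.laplace(loc=0.0, scale=1.0 / gamma, size=num_classes)
        return int(np.argmax(votes))

    # Example: 250 teachers voting on one unlabeled student query.
    preds = np.random.default_rng(0).integers(0, 10, size=250)
    print(noisy_aggregate(preds, num_classes=10))

The student never sees the sensitive data or any individual teacher's vote, only these noisy labels, which is what the privacy analysis bounds.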

Black-Box Attack

Transferability in Machine Learning: from Phenomena to Black-Box Attacks using Adversarial Samples

Adversarial examples that affect one model often affect another model, even if the two models have different architectures or were trained on different training sets, so long as both models were trained to perform the same task. We introduce new transferability attacks between previously unexplored (substitute, victim) pairs of machine learning model classes, most notably SVMs and decision trees. We demonstrate our attacks on two commercial machine learning classification systems from Amazon (96.19% misclassification rate) and Google (88.94%) using only 800 queries of the victim model, thereby showing that existing machine learning approaches are in general vulnerable to systematic black-box attacks regardless of their structure. This paper is under peer review.

Download Paper
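As a rough illustration of the transferability measurement (on a toy dataset, not the experimental setup of the paper), one can craft perturbations against a simple substitute model and count how often they also fool a victim of a different model class trained separately:

    import numpy as np
    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_digits(return_X_y=True)
    X = X / 16.0  # scale pixel values to [0, 1]
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    substitute = LogisticRegression(max_iter=2000).fit(X_tr, y_tr)
    victim = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)

    # Fast-gradient-style perturbation computed only from the substitute:
    # move each input against the gradient of its true-class score.
    eps = 0.2
    W = substitute.coef_          # shape (n_classes, n_features)
    X_adv = np.clip(X_te - eps * np.sign(W[y_te]), 0.0, 1.0)

    print("victim accuracy on clean inputs:      ", victim.score(X_te, y_te))
    print("victim accuracy on adversarial inputs:", victim.score(X_adv, y_te))

The drop in the victim's accuracy on X_adv, even though no gradient of the victim was ever used, is the transferability phenomenon the paper quantifies across (substitute, victim) pairs.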

Black-Box Attack

Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples

Machine learning models, including DNNs, have been shown to be vulnerable to adversarial samples: malicious inputs modified subtly, often imperceptibly to humans, to compromise the integrity of the models' outputs. Adversarial examples are known to transfer from one model to another, even if the second model has a different architecture or was trained on a different dataset. We introduce the first practical demonstration that this cross-model transfer phenomenon enables attackers to control a remotely hosted DNN with no access to the model, its parameters, or its training data. In one experiment, we force a DNN hosted by MetaMind (one of the online APIs for DNN classifiers) to misclassify inputs at a rate of 84.24%. This paper is under peer review.

Download Paper Popular Science Article
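The attack boils down to training a local substitute using only label queries to the remote model, then transferring adversarial examples crafted on that substitute. The schematic loop below follows the Jacobian-based dataset augmentation idea; oracle_label, fit_model, and jacobian are placeholder callables standing in for the remote API, the local training routine, and the substitute's gradient, respectively.

    import numpy as np

    def train_substitute(oracle_label, initial_inputs, fit_model, jacobian,
                         rounds=5, lam=0.1):
        """Substitute training with Jacobian-based dataset augmentation.

        oracle_label(X) -> labels returned by the remote model (query access only)
        fit_model(X, y) -> a locally trained substitute model
        jacobian(model, x, y) -> gradient of the substitute's class-y score at x
        """
        X = np.asarray(initial_inputs, dtype=float)
        for _ in range(rounds):
            y = oracle_label(X)          # the only interaction with the victim
            model = fit_model(X, y)      # refine the local substitute
            # Augment the synthetic dataset by stepping each point in the
            # direction that most changes the substitute's output for it.
            X_new = np.array([x + lam * np.sign(jacobian(model, x, yi))
                              for x, yi in zip(X, y)])
            X = np.concatenate([X, X_new], axis=0)
        return model

Adversarial examples are then crafted against the returned substitute (for instance with a fast-gradient or saliency-map method) and submitted to the remote model.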

Deep Learning Security

Distillation as a Defense to Adversarial Perturbations against Deep Neural Networks

I introduced a defensive mechanism called defensive distillation to reduce the effectiveness of adversarial samples, inputs crafted by adversaries with the intent of causing deep neural networks to misclassify. This work combines an analytical and empirical investigation of the generalizability and robustness properties granted by the use of defensive distillation during training. The study shows that defensive distillation can reduce the success rate of adversarial sample crafting from 95% to less than 0.5% on a studied deep neural network. This work was accepted to the 37th IEEE Symposium on Security and Privacy.

Download Paper Presentation
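At the core of the defense is a softmax taken at a high temperature T: the first network is trained at temperature T, its softened probability vectors become the labels used to train the distilled network at the same temperature, and the distilled network is deployed back at T = 1. The sketch below shows only the temperature softmax and the soft-label loss (T = 20 is an illustrative value; network definitions and the training loop are omitted).

    import numpy as np

    def softmax_T(logits, T):
        """Softmax at temperature T; T > 1 yields softer probability vectors."""
        z = logits / T
        z = z - z.max(axis=-1, keepdims=True)   # numerical stability
        e = np.exp(z)
        return e / e.sum(axis=-1, keepdims=True)

    def distillation_targets(teacher_logits, T=20.0):
        """Soft labels produced by the first network, used to train the second."""
        return softmax_T(teacher_logits, T)

    def soft_label_loss(student_logits, soft_targets, T=20.0):
        """Cross-entropy of the distilled network's T-softmax vs. the soft labels."""
        log_p = np.log(softmax_T(student_logits, T) + 1e-12)
        return -(soft_targets * log_p).sum(axis=-1).mean()

Training on these smoothed targets reduces the amplitude of the gradients an adversary exploits when crafting perturbations, which is where the drop in attack success comes from.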

Deep Learning Security

The Limitations of Deep Learning in Adversarial Settings

I studied imperfections in the training phase of deep neural networks that make them vulnerable to adversarial samples: inputs crafted by adversaries with the intent of causing deep neural networks to misclassify. I formalized the space of adversaries against deep neural networks (DNNs) and introduced a novel class of algorithms to craft adversarial samples based on a precise understanding of the mapping between inputs and outputs of DNNs. This work was accepted at the 1st IEEE European Symposium on Security and Privacy.

Download Paper
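The crafting algorithms build a saliency map from the forward derivative (the Jacobian of the DNN's outputs with respect to its inputs) and greedily perturb the most salient input features toward a target class. Below is a simplified single-feature sketch of that idea; the algorithm described in the paper searches over pairs of features and iterates until the target class is reached or a distortion budget is exhausted.

    import numpy as np

    def saliency_map(jacobian, target):
        """Per-feature saliency scores from the forward derivative.

        jacobian: array of shape (n_classes, n_features), dF/dx at the input.
        A feature is salient if increasing it raises the target-class output
        while lowering the combined output of all other classes.
        """
        alpha = jacobian[target]                    # effect on the target class
        beta = jacobian.sum(axis=0) - alpha         # effect on all other classes
        return np.where((alpha > 0) & (beta < 0), alpha * np.abs(beta), 0.0)

    def jsma_step(x, jacobian, target, theta=1.0):
        """Perturb the single most salient feature by theta, clipped to [0, 1]."""
        i = int(np.argmax(saliency_map(jacobian, target)))
        x = x.copy()
        x[i] = np.clip(x[i] + theta, 0.0, 1.0)
        return x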



Ph.D. & M.S. in Computer Science and Engineering

As a graduate student at Penn State, I am deepening my knowledge of Computer Science and Engineering, with a particular interest in computer security and machine learning.


M.S. in Engineering Sciences

As a graduate student at Ecole Centrale de Lyon, I studied a broad range of engineering fields, including Computer Science, Electrical Engineering, Mathematics, and Signal Processing.



Education

View details »


Experience

View details »


Publications

View details »


Contact

View details »