Welcome!

I am a graduate student in Computer Science and Engineering at The Pennsylvania State University, where I work with Professor Patrick McDaniel on the security of deep learning. I am also a Google PhD Fellow in Security. Previously, I received my M.S. and B.S. in Engineering Sciences from the Ecole Centrale de Lyon in France. This website covers some of my background as well as projects and publications I have worked on. Feel free to contact me directly for more information.

Download resume Contact Me Twitter LinkedIn

Black-Box Attack

Transferability in Machine Learning: from Phenomena to Black-Box Attacks using Adversarial Samples

Adversarial examples that affect one model often affect another model, even if the two models have different architectures or were trained on different training sets, so long as both models were trained to perform the same task. We introduce new transferability attacks between previously unexplored (substitute, victim) pairs of machine learning model classes, most notably SVMs and decision trees. We demonstrate our attacks on two commercial machine learning classification systems from Amazon (96.19% misclassification rate) and Google (88.94%) using only 800 queries of the victim model, thereby showing that existing machine learning approaches are in general vulnerable to systematic black-box attacks regardless of their structure. This paper is under peer review.
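
To make the transfer concrete, here is a minimal sketch of a transferability measurement. It does not reproduce the paper's setup: the dataset, model classes, and perturbation budget below are illustrative choices. Adversarial examples are crafted against a substitute model and then evaluated against an independently trained victim.

```python
# Minimal sketch: adversarial examples crafted on a substitute model
# often transfer to a victim model trained on the same task.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_digits(return_X_y=True)
X = X / 16.0                       # scale pixel values to [0, 1]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

substitute = LogisticRegression(max_iter=1000).fit(X_train, y_train)
victim = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

# Fast-gradient-style perturbation computed on the substitute ONLY:
# push each input against the weights of its predicted class.
eps = 0.3
W = substitute.coef_[substitute.predict(X_test)]
X_adv = np.clip(X_test - eps * np.sign(W), 0.0, 1.0)

print("victim accuracy on clean inputs:      ", victim.score(X_test, y_test))
print("victim accuracy on adversarial inputs:", victim.score(X_adv, y_test))
```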

Download Paper

Black-Box Attack

Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples

Machine learning models, including DNNs, were shown to be vulnerable to adversarial samples: subtly (and often imperceptibly) modified malicious inputs crafted to compromise the integrity of their outputs. Adversarial examples are known to transfer from one model to another, even if the second model has a different architecture or was trained on a different dataset. We introduce the first practical demonstration that this cross-model transfer phenomenon enables attackers to control a remotely hosted DNN with no access to the model, its parameters, or its training data. In one experiment, we force a DNN hosted by MetaMind (an online API for DNN classifiers) to misclassify inputs at a rate of 84.24%. This paper is under peer review.
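
The substitute-training idea can be sketched as follows. This is not the paper's procedure: a locally trained classifier stands in for the remote, label-only API, and plain random-direction augmentation stands in for the Jacobian-based augmentation used in the paper.

```python
# Minimal sketch of black-box substitute training. The "victim" below is a
# stand-in for a remote API that returns only labels; in the real attack
# the attacker never sees its parameters or training data.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X = X / 16.0
X_oracle, X_seed, y_oracle, _ = train_test_split(X, y, test_size=50,
                                                 random_state=0)
victim = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500,
                       random_state=0).fit(X_oracle, y_oracle)

rng = np.random.default_rng(0)
X_sub = X_seed.copy()                        # small seed set, no true labels
for _ in range(4):
    y_sub = victim.predict(X_sub)            # label-only oracle queries
    substitute = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                               random_state=0).fit(X_sub, y_sub)
    # Grow the synthetic training set around the current points
    # (random directions stand in for Jacobian-based augmentation).
    step = 0.1 * np.sign(rng.standard_normal(X_sub.shape))
    X_sub = np.vstack([X_sub, np.clip(X_sub + step, 0.0, 1.0)])

# Adversarial examples crafted against `substitute` (e.g. with the
# fast-gradient sketch above) are then expected to transfer to `victim`.
print("substitute agreement with victim:",
      (substitute.predict(X_oracle) == victim.predict(X_oracle)).mean())
```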

Download Paper Popular Science Article

Deep Learning Security

Distillation as a Defense to Adversarial Perturbations against Deep Neural Networks.

I introduced a defensive mechanism called defensive distillation to reduce the effectiveness of adversarial samples: inputs crafted by adversaries with the intent of causing deep neural networks to misclassify. This work combines an analytical and empirical investigation of the generalizability and robustness properties granted by the use of defensive distillation during training. The study shows that defensive distillation can reduce the success rate of adversarial sample crafting from 95% to less than 0.5% on a studied deep neural network. This work was accepted to the 37th IEEE Symposium on Security and Privacy.
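
The recipe can be sketched on a toy model. This is not the paper's experimental setup: a softmax-regression model stands in for the deep networks, and the temperature and training loop below are illustrative.

```python
# Minimal, runnable sketch of the defensive distillation recipe, using a
# softmax-regression model as a stand-in for the paper's deep networks.
import numpy as np
from sklearn.datasets import load_digits

def softmax(logits, T=1.0):
    z = logits / T                         # temperature-scaled softmax
    z = z - z.max(axis=1, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def train(X, targets, T, lr=0.5, epochs=300):
    # Gradient descent on cross-entropy between softmax(XW / T) and the
    # targets; the constant 1/T gradient factor is folded into the rate.
    W = np.zeros((X.shape[1], targets.shape[1]))
    for _ in range(epochs):
        p = softmax(X @ W, T)
        W -= lr * X.T @ (p - targets) / len(X)
    return W

X, y = load_digits(return_X_y=True)
X = X / 16.0
hard_labels = np.eye(10)[y]

T = 20.0                                   # distillation temperature
W_initial = train(X, hard_labels, T)       # 1. train at temperature T
soft_labels = softmax(X @ W_initial, T)    # 2. extract soft labels
W_distilled = train(X, soft_labels, T)     # 3. train distilled model at T

# 4. Deploy the distilled model at T=1 (the argmax is unchanged, but the
#    output distribution is much sharper).
pred = softmax(X @ W_distilled).argmax(axis=1)
print("distilled model training accuracy:", (pred == y).mean())
```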

Download Paper Presentation

Deep Learning Security

The Limitations of Deep Learning in Adversarial Settings. Research on deep learning security.

I studied imperfections in the training phase of deep neural networks that make them vulnerable to adversarial samples: inputs crafted by adversaries with the intent of causing deep neural networks to misclassify. I formalized the space of adversaries against deep neural networks (DNNs) and introduced a novel class of algorithms to craft adversarial samples based on a precise understanding of the mapping between inputs and outputs of DNNs. This work was accepted at the 1st IEEE European Symposium on Security and Privacy.
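
The saliency-map idea behind these algorithms can be sketched on a linear model, where the Jacobian of the outputs with respect to the input is available in closed form. This is a simplification, not the paper's algorithm: the model and the greedy pixel-saturation step below are illustrative.

```python
# Minimal sketch of a Jacobian-based saliency map attack on a linear model.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression

X, y = load_digits(return_X_y=True)
X = X / 16.0
model = LogisticRegression(max_iter=1000).fit(X, y)

x = X[0].copy()
target = (y[0] + 1) % 10                 # arbitrary adversarial target class
print("before:", model.predict(x.reshape(1, -1))[0], "target:", target)

# For a linear model, d(logit_c)/dx_i is simply coef_[c, i].
jac_target = model.coef_[target]                   # raises the target logit
jac_others = model.coef_.sum(axis=0) - jac_target  # raises the other logits

# Saliency: features that raise the target logit while lowering the rest.
saliency = np.where((jac_target > 0) & (jac_others < 0),
                    jac_target * -jac_others, 0.0)

# Greedily saturate the most salient features until the label flips.
for i in np.argsort(saliency)[::-1]:
    x[i] = 1.0
    if model.predict(x.reshape(1, -1))[0] == target:
        break
print("after: ", model.predict(x.reshape(1, -1))[0])
```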

Download Paper

Database Security

Enforcing Agile Access Control Policies in Relational Databases using Views. Research on database security.

I designed a fully automated and agile approach to access control enforcement in relational databases. My algorithms enforce any policy expressible in the high-level syntax of the Authorization Specification Language (ASL), including complex policies involving information flow control or user history dependencies. This work was accepted at the 2015 Military Communications Conference.
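
The core idea of routing queries through a policy-derived view can be sketched in a few lines. The schema and policy below are invented for illustration; the paper's algorithms compile much richer ASL policies automatically.

```python
# Minimal sketch of enforcing a row-level policy through a view, in SQLite.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE records(id INTEGER, owner TEXT, body TEXT);
INSERT INTO records VALUES (1, 'alice', 'alice-only data'),
                           (2, 'bob',   'bob-only data');

-- Policy: a user may read only the rows they own. SQLite has no session
-- variables, so a one-row table stands in for the current user.
CREATE TABLE session(user TEXT);
INSERT INTO session VALUES ('alice');

CREATE VIEW my_records AS
  SELECT r.id, r.body FROM records r, session s WHERE r.owner = s.user;
""")

# Applications query the view, never the base table.
print(db.execute("SELECT * FROM my_records").fetchall())
# -> [(1, 'alice-only data')]
```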

Download Paper Presentation

Cyber-Security Collaborative Research Alliance

Cyber-Security Collaborative Research Alliance. Research on computer security.

I am currently contributing to this 10-year project aimed at developing the science of security. My work first focused on developing agile reconfigurations that improve security, which involved modeling system behavior to predict which reconfigurations will optimize security. I am now investigating deep learning in adversarial settings. This work earned a $3,000 prize from Microsoft.

Download Paper View Microsoft Prize Page

Development of a knowledge base

Development of a knowledge base. Internship at Orange.

During my summer internship in 2014, I developed a knowledge base application using Zend Framework and a relational database. I managed the software project from conception to production. This was a great opportunity to learn more about indexing and search techniques, as I developed my own search engine from scratch.
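
The heart of such a from-scratch search engine is an inverted index. A minimal sketch follows; the documents and query are invented for illustration.

```python
# Minimal sketch of an inverted index with conjunctive (AND) queries.
from collections import defaultdict

docs = {
    1: "access control in relational databases",
    2: "indexing and search techniques",
    3: "relational databases and indexing",
}

# Build: each term maps to the set of document ids containing it.
index = defaultdict(set)
for doc_id, text in docs.items():
    for term in text.lower().split():
        index[term].add(doc_id)

def search(query):
    # A document matches only if it contains every query term.
    postings = [index[term] for term in query.lower().split()]
    return sorted(set.intersection(*postings)) if postings else []

print(search("relational indexing"))   # -> [3]
```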

Automation of image processing

Automation of image processing. Industrial project.

I managed this one-year project for the Atomic Energy and Alternative Energies Commission and worked with four other students from Ecole Centrale de Lyon. The project is confidential, so I cannot give many details about its content. We programmed an algorithm in MATLAB to identify bacteria.

Construction of a 3D scanner

Construction of a 3D scanner. Working in a FabLab.

I built a 3D scanner with three other students from Ecole Centrale de Lyon. This exciting project was an excellent way to apply skills in computer science, mechanics, and electronics. I discovered a lot about 3D modeling software and laser cutting techniques while working in our school's FabLab.

Read more about the project (in French)

Elliptic Curve Cryptography

Elliptic Curve Cryptography. Learning about security fundamentals.

While studying advanced mathematics and computer science at Lycée Louis-le-Grand in Paris, I did some research on Elliptic Curve Cryptography. I learned a lot about public key cryptography and managed to design and implement my own Elliptic Curve Cryptography scheme!
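
At the core of any such implementation are point addition on the curve and double-and-add scalar multiplication. A minimal sketch over a prime field follows; the curve parameters are invented and far too small for any real security.

```python
# Minimal sketch of elliptic curve arithmetic over a prime field: point
# addition and double-and-add scalar multiplication on the toy curve
# y^2 = x^3 + 2x + 3 (mod 97).
p, a = 97, 2
O = None                                  # point at infinity (identity)

def add(P, Q):
    if P is O: return Q
    if Q is O: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return O                          # P + (-P) = O
    if P == Q:
        s = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p   # tangent slope
    else:
        s = (y2 - y1) * pow(x2 - x1, -1, p) % p          # chord slope
    x3 = (s * s - x1 - x2) % p
    return (x3, (s * (x1 - x3) - y1) % p)

def mul(k, P):
    # Double-and-add: the operation behind ECC key pairs (public = k * G).
    R = O
    while k:
        if k & 1:
            R = add(R, P)
        P = add(P, P)
        k >>= 1
    return R

G = (3, 6)        # on the curve: 6^2 = 36 = 3^3 + 2*3 + 3 (mod 97)
print(mul(20, G))
```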

Seminar website (in French) Project paper (in French) Slides (in French)


Ph.D. & M.S. in Computer Science and Engineering

As a graduate student at Penn State, I am extending my knowledge in Computer Science and Engineering, with a special interest in computer security and machine learning.

View education details »

M.S. in Engineering Sciences

As a graduate student at Ecole Centrale de Lyon, I covered many different engineering fields, including computer science, electrical engineering, mathematics, and signal processing.

View education details »


Education

View details »

Experience

View details »

Publications

View details »

Contact

View details »