Constructing Unrestricted Adversarial Examples with Generative Models

This repo contains the code needed to reproduce the main results of the paper Constructing Unrestricted Adversarial Examples with Generative Models, NIPS 2018, Montréal, Canada.

by Yang Song, Rui Shu, Nate Kushman and Stefano Ermon, Stanford AI Lab.


We propose Unrestricted Adversarial Examples, a new kind of adversarial example for machine learning systems. Unlike traditional adversarial examples, which are crafted by adding norm-bounded perturbations to clean images, unrestricted adversarial examples are realistic images synthesized entirely from scratch, free of any small-norm-ball constraint. This attack demonstrates the danger of a stronger threat model, under which traditional defenses against perturbation-based adversarial examples fail.
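In symbols (our paraphrase, not the paper's exact notation): a perturbation-based attack outputs x_adv = x + δ with ‖δ‖_p ≤ ε for some clean image x, whereas an unrestricted attack outputs x_adv = g(z, y) directly from a conditional generator g, with no norm constraint tying x_adv to any particular source image.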

Datasets

Our experiments use the MNIST, SVHN, and CelebA datasets.

Running Experiments

Training AC-GANs

To mount an unrestricted adversarial attack, we first need a good conditional generative model so that we can search the manifold of realistic images for adversarial ones. Use train_acgan.py to train one. For example, the following command

CUDA_VISIBLE_DEVICES=0 python train_acgan.py --dataset mnist --checkpoint_dir checkpoints/

will train an AC-GAN on the MNIST dataset using GPU #0 and write the weight files to the checkpoints/ directory.

Run python train_acgan.py --help to see more available argument options.
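For orientation, here is a minimal sketch of the AC-GAN objective that the training script optimizes, written in PyTorch for brevity (the repo's actual implementation and framework may differ; all function and argument names below are illustrative):

    import torch
    import torch.nn.functional as F

    def acgan_d_loss(real_src, fake_src, real_cls, fake_cls, labels):
        # Source loss: discriminate real images from generated ones.
        src = (F.binary_cross_entropy_with_logits(real_src, torch.ones_like(real_src))
               + F.binary_cross_entropy_with_logits(fake_src, torch.zeros_like(fake_src)))
        # Auxiliary class loss: predict the labels of both real and fake images.
        cls = F.cross_entropy(real_cls, labels) + F.cross_entropy(fake_cls, labels)
        return src + cls

    def acgan_g_loss(fake_src, fake_cls, labels):
        # The generator tries to look real to the source head while producing
        # images the auxiliary head classifies as the intended label.
        src = F.binary_cross_entropy_with_logits(fake_src, torch.ones_like(fake_src))
        cls = F.cross_entropy(fake_cls, labels)
        return src + cls

The auxiliary classification head is what makes the generator conditional, which the attack below relies on to fix the ground-truth class of each synthesized image.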

Unrestricted Adversarial Attack

After the AC-GAN is trained, you can use main.py to run targeted or untargeted attacks. You can also use main.py to evaluate the accuracy and PGD-robustness of a trained neural network classifier. For example, the following command

CUDA_VISIBLE_DEVICES=0 python main.py --mode targeted_attack --dataset mnist --classifier zico --source 0 --target 1

attacks the provable defense method of Kolter & Wong, 2018 on the MNIST dataset, with source class 0 and target class 1.

Run python main.py --help to view more argument options. For the values of hyperparameters such as --noise, --lambda1, --lambda2, --eps, --z_eps, --lr, and --n_iters (in that order), please refer to Table 4 in the appendix of our paper.
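To make the search concrete, below is a hedged PyTorch sketch of the latent-space optimization behind a targeted attack (a paraphrase of the paper's objective, not main.py's exact code; the loss terms, signs, and the mapping of flags like --eps / --z_eps onto these arguments are assumptions, and all names are illustrative):

    import torch
    import torch.nn.functional as F

    def targeted_attack(g, victim, aux_cls, z0, y_source, y_target,
                        lambda1=1.0, lambda2=1.0, eps=0.1, lr=0.01, n_iters=100):
        # Search for z near z0 so that g(z, y_source) still looks like class
        # y_source (enforced softly via the AC-GAN auxiliary classifier) but
        # is classified as y_target by the victim classifier.
        # y_source / y_target are tensors of class indices.
        z = z0.clone().requires_grad_(True)
        opt = torch.optim.Adam([z], lr=lr)
        for _ in range(n_iters):
            x = g(z, y_source)                              # stay on the learned image manifold
            adv = F.cross_entropy(victim(x), y_target)      # push the victim toward the target class
            near = torch.clamp((z - z0).abs() - eps, min=0).mean()  # soft constraint keeping z near z0
            cls = F.cross_entropy(aux_cls(x), y_source)     # keep the ground-truth class y_source
            loss = adv + lambda1 * near + lambda2 * cls
            opt.zero_grad()
            loss.backward()
            opt.step()
        return g(z, y_source).detach()

Because the optimization variable is z rather than the pixels, every iterate remains a realistic image from the generator, which is what lets the attack escape the small norm-ball around any clean image.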

Evaluating Unrestricted Adversarial Examples

In the paper, we use Amazon Mechanical Turk to evaluate whether our unrestricted adversarial examples are legitimate. We provide HTML files for the labelling interface in the amt_websites folder.

Samples

Perturbation-based adversarial examples (top row) vs. unrestricted adversarial examples (bottom row):

[image: compare]

Targeted unrestricted adversarial examples against robust classifiers on MNIST (green borders denote legitimate unrestricted adversarial examples, while red borders denote illegitimate ones; the tiny white text at the top-left corner of a red image is the label given by the annotators):

[image: mnist]

We also have samples for the SVHN dataset:

[image: svhn]

Finally, here are the results for CelebA:

[image: celeba]

Citation

If you find the idea or code useful for your research, please consider citing our paper:

@inproceedings{song2018constructing,
  author = {Song, Yang and Shu, Rui and Kushman, Nate and Ermon, Stefano},
  booktitle = {Advances in Neural Information Processing Systems (NIPS)},
  title = {Constructing Unrestricted Adversarial Examples with Generative Models},
  year = {2018},
}