dlupi-heteroscedastic-dropout

Deep Learning Under Privileged Information Using Heteroscedastic Dropout (CVPR 2018, Official Repo)

This is the code for the paper:

Deep Learning Under Privileged Information Using Heteroscedastic Dropout
John Lambert*, Ozan Sener*, Silvio Savarese
Presented at CVPR 2018

The paper can be found on arXiv here.

This repository also includes an implementation for repeatable random data augmentation transformations, useful for transforming images and bounding boxes contained therein identically.
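As a minimal sketch of the idea (the class and method names below are illustrative, not this repo's API), a repeatable transform samples its random parameters once and then applies them identically to the image and to its boxes:

import random
from PIL import Image

class RepeatableRandomHorizontalFlip(object):
    """Sample the flip decision once; apply it identically
    to an image and to its bounding boxes."""
    def __init__(self, p=0.5):
        self.p = p
        self.flip = False

    def sample_params(self):
        # Fix the randomness for this example; reused by both calls below.
        self.flip = random.random() < self.p

    def apply_to_image(self, img):
        # img is a PIL Image
        return img.transpose(Image.FLIP_LEFT_RIGHT) if self.flip else img

    def apply_to_boxes(self, boxes, img_width):
        # boxes: list of (xmin, ymin, xmax, ymax) in pixel coordinates
        if not self.flip:
            return boxes
        return [(img_width - xmax, ymin, img_width - xmin, ymax)
                for (xmin, ymin, xmax, ymax) in boxes]

Calling sample_params() once per example guarantees the image and its bounding boxes undergo the same transformation.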

If you find this code useful for your research, please cite:

@InProceedings{Lambert_2018_CVPR,
author = {Lambert, John and Sener, Ozan and Savarese, Silvio},
title = {Deep Learning Under Privileged Information Using Heteroscedastic Dropout},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2018}
}

This repository provides training and evaluation code for the paper, along with implementations of various baselines that use privileged information.
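For orientation, here is a simplified sketch of the core idea suggested by the title: multiplicative Gaussian noise on the features, with the noise scale predicted from the privileged information x*. This is a conceptual illustration with hypothetical names and dimensions, not the exact module used in this repo.

import torch
import torch.nn as nn

class HeteroscedasticDropout(nn.Module):
    """Multiplicative Gaussian noise whose per-unit scale is predicted
    from privileged information x_star (conceptual sketch only)."""
    def __init__(self, priv_dim, feat_dim):
        super(HeteroscedasticDropout, self).__init__()
        # Small network mapping privileged features to a positive noise scale.
        self.sigma_net = nn.Sequential(
            nn.Linear(priv_dim, feat_dim),
            nn.Softplus(),
        )

    def forward(self, x, x_star):
        if not self.training:
            return x  # identity at test time, like standard dropout
        sigma = self.sigma_net(x_star)             # (batch, feat_dim)
        noise = 1.0 + sigma * torch.randn_like(x)  # mean-one multiplicative noise
        return x * noise

At training time the layer is called with both the features and the privileged features, e.g. layer(features, priv_features); at test time it reduces to the identity.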

Setup

All code is implemented in PyTorch.

First install PyTorch, torchvision, and CUDA. With Conda and Python 2.7 on Linux, the installation looks something like:

conda install pytorch torchvision -c pytorch

(Optional) GPU Acceleration

If you have an NVIDIA GPU, you can accelerate all operations with CUDA.

First install CUDA.
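You can verify that PyTorch detects the GPU:

import torch
print(torch.cuda.is_available())  # True if a usable NVIDIA GPU was found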

(Optional) cuDNN

When using CUDA, you can use cuDNN to accelerate convolutions.

First download cuDNN and copy the libraries to /usr/local/cuda/lib64/.
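Once installed, you can confirm that PyTorch sees cuDNN and enable its autotuner, which benchmarks convolution algorithms for fixed input sizes:

import torch

print(torch.backends.cudnn.is_available())  # True if cuDNN was found
print(torch.backends.cudnn.version())
torch.backends.cudnn.benchmark = True  # pick the fastest conv algorithms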

Pretrained CNN Models

Download all pretrained CNN models from Google Drive by running the script:

bash models/download_CNN_models.sh

Download ImageNet CLS-LOC

First, register and create an ImageNet account.

Next, download the 1.28 million training images.

Now, download the XML bounding box annotations, either via the link here (42.8 MB) or via the command line:

wget http://image-net.org/Annotation/Annotation.tar.gz

The XML annotations are stored in nested tar.gz files. They can be extracted recursively with tar, which takes around 10 minutes on a typical workstation:

mkdir bbox_annotation
tar -xvzf Annotation.tar.gz -C bbox_annotation
rm Annotation.tar.gz
cd bbox_annotation
for a in `ls -1 *.tar.gz`; do gzip -dc $a | tar xf -; done
rm *.tar.gz

Now, we have a directory called bbox_annotation/Annotation that contains .xml files with bounding box information for 3,627 classes ("synsets") of ImageNet. We will use only the 1000 classes featured in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) task.
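Each annotation file follows the PASCAL VOC layout; assuming that layout, a minimal parser looks like:

import xml.etree.ElementTree as ET

def parse_bboxes(xml_path):
    """Return (synset, xmin, ymin, xmax, ymax) tuples for one annotation file."""
    root = ET.parse(xml_path).getroot()
    boxes = []
    for obj in root.findall('object'):
        synset = obj.find('name').text  # e.g. 'n02500267'
        bb = obj.find('bndbox')
        coords = tuple(int(bb.find(t).text)
                       for t in ('xmin', 'ymin', 'xmax', 'ymax'))
        boxes.append((synset,) + coords)
    return boxes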

At this point, we'll arrange the image data into three folders: "train", "val", and "test". For reference, the image archives are roughly 56 GB (train.zip) and 6.3 GB (val.zip).

On the ILSVRC 2016 page on the ImageNet website, find and download the file named

ILSVRC2016_CLS-LOC.tar.gz

This is the Classification-Localization dataset (155 GB), unchanged since ILSVRC2012. There are 1,281,167 training images; the number of images per synset (category) ranges from 732 to 1,300. There are 50,000 validation images (50 per synset) and 100,000 test images. All images are in JPEG format.

It is arranged as follows: {split}/{synset_name}/{file_name}.JPEG

For example, ImageNet_2012/train/n02500267/02500267_2597.JPEG
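Since each split follows this layout, torchvision's ImageFolder can load it directly (the path below is a placeholder for wherever you unpacked the data):

import torchvision.datasets as datasets
import torchvision.transforms as transforms

train_set = datasets.ImageFolder(
    'ImageNet_2012/train',
    transform=transforms.Compose([
        transforms.RandomResizedCrop(224),
        transforms.ToTensor(),
    ]))
# Each item is (image_tensor, class_index), one class per synset folder.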

We will use the subset of CLS-LOC images that have bounding box annotations, then take subsets of these annotated images to evaluate sample efficiency. Run:

mkdir ImageNetLocalization
python cnns/imagenet/create_bbox_dataset.py
python cnns/imagenet/create_imagenet_test_set.py

Pretrained RNN Models

Training CNN Models From Scratch

The script train.py lets you train a new CNN model from scratch.

python cnns/train/train.py

By default this script runs on GPU; to run on CPU, remove the .cuda() lines within the code.
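Alternatively, the standard device-agnostic PyTorch pattern (a general idiom, not a flag this script exposes) avoids hand-editing .cuda() calls:

import torch
import torch.nn as nn

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = nn.Linear(10, 2).to(device)     # move parameters once
inputs = torch.randn(4, 10).to(device)  # move each batch the same way
outputs = model(inputs)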

License

Free for personal or research use; for commercial use please contact me.