This repository contains a Keras implementation of the SESEMI architecture for supervised and semi-supervised image classification, as described in the NeurIPS'19 LIRE Workshop paper:
Tran, Phi Vu (2019). Exploring Self-Supervised Regularization for Supervised and Semi-Supervised Learning.
The training and evaluation of the SESEMI architecture for supervised and semi-supervised learning are summarized as follows:
The code is tested on Ubuntu 16.04 with the following components:
This reference implementation loads all data into system memory and utilizes GPU for model training and evaluation. The following hardware specifications are highly recommended:
For training and evaluation, execute the following bash commands in the same directory where the code resides. Ensure the datasets have been downloaded into their respective directories.
```bash
# Set the PYTHONPATH environment variable.
$ export PYTHONPATH="/path/to/this/repo:$PYTHONPATH"

# Train and evaluate SESEMI.
$ python train_evaluate_sesemi.py \
    --network <network_str> \
    --dataset <dataset_str> \
    --labels <nb_labels> \
    --gpu <gpu_id>

# Train and evaluate SESEMI with unlabeled extra data from Tiny Images.
$ python train_evaluate_sesemi_tinyimages.py \
    --network <network_str> \
    --extra <nb_extra> \
    --gpu <gpu_id>
```
The required flags are:

- `<network_str>` refers to one of the `convnet`, `wrn`, or `nin` architectures;
- `<dataset_str>` refers to one of three supported datasets: `svhn`, `cifar10`, and `cifar100`;
- `<nb_labels>` is an integer denoting the number of labeled examples;
- `<nb_extra>` denotes the amount of unlabeled extra data to sample from Tiny Images;
- `<gpu_id>` is a string denoting the GPU device ID; defaults to `0` if not specified.
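The flag semantics above can be sketched as a minimal `argparse` interface. This is a hypothetical reconstruction for illustration only; the actual parser inside `train_evaluate_sesemi.py` may define its flags differently.

```python
import argparse

# Hypothetical sketch of the command-line interface described above;
# choices and defaults mirror the flag descriptions in this README.
def build_parser():
    parser = argparse.ArgumentParser(description='Train and evaluate SESEMI.')
    parser.add_argument('--network', choices=['convnet', 'wrn', 'nin'],
                        required=True, help='backbone architecture')
    parser.add_argument('--dataset', choices=['svhn', 'cifar10', 'cifar100'],
                        required=True, help='dataset to train on')
    parser.add_argument('--labels', type=int, required=True,
                        help='number of labeled examples')
    parser.add_argument('--gpu', type=str, default='0',
                        help='GPU device ID (defaults to 0)')
    return parser

# Example: parse an illustrative set of flag values.
args = build_parser().parse_args(
    ['--network', 'wrn', '--dataset', 'cifar10', '--labels', '4000'])
```

With these illustrative values, `args.gpu` falls back to its default of `'0'` because the flag was not specified.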