Perceptual Similarity Metric and Dataset [Project Page]

The Unreasonable Effectiveness of Deep Features as a Perceptual Metric
Richard Zhang, Phillip Isola, Alexei A. Efros, Eli Shechtman, Oliver Wang.
In CVPR, 2018.

This repository contains our perceptual metric (LPIPS) and dataset (BAPPS). It can also be used as a "perceptual loss". This uses PyTorch; a Tensorflow alternative is here.

Table of Contents

  1. Learned Perceptual Image Patch Similarity (LPIPS) metric
    a. Basic Usage - If you just want to run the metric through the command line, this is all you need.
    b. "Perceptual Loss" usage
    c. About the metric
  2. Berkeley-Adobe Perceptual Patch Similarity (BAPPS) dataset
    a. Download
    b. Evaluation
    c. About the dataset
    d. Train the metric using the dataset

(0) Dependencies/Setup

Installation

pip install -r requirements.txt

(1) Learned Perceptual Image Patch Similarity (LPIPS) metric

Evaluate the distance between image patches. Higher means further/more different. Lower means more similar.

(A) Basic Usage

(A.I) Line commands

Example scripts to compute the distance between 2 specific images, between all corresponding pairs of images in 2 directories, or between all pairs of images within a directory:

python compute_dists.py -p0 imgs/ex_ref.png -p1 imgs/ex_p0.png --use_gpu
python compute_dists_dirs.py -d0 imgs/ex_dir0 -d1 imgs/ex_dir1 -o imgs/example_dists.txt --use_gpu
python compute_dists_pair.py -d imgs/ex_dir_pair -o imgs/example_dists_pair.txt --use_gpu

(A.II) Python code

File test_network.py shows example usage. This snippet is all you really need.

import models

# 'net-lin' adds the learned linear calibration; 'alex' selects the AlexNet backbone (the default)
model = models.PerceptualLoss(model='net-lin', net='alex', use_gpu=use_gpu, gpu_ids=[0])
d = model.forward(im0, im1)

Variables im0 and im1 are PyTorch Tensors/Variables with shape Nx3xHxW (N patches of size HxW, RGB images scaled to [-1,+1]). The call returns d, a length-N Tensor/Variable of distances.
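For instance, here is a minimal sketch of preparing two of the example images for the snippet above (the PIL/NumPy loading code and the helper name im_to_tensor are illustrative, not part of the repository; the bundled scripts handle loading for you):

import numpy as np
import torch
from PIL import Image

def im_to_tensor(path):
    # Load an RGB image, scale to [0,1], then rescale to [-1,+1]
    im = np.asarray(Image.open(path).convert('RGB'), dtype=np.float32) / 255.
    im = im * 2. - 1.
    # HxWx3 -> 1x3xHxW
    return torch.from_numpy(im).permute(2, 0, 1).unsqueeze(0)

im0 = im_to_tensor('imgs/ex_ref.png')  # reference patch
im1 = im_to_tensor('imgs/ex_p0.png')   # distorted patch
d = model.forward(im0, im1)            # 'model' from the snippet above; length-1 tensor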

Run python test_network.py to compute the distance from the example reference image ex_ref.png to the distorted images ex_p0.png and ex_p1.png. Before running it - which do you think should be closer?

Some options set by default in model.initialize: net='alex' selects the backbone (AlexNet, the default; 'squeeze' and 'vgg' are also provided, see (C) below), and model='net-lin' adds the learned linear calibration on top of the backbone's features.
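For example, a sketch of selecting the other two backbones described in (C) below (the keyword names follow the snippet above; the commented line assumes the version flag from (C) is exposed as a constructor keyword):

import models

# VGG backbone with the learned linear calibration (the largest of the three variants)
model_vgg = models.PerceptualLoss(model='net-lin', net='vgg', use_gpu=True)

# SqueezeNet backbone (the smallest of the three variants)
model_squeeze = models.PerceptualLoss(model='net-lin', net='squeeze', use_gpu=True)

# Assumption: pin the initial release described in (C), if the keyword is exposed
# model_v0 = models.PerceptualLoss(model='net-lin', net='alex', version='0.0')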

(B) Backpropping through the metric

File perceptual_loss.py shows how to iteratively optimize using the metric. Run python perceptual_loss.py for a demo. The code can also be used to implement vanilla VGG loss, without our learned weights.
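For reference, a minimal sketch of such an optimization loop, assuming tensors prepared as in (A.II) and random placeholder data (illustrative only; perceptual_loss.py is the actual demo):

import torch
import models

model = models.PerceptualLoss(model='net-lin', net='alex', use_gpu=False)  # set use_gpu=True on a GPU machine

ref = torch.rand(1, 3, 64, 64) * 2. - 1.   # target patch in [-1,+1] (placeholder data)
pred = torch.rand(1, 3, 64, 64) * 2. - 1.  # image being optimized
pred.requires_grad_(True)

optimizer = torch.optim.Adam([pred], lr=1e-2)
for step in range(100):
    optimizer.zero_grad()
    dist = model.forward(pred, ref).mean()  # LPIPS distance, backpropped into pred
    dist.backward()
    optimizer.step()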

(C) About the metric

Higher means further/more different. Lower means more similar.

We found that deep network activations work surprisingly well as a perceptual similarity metric. This was true across network architectures (SqueezeNet [2.8 MB], AlexNet [9.1 MB], and VGG [58.9 MB] provided similar scores) and supervisory signals (unsupervised, self-supervised, and supervised all perform strongly). We slightly improved scores by linearly "calibrating" networks - adding a linear layer on top of off-the-shelf classification networks. We provide 3 variants, using linear layers on top of the SqueezeNet, AlexNet (default), and VGG networks.
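Concretely, the calibrated distance has the form below (in the paper's notation: \hat{y}^l are unit-normalized activations of layer l with spatial size H_l x W_l, and w_l are the learned per-channel weights; fixing w_l = 1 corresponds to using the network without our learned weights):

d(x, x_0) = \sum_l \frac{1}{H_l W_l} \sum_{h,w} \left\| w_l \odot \left( \hat{y}^l_{hw} - \hat{y}^l_{0hw} \right) \right\|_2^2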

If you use LPIPS in your publication, please specify which version you are using. The current version is 0.1. You can set version='0.0' for the initial release.

(2) Berkeley-Adobe Perceptual Patch Similarity (BAPPS) dataset

(A) Downloading the dataset

Run bash ./scripts/download_dataset.sh to download and unzip the dataset into directory ./dataset. It takes [6.6 GB] total. Alternatively, run bash ./scripts/get_dataset_valonly.sh to only download the validation set [1.3 GB].

(B) Evaluating a perceptual similarity metric on a dataset

Script test_dataset_model.py evaluates a perceptual model on a subset of the dataset.

The script takes three groups of flags: dataset flags (such as --dataset_mode and --datasets, which select which perceptual judgments and subsets to evaluate on), perceptual similarity model flags (such as --model and --net), and miscellaneous flags (such as --use_gpu and --batch_size).

An example usage is as follows:

python ./test_dataset_model.py --dataset_mode 2afc --datasets val/traditional val/cnn --model net-lin --net alex --use_gpu --batch_size 50

This would evaluate our model on the "traditional" and "cnn" validation datasets.

(C) About the dataset

The dataset contains two types of perceptual judgements: Two Alternative Forced Choice (2AFC) and Just Noticeable Differences (JND).

(1) 2AFC - Evaluators were given a patch triplet (1 reference + 2 distorted). They were asked to select which of the two distorted patches was "closer" to the reference.

Training sets contain 2 judgments/triplet.

Validation sets contain 5 judgments/triplet.

Each 2AFC subdirectory contains folders for the reference patches, the two distorted patches, and the aggregated human judgments.
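To make the 2AFC evaluation concrete, here is a hedged sketch of per-triplet scoring, in which a metric is credited with the fraction of humans it agrees with (the function name and the exact tie-handling are illustrative; the repository's scoring code may differ):

def two_afc_score(d0, d1, p1_fraction):
    # d0, d1: metric distances from the reference to patches p0 and p1
    # p1_fraction: fraction of human judges who picked p1 as closer
    if d0 < d1:                  # metric says p0 is closer
        return 1.0 - p1_fraction
    elif d1 < d0:                # metric says p1 is closer
        return p1_fraction
    return 0.5                   # ties get half credit

# Example: metric prefers p1, and 4 of 5 validation judges preferred p1 -> score 0.8
print(two_afc_score(d0=0.31, d1=0.12, p1_fraction=0.8))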

(2) JND - Evaluators were presented with two patches - a reference and a distorted - for a limited time. They were asked whether the two patches were the same (identical) or different.

Each set contains 3 human evaluations/example.

Each JND subdirectory contains folders for the reference patches, the distorted patches, and the human same/different judgments.
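Similarly, a hedged sketch of scoring a metric on the JND pairs: treat a smaller distance as higher confidence that a pair is "the same" and summarize with average precision over the human labels (sklearn and the placeholder values are used purely for illustration; the repository uses its own AP code, see Acknowledgements):

import numpy as np
from sklearn.metrics import average_precision_score

dists = np.array([0.05, 0.40, 0.12, 0.55])  # metric distances for each reference/distorted pair (placeholder values)
same = np.array([1, 0, 1, 0])               # assumption: binary labels derived from the 3 human judgments per pair

# Lower distance should mean "more likely the same", so negate before ranking
ap = average_precision_score(same, -dists)
print('JND average precision on this subset: %.3f' % ap)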

(D) Using the dataset to train the metric

See script train_test_metric.sh for an example of training and testing the metric. The script will train a model on the full training set for 10 epochs, and then test the learned metric on all of the validation sets. The numbers should roughly match the Alex - lin row in Table 5 in the paper. The code supports training a linear layer on top of an existing representation. Training will add a subdirectory in the checkpoints directory.

You can also train "scratch" and "tune" versions by running train_test_metric_scratch.sh and train_test_metric_tune.sh, respectively.

Docker Environment

Docker set up by SuperShinyEyes.

Citation

If you find this repository useful for your research, please use the following.

@inproceedings{zhang2018perceptual,
  title={The Unreasonable Effectiveness of Deep Features as a Perceptual Metric},
  author={Zhang, Richard and Isola, Phillip and Efros, Alexei A and Shechtman, Eli and Wang, Oliver},
  booktitle={CVPR},
  year={2018}
}

Acknowledgements

This repository borrows partially from the pytorch-CycleGAN-and-pix2pix repository. The average precision (AP) code is borrowed from the py-faster-rcnn repository. Backpropping through the metric was implemented by Angjoo Kanazawa.