TensorFlow port of the PyTorch implementation of the Learned Perceptual Image Patch Similarity (LPIPS) metric. The port is done by exporting the model from PyTorch to ONNX and then to TensorFlow.
```bash
git clone https://github.com/alexlee-gk/lpips-tensorflow.git
cd lpips-tensorflow
pip install -r requirements.txt
```
The `lpips` TensorFlow function works with individual images or batches of images.
It also works with images of any spatial dimensions, as long as the dimensions are at least the size of the network's receptive field.
This example computes the LPIPS distance between batches of images.
```python
import numpy as np
import tensorflow as tf
import lpips_tf

batch_size = 32
image_shape = (batch_size, 64, 64, 3)
image0 = np.random.random(image_shape)
image1 = np.random.random(image_shape)
image0_ph = tf.placeholder(tf.float32)
image1_ph = tf.placeholder(tf.float32)

distance_t = lpips_tf.lpips(image0_ph, image1_ph, model='net-lin', net='alex')

with tf.Session() as session:
    distance = session.run(distance_t, feed_dict={image0_ph: image0, image1_ph: image1})
```
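Because the placeholders above are created without a fixed shape, a single image can be scored by adding a batch dimension of one. A minimal NumPy-only sketch of that preprocessing, assuming a uint8 image as the starting point (the float range in [0, 1] matches the random inputs in the example above):

```python
import numpy as np

# Hypothetical single uint8 image (e.g. decoded from a file); shape is
# height x width x channels and need not be 64x64.
image = np.random.randint(0, 256, size=(72, 96, 3), dtype=np.uint8)

# Scale to floats in [0, 1] and add a leading batch dimension,
# giving shape (1, 72, 96, 3).
image_batch = image.astype(np.float32)[np.newaxis] / 255.0
```

The resulting `image_batch` can then be fed to either placeholder via `feed_dict`, exactly as in the batch example.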
To export additional models, fetch the PerceptualSimilarity submodule, put it on the `PYTHONPATH`, and install the development dependencies:

```bash
git submodule update --init --recursive
export PYTHONPATH=PerceptualSimilarity:$PYTHONPATH
pip install -r requirements-dev.txt
```
Then run the export script; the exported model is saved in the `models` directory.

```bash
python export_to_tensorflow.py --model net-lin --net alex
```
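The upstream LPIPS models also provide `vgg` and `squeeze` backbones in addition to `alex`. Assuming the export script forwards `--net` to the underlying PerceptualSimilarity models (only `alex` is shown above), those variants would be exported the same way:

```bash
# Assumed variants: mirrors the alex example above with the other
# backbone names defined by the upstream LPIPS implementation.
python export_to_tensorflow.py --model net-lin --net vgg
python export_to_tensorflow.py --model net-lin --net squeeze
```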