neural image analogies

Example images: arch, Sugar Steve, season transfer, Trump

This is basically an implementation of the "Image Analogies" paper. In our case, we use feature maps from VGG16. The patch matching and blending are inspired by the method described in "Combining Markov Random Fields and Convolutional Neural Networks for Image Synthesis". Effects similar to that paper can be achieved by turning off the analogy loss with --analogy-w=0 (or leaving it on!) and turning on the B/B' content weighting via the --b-content-w parameter. Also, instead of using brute-force patch matching, we use the PatchMatch algorithm to approximate the best patch matches. Brute-force matching can be re-enabled by setting --model=brute.

The initial code was adapted from the Keras "neural style transfer" example.

The example arch images are from the "Image Analogies" website. They have some other good examples from their own implementation that are worth a look. Their paper discusses the various applications of image analogies, so you might want to take a look for inspiration.

Installation

This requires either TensorFlow or Theano. If you don't have a GPU, you'll want to use TensorFlow. GPU users may find Theano to be faster, at the expense of longer startup times. Here's the Theano GPU guide.

Here's how to configure the backend with Keras and set your default device (e.g. cpu, gpu0).
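For example, you could pick the backend for a session with an environment variable and, if you're on Theano, choose the device with THEANO_FLAGS. This is just a minimal sketch; the device name gpu0 below is an assumption, so use whatever your machine actually exposes.

export KERAS_BACKEND=tensorflow                    # or: theano
export THEANO_FLAGS='device=gpu0,floatX=float32'   # Theano only; use device=cpu if you have no GPU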

To install via virtualenv run the following commands.

virtualenv venv
source venv/bin/activate
pip install neural-image-analogies

If you have trouble with the above method, follow these directions to install the latest Keras and Theano or TensorFlow.

The script make_image_analogy.py should now be on your path.

Before running this script, download the weights for the VGG16 model. This file contains only the convolutional layers of VGG16, which is 10% of the full size. Original source of full weights. The script assumes the weights are in the current working directory. If you place them somewhere else, make sure to pass the --vgg-weights=<location-of-the-weights.h5> parameter or set the VGG_WEIGHT_PATH environment variable.
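For example (the weight path below is an assumption; point it at wherever you actually saved the file):

export VGG_WEIGHT_PATH=/path/to/vgg16_weights.h5
# or pass it explicitly on each run:
make_image_analogy.py --vgg-weights=/path/to/vgg16_weights.h5 image-A image-A-prime image-B prefix_for_output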

Example script usage: make_image_analogy.py image-A image-A-prime image-B prefix_for_output

e.g.:

make_image_analogy.py images/arch-mask.jpg images/arch.jpg images/arch-newmask.jpg out/arch

The examples directory has a script, render_example.sh, which accepts an example name prefix and, optionally, the location of your VGG weights.

./render_example.sh arch /path/to/your/weights.h5

Currently, A and A' must be the same size; the same holds for B and B'. Output size is the same as image B, unless specified otherwise.

It's too slow

If you're not using a GPU, use TensorFlow. My MacBook Pro can render a 512x512 image in approximately 12 minutes using TensorFlow and --mrf-w=0. Here are some other options that mostly trade quality for speed.
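For instance, dropping the MRF loss entirely (the slowest part of the algorithm, per the Parameters section below) is one such trade-off. The arch inputs and the out/arch-fast prefix here are only illustrative:

make_image_analogy.py --mrf-w=0 images/arch-mask.jpg images/arch.jpg images/arch-newmask.jpg out/arch-fast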

I want it to look better

The default settings are somewhat lowered to give the average user a better chance at generating something on whatever computer they may have. If you have a powerful GPU, you can turn these settings up for nicer output, as in the example below.
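One option is to re-enable exact brute-force patch matching, which is slower than the PatchMatch approximation but searches every patch; whether it visibly improves your result is something to verify for yourself, and the inputs and output prefix here are only illustrative:

make_image_analogy.py --model=brute images/arch-mask.jpg images/arch.jpg images/arch-newmask.jpg out/arch-hq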

Parameters

The MRF loss (or "local coherence") is the influence of B' -> A' -> B'. In the parlance of style transfer, this is the style loss which gives texture to the image.

The B/B' content loss is set to 0.0 by default. You can get effects similar to CNNMRF by turning this up and setting analogy weight to zero. Or leave the analogy loss on for some extra style guidance.
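For example (the weight value 1.0 is only an illustrative guess; tune it to taste):

make_image_analogy.py --analogy-w=0 --b-content-w=1.0 images/arch-mask.jpg images/arch.jpg images/arch-newmask.jpg out/arch-cnnmrf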

If you'd like to only visualize the analogy target to see what's happening, set the MRF and content losses to zero: --mrf-w=0 --content-w=0. This is also much faster, as the MRF loss is the slowest part of the algorithm.
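For example (the output prefix is illustrative):

make_image_analogy.py --mrf-w=0 --content-w=0 images/arch-mask.jpg images/arch.jpg images/arch-newmask.jpg out/arch-analogy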

License

The code for this implementation is provided under the MIT license.

The suggested VGG16 weights are originally from here and are licensed under http://creativecommons.org/licenses/by-nc/4.0/. Open a ticket if you have a suggestion for a more free-as-in-free-speech license.

The attributions for the example art can be found in examples/images/ATTRIBUTIONS.md