This is an attempt at implementing something like Real-Time Style Transfer with the Keras framework.
pip install keras-rtst
After installation you'll find train-rtst.sh, render-rtst.sh, and rtst.py on your path. The shell scripts are just wrappers around rtst.py to demonstrate usage and maybe be a little convenient. There's also a script, rtst-download-training-images.sh, that will download a small batch of images randomly selected from a subset of ImageNet 2012.
There's an examples folder. For example, train a brick texturizer:

./make-example-texturizer.sh bricks0 path/to/training/images path/to/evaluation/images path/to/vgg16/weights.h5

Then texturize a gif with that brick texturizer:

VGG_WEIGHTS=/path/to/vgg.h5 ./texturize-gif.sh path/to/your.gif bricks0 out/bricks0gif
There are also MRFRegularizer and AnalogyRegularizer, which add losses for patch-wise Markov random fields and image analogies. Use --style-map-path=/your/image.jpg to specify "image A" in image-analogy parlance (--style-path corresponds to "image A prime").

--model=girthy adds a series of residual blocks at each depth instead of just the bottom-most scale. Set the maximum depth with --depth and the peak number of convolution filters with --num-res-filters. The number of filters is halved at each larger scale.
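To make the filter schedule concrete, here is a small sketch of the halving rule described above. The helper name and the exact indexing are assumptions for illustration, not the package's actual code; it only shows how --num-res-filters (the peak count, at the bottom-most scale) and --depth interact:

```python
def filters_per_scale(num_res_filters, depth):
    # Hypothetical helper illustrating the rule stated above:
    # the peak filter count sits at the bottom-most (smallest) scale,
    # and the count is halved at each successively larger scale.
    return [max(1, num_res_filters // (2 ** d)) for d in range(depth + 1)]

# With --num-res-filters=128 and --depth=3:
print(filters_per_scale(128, 3))  # [128, 64, 32, 16]
```

So a deeper model spends its widest convolutions at the coarsest resolution, where feature maps are cheapest.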