Project Page | Paper | Demo Video | SIGGRAPH Talk
04/10/2020 Update: @mabdelhack provided a windows installation guide for the PyTorch model in Python 3.6. Check out the Windows branch for the guide.
10/3/2019 Update: Our technology is also now available in Adobe Photoshop Elements 2020. See this blog and video for more details.
9/3/2018 Update: The code now supports a backend PyTorch model (with PyTorch 0.5.0+). Please find the Local Hints Network training code in the colorization-pytorch repository.
Real-Time User-Guided Image Colorization with Learned Deep Priors.
Richard Zhang*, Jun-Yan Zhu*, Phillip Isola, Xinyang Geng, Angela S. Lin, Tianhe Yu, and Alexei A. Efros.
In ACM Transactions on Graphics (SIGGRAPH 2017).
(*indicates equal contribution)
We first describe the system (0) Prerequisites and the steps for (1) Getting Started. We then describe the interactive colorization demo (2) Interactive Colorization (Local Hints Network). There are two demos: (a) a "barebones" version in an iPython notebook and (b) the full GUI we used in our paper. We then provide an example of the (3) Global Hints Network.
Clone this repo:
git clone https://github.com/junyanz/interactive-deep-colorization ideepcolor
cd ideepcolor
Download the reference model
bash ./models/fetch_models.sh
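After the script finishes, a quick sanity check can confirm that at least one model file landed in ./models. The helper below is hypothetical (not part of the repo), and the file extensions are an assumption about what fetch_models.sh downloads:

```python
import os

def models_present(model_dir="./models"):
    """Return True if model_dir contains at least one model file.
    The extensions checked here are an assumption about the download."""
    if not os.path.isdir(model_dir):
        return False
    for _root, _dirs, files in os.walk(model_dir):
        for f in files:
            if f.endswith((".caffemodel", ".prototxt", ".pth")):
                return True
    return False
```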
Install Caffe or PyTorch and the 3rd-party Python libraries (OpenCV, scikit-learn, and scikit-image). See the Requirements for more details.
We provide a "barebones" demo in iPython notebook, which does not require QT. We also provide our full GUI demo.
Run ipython notebook and click on DemoInteractiveColorization.ipynb. If you need to convert the notebook to an older version, use jupyter nbconvert --to notebook --nbformat 3 ./DemoInteractiveColorization.ipynb.
Install Qt4 and QDarkStyle. (See Installation)
Run the UI: python ideepcolor.py --gpu [GPU_ID] --backend [CAFFE OR PYTORCH]. Arguments are described below:
--win_size [512] GUI window size
--gpu [0] GPU number
--image_file ['./test_imgs/mortar_pestle.jpg'] path to the image file
--backend ['caffe'] either 'caffe' or 'pytorch'; 'caffe' is the official model from SIGGRAPH 2017, and 'pytorch' is the same weights converted to PyTorch
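Under the hood, the Local Hints Network consumes the grayscale L channel together with sparse user ab hints and a binary mask marking where hints were given. The helper below is an illustrative sketch of that encoding only (not code from this repo), using plain Python lists in place of tensors:

```python
def encode_hints(height, width, hints):
    """Encode sparse user color points for a Local Hints style network.

    hints: list of (row, col, a, b) user points in Lab color space.
    Returns (ab, mask): ab is an H x W x 2 grid of hint values, and
    mask is an H x W grid that is 1.0 wherever a hint was placed.
    This mirrors the network's hint input conceptually; the real model
    uses tensors, not nested lists.
    """
    ab = [[[0.0, 0.0] for _ in range(width)] for _ in range(height)]
    mask = [[0.0] * width for _ in range(height)]
    for r, c, a, b in hints:
        ab[r][c] = [a, b]
        mask[r][c] = 1.0
    return ab, mask
```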
User interactions: saved results are written to the same directory as the image_file, along with the user input ab values.

We include an example usage of our Global Hints Network, applied to global histogram transfer. We show its usage in an iPython notebook.
Add ./caffe_files to your PYTHONPATH. Run ipython notebook and click on ./DemoGlobalHistogramTransfer.ipynb.
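For intuition, a global hint can be as simple as a normalized histogram over quantized ab chroma values of a reference image; the Global Hints Network conditions on statistics of this kind. The sketch below is illustrative only (not the repo's implementation), and the bin width is an arbitrary choice:

```python
from collections import Counter

def global_ab_histogram(ab_pixels, grid=10):
    """Quantize (a, b) chroma values into a coarse grid and normalize
    counts into a distribution. ab_pixels is an iterable of (a, b) pairs;
    grid is the bin width in ab units (an illustrative choice)."""
    counts = Counter((int(a // grid), int(b // grid)) for a, b in ab_pixels)
    total = sum(counts.values())
    return {bin_: n / total for bin_, n in counts.items()}
```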
Install Caffe or PyTorch. The Caffe model is official. PyTorch is a reimplementation.
Build Caffe with Python layer support (set WITH_PYTHON_LAYER=1 in the Makefile.config) and build the Caffe Python library with make pycaffe. You also need to add pycaffe to your PYTHONPATH. Use vi ~/.bashrc to edit the environment variables:
export PYTHONPATH=/path/to/caffe/python:$PYTHONPATH
export LD_LIBRARY_PATH=/path/to/caffe/build/lib:$LD_LIBRARY_PATH
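To double-check the environment after editing ~/.bashrc, a small helper (hypothetical, reusing the placeholder path above) can verify that the Caffe python directory actually appears on PYTHONPATH:

```python
import os

def path_on_pythonpath(target):
    """Return True if target is one of the entries in $PYTHONPATH."""
    entries = os.environ.get("PYTHONPATH", "").split(os.pathsep)
    return any(e and os.path.normpath(e) == os.path.normpath(target)
               for e in entries)
```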
Install the scikit-image, scikit-learn, opencv, Qt4, and QDarkStyle packages:
# ./install/install_deps.sh
sudo pip install scikit-image
sudo pip install scikit-learn
sudo apt-get install python-opencv
sudo apt-get install python-qt4
sudo pip install qdarkstyle
For Conda users, type the following command lines (this may work for full Anaconda but not Miniconda):
# ./install/install_conda.sh
conda install -c anaconda protobuf ## protobuf
conda install -c anaconda scikit-learn=0.19.1 ## scikit-learn
conda install -c anaconda scikit-image=0.13.0 ## scikit-image
conda install -c menpo opencv=2.4.11 ## opencv
conda install pyqt=4.11 ## qt4
conda install -c auto qdarkstyle ## qdarkstyle
Docker: [OSX Docker file] and [OSX Installation video] by @vbisbest, [Docker file 2] (by @sabrinawallner) based on DL Docker.
More installation help (by @SleepProgger).
Please find a PyTorch reimplementation of the Local Hints Network training code in the colorization-pytorch repository.
If you use this code for your research, please cite our paper:
@article{zhang2017real,
title={Real-Time User-Guided Image Colorization with Learned Deep Priors},
author={Zhang, Richard and Zhu, Jun-Yan and Isola, Phillip and Geng, Xinyang and Lin, Angela S and Yu, Tianhe and Efros, Alexei A},
journal={ACM Transactions on Graphics (TOG)},
volume={36},
number={4},
year={2017},
publisher={ACM}
}
One of the authors objects to the inclusion of this list, due to an allergy. Another author objects on the basis that cats are silly creatures and this is a serious, scientific paper. However, if you love cats, and love reading cool graphics, vision, and learning papers, please check out the Cat Paper Collection: [Github] [Webpage]