Towards Adversarial Retinal Image Synthesis

arXiv Demo

We use an image-to-image translation technique based on adversarial learning to synthesize eye fundus images directly from data. We pair real eye fundus images with their corresponding vessel trees, obtained with a vessel segmentation technique. These pairs are then used to learn a mapping from a binary vessel tree to a new retinal image.
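The pairing step can be sketched as follows. This is a minimal illustration, not the repository's actual pipeline: `segment_vessels` stands in for whatever vessel segmentation method is used, here replaced by a simple intensity threshold on a toy image.

```python
import numpy as np

def segment_vessels(fundus):
    """Placeholder for a real vessel segmentation method: here we simply
    threshold the green channel to obtain a binary vessel tree."""
    green = fundus[..., 1]
    return (green > green.mean()).astype(np.uint8)

# Toy "fundus image": a random RGB array standing in for a real photograph.
rng = np.random.default_rng(0)
fundus = rng.random((64, 64, 3))

# Build the (vessel tree, fundus image) training pair.
vessel_tree = segment_vessels(fundus)
pair = (vessel_tree, fundus)

# The mask is binary, as expected for the generator's input.
assert set(np.unique(vessel_tree)) <= {0, 1}
```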

How it works

Setup

Prerequisites

Set up directories

The data must be organized into train, validation, and test directories. The default directory locations can be changed by passing the corresponding flags at run time:

   python train.py [--base_dir] [--train_dir] [--val_dir]

Folders {A,B} hold the corresponding image pairs. Keep the default folder names, and give each pair the same filename in A and B.
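The steps above can be sketched as a small setup-and-check script. The base directory and split folder names (`data`, `train`, `val`, `test`) are assumptions for illustration; the actual defaults are set in train.py.

```python
import os

BASE_DIR = "data"                    # assumed base directory name
SPLITS = ["train", "val", "test"]    # assumed split folder names

# Create the expected A/B folder pair inside each split.
for split in SPLITS:
    for side in ("A", "B"):
        os.makedirs(os.path.join(BASE_DIR, split, side), exist_ok=True)

def unpaired_files(split):
    """Return filenames that appear in A or B of a split but not in both."""
    a = set(os.listdir(os.path.join(BASE_DIR, split, "A")))
    b = set(os.listdir(os.path.join(BASE_DIR, split, "B")))
    return sorted(a ^ b)

# Every image in A must have a same-named partner in B.
for split in SPLITS:
    assert unpaired_files(split) == [], f"unpaired images in {split}"
```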

Usage

Model

The model can be used with any vessel tree of the appropriate size. You can download the pre-trained weights available here and load them at test time. If you choose to do this, skip the training step.

Train the model

To train the model run:

   python train.py [--help]

By default the model will be saved to a folder named 'log'.

Test the model

To test the model run:

   python test.py [--help]

If you run the test with pre-trained weights downloaded from here, make sure both the weights and params.json are saved in the log folder.

Citation

If you use this code for your research, please cite our paper Towards Adversarial Retinal Image Synthesis:

@article{costa_retinal_generation_2017,
  title={Towards Adversarial Retinal Image Synthesis},
  author={Costa, P. and Galdran, A. and Meyer, M.I. and Abràmoff, M.D. and Niemeijer, M. and Mendonça, A.M. and Campilho, A.},
  journal={arXiv preprint},
  year={2017},
  doi={10.5281/zenodo.265508}
}
