BigGAN TensorFlow TPU

Simple TensorFlow TPU implementation of "Large Scale GAN Training for High Fidelity Natural Image Synthesis" (BigGAN).

I (David Mack) have been modifying this network to allow for configuration of its self-attention, to facilitate experiments into the effectiveness of different self-attention architectures.
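As a rough illustration of the kind of block being made configurable, here is a minimal sketch of the SAGAN-style self-attention used in BigGAN, with the channel-reduction ratio exposed as a knob. The function name `self_attention` and the `channel_ratio` parameter are illustrative, not this repo's actual API.

```python
import tensorflow as tf

def self_attention(x, channel_ratio=8, scope="self_attention"):
    """Sketch of a SAGAN-style self-attention block (TF 1.x).

    x: [batch, height, width, channels] feature map.
    channel_ratio: query/key channel reduction factor -- the kind of
        hyperparameter this fork exposes for experimentation.
    """
    with tf.variable_scope(scope):
        _, h, w, c = x.get_shape().as_list()
        f = tf.layers.conv2d(x, c // channel_ratio, 1)  # query
        g = tf.layers.conv2d(x, c // channel_ratio, 1)  # key
        v = tf.layers.conv2d(x, c, 1)                   # value

        f = tf.reshape(f, [-1, h * w, c // channel_ratio])
        g = tf.reshape(g, [-1, h * w, c // channel_ratio])
        v = tf.reshape(v, [-1, h * w, c])

        # Each spatial position attends over all spatial positions.
        attn = tf.nn.softmax(tf.matmul(f, g, transpose_b=True))  # [b, hw, hw]
        o = tf.reshape(tf.matmul(attn, v), [-1, h, w, c])

        # Learned residual gate, initialised to zero as in SAGAN.
        gamma = tf.get_variable("gamma", [], initializer=tf.zeros_initializer())
        return x + gamma * o
```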


Implementation notes/issues

Usage

Building the data

For ImageNet, use TensorFlow's build scripts to create TFRecord files of your chosen image size (e.g. 128x128), then pass --tfr-format inception to the training script.
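For reference, records in this format store a JPEG-encoded image and its label. Here is a minimal parsing sketch, assuming the feature keys written by TensorFlow's build_imagenet_data.py; the function name and the 128x128 default are illustrative:

```python
import tensorflow as tf

def parse_inception_record(serialized, image_size=128):
    # Feature keys as written by TensorFlow's build_imagenet_data.py.
    features = tf.parse_single_example(serialized, {
        "image/encoded": tf.FixedLenFeature([], tf.string),
        "image/class/label": tf.FixedLenFeature([], tf.int64),
    })
    image = tf.image.decode_jpeg(features["image/encoded"], channels=3)
    image = tf.image.resize_images(image, [image_size, image_size])
    return image, features["image/class/label"]

# Usage: dataset = tf.data.TFRecordDataset(files).map(parse_inception_record)
```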

You can also use the data build script from NVIDIA's Progressive Growing of GANs; in that case, pass --tfr-format progan to the training script.
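This format differs from the Inception one. A minimal parsing sketch, assuming NVIDIA's record layout of raw uint8 pixels under a "data" key plus an explicit channels-first "shape", with an illustrative function name:

```python
import tensorflow as tf

def parse_progan_record(serialized):
    # Assumed ProGAN layout: raw uint8 pixel bytes plus an explicit shape.
    features = tf.parse_single_example(serialized, {
        "shape": tf.FixedLenFeature([3], tf.int64),
        "data": tf.FixedLenFeature([], tf.string),
    })
    image = tf.decode_raw(features["data"], tf.uint8)
    return tf.reshape(image, features["shape"])  # [channels, height, width]
```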

Training

You can train on a Google TPU by setting the name of your TPU as an environment variable and running one of the training scripts; a sketch of the TPU wiring this implies is shown below.

You need to have your training data stored in a Google Cloud Storage bucket.
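The following is a minimal sketch of that wiring, using the standard TF 1.x TPUClusterResolver/RunConfig pattern; the TPU_NAME variable and the bucket path are placeholders, not necessarily what this repo's scripts use:

```python
import os
import tensorflow as tf

# Resolve the TPU named in the environment (placeholder variable name).
resolver = tf.contrib.cluster_resolver.TPUClusterResolver(
    tpu=os.environ["TPU_NAME"])

run_config = tf.contrib.tpu.RunConfig(
    cluster=resolver,
    model_dir="gs://your-bucket/model-dir",  # must be a GCS path for TPUs
    tpu_config=tf.contrib.tpu.TPUConfig(iterations_per_loop=100),
)
# run_config is then handed to a tf.contrib.tpu.TPUEstimator for training.
```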

Architecture

128x128

256x256

512x512

Contributing

You're very welcome to! Submit a PR or contact the author(s).

Authors

Junho Kim, David Mack