Overview: Progressive Growing of GANs

Get an overview of the topics covered in this chapter.

Progressive growing of GANs is a training methodology that was introduced at a time when high-resolution image synthesis was dominated by autoregressive models, such as PixelCNN, and by VAEs, like the model used in the paper “Improved Variational Inference with Inverse Autoregressive Flow” (https://arxiv.org/abs/1606.04934).

As we described in earlier chapters, although autoregressive models are able to produce high-quality images, they lack an explicit latent representation that can be directly manipulated, and, due to their autoregressive nature, they tend to be slower at inference time than their counterparts. VAE-based models, on the other hand, offer quicker inference but are harder to train, and the VAE-based models published at the time tended to produce blurry results.

In this mix of models, we also have GANs, which, at the time, were not able to produce high-quality images at large resolutions, such as 1024×1024. In addition to this limitation, producing samples with enough variety was a common problem for GAN models. Completing the list of GAN issues were long training times and high sensitivity to hyperparameter settings.

In this chapter, we will learn how to implement the training methodology introduced in the “Progressive Growing of GANs for Improved Quality, Stability, and Variation” paper by Tero Karras et al. In this methodology, the generator and discriminator are trained progressively: training starts from low-resolution images, and new layers that model increasingly fine details are added as training progresses and the image resolution increases. This speeds up training and stabilizes it, allowing us to produce images of unprecedented quality. We will explore in detail how the authors increase variety by using minibatch standard deviation and how they normalize the generator and discriminator for training stability. We will run our experiments on the CIFAR-10 dataset with a model and training setup that is easy to adapt to other datasets, such as the CelebA dataset.
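To make these ideas a little more concrete before we dive in, here is a minimal sketch, assuming PyTorch; the chapter's actual implementation may use different names and structure. It illustrates two pieces previewed above: the fade-in blend used when a new, higher-resolution stage is added, and the minibatch standard deviation feature map that encourages sample variety. The function names fade_in and minibatch_stddev are illustrative, not taken from the paper's code.

```python
import torch
import torch.nn.functional as F


def fade_in(alpha, upscaled_old, new):
    # Blend the upsampled output of the previous (lower-resolution) stage with
    # the output of the newly added (higher-resolution) layers. alpha is ramped
    # from 0 to 1 while the new layers are being faded in.
    return alpha * new + (1.0 - alpha) * upscaled_old


def minibatch_stddev(x, eps=1e-8):
    # Append one feature map holding the average standard deviation computed
    # across the minibatch, which the discriminator can use to penalize a
    # generator that produces too little variety.
    std = torch.sqrt(x.var(dim=0, unbiased=False) + eps)          # (C, H, W)
    mean_std = std.mean().view(1, 1, 1, 1)                        # single scalar
    mean_std = mean_std.expand(x.size(0), 1, x.size(2), x.size(3))
    return torch.cat([x, mean_std], dim=1)                        # (N, C + 1, H, W)


# Tiny usage example on random tensors.
features = torch.randn(4, 16, 8, 8)                               # discriminator activations
old = F.interpolate(torch.randn(4, 3, 4, 4), scale_factor=2)      # upsampled 4x4 output
new = torch.randn(4, 3, 8, 8)                                     # output of the new 8x8 layers

print(fade_in(0.3, old, new).shape)       # torch.Size([4, 3, 8, 8])
print(minibatch_stddev(features).shape)   # torch.Size([4, 17, 8, 8])
```

Ramping alpha from 0 to 1 lets the newly added layers be blended in gradually rather than switched on at once, which is what keeps training stable when the resolution doubles; we will look at this, along with the normalization techniques, in detail later in the chapter.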

Topics covered in this chapter

The following topics will be covered in this chapter:

  • Progressive growing of GANs

  • Experimental setup

  • Model implementation
