Progressive GAN
Understand the workings of the progressive GAN and how it can be implemented using TensorFlow 2.0.
GANs are powerful systems for generating high-quality samples, examples of which we have seen in the previous sections. Different works have applied this adversarial setup to generate samples from various distributions, such as CIFAR-10, celeb_a, and LSUN-bedrooms (we covered examples using MNIST for explanation purposes). Some works, like LapGANs, focused on generating higher-resolution output samples, but they fell short in perceived output quality and introduced additional training challenges. Progressive GANs (also called Pro-GANs or PG-GANs) were presented by Karras et al. in their work titled “Progressive Growing of GANs for Improved Quality, Stability, and Variation.”
The method presented in this work not only mitigated many of the challenges of earlier approaches but also offered a remarkably simple solution to the problem of generating high-quality output samples. The paper presented a number of impactful contributions, some of which we'll cover in detail in the following subsections.
The overall method
The software engineering way of solving tough technical problems is often to break them down into simpler, granular tasks. Pro-GANs tackle the complex problem of generating high-resolution samples in the same way, by breaking it down into smaller, simpler sub-problems. The major issue with high-resolution images is the huge number of modes, or details, such images contain, which makes it very easy for the discriminator to tell generated samples apart from real data (a perceived-quality issue). Combined with the memory requirements, this makes it very tough to build a generator with enough capacity to train well on such datasets.
To tackle these issues, Karras et al. presented a method that grows both the generator and discriminator models as training progresses from lower to higher resolutions.
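In the paper, a new higher-resolution layer is not switched on abruptly; instead, its output is blended with an upsampled version of the previous (lower-resolution) output, with a blending weight `alpha` that ramps from 0 to 1 during training. The following is a minimal NumPy sketch of that fade-in blend, not the paper's actual implementation; the helper names `upsample_nearest` and `faded_output` are illustrative, and the 4x4 to 8x8 shapes are just a toy transition:

```python
import numpy as np

def upsample_nearest(x):
    # Nearest-neighbour 2x upsampling: (H, W, C) -> (2H, 2W, C).
    return x.repeat(2, axis=0).repeat(2, axis=1)

def faded_output(old_rgb, new_rgb, alpha):
    # Linearly blend the upsampled old output with the new layer's
    # output; alpha ramps from 0.0 to 1.0 as training progresses.
    return alpha * new_rgb + (1.0 - alpha) * old_rgb

# Toy transition: a 4x4 output grows to 8x8.
old = np.zeros((4, 4, 3))   # stand-in for the trained 4x4 output
new = np.ones((8, 8, 3))    # stand-in for the fresh 8x8 layer's output

up = upsample_nearest(old)          # (8, 8, 3)
mix = faded_output(up, new, alpha=0.3)
print(mix.shape)            # (8, 8, 3)
```

Early in the transition (small `alpha`), the blended output is dominated by the already-trained low-resolution pathway, so adding the new layer does not destabilize what the network has learned; as `alpha` reaches 1, the new layer takes over entirely.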