GANs: Strengths and Weaknesses

Discover the strengths and weaknesses of GANs, which excel in unsupervised learning but face challenges in training and evaluation.

GANs are one of the hottest topics in deep learning nowadays! The GAN framework has many strengths compared to other generative frameworks, which we will enumerate in this lesson. Naturally, GANs also have weaknesses and challenges, which we will describe.

Strengths of the GAN framework

One of the advantages of GANs is that the discriminator learns a feature embedding that does not require any labels. This has been described in the papers “Context Encoders: Feature Learning by Inpainting” by Deepak Pathak et al. and “Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks” by Alec Radford et al. In these papers, the authors use GANs to learn feature representations in an unsupervised fashion. Another amazing strength of the GAN framework is that it circumvents the potentially difficult challenge of designing an objective function for the task at hand: instead of hand-crafting, say, a pixel-wise loss for image generation, the discriminator learns what “realistic” means directly from the data.
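As a minimal sketch of the first idea, here is a DCGAN-style discriminator whose intermediate activations can be reused as unsupervised features after adversarial training. The architecture and layer sizes are illustrative and are not taken from either paper:

```python
import torch
import torch.nn as nn

# A small DCGAN-style discriminator for 64x64 RGB images.
# After adversarial training, its intermediate activations can
# serve as label-free features, in the spirit of Radford et al.
class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1),    # 64x64 -> 32x32
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.BatchNorm2d(128),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(128, 256, 4, stride=2, padding=1), # 16x16 -> 8x8
            nn.BatchNorm2d(256),
            nn.LeakyReLU(0.2, inplace=True),
        )
        self.classifier = nn.Conv2d(256, 1, 8)  # real/fake logit

    def forward(self, x, return_features=False):
        f = self.features(x)
        if return_features:
            # Flattened activations: an unsupervised embedding.
            return f.flatten(start_dim=1)
        return self.classifier(f).flatten(start_dim=1)

# After GAN training, extract label-free features for a
# downstream task (e.g., train a linear classifier on them).
disc = Discriminator()
images = torch.randn(16, 3, 64, 64)  # a dummy batch
embeddings = disc(images, return_features=True)
print(embeddings.shape)  # torch.Size([16, 16384]) = 256 * 8 * 8
```

In the DCGAN paper, Radford et al. follow exactly this recipe: they train the discriminator on unlabeled images and then fit a simple linear classifier on the extracted features, obtaining competitive results without using any labels during representation learning.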

Weaknesses and challenges in GANs

There are also many weaknesses in GANs related to training and evaluating them. For example, unlike ordinary optimization problems, in which a single cost function is minimized, typically by maximizing the likelihood of the data given the model’s parameters, GANs rely on a minimax game between the discriminator and the generator. Training therefore seeks an equilibrium between two competing networks rather than a minimum of one loss, which makes convergence harder to reach and to diagnose.
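Formally, in the notation of the original GAN paper by Ian Goodfellow et al., where $G$ is the generator, $D$ is the discriminator, $p_{\text{data}}$ is the real data distribution, and $p_z$ is the prior on the noise input, the minimax game is:

$$\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}(x)}\big[\log D(x)\big] + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]$$

Maximum-likelihood training, by contrast, is equivalent to minimizing the Kullback-Leibler (KL) divergence between the real distribution $P$ and the model distribution $Q$:

$$D_{\mathrm{KL}}(P \,\|\, Q) = \int P(x) \log \frac{P(x)}{Q(x)} \, dx$$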

It can be seen from the previous equation that KL-based objective functions suffer from an exploding loss when the support of the real distribution $P$ is not contained in the support of the model distribution $Q$. In other words, the KL divergence goes to infinity if there is some $x$ such that $Q(x) = 0$ while $P(x) > 0$.
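A tiny numerical illustration, with two made-up discrete distributions, shows this behavior directly: as soon as $Q$ assigns zero probability to an outcome where $P$ has positive probability, the sum contains a $\log(P(x)/0)$ term and the divergence blows up.

```python
import numpy as np

def kl_divergence(p, q):
    """KL(P || Q) for discrete distributions; infinite when
    Q(x) == 0 anywhere that P(x) > 0."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    mask = p > 0
    if np.any(q[mask] == 0):
        return np.inf
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

p = [0.5, 0.5, 0.0]

# Q covers the support of P: the divergence stays finite.
print(kl_divergence(p, [0.4, 0.4, 0.2]))  # ~0.223

# Q misses part of P's support (Q(x) = 0 where P(x) > 0): it explodes.
print(kl_divergence(p, [1.0, 0.0, 0.0]))  # inf
```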

Regarding the GAN loss and its alternatives, the “Wasserstein GAN” paper by Martin Arjovsky et al. provides a thorough comparison of different distances between distributions. It shows that there are simple sequences of distributions under which the JS, KL, and even the total variation distances do not converge: as long as the two supports are disjoint, these distances remain constant (or infinite, in the KL case), so their gradients are zero almost everywhere and give the generator no useful training signal. The Wasserstein (Earth Mover’s) distance, in contrast, stays continuous and keeps providing usable gradients in the same setting.
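The following sketch illustrates this with SciPy on two point masses separated by a distance theta. It is an illustration in the spirit of the paper’s parallel-lines example, not code from the paper:

```python
import numpy as np
from scipy.spatial.distance import jensenshannon
from scipy.stats import wasserstein_distance

# P is a point mass at 0; Q is a point mass at theta. While the
# supports are disjoint, the JS divergence is stuck at its maximum
# (so its gradient w.r.t. theta is zero), whereas the Wasserstein
# distance shrinks smoothly as theta -> 0.
support = np.linspace(0.0, 1.0, 101)  # shared discretization grid

for theta in [1.0, 0.5, 0.1, 0.0]:
    p = np.isclose(support, 0.0).astype(float)    # all of P's mass at 0
    q = np.isclose(support, theta).astype(float)  # all of Q's mass at theta
    js = jensenshannon(p, q, base=2) ** 2         # JS divergence in bits
    w = wasserstein_distance([0.0], [theta])
    print(f"theta={theta:4.2f}  JS={js:.3f}  Wasserstein={w:.3f}")
```

The printed JS values stay at 1.0 for every nonzero theta and only drop to 0 when the two distributions coincide, while the Wasserstein distance decreases linearly with theta, which is exactly the property that makes it a better-behaved training signal.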
