Challenges in GAN Training
Learn about the challenges associated with generative adversarial networks.
GANs provide an alternative way of building generative models, and their design inherently helps mitigate some of the issues we discussed with the other techniques. However, GANs are not free from problems of their own. Framing model training as a game between two players is fascinating yet difficult to control: we have two agents/models trying to optimize opposing objectives, which can lead to all sorts of issues.
Training instability
GANs play a minimax game with opposing objectives, so it is no surprise that the generator and discriminator losses oscillate across batches. A GAN that is training well typically shows high variation in losses early on, after which the losses of the two competing models stabilize. Yet it is very common for GANs (especially vanilla GANs) to spiral out of control, and it is difficult to decide when to stop training or to estimate whether an equilibrium state has been reached.
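To make the alternating optimization concrete, here is a minimal PyTorch sketch of a vanilla GAN on a toy 1-D data distribution. The network sizes, learning rates, and the data distribution are arbitrary choices for illustration, not a prescribed recipe; the point is that the two losses are updated against each other, and logging them typically shows oscillation rather than a smooth decrease.

```python
import torch
import torch.nn as nn

# Toy "real" data: samples from a 1-D Gaussian (purely illustrative).
def sample_real(batch_size):
    return 4.0 + 1.25 * torch.randn(batch_size, 1)

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

batch_size = 64
for step in range(2000):
    # Discriminator step: push D(real) toward 1 and D(G(z)) toward 0.
    real = sample_real(batch_size)
    fake = generator(torch.randn(batch_size, 8)).detach()  # freeze G here
    d_loss = bce(discriminator(real), torch.ones(batch_size, 1)) + \
             bce(discriminator(fake), torch.zeros(batch_size, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator step (non-saturating form): push D(G(z)) toward 1.
    g_loss = bce(discriminator(generator(torch.randn(batch_size, 8))),
                 torch.ones(batch_size, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

    if step % 200 == 0:
        # The two losses usually swing against each other across steps.
        print(f"step {step:4d}  d_loss={d_loss.item():.3f}  g_loss={g_loss.item():.3f}")
```

Because each player's improvement directly worsens the other's objective, there is no single loss curve whose flattening signals convergence, which is exactly why stopping criteria are hard to define.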
Mode collapse
Mode collapse refers to a failure state where the generator finds one sample, or only a small number of samples, that is enough to fool the discriminator. To understand this better, let's take the example of a hypothetical dataset of temperatures from two cities: city A and city B.
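As a rough numerical sketch of this idea (the city temperatures and the "collapsed generator" below are hypothetical illustrations, not the lesson's actual example), the real data is bimodal, while a mode-collapsed generator covers only one of the two modes:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical real data: temperatures from two cities (a bimodal distribution).
city_a = rng.normal(loc=10.0, scale=2.0, size=5000)   # cooler city
city_b = rng.normal(loc=30.0, scale=2.0, size=5000)   # warmer city
real_temps = np.concatenate([city_a, city_b])

# A mode-collapsed "generator": it only ever produces samples resembling
# city B, because those alone were enough to fool the discriminator.
collapsed_samples = rng.normal(loc=30.0, scale=2.0, size=10000)

def mode_coverage(samples, centers=(10.0, 30.0), tol=6.0):
    """Fraction of samples landing near each mode of the real data."""
    return [float(np.mean(np.abs(samples - c) < tol)) for c in centers]

print("real data coverage:", mode_coverage(real_temps))         # roughly [0.5, 0.5]
print("generator coverage:", mode_coverage(collapsed_samples))  # roughly [0.0, 1.0]
```

The generated samples may look individually plausible, but they fail to cover the full diversity of the real distribution, which is the defining symptom of mode collapse.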