Unanswered Questions in GANs

Understand the ongoing inquiries and unresolved aspects within GAN research.

GAN-based research is fertile, and new architectures, loss functions, and tricks are being released on a daily basis. In this context of constant change, we enumerate a few questions that still need to be answered.

Are some losses better than others?

As we addressed earlier, in the paper “Are GANs Created Equal? A Large-Scale Study,” the authors state that, in their experiments, they found no evidence that any of the tested algorithms consistently outperformed the original non-saturating GAN. This leads us to wonder whether some losses are, in fact, better than others. We should bear this in mind when choosing a GAN framework.
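To make the loss comparison concrete, here is a minimal numerical sketch of why the non-saturating generator loss is often preferred over the original minimax (saturating) loss. The discriminator logit value is a made-up illustration, not taken from any specific experiment:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical discriminator logit for a fake sample early in training:
# D confidently rejects it, so D(G(z)) = sigmoid(logit) is close to 0.
logit = -5.0
d = sigmoid(logit)

# Saturating (minimax) generator loss: log(1 - D(G(z)))
# Its gradient w.r.t. the logit is -d, which vanishes when D rejects fakes.
grad_saturating = -d

# Non-saturating generator loss: -log(D(G(z)))
# Its gradient w.r.t. the logit is d - 1, which stays near -1 in the same regime.
grad_nonsaturating = d - 1.0

print(abs(grad_saturating), abs(grad_nonsaturating))
```

The saturating loss gives the generator almost no gradient exactly when it needs it most (early training, when the discriminator easily spots fakes), which is one reason the non-saturating variant became the practical default.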

Do GANs perform distribution learning?

In their paper, “Generalization and Equilibrium in Generative Adversarial Nets (GANs),” Sanjeev Arora et al. use the birthday paradox to suggest that GANs learn distributions with fairly low support. At the same time, in their paper, “A Style-Based Generator Architecture for Generative Adversarial Networks,” Tero Karras et al. proposed a GAN architecture capable of generating many distinct high-quality faces, suggesting that GANs can learn distributions with rather high support. We should bear this in mind and evaluate our GANs according to the tasks that we build them for.
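The birthday-paradox test can be simulated with a toy generator. The sketch below assumes a hypothetical generator that emits one of `support` distinct images uniformly at random; duplicates in a batch become likely once the batch size approaches the square root of the support size, so observing duplicates at batch size s suggests support on the order of s²:

```python
import numpy as np

rng = np.random.default_rng(0)
support = 10_000  # hypothetical number of distinct images the generator can emit

def has_duplicate(batch_size):
    """Draw a batch from the toy generator and check for exact duplicates."""
    samples = rng.integers(0, support, size=batch_size)
    return len(np.unique(samples)) < batch_size

# Birthday paradox: collisions become likely once batch_size ~ sqrt(support) = 100.
small = sum(has_duplicate(20) for _ in range(200))   # 20 << 100: collisions rare
large = sum(has_duplicate(300) for _ in range(200))  # 300 >> 100: collisions near-certain
print(small, large)
```

In practice, Arora et al. look for visually near-duplicate samples rather than exact matches, but the counting argument is the same: the batch size at which duplicates appear gives a rough estimate of the support of the learned distribution.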

All about that inductive bias

In their paper “Deep Image Prior,” Dmitry Ulyanov et al. show that a randomly initialized neural network can be used as a handcrafted prior with excellent results in standard inverse problems, such as denoising, super-resolution, and inpainting. This evidence, together with the findings of “Are GANs Created Equal?,” leads us to question how important the GAN framework itself is relative to the convolutional architectures used to solve these problems. We should bear this in mind when developing new GAN architectures.

How can you terminate a GAN?

We analyzed statistical measures of divergence between real and generated data, and the results showed that even in simple cases, such as the distribution of pixel intensities, the divergence between training data and fake data is high compared to the divergence between training data and held-out test data.
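A minimal sketch of such a comparison, using synthetic stand-ins for the data: train and test intensities are drawn from the same hypothetical distribution, while the "fake" intensities come from a slightly shifted one, and we compare Jensen-Shannon divergences between histograms:

```python
import numpy as np

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence between two histograms (normalized inside)."""
    p = p / p.sum()
    q = q / q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log((a + eps) / (b + eps)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

rng = np.random.default_rng(0)
# Hypothetical pixel-intensity samples (0-255). Train/test share a distribution;
# the generator's output is slightly off in mean and spread.
train = rng.normal(120, 30, 50_000).clip(0, 255)
test  = rng.normal(120, 30, 50_000).clip(0, 255)
fake  = rng.normal(135, 25, 50_000).clip(0, 255)

bins = np.linspace(0, 255, 65)  # 64 intensity bins
hist = lambda s: np.histogram(s, bins=bins)[0].astype(float)

d_test = js_divergence(hist(train), hist(test))
d_fake = js_divergence(hist(train), hist(fake))
print(d_test, d_fake)
```

Even this crude one-dimensional statistic separates real held-out data from subtly mismatched fakes, which illustrates why train-versus-fake divergences tend to be large relative to train-versus-test divergences.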

When using specifications to train or terminate GANs, we face a trade-off: specifications that are easy to check against real data are often hardly differentiable, which makes them hard to learn with gradient-based methods. We should bear in mind that such specifications might not be satisfied by trained GANs.
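Since there is no agreed-upon convergence criterion, a common pragmatic heuristic is to monitor an evaluation statistic (such as a divergence between generated and held-out data) and stop when it plateaus. The sketch below is a generic early-stopping check, not a method from any of the cited papers; the names and thresholds are illustrative assumptions:

```python
def should_stop(history, patience=5, min_delta=1e-3):
    """Stop when the monitored divergence has not improved by at least
    min_delta over the last `patience` recorded checkpoints.

    history: list of divergence values, lower is better, oldest first.
    """
    if len(history) <= patience:
        return False
    best_before = min(history[:-patience])
    recent_best = min(history[-patience:])
    # No improvement of at least min_delta in the patience window -> stop.
    return recent_best > best_before - min_delta
```

For example, a run whose divergence has been flat at 0.7 for five checkpoints would be stopped, while a run that is still improving would continue. In practice, such statistics are noisy for GANs, so the window and threshold need tuning per task.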
