Generative Adversarial Networks (GANs) are among the most widely used generative models. Generative models create new data instances that resemble those in the training data. In short, GANs help us create content the model has never seen before.
Given a training set, a GAN learns to generate new data with the same statistics as the training set. For example, a GAN trained on pictures of human faces can create new photographs of human faces, even though the new faces do not belong to any real person.
GANs have the following components:
The generator learns to produce the target output. It starts from random noise and creates images that resemble the original images in the dataset. These generated images are later used to train the discriminator. Generators are typically deconvolutional neural networks.
As the name suggests, the discriminator acts like a critic, distinguishing the original images from the training dataset from the images produced by the generator. Discriminators are typically convolutional neural networks.
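The two components above can be sketched in code. The following is a minimal illustration, assuming PyTorch and hypothetical sizes (64-dimensional noise vectors and 1×28×28 grayscale images); the layer widths are arbitrary choices, not a reference architecture:

```python
import torch
import torch.nn as nn

LATENT_DIM = 64  # assumed size of the random-noise input

class Generator(nn.Module):
    """Maps random noise to an image via transposed (de)convolutions."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            # Project the noise vector to a small 7x7 feature map.
            nn.Linear(LATENT_DIM, 128 * 7 * 7),
            nn.ReLU(),
            nn.Unflatten(1, (128, 7, 7)),
            # Upsample 7x7 -> 14x14 -> 28x28 with transposed convolutions.
            nn.ConvTranspose2d(128, 64, kernel_size=4, stride=2, padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(64, 1, kernel_size=4, stride=2, padding=1),
            nn.Tanh(),  # pixel values in [-1, 1]
        )

    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    """Convolutional critic: outputs the probability that an image is real."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=4, stride=2, padding=1),   # 28 -> 14
            nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, kernel_size=4, stride=2, padding=1), # 14 -> 7
            nn.LeakyReLU(0.2),
            nn.Flatten(),
            nn.Linear(128 * 7 * 7, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)

# Quick shape check: 16 noise vectors -> 16 fake images -> 16 scores.
z = torch.randn(16, LATENT_DIM)
fake = Generator()(z)
scores = Discriminator()(fake)
print(fake.shape, scores.shape)
```

Note the symmetry: the generator upsamples noise into an image, while the discriminator downsamples an image into a single real/fake probability.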
A GAN pairs the generator with a discriminator. The two networks operate so that, ideally, the discriminator can no longer distinguish the images generated by the generator from real ones. In other words, the generator tries to fool the discriminator, and the discriminator tries to avoid being fooled. Initially, we train the discriminator on a known dataset: we present it with samples from the training data until it reaches acceptable accuracy. Then the generator is trained to fool the discriminator. Typically, the generator is fed randomized input sampled from a predefined latent space; think of the latent space as a source of random noise. Points in the latent space carry no meaning on their own, but the generator learns to map samples from this space to new images. The discriminator then evaluates the output produced by the generator.
GANs rely on back-propagation through both networks to minimize their errors. This helps the generator produce better images while the discriminator becomes more skilled at flagging synthetic ones.
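The alternating training procedure described above can be sketched as a single training step. This is a hedged illustration, assuming PyTorch; for brevity the "images" are 8-dimensional vectors, both networks are tiny MLPs, and all names and sizes are made up for the example:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
DATA_DIM, LATENT_DIM = 8, 4  # illustrative sizes

G = nn.Sequential(nn.Linear(LATENT_DIM, 16), nn.ReLU(), nn.Linear(16, DATA_DIM))
D = nn.Sequential(nn.Linear(DATA_DIM, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

def train_step(real_batch):
    batch = real_batch.size(0)
    ones, zeros = torch.ones(batch, 1), torch.zeros(batch, 1)

    # Discriminator step: push real images toward label 1, fakes toward 0.
    z = torch.randn(batch, LATENT_DIM)
    fake = G(z).detach()  # detach so G is not updated during D's step
    d_loss = bce(D(real_batch), ones) + bce(D(fake), zeros)
    opt_d.zero_grad()
    d_loss.backward()  # back-propagation through the discriminator
    opt_d.step()

    # Generator step: try to make D label freshly generated fakes as real.
    z = torch.randn(batch, LATENT_DIM)
    g_loss = bce(D(G(z)), ones)
    opt_g.zero_grad()
    g_loss.backward()  # back-propagation through D into the generator
    opt_g.step()
    return d_loss.item(), g_loss.item()

# "Real" data drawn from a shifted Gaussian, standing in for a dataset.
d_loss, g_loss = train_step(torch.randn(32, DATA_DIM) + 2.0)
print(f"d_loss={d_loss:.3f} g_loss={g_loss:.3f}")
```

In a full training run this step is repeated over many batches, with the two losses pulling the networks in opposite directions.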
We represent the GAN objective as the following minimax game between the generator $G$ and the discriminator $D$:

$$\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]$$

Minimizing this objective over the generator pushes it to produce more and more accurate images. Maximizing it over the discriminator trains the discriminator to distinguish between the images generated by the generator and the images from the training dataset.
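To make the objective concrete, the value function $V(D, G) = \mathbb{E}_x[\log D(x)] + \mathbb{E}_z[\log(1 - D(G(z)))]$ can be estimated numerically by Monte Carlo. The sketch below assumes NumPy and uses toy scalar "images"; the two hand-made discriminators are illustrative, not trained models:

```python
import numpy as np

rng = np.random.default_rng(0)

real = rng.normal(loc=2.0, scale=0.5, size=10_000)  # samples from p_data
fake = rng.normal(loc=0.0, scale=0.5, size=10_000)  # stand-in for G(z)

def value(discriminator):
    """Monte-Carlo estimate of V(D, G) for a fixed generator."""
    return (np.mean(np.log(discriminator(real)))
            + np.mean(np.log(1.0 - discriminator(fake))))

# A blind discriminator that always outputs 0.5 gives V = 2*log(0.5).
blind = lambda x: np.full_like(x, 0.5)
# A sharper hand-made discriminator separating the two modes scores higher.
sharp = lambda x: 1.0 / (1.0 + np.exp(-4.0 * (x - 1.0)))  # logistic around 1.0

print(round(value(blind), 3))       # -1.386, i.e. 2*log(0.5)
print(value(sharp) > value(blind))  # True: a better D increases V
```

This illustrates the two directions of the game: a stronger discriminator drives $V$ up, while a generator whose samples overlap the real data drives it back down.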
An amazing extension to GANs is the Conditional GAN (cGAN). It is a simple yet powerful extension: we apply a condition, such as a class label, to both the generator and the discriminator, which lets us control what kind of output the generator produces.
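A common way to apply the condition is to feed it to both networks alongside their usual inputs. The following is a minimal sketch assuming PyTorch, with a one-hot class label concatenated to the generator's noise and to the discriminator's input; the sizes (10 classes, 4-dimensional noise, 8-dimensional "images") and tiny MLPs are illustrative:

```python
import torch
import torch.nn as nn

N_CLASSES, LATENT_DIM, DATA_DIM = 10, 4, 8  # assumed sizes

# Conditional generator: input = noise + one-hot label.
cond_g = nn.Sequential(nn.Linear(LATENT_DIM + N_CLASSES, 32), nn.ReLU(),
                       nn.Linear(32, DATA_DIM))
# Conditional discriminator: input = sample + one-hot label.
cond_d = nn.Sequential(nn.Linear(DATA_DIM + N_CLASSES, 32), nn.ReLU(),
                       nn.Linear(32, 1), nn.Sigmoid())

def generate(labels):
    """Generate one sample per requested class label."""
    one_hot = nn.functional.one_hot(labels, N_CLASSES).float()
    z = torch.randn(labels.size(0), LATENT_DIM)
    return cond_g(torch.cat([z, one_hot], dim=1))

labels = torch.tensor([3, 3, 7])  # ask for two samples of class 3, one of 7
samples = generate(labels)
one_hot = nn.functional.one_hot(labels, N_CLASSES).float()
scores = cond_d(torch.cat([samples, one_hot], dim=1))
print(samples.shape, scores.shape)
```

Because the discriminator also sees the label, the generator is penalized not just for unrealistic samples but for samples that do not match the requested condition.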
GANs are used in a wide variety of techniques and applications.
In this section, we'll discuss some well-known state-of-the-art GAN models.
Although originally proposed as a form of a generative model for unsupervised learning, GANs have also proven useful for supervised learning and reinforcement learning.
Read “What is supervised and unsupervised learning?” to learn more.