Generative Adversarial Networks (GANs)
Learn about the basic framework of generative adversarial networks.
Generative adversarial networks (GANs) have a pretty interesting origin story. It began as an argument in a bar, where Ian Goodfellow and his friends were debating approaches for generating data using neural networks. The argument ended with everyone downplaying each other's methods. Goodfellow went back home and coded the first version of what we now call a GAN. To his amazement, the code worked on the first try. A more detailed account of the chain of events was shared by Goodfellow himself in an interview with Wired magazine.
Taxonomy of generative models
The first set of methods corresponds to models that represent data with an explicit density function. Here, we define a probability density function, p(x; θ), that explicitly represents the data distribution, and fit its parameters θ to the training data.
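To make the idea of an explicit, tractable density concrete, here is a minimal sketch (not from the lesson itself) that fits a one-dimensional Gaussian density p(x; μ, σ) to data by maximum likelihood. The dataset and variable names are illustrative assumptions; real generative models deal with far higher-dimensional data such as images.

```python
import numpy as np

# Hypothetical 1-D dataset; in practice x would be images, audio, etc.
data = np.random.default_rng(0).normal(loc=2.0, scale=0.5, size=1000)

# For a Gaussian density p(x; mu, sigma), the maximum-likelihood estimates
# have a closed form: the sample mean and the sample standard deviation.
mu_hat = data.mean()
sigma_hat = data.std()

def log_likelihood(x, mu, sigma):
    # log p(x; mu, sigma) for a univariate Gaussian
    return -0.5 * np.log(2 * np.pi * sigma**2) - (x - mu) ** 2 / (2 * sigma**2)

print(f"mu={mu_hat:.3f}, sigma={sigma_hat:.3f}, "
      f"avg log-likelihood={log_likelihood(data, mu_hat, sigma_hat).mean():.3f}")
```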
Explicit density methods split further into tractable and approximate density methods. On the tractable side, PixelRNNs are an active area of research. However, when we try to model complex real-world data distributions, such as natural images or speech signals, defining a tractable parametric density becomes challenging. Approximate density methods still define an explicit density but only approximate it: VAEs maximize a variational lower bound on the data likelihood, while RBMs rely on Markov chain methods to estimate the distribution. The overall landscape of generative models is summarized in the figure below.
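As a rough illustration of what "maximizing a lower bound" means for a VAE, the following sketch (assuming PyTorch, with hypothetical encoder/decoder layers and sizes that are not part of the lesson) computes the two terms of the evidence lower bound: a reconstruction term and a KL-divergence term.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyVAE(nn.Module):
    # Hypothetical sizes: 784-dim inputs (e.g., flattened 28x28 images), 20-dim latent code.
    def __init__(self, x_dim=784, z_dim=20):
        super().__init__()
        self.enc = nn.Linear(x_dim, 2 * z_dim)   # outputs mean and log-variance
        self.dec = nn.Linear(z_dim, x_dim)       # maps latent code back to data space

    def forward(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return self.dec(z), mu, logvar

def elbo_loss(x, x_recon, mu, logvar):
    # Reconstruction term: how well the decoder explains the data.
    recon = F.binary_cross_entropy_with_logits(x_recon, x, reduction="sum")
    # KL term: how far the approximate posterior is from the standard-normal prior.
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    # Minimizing this loss maximizes the evidence lower bound on the log-likelihood.
    return recon + kl

# Usage sketch with random stand-in data.
model = TinyVAE()
x = torch.rand(8, 784)            # a hypothetical batch of 8 flattened images in [0, 1]
x_recon, mu, logvar = model(x)
loss = elbo_loss(x, x_recon, mu, logvar)
loss.backward()
```

GANs, by contrast, sit on the implicit density side of this landscape: they learn to draw samples from the data distribution without ever writing down a density function.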