Why Use Generative Models?
Understand the motivation behind using generative models.
Why do we need generative models in the first place? What value do they provide in practical applications? To answer these questions, let's briefly discuss the topics that form the basis of generation using deep learning.
The promise of deep learning
Many of the models we'll survey in the course are deep, multi-level neural networks. The last 15 years have seen a renaissance in the development of deep learning models for image classification, natural language processing and understanding, and reinforcement learning. These advances were enabled by breakthroughs in the long-standing challenges of tuning and optimizing very complex models, combined with access to larger datasets, distributed computational power in the cloud, and frameworks such as TensorFlow that make it easier to prototype and reproduce research.
Building a better digit classifier
A classic problem used to benchmark algorithms in machine learning and computer vision is the task of classifying which handwritten digit, from 0 to 9, appears in an image from the MNIST dataset.
One of the critical observations of this early research was that, instead of training a network to directly predict the most likely digit for a given image, it can be more effective to first model how the images themselves are generated; the representations a network learns about the data in this way can then be reused to improve the classifier.
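As a point of reference, here is a minimal sketch of the discriminative approach in TensorFlow/Keras. The specific architecture and hyperparameters below are illustrative assumptions rather than the course's model; the point is simply that the network maps each image directly to a digit label.

```python
# A minimal discriminative MNIST classifier: the network maps each
# 28x28 image directly to a probability distribution over the 10 digit labels.
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixel values to [0, 1]

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),    # 784-dimensional pixel vector
    tf.keras.layers.Dense(128, activation="relu"),    # learned hidden representation
    tf.keras.layers.Dense(10, activation="softmax"),  # p(digit | image)
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, validation_data=(x_test, y_test))
```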
Generating images
A challenge in generating images such as the Portrait of Edmond Belamy with the approach used for the MNIST dataset is that, frequently, images have no labels (such as a digit); rather, we want to map a space of random numbers into a set of artificial images using a latent vector (conventionally denoted z).
A further constraint is that we want to promote diversity in these images: if we feed in numbers within a certain range, we would like them to produce different outputs, and we would like to be able to tune the resulting image features. For this purpose, variational autoencoders (VAEs) were developed to generate diverse and photorealistic images.
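To make the generative direction concrete, here is a minimal sketch of mapping random latent vectors to images. The decoder architecture and dimensions below are illustrative assumptions, and the network is untrained; in a real VAE, this decoder would be learned jointly with an encoder by optimizing a reconstruction loss plus a KL-divergence term.

```python
# Sketch of the generative direction: map a random latent vector z to an image.
# The decoder here is untrained and only illustrates the shape of the mapping
# from latent space to pixel space.
import tensorflow as tf

latent_dim = 2  # a small latent space keeps the example easy to visualize

decoder = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(latent_dim,)),
    tf.keras.layers.Dense(28 * 28, activation="sigmoid"),  # pixel intensities in [0, 1]
    tf.keras.layers.Reshape((28, 28)),
])

# Sample latent vectors from a standard normal prior and decode them into images.
z = tf.random.normal(shape=(16, latent_dim))
generated_images = decoder(z)  # shape: (16, 28, 28)
print(generated_images.shape)
```

Varying individual components of z and decoding again is, roughly speaking, what it means to "tune the resulting image features" once the decoder has been trained.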
Here are some sample images generated by a VAE: