Creating the Network in TensorFlow 2

Learn how to prepare and build the variational autoencoder model.

Now that we’ve downloaded the CIFAR-10 dataset, split it into test and training data, and reshaped and rescaled it, we are ready to build our VAE model. We’ll use the same Model API from the Keras module in TensorFlow 2. The TensorFlow documentation contains an example of how to implement a VAE using convolutional networks, and we’ll build on this code example. However, for our purposes, we’ll implement simpler VAE networks using MLP layers based on the original VAE paper, “Auto-Encoding Variational Bayes” (Kingma, Diederik P., and Max Welling. 2013. ArXiv.org. December 20, 2013. https://arxiv.org/abs/1312.6114), and show how we adapt the TensorFlow example to also allow for IAF modules in decoding.
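
Before walking through the individual pieces, here is a minimal sketch of the overall structure we are aiming for: an MLP encoder that outputs the mean and log-variance of the latent distribution, a reparameterized sampling step, and an MLP decoder, all wired together with the Keras Model API. The layer widths, latent dimension, and flattened 32 x 32 x 3 input size are illustrative assumptions, not necessarily the exact configuration we build in the rest of this section.

```python
import tensorflow as tf


class MLPVAE(tf.keras.Model):
    """Sketch of an MLP-based VAE using the Keras Model API (illustrative sizes)."""

    def __init__(self, latent_dim=64, hidden_dim=256, data_dim=32 * 32 * 3):
        super().__init__()
        self.latent_dim = latent_dim
        # Encoder: maps a flattened image to the mean and log-variance of q(z|x).
        self.encoder = tf.keras.Sequential([
            tf.keras.layers.Dense(hidden_dim, activation="relu"),
            tf.keras.layers.Dense(2 * latent_dim),  # concatenated [mean, log-variance]
        ])
        # Decoder: maps a latent sample z back to per-pixel logits.
        self.decoder = tf.keras.Sequential([
            tf.keras.layers.Dense(hidden_dim, activation="relu"),
            tf.keras.layers.Dense(data_dim),
        ])

    def encode(self, x):
        mean, logvar = tf.split(self.encoder(x), num_or_size_splits=2, axis=1)
        return mean, logvar

    def reparameterize(self, mean, logvar):
        # z = mean + sigma * eps, so gradients flow through mean and logvar.
        eps = tf.random.normal(shape=tf.shape(mean))
        return mean + tf.exp(0.5 * logvar) * eps

    def call(self, x):
        mean, logvar = self.encode(x)
        z = self.reparameterize(mean, logvar)
        return self.decoder(z), mean, logvar
```

Calling this model on a batch of flattened images returns the reconstruction logits together with the posterior mean and log-variance needed to compute the ELBO loss.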

In the original paper, the authors propose two kinds of models for use in the VAE, both MLP feedforward networks: Gaussian and Bernoulli. These names reflect the probability distributions that the networks’ final output layers parameterize. To make the distinction concrete, a hedged sketch of the two output heads follows; the function names, layer widths, and tanh hidden activation are illustrative assumptions. A Bernoulli decoder ends in a sigmoid that produces a per-pixel probability, while a Gaussian decoder produces a mean and a log-variance for each pixel.
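
```python
import tensorflow as tf


def bernoulli_decoder(latent_dim=64, hidden_dim=256, data_dim=32 * 32 * 3):
    # Outputs per-pixel probabilities for a Bernoulli likelihood.
    inputs = tf.keras.Input(shape=(latent_dim,))
    h = tf.keras.layers.Dense(hidden_dim, activation="tanh")(inputs)
    probs = tf.keras.layers.Dense(data_dim, activation="sigmoid")(h)
    return tf.keras.Model(inputs=inputs, outputs=probs)


def gaussian_decoder(latent_dim=64, hidden_dim=256, data_dim=32 * 32 * 3):
    # Outputs a mean and a log-variance per pixel for a Gaussian likelihood.
    inputs = tf.keras.Input(shape=(latent_dim,))
    h = tf.keras.layers.Dense(hidden_dim, activation="tanh")(inputs)
    mean = tf.keras.layers.Dense(data_dim)(h)
    logvar = tf.keras.layers.Dense(data_dim)(h)
    return tf.keras.Model(inputs=inputs, outputs=[mean, logvar])
```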

Bernoulli MLP

The Bernoulli MLP can be used as the network decoder, generating the simulated image x from the latent vector z. The formula for the Bernoulli MLP, as given in the original paper, is:
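
\log p(x \mid z) = \sum_{i=1}^{D} \big[ x_i \log y_i + (1 - x_i) \log (1 - y_i) \big]

where

y = f_\sigma\big(W_2 \tanh(W_1 z + b_1) + b_2\big)

Here, D is the dimensionality of x, f_σ is the elementwise sigmoid, and W_1, W_2, b_1, and b_2 are the weights and biases of the MLP. In other words, each element of the decoder output y is treated as the probability parameter of an independent Bernoulli distribution over the corresponding pixel, and the expression above is the resulting log-likelihood of the data x given the latent vector z.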
