Variational Autoencoder: Practice
Build and train a VAE with Python and PyTorch.
It is finally time to put together everything we have learned and build the famous variational autoencoder.
Here is an overview of what we are going to implement:
For the encoder, we will use a simple network with 2 linear layers. This network will parameterize the variational posterior $q_{\phi}(z \mid x)$. We also need another neural network (the decoder), which will parameterize the likelihood $p_{\theta}(x \mid z)$, again with 2 linear layers.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self):
        super(VAE, self).__init__()
        self.features = 16
        # encoder
        self.enc1 = nn.Linear(in_features=3072, out_features=128)
        self.enc2 = nn.Linear(in_features=128, out_features=self.features * 2)
        # decoder
        self.dec1 = nn.Linear(in_features=self.features, out_features=128)
        self.dec2 = nn.Linear(in_features=128, out_features=3072)

    def forward(self, x):
        # encoding
        x = F.relu(self.enc1(x))
        x = self.enc2(x).view(-1, 2, self.features)
        # get `mu` and `log_var`
        mu = x[:, 0, :]  # the first `features` values as the mean
        log_var = x[:, 1, :]  # the remaining values as the log variance
        # get the latent vector through reparameterization
        z = self.reparameterize(mu, log_var)
        ...
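The snippet ends before the reparameterize method and the decoding step are defined. Below is a minimal sketch of how they would typically look, assuming the standard reparameterization trick (z = mu + eps * sigma, with eps sampled from a standard normal) and a sigmoid on the output so reconstructions land in [0, 1]; apart from the name reparameterize, which appears in the code above, the details are illustrative:

    def reparameterize(self, mu, log_var):
        # `log_var` is the log of the variance, so sigma = exp(0.5 * log_var)
        std = torch.exp(0.5 * log_var)
        # sample eps ~ N(0, I) with the same shape as `std`
        eps = torch.randn_like(std)
        # z = mu + eps * sigma
        return mu + eps * std

And the remainder of forward, after the call to self.reparameterize:

        # decoding
        x = F.relu(self.dec1(z))
        reconstruction = torch.sigmoid(self.dec2(x))
        return reconstruction, mu, log_var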
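Assuming the completion above, here is a quick sanity check of the shapes. The 3,072 input features correspond to flattened 32x32 RGB images (3 * 32 * 32), e.g. CIFAR-10-sized inputs; the batch here is random dummy data:

model = VAE()
batch = torch.rand(8, 3072)  # dummy batch of 8 flattened 32x32 RGB images, values in [0, 1]
reconstruction, mu, log_var = model(batch)
print(reconstruction.shape, mu.shape, log_var.shape)
# torch.Size([8, 3072]) torch.Size([8, 16]) torch.Size([8, 16])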
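To train the model, we also need an objective. Here is a minimal sketch of the usual VAE loss, assuming a binary cross-entropy reconstruction term plus the closed-form KL divergence between the variational posterior and the standard normal prior; the function name vae_loss and the 'sum' reduction are illustrative choices, not from the original code:

def vae_loss(reconstruction, x, mu, log_var):
    # reconstruction term: how closely the decoder output matches the input
    bce = F.binary_cross_entropy(reconstruction, x, reduction='sum')
    # KL(N(mu, sigma^2) || N(0, I)) = -0.5 * sum(1 + log sigma^2 - mu^2 - sigma^2)
    kld = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp())
    return bce + kld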