Training The Human Faces GAN
Learn about training the GAN to generate human faces.
We’re finally ready to train the Human Faces GAN.
The training loop
The training loop code has the same structure as before, with minor changes: the discriminator and generator are moved to the GPU, and the target values are created as type torch.cuda.FloatTensor.
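As a quick sketch of the device setup this relies on, here is the usual way device is defined before the training loop; the CPU fallback is included so the snippet runs anywhere, and on a CUDA device a tensor moved with .to(device) becomes a torch.cuda.FloatTensor:

```python
import torch

# pick the GPU if one is available, otherwise fall back to the CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# targets must live on the same device as the model's outputs;
# on a CUDA device, .to(device) yields a torch.cuda.FloatTensor
real_target = torch.FloatTensor([1.0]).to(device)
fake_target = torch.FloatTensor([0.0]).to(device)
```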
%%time

# create Discriminator and Generator
D = Discriminator()
D.to(device)
G = Generator()
G.to(device)

epochs = 1

for epoch in range(epochs):
    print("epoch = ", epoch + 1)

    # train Discriminator and Generator
    for image_data_tensor in celeba_dataset:
        # train discriminator on true
        D.train(image_data_tensor, torch.cuda.FloatTensor([1.0]))

        # train discriminator on false
        # use detach() so gradients in G are not calculated
        D.train(G.forward(generate_random_seed(100)).detach(), torch.cuda.FloatTensor([0.0]))

        # train generator
        G.train(D, generate_random_seed(100), torch.cuda.FloatTensor([1.0]))
        pass
    pass
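The detach() call in the discriminator's fake-image step is worth a closer look. It cuts the generator's output out of the computation graph, so backpropagating the discriminator's loss doesn't waste effort computing gradients for G. A minimal illustration with toy tensors rather than the networks above:

```python
import torch

w = torch.tensor([2.0], requires_grad=True)

y = w * 3                # y is part of w's computation graph
y_detached = y.detach()  # y_detached is cut off from the graph

print(y.requires_grad)           # True
print(y_detached.requires_grad)  # False

# gradients flow through y but not through y_detached
(y * 4).sum().backward()
print(w.grad)  # tensor([12.])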
Nothing else needs to change, which tells us we have written a very reusable training loop. Running the training loop for one epoch takes just under 10 minutes for us. Without GPU acceleration, that might have taken three hours!
Let’s look at the loss charts.
Loss during training the discriminator
The following is the discriminator loss.
This looks good: the training is neither unstable nor chaotic, and the loss values are approximately converging to a value that isn't too large.
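There is a useful reference point for what counts as "not too large". Assuming the loss function is binary cross-entropy (the same reasoning applies, with different numbers, for MSE), an ideally balanced GAN has the discriminator outputting 0.5 for every image, real or fake, which gives a loss of -ln(0.5):

```python
import math

# BCE loss for a target of 1.0 and a prediction of 0.5:
# -(1.0 * ln(0.5) + 0.0 * ln(1 - 0.5)) = -ln(0.5)
ideal_loss = -math.log(0.5)
print(round(ideal_loss, 4))  # 0.6931
```

So a discriminator loss hovering somewhere near this value, rather than collapsing to zero or exploding, is a sign that neither network is overpowering the other.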
Loss during training the generator
Let’s now look at the generator loss.
Again, this looks good. The training seems to be fairly stable and the loss values are ...