Training The Human Faces GAN

Learn about training the GAN to generate human faces.

We’re finally ready to train the Human Faces GAN.

The training loop

The training loop code has the same structure as before, with minor changes to move the discriminator and generator to the GPU and also ensure the target values are of type torch.cuda.FloatTensor.

%%time

# create Discriminator and Generator
D = Discriminator()
D.to(device)

G = Generator()
G.to(device)

epochs = 1

for epoch in range(epochs):
    print("epoch = ", epoch + 1)

    # train Discriminator and Generator
    for image_data_tensor in celeba_dataset:
        # train discriminator on true
        D.train(image_data_tensor, torch.cuda.FloatTensor([1.0]))

        # train discriminator on false
        # use detach() so gradients in G are not calculated
        D.train(G.forward(generate_random_seed(100)).detach(), torch.cuda.FloatTensor([0.0]))

        # train generator
        G.train(D, generate_random_seed(100), torch.cuda.FloatTensor([1.0]))
        pass
    pass
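The `torch.cuda.FloatTensor([1.0])` constructor assumes a GPU is present and fails on a CPU-only machine. A more portable sketch, assuming only standard PyTorch, creates the target tensors on whichever `device` was selected earlier:

import torch

# pick the GPU if one is available, otherwise fall back to the CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# targets for "real" (1.0) and "fake" (0.0) images, created on that device;
# on a GPU machine these behave like torch.cuda.FloatTensor, but the same
# code also runs on a CPU
real_target = torch.FloatTensor([1.0]).to(device)
fake_target = torch.FloatTensor([0.0]).to(device)

With these in place, the calls inside the loop become `D.train(image_data_tensor, real_target)` and so on, with nothing else changed.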

Nothing else needs to change, which tells us we have written a very reusable training loop. Running the training loop for one epoch takes just under 10 minutes for us. Without GPU acceleration, it might have taken three hours!
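The loop above depends on the custom `train` methods we gave the Discriminator and Generator classes in earlier lessons. As a reminder of what the loop is invoking, here is a minimal sketch of a discriminator in that style; the layer sizes (a flattened input of 784 values) are illustrative assumptions, not the actual Human Faces network:

import torch
import torch.nn as nn

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        # a deliberately tiny illustrative network
        self.model = nn.Sequential(
            nn.Linear(784, 200),
            nn.LeakyReLU(0.02),
            nn.Linear(200, 1),
            nn.Sigmoid()
        )
        self.loss_function = nn.BCELoss()
        self.optimiser = torch.optim.Adam(self.parameters(), lr=0.0001)

    def forward(self, inputs):
        return self.model(inputs)

    def train(self, inputs, targets):
        # one training step: forward pass, loss against the target label,
        # then a single optimiser update
        outputs = self.forward(inputs)
        loss = self.loss_function(outputs, targets)
        self.optimiser.zero_grad()
        loss.backward()
        self.optimiser.step()

Because only the network definition differs between projects, the loop that calls `D.train(...)` and `G.train(...)` never needs to change.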

Let’s look at the loss charts.

Loss during training the discriminator

The following is the discriminator loss.

This looks good: the training is not unstable or chaotic, and the loss values are approximately converging to a value that isn't too large.
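Raw per-step loss values are noisy, which can make convergence hard to judge by eye. A simple moving average smooths the curve before plotting; here is a stdlib-only sketch, where the `losses` list is a stand-in for whatever loss history the classes record:

def moving_average(losses, window=3):
    """Smooth a list of loss values with a sliding-window mean."""
    smoothed = []
    for i in range(len(losses) - window + 1):
        smoothed.append(sum(losses[i:i + window]) / window)
    return smoothed

# noisy-looking losses that are in fact settling down
losses = [0.9, 0.3, 0.6, 0.4, 0.5, 0.45]
print(moving_average(losses))

Plotting the smoothed values instead of the raw ones makes the "approximately converging" behaviour much easier to see.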

Loss during training the generator

Let’s now look at the generator loss.

Again, this looks good. The training seems to be fairly stable and the loss values are ...