
Introduction to Convolutional GANs

Learn why the traditional GAN architecture leads to higher memory consumption, and how convolutional neural networks (CNNs) help to overcome this.

In this section we’re going to improve on the CelebA GAN we just developed for two reasons.

  • The images it generates look fuzzy. The areas we expect to be fairly smoothly coloured are covered in a high-contrast pattern of pixels.
  • Fully connected neural networks consume a lot of memory. Even moderately larger images or networks will soon hit the limit of our GPU and prevent training. Most consumer GPUs will have much smaller memory than the Tesla T4 or P100 provided by Google’s Colab service.
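To make the second point concrete, we can compare the parameter counts of a fully connected layer and a convolutional layer operating on the same image. The layer sizes below are illustrative choices, not taken from the lesson's notebook, but the arithmetic shows why fully connected layers dominate memory use.

```python
# Parameters in a fully connected layer: every input connects to every output.
in_features = 128 * 128 * 3   # a flattened 128x128 RGB image (illustrative size)
out_features = 1000           # hidden layer of 1,000 neurons (illustrative)
fc_params = in_features * out_features + out_features  # weights + biases

# Parameters in a convolutional layer: each small kernel is reused across
# the whole image, so the count is independent of the image size.
kernels, channels, kernel_size = 256, 3, 3
conv_params = kernels * channels * kernel_size * kernel_size + kernels

print(fc_params)    # 49,153,000
print(conv_params)  # 7,168
```

The fully connected layer needs several thousand times more learnable parameters than the convolutional one, and every one of those parameters (plus its gradient) occupies GPU memory during training.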

Memory consumption

Before we explore a new GAN technique, let’s check how much memory our GAN consumes. After the entire notebook has run, the discriminator and generator networks will have consumed GPU memory. More precisely, the input data, the information flowing through the networks, the outputs, and the learnable parameters are all tensors that require memory on the GPU.

We can check how much memory is currently allocated using the following code.

# current memory allocated to tensors
torch.cuda.memory_allocated(device) / (1024*1024*1024)

We divide by 1024 * 1024 * 1024 to convert from bytes to gigabytes.
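That conversion can be wrapped in a small helper so we don’t repeat the arithmetic each time we check memory. This is a sketch, not part of the lesson’s notebook; in practice we would pass it the result of `torch.cuda.memory_allocated(device)`.

```python
def bytes_to_gb(n_bytes):
    # 1 GB = 1024 * 1024 * 1024 = 1024**3 bytes
    return n_bytes / (1024 ** 3)

# example: 3 GiB worth of bytes converts back to 3.0
print(bytes_to_gb(3 * 1024 ** 3))  # 3.0
```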

We can see that after all the notebook code has run, about 0.70 ...