Model Implementation: PGGANs

Understand how the progressive growing of GANs uses specialized discriminator and generator architectures to stabilize and enhance training.


The discriminator

The discriminator used in PGGANs consists of stacks of convolutions followed by downsampling layers. Each convolution is paired with a WeightScalingLayer, which scales the layer's outputs by a per-layer constant, and a PixelNormLayer, which normalizes the outputs by their L2 norm across channels, so that each per-pixel feature vector has unit length.
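To make these two normalizations concrete, here is a minimal NumPy sketch. The function names and the exact scaling constant (the He-initialization factor `sqrt(2 / fan_in)`) are illustrative assumptions, not the course's actual layer implementations:

```python
import numpy as np

def weight_scaling(w):
    """Sketch of the idea behind a WeightScalingLayer: multiply by a
    per-layer constant derived from the fan-in (assumed here to be the
    He constant sqrt(2 / fan_in))."""
    fan_in = np.prod(w.shape[1:])       # number of inputs feeding each unit
    c = np.sqrt(2.0 / fan_in)           # per-layer scaling constant
    return w * c

def pixel_norm(x, eps=1e-8):
    """Sketch of a PixelNormLayer: normalize each spatial location's
    feature vector by its L2 norm over the channel axis.

    x has shape (batch, channels, height, width)."""
    norm = np.sqrt(np.mean(x ** 2, axis=1, keepdims=True) + eps)
    return x / norm
```

After `pixel_norm`, the mean squared activation across channels is 1 at every spatial location, which keeps feature magnitudes from escalating during adversarial training.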

Specific to the PGGAN methodology, this discriminator architecture includes a BlockSelectionLayer, which selects the block that serves as the output at the current training stage. Remember that in this methodology, we first train an output layer at a low resolution and then train the higher-resolution layers one by one.
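The stage-by-stage block selection can be sketched as follows. This is a hypothetical illustration of the routing idea, not the course's BlockSelectionLayer code; `forward_progressive` and the block ordering are assumptions:

```python
def forward_progressive(x, blocks, depth):
    """Run only the blocks active at the current training stage.

    `blocks` are ordered from the highest-resolution block down to the
    lowest-resolution (output) block, matching the discriminator's
    direction. `depth` selects how many of the deepest blocks are used,
    so training starts with one block and enables higher-resolution
    blocks one by one as `depth` grows.
    """
    for block in blocks[len(blocks) - depth:]:
        x = block(x)
    return x

# Toy "blocks" standing in for convolutional stages:
blocks = [lambda v: v + 100, lambda v: v * 2, lambda v: v + 1]
print(forward_progressive(3, blocks, 1))  # only the final block runs
print(forward_progressive(3, blocks, 3))  # all blocks run
```

In a real implementation, growing `depth` is typically combined with a fade-in that blends the new block's output with the previous stage's, but the selection logic itself is this simple.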

Another special addition to the discriminator is the MinibatchStatConcatLayer. It first computes the standard deviation of each feature (channel) at each spatial location over the minibatch, then averages these standard deviations into a single value, which is replicated and concatenated to the input as an extra feature map.
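A minimal NumPy sketch of this statistic, assuming an input of shape `(batch, channels, height, width)`; the function name is illustrative, not the course's actual layer:

```python
import numpy as np

def minibatch_stddev(x):
    """Append the average minibatch standard deviation as an extra channel.

    Sketch of the idea behind MinibatchStatConcatLayer: compute the
    stddev of each feature at each spatial location over the minibatch,
    average those values into one scalar, then replicate that scalar
    into a (batch, 1, H, W) map and concatenate it to the input.
    """
    std = np.std(x, axis=0)             # per-feature, per-location stddev
    mean_std = np.mean(std)             # single summary statistic
    stat_map = np.full((x.shape[0], 1, x.shape[2], x.shape[3]), mean_std)
    return np.concatenate([x, stat_map], axis=1)
```

Because this statistic reflects variation across the whole minibatch, the discriminator can penalize generators whose samples are too similar to one another, which discourages mode collapse.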
