Custom Layer Implementation

Explore the custom layers that aid in the progressive growing of GANs: scaling, statistics, output selection, and resizing.


Implementing the models for PGGANs is a complex task because they require many custom layers and operations that are not available out of the box, including layers that perform pixel normalization, runtime weight scaling, and more.

In addition to these layers, our model design builds a mechanism into the models themselves that determines, at runtime, which block is used as the model output and how much input that block receives from the previous layer.
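To make this concrete, the sketch below illustrates the general fade-in idea in TensorFlow. The names `blend_outputs` and `lod_fraction` are hypothetical and not part of the course implementation; the point is only that the newest block's output is blended with an upsampled version of the previous block's output, with the blend weight growing as training progresses.

```python
import tensorflow as tf

def blend_outputs(new_block_rgb, prev_block_rgb, lod_fraction):
    """Blend the newest block's output with the previous block's output.

    Both tensors are assumed to already be mapped to RGB, and lod_fraction is
    assumed to move from 0.0 to 1.0 as the new block fades in.
    """
    # Upsample the previous block's output to the new block's resolution.
    target_size = tf.shape(new_block_rgb)[1:3]
    prev_upsampled = tf.image.resize(prev_block_rgb, target_size, method="nearest")
    # Weighted sum: at lod_fraction == 0 only the old block contributes,
    # at lod_fraction == 1 only the new block contributes.
    return lod_fraction * new_block_rgb + (1.0 - lod_fraction) * prev_upsampled
```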

The implementation described in this course encourages code reuse and uses (or adapts) code from an excellent implementation by the Microsoft Student Club of Beihang University.

Custom layers

Let’s take a look at the custom layers and helper functions used in our PGGANs implementation. We are going to cover:

- The MinibatchStatConcatLayer layer, which computes statistics that are used in the last layers of the discriminator
- The WeightScalingLayer layer, which scales the weights by their L2 norm
- The PixelNormLayer layer, which scales the activations
- The BlockSelectionLayer layer, which chooses the model output with respect to the current level of detail
- The ResizeLayer layer, which rescales the activations
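To give a flavor of what these layers look like before we walk through them, here is a minimal sketch of pixel normalization as a Keras layer. The class name and defaults are illustrative rather than the exact course code: each pixel's feature vector is divided by its root-mean-square value across the channel axis.

```python
import tensorflow as tf
from tensorflow.keras.layers import Layer

class PixelNormSketch(Layer):
    """Minimal pixel normalization: divide each pixel's feature vector by its RMS."""

    def __init__(self, epsilon=1e-8, **kwargs):
        super().__init__(**kwargs)
        self.epsilon = epsilon

    def call(self, inputs):
        # Mean of squared activations over the channel axis (channels-last assumed),
        # kept as a trailing dimension so it broadcasts over the input.
        mean_square = tf.reduce_mean(tf.square(inputs), axis=-1, keepdims=True)
        return inputs / tf.sqrt(mean_square + self.epsilon)

    def compute_output_shape(self, input_shape):
        # Normalization does not change the tensor shape.
        return input_shape
```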

The custom layers that we are going to write will override at most four methods: __init__, build, call, and compute_output_shape.
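As a reminder of where each of these hooks fits in Keras, here is a generic skeleton (a plain dense-style layer, not one of the PGGAN layers) showing the role of the four methods:

```python
import tensorflow as tf
from tensorflow.keras.layers import Layer

class SkeletonLayer(Layer):
    def __init__(self, units, **kwargs):
        # __init__: store the configuration passed by the user.
        super().__init__(**kwargs)
        self.units = units

    def build(self, input_shape):
        # build: create the layer's weights once the input shape is known.
        self.kernel = self.add_weight(
            name="kernel",
            shape=(int(input_shape[-1]), self.units),
            initializer="glorot_uniform",
            trainable=True,
        )
        super().build(input_shape)

    def call(self, inputs):
        # call: define the forward computation.
        return tf.matmul(inputs, self.kernel)

    def compute_output_shape(self, input_shape):
        # compute_output_shape: report the shape of the output tensor.
        return tuple(input_shape[:-1]) + (self.units,)
```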

We will start with the MinibatchStatConcatLayer layer.
