Vanilla GAN
Learn how to apply a basic understanding of generative adversarial networks (GANs) by building one from scratch.
The vanilla GAN will consist of a repeating block architecture similar to the one presented in the original paper. We’ll try to replicate the task of generating MNIST digits using our network.
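Before training on MNIST, the pixel values need to be brought into a range the network can work with. As a hedged sketch of this preprocessing step (the helper name `scale_images` is ours, not from the original), images are commonly rescaled from [0, 255] to [-1, 1], which matches a tanh-activated generator output:

```python
import numpy as np

def scale_images(x):
    """Rescale uint8 pixel values in [0, 255] to float32 in [-1, 1]."""
    return (x.astype('float32') - 127.5) / 127.5

# With TensorFlow available, the real data would come from:
# (x_train, _), _ = tf.keras.datasets.mnist.load_data()
# Here a random uint8 batch stands in for a batch of MNIST images.
x_demo = np.random.randint(0, 256, size=(4, 28, 28), dtype='uint8')
x_scaled = scale_images(x_demo)
```

The same scaling is undone (`x * 127.5 + 127.5`) when visualizing generated samples.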
The overall GAN setup can be seen in the figure below. The figure outlines a generator model that takes a noise vector as input and transforms it into a synthetic (fake) sample resembling the training data.
The discriminator, on the other hand, is a simple feedforward network. This model takes an image as input (a real image or the fake output from the generator) and classifies it as real or fake. This simple setup of two competing models helps us to train the overall GAN.
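To make the adversarial setup concrete, here is a minimal, hedged sketch of one training step using toy 1-D models (the model shapes, sizes, and data here are illustrative assumptions; the actual MNIST models come later). The discriminator is updated to label real samples 1 and generated samples 0, then the generator is updated to push its outputs toward a discriminator score of 1:

```python
import tensorflow as tf

# Toy 1-D models (illustrative only, not the MNIST models built below).
G = tf.keras.Sequential([tf.keras.Input(shape=(1,)),
                         tf.keras.layers.Dense(8, activation='relu'),
                         tf.keras.layers.Dense(1)])
D = tf.keras.Sequential([tf.keras.Input(shape=(1,)),
                         tf.keras.layers.Dense(8, activation='relu'),
                         tf.keras.layers.Dense(1, activation='sigmoid')])

bce = tf.keras.losses.BinaryCrossentropy()
g_opt = tf.keras.optimizers.Adam(1e-3)
d_opt = tf.keras.optimizers.Adam(1e-3)

def train_step(real):
    noise = tf.random.normal((tf.shape(real)[0], 1))
    # 1) Discriminator step: real samples labelled 1, fakes labelled 0.
    with tf.GradientTape() as tape:
        fake = G(noise)
        d_loss = (bce(tf.ones_like(real), D(real)) +
                  bce(tf.zeros_like(real), D(fake)))
    d_opt.apply_gradients(zip(tape.gradient(d_loss, D.trainable_variables),
                              D.trainable_variables))
    # 2) Generator step: fakes should be scored as real (label 1).
    with tf.GradientTape() as tape:
        fake = G(noise)
        g_loss = bce(tf.ones_like(real), D(fake))
    g_opt.apply_gradients(zip(tape.gradient(g_loss, G.trainable_variables),
                              G.trainable_variables))
    return float(d_loss), float(g_loss)

# "Real" data here is just a 1-D Gaussian the generator should imitate.
real_batch = tf.random.normal((16, 1), mean=4.0, stddev=0.5)
d_loss, g_loss = train_step(real_batch)
```

Alternating these two updates is what drives both models to improve against each other.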
Implementation
We’ll be relying on TensorFlow 2 and using the high-level Keras API wherever possible. The first step is to define the discriminator model. In this implementation, we’ll use a very basic multilayer perceptron (MLP) as the discriminator model:
from tensorflow.keras.layers import Dense, Flatten, Input, LeakyReLU
from tensorflow.keras.models import Sequential

def build_discriminator(input_shape=(28, 28), verbose=True):
    """Utility method to build an MLP discriminator

    Parameters:
        input_shape: type: tuple. Shape of the input image for
            classification. Default shape is (28, 28) -> MNIST.
        verbose: type: boolean. Print model summary if set to True.
            Default is True.

    Returns:
        tensorflow.keras.Model object
    """
    model = Sequential()
    model.add(Input(shape=input_shape))
    model.add(Flatten())
    model.add(Dense(512))
    model.add(LeakyReLU(alpha=0.2))
    model.add(Dense(256))
    model.add(LeakyReLU(alpha=0.2))
    model.add(Dense(1, activation='sigmoid'))
    if verbose:
        model.summary()
    return model
We first add an `Input` layer to the model with the specified input shape, followed by a `Flatten` layer, which flattens the 28×28 input image into a 784-element vector so it can be fed to the fully connected layers. Two hidden `Dense` layers of 512 and 256 units, each followed by a `LeakyReLU` activation, lead into a single-unit output `Dense` layer with a sigmoid activation that scores the input as real or fake.
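The discriminator needs a generator counterpart. Since its code isn't shown in this excerpt, the following is a hedged sketch (the function name, layer sizes, and latent dimension are our assumptions) that mirrors the discriminator's MLP structure in reverse, mapping a noise vector to a 28×28 image:

```python
from tensorflow.keras.layers import Dense, Input, LeakyReLU, Reshape
from tensorflow.keras.models import Sequential

def build_generator(z_dim=100, output_shape=(28, 28), verbose=True):
    """Sketch of an MLP generator: noise vector in, 28x28 image out.

    Layer sizes and z_dim are illustrative assumptions, not the
    original implementation.
    """
    model = Sequential()
    model.add(Input(shape=(z_dim,)))
    model.add(Dense(256))
    model.add(LeakyReLU(alpha=0.2))
    model.add(Dense(512))
    model.add(LeakyReLU(alpha=0.2))
    # tanh keeps pixel outputs in [-1, 1]; real MNIST images are
    # scaled to the same range during training.
    model.add(Dense(output_shape[0] * output_shape[1], activation='tanh'))
    model.add(Reshape(output_shape))
    if verbose:
        model.summary()
    return model
```

A noise batch of shape `(batch_size, z_dim)` then yields images of shape `(batch_size, 28, 28)`, ready to be passed to the discriminator.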