Experimentation

Experiment with a different activation function and train for a larger number of epochs.
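
As a concrete starting point, here is a minimal sketch of what a longer training run might look like. The names D, G, mnist_dataset and generate_random_seed are assumptions standing in for the Discriminator and Generator classes, dataset and helper built in earlier lessons, not a fixed API.

```python
# A sketch of a longer training run; D, G, mnist_dataset and
# generate_random_seed are assumed to exist from earlier lessons.
import torch

epochs = 8   # try a higher number than before and compare the generated images

for epoch in range(epochs):
    print("epoch =", epoch + 1)
    for label, image_data_tensor, target_tensor in mnist_dataset:
        # train the discriminator on a real image
        D.train(image_data_tensor, torch.FloatTensor([1.0]))
        # train the discriminator on a fake image, detached so G isn't updated here
        D.train(G.forward(generate_random_seed(100)).detach(), torch.FloatTensor([0.0]))
        # train the generator to push the discriminator's output towards "real"
        G.train(D, generate_random_seed(100), torch.FloatTensor([1.0]))
```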

Run your own experiments

If you’ve worked through to this stage, you’ve covered enough of the basics to meaningfully try your own ideas for improving the GAN training process.

  • You could try different kinds of loss functions, different sizes of the neural network, perhaps even variations of the standard GAN training loop.
  • Perhaps you might try to discourage mode collapse by including a measure of diversity over several outputs in the loss function (see the sketch after this list).
  • If you’re very confident, you might try to implement your own optimiser, one which is better suited to the adversarial dynamics of a GAN.
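
For the diversity idea, one possible shape is sketched below. This is not the course’s code: it assumes a generator G whose forward() maps a latent vector to a flattened image, and the diversity_penalty helper and its parameters are illustrative.

```python
# A hedged sketch of a diversity penalty to discourage mode collapse.
# It assumes a generator G that maps a latent vector to a flattened image.
import torch

def diversity_penalty(G, latent_size=100, batch_size=8, weight=0.1):
    # generate several images from different random seeds
    seeds = torch.randn(batch_size, latent_size)
    outputs = torch.stack([G.forward(seed) for seed in seeds])
    # measure diversity as the mean per-pixel standard deviation across the batch;
    # a collapsed generator produces nearly identical outputs, so this is near zero
    diversity = outputs.std(dim=0).mean()
    # penalise low diversity; gradients flow back into G through `outputs`
    return weight * (1.0 - diversity).clamp(min=0.0)

# usage (assumed): add the penalty to the usual generator loss before backward()
# g_loss = g_loss + diversity_penalty(G)
# g_loss.backward()
```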

Experiment with the GELU activation function

The following is one of my own simple experiments with an activation function called GELU, which is like a ReLU but has a softer corner.

📝 Some have suggested that such smooth activation functions are now the state of the art because they provide useful gradients everywhere and avoid the abrupt change in gradient that ReLU has at the origin.
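
As a sketch of what the swap looks like in PyTorch, the example below uses nn.GELU() as the activation in a simple fully connected discriminator. The layer sizes are illustrative rather than the exact ones used elsewhere in the course.

```python
import torch.nn as nn

# Illustrative discriminator layers: only the activation choice matters here.
# nn.GELU() behaves like ReLU but with a smooth, rounded corner at the origin.
discriminator_layers = nn.Sequential(
    nn.Linear(784, 200),
    nn.GELU(),            # previously something like nn.LeakyReLU(0.02)
    nn.LayerNorm(200),
    nn.Linear(200, 1),
    nn.Sigmoid()
)
```

The same one-line change can be made in the generator; everything else in the training loop stays as before.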
