Digest and Test: Synthesizing and Manipulating Images with GANs

Reinforce your understanding and test your knowledge of the topics covered in this chapter.


In this chapter, we learned how to use the GAN framework to build models that synthesize and manipulate images conditioned on other images, a task known as image-to-image translation. We started from a baseline pix2pix implementation and trained a model on the Zappos dataset to synthesize images of shoes from shoe outlines. In pix2pix, the generator takes no z vector; variety comes instead from dropout, which is kept active at both training and test time. Unlike the discriminators we have seen in previous chapters, pix2pix uses a PatchGAN discriminator, which outputs a grid of real/fake predictions, one per image patch, rather than a single value for the whole image.
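The patch-based idea can be sketched in a few lines. This is a minimal, illustrative example (not the book's implementation): it assumes the discriminator has already produced an (H, W) score map, and it uses a least-squares patch loss for simplicity, whereas pix2pix itself trains with binary cross-entropy.

```python
import numpy as np

def patchgan_loss(patch_scores, target_is_real):
    """Average a per-patch least-squares loss over a PatchGAN's output grid.

    `patch_scores` is the (H, W) map the discriminator emits, one
    real/fake score per receptive-field patch. (Illustrative only:
    pix2pix uses a binary cross-entropy adversarial loss.)
    """
    target = np.ones_like(patch_scores) if target_is_real else np.zeros_like(patch_scores)
    return np.mean((patch_scores - target) ** 2)

# A 70x70 PatchGAN on a 256x256 input yields roughly a 30x30 score map.
scores = np.full((30, 30), 0.8)  # hypothetical discriminator output
print(patchgan_loss(scores, target_is_real=True))   # low loss: patches look real
print(patchgan_loss(scores, target_is_real=False))  # high loss: penalized as real-looking
```

Because the loss is averaged over the grid, every local patch of the image contributes to the gradient, which is what pushes the generator toward sharp local texture.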

We then learned how to improve the baseline implementation by adopting several of the modifications proposed in the pix2pixHD paper. The first modification was to use multiscale discriminators that operate on the same image at different resolutions. The second was the feature matching loss, which encourages the generator to match the discriminator's intermediate activations on real images and tends to stabilize training and improve image quality. Implementing the feature matching loss in Keras is somewhat intricate because it requires access to the discriminator's intermediate feature maps, but it adds another powerful and valuable tool to our GAN toolkit. Finally, we used instance maps to compute instance boundary (edge) maps, which were fed to the generator to guide the image synthesis process.
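Conceptually, the feature matching loss is just an L1 distance between the discriminator's intermediate feature maps for a real image and for the generated one, averaged over layers (and, in pix2pixHD, over the multiscale discriminators as well). The sketch below uses hypothetical NumPy arrays in place of real Keras activations, purely to show the arithmetic:

```python
import numpy as np

def feature_matching_loss(real_feats, fake_feats):
    """L1 distance between per-layer discriminator activations for a real
    image and a generated image, averaged over layers.

    `real_feats` / `fake_feats` are lists of same-shaped arrays, one per
    discriminator layer (hypothetical stand-ins for Keras feature maps).
    """
    per_layer = [np.mean(np.abs(r - f)) for r, f in zip(real_feats, fake_feats)]
    return sum(per_layer) / len(per_layer)

# Hypothetical activations from three discriminator layers.
rng = np.random.default_rng(0)
real = [rng.normal(size=(8, 8, 64)),
        rng.normal(size=(4, 4, 128)),
        rng.normal(size=(2, 2, 256))]
fake = [r + 0.1 for r in real]  # generator output "close" in feature space
print(feature_matching_loss(real, fake))  # ~0.1: the constant offset
```

The intricacy in Keras comes from wiring, not math: the discriminator must be built (or wrapped) so that its intermediate layer outputs are exposed as additional model outputs that the generator's loss can reach.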
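The instance-edge computation is also simple to state: a pixel belongs to the boundary map if its instance ID differs from any of its four neighbors. A minimal NumPy sketch of this boundary-map idea (not the book's exact code):

```python
import numpy as np

def instance_edges(inst_map):
    """Binary edge map from an integer instance-ID map.

    A pixel is marked as an edge if its instance ID differs from any of
    its 4-connected neighbors, which is how instance boundaries can be
    extracted to condition the generator.
    """
    edges = np.zeros(inst_map.shape, dtype=bool)
    # Horizontal neighbor differences mark both sides of a boundary.
    edges[:, 1:] |= inst_map[:, 1:] != inst_map[:, :-1]
    edges[:, :-1] |= inst_map[:, :-1] != inst_map[:, 1:]
    # Vertical neighbor differences.
    edges[1:, :] |= inst_map[1:, :] != inst_map[:-1, :]
    edges[:-1, :] |= inst_map[:-1, :] != inst_map[1:, :]
    return edges.astype(np.uint8)

# Two instances side by side: the columns where they meet become edges.
inst = np.array([[1, 1, 2, 2],
                 [1, 1, 2, 2]])
print(instance_edges(inst))
```

Unlike a semantic label map, the instance map distinguishes adjacent objects of the same class, so these edges give the generator the object boundaries it would otherwise be unable to see.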

Test your understanding

Put your knowledge to the test with a quiz designed to reinforce your understanding.
