Paired Style Transfer Using pix2pix GAN
Learn about a variant of conditional GANs used in the context of style transfer.
Style transfer is an intriguing research area that pushes the boundaries of creativity and deep learning together. In their work, "Image-to-Image Translation with Conditional Adversarial Networks," Isola et al. propose a conditional GAN, commonly known as pix2pix, that learns to translate images from a source domain into a target domain.
It is called paired style transfer because the training set must contain matching samples from both the source and target domains. This generic approach is shown to effectively synthesize high-quality images from label maps and edge maps, and even to colorize images. The authors highlight the importance of an architecture that can adapt to the dataset at hand and learn the mapping function directly, without the hand-engineering that such tasks typically required. A small sketch of how such a paired dataset can be organized follows.
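The snippet below is a minimal sketch, assuming PyTorch and a hypothetical directory layout in which matching files share the same name across the two folders. It only illustrates the "paired" requirement: every training example couples a source-domain image (for example, an edge map) with its corresponding target-domain image (the photo it was derived from).

```python
import os
from PIL import Image
from torch.utils.data import Dataset
from torchvision import transforms

class PairedImageDataset(Dataset):
    """Yields (source, target) image pairs for paired style transfer."""

    def __init__(self, source_dir, target_dir, size=256):
        # Assumption: filenames match one-to-one across the two directories.
        self.source_dir, self.target_dir = source_dir, target_dir
        self.names = sorted(os.listdir(source_dir))
        self.tf = transforms.Compose([
            transforms.Resize((size, size)),
            transforms.ToTensor(),
            transforms.Normalize(mean=[0.5] * 3, std=[0.5] * 3),  # scale to [-1, 1]
        ])

    def __len__(self):
        return len(self.names)

    def __getitem__(self, idx):
        name = self.names[idx]
        src = Image.open(os.path.join(self.source_dir, name)).convert("RGB")
        tgt = Image.open(os.path.join(self.target_dir, name)).convert("RGB")
        return self.tf(src), self.tf(tgt)  # (condition, ground truth) pair
```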
The U-Net generator
Because CNNs are well suited to computer vision tasks, using them for both the generator and the discriminator architectures has a number of advantages. This work focuses on two related architectures for the generator setup: a vanilla encoder-decoder, and an encoder-decoder with skip connections. The version with skip connections has more in common with the U-Net architecture originally developed for image segmentation, where each encoder layer is connected directly to its corresponding decoder layer so that low-level detail can bypass the bottleneck.
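The following is a minimal sketch of such a skip-connected generator, assuming PyTorch; it is a toy three-level network rather than the deeper generator described in the paper, but it shows the key idea: each encoder feature map is concatenated with the matching decoder feature map before upsampling continues.

```python
import torch
import torch.nn as nn

def down(in_ch, out_ch):
    # Stride-2 convolution halves the spatial resolution (encoder step).
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 4, stride=2, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.LeakyReLU(0.2),
    )

def up(in_ch, out_ch):
    # Stride-2 transposed convolution doubles the spatial resolution (decoder step).
    return nn.Sequential(
        nn.ConvTranspose2d(in_ch, out_ch, 4, stride=2, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(),
    )

class UNetGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.d1, self.d2, self.d3 = down(3, 64), down(64, 128), down(128, 256)
        self.u1 = up(256, 128)
        # Decoder inputs are doubled because encoder features are concatenated in.
        self.u2 = up(128 + 128, 64)
        self.u3 = nn.Sequential(
            nn.ConvTranspose2d(64 + 64, 3, 4, stride=2, padding=1),
            nn.Tanh(),  # output in [-1, 1], matching the input normalization
        )

    def forward(self, x):
        e1 = self.d1(x)                              # 64  x H/2 x W/2
        e2 = self.d2(e1)                             # 128 x H/4 x W/4
        e3 = self.d3(e2)                             # 256 x H/8 x W/8 (bottleneck)
        d1 = self.u1(e3)                             # 128 x H/4 x W/4
        d2 = self.u2(torch.cat([d1, e2], dim=1))     # 64  x H/2 x W/2
        return self.u3(torch.cat([d2, e1], dim=1))   # 3   x H   x W

# Example: a 256x256 RGB input maps to a 256x256 RGB output.
fake = UNetGenerator()(torch.randn(1, 3, 256, 256))
```

Dropping the `torch.cat` calls (and the widened decoder channel counts) turns this into the plain encoder-decoder variant, in which all information has to pass through the bottleneck.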