CycleGAN: Image-to-Image Translation from Unpaired Collections

Understand CycleGAN and practice how to train it for image-to-image translation.

You may have noticed that when training pix2pix, we need to choose a fixed direction of translation (A to B, or B to A). Does this mean that if we want to freely translate from image set A to image set B and vice versa, we need to train two models separately?
Not with CycleGAN, we say!

CycleGAN (Zhu, Jun-Yan, Taesung Park, Phillip Isola, and Alexei A. Efros. "Unpaired image-to-image translation using cycle-consistent adversarial networks." In Proceedings of the IEEE International Conference on Computer Vision, pp. 2223-2232. 2017.) is a bidirectional generative model trained on unpaired image collections. The core idea of CycleGAN is built on the assumption of cycle consistency: if we have two generative models, G and F, that translate between two sets of images, X and Y, such that Y = G(X) and X = F(Y), then we can naturally expect F(G(X)) to be very similar to X, and G(F(Y)) to be very similar to Y. This means that we can train two sets of generative models at the same time that can freely translate between ...
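The cycle-consistency idea can be sketched numerically. Below is a toy illustration (not a real CycleGAN): G and F are hypothetical stand-ins for the two generators, chosen here so that F exactly inverts G, and we compute the L1-style cycle-consistency loss F(G(x)) vs. x plus G(F(y)) vs. y. In actual training, G and F are neural networks and this loss is minimized alongside the adversarial losses.

```python
# Toy illustration of cycle consistency (hypothetical stand-in generators,
# not trained networks).

def G(x):
    # Pretend generator X -> Y (here: simply scale values by 2).
    return [2.0 * v for v in x]

def F(y):
    # Pretend generator Y -> X (here: scale values by 0.5, inverting G).
    return [0.5 * v for v in y]

def l1(a, b):
    # Mean absolute difference, the distance used in the cycle-consistency loss.
    return sum(abs(u - v) for u, v in zip(a, b)) / len(a)

x = [0.1, 0.4, 0.9]  # a tiny stand-in "image" from domain X
y = [0.2, 0.6, 1.0]  # a tiny stand-in "image" from domain Y

# L_cyc = ||F(G(x)) - x||_1 + ||G(F(y)) - y||_1
loss = l1(F(G(x)), x) + l1(G(F(y)), y)
print(loss)  # 0.0, since F inverts G exactly in this toy setup
```

With real generators, this loss is rarely zero; it acts as a regularizer that pushes G and F toward being inverses of each other, which is what makes training on unpaired collections possible.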