Experiments

Analyze the experiments that reveal numerical disparities in generated samples, using the Jensen-Shannon divergence (JSD) and the Kolmogorov-Smirnov (KS) two-sample test for assessment.


The experiments described in this section focus on two points. The first shows that fake samples have properties that are hard to notice through visual inspection and that are tightly related to the differentiability requirements of GAN training. The second shows that there are numerical differences between statistical moments computed on features extracted from real and fake samples, and that these differences can be used to identify fake data.
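As a quick illustration of the assessment tools used throughout this section, here is a minimal sketch comparing two feature distributions with the KS two-sample test and the JSD. The arrays `real_feats` and `fake_feats` are hypothetical stand-ins for per-sample feature values; in the actual experiments these would come from a feature extractor applied to real and generated images:

```python
import numpy as np
from scipy.stats import ks_2samp
from scipy.spatial.distance import jensenshannon

rng = np.random.default_rng(0)
real_feats = rng.normal(loc=0.13, scale=0.05, size=10_000)  # stand-in for real features
fake_feats = rng.normal(loc=0.14, scale=0.04, size=10_000)  # stand-in for fake features

# KS two-sample test: a small p-value means the two feature
# distributions are unlikely to come from the same source.
ks_stat, p_value = ks_2samp(real_feats, fake_feats)
print(f"KS statistic = {ks_stat:.4f}, p-value = {p_value:.2e}")

# JSD is defined on discrete distributions, so histogram both
# samples over a shared set of bins first.
bins = np.histogram_bin_edges(np.concatenate([real_feats, fake_feats]), bins=100)
p, _ = np.histogram(real_feats, bins=bins)
q, _ = np.histogram(fake_feats, bins=bins)
jsd = jensenshannon(p, q, base=2) ** 2  # SciPy returns the JS distance (sqrt of JSD)
print(f"Jensen-Shannon divergence = {jsd:.4f}")
```

Note that `scipy.spatial.distance.jensenshannon` returns the Jensen-Shannon distance, so it is squared here to recover the divergence.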

MNIST

The experiment focuses on showing numerical properties of fake MNIST samples, and of features extracted from them, that are invisible to the naked eye and can be used to identify them as produced by a GAN. We start by comparing the distribution of features computed over the MNIST training set to other datasets, including the MNIST test set, samples generated with LSGAN and the Improved Wasserstein GAN (IWGAN), and adversarial samples computed using the Fast Gradient Sign Method (FGSM). The training data is scaled to [0, 1], and the random baseline is sampled from a Bernoulli distribution with probability equal to the mean value of pixel intensities in the MNIST training data, 0.13. Each GAN model is trained until the loss plateaus and the generated samples look similar to the real samples. The datasets we compare have 10,000 samples each.
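The random baseline described above can be built in a few lines. The following is a minimal sketch, assuming the training images are already loaded into an array `x_train` scaled to [0, 1]; the stand-in array below is a hypothetical placeholder for the real data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the MNIST training images scaled to [0, 1]; on the
# real dataset the mean pixel intensity is roughly 0.13.
x_train = rng.random((60_000, 28, 28))

# Each pixel of a baseline sample is an independent Bernoulli draw
# with probability equal to the mean pixel intensity of the training set.
p = x_train.mean()  # ~0.13 when computed on the real MNIST data
bernoulli_baseline = rng.binomial(n=1, p=p, size=(10_000, 28, 28)).astype(np.float32)
```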

The following figure shows samples drawn from the MNIST train set, test set, LSGAN, IWGAN, FGSM, and Bernoulli:
