Experiments
Analyze the experiments that reveal numerical disparities in generated samples, using the Jensen-Shannon divergence (JSD) and the Kolmogorov-Smirnov (KS) two-sample test for assessment.
The experiments described in this section focus on two points. The first shows that fake samples have properties that are hardly noticeable on visual inspection and that are tightly related to the differentiability requirements of GAN training. The second shows that there are numerical differences between statistical moments computed on features extracted from real and fake samples, and that these differences can be used to identify the fake samples.
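To make the second point concrete, the comparison between real and fake feature distributions can be quantified with the two tools named above: the KS two-sample test (which compares empirical CDFs) and the JSD (computed here on shared-bin histograms). The sketch below uses synthetic 1-D data as a stand-in for extracted features; the distributions and sample sizes are illustrative assumptions, not values from the experiment.

```python
import numpy as np
from scipy.spatial.distance import jensenshannon
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Stand-ins for features extracted from real and fake samples
# (illustrative synthetic data, not actual MNIST features).
real_feats = rng.normal(loc=0.0, scale=1.0, size=5000)
fake_feats = rng.normal(loc=0.3, scale=1.1, size=5000)

# KS two-sample test: compares the two empirical CDFs directly.
ks_stat, p_value = ks_2samp(real_feats, fake_feats)

# JSD: compare normalized histograms over a shared binning.
bins = np.linspace(-5.0, 5.0, 51)
p, _ = np.histogram(real_feats, bins=bins, density=True)
q, _ = np.histogram(fake_feats, bins=bins, density=True)
jsd = jensenshannon(p, q)  # square root of the JS divergence

print(f"KS statistic: {ks_stat:.4f}, p-value: {p_value:.2e}")
print(f"Jensen-Shannon distance: {jsd:.4f}")
```

A small KS statistic with a large p-value suggests the two feature sets are statistically indistinguishable; a shifted or rescaled fake distribution, as here, yields a tiny p-value.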
MNIST
The experiment focuses on showing numerical properties of fake MNIST samples, and of features extracted from them, that are invisible to the naked eye but can be used to identify the samples as GAN-generated. We start by comparing the distribution of features computed over the MNIST training set to those of other datasets: the MNIST test set, samples generated with LSGAN and the Improved Wasserstein GAN (IWGAN), and adversarial samples computed using the Fast Gradient Sign Method (FGSM). The training data is scaled to
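For reference, FGSM perturbs an input in the sign direction of the loss gradient with respect to that input. The sketch below applies it to a toy logistic model with random weights standing in for a trained MNIST classifier; the weights, input, and `eps` value are hypothetical, chosen only to illustrate the method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear classifier standing in for a trained MNIST model
# (hypothetical random weights; FGSM only needs input gradients).
w = rng.normal(size=784)
b = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, eps=0.1):
    """Fast Gradient Sign Method for a logistic model.

    The gradient of the binary cross-entropy loss with respect
    to the input x is (sigmoid(w.x + b) - y) * w; the input is
    perturbed by eps in the sign direction of that gradient.
    """
    grad = (sigmoid(w @ x + b) - y) * w
    x_adv = x + eps * np.sign(grad)
    return np.clip(x_adv, 0.0, 1.0)  # keep pixels in a valid range

x = rng.uniform(0.0, 1.0, size=784)  # stand-in for a flattened MNIST image
x_adv = fgsm(x, y=1.0, eps=0.1)

print("max perturbation:", np.abs(x_adv - x).max())  # bounded by eps
```

Because every pixel moves by at most `eps`, FGSM samples stay visually close to the originals, which is exactly why numerical rather than visual tests are needed to detect them.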
The following screenshot contains the samples drawn from the MNIST train set, test set, LSGAN, IWGAN, FGSM, and Bernoulli: