The PyTorch Image Models (timm) framework provides an option to use Mixup and Cutmix augmentations. We can use these techniques to enhance the performance of our model.
Mixup
Mixup is a domain-agnostic augmentation technique. It randomly generates weighted combinations of image pairs from the training data: it takes two images and their corresponding ground-truth labels and blends them into a new image with a matching soft label.
The Mixup operation looks like this (see the sketch after this list):

x̃ = λx_i + (1 − λ)x_j
ỹ = λy_i + (1 − λ)y_j

- x_i: This is the first raw input vector of a random image from the training data.
- x_j: This is the second raw input vector of a random sample from the training data.
- y_i: This is the one-hot label encoding of the first random sample from the training data.
- y_j: This is the one-hot label encoding of the second random sample from the training data.
- λ: This is a lambda that represents a random value drawn from the Beta distribution. The value is between 0 and 1.
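To make the formula concrete, here is a minimal sketch of the blending step in plain PyTorch. The function name mixup_pair and the default alpha value are our own, for illustration only:

```python
import torch

def mixup_pair(x_i, x_j, y_i, y_j, alpha=1.0):
    # Draw lambda from a Beta(alpha, alpha) distribution; it lies in [0, 1].
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    # Blend the two inputs and their one-hot labels with the same lambda.
    x_mixed = lam * x_i + (1 - lam) * x_j
    y_mixed = lam * y_i + (1 - lam) * y_j
    return x_mixed, y_mixed
```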
The timm.data.Mixup class provides all the functionality for Mixup augmentations.
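As a sketch of how this class is typically configured (the argument values below are illustrative, and num_classes must match your dataset):

```python
from timm.data import Mixup

# Configure Mixup; a cutmix_alpha > 0 also enables Cutmix, switched per batch.
mixup_fn = Mixup(
    mixup_alpha=0.2,      # Beta distribution parameter for Mixup
    cutmix_alpha=1.0,     # Beta distribution parameter for Cutmix
    prob=1.0,             # probability of applying either augmentation
    switch_prob=0.5,      # probability of using Cutmix when both are enabled
    label_smoothing=0.1,
    num_classes=10,       # illustrative; must match your dataset
)
```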
First, create our dataset with the ImageDataset class. Then, call the create_loader function to build the data loader.
Let’s look at the following code snippet as a reference:
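A minimal sketch putting these pieces together; the dataset path, input size, batch size, and Mixup settings are assumptions for illustration:

```python
from timm.data import ImageDataset, Mixup, create_loader

# Build the dataset from an image folder (path is illustrative).
dataset = ImageDataset('./data/train')

# Create a training loader; use_prefetcher=False keeps batches on the CPU
# so that Mixup can be applied manually below.
loader = create_loader(
    dataset,
    input_size=(3, 224, 224),
    batch_size=32,        # keep the batch size even for batch-mode Mixup
    is_training=True,
    use_prefetcher=False,
)

# Mixup/Cutmix collator for the training batches (values are illustrative).
mixup_fn = Mixup(mixup_alpha=0.2, cutmix_alpha=1.0, num_classes=10)

for images, labels in loader:
    # Returns mixed images and soft (one-hot-blended) targets.
    images, targets = mixup_fn(images, labels)
    # ... forward pass, soft-target loss, backward pass ...
    break  # single batch shown for demonstration
```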