Course Overview and Prerequisites

Get a brief overview and learn the prerequisites for this course.

Overview

Training machine learning models often requires large amounts of high-quality labeled data to generalize well to unseen test data. However, such datasets are difficult to obtain because annotation is costly. Furthermore, for some tasks, even gathering the raw data can be difficult (e.g., medical imaging).

This limits the application of supervised learning in practical scenarios where labeled data is scarce or absent. This course covers self-supervised learning, a popular AI paradigm in which a model learns from a large pool of unlabeled data, producing representations that transfer to downstream tasks.
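
To make this concrete, below is a minimal, self-contained PyTorch sketch (not taken from the course) of the self-supervised idea: a pretext task, here predicting image rotations, turns unlabeled images into a supervised problem, and the pretrained encoder is then reused for a downstream task. The tiny CNN, the rotation pretext task, and the random tensors standing in for images are illustrative assumptions, not the course's actual objectives.

```python
# Minimal sketch of self-supervised pretraining followed by downstream reuse.
import torch
import torch.nn as nn

encoder = nn.Sequential(                  # small illustrative CNN backbone
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)
pretext_head = nn.Linear(16, 4)           # predict one of 4 rotations: 0/90/180/270 degrees

unlabeled = torch.randn(8, 3, 32, 32)     # stand-in for a batch of unlabeled images
k = torch.randint(0, 4, (8,))             # pseudo-labels come "for free" from the data
rotated = torch.stack([torch.rot90(x, int(r), dims=(1, 2))
                       for x, r in zip(unlabeled, k)])

logits = pretext_head(encoder(rotated))
loss = nn.functional.cross_entropy(logits, k)   # pretext (pretraining) loss
loss.backward()

# Downstream: keep the pretrained encoder, swap in a task-specific head.
downstream_head = nn.Linear(16, 10)             # e.g., a 10-class classifier
features = encoder(unlabeled).detach()          # or fine-tune the encoder end to end
predictions = downstream_head(features)
```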

Intended audience

If you work in industry or academia and want to advance your machine learning knowledge beyond supervised learning, this course is for you.

After taking this course, you will:

  • Understand how self-supervised learning works and how self-supervised tasks are created.

  • Be able to design your own self-supervised learning objectives.

  • Think beyond supervised and unsupervised learning while designing your training pipelines.

  • Use and tweak existing self-supervised learning objectives to learn from unlabeled data.

  • Know how to transfer your self-supervised network's representations to a downstream task and evaluate them.

Prerequisites

Since the course is built on PyTorch and requires a basic understanding of deep learning and neural networks, we expect readers to have the following:

  • Basic familiarity with the Python language and the PyTorch framework.

  • Basic knowledge of neural networks, CNNs, image processing, loss functions, training, and evaluation of models.

  • Basic knowledge of advanced deep neural network architectures, such as GANs, autoencoders, and transformers.
