About the course

Let's get an overview of the course and its prerequisites.

Before diving into the details, let's briefly go over what you can expect to learn in this course.

Course overview

This course is an introductory guide to Google’s BERT architecture.

It begins with a detailed explanation of the transformer architecture and how the encoder and decoder of the transformer work. We'll then become familiar with BERT, explore its architecture, and discover how the BERT model is pre-trained and how to fine-tune the pre-trained model for downstream tasks.

As we go along, we'll discover different BERT variants, from architectural variants to models based on knowledge distillation and domain-specific models. Additionally, we'll learn about M-BERT, XLM, and XLM-R, as well as an interesting variant of BERT called VideoBERT.

By the end of this course, we’ll be well-versed in using BERT and its variants for performing practical NLP tasks.

Intended audience

This course is for NLP professionals and data scientists who want to simplify NLP tasks and enable efficient language understanding using BERT.


Prerequisites

To get the most out of this course, it is recommended that you have proficiency in Python programming and a basic understanding of Natural Language Processing (NLP) concepts and deep learning principles.

What this course covers

  • A Primer on Transformers: This chapter explains the transformer model in detail. We will understand how the encoder and decoder of the transformer work by examining their components.

  • Understanding the BERT Model: This chapter helps us understand the BERT model. We will learn how the BERT model is pre-trained using the Masked Language Modeling (MLM) and Next Sentence Prediction (NSP) tasks. We will also learn about several interesting subword tokenization algorithms.

  • Getting Hands-On with BERT: This chapter explains how to use the pre-trained BERT model. We will learn how to extract contextual sentence and word embeddings using the pre-trained BERT model (a short illustrative sketch follows this list). We will also learn how to fine-tune the pre-trained BERT for downstream tasks such as question answering and text classification.

  • BERT Variants I—ALBERT, RoBERTa, ELECTRA, and SpanBERT: This chapter covers several variants of BERT. We will learn in detail how these variants differ from BERT and where they are useful.

  • BERT Variants II—Based on Knowledge Distillation: This chapter deals with BERT models based on knowledge distillation, such as DistilBERT and TinyBERT. We will also learn how to transfer knowledge from a pre-trained BERT model to a simple neural network.

  • Exploring BERTSUM for Text Summarization: This chapter explains how to fine-tune the pre-trained BERT model for a text summarization task. We will understand in detail how to fine-tune BERT for both extractive and abstractive summarization.

  • Applying BERT to Other Languages: This chapter deals with applying BERT to languages other than English. We will learn about the effectiveness of multilingual BERT in detail. We will also explore several cross-lingual models, such as XLM and XLM-R.

  • Exploring Sentence and Domain-Specific BERT: This chapter explains Sentence-BERT, which is used to obtain sentence representations. We will also learn how to use the pre-trained Sentence-BERT model. Along with this, we will explore domain-specific BERT models such as ClinicalBERT and BioBERT.

  • Working with VideoBERT, BART, and More: This chapter deals with an interesting type of BERT called VideoBERT. We will learn about a model called BART in detail, and we will also explore a popular library known as ktrain.
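
To give a taste of the hands-on work ahead, here is a minimal sketch of extracting contextual embeddings from a pre-trained BERT model, as described in the "Getting Hands-On with BERT" chapter. It assumes the Hugging Face transformers library and the bert-base-uncased checkpoint; the course lessons may use different tools or checkpoints.

import torch
from transformers import BertModel, BertTokenizer

# Load a pre-trained BERT checkpoint and its tokenizer
# (assumed checkpoint: bert-base-uncased).
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")
model.eval()

sentence = "BERT produces contextual embeddings."
inputs = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# One contextual vector per token: shape (1, sequence_length, 768) for bert-base.
word_embeddings = outputs.last_hidden_state

# The [CLS] token's vector is a common, simple choice of sentence representation.
sentence_embedding = word_embeddings[:, 0, :]

print(word_embeddings.shape)
print(sentence_embedding.shape)

Beyond extracting embeddings like this, the same pre-trained model can be fine-tuned end to end for downstream tasks such as text classification and question answering, which is what the hands-on chapter walks through.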