Masked Language Modeling
Learn about using masked language modeling and whole word masking techniques to pre-train the BERT model.
BERT is an auto-encoding language model, meaning that it reads a sentence in both directions to make a prediction. In the masked language modeling (MLM) task, we randomly mask 15% of the words in a given input sentence and train the network to predict the masked words. To predict each masked word, the model uses context from both directions of the sentence.
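To make the masking step concrete, here is a minimal sketch of randomly masking roughly 15% of the tokens in a sentence. The helper `mask_tokens` and its rounding behavior are assumptions for illustration only; they do not reproduce BERT's full masking strategy.

```python
import random

def mask_tokens(tokens, mask_prob=0.15, mask_token="[MASK]"):
    """Randomly replace a fraction of tokens with the mask token.

    Returns the masked token list and the masked positions, which act as
    the prediction targets during training.
    """
    masked = list(tokens)
    # Choose roughly 15% of the positions at random (at least one).
    num_to_mask = max(1, round(len(tokens) * mask_prob))
    masked_positions = random.sample(range(len(tokens)), num_to_mask)
    for pos in masked_positions:
        masked[pos] = mask_token
    return masked, masked_positions

tokens = ["Paris", "is", "a", "beautiful", "city"]
masked_tokens, positions = mask_tokens(tokens)
print(masked_tokens)   # e.g. ['Paris', 'is', 'a', '[MASK]', 'city']
print(positions)       # e.g. [3]
```

During training, the model is asked to predict the original word at each masked position, using the unmasked words on both sides as context.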
Training the BERT model for the MLM task
Let's understand how masked language modeling works with an example. We'll take the same sentences we saw earlier: 'Paris is a beautiful city' and 'I love Paris'.
Tokenize the sentences
First, we tokenize the sentences and get the tokens, as shown here:
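As a rough sketch of this step, we can tokenize the two sentences with the Hugging Face `transformers` library. The choice of `BertTokenizer` and the `bert-base-uncased` vocabulary are assumptions for this example; note that this tokenizer lowercases the text and may split rare words into subwords.

```python
# A minimal tokenization sketch (assumes the Hugging Face transformers library).
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

sentence_a = "Paris is a beautiful city"
sentence_b = "I love Paris"

# Tokenize each sentence and combine the resulting tokens.
tokens = tokenizer.tokenize(sentence_a) + tokenizer.tokenize(sentence_b)
print(tokens)
# Expected output:
# ['paris', 'is', 'a', 'beautiful', 'city', 'i', 'love', 'paris']
```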