Introduction: Fine-Tuning BERT Models
Explore the novel use of the transformer's encoder blocks in the BERT architecture.
Think of the original transformer as a model built with LEGO® bricks. The construction set contains bricks such as encoders, decoders, embedding layers, positional encoding methods, multi-head attention layers, masked multi-head attention layers, post-layer normalization, feed-forward sublayers, and linear output layers.
The bricks come in various sizes and forms. We can spend hours building all sorts of models using the same building kit! Some constructions will require only a few of the bricks, while others will add a new piece, just as we might obtain additional bricks for a model built with LEGO® components.
What is BERT?
Bidirectional Encoder Representations from Transformers (BERT) added a new piece to the transformer building kit: a bidirectional multi-head attention sublayer. When we humans have trouble understanding a sentence, we do not just look at the words that came before the difficult one; we also look at the words that come after it. BERT, like us, looks at all the words of a sentence at the same time.
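The following is a minimal sketch, not part of the chapter's own code, that makes this bidirectional behavior visible. It assumes the publicly available bert-base-uncased checkpoint and the Hugging Face fill-mask pipeline: the token predicted for [MASK] is driven by the words both before and after it.

```python
# A toy illustration (not from the chapter): because BERT reads the whole
# sentence at once, the prediction for [MASK] depends on words on both sides.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

for sentence in [
    "The fisherman sat on the [MASK] of the river.",       # right-hand context: "river"
    "She deposited the check at the [MASK] this morning.",  # left-hand context: "deposited the check"
]:
    print(sentence)
    for prediction in fill_mask(sentence, top_k=3):
        print(f"  {prediction['token_str']:>10}  score={prediction['score']:.3f}")
```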
Chapter overview
This chapter will first explore the architecture of BERT, which uses the transformer's encoder blocks in a novel way and does not use the decoder stack at all.
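To make the encoder-only design concrete, here is a short sketch (assuming the bert-base-uncased checkpoint and the Hugging Face BertConfig class) that prints the dimensions of the pretrained encoder stack:

```python
# A short sketch: inspect a pretrained BERT configuration to see that it is an
# encoder-only stack (assumes the bert-base-uncased checkpoint from Hugging Face).
from transformers import BertConfig

config = BertConfig.from_pretrained("bert-base-uncased")

print("encoder layers :", config.num_hidden_layers)    # 12 stacked encoder blocks
print("attention heads:", config.num_attention_heads)  # 12 heads per attention sublayer
print("hidden size    :", config.hidden_size)          # 768-dimensional hidden states
print("decoder?       :", config.is_decoder)           # False: BERT has no decoder stack
```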
Then we will fine-tune a pretrained BERT model that was trained by a third party and uploaded to Hugging Face. Transformers can be pretrained once and then fine-tuned on several downstream NLP tasks; a pretrained BERT is a typical example. We will go through this fascinating experience of downstream transformer usage using Hugging Face modules.
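As a preview of that workflow, the sketch below fine-tunes a pretrained BERT for binary sequence classification with Hugging Face's Trainer. The tiny in-memory dataset and the hyperparameters are placeholders; the chapter's actual task and settings may differ.

```python
# A condensed fine-tuning sketch using Hugging Face modules. The toy dataset and
# hyperparameters below are hypothetical stand-ins for a real downstream task.
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Tiny in-memory corpus standing in for a real labeled dataset.
raw = Dataset.from_dict({
    "text": ["the movie was great", "the plot made no sense",
             "a wonderful performance", "a complete waste of time"],
    "label": [1, 0, 1, 0],
})

checkpoint = "bert-base-uncased"  # a pretrained BERT hosted on the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

def tokenize(batch):
    return tokenizer(batch["text"], padding="max_length",
                     truncation=True, max_length=32)

train_ds = raw.map(tokenize, batched=True)

# Fine-tuning updates the pretrained encoder weights on the downstream task.
args = TrainingArguments(output_dir="bert-finetuned",
                         num_train_epochs=1,
                         per_device_train_batch_size=2,
                         logging_steps=1)
Trainer(model=model, args=args, train_dataset=train_ds).train()
```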
This section covers the following topics:

- The architecture of BERT and its encoder-only design
- Fine-tuning a pretrained BERT model with Hugging Face modules