The Rise of the Transformer: Attention Is All You Need
Learn about the rise of the transformer and the components present in its structure in this lesson.
In December 2017, Vaswani et al. (2017), working at Google Research and Google Brain, published their seminal paper, "Attention Is All You Need." We will refer to the model described in the paper as the "original transformer model" throughout this course.
Note: The lesson "Terminology of Transformer Models" in the appendix chapter of this course can help with the transition from classical deep learning terminology to transformer vocabulary. It also summarizes some of the changes to the classical AI definition of neural network models.
Structure of the transformer
In this lesson, we will look at the structure of the transformer model they built. In the following sections, we will explore what is inside each component of the model.
The original transformer model is built from two stacks of 6 layers each: a 6-layer encoder stack on the left and a 6-layer decoder stack on the right. Within each stack, the output of layer l is the input of layer l+1, until the final prediction is reached.
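The stacking idea can be sketched in a few lines of plain Python. The toy "layers" below are hypothetical placeholders (real transformer layers contain attention and feed-forward sublayers, covered in later sections); the point is only the data flow, where each layer's output becomes the next layer's input:

```python
NUM_LAYERS = 6  # the original transformer uses 6 encoder and 6 decoder layers

def make_toy_layer(layer_index):
    """A stand-in for one layer: here just a labeled toy transformation.

    In the real model this would be self-attention plus a feed-forward
    network; we use a trivial function only to illustrate the stacking.
    """
    def layer(x):
        return [v + layer_index for v in x]
    return layer

def run_stack(x, layers):
    # The output of layer l becomes the input of layer l+1.
    for layer in layers:
        x = layer(x)
    return x

layers = [make_toy_layer(i) for i in range(1, NUM_LAYERS + 1)]
result = run_stack([0.0], layers)  # passes through all 6 layers in order
```

Frameworks express the same pattern directly, e.g. PyTorch's `nn.TransformerEncoder(encoder_layer, num_layers=6)` chains six copies of an encoder layer in exactly this way.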