What Is a Transformer?

Get introduced to transformer-based machine learning models and their specific applications in natural language processing (NLP).

Transformer overview

The transformer is a deep learning model architecture introduced in the paper “Attention Is All You Need” (Vaswani, Ashish, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. “Attention is all you need.” Advances in Neural Information Processing Systems 30, 2017). It revolutionized NLP tasks by replacing traditional recurrent neural networks (RNNs) with a self-attention mechanism, enabling more efficient and parallelizable processing of sequences — in our case, word or character sequences. The transformer architecture has been widely adopted and has achieved state-of-the-art results not only for spell checking but across the field of machine learning. In fact, some of the most well-known architectures are built from the transformer's components: Bidirectional Encoder Representations from Transformers (BERT) uses the transformer's encoder stack, while Generative Pre-trained Transformers (GPT) use its decoder stack. Here is an explanation of the transformer as well as some of its ML applications:

Self-attention

Attention is like a communication layer placed on top of the tokens in a text. It allows the model to learn the contextual connections between words in a sentence and to weigh the importance of different words within a sequence without using recurrence or convolution. Essentially, it encodes global information into our model so that it can be used in downstream predictions.
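The weighing described above can be sketched as scaled dot-product attention, the core operation inside the transformer. The following is a minimal NumPy sketch (not the paper's full multi-head implementation): each token's query vector is compared against every token's key vector, the resulting scores are normalized with a softmax into an attention matrix whose rows sum to 1, and that matrix mixes the value vectors. The function and variable names here are illustrative, not from a specific library.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    # Q, K, V: (seq_len, d_k) matrices of query, key, and value vectors.
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # pairwise token-to-token affinities
    weights = softmax(scores, axis=-1)   # attention matrix: each row sums to 1
    return weights @ V, weights          # weighted mix of values, plus the weights

# Toy example: 3 tokens with 4-dimensional embeddings.
# In self-attention, Q, K, and V all come from the same token embeddings.
rng = np.random.default_rng(0)
x = rng.normal(size=(3, 4))
out, attn = scaled_dot_product_attention(x, x, x)
```

Because every token attends to every other token in one matrix multiplication, the whole sequence is processed in parallel — this is what removes the step-by-step dependency of RNNs.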

[Figure: Attention matrix]

This works as follows: given a text t, we convert it from raw ...