What Is a Transformer?

Get introduced to transformer-based machine learning models and their specific applications in natural language processing (NLP).

Transformer overview

The transformer is a deep learning model architecture introduced in the paper “Attention Is All You Need” (Vaswani, Ashish, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. “Attention is all you need.” Advances in Neural Information Processing Systems 30 (2017)). It revolutionized NLP tasks by replacing traditional recurrent neural networks (RNNs) with a self-attention mechanism, enabling more efficient and parallelizable processing of sequences (in our case, word or character sequences). The transformer architecture has been widely adopted and has achieved state-of-the-art results not only for spell checking but across the field of machine learning. In fact, some of the most well-known architectures, such as Bidirectional Encoder Representations from Transformers (BERT) and Generative Pre-trained Transformers (GPT), are built on the transformer's encoder and decoder stacks, respectively. Here is an explanation of the transformer along with some of its ML applications:
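
To make the parallel, attention-based processing concrete, here is a minimal sketch using PyTorch's nn.TransformerEncoder. The model sizes, vocabulary size, and random token IDs are illustrative assumptions only, and positional encodings are omitted for brevity; this is not the setup used by BERT or GPT.

```python
import torch
import torch.nn as nn

# Illustrative sizes only; production models such as BERT use d_model = 768 or larger.
vocab_size = 10_000   # hypothetical vocabulary size
d_model = 64          # embedding / hidden dimension
n_heads = 4           # attention heads per layer
n_layers = 2          # stacked encoder layers
seq_len = 16          # tokens per sequence
batch_size = 8

# Token embeddings map integer token IDs to d_model-dimensional vectors.
embedding = nn.Embedding(vocab_size, d_model)

# Each encoder layer applies multi-head self-attention followed by a
# position-wise feed-forward network; positional encodings are omitted here.
encoder_layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads, batch_first=True)
encoder = nn.TransformerEncoder(encoder_layer, num_layers=n_layers)

# A batch of random token IDs stands in for tokenized text.
token_ids = torch.randint(0, vocab_size, (batch_size, seq_len))

# All positions in every sequence are processed in parallel -- no recurrence.
hidden_states = encoder(embedding(token_ids))
print(hidden_states.shape)  # torch.Size([8, 16, 64]): one contextual vector per token
```

Unlike an RNN, nothing in this forward pass depends on stepping through the sequence one token at a time, which is what makes transformer training so parallelizable.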

Self-attention

Attention is like a communication layer that is put on top of tokens in a ...