Transduction and the Inductive Inheritance of Transformers

Learn about what transduction is and the inductive inheritance of transformers.

Let’s start by understanding how humans and machines represent language.

Overview

The emergence of Automated Machine Learning (AutoML) APIs in automated cloud AI platforms has deeply changed the job description of every AI specialist. Google Vertex, for example, claims to reduce the development effort required to implement ML by 80%. This suggests that anybody can implement ML with ready-to-use systems. Does that mean an 80% reduction in the developer workforce? We don’t think so. Industry 4.0 AI specialists add value to a project by assembling AI components.

Industry 4.0 NLP AI specialists invest less in source code and more in knowledge to become the AI guru of a team.

Transformers possess the unique ability to apply their knowledge to tasks they did not explicitly learn. A BERT transformer, for example, acquires language through masked language modeling and next-sentence prediction. The BERT model can then be fine-tuned to perform downstream tasks that it did not learn from scratch.
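To make the pretraining idea concrete, here is a toy sketch of how masked language modeling constructs training examples: some tokens are hidden, and the model must predict the originals from context. This is an illustrative simplification, not BERT's actual tokenization or masking code; the function name, mask rate, and `[MASK]` handling here are assumptions for demonstration.

```python
import random

def mask_tokens(tokens, mask_prob=0.15, mask_token="[MASK]", seed=0):
    """Toy masked-language-modeling step: hide a fraction of tokens.

    Returns the masked sequence and a dict mapping each masked
    position to the original token the model would have to predict.
    """
    rng = random.Random(seed)  # fixed seed for a reproducible example
    masked, targets = [], {}
    for i, tok in enumerate(tokens):
        if rng.random() < mask_prob:
            masked.append(mask_token)
            targets[i] = tok  # the training objective: recover this token
        else:
            masked.append(tok)
    return masked, targets

sentence = "transformers acquire language by predicting masked tokens".split()
masked, targets = mask_tokens(sentence, mask_prob=0.3)
print(masked)   # the input the model sees
print(targets)  # the hidden tokens it must reconstruct
```

Because the model learns to fill in missing words from context alone, the representations it builds transfer to downstream tasks it was never directly trained on.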

In this section, we will do a thought experiment. We will use the graph of a transformer to represent how humans and machines make sense of information using language. Machines make sense of information differently from humans, yet reach highly effective results.

The diagram below, a thought experiment designed with transformer architecture layers and sublayers, shows the deceptive similarity between humans and machines. Let’s study the learning process of transformer models to understand downstream tasks:
