Running Downstream Tasks

There are many models and tasks. In this lesson, we will run a few of them. Once we understand the process for a few tasks, we will quickly be able to understand all of them. After all, the human baseline for all of these tasks is us!

A downstream task is a fine-tuned task that inherits its model and parameters from a pretrained transformer model.

A downstream task is therefore defined from the perspective of the pretrained model: a task is downstream if it was not used to fully pretrain the model. In this section, we will consider all of the tasks downstream since we did not pretrain the model on any of them.
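For instance, here is a minimal sketch, using the Hugging Face Transformers library, of how a downstream task inherits a pretrained model: the encoder weights are reused as-is, and only a small classification head is newly initialized for fine-tuning. The choice of `bert-base-uncased` here is just an illustrative checkpoint.

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Load a pretrained checkpoint: the encoder weights are inherited as-is,
# while a new two-label classification head is initialized for the
# downstream task (here: acceptable vs. unacceptable).
model_name = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

print(model.config.num_labels)  # 2: the head we will fine-tune downstream
```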

Models will evolve, as will datasets, benchmark methods, accuracy measurement methods, and leaderboard criteria. However, the structure of human thought reflected through the downstream tasks in this chapter will remain.

Let’s start with CoLA.

The Corpus of Linguistic Acceptability (CoLA)

The Corpus of Linguistic Acceptability (CoLA), a GLUE task, contains thousands of samples of English sentences annotated for grammatical acceptability.

The goal of Alex Warstadt et al. (2019) was to evaluate the linguistic competence of an NLP model by having it judge the grammatical acceptability of a sentence. The model is expected to classify each sentence accordingly.
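As a sketch of what such a classifier looks like in practice, the snippet below runs a text-classification pipeline with a publicly shared BERT checkpoint fine-tuned on CoLA (`textattack/bert-base-uncased-CoLA` is assumed here; any CoLA-fine-tuned model you have available will do):

```python
from transformers import pipeline

# Assumed checkpoint: a BERT model fine-tuned on CoLA and shared on the
# Hugging Face Hub. Swap in your own CoLA model if needed.
classifier = pipeline("text-classification", model="textattack/bert-base-uncased-CoLA")

print(classifier("The boys is eating sandwiches."))   # expected: not acceptable
print(classifier("The boys are eating sandwiches."))  # expected: acceptable
```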

Each sentence is labeled as grammatical or ungrammatical: it is labeled 0 if it is not grammatically acceptable and 1 if it is. For example:
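The following minimal sketch, assuming the Hugging Face `datasets` library is installed, loads the CoLA task from the GLUE benchmark and prints a few sentences with their 0/1 acceptability labels:

```python
from datasets import load_dataset

# Load the CoLA subset of GLUE from the Hugging Face Hub.
cola = load_dataset("glue", "cola")

# Print the label (0 = unacceptable, 1 = acceptable) and the sentence
# for the first few training samples.
for sample in cola["train"].select(range(3)):
    print(sample["label"], sample["sentence"])
```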
