Evaluating Models with Metrics
Learn ways to evaluate transformer performance in various tasks.
Like humans building on prior experience, transformers can be adapted to perform downstream tasks by inheriting the properties of a pretrained model. The pretrained model supplies its architecture and the language representations encoded in its parameters.
A pretrained model trains on key tasks to acquire general knowledge of a language; a fine-tuned model then trains on specific downstream tasks. Not every transformer model uses the same pretraining tasks, and in principle any task can serve for either pretraining or fine-tuning.
Every NLP model needs to be evaluated with a standard method.
Without a universal measurement system built on shared metrics, it is impossible to compare one transformer model to another (or to any other NLP model) in a meaningful way.
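To make this concrete, here is a minimal sketch of comparing two models on the same labeled data with shared metrics. The gold labels and the two models' predictions are illustrative values invented for this example, not results from any real benchmark; accuracy and binary F1 are implemented from their standard definitions.

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions that match the gold labels."""
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true)

def f1_binary(y_true, y_pred, positive=1):
    """F1 for the positive class: harmonic mean of precision and recall."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return (2 * precision * recall / (precision + recall)
            if (precision + recall) else 0.0)

# Hypothetical gold labels and model outputs for a binary task.
gold = [1, 0, 1, 1, 0, 1]
model_a = [1, 0, 1, 0, 0, 1]
model_b = [1, 1, 1, 1, 0, 0]

print(f"Model A: acc={accuracy(gold, model_a):.2f}, F1={f1_binary(gold, model_a):.2f}")
print(f"Model B: acc={accuracy(gold, model_b):.2f}, F1={f1_binary(gold, model_b):.2f}")
```

Because both models are scored with the same metrics on the same data, the numbers are directly comparable, which is exactly what a shared evaluation standard provides.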