Named Entity Recognition

Learn how to fine-tune the pre-trained BERT model for named entity recognition (NER) tasks.

In NER, our goal is to classify named entities into predefined categories such as person, organization, and location. For instance, in the sentence 'Jeremy lives in Paris', 'Jeremy' should be categorized as a person and 'Paris' as a location.

Now, let's learn how to fine-tune the pre-trained BERT model to perform NER.

Preprocessing the dataset

First, we tokenize the sentence and add the [CLS] token at the beginning and the [SEP] token at the end. We then feed the tokens to the pre-trained BERT model and obtain the representation of every token.
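The preparation step above can be sketched in plain Python. This is only a toy illustration under simplified assumptions: real BERT uses a WordPiece subword tokenizer with a vocabulary of roughly 30,000 tokens, whereas the whitespace split and the tiny vocabulary here are made up for demonstration.

```python
# Toy sketch of BERT-style input preparation.
# NOTE: real BERT uses a WordPiece tokenizer; the whitespace split and
# the tiny hand-made vocabulary below are illustrative only.

def prepare_inputs(sentence, vocab):
    # Add the [CLS] token at the beginning and the [SEP] token at the end.
    tokens = ["[CLS]"] + sentence.split() + ["[SEP]"]
    # Map each token to its vocabulary ID; unknown tokens map to [UNK].
    input_ids = [vocab.get(tok, vocab["[UNK]"]) for tok in tokens]
    # Attention mask: 1 for every real token (no padding in this example).
    attention_mask = [1] * len(tokens)
    return tokens, input_ids, attention_mask

# Hypothetical tiny vocabulary covering the example sentence.
vocab = {"[CLS]": 101, "[SEP]": 102, "[UNK]": 100,
         "Jeremy": 1, "lives": 2, "in": 3, "Paris": 4}

tokens, input_ids, attention_mask = prepare_inputs("Jeremy lives in Paris", vocab)
print(tokens)     # ['[CLS]', 'Jeremy', 'lives', 'in', 'Paris', '[SEP]']
print(input_ids)  # [101, 1, 2, 3, 4, 102]
```

The resulting `input_ids` and `attention_mask` are what the pre-trained model consumes; it then returns one representation vector per token.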

Getting the results

We feed each token representation to a classifier (a feedforward network followed by a softmax function), which returns the category that the token's named entity belongs to. This is shown in the following figure:
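The classification step can be sketched as a single linear layer plus softmax applied to each token representation. The dimensions, weights, and label set below are invented for illustration; in practice the layer sits on top of BERT's 768-dimensional token outputs and its weights are learned during fine-tuning.

```python
import math

# Hypothetical label set for this example; real NER tag sets (e.g. CoNLL-2003)
# use BIO tags such as B-PER, I-PER, B-LOC, and O.
LABELS = ["PER", "LOC", "O"]

def softmax(logits):
    # Subtract the max for numerical stability before exponentiating.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def classify_token(representation, weights, biases):
    # One linear layer: logits[j] = sum_i rep[i] * W[j][i] + b[j].
    logits = [sum(r * w for r, w in zip(representation, row)) + b
              for row, b in zip(weights, biases)]
    probs = softmax(logits)
    # Pick the label with the highest probability.
    best = max(range(len(probs)), key=probs.__getitem__)
    return LABELS[best], probs

# Toy 4-dimensional "token representation" and hand-picked weights
# (in practice these come from BERT and from fine-tuning, respectively).
rep = [0.9, -0.2, 0.1, 0.4]
W = [[1.0, 0.0, 0.0, 0.5],   # PER row
     [-0.5, 0.0, 1.0, 0.0],  # LOC row
     [0.0, 0.2, 0.0, -1.0]]  # O row
b = [0.0, 0.0, 0.0]

label, probs = classify_token(rep, W, b)
```

During fine-tuning, the cross-entropy loss between these softmax probabilities and the gold labels is backpropagated through both the classifier and the BERT layers.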
