Performing Question-Answering with the Fine-Tuned BERT
Learn how to perform question-answering tasks with a pre-trained BERT model that has been fine-tuned for question answering.
Importing the modules
First, let's import the necessary modules:
from transformers import BertForQuestionAnswering, BertTokenizer
Loading the model
Now, we download and load the model. We use the bert-large-uncased-whole-word-masking-finetuned-squad
model, which is fine-tuned on the Stanford Question Answering Dataset (SQuAD):
model = BertForQuestionAnswering.from_pretrained('bert-large-uncased-whole-word-masking-finetuned-squad')
...
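The steps above can be sketched end to end as follows. This is a minimal sketch of a typical SQuAD-style inference loop with this model: the question and paragraph text here are made-up examples, and the answer-extraction logic (taking the argmax of the start and end logits and decoding the span) is the standard approach, not necessarily the exact code elided above.

```python
import torch
from transformers import BertForQuestionAnswering, BertTokenizer

model_name = 'bert-large-uncased-whole-word-masking-finetuned-squad'
model = BertForQuestionAnswering.from_pretrained(model_name)
tokenizer = BertTokenizer.from_pretrained(model_name)

# Hypothetical example inputs; any question/context pair works.
question = "What is the immune system?"
paragraph = ("The immune system is a system of many biological structures "
             "and processes within an organism that protects against disease.")

# Encode the question and paragraph as a single sequence pair:
# [CLS] question [SEP] paragraph [SEP]
inputs = tokenizer(question, paragraph, return_tensors='pt')

# The model returns a start logit and an end logit for every token.
with torch.no_grad():
    outputs = model(**inputs)

# The answer span runs from the highest-scoring start token
# to the highest-scoring end token (inclusive).
start = torch.argmax(outputs.start_logits)
end = torch.argmax(outputs.end_logits) + 1
answer = tokenizer.decode(inputs['input_ids'][0][start:end])
print(answer)
```

Note that this greedy argmax extraction can occasionally pick an end position before the start position; production pipelines (such as the `question-answering` pipeline in transformers) search over valid (start, end) pairs instead.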