Use Cases of Sentence-BERT Model
Learn how to use the pre-trained Sentence-BERT models for different tasks.
Let's begin by using the pre-trained Sentence-BERT models.
Computing sentence representation
Let's see how to compute a sentence representation using a pre-trained Sentence-BERT model.
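Note: If the sentence_transformers library is not already available in your environment, it can typically be installed with pip:
pip install -U sentence-transformers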
First, let's import the SentenceTransformer module from the sentence_transformers library:
from sentence_transformers import SentenceTransformer
Next, download and load a pre-trained Sentence-BERT model:
model = SentenceTransformer('bert-base-nli-mean-tokens')
Define the sentence for which we need to compute the sentence representation:
sentence = 'paris is a beautiful city'
Compute the sentence representation using our pre-trained Sentence-BERT model with the encode() function:
sentence_representation = model.encode(sentence)
Now, let's check the shape of our representation:
print(sentence_representation.shape)
The preceding code will print the following:
(768,)
As we can see, the size of our sentence representation is 768. In this way, we can use a pre-trained Sentence-BERT model to obtain a fixed-length representation for any sentence.
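The encode() function also accepts a list of sentences and returns one vector per sentence. As a minimal sketch (the second sentence and the NumPy-based cosine similarity are our own illustration, not part of the lesson), we can compare two sentence representations like this:
from sentence_transformers import SentenceTransformer
import numpy as np

model = SentenceTransformer('bert-base-nli-mean-tokens')

# Encoding a list of sentences returns one 768-dimensional vector per sentence
sentences = ['paris is a beautiful city', 'paris is a lovely city']
embeddings = model.encode(sentences)
print(embeddings.shape)   # (2, 768)

# Cosine similarity between the two sentence representations
similarity = np.dot(embeddings[0], embeddings[1]) / (
    np.linalg.norm(embeddings[0]) * np.linalg.norm(embeddings[1])
)
print(similarity)
A similarity close to 1 indicates that the two sentences have very similar meanings; this is the basis for applications such as sentence-pair comparison and semantic search.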