Summary: Exploring Sentence and Domain-Specific BERT
Let’s summarize what we have learned so far.
Key highlights
Summarized below are the main highlights of what we've learned in this chapter.
We started by understanding how Sentence-BERT works. We learned that Sentence-BERT is essentially a pre-trained BERT model fine-tuned for computing sentence representations, and that it applies mean or max pooling over the token representations to obtain a fixed-size sentence representation. For fine-tuning the pre-trained BERT model, Sentence-BERT uses Siamese and triplet network architectures, which make fine-tuning faster and help in obtaining accurate sentence embeddings.
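To make the pooling step concrete, here is a minimal sketch of mean pooling over BERT's token embeddings. It is illustrative only: it assumes the Hugging Face transformers library and the bert-base-uncased checkpoint, not the chapter's exact code.

```python
# Illustrative sketch: mean pooling over BERT token embeddings.
# Assumes the Hugging Face transformers library and bert-base-uncased.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

sentence = "Paris is a beautiful city"
inputs = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    # Token-level representations: [1, seq_len, hidden_size]
    token_embeddings = model(**inputs).last_hidden_state

# Mean pooling: average the token embeddings, ignoring padding positions.
mask = inputs["attention_mask"].unsqueeze(-1)              # [1, seq_len, 1]
sentence_embedding = (token_embeddings * mask).sum(1) / mask.sum(1)

print(sentence_embedding.shape)  # torch.Size([1, 768])
```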
We also learned how to use the sentence-transformers library to compute sentence representations and to compute the semantic similarity between a sentence pair.
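As a quick reference, here is a minimal sketch of both tasks with the sentence-transformers library. The model name all-MiniLM-L6-v2 is just an example checkpoint, not necessarily the one used in the chapter.

```python
# Illustrative sketch: sentence embeddings and pairwise similarity
# with the sentence-transformers library (example checkpoint only).
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

# Compute sentence representations.
sentences = ["It was a great day", "Today was awesome"]
embeddings = model.encode(sentences)

# Compute the semantic similarity between the sentence pair.
similarity = util.cos_sim(embeddings[0], embeddings[1])
print(similarity.item())
```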