Understanding Word Vectors
Let's learn about word vectors and why they are so important for NLP.
The invention of word vectors (popularized by the word2vec algorithm) has been one of the most thrilling advancements in the NLP world. Anyone practicing NLP has almost certainly come across word vectors at some point. This chapter will help us understand the idea that led to the invention of word2vec, what word vectors look like, and how to use them in NLP applications.
The statistical world works with numbers, and all statistical methods, including statistical NLP algorithms, work with vectors. As a result, when working with statistical methods, we need to represent every real-world quantity as a vector, including text. In this section, we will learn about the different ways we can represent text as vectors and discover how word vectors provide a semantic representation for words.
We will start our exploration of text vectorization with the simplest scheme possible: one-hot encoding.
One-hot encoding
One-hot encoding is a simple and straightforward way to assign vectors to words: assign an index value to each word in the vocabulary, then encode this value into a sparse vector. Let's look at an example. Here, we will consider the vocabulary of a pizza-ordering application; we assign an index to each word in the order it appears in the vocabulary:
| Index | Word |
| --- | --- |
| 1 | a |
| 2 | e-mail |
| 3 | I |
| 4 | cheese |
| 5 | order |
| 6 | phone |
| 7 | pizza |
| 8 | salami |
| 9 | topping |
| 10 | want |
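As a rough sketch of how this mapping might be built in Python (the vocabulary list and variable names here are illustrative, not part of any particular library), we can simply enumerate the vocabulary:

```python
# Hypothetical vocabulary of a pizza-ordering application.
vocabulary = ["a", "e-mail", "I", "cheese", "order",
              "phone", "pizza", "salami", "topping", "want"]

# Map each word to its 1-based index, matching the table above.
word_to_index = {word: i for i, word in enumerate(vocabulary, start=1)}

print(word_to_index["pizza"])  # 7
```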
Now, the vector of a vocabulary word will be 0 in every position except the one given by the word's index value, which holds a 1:
| Word | Vector |
| --- | --- |
| a | 1 0 0 0 0 0 0 0 0 0 |
| e-mail | 0 1 0 0 0 0 0 0 0 0 |
| I | 0 0 1 0 0 0 0 0 0 0 |
| cheese | 0 0 0 1 0 0 0 0 0 0 |
| order | 0 0 0 0 1 0 0 0 0 0 |
| phone | 0 0 0 0 0 1 0 0 0 0 |
| pizza | 0 0 0 0 0 0 1 0 0 0 |
| salami | 0 0 0 0 0 0 0 1 0 0 |
| topping | 0 0 0 0 0 0 0 0 1 0 |
| want | 0 0 0 0 0 0 0 0 0 1 |
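Continuing the sketch, one simple way to produce such a vector is to allocate a list of zeros and set a single 1 at the word's position. The `one_hot` helper below is hypothetical, written only for this example:

```python
def one_hot(word, word_to_index):
    """Return the one-hot vector of a word as a list of 0s and a single 1."""
    vector = [0] * len(word_to_index)
    vector[word_to_index[word] - 1] = 1  # 1-based index -> 0-based position
    return vector

print(one_hot("pizza", word_to_index))
# [0, 0, 0, 0, 0, 0, 1, 0, 0, 0]
```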
Now, we can represent a sentence as a matrix, where each row corresponds to one word. For example, the sentence "I want a pizza" can be represented by the following matrix:
| Word | Vector |
| --- | --- |
| I | 0 0 1 0 0 0 0 0 0 0 |
| want | 0 0 0 0 0 0 0 0 0 1 |
| a | 1 0 0 0 0 0 0 0 0 0 |
| pizza | 0 0 0 0 0 0 1 0 0 0 |
As we can see from the preceding vocabulary and indices, the length of each vector is equal to the number of words in the vocabulary, and each dimension is dedicated to exactly one word. When we apply one-hot encoding to our text, each word is replaced by its vector, and the sentence is transformed into an (N, V) matrix, where N is the number of words in the sentence and V is the size of the vocabulary.
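Putting the sketch together, stacking the one-hot vectors of the words in "I want a pizza" yields the matrix shown above, and we can verify its (N, V) shape:

```python
sentence = ["I", "want", "a", "pizza"]
matrix = [one_hot(word, word_to_index) for word in sentence]

# N = 4 words in the sentence, V = 10 words in the vocabulary.
print(len(matrix), len(matrix[0]))  # 4 10
```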
This ...