Understanding Word Vectors
Let's learn about word vectors and why they are so important for NLP.
The invention of word vectors (word2vec) has been one of the most exciting advances in the NLP world. Anyone practicing NLP has almost certainly come across word vectors at some point. This chapter will help us understand the idea behind word2vec, what word vectors look like, and how to use them in NLP applications.
The statistical world works with numbers, and all statistical methods, including statistical NLP algorithms, operate on vectors. As a result, when working with statistical methods, we need to represent every real-world quantity as a vector, including text. In this section, we will learn about the different ways to represent text as vectors and discover how word vectors provide a semantic representation for words.
We will start our exploration of text vectorization with the simplest possible scheme: one-hot encoding.
One-hot encoding
One-hot encoding is a straightforward way to assign vectors to words: give each word in the vocabulary an index value and then encode that value as a sparse vector. Let's look at an example. Consider the vocabulary of a pizza-ordering application; we can assign an index to each word in the order in which it appears in the vocabulary:
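To make this concrete, here is a minimal sketch in Python. The pizza-ordering vocabulary and the specific words in it are hypothetical; the point is simply that each word gets an index, and that index is expanded into a vector of zeros with a single 1 at the word's position:

```python
# A toy vocabulary for a hypothetical pizza-ordering application.
vocabulary = ["i", "want", "a", "large", "pizza", "with", "extra", "cheese"]

# Assign each word an index based on its position in the vocabulary.
word_to_index = {word: idx for idx, word in enumerate(vocabulary)}

def one_hot(word):
    """Return a sparse one-hot vector: all zeros except a 1 at the word's index."""
    vector = [0] * len(vocabulary)
    vector[word_to_index[word]] = 1
    return vector

print(one_hot("pizza"))
# [0, 0, 0, 0, 1, 0, 0, 0]
```

Note that every one-hot vector is as long as the vocabulary itself, so these vectors grow with the vocabulary and contain mostly zeros, which is why they are called sparse.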