Defining the NMT Model
Learn about the encoder and the decoder for the NMT model.
In this lesson, we’ll define the model from end to end. We’ll implement an encoder-decoder-based NMT model, equipped with additional techniques to boost performance. Let’s start off by converting our string tokens to IDs.
Converting tokens to IDs
Before we jump to the model, we have one more text processing operation remaining: converting the processed text tokens into numerical IDs. We’ll use a tf.keras.layers.Layer to do this. Specifically, we’ll use the StringLookup layer to create a layer in our model that converts each token into a numerical ID. As the first step, let’s load the vocabulary files provided with the data. But before doing so, we’ll define the variable n_vocab to denote the size of the vocabulary for each language:
n_vocab = 25000 + 1
Each vocabulary originally contains 50,000 tokens. However, we’ll keep only half of them to reduce the memory requirement. Note that we allow one extra token for the special token <unk>, which denotes out-of-vocabulary (OOV) words. With a 50,000-token vocabulary, it’s quite easy to run out of memory due to the size of the final prediction layer we’ll build. While cutting back the size of the vocabulary, we have to make sure that we preserve the 25,000 most common words. Fortunately, each vocabulary file is organized so that words are ordered by their frequency of occurrence (high to low). Therefore, we just need to read the first 25,001 lines of text from the file:
import os

en_vocabulary = []
with open(os.path.join('data', 'vocab.50K.en'), 'r', encoding='utf-8') as en_file:
    for ri, row in enumerate(en_file):
        if ri >= n_vocab:
            break
        en_vocabulary.append(row.strip())
Then we do the same for the German vocabulary:
de_vocabulary = []
with open(os.path.join('data', 'vocab.50K.de'), 'r', encoding='utf-8') as de_file:
    for ri, row in enumerate(de_file):
        if ri >= n_vocab:
            break
        de_vocabulary.append(row.strip())
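As a quick sanity check (a hypothetical snippet, not part of the original lesson), we can verify that each list holds exactly n_vocab entries:

# Hypothetical sanity check: each vocabulary should hold n_vocab entries
print(len(en_vocabulary), len(de_vocabulary))  # expected: 25001 25001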
Each of the vocabularies contains the special OOV token <unk> as its first line. We’ll pop that out of the en_vocabulary and de_vocabulary lists because we need it for the next step:
en_unk_token = en_vocabulary.pop(0)
de_unk_token = de_vocabulary.pop(0)
Here’s how we can define our English token-to-ID lookup layer.
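The lesson’s code for this step isn’t shown above, but here’s a minimal sketch of what it might look like. The variable name en_vocab_layer and the reliance on StringLookup’s default settings are assumptions, not the lesson’s exact code:

import tensorflow as tf

# A sketch (assumed names/settings): map English tokens to integer IDs.
# With the defaults, all OOV tokens share ID 0 and the 25,000 vocabulary
# words occupy IDs 1..25000, giving n_vocab = 25,001 IDs in total.
en_vocab_layer = tf.keras.layers.StringLookup(
    vocabulary=en_vocabulary,  # the 25,000 most frequent English tokens
    oov_token=en_unk_token,    # '<unk>', popped from the list earlier
    num_oov_indices=1,         # a single ID shared by all OOV tokens
)

# Example usage: any token outside the vocabulary maps to the OOV ID (0)
ids = en_vocab_layer(tf.constant([['the', 'supercalifragilistic']]))

A matching layer for German would use de_vocabulary and de_unk_token in the same way.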