...


Creating Our Vocabulary for Training

Create a customized vocabulary for training SeqGAN using an English jokes dataset.

Reading code that someone else has published on GitHub is the easy part. What matters more is applying the models we know to new applications and creating our own samples. Here, we will walk through the basic steps of creating a vocabulary from a large collection of text and using it to train our NLP models. In an NLP model, a vocabulary is typically a table that maps each word or symbol to a unique token (usually an int value) so that any sentence can be represented as a vector of ints.
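To make this concrete, here is a minimal sketch of such a word-to-token table. The function names (build_vocab) and the special tokens (<pad>, <unk>) are illustrative conventions, not part of the SeqGAN code we will study:

```python
# A minimal sketch of a word-level vocabulary (illustrative only).

def build_vocab(sentences):
    """Map every unique word to a unique int token."""
    vocab = {"<pad>": 0, "<unk>": 1}  # common special tokens (an assumption)
    for sentence in sentences:
        for word in sentence.lower().split():
            if word not in vocab:
                vocab[word] = len(vocab)
    return vocab

sentences = ["Why did the chicken cross the road", "To get to the other side"]
vocab = build_vocab(sentences)

# Encode a sentence as a vector of ints, falling back to <unk> for
# out-of-vocabulary words.
encoded = [vocab.get(w, vocab["<unk>"]) for w in sentences[0].lower().split()]
print(encoded)  # [2, 3, 4, 5, 6, 4, 7]
```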


⚠️ The dataset is intended only for non-commercial research and educational use.

First, let’s find some data to play with. To get started, here is a list of NLP datasets (https://github.com/niderhoff/nlp-datasets) available on GitHub. From this list, we will pick an English joke dataset (https://github.com/taivop/joke-dataset) that contains more than 200,000 jokes parsed from Reddit (https://www.reddit.com/r/jokes), Stupid Stuff (http://stupidstuff.org/), and wocka.com. The joke text is spread across three files (reddit_jokes.json, stupidstuff.json, and wocka.json). Now, let’s create our vocabulary. First, create a folder named data in the project code folder and copy the aforementioned files into it.
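Once the files are in place, a quick sanity check like the one below can confirm they load correctly. This sketch assumes each file is a JSON array of joke objects and that each object has a "body" field, as described in the joke-dataset repository:

```python
# Quick sanity check: load each JSON file from ./data and report its size.
import json
import os

for filename in ("reddit_jokes.json", "stupidstuff.json", "wocka.json"):
    path = os.path.join("data", filename)
    with open(path, "r", encoding="utf-8") as f:
        jokes = json.load(f)
    print(f"{filename}: {len(jokes)} jokes, e.g. {jokes[0]['body'][:50]!r}")
```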

Putting data in CSV format

Now, let’s create a small program to parse the JSON files and convert them to CSV format. Let’s call it parse_ ...
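Before looking at the actual script, here is an illustrative sketch of what such a conversion step might look like. The output filename and the single-column layout are assumptions for illustration, not the program’s actual design:

```python
# An illustrative sketch of the JSON-to-CSV conversion step.
import csv
import json
import os

with open(os.path.join("data", "jokes.csv"), "w", newline="",
          encoding="utf-8") as out:
    writer = csv.writer(out)
    writer.writerow(["joke"])
    for filename in ("reddit_jokes.json", "stupidstuff.json", "wocka.json"):
        with open(os.path.join("data", filename), "r", encoding="utf-8") as f:
            for joke in json.load(f):
                # Reddit jokes keep the setup in "title" and the punchline in
                # "body"; sources without a "title" field fall back to "".
                text = (joke.get("title", "") + " " + joke["body"]).strip()
                writer.writerow([text])
```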