OpenAI's GPT-3 comes as a fully trained model; however, it can also be fine-tuned.

This lesson shows how to fine-tune GPT-3 to learn logic. Transformers need to master logic, inference, and entailment to understand language at a human level.

Fine-tuning is the key to making GPT-3 your own application: customizing it to fit the needs of your project. It is a ticket to AI freedom, letting you rid your application of bias, teach it the things you want it to know, and leave your footprint on AI.

In this section, GPT-3 will be trained on the works of Immanuel Kant using kantgpt.csv. We used a similar file to train the BERT-type model earlier in the course.
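Before opening the notebook, it helps to see what the prepared training data looks like. The sketch below is one possible way to turn kantgpt.csv into the JSONL prompt/completion pairs that OpenAI's fine-tuning tools expect; the single-column layout of the CSV, the output file name, and the eight-word split between prompt and completion are assumptions for illustration.

```python
# A minimal sketch of converting kantgpt.csv into JSONL prompt/completion pairs.
# Assumptions: the CSV holds one text passage per row in a single column;
# the eight-word split between prompt and completion is arbitrary.
import csv
import json

with open("kantgpt.csv", newline="", encoding="utf-8") as src, \
     open("kantgpt_prepared.jsonl", "w", encoding="utf-8") as dst:
    for row in csv.reader(src):
        if not row:
            continue
        text = row[0].strip()
        if not text:
            continue
        words = text.split()
        prompt = " ".join(words[:8])
        completion = " " + " ".join(words[8:])
        dst.write(json.dumps({"prompt": prompt, "completion": completion}) + "\n")
```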

Once you master fine-tuning GPT-3, you can use other types of data to teach it specific domains, knowledge graphs, and texts. OpenAI provides an efficient, well-documented service to fine-tune GPT-3 engines. It has trained GPT-3 models of varying size and capability, which it exposes as different engines.

The Davinci engine is powerful but can be more expensive to use. The Ada engine is less expensive and produces results that are sufficient for exploring GPT-3 in our experiment.

Fine-tuning GPT-3 involves two phases (a brief sketch of both follows the list):

  • Preparing the data

  • Fine-tuning a GPT-3 model
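As a preview of both phases, the sketch below uses the legacy openai-python package (pre-1.0): the training data is prepared as JSONL (OpenAI also ships a CLI helper, openai tools fine_tunes.prepare_data, for this), the file is uploaded, and a fine-tune job is created on the Ada engine. The file names, the environment variable, and the choice of Ada are assumptions for illustration.

```python
# A minimal sketch of both phases with the legacy openai-python package (pre-1.0).
# File names, the environment variable, and the "ada" base model are assumptions.
import os
import openai

openai.api_key = os.getenv("OPENAI_API_KEY")

# Phase 1: prepare the data.
# OpenAI's CLI helper can reformat a CSV/JSONL file into the
# prompt/completion JSONL format expected by fine-tuning:
#   openai tools fine_tunes.prepare_data -f kantgpt.csv

# Phase 2: fine-tune a GPT-3 model on the prepared file.
training_file = openai.File.create(
    file=open("kantgpt_prepared.jsonl", "rb"),
    purpose="fine-tune",
)
fine_tune = openai.FineTune.create(
    training_file=training_file["id"],
    model="ada",  # the less expensive engine, sufficient for this experiment
)
print(fine_tune["id"])  # job ID used to monitor the fine-tuning run
```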

Preparing the data

Open Fine_Tuning_GPT_3.ipynb (in the "Code playground" section) in a Jupyter notebook. OpenAI documents the data preparation process in detail:

Step 1: Importing OpenAI

First, we import openai:
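A minimal sketch of this step, assuming the openai package has been installed (for example with pip install openai):

```python
# Import the openai package; install it first if necessary,
# for example with: pip install openai
import openai
```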
