When working with the ChatGPT API, you may have encountered a limitation regarding the size of the text data you can send. This limitation can be worked around using methods such as prompt splitting or embedding: prompt splitting fragments your large text input into smaller segments that the ChatGPT API can process individually, while embedding condenses the text into a more compact form.
Embedding is a technique for managing large text inputs. Here, the text is converted into a more compact form, such as a numerical vector, using methods like Word2Vec, GloVe, or BERT embeddings. These embeddings encapsulate the semantic essence of the text and can be processed by the GPT model.
However, it’s crucial to remember that while embeddings can shrink the size of the text input, they may not fully conserve the original text’s details. Hence, this method is more suitable for tasks where the exact phrasing of the text is not vital.
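To make the idea concrete, here is a toy sketch (not a production embedding model like Word2Vec or BERT) that turns two sentences into small numerical vectors over a hypothetical vocabulary and compares them with cosine similarity. It illustrates how a vector can capture rough semantic overlap while discarding the exact phrasing.

```python
import math

def bag_of_words_vector(text, vocabulary):
    # Count how often each vocabulary word appears in the text.
    words = text.lower().split()
    return [words.count(term) for term in vocabulary]

def cosine_similarity(a, b):
    # Standard cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

vocabulary = ["forest", "magical", "dragon", "river"]
v1 = bag_of_words_vector("The magical forest had a dragon", vocabulary)
v2 = bag_of_words_vector("A dragon lived in the magical forest", vocabulary)

# The two sentences share vocabulary, so their vectors are nearly identical,
# even though the word order (the exact phrasing) is lost.
print(cosine_similarity(v1, v2))
```

Note how both sentences map to the same vector even though they are worded differently; that lost detail is exactly why embedding suits tasks where exact phrasing is not vital.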
Prompt splitting is another technique that enables us to supply vast amounts of contextual data to the prompts sent to ChatGPT. It’s achieved by slicing the text into smaller segments, each sent as a separate request to the API.
Let's consider a practical example. Suppose we have a long text that we want to generate a continuation for using GPT-3. The text is a story about a journey through a magical forest, and it's too long to fit within the model's token limit.
Here's a small excerpt from the story.
"Once upon a time, in a land far, far away, there was a magical forest. This forest was unlike any other. It was filled with trees that sparkled with emerald leaves, rivers that flowed with liquid gold, and flowers that could sing in the voices of angels. The forest was inhabited by a variety of mystical creatures, each with their own unique abilities and stories.

One day, a young adventurer named Alex decided to explore this magical forest. With a heart full of courage and a backpack filled with supplies, Alex stepped into the forest. The trees seemed to whisper to each other as Alex walked past them. The singing flowers created a melody that filled the air. It was like walking in a living, breathing fairy tale.

As Alex journeyed deeper into the forest, they encountered a variety of its mystical inhabitants. There was a group of pixies playing by the river, a wise old unicorn resting under a tree, and even a friendly dragon that offered Alex a ride through the sky. Each encounter was more magical than the last, and Alex was filled with wonder and joy."
This story is too long to fit within GPT-3's token limit. So, we need to split it into smaller parts.
The first thing to understand is what constitutes a token in GPT-3. A token can be as short as one character or as long as one word. For example, "ChatGPT is great!" is encoded into six tokens: ["Chat", "G", "PT", " is", " great", "!"]. The total number of tokens, counting both the messages you send to the API and the messages you receive from it, must not exceed the model's maximum limit (about 2,048 tokens for base GPT-3 models; some later variants, such as text-davinci-002, allow roughly twice that).
Look for natural breakpoints in the text. These could be paragraph breaks, changes in the topic, or shifts in the narrative. In our story, each paragraph represents a different part of Alex's journey, so they make good break points.
Split the text at the identified breakpoints. Make sure that each part is small enough to fit within the model's token limit. In our case, we can split the story into three parts, one for each paragraph.
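The splitting step can be sketched as follows. The token budget here is an arbitrary example value (you would leave room for the model's reply), and for simplicity each whitespace-separated word is treated as one token rather than using a real tokenizer.

```python
def split_on_paragraphs(text, max_tokens=1500):
    """Split text at blank-line paragraph breaks, greedily packing
    paragraphs together while staying under the token budget."""
    paragraphs = [p.strip() for p in text.split("\n\n") if p.strip()]
    parts, current = [], ""
    for para in paragraphs:
        candidate = (current + "\n\n" + para).strip()
        # Crude proxy: treat each whitespace-separated word as one token.
        if current and len(candidate.split()) > max_tokens:
            parts.append(current)   # current chunk is full; start a new one
            current = para
        else:
            current = candidate
    if current:
        parts.append(current)
    return parts

story = ("First paragraph about the forest.\n\n"
         "Second paragraph about Alex.\n\n"
         "Third paragraph about the creatures.")
# A tiny budget forces one part per paragraph, as in our three-part story.
print(split_on_paragraphs(story, max_tokens=8))
```

With a realistic budget, several paragraphs would be packed into each part; the tiny budget above simply makes the three-way split visible.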
When sending each part to the model, add some context from the previous parts. This helps the model understand the overall narrative and generate appropriate continuations. For instance, when sending the third part (about Alex encountering the forest's inhabitants), we could add a line from the second part to provide context: "With a heart full of courage and a backpack filled with supplies, Alex stepped into the forest."
Finally, send each part to the model and collect the responses. The model will generate a continuation for each part. You can then combine these continuations to get the full continuation of your original text.
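Putting these steps together, the loop below sends each part with a line of context carried over from the previous part and collects the responses. The `complete` function is a stand-in so the sketch runs offline; in real use you would replace its body with an actual API call such as `openai.Completion.create(...)` using your API key.

```python
def complete(prompt):
    # Placeholder for the real API call, e.g.:
    #   response = openai.Completion.create(
    #       engine="text-davinci-002", prompt=prompt, max_tokens=100)
    #   return response.choices[0].text.strip()
    return f"[continuation of: {prompt[-30:]}]"

def continue_story(parts):
    continuations = []
    context = ""
    for part in parts:
        # Prepend the carried-over context so the model sees the narrative so far.
        prompt = (context + " " + part).strip() if context else part
        continuations.append(complete(prompt))
        # Carry the last sentence of this part forward as context.
        context = part.split(". ")[-1]
    return " ".join(continuations)

parts = ["Alex entered the forest.", "The flowers sang sweetly."]
print(continue_story(parts))
```

How much context to carry forward is a judgment call: more context preserves coherence but eats into the token budget available for each part.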
The following code allows you to experiment with different prompts and observe the outputs. Feel free to adjust the prompt length and see how the model's responses change and try prompt splitting.
Note: You will need to input your API key for the code to provide an output.
import openai

openai.api_key = 'your-api-key'

prompt = "Split this prompt!"

response = openai.Completion.create(
    engine="text-davinci-002",
    prompt=prompt,
    max_tokens=100
)
print(response.choices[0].text.strip())
Line 1: Imports the OpenAI library.
Line 3: Sets your OpenAI API key for authentication. Replace 'your-api-key' with your secure OpenAI API key.
Line 5: Defines the prompt for the model.
Line 7: Sends a request to the OpenAI API to generate a text completion, using the text-davinci-002 engine with a 100-token response limit.
Line 12: Prints the generated text with surrounding whitespace removed.
Handling longer texts with the ChatGPT API requires a careful manual process of splitting the text into smaller parts and providing sufficient context for each part. While this approach allows you to work with longer texts, it doesn't guarantee that the model will maintain the context perfectly across all parts. However, with careful handling, it can be a useful way to generate continuations for longer texts with GPT-3.
Note: Remember, the effectiveness of this approach heavily relies on the quality of the break points and the context provided. So, take your time to understand your text and choose the best way to split it. Happy coding!