In Part 1, we covered the basics of building a chatbot with OpenAI. Now, in Part 2, we take it a step further by integrating OpenAI's more advanced GPT-3.5 Turbo and GPT-4 models. We'll explore the significance of contextual conversation in chatbot development and elevate our chatbot's user interface. This part is crucial for those looking to understand:
Integration of OpenAI’s Chat Completions API with React: Initially, we’ll explore how to integrate the API without storing the conversational context.
Context storage in chatbot conversations: Next, we’ll implement context storage to maintain a coherent conversation history.
UI Enhancements: Finally, we’ll enhance our chatbot’s user interface to make it more engaging and user-friendly.
In the realm of AI chatbot development, two primary approaches define the bot’s conversational capabilities: simple conversations and contextual conversations. Understanding the distinction between these two is key to developing a chatbot that aligns with your application’s needs.
Simple conversations employ a straightforward request-response mechanism: the chatbot responds to each user query independently, without considering previous interactions. This can be implemented using stateless components or basic state management in React. Each message is treated as a standalone query, making this approach suitable for scenarios like FAQ bots.
Contextual conversations leverage the history of user interaction to provide relevant responses, creating a more natural, human-like interaction. Advanced state management in React holds the conversation history. Models like GPT-3.5 Turbo and GPT-4, proficient in understanding context, play a pivotal role. This approach suits complex scenarios like customer support and personal assistants.
In this segment, we focus on enhancing our chatbot’s conversational abilities using GPT-3.5 Turbo and GPT-4. Key to this enhancement is the effective management of conversation history for contextual relevance. We achieve this by:
Maintaining a conversational log: We keep a record of each interaction within an array, including both user inputs and chatbot responses. This log serves as the memory of our chatbot, which is crucial for contextually rich conversations.
System role for direction: A specific ‘system’ message guides the chatbot’s approach:
const systemMessage = {
  role: 'system',
  content: "Explain things like you're a computer science field expert",
};
Contextual response strategy: Every time the chatbot replies, its response is added to the log under the “assistant” role. This method ensures that subsequent user inputs are met with responses that are informed by the preceding conversation flow.
By implementing these strategies, our chatbot becomes capable of delivering intelligent, context-aware interactions, fully utilizing the sophisticated language processing abilities of the GPT-3.5 Turbo and GPT-4 models.
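For illustration, here is a minimal sketch of what the conversational log might look like after a single exchange, assuming an array named conversationLog (your state variable and the example messages are illustrative, not part of the original code):

```jsx
// Hypothetical snapshot of the conversation log after one exchange.
// The system message steers the model's tone, and the reply is stored
// under the 'assistant' role so later requests carry the full context.
const conversationLog = [
  { role: 'system', content: "Explain things like you're a computer science field expert" },
  { role: 'user', content: 'What is a closure in JavaScript?' },
  { role: 'assistant', content: 'A closure is a function that remembers the variables from its enclosing scope...' },
];
```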
Before diving in, let’s set up the environment for React development.
In this section, we begin with a basic request-and-response model that doesn't retain conversation history for context. OpenAI's Chat Completions API requires a specific data format for each message: a role ("user," "assistant," or "system") and content (the message text), so we reformat user input accordingly. We use React's useState hook to update the chat as messages arrive. This approach offers straightforward, but less context-aware, interactions. A condensed sketch of such a component follows the line-by-line walkthrough below.
Please replace YOUR_API_KEY_HERE in the code with your own OpenAI API key to enable the functionality of the chatbot. If you don't have an API key, you can obtain one by signing up on the official OpenAI website. For demonstration purposes, we've used gpt-3.5-turbo, but you have the flexibility to substitute it with other compatible models.
Lines 7–11: systemMessage: This object sets up the context for the chatbot's responses.
Lines 14–20: Using useState, the messages state is initialized to store chat messages, and inputMessage holds the current input text.
Lines 22–31: This function is triggered when the 'Send' button is clicked. It checks that the input message is not empty, then calls chatBotCommunication with the new message.
Lines 35–38: headerParameters is defined with the necessary HTTP headers, including authorization with the API key and the content type set to JSON.
Lines 54–73: endpointUrl specifies the URL of the OpenAI chat completions endpoint. An asynchronous fetch request is made to the OpenAI API with the request options. The response is handled, and the chatbot's reply is added to the messages state.
Lines 82–103: The application renders a chat interface where messages are displayed and the user can type and send a new message. The onChange event on the input field updates inputMessage, and the onClick event on the button triggers handleSend.
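Because the full component isn't reproduced inline here, the following is a condensed sketch of the approach described above. It is an assumption-based outline rather than the exact code the line numbers refer to; the names handleSend, chatBotCommunication, headerParameters, and endpointUrl follow the walkthrough, while the rest of the structure is illustrative.

```jsx
// Minimal sketch of a context-free chatbot component (illustrative structure).
import React, { useState } from 'react';

const systemMessage = {
  role: 'system',
  content: "Explain things like you're a computer science field expert",
};

function Chatbot() {
  const [messages, setMessages] = useState([]);          // all displayed messages
  const [inputMessage, setInputMessage] = useState('');  // current input text

  // Sends only the latest user message; no history is included,
  // so each request contains just the system message and the new input.
  const chatBotCommunication = async (userMessage) => {
    const headerParameters = {
      'Content-Type': 'application/json',
      Authorization: 'Bearer YOUR_API_KEY_HERE', // replace with your OpenAI API key
    };
    const endpointUrl = 'https://api.openai.com/v1/chat/completions';

    const response = await fetch(endpointUrl, {
      method: 'POST',
      headers: headerParameters,
      body: JSON.stringify({
        model: 'gpt-3.5-turbo',
        messages: [systemMessage, { role: 'user', content: userMessage }],
      }),
    });
    const data = await response.json();
    const reply = data.choices[0].message.content;

    setMessages((prev) => [...prev, { role: 'assistant', content: reply }]);
  };

  // Triggered when the 'Send' button is clicked.
  const handleSend = () => {
    if (!inputMessage.trim()) return;
    setMessages((prev) => [...prev, { role: 'user', content: inputMessage }]);
    chatBotCommunication(inputMessage);
    setInputMessage('');
  };

  return (
    <div>
      <div>
        {messages.map((msg, i) => (
          <p key={i}>
            <strong>{msg.role}:</strong> {msg.content}
          </p>
        ))}
      </div>
      <input
        value={inputMessage}
        onChange={(e) => setInputMessage(e.target.value)}
      />
      <button onClick={handleSend}>Send</button>
    </div>
  );
}

export default Chatbot;
```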
We now elevate our chatbot's capabilities by storing and processing the entire dialogue with OpenAI's GPT-3.5 Turbo and GPT-4 models. Every chatbot response is tagged with the 'assistant' role and appended to our messages array, and this history is sent to the API with each request, enabling a continuous and contextually rich conversation (a sketch of the context-aware request follows the walkthrough below).
Please replace YOUR_API_KEY_HERE in the code with your own OpenAI API key to enable the functionality of the chatbot.
In the code above:
Lines 15–20: The messages state stores chat messages, and inputMessage stores the current input from the user.
Lines 24–37: The handleSend function is triggered when a user sends a message. It checks that the input is not empty, creates a new message object, clears the input, and updates the messages state.
Lines 39–94: The chatBotCommunication function is an asynchronous function that takes the chat messages, formats them for the API, and sends a request to OpenAI's chat API. It uses the fetch API to make a POST request.
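As a rough sketch of the key change relative to the earlier version, the request now carries the entire conversation log. The snippet below assumes it lives inside the same component, so setMessages and systemMessage are in scope; the exact structure of the original code may differ.

```jsx
// Sketch of the context-aware request: the whole conversation log is sent.
const chatBotCommunication = async (chatMessages) => {
  // Map the stored messages into the { role, content } shape the API expects.
  const apiMessages = chatMessages.map((msg) => ({
    role: msg.role,
    content: msg.content,
  }));

  const response = await fetch('https://api.openai.com/v1/chat/completions', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: 'Bearer YOUR_API_KEY_HERE', // replace with your OpenAI API key
    },
    body: JSON.stringify({
      model: 'gpt-3.5-turbo', // or 'gpt-4'
      messages: [systemMessage, ...apiMessages], // full history for context
    }),
  });

  const data = await response.json();
  const reply = data.choices[0].message.content;

  // Store the reply under the 'assistant' role so the next request
  // includes it as part of the conversation history.
  setMessages((prev) => [...prev, { role: 'assistant', content: reply }]);
};
```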
With our chatbot’s core functionality in place, we shift our focus to UI enhancements. Using Bootstrap, we’ll improve the chatbot’s appearance, making the interface more engaging and user-friendly.
Please replace YOUR_API_KEY_HERE in the code with your own OpenAI API key to enable the functionality of the chatbot.
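To give a sense of what the Bootstrap-based interface might look like, here is an illustrative sketch of the component's return statement using standard Bootstrap 5 utility classes; the exact layout and class choices are assumptions, not the original markup. Bootstrap's stylesheet must be loaded, for example with import 'bootstrap/dist/css/bootstrap.min.css';.

```jsx
// Illustrative Bootstrap-styled chat interface (assumed markup).
return (
  <div className="container my-4" style={{ maxWidth: '600px' }}>
    <div className="card">
      <div className="card-header bg-primary text-white">Chatbot</div>

      {/* Scrollable message area */}
      <div className="card-body" style={{ height: '400px', overflowY: 'auto' }}>
        {messages.map((msg, i) => (
          <div
            key={i}
            className={`p-2 my-1 rounded ${
              msg.role === 'user' ? 'bg-primary text-white text-end' : 'bg-light'
            }`}
          >
            {msg.content}
          </div>
        ))}
      </div>

      {/* Input row */}
      <div className="card-footer d-flex">
        <input
          className="form-control me-2"
          value={inputMessage}
          onChange={(e) => setInputMessage(e.target.value)}
          placeholder="Type a message..."
        />
        <button className="btn btn-primary" onClick={handleSend}>
          Send
        </button>
      </div>
    </div>
  </div>
);
```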
In this part of our series, we’ve not only integrated advanced AI models into our chatbot but also enhanced its user interface. Stay tuned for further developments as we continue to explore innovative ways to improve chatbot interactions.