Build an AI Chatbot with React and OpenAI (Part 1)

AI chatbots have become crucial tools for elevating customer experiences. Integrated with leading AI technologies such as OpenAI’s GPT models, they give businesses the opportunity to provide 24/7 support and tailored interactions. This evolution is not just transforming how companies interact with their clientele; it also underscores the growing need for developers to acquire skills in AI chatbot development using tools like React and OpenAI, capabilities that are becoming essential for customer engagement and information dissemination.

What to expect

The primary objective of this article is to serve as a comprehensive guide explaining the step-by-step process of creating an interactive AI chatbot. Our focus will be on utilizing React, a robust JavaScript library, for developing the front end of the chatbot, ensuring a seamless and dynamic user interface. Simultaneously, we will integrate OpenAI’s GPT model to power the back end, enabling our chatbot to engage in intelligent and responsive conversations.

The expected output of our final application will be as follows:

By the end of this guide, you’ll have gained invaluable insights and practical skills in leveraging these advanced technologies to build a chatbot that not only interacts effectively but also provides a rich, user-friendly experience. We are now ready to explore the workflow of the OpenAI API, with a particular emphasis on models for textual data, leading up to an introduction to the basic implementation.

OpenAI API

The OpenAI API is an interface that allows developers to access OpenAI’s advanced AI models, including GPT (Generative Pre-trained Transformer) and DALL·E. It provides a way to integrate these models into various applications, enabling functionalities like natural language understanding, text generation, and image creation.

Understanding the API Workflow

Now that we’ve introduced the OpenAI API, let’s explore how its workflow will operate within our chatbot application. An illustration provided here offers a clear visual guide to this process.

The API Workflow in Action

  1. Sending a request: The journey begins with our chatbot (the client) sending a structured request to the OpenAI API. These requests include specific parameters, which we’ll discuss shortly.

  2. Processing by OpenAI: Upon receiving the request, the API employs the specified GPT model to process the data. This is where the AI models’ capabilities in text understanding and generation come into play.

  3. Receiving the response: The API then returns the processed data or response back to our chatbot. This response empowers the chatbot to provide intelligent, relevant, and context-aware answers.

API workflow

Having established an understanding of the OpenAI API’s workflow, we’ll now delve into the specifics of how we configure and send these requests, ensuring our chatbot’s efficient and intelligent operation.

Focusing on textual data models

In this guide, our focus will be narrowed to models best suited for understanding and generating textual data, namely GPT-3.5 and GPT-4. These large language models, trained on extensive text data, are ideal for our chatbot’s requirements. Next, we’ll look at how to interact with specific API endpoints to harness these models’ full potential.

Endpoint compatibility with GPT models

The text-based models we’ll use are served by the endpoint listed below.

ENDPOINT: /v1/chat/completions

LATEST MODELS:

  • gpt-4 and dated model releases
  • gpt-4-1106-preview, gpt-4-vision-preview, gpt-4-32k, and dated model releases
  • gpt-3.5-turbo and dated model releases
  • gpt-3.5-turbo-16k and dated model releases
  • fine-tuned versions of gpt-3.5-turbo

In this guide, we’ll use the /v1/chat/completions endpoint, accessed through the following URL.

https://api.openai.com/v1/chat/completions
Chat Completion API endpoint

Request parameters

Regarding the /v1/chat/completions endpoint, it’s essential to understand its various request parameters. These parameters define how we interact with the endpoint to achieve the desired outcomes. Let’s delve into some of these key request parameters:

  • messages (array, required): An array of message objects representing the conversation history. Each message object typically includes fields like:

    • role: Defines whether the message comes from the system, the user, or the assistant.
    • content: The actual text of the message.

  • model (string, required): Specifies the model version, such as "gpt-3.5-turbo".

  • max_tokens (integer or null, optional): The maximum number of tokens that can be generated in the chat completion.

  • response_format (object, optional): An object specifying the format that the model must output. Setting it to { "type": "json_object" } enables JSON mode, which guarantees that the message the model generates is valid JSON. The type must be either text or json_object.

  • stop (string, array, or null; optional, defaults to null): Up to four sequences that signal the API to cease further token generation.

  • temperature (number or null; optional, defaults to 1): Adjusts the model’s token-selection probabilities. Lower values favor more likely tokens, promoting more predictable responses; higher values allow less likely tokens to be chosen more often, enhancing creativity and variation.

  • top_p (number or null; optional, defaults to 1): Sets a threshold so the model considers only a subset of the most probable tokens, excluding less probable choices. Together with temperature, it fine-tunes the balance between predictability and diversity in responses.
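As a quick sketch, here is how some of the optional parameters above might be combined with the required ones in a single request body. The specific values are illustrative, not recommendations:

```javascript
// Illustrative request body: required parameters plus a few optional ones.
const requestBody = {
  model: 'gpt-3.5-turbo',                      // required
  messages: [                                   // required
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'Suggest a name for a chatbot.' },
  ],
  max_tokens: 100,       // cap the length of the generated reply
  temperature: 0.7,      // slightly more predictable than the default of 1
  top_p: 1,              // consider the full probability mass
  stop: ['\n\n'],        // stop generating at the first blank line
};

// The body is serialized to JSON before being sent to the endpoint.
console.log(JSON.stringify(requestBody, null, 2));
```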

The minimum necessary parameters should include the following:

{
  model: "gpt-3.5-turbo",
  messages: [
    {
      role: "system",
      content: "You are a helpful assistant."
    },
    {
      role: "user",
      content: "How are you?"
    }
  ]
}

In the request parameters, messages and model are mandatory.

  • The model parameter specifies the version of the GPT model being used, such as gpt-3.5-turbo, gpt-4, gpt-3.5-turbo-0613, or gpt-3.5-turbo-16k-0613.

  • messages is a list where each item contains two essential fields:

    • role: This indicates who is sending the message. It can be one of the following:

      • system: Messages that provide context or instructions rather than being part of the main conversation. For example, a system message might tell the chatbot, “You are a helpful assistant,” which sets the context for how the chatbot should behave or respond.

      • user: Messages from the human interacting with the chatbot. These are the inputs or questions that a person types in, which the chatbot is expected to respond to.

      • assistant: Messages generated by the chatbot or AI assistant in response to user inputs. These are the replies or information the chatbot provides based on its programming and the user’s queries.

    • content: This is the actual text of the message, like a user asking, “Write me a beautiful poem.”
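Putting the three roles together, a multi-turn conversation is just an array of such objects. The sketch below is illustrative (the content strings are made up); note how an earlier assistant reply is carried in the history so the model can see the previous exchange:

```javascript
// A conversation history combining all three roles. Each item has
// exactly the two fields described above: role and content.
const messages = [
  { role: 'system', content: 'You are a helpful assistant.' },
  { role: 'user', content: 'Write me a beautiful poem.' },
  { role: 'assistant', content: 'Soft dawn unfolds on silver seas...' },
  { role: 'user', content: 'Now make it shorter.' },
];

// Print the conversation turn by turn.
for (const message of messages) {
  console.log(`${message.role}: ${message.content}`);
}
```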

Basic JavaScript implementation

Now let’s make a basic API call with minimum required parameters.

Please replace YOUR_API_KEY_HERE in the code with your own OpenAI API key to enable the functionality of the chatbot. If you don't have an API key yet, you can obtain one by signing up at the official OpenAI website. For demonstration purposes, we've utilized gpt-3.5-turbo, but you have the flexibility to substitute it with other compatible models.
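The code listing itself is not reproduced in this extract, so the sketch below reconstructs it from the line-by-line walkthrough that follows. Treat it as an approximation: the names fetchChatResponse, headerParameters, bodyParameters, and requestOptions come from the walkthrough, the exact system-prompt wording is assumed, and the walkthrough’s line numbers map only roughly onto this version.

```javascript
const openaiApiKey = 'YOUR_API_KEY_HERE';

const url = new URL('https://api.openai.com/v1/chat/completions');

const headerParameters = {
  'Content-Type': 'application/json',
  Authorization: `Bearer ${openaiApiKey}`,
};

const bodyParameters = {
  model: 'gpt-3.5-turbo',
  messages: [
    {
      role: 'system',
      content: 'You are Shakespeare. Respond in his style.',
    },
    {
      role: 'user',
      content: 'What are computers?',
    },
  ],
};

const requestOptions = {
  method: 'POST',
  headers: headerParameters,
  body: JSON.stringify(bodyParameters),
};

async function fetchChatResponse() {
  try {
    const response = await fetch(url, requestOptions);
    if (response.ok) {
      // Success: parse and log the JSON response.
      const data = await response.json();
      console.log(data);
    } else {
      // HTTP error: log the status code and status text.
      console.error(`HTTP error! Status: ${response.status} ${response.statusText}`);
      // Read the body as text to handle non-JSON error responses.
      const errorText = await response.text();
      console.log(errorText);
    }
  } catch (error) {
    // Network failures and other fetch errors end up here.
    console.error('Request failed:', error);
  }
}

fetchChatResponse();
```

With a valid API key, running this should print a JSON response shaped like the sample shown in the “Response fields” section below.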

In the above code, we are making a basic API call to utilize the gpt-3.5-turbo model. We begin by setting the conversational context, requesting the model to mimic Shakespeare through the system role. This is followed by a user role query concerning computers. Following are the more technical details about the code:

  • Line 1: This line declares a constant variable openaiApiKey which should hold your OpenAI API key as a string.

  • Line 3: Here, a new URL object is being created with the endpoint for the OpenAI Chat Completions API.

  • Lines 5–8: const headerParameters = { ... }; block defines the HTTP headers to be used in the API request, specifying the content type as JSON and the Authorization header using the Bearer token scheme with your OpenAI API key.

  • Lines 10–22: const bodyParameters = { ... }; defines the body of the POST request. It specifies the model to use ("gpt-3.5-turbo"), and includes an array of message objects that simulate a conversation with the system assuming the persona of Shakespeare and a user asking “What are computers?”.

  • Lines 24–28: const requestOptions = { ... }; object sets up the request options for the fetch call. It's a POST request with the headers and body defined in the previous steps.

  • Lines 30–46: async function fetchChatResponse() { ... } is an asynchronous function to handle the fetching of the API response.

    • Line 32: An asynchronous call to the OpenAI API endpoint is made using the fetch method with the URL and request options.

    • Lines 33–36: if (response.ok) { ... } checks if the HTTP response status is an OK (200–299). If so, it proceeds to handle the response. If the response is successful, the response data is logged to the console.

  • The else block handles HTTP errors when response.ok is false.

    • Line 38: Logs an error message that includes the HTTP status code and status text.

    • Line 39: Reads the response body as text, so that non-JSON error responses can be handled, and logs it to the console.

  • Line 49: The function fetchChatResponse is called to execute the API request.

Response fields

In the sample response below, observe that the message’s role is identified as “assistant.” The response to the request will be formatted as follows:

{
  "id": "chatcmpl-8UYf9vT33tQJShu8s1DGIaJN5wCPU",
  "object": "chat.completion",
  "created": 1702293775,
  "model": "gpt-3.5-turbo-0613",
  "system_fingerprint": null,
  "choices": [{
    "index": 0,
    "message": {
      "role": "assistant",
      "content": "\n\nThank you for asking! As an AI, I don't have feelings, but I'm here and ready to assist you. How can I help you today?"
    },
    "finish_reason": "stop"
  }],
  "usage": {
    "prompt_tokens": 21,
    "completion_tokens": 32,
    "total_tokens": 53
  }
}

Some of the important response fields from the API are as follows:

  • id (string): A unique identifier for the response generated by the API call.

  • choices (array): An array of the responses generated by the model. Each element is an object representing a possible completion or output based on the input prompt.

  • created (integer): A Unix timestamp indicating when the response was generated.

  • model (string): The model version used for generating the response.

  • system_fingerprint (string): An identifier for the backend configuration the model ran with.

  • object (string): The type of object the API response represents. For this endpoint, it is "chat.completion", signaling the nature of the API operation.

  • usage (object): Information about API usage for the current request, including the number of prompt, completion, and total tokens processed, which is important for billing and tracking API consumption.
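Once the JSON response has been parsed, these fields can be read directly. The sketch below uses a trimmed, hardcoded copy of the sample response from above to show where the assistant’s reply and the token counts live:

```javascript
// A trimmed copy of the parsed sample response shown earlier.
const data = {
  id: 'chatcmpl-8UYf9vT33tQJShu8s1DGIaJN5wCPU',
  object: 'chat.completion',
  model: 'gpt-3.5-turbo-0613',
  choices: [
    {
      index: 0,
      message: { role: 'assistant', content: 'Thank you for asking! How can I help you today?' },
      finish_reason: 'stop',
    },
  ],
  usage: { prompt_tokens: 21, completion_tokens: 32, total_tokens: 53 },
};

// The chatbot's reply lives in the first element of `choices`.
const reply = data.choices[0].message.content;
console.log(reply);

// `usage` holds the token counts relevant for billing.
console.log(`Tokens used: ${data.usage.total_tokens}`);
```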

Now that we are well aware of the request and response structure of the chat endpoint, we will utilize it to make a chatbot. In Part 2 of this guide, we’ll focus on incorporating OpenAI’s Chat Completions API into React and explore managing context in chatbot conversations.

It might be beneficial to deepen your understanding of prompt engineering for chatbots. For those interested in further exploring this aspect, Master ChatGPT Prompt Engineering offers a comprehensive learning path.

Copyright ©2024 Educative, Inc. All rights reserved