Configure OpenAI’s Chat Completions API for JSON responses

Integrating natural language understanding into applications can be challenging for developers, particularly when it comes to managing responses from AI models across different tasks. OpenAI’s Chat Completions API addresses this by handling a wide range of text tasks and, when configured to do so, returning responses as JSON objects, which simplifies integration with other platforms and lets developers build dynamic applications. This Answer shows how to configure the Chat Completions API for JSON responses, making it easier to use AI-powered text processing in your projects.

Chat Completions API

The Chat Completions API offers a versatile range of text-related tasks, such as classification, generation, transformation, completing incomplete text, providing factual responses, and more. Users provide input messages along with assigned roles, and the endpoint returns a JSON object in response.

Working of the OpenAI Chat Completions endpoint
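To make the request and response structure concrete, the sketch below shows the role-tagged message list the endpoint accepts and a trimmed illustration of the JSON object it returns. Exact fields can vary by model and API version, so treat this as an illustration rather than a complete reference.

# Role-tagged messages: "system" sets the assistant's behavior,
# "user" carries the request, and "assistant" holds earlier model replies.
messages = [
    {"role": "system", "content": "You are a helpful assistant designed to output JSON."},
    {"role": "user", "content": "Summarize today's tasks as JSON."}
]

# Trimmed illustration of the JSON object the endpoint returns:
# {
#   "id": "chatcmpl-...",
#   "object": "chat.completion",
#   "model": "gpt-3.5-turbo",
#   "choices": [
#     {
#       "index": 0,
#       "message": {"role": "assistant", "content": "..."},
#       "finish_reason": "stop"
#     }
#   ],
#   "usage": {"prompt_tokens": ..., "completion_tokens": ..., "total_tokens": ...}
# }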

Let’s understand how to configure OpenAI’s Chat Completions endpoint for JSON responses using a coding example.

Note: Please replace {{API_KEY}} with your own API key in the code below:
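As an alternative to hard-coding the key, the openai Python library can read it from the OPENAI_API_KEY environment variable; the snippet below shows one way to do this.

import os
from openai import OpenAI

# Prefer the OPENAI_API_KEY environment variable; fall back to an explicit value.
client = OpenAI(api_key=os.environ.get("OPENAI_API_KEY", "{{API_KEY}}"))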

Code example 1

The following example demonstrates how to use the Chat Completions API to ask the model for a JSON-formatted list of OpenAI models (drawn from its training data rather than a live catalog) and print the response.

from openai import OpenAI
client = OpenAI(api_key="{{API_KEY}}")
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    response_format={ "type": "json_object" },
    messages=[
        {"role": "system", "content": "You are a helpful assistant designed to output JSON."},
        {"role": "user", "content": "list all the OpenAI models available."}
    ],
)
print(response.choices[0].message.content)

Code explanation

Here’s a breakdown of what each part does:

  • Line 1: This line imports the OpenAI class from the openai module. This class provides methods for interacting with the OpenAI API.

  • Line 2: This line creates a new instance of the OpenAI class, initialized with your API key. You need to replace {{API_KEY}} with your actual OpenAI API key.

  • Lines 3–10: Here, we send a request to the Chat Completions API to generate a completion based on the provided conversation messages. We specify the model to use ("gpt-3.5-turbo"), set response_format to { "type": "json_object" } so the model is constrained to return a valid JSON object, and pass the conversation messages (a system message and a user message). Note that JSON mode requires the word "JSON" to appear somewhere in the messages, which is why the system message explicitly asks for JSON output.

  • Line 11: This line prints the content of the completion generated by the API. The response object contains a list of choices, each representing a possible completion; here, we print the content of the first choice’s message, which is a JSON-formatted string. A sketch of parsing that string follows this list.
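Because response_format={ "type": "json_object" } guarantees that the returned content is a valid JSON string, you can parse it with the standard json module. The sketch below is only an illustration: the top-level key "models" is an assumption, because the model chooses the key names unless the prompt specifies them, so inspect the output before relying on a particular structure.

import json

# message.content is a JSON-formatted string; parse it into a Python dict.
data = json.loads(response.choices[0].message.content)

# "models" is an assumed top-level key; the model picks the key names
# unless the prompt asks for specific ones.
for name in data.get("models", []):
    print(name)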

Code example 2

Let’s take another example demonstrating the use of the Chat Completions API to transform text into a formal tone while ensuring the response is in JSON format.

# Transforming text to a formal tone using the Chat Completions endpoint
from openai import OpenAI
client = OpenAI(api_key="{{API_KEY}}")
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    response_format={ "type": "json_object" },
    messages=[
        {"role": "system", "content": "You are an assistant that transforms text into a formal tone, outputting results in JSON format."},
        {"role": "user", "content": "Hey, can you send me the report by 5 PM? Thanks!"}
    ],
)
print(response.choices[0].message.content)

Code explanation

Notice that we only update the system message and the user message in the code above; the rest of the code remains the same.

  • Lines 8–9: Here, we update the conversation messages. The system message instructs the Chat Completions API to transform the user’s text into a formal tone and to return the result in JSON format. When the API receives the user message, it rewrites the provided text in a formal tone and returns it inside a JSON object. See the sketch after this list for one way to control that object’s keys.
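If your application needs predictable field names, you can describe the desired keys in the system message itself. The sketch below assumes a key named formal_text, which we invent purely for illustration; it is not a field defined by the API, and the model uses it only because the prompt asks for it.

import json
from openai import OpenAI

client = OpenAI(api_key="{{API_KEY}}")

# Ask the model to use a specific, agreed-upon key so the output is predictable.
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    response_format={ "type": "json_object" },
    messages=[
        {"role": "system", "content": "You transform text into a formal tone. Respond in JSON with a single key named 'formal_text'."},
        {"role": "user", "content": "Hey, can you send me the report by 5 PM? Thanks!"}
    ],
)

# "formal_text" is the key requested above; fall back to the whole object if it is missing.
result = json.loads(response.choices[0].message.content)
print(result.get("formal_text", result))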
