Chat Completions

Learn how to generate and manipulate text using the chat completions endpoint of the OpenAI API.

The chat completions endpoint

The chat completions endpoint can be used to perform many tasks on text, including classification, generation, transformation, completion of partial text, factual question answering, and more. The endpoint takes a list of input messages, each paired with a role, and returns a JSON object.


The chat completions endpoint is called by sending a POST request to the following URL:

https://api.openai.com/v1/chat/completions
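As a sketch of what such a POST request looks like, the snippet below builds the request headers and body with Python's standard tooling. The API key and model name are placeholders, not values from this lesson; a real key from your OpenAI account is required for the call to succeed.

```python
import json

API_URL = "https://api.openai.com/v1/chat/completions"
API_KEY = "YOUR_API_KEY"  # placeholder -- substitute your own key

# Every request must carry a bearer token and a JSON content type.
headers = {
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json",
}

# The request body pairs each message with a role
# ("system", "user", or "assistant").
payload = {
    "model": "gpt-4o-mini",  # placeholder model ID
    "messages": [
        {"role": "user", "content": "Classify the sentiment: 'I love this!'"}
    ],
}

print(json.dumps(payload, indent=2))

# To actually send the request (requires a valid key):
# import requests
# response = requests.post(API_URL, headers=headers, json=payload)
# print(response.json()["choices"][0]["message"]["content"])
```

The network call itself is left commented out so the sketch runs without credentials; the printed payload shows the minimal required fields, `model` and `messages`.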

Understanding the chat completions endpoint

Let's look at the chat completions endpoint in more detail, reviewing the request and response parameters.

Request parameters

Let’s see some essential request parameters for this endpoint in the table below:

| Field | Format | Required/Optional | Description |
| --- | --- | --- | --- |
| `messages` | Array | Required | A list of messages comprising the conversation so far. |
| `model` | String | Required | The ID of the model that the chat completions endpoint will use. |
| `max_tokens` | Integer | Optional | The maximum number of tokens to generate in the chat completion. |
| `temperature` | Float | Optional | The sampling temperature to use, ranging from 0 to 2. Higher values, such as 0.8, make the output more random, whereas lower values, like 0.2, make it more focused and deterministic. Default value: 1 |
| `top_p` | Float | Optional | Nucleus sampling, an alternative to temperature sampling in which the model considers only the tokens comprising the top `p` probability mass. A value of 0.1 means only the top 10% probability mass tokens are considered. Default value: 1 |
| `response_format` | Object | Optional | An object that specifies the format of the output. This parameter is compatible with the newer models. Setting it to `{ "type": "json_object" }` guarantees a valid JSON object. |
| `n` | Integer | Optional | The number of chat completions to generate. Default value: 1 |
| `logprobs` | Boolean | Optional | Whether to return the log probabilities of the output tokens. This option is not available on the `gpt-4-vision-preview` model. Default value: false |
| `stop` | String/array | Optional | Up to four sequences at which the API will stop generating further tokens. Default value: null |
| `presence_penalty` | Float | Optional | A value between -2.0 and 2.0. Positive values penalize tokens that have already appeared in the text, increasing the likelihood of the model introducing new topics. Default value: 0 |
| `frequency_penalty` | Float | Optional | A value between -2.0 and 2.0. Positive values penalize tokens in proportion to their existing frequency in the text, decreasing the likelihood of repeating the same words. Default value: 0 |
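To see how the optional parameters above fit together, the sketch below assembles a request body that combines several of them, along with simple sanity checks mirroring the documented ranges. The model ID and parameter values are illustrative placeholders, not recommendations.

```python
# A hypothetical request body combining several optional parameters.
payload = {
    "model": "gpt-4o-mini",  # placeholder model ID
    "messages": [
        {"role": "user", "content": "Write a haiku about APIs."}
    ],
    "max_tokens": 60,         # cap the length of the completion
    "temperature": 0.2,       # lower = more focused and deterministic
    "n": 2,                   # generate two alternative completions
    "stop": ["\n\n"],         # stop generating at a blank line
    "presence_penalty": 0.5,  # nudge the model toward new topics
}

# Sanity checks based on the documented value ranges.
assert 0 <= payload["temperature"] <= 2
assert -2.0 <= payload["presence_penalty"] <= 2.0
assert 1 <= len(payload["stop"]) <= 4  # up to four stop sequences

print(f"Request will ask for {payload['n']} completion(s).")
```

Because `n` is set to 2, the response's `choices` array would contain two independent completions, each billed for the tokens it generates.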

Note: You can learn more about the ...
