In 2020, OpenAI introduced GPT-3, which set new performance benchmarks in NLP. Despite this, GPT-3 had its limitations, most notably the criticism it drew for biased generation. These shortcomings led OpenAI to refine it into GPT-3.5 (and subsequently GPT-4).
ChatGPT uses a simple, chat-style interface reminiscent of Yahoo! Messenger (minus the contacts sidebar), where we type our queries and it responds conversationally.
Here is an example of asking ChatGPT to write a simple function in Python.
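Suppose, for instance, we ask it to write a function that checks whether a number is prime. A typical response looks something like the sketch below (this is illustrative, not ChatGPT's verbatim output):

def is_prime(n):
    """Return True if n is a prime number, False otherwise."""
    if n < 2:
        return False
    for i in range(2, int(n ** 0.5) + 1):
        if n % i == 0:
            return False
    return True

print(is_prime(7))   # True
print(is_prime(10))  # False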
Now, let's try some machine learning code.
The code is reproduced below as a reference, so we can verify whether it works.
import torch

# Define the number of neurons in each layer
num_neurons = [2, 3, 4, 2]

# Define the activation function for each layer (one per linear layer)
activations = [torch.nn.ReLU(), torch.nn.ReLU(), torch.nn.Sigmoid()]

# Create the layers of the neural network
layers = []
for i in range(len(num_neurons) - 1):
    # Create a linear layer with the specified number of neurons
    linear_layer = torch.nn.Linear(num_neurons[i], num_neurons[i + 1])
    # Add the activation function for this layer
    activation_layer = activations[i]
    # Add the linear and activation layers to the list of layers
    layers.extend([linear_layer, activation_layer])

# Create the neural network
model = torch.nn.Sequential(*layers)
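As a quick smoke test of the generated network (this check is our own sketch, not part of ChatGPT's answer), we can pass a dummy batch through it:

# Illustrative check: a batch of 5 samples with 2 features each
x = torch.randn(5, 2)
print(model(x).shape)  # torch.Size([5, 2])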
Now, let's ask it a more complex question.
The complete code is as follows.
import torch

# Define the number of neurons in each layer
num_neurons = [2, 3, 4, 2]

# Define the activation function for each layer (one per linear layer)
activations = [torch.nn.ReLU(), torch.nn.ReLU(), torch.nn.Sigmoid()]

# Create the layers of the neural network
layers = []
for i in range(len(num_neurons) - 1):
    # Create a linear layer with the specified number of neurons
    linear_layer = torch.nn.Linear(num_neurons[i], num_neurons[i + 1])
    # Add the activation function for this layer
    activation_layer = activations[i]
    # If this is the second or third layer, add a dropout layer and a batch normalization layer
    if i == 1 or i == 2:
        dropout_layer = torch.nn.Dropout(p=0.3)
        batchnorm_layer = torch.nn.BatchNorm1d(num_neurons[i + 1])
        # Add the dropout and batch normalization layers to the list of layers
        layers.extend([linear_layer, dropout_layer, batchnorm_layer, activation_layer])
    else:
        # Add the linear and activation layers to the list of layers
        layers.extend([linear_layer, activation_layer])

# Create the neural network
model = torch.nn.Sequential(*layers)
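The same kind of smoke test (again our own sketch) works here too; note that the BatchNorm1d layers require more than one sample per batch while the model is in training mode:

# Illustrative check: BatchNorm1d needs a batch of size > 1 in training mode
x = torch.randn(5, 2)
print(model(x).shape)  # torch.Size([5, 2])

# For single-sample inference, switch to evaluation mode first
model.eval()
print(model(torch.randn(1, 2)).shape)  # torch.Size([1, 2])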
However, can it handle non-programming questions and hold day-to-day conversations as well? To check this, we conducted a detailed interview in which ChatGPT answered our queries. For ease of reading, the queries have been divided into categories.
Going by the maxim, "If you don't know history, then you don't know anything," we tested some history queries.
Let's try some more queries.
Let's talk about the weather with ChatGPT.
Let's try a follow-up question.
Let's ask another query.
Let's see if it can perform quantitative analysis.
Let's present it with a slightly more complex query.
Let's make this query more complex.
Perhaps the query's phrasing was the problem, so let's simplify it.
Note: Feel free to refer to the original blog post for a much more detailed Turing test for ChatGPT.
As we have seen, ChatGPT is impressive, and it's hard to tire of it. However, this is just the tip of the iceberg with regard to its capabilities. Here are some patterns we have observed so far.
Its performance is usually superb across many tasks, not just code generation.
When it fails, it's often for trivial tasks.
Instead of providing to-the-point answers, it gives very detailed information.
Combined with human intelligence, it can become a great tool; on the other hand, it can also become a vehicle for spreading misinformation.