Accessing DeepSeek
Learn different ways to access and chat with the DeepSeek models.
DeepSeek is a family of powerful AI models that assist with various tasks, including natural language processing, coding, and more. We can access DeepSeek through its web-based interface or via API integration. In this lesson, we’ll walk through both methods, starting with the easiest approach—using the DeepSeek Web app.
Accessing DeepSeek via web app
The simplest way to start using DeepSeek is through its official web interface. Here are the steps:
Visit the DeepSeek chat website.
Create an account or sign in with an existing account.
That’s it. Once logged in, we’ll be taken to the chat interface.
Type a query in the text box to generate a response using the DeepSeek-V3 model.
DeepSeek also offers additional capabilities beyond standard chat interactions.
We can also use the DeepSeek-R1 model for complex reasoning tasks, logical deductions, or problem-solving. To do so, we can click the “DeepThink (R1)” button before sending the query so that it is handled by the R1 model.
DeepSeek also provides an AI-powered search feature to retrieve relevant information from the web. To use it, click the “Search” button before sending the query, and DeepSeek will return results retrieved from the web instead of a purely generated response.
These features make DeepSeek more versatile, allowing us to choose between conversational AI, logical reasoning, and real-time information retrieval based on our needs.
Accessing DeepSeek via API key
DeepSeek also provides API-based access to its models, allowing developers to interact with them programmatically. However, due to current server constraints, DeepSeek has temporarily suspended API service recharges, so new users cannot add funds; users with existing balances can still use the API.
Many third-party platforms host DeepSeek models and provide API access to them. For a more reliable approach, we will use one such platform, Groq, to access a DeepSeek model.
Groq
Groq is an AI inference service that provides access to various large language models (LLMs), enabling developers to integrate AI capabilities into their applications. It is optimized for ultra-low latency and high performance, making it ideal for real-time applications.
Getting an API key
The first step is to set up a Groq account and get an API key.
Visit the Groq website and create an account.
Visit the API keys page and generate a key using the “Create API Key” button.
Store the API key securely, as it will be required in our application; one common option is to keep it in an environment variable, as sketched below.
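For example, here is a minimal sketch of reading the key from an environment variable in Python (GROQ_API_KEY is simply the variable name we use here):

import os

# Read the API key from an environment variable instead of hard-coding it.
# GROQ_API_KEY is the variable name assumed here; set it in the shell beforehand.
api_key = os.environ.get("GROQ_API_KEY")
if api_key is None:
    raise RuntimeError("Set the GROQ_API_KEY environment variable first.")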
Install required libraries
Next, we must install the groq library to interact with the DeepSeek model. We can install it using the following command:
pip install groq
Import necessary modules
Import the Groq class from the library:
from groq import Groq
Send an API request
Initialize the client to interact with the Groq platform. This step uses the API key for authentication.
client = Groq(api_key="API_KEY")
Specify the required model from the available models in the model parameter, set the role and prompt in the messages parameter, and send the API request.
Note: Groq does not currently support the DeepSeek-V3 model on its platform. However, two distilled versions of the DeepSeek-R1 model are available: deepseek-r1-distill-qwen-32b and deepseek-r1-distill-llama-70b. We’ll learn about distilled models later in the course. Here, we use the deepseek-r1-distill-llama-70b model from the Groq platform for chat.
response = client.chat.completions.create(
    model="deepseek-r1-distill-llama-70b",
    messages=[
        {
            "role": "user",
            "content": "Identify the next number in the sequence: 2, 4, 8, 16, 32, ?"
        }
    ],
)
Let’s print the response variable to see the response generated by the DeepSeek-R1 model.
print(response.choices[0].message.content)
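DeepSeek-R1 distilled models usually wrap their step-by-step reasoning in <think>...</think> tags before the final answer. Below is a minimal sketch of separating the two parts, assuming the output contains at most one such block (the splitting logic is our own, not part of the Groq SDK):

# Split the R1 output into its reasoning and its final answer.
# Assumes the reasoning is wrapped in a single <think>...</think> block.
content = response.choices[0].message.content

if "</think>" in content:
    reasoning, answer = content.split("</think>", 1)
    reasoning = reasoning.replace("<think>", "").strip()
    answer = answer.strip()
else:
    reasoning, answer = "", content.strip()

print("Reasoning:", reasoning)
print("Answer:", answer)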
Code
Let’s put all the code together.
from groq import Groq

client = Groq(api_key="{{GROQ_KEY}}")

response = client.chat.completions.create(
    model="deepseek-r1-distill-llama-70b",
    messages=[
        {
            "role": "user",
            "content": "Identify the next number in the sequence: 2, 4, 8, 16, 32, ?"
        }
    ],
)

print(response.choices[0].message.content)
Chat with DeepSeek-R1
We have integrated DeepSeek-R1 in the coding playground below. Simply enter the prompt and interact with it to explore its reasoning capabilities.
Note: We are using the Groq API to use the DeepSeek-R1 model. Since the service tier has a token limit, you might occasionally encounter a rate limit error if too many tokens are processed in a short time.
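If a rate limit error does occur, one simple option is to catch it and retry after a short pause. The sketch below assumes the Groq SDK’s RateLimitError exception; the number of retries and the wait time are arbitrary choices for illustration:

import time
from groq import Groq, RateLimitError

client = Groq(api_key="API_KEY")

# Retry a few times with a short pause when the service reports a rate limit.
for attempt in range(3):
    try:
        response = client.chat.completions.create(
            model="deepseek-r1-distill-llama-70b",
            messages=[{"role": "user", "content": "Identify the next number in the sequence: 2, 4, 8, 16, 32, ?"}],
        )
        print(response.choices[0].message.content)
        break
    except RateLimitError:
        time.sleep(10)  # wait before retrying; adjust to the service tier's limits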
Click the "Run" button to chat with DeepSeek-R1.