What Is ReAct?

Learn what ReAct is and how it contributes to the agent-building process.

Picture yourself playing a video game. You’re navigating through a challenging level, thinking, “I need to jump over that obstacle, but first, let me defeat the enemy ahead.” You’re using your inner voice to guide your actions. This combination of what we do (our actions) with what we think (our reasoning) is a unique feature of human intelligence.

Humans have this incredible ability to talk to themselves in their heads. This “inner speech” isn’t just about keeping us company. It’s a powerful tool for planning, problem-solving, and staying organized. When gaming, you don’t just move from one task to another mindlessly. You think about what you’re doing, adjust your plans if needed (“I’m low on health, so I’ll avoid fights and look for a health pack.”), and seek out new information when you’re unsure (“How do I defeat this boss? Let me look up some strategies.”). This constant back-and-forth between acting and reasoning is what makes us so adaptable and quick learners. We can handle new situations, make decisions on the fly, and deal with uncertainty because we blend these two processes seamlessly.


This synergy between acting and reasoning is at the heart of human intelligence. It’s what allows us to not just perform tasks, but to strategize, adapt, and innovate.

How is ReAct useful for AI agents?

Now, let’s switch gears and think about AI agents. Imagine trying to make an AI agent that can not just perform actions but also think about them. Recent research has shown that this isn’t just a wild idea; it’s actually possible to combine verbal reasoning with interactive decision-making in autonomous systems. When properly prompted, large language models can perform complex reasoning tasks like solving arithmetic problems or understanding commonsense scenarios. This is what we call chain-of-thought reasoning. It’s like the AI is thinking through a problem step-by-step, much like we do.
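For instance, a chain-of-thought prompt simply nudges the model to write out its intermediate steps before answering. Here's a minimal sketch using the same OpenAI model class as the example later in this lesson; the prompt wording and the arithmetic question are illustrative, not part of ReAct itself:

from langchain_openai import OpenAI

# A chain-of-thought prompt: ask the model to reason step by step
# before giving a final answer. The exact wording is illustrative.
cot_prompt = (
    "Q: A game character has 40 health points, loses 25 in a fight, "
    "and then picks up a health pack worth 15. How much health is left?\n"
    "A: Let's think step by step."
)

llm = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
print(llm.invoke(cot_prompt))

The completion typically walks through the arithmetic (40 - 25 = 15, then 15 + 15 = 30) before stating the answer, which is exactly the step-by-step behavior that chain-of-thought prompting encourages.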

However, there’s a catch. This kind of reasoning is a bit like thinking inside a box. The model uses its internal representations to generate thoughts, but it isn’t grounded in the external world. This means it can't easily update its knowledge or react to new information, leading to issues like making up facts (hallucination) or making errors that pile up over time.


Now, imagine an AI agent that can plan and act based on what it “sees” and “hears.” These approaches often involve converting what the AI observes into text, using a language model to generate actions or plans, and then executing those actions. It’s like giving the AI a to-do list based on its observations. However, these systems often lack the ability to reason abstractly about high-level goals or maintain a working memory to support their actions. Here’s where ReAct comes into play.

Combining reasoning and acting in a synergistic manner allows AI agents to perform much better. Think of it as giving the AI a brain that not only thinks about what it’s doing but also adapts and plans like a human. Instead of just following a script, the AI can reason through problems, adapt to new situations, and make decisions on the fly. This combination helps AI agents handle complex tasks more effectively, just like we do when we play video games or navigate through our daily lives.

ReAct paradigm vs. ReAct prompting

It's important to understand that the ReAct paradigm and ReAct prompting are related but distinct concepts.

  • ReAct paradigm: This is a broad framework that combines reasoning and acting for AI agents. It involves agents thinking through tasks, making decisions based on their reasoning, and then performing actions. The ReAct paradigm ensures that agents can adapt to new information and dynamically change their actions based on their reasoning. It is implemented at the system or framework level, involving the design of agents, tasks, and processes.

  • ReAct prompting: This is a specific technique used to guide language models to follow the ReAct paradigm. ReAct prompting involves designing structured prompts that help the model reason through a problem step-by-step and then act based on its reasoning. This technique is often used in large language models like GPT or Gemini to simulate the ReAct paradigm within their responses.


Agents in frameworks like CrewAI (which we'll use in this course) typically follow the ReAct paradigm to combine reasoning and acting. However, they may not always use ReAct prompting unless specifically designed to do so. For instance, an agent might use a tool to gather information (acting) and then analyze that information to make a decision (reasoning), embodying the ReAct paradigm without explicitly using ReAct prompting techniques, as sketched below. While ReAct prompting is a useful technique for guiding language models, agents typically follow the broader ReAct paradigm to enhance their capabilities. That's why these frameworks are so easy to use!
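To make the distinction concrete, here is a minimal, hypothetical sketch of an agent step that follows the ReAct paradigm (act with a tool, then reason over the result) without using ReAct prompting. The llm and search_tool callables are placeholders, not part of CrewAI or LangChain:

def react_paradigm_step(question, llm, search_tool):
    """One acting-plus-reasoning cycle without ReAct prompting.

    `llm` is any callable that maps a prompt string to a completion;
    `search_tool` is any callable that returns search results as text.
    Both are hypothetical placeholders for illustration.
    """
    # Acting: gather external information with a tool.
    observation = search_tool(question)

    # Reasoning: ask the model to analyze the observation and decide on
    # an answer. No Thought/Action/Observation template is involved, yet
    # the agent still interleaves acting with reasoning.
    analysis_prompt = (
        f"Question: {question}\n"
        f"Search results: {observation}\n"
        "Based on these results, answer the question."
    )
    return llm(analysis_prompt)

A ReAct-prompted agent, by contrast, bakes the Thought/Action/Observation structure directly into the prompt, as you'll see in the LangChain example below.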

How do AI agents work in LangChain?

Before we dive into CrewAI and its fundamentals, let's explore an example of how agents work in LangChain. This will give you a solid foundation and help you see the similarities and differences when we transition to CrewAI. In this example, we'll create an AI agent that uses LangChain to perform a task. Specifically, the agent will answer the question, "What is Educative?" using a combination of reasoning and acting.

For this example, we will be using the OpenAI model and the Tavily Search API. Tavily is a search engine optimized for LLMs, designed to provide efficient, quick, and persistent search results. Tavily offers a generous free plan; an API key can be generated on the Tavily website. Here's the code for the LangChain example:

Educative Byte: The LangChain Hub has been updated to the LangSmith Hub. With this update, pulling prompts from the hub requires a LangSmith API key; this is a change from the previous setup, where an API key was unnecessary. If you run into an authentication error when pulling the prompt, add the LangSmith API key as shown in the code below.

# Import necessary modules from LangChain
from langchain import hub
from langchain.agents import AgentExecutor, create_react_agent
from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_openai import OpenAI
import os
os.environ["TAVILY_API_KEY"] = '{{TAVILY_API_KEY}}'
os.environ["OPENAI_API_KEY"] = '{{OPENAI_API_KEY}}'
os.environ["LANGCHAIN_API_KEY"] = '{{LANGCHAIN_API_KEY}}'
os.environ["LANGCHAIN_PROJECT"] = 'Project Name'
# Get the prompt to use - you can modify this!
prompt = hub.pull("hwchase17/react")
# Define the tools the agent will use
tools = [TavilySearchResults(max_results=1)]
# Choose the LLM (Large Language Model) to use
llm = OpenAI()
# Construct the ReAct agent using the LLM, tools, and prompt
agent = create_react_agent(llm, tools, prompt)
# Create an agent executor by passing in the agent and tools
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
# Invoke the agent executor with a specific input
response = agent_executor.invoke({"input": "What is Educative?"})
# Print the response
print(response)

In the above code:

  • Lines 2–6: We import necessary modules from LangChain. This includes the OpenAI model, AgentExecutor, and tools like TavilySearchResults.

  • Lines 7–10: We set the required API keys and the LangSmith project name as environment variables.

  • Line 12: We pull a predefined prompt from the LangChain hub made for ReAct agents. It looks like this:

Answer the following questions as best you can. You have access to the following tools:
{tools}
Use the following format:
Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of [{tool_names}]
Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question
Begin!
Question: {input}
Thought:{agent_scratchpad}

Note: This prompt guides the AI agent through a structured reasoning process using the ReAct framework. The agent starts with a question, thinks about the best approach, takes an action using a specified tool, observes the results, and repeats this cycle as needed. Finally, the agent combines its observations to deliver a well-reasoned final answer, ensuring a thorough and accurate response.

  • Line 14: We define the agent's tools. TavilySearchResults fetches search results, limited to one result for this example (we sanity-check this tool on its own right after this walkthrough).

  • Line 16: We specify the OpenAI model as the language model for our agent.

  • Line 18: Using the create_react_agent function, we combine the LLM, tools, and prompt to create our agent.

  • Line 20: The AgentExecutor is instantiated with the agent and tools, enabling it to perform actions and reason through tasks.

  • Lines 22–24: Finally, we invoke the agent with a specific input ("What is Educative?") and print the response.
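As mentioned above, you can sanity-check the Tavily tool on its own before wiring it into the agent (or while debugging). Here's a minimal sketch; it assumes the same TAVILY_API_KEY environment variable as the main example:

from langchain_community.tools.tavily_search import TavilySearchResults

# Call the search tool directly, outside of any agent, to confirm that
# the API key works and to see the raw results the agent receives.
search = TavilySearchResults(max_results=1)
print(search.invoke("What is Educative?"))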

We can run the main example in the accompanying Jupyter Notebook and see how AI agents in LangChain combine reasoning and acting. The agent doesn't just perform a search; it also reasons through the input to generate a coherent and informative response.
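Beyond reading the verbose log, you can also capture the agent's intermediate Thought/Action/Observation steps programmatically. Here's a minimal sketch that continues from the main example and reuses the agent and tools objects defined there:

# Ask the executor to return each (action, observation) pair alongside
# the final output so the reasoning trace can be inspected in code.
agent_executor = AgentExecutor(
    agent=agent,
    tools=tools,
    verbose=True,
    return_intermediate_steps=True,
)

result = agent_executor.invoke({"input": "What is Educative?"})
for action, observation in result["intermediate_steps"]:
    print(action.tool, action.tool_input)
    print(observation)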

The course notebooks are designed to preview code outputs in a pre-executed mode, addressing time limitations and API constraints that may prevent full execution within the course environment.


Now that we’ve seen how agents work in LangChain, let's move on to CrewAI.