Agents and Intelligent Systems
Learn about the design, functions, and types of AI agents and discover how they are the building blocks for creating advanced intelligent systems.
We are interested in building intelligent systems that mimic human behavior. Let's study the key elements for bringing such systems to life.
Agents
An agent is a program or machine that can sense what's happening around it and make decisions to do something useful. It tries to achieve a goal by sensing its surroundings and then acting in a way that will get it closer to that goal. In other words, an agent can sense the environment using sensors and act accordingly through actuators.
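To make the sense-act loop concrete before we get to the main example, here is a minimal Python sketch of a thermostat-style agent; the class name, the 20-degree threshold, and the dictionary environment are illustrative assumptions, not part of any standard library.

```python
# Minimal sketch of the sense-act loop (illustrative names and values).

class ThermostatAgent:
    """Toy agent: senses the room temperature, acts on a heater switch."""

    def sense(self, environment):
        # Sensor: read the current temperature from the environment.
        return environment["temperature"]

    def act(self, temperature):
        # Actuator decision: switch the heater on below an assumed 20-degree threshold.
        return "heater on" if temperature < 20 else "heater off"

agent = ThermostatAgent()
room = {"temperature": 18}          # a stand-in for the real environment
percept = agent.sense(room)
print(agent.act(percept))           # -> heater on
```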
The table below shows some interesting examples of agents with sensors and actuators.
| Agent type | Sensors | Actuators |
|---|---|---|
| Human | Eyes, ears, nose, skin, tongue | Hands, legs, mouth, vocal cords |
| Bird | Eyes, ears, beak, feathers | Wings, legs, beak |
| Robot | Cameras, infrared range finders, ultrasonic sensors, touch sensors | Motors, robotic arms, wheels |
| Autonomous vehicle | Radar, cameras, GPS, ultrasonic sensors | Steering mechanism, brakes, throttle control |
Agent function
Agents receive input through sensors; we call this input "percept." A series of inputs perceived by an agent through its sensors is called percept history. Typically, an agent's decision on which action to take at any moment, using its actuators, can be based on the entire percept sequence up to that point. If we can define the agent's actions for every possible percept sequence, we have essentially described the agent's behavior completely. Mathematically, an agent's behavior is represented by the agent function, which maps each percept sequence to a corresponding action that the agent performs using its actuators.
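In the standard notation (the symbols below are conventional shorthand, not something defined elsewhere in this lesson), if $\mathcal{P}^{*}$ denotes the set of all finite percept sequences and $\mathcal{A}$ the set of available actions, the agent function is a mapping

$$
f : \mathcal{P}^{*} \rightarrow \mathcal{A},
$$

so that $f([p_1, p_2, \ldots, p_t])$ is the action the agent performs after perceiving the sequence $p_1, p_2, \ldots, p_t$.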
Example: Light control agent
Let's understand all these concepts with the aid of an example.
Consider an automated light control system that turns a room's light on or off based on motion.

| Component | In this example |
|---|---|
| Environment | A room with a controllable light and people moving in and out of it |
| Actions | Turn the light on, turn the light off, do nothing |
| Sensors | A motion detector and a light-status sensor |
| Percepts | The current light status (On/Off) and motion status (Detected/Not Detected) |
| Actuators | The switch that turns the light on or off |
Here’s a simple table for our automated light control system:
| Percept sequence | Action |
|---|---|
| [(Light: Off, Motion: Detected)] | Turn light on |
| [(Light: On, Motion: Not Detected)] | Turn light off |
| [(Light: Off, Motion: Not Detected)] | Do nothing |
| [(Light: On, Motion: Detected)] | Do nothing |
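One direct way to encode this table in code is as a lookup from percept to action. The sketch below is only an illustration of that idea (the dictionary and function names are made up here); the lesson's own program appears further down.

```python
# A literal lookup-table encoding of the percept-to-action table above.
# Keys are (light status, motion status) percepts; values are actions.
AGENT_TABLE = {
    ("Off", "Detected"): "Turn light on",
    ("On", "Not Detected"): "Turn light off",
    ("Off", "Not Detected"): "Do nothing",
    ("On", "Detected"): "Do nothing",
}

def table_driven_agent(light_status, motion_status):
    # Look the current percept up in the table and return the mapped action.
    return AGENT_TABLE[(light_status, motion_status)]

print(table_driven_agent("Off", "Detected"))  # -> Turn light on
```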
As you add more percept variables or increase the number of possible values for each variable, the number of possible percept combinations grows exponentially: two binary variables give 2² = 4 combinations, but ten would already give 2¹⁰ = 1,024. This in turn increases the complexity of defining the agent function.
Mathematical representation:
Here is the agent function designed specifically for this scenario. Given a percept $p = (\text{light status}, \text{motion status})$, the agent function $f$ maps it to an action as follows:

$$
f(p) =
\begin{cases}
\text{Turn light on}, & \text{if } p = (\text{Off}, \text{Detected}) \\
\text{Turn light off}, & \text{if } p = (\text{On}, \text{Not Detected}) \\
\text{Do nothing}, & \text{otherwise.}
\end{cases}
$$
Program:
The following is a computer program for this problem; the assumed inputs correspond to the case where the agent senses motion while the light is already on.
Try out different input combinations to check whether the agent behaves as expected.
```python
# Define the possible percepts and actions as string constants
LIGHT_ON = "On"
LIGHT_OFF = "Off"
MOTION_DETECTED = "Detected"
MOTION_NOT_DETECTED = "Not Detected"

TURN_LIGHT_ON = "Turn Light On"
TURN_LIGHT_OFF = "Turn Light Off"
DO_NOTHING = "Do Nothing"

# Define the agent function
def agent_function(light_status, motion_status):
    if light_status == LIGHT_OFF and motion_status == MOTION_DETECTED:
        return TURN_LIGHT_ON
    elif light_status == LIGHT_ON and motion_status == MOTION_NOT_DETECTED:
        return TURN_LIGHT_OFF
    else:
        return DO_NOTHING

# Assumed input for the light status and motion status
light_status = "On"
motion_status = "Detected"

# Validate inputs
if light_status not in [LIGHT_ON, LIGHT_OFF]:
    print("Invalid light status. Please enter 'On' or 'Off'.")
elif motion_status not in [MOTION_DETECTED, MOTION_NOT_DETECTED]:
    print("Invalid motion status. Please enter 'Detected' or 'Not Detected'.")
else:
    # Execute the agent function and print the result
    action = agent_function(light_status, motion_status)
    print(f"Percept: (Light: {light_status}, Motion: {motion_status}) -> Action: {action}")
```
The above code defines a simple automated agent for controlling the light based on its current status and motion detection. It uses constants to represent the possible percepts (`LIGHT_ON`, `LIGHT_OFF`, `MOTION_DETECTED`, `MOTION_NOT_DETECTED`) and actions (`TURN_LIGHT_ON`, `TURN_LIGHT_OFF`, `DO_NOTHING`). The `agent_function` decides the appropriate action: if the light is off and motion is detected, it turns the light on; if the light is on and no motion is detected, it turns the light off; otherwise, it does nothing.
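As a quick check (assuming the constants and `agent_function` from the listing above are already defined), you can enumerate all four percept combinations and confirm that the program reproduces the percept-action table:

```python
# Enumerate every percept combination and print the action the agent chooses.
# Assumes LIGHT_ON, LIGHT_OFF, MOTION_DETECTED, MOTION_NOT_DETECTED and
# agent_function are defined as in the listing above.
for light_status in [LIGHT_ON, LIGHT_OFF]:
    for motion_status in [MOTION_DETECTED, MOTION_NOT_DETECTED]:
        action = agent_function(light_status, motion_status)
        print(f"Percept: (Light: {light_status}, Motion: {motion_status}) -> Action: {action}")
```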
Types of agents
AI agents are decision-makers that interact with their environment. They're categorized by how they make decisions and handle different situations. Here’s a look at the different types of agents and how they function.
Simple reflex agents: These agents act based on the current situation without considering past events. They follow basic "if-then" rules; see the sketch after this list.
Model-based reflex agents: These agents use a model of the environment to handle partially visible information and make informed decisions.
Goal-based agents: These agents act to achieve specific goals, evaluating the best course of action to reach their objectives.
Utility-based agents: These agents aim not just to achieve goals but to do so optimally, evaluating different options based on their desirability.
Learning agents: These agents improve their performance by learning from past experiences and adapting to new situations over time.
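To make the first two categories more concrete, here is a rough sketch contrasting a simple reflex rule with a model-based variant that keeps internal state. The vacuum-style cleaning scenario and every name in it are illustrative assumptions, not a standard implementation.

```python
# Illustrative sketches only; the scenario and all names are assumptions.

def simple_reflex_agent(percept):
    # Reacts to the current percept only, via a fixed if-then rule.
    location, is_dirty = percept
    return "Suck" if is_dirty else "Move"

class ModelBasedReflexAgent:
    """Keeps an internal model (which locations are already clean) so it can
    act sensibly even when the environment is only partially observable."""

    def __init__(self):
        self.cleaned = set()

    def act(self, percept):
        location, is_dirty = percept
        if is_dirty:
            self.cleaned.add(location)
            return "Suck"
        if location in self.cleaned:
            return "Move"    # already cleaned here, keep exploring
        return "Check"       # unknown location: inspect it first

print(simple_reflex_agent(("A", True)))   # -> Suck
agent = ModelBasedReflexAgent()
print(agent.act(("A", True)))             # -> Suck
print(agent.act(("A", False)))            # -> Move
```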
Can you find the right agent for the job?
Now that you have a glimpse of how different AI agents operate, each tailored to specific tasks and goals, let's dive into some real-world scenarios where these agents could be applied. Your challenge is to match each scenario with the right type of AI agent!
Each of the five agent types described above matches exactly one scenario below.

1. An AI program plays chess against a human opponent, planning several moves ahead to ensure victory. It continually evaluates its position, anticipates the opponent's strategies, and selects the best possible move to achieve checkmate.
2. An email application automatically sorts incoming messages into "Inbox" or "Spam" based solely on the content of each email. It doesn't consider the history of past emails or adapt its behavior based on previous actions.
3. A language learning app provides personalized lessons to users. It tracks their progress and mistakes over time, adjusting the difficulty of exercises and offering tailored feedback to improve their learning experience.
4. An online shopping assistant suggests products based on a user's browsing history, preferences, and past purchases. It calculates a "happiness score" for each potential recommendation to ensure the user is most likely to enjoy or buy the suggested items.
5. A self-driving car navigates a busy city street, using its sensors to detect other vehicles, pedestrians, and obstacles. It also relies on an internal map and learns from past routes to handle unexpected situations, like road closures or heavy traffic.