
Classes of Intelligent Agents


- Overview

An intelligent agent (IA), also known as an AI agent, is a software program that can act autonomously to achieve goals. IAs are at the core of artificial intelligence (AI). They can:

  • Perceive their environment: IAs use sensors to perceive their environment and actuators to take actions.
  • Learn from their environment: IAs can learn from experience in their environment to better achieve their goals.
  • Make decisions: IAs can make decisions based on their environment, user input, and past experiences.
  • Perform services: IAs can perform services based on those decisions.


IAs can act on behalf of individuals or organizations. Examples of AI agents include driverless cars and the Siri virtual assistant. 
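
To make the perceive-decide-act cycle described above concrete, here is a minimal Python sketch of an agent loop. The thermostat scenario, field names, and temperature threshold are hypothetical illustrations, not a real API.

    # Minimal perceive-decide-act loop (hypothetical thermostat example).

    def perceive(environment):
        # Sensor: read the current temperature from the (simulated) environment.
        return environment["temperature"]

    def decide(percept):
        # Decision: map the current percept to an action.
        return "heat_on" if percept < 20.0 else "heat_off"

    def act(environment, action):
        # Actuator: a real agent would drive hardware; here we record the action.
        environment["heater"] = action

    env = {"temperature": 17.5, "heater": "heat_off"}
    act(env, decide(perceive(env)))
    print(env)  # {'temperature': 17.5, 'heater': 'heat_on'}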

Some benefits of intelligent virtual agents include: 

  • Reducing operational and hiring costs
  • Providing customer support
  • Personalizing experiences
  • Collecting and analyzing user data
  • Automating tasks
  • Analyzing customer sentiment in real-time

 

AI agents can be grouped into five classes based on their degree of perceived intelligence and capability: 

  • Simple Reflex Agents
  • Model-Based Reflex Agents
  • Goal-Based Agents
  • Utility-Based Agents
  • Learning Agents

 

- Simple Reflex Agents

Simple reflex agents are just that: simple. They cannot compute complex equations or solve complicated problems. They work only in environments that are fully observable in the current percept, ignoring any percept history. A smart light bulb set to turn on at 6 p.m. every night, for example, will not recognize that the days are longer in summer and the light is not needed until much later; it will continue to turn the light on at 6 p.m. because that is the rule it follows. Simple reflex agents are built on the condition-action rule. 

These agents simply decide actions based on their current percept. By identifying that certain actions are warranted in certain conditions, the agent can build a list of condition-action rules and use them to decide which actions to take.

Simple reflex agents ignore the rest of the percept history and act only on the basis of the current percept. The percept history is everything the agent has perceived to date. The agent function is based on the condition-action rule: a rule that maps a state (i.e., a condition) to an action. 

If the condition is true, the action is taken; otherwise it is not. This agent function succeeds only when the environment is fully observable. For simple reflex agents operating in partially observable environments, infinite loops are often unavoidable, although it may be possible to escape them if the agent can randomize its actions. Problems with simple reflex agents include:

  • Very limited intelligence.
  • No knowledge of the non-perceptual parts of the state.
  • The set of condition-action rules is usually too big to generate and store.
  • If the environment changes, the collection of rules must be updated.
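
As a rough sketch, the condition-action rule idea can be written as a list of (condition, action) pairs scanned against the current percept only. The percept field and the 6 p.m. rule mirror the light bulb example above and are assumptions for illustration.

    # Simple reflex agent: condition-action rules over the current percept only.

    RULES = [
        (lambda p: p["hour"] >= 18, "turn_light_on"),   # condition -> action
        (lambda p: p["hour"] < 18, "turn_light_off"),
    ]

    def simple_reflex_agent(percept):
        # No percept history is kept: only the current percept is consulted.
        for condition, action in RULES:
            if condition(percept):
                return action

    # The light goes on at 18:00 regardless of season or remaining daylight.
    print(simple_reflex_agent({"hour": 18}))  # turn_light_on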

- Model-Based Reflex Agents

A model-based reflex agent works by finding a rule whose condition matches the current situation. It can handle partially observable environments by maintaining a model of the world. The agent keeps track of an internal state, adjusted by each percept, that depends on the percept history. This state, stored inside the agent, describes the part of the world that cannot currently be seen. Updating the state requires information about two things (both appear in the sketch after this list): 

  • how the world evolves independently of the agent, and
  • how the agent's actions affect the world.
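
Here is a minimal sketch of such an internal state, assuming a hypothetical vacuum agent that senses only the square it is on but remembers which squares it has already cleaned:

    # Model-based reflex agent: internal state fills in the unseen world.

    class ModelBasedVacuum:
        def __init__(self):
            self.cleaned = set()  # internal state: squares we cannot currently see

        def update_state(self, percept):
            # Track how our past actions have affected the world.
            if percept["status"] == "clean":
                self.cleaned.add(percept["location"])

        def choose_action(self, percept):
            self.update_state(percept)
            if percept["status"] == "dirty":
                return "suck"
            # Use the model to pick a square not yet known to be clean.
            for square in ("A", "B", "C"):
                if square not in self.cleaned:
                    return "move_to_" + square
            return "stop"

    agent = ModelBasedVacuum()
    print(agent.choose_action({"location": "A", "status": "clean"}))  # move_to_B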

 

- Goal-Based Agents

These agents make decisions based on how far they currently are from their goal (a description of desirable situations). Every action is intended to reduce the agent's distance from the goal. This gives the agent a way to choose among multiple possibilities, selecting the one that reaches a goal state. The knowledge that supports its decisions is represented explicitly and can be modified, which makes these agents more flexible. They usually require search and planning, and their behavior can easily be changed.
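
A minimal sketch of the search step, assuming a tiny hypothetical map of rooms; the agent selects the sequence of moves that reaches the goal state:

    # Goal-based agent: breadth-first search for a path to the goal state.
    from collections import deque

    ROOMS = {"hall": ["kitchen", "study"], "kitchen": ["pantry"],
             "study": [], "pantry": []}

    def plan(start, goal):
        frontier = deque([[start]])
        while frontier:
            path = frontier.popleft()
            if path[-1] == goal:
                return path          # first path found is the shortest
            for nxt in ROOMS[path[-1]]:
                frontier.append(path + [nxt])
        return None                  # goal unreachable

    print(plan("hall", "pantry"))    # ['hall', 'kitchen', 'pantry']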

 

- Utility-Based Agents

Utility-based agents are used when there are multiple possible alternatives and the agent must decide which one is best. They choose actions based on a preference (utility) for each state. Sometimes achieving the desired goal is not enough: we may want a quicker, safer, or cheaper trip to reach a destination. The agent's "happiness" should be taken into consideration, and utility describes how "happy" the agent is. Because of the uncertainty in the world, a utility-based agent chooses the action that maximizes the expected utility. A utility function maps a state onto a real number that describes the associated degree of happiness.
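
A minimal sketch of expected-utility maximization for the trip example; the actions, outcome probabilities, and utility numbers are made up for illustration:

    # Utility-based agent: pick the action with the highest expected utility.

    # Each action maps to (probability, utility) pairs over its possible outcomes.
    ACTIONS = {
        "highway":    [(0.9, 8.0), (0.1, 1.0)],  # usually fast, occasionally bad
        "back_roads": [(1.0, 6.0)],              # slower but certain
    }

    def expected_utility(outcomes):
        # The utility function maps each outcome to a real number ("happiness");
        # under uncertainty we weight it by the outcome's probability.
        return sum(p * u for p, u in outcomes)

    def choose(actions):
        return max(actions, key=lambda a: expected_utility(actions[a]))

    print(choose(ACTIONS))  # highway: 0.9*8.0 + 0.1*1.0 = 7.3 > 6.0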

 

- Learning Agents

A learning agent is an agent that can learn from its past experiences. It starts by acting on basic knowledge and then adapts automatically through learning. 

A learning agent has four main conceptual components, wired together in the sketch after this list:

  • Learning element: responsible for making improvements by learning from the environment.
  • Critic: gives the learning element feedback on how well the agent is doing with respect to a fixed performance standard.
  • Performance element: responsible for selecting external actions.
  • Problem generator: responsible for suggesting actions that will lead to new and informative experiences.
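
A minimal sketch wiring the four components together on a toy two-choice task; the reward model and all names are hypothetical illustrations of the roles, not a standard API:

    # Learning agent: performance element, critic, learning element, problem generator.
    import random

    estimates = {"A": 0.0, "B": 0.0}  # knowledge the learning element improves
    counts = {"A": 0, "B": 0}

    def performance_element():
        # Selects the external action using the current knowledge.
        return max(estimates, key=estimates.get)

    def problem_generator():
        # Suggests exploratory actions that yield new, informative experiences.
        return random.choice(list(estimates))

    def critic(action):
        # Scores the action against a fixed performance standard (noisy reward).
        return random.gauss(0.7 if action == "B" else 0.3, 0.1)

    def learning_element(action, feedback):
        # Improves the estimates the performance element relies on.
        counts[action] += 1
        estimates[action] += (feedback - estimates[action]) / counts[action]

    for step in range(100):
        action = problem_generator() if step % 4 == 0 else performance_element()
        learning_element(action, critic(action))

    print(performance_element())  # almost always "B" after learning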



[More to come ...]
