
AI Environments

[Figure: An AI Agent Interacting with an Environment]



- Overview

An environment in AI is everything that surrounds the agent. The agent takes input from the environment through sensors and delivers output to the environment through actuators.

An AI environment is the physical or digital space where an AI agent operates, perceives, acts, and learns. The environment provides stimuli and feedback that shape an agent's decisions and behavior.

Understanding the nature of the environment is important when solving problems with AI. For example, a chess bot's environment is a chessboard, while a room cleaner robot's environment is a room.

 

- Nature of AI Environments

When designing AI solutions, we spend a lot of time focusing on aspects such as the nature of the learning algorithms [e.g., supervised, unsupervised, semi-supervised] or the characteristics of the data [e.g., labeled, unlabeled].

However, little attention is often paid to the nature of the environment in which the AI solution operates. As it turns out, the characteristics of the environment are among the key factors in determining the right models for an AI solution.

An environment is everything in the world that surrounds the agent but is not part of the agent itself. It can be described as the situation in which the agent is present: the place where the agent lives and operates, and which provides the agent with something to sense and act upon.

The agent takes input from the environment through sensors and delivers output to the environment through actuators. For example, when programming a chess bot, the environment is a chessboard; when building a room cleaner robot, the environment is a room. Each environment has its own properties, and agents should be designed so that they can explore environment states using their sensors and act accordingly using their actuators.
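
To make the sense-act loop concrete, here is a minimal Python sketch. The RoomEnvironment class, its sense() and act() methods, and the cleaning_agent function are hypothetical names invented for illustration, not part of any particular library.

    # Minimal sketch of the agent-environment loop (hypothetical classes for illustration).

    class RoomEnvironment:
        """A toy 'room cleaner' environment: a row of tiles that may be dirty."""

        def __init__(self, tiles):
            self.tiles = list(tiles)        # True = dirty, False = clean
            self.position = 0               # where the agent currently is

        def sense(self):
            """What the agent's sensors report: its position and whether that tile is dirty."""
            return self.position, self.tiles[self.position]

        def act(self, action):
            """Apply the agent's actuator command to the environment."""
            if action == "suck":
                self.tiles[self.position] = False
            elif action == "right" and self.position < len(self.tiles) - 1:
                self.position += 1


    def cleaning_agent(percept):
        """A simple reflex agent: clean if dirty, otherwise move right."""
        _, dirty = percept
        return "suck" if dirty else "right"


    env = RoomEnvironment([True, False, True])
    for _ in range(6):                      # run the sense-act loop a few steps
        percept = env.sense()               # input from the environment via sensors
        action = cleaning_agent(percept)    # agent decides
        env.act(action)                     # output delivered via actuators

    print(env.tiles)                        # expected: [False, False, False]

Running the loop cleans the dirty tiles: the agent repeatedly reads a percept from its sensors, decides on an action, and sends that action to its actuators.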

From that perspective, there are several categories we use to group AI problems based on the nature of the environment.  

 

- Complete vs. Incomplete

Complete AI environments are those in which, at any given time, we have enough information to complete a branch of the problem. Chess is a classic example of a complete AI environment. Poker, on the other hand, is an incomplete environment: AI strategies cannot anticipate many moves in advance and instead focus on finding a good "equilibrium" at any given time.
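
As a rough illustration, the hypothetical sketch below contrasts evaluating a fully known branch of play with estimating an expectation over hidden information; the function names and values are invented for this example.

    import random

    # Hypothetical sketch: with complete information every branch can be evaluated
    # exactly; with incomplete information (e.g. an opponent's hidden cards) the
    # agent can only average over the possibilities it cannot see.

    def evaluate_complete(branch_values):
        # Chess-like: every outcome of a line of play is known, so pick the best one.
        return max(branch_values)

    def evaluate_incomplete(visible_value, hidden_scenarios):
        # Poker-like: the true value depends on hidden information, so estimate an expectation.
        samples = [visible_value + random.choice(hidden_scenarios) for _ in range(1000)]
        return sum(samples) / len(samples)

    print(evaluate_complete([0.2, 0.7, -0.1]))
    print(evaluate_incomplete(0.3, [-0.5, 0.0, 0.4]))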

 

- Fully Observable vs. Partially Observable

In a fully observable environment, an agent's sensors give it access to the complete state of the environment at each point in time; in a partially observable environment, they do not. A fully observable AI environment provides all the information required to complete the target task. Image recognition, for instance, operates in fully observable domains. Partially observable environments, such as those encountered in self-driving vehicle scenarios, require the agent to solve problems with only partial information.
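
The hypothetical sketch below contrasts the two cases: a fully observable environment hands the agent its complete state, while a partially observable one exposes only what range-limited, noisy sensors can reach. All names and numbers are invented for illustration.

    import random

    # Hypothetical illustration: full state vs. the limited view an agent's sensors provide.

    full_state = {
        "own_position": (12.0, 4.5),
        "other_vehicles": [(15.2, 4.4), (30.1, 7.9)],
        "traffic_light": "red",
        "pedestrian_intent": "crossing",     # something no sensor can read directly
    }

    def fully_observable_percept(state):
        # The agent sees the complete state (like a chess program seeing the whole board).
        return dict(state)

    def partially_observable_percept(state, sensor_range=10.0):
        # A self-driving car only perceives what its noisy, range-limited sensors return.
        own = state["own_position"]
        visible = [v for v in state["other_vehicles"]
                   if abs(v[0] - own[0]) <= sensor_range]
        return {
            "own_position": own,
            "nearby_vehicles": [(x + random.gauss(0, 0.1), y) for x, y in visible],
            "traffic_light": state["traffic_light"],
            # pedestrian intent is hidden: it must be estimated, not observed
        }

    print(partially_observable_percept(full_state))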

 

- Deterministic vs. Stochastic

In a deterministic environment, the next state is completely determined by the current state and the action executed by the agent: the outcome can be predicted from a specific state, so uncertainty can be ignored. A stochastic environment is random in nature and cannot be completely determined. For example, the 8-puzzle has a deterministic environment, but a self-driving car does not. Most real-world AI environments are not deterministic; instead, they are stochastic, with self-driving vehicles being a classic example.
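
A minimal sketch of the difference, using invented transition functions: the deterministic step always yields the same next state, while the stochastic step sometimes "slips."

    import random

    # Hypothetical transition functions illustrating deterministic vs. stochastic environments.

    def deterministic_step(position, action):
        """8-puzzle style: the same state and action always give the same next state."""
        moves = {"up": -3, "down": 3, "left": -1, "right": 1}
        return position + moves[action]

    def stochastic_step(position, action, slip_prob=0.2):
        """Driving style: with some probability the intended move does not happen as planned."""
        intended = deterministic_step(position, action)
        if random.random() < slip_prob:
            return position            # slipped: the action had no effect this time
        return intended

    print(deterministic_step(4, "up"))     # always 1
    print(stochastic_step(4, "up"))        # usually 1, sometimes 4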

 

- Static vs. Dynamic

A static environment does not change while an agent is deliberating; a dynamic environment does. Backgammon has a static environment, while a Roomba operates in a dynamic one. Static AI environments rely on data and knowledge sources that do not change frequently over time; speech analysis is a problem that operates in static AI environments. In contrast, dynamic AI environments, such as the vision systems in drones, deal with data sources that change quite frequently.
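
The hypothetical sketch below shows why dynamic environments are harder: the world keeps changing while the agent is still computing its plan. The DynamicWorld class and the 2 m/s drift are invented for illustration.

    import time

    # Hypothetical sketch: in a dynamic environment the world keeps changing while
    # the agent is still deliberating; in a static one it does not.

    class DynamicWorld:
        def __init__(self):
            self.obstacle_position = 0.0

        def advance(self, seconds):
            # The world moves on its own, e.g. other vehicles or drones keep moving.
            self.obstacle_position += 2.0 * seconds    # obstacle drifts at 2 m/s


    world = DynamicWorld()

    deliberation_start = time.time()
    # ... imagine an expensive planning computation happening here ...
    time.sleep(0.1)
    elapsed = time.time() - deliberation_start

    world.advance(elapsed)    # by the time the plan is ready, the world has changed
    print(f"Obstacle moved ~{world.obstacle_position:.2f} m while the agent was thinking")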

 

- Discrete vs. Continuous

A limited number of distinct, clearly defined percepts and actions constitutes a discrete environment. Discrete AI environments are those in which a finite [although arbitrarily large] set of possibilities can drive the final outcome of the task; chess is classified as a discrete AI problem. Continuous AI environments, by contrast, involve percepts and actions drawn from continuous, rapidly changing ranges of values; vision systems in drones or self-driving cars operate in continuous AI environments.
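
A small illustrative sketch of the two kinds of action space, with invented moves and ranges: a discrete agent chooses from a finite set, while a continuous agent picks real-valued controls.

    import random

    # Hypothetical illustration of action spaces: a finite set of moves vs. a continuum.

    # Discrete: a chess-like agent chooses from a finite (if large) set of legal moves.
    discrete_actions = ["e2e4", "d2d4", "g1f3", "c2c4"]
    chosen_move = random.choice(discrete_actions)

    # Continuous: a self-driving car picks steering and throttle from real-valued ranges.
    steering_angle = random.uniform(-30.0, 30.0)   # degrees
    throttle = random.uniform(0.0, 1.0)            # fraction of full power

    print(chosen_move, steering_angle, throttle)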

 

- Single-agent vs. Multi-agent

An agent operating entirely by itself is in a single-agent environment. If other agents are involved, it is a multi-agent environment. Self-driving cars operate in a multi-agent environment.
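
The hypothetical sketch below shows why multi-agent settings are harder: the outcome of one agent's action depends on what the other agents do at the same time. The merge scenario and names are invented for illustration.

    # Hypothetical sketch: in a multi-agent environment the outcome for one agent
    # depends on the simultaneous actions of the others (e.g. several cars at a merge).

    def merge_outcome(actions):
        """actions maps agent name -> 'go' or 'yield'."""
        goers = [name for name, a in actions.items() if a == "go"]
        if len(goers) > 1:
            return "conflict"          # both tried to merge: each must react to the other
        return f"{goers[0]} merges" if goers else "nobody moves"

    # Single-agent: only one decision maker, no interaction to reason about.
    print(merge_outcome({"car_a": "go"}))

    # Multi-agent: car_a's best choice depends on what car_b does.
    print(merge_outcome({"car_a": "go", "car_b": "go"}))
    print(merge_outcome({"car_a": "go", "car_b": "yield"}))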

 

- Episodic vs. Non-episodic

In an episodic environment, each episode consists of the agent perceiving and then acting, and the quality of its action depends only on the episode itself. Subsequent episodes do not depend on the actions taken in previous episodes. Episodic environments are much simpler because the agent does not need to think ahead. In a non-episodic (sequential) environment, by contrast, the current decision affects future decisions.
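
A minimal sketch of an episodic task, with an invented part-sorting example: each episode is a single percept-action pair, and nothing carries over to the next episode.

    import random

    # Hypothetical sketch of an episodic task: each episode is one percept-action pair,
    # judged on its own, with nothing carried over to the next episode.

    def sort_part(weight_grams):
        """E.g. a pick-and-place robot sorting one part per episode."""
        return "reject_bin" if weight_grams > 105 else "accept_bin"

    for episode in range(5):
        weight = random.uniform(90, 120)     # a fresh, independent percept
        bin_chosen = sort_part(weight)       # action quality depends only on this episode
        print(f"episode {episode}: {weight:.1f} g -> {bin_chosen}")
        # no state is kept between episodes, so the agent never needs to plan ahead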

 

- Known vs. Unknown

In a known environment, the outcomes of all possible actions are given. In an unknown environment, by contrast, the agent has to learn how the environment works before it can make good decisions.
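
The hypothetical sketch below contrasts the two cases: with a known environment the agent simply looks up the best action, while in an unknown environment it must estimate action outcomes from repeated trials. The two-button setup and payoffs are invented for illustration.

    import random

    # Hypothetical sketch: in a known environment the outcome of every action is given
    # up front; in an unknown one the agent must estimate it from experience.

    known_outcomes = {"press_red": -1, "press_green": +1}     # given to the agent

    def act_in_known_env(outcomes):
        return max(outcomes, key=outcomes.get)                # just look up the best action

    def act_in_unknown_env(trials=50):
        # The agent experiments to learn average payoffs it was never told.
        estimates = {"press_red": 0.0, "press_green": 0.0}
        counts = {a: 0 for a in estimates}
        for _ in range(trials):
            action = random.choice(list(estimates))
            reward = known_outcomes[action] + random.gauss(0, 0.5)   # hidden from the agent
            counts[action] += 1
            estimates[action] += (reward - estimates[action]) / counts[action]
        return max(estimates, key=estimates.get)

    print(act_in_known_env(known_outcomes))   # press_green
    print(act_in_unknown_env())               # usually press_green, learned from trials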


 
[More to come ...]