
Foundations of Edge AI

[The Royal Palace And The Almudena Cathedral, Madrid, Spain - Interpixels/Shutterstock]


- Overview

Organizations in every industry are looking to increase automation to improve processes, efficiency and safety. To help them, computer programs need to recognize patterns and safely repeat tasks. But the world is unstructured, and the range of tasks humans perform covers an infinite range of situations that cannot be fully described by procedures and rules. 

Advances in edge AI provide opportunities for machines and devices to operate using the “wisdom” of human cognition, no matter where they are located. Smart applications powered by artificial intelligence (AI) learn to perform similar tasks in different situations, just like in real life.

AI solutions, especially those based on deep learning (DL) in the field of computer vision, are typically developed and trained in cloud environments that provide large amounts of computing power. 

Edge AI is the use of AI techniques in an edge computing environment. In some setups, AI models are trained directly on edge devices using their local data; only model updates are sent to a central server, not the raw device data, resolving many privacy and security issues.

Edge AI enables devices on the periphery of the network to process data locally, making real-time decisions without relying on Internet connections or centralized cloud servers. This increases computing speed and improves data privacy and security.

By processing information locally on the device, edge AI reduces the risk of data mishandling. In industries subject to data sovereignty regulations, edge AI can help maintain compliance by processing and storing data locally within designated jurisdictions.

 

- Understanding Edge AI

Edge AI is the integration of multiple technologies, including AI, Internet of Things (IoT), edge computing and embedded systems. Each technology plays a vital role in realizing intelligent processing and decision-making at the edge of the network. 

Edge AI involves using embedded algorithms to monitor the activity of remote systems and process data collected by sensors and other devices, covering inputs such as temperature, speech, faces, motion, images, proximity and other analog signals.

These remote systems can take many forms, including sensors, smartphones, IoT devices, drones, cameras, and even vehicles and smart devices. The data collected from these systems serves as input to edge AI algorithms, providing valuable information about the status of the system or its surrounding environment, enabling edge AI systems to quickly respond to changes or anomalies and understand their operating environment. 

These edge AI applications are impractical or even impossible to operate in a centralized cloud or enterprise data center environment due to issues related to cost, latency, bandwidth, security, and privacy.

Common examples of edge AI today include smartphones, wearable health-monitoring accessories (e.g., smart watches), real-time traffic processing in autonomous vehicles, connected devices and smart appliances.  

 

- How Does Edge AI Work?

Inference is far less computationally intensive than training, but latency matters more, since the model must deliver immediate results. Most inference is still performed in the cloud or on servers, but as the diversity of AI applications grows, centralized training and inference paradigms are being questioned.  

Today, the focus of AI edge processing is to move the inference part of the AI workflow to the device, keeping the data contained on the device. The main factors driving the choice of cloud or edge processing are privacy, security, cost, latency and bandwidth. 
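The split described above, training in the cloud and inference on the device, can be sketched in a few lines. This is a hypothetical illustration, not a specific product API: the weights are assumed to come from offline training, and on-device inference is then just local arithmetic, with no network round trip and no raw data leaving the device.

```python
import math

# Weights assumed to have been produced by offline (cloud) training
# and shipped to the device; the values here are illustrative.
WEIGHTS = [0.8, -0.5, 0.3]
BIAS = 0.1

def infer(features):
    """Run logistic-regression inference locally on the device."""
    z = BIAS + sum(w * x for w, x in zip(WEIGHTS, features))
    return 1.0 / (1.0 + math.exp(-z))  # probability of the positive class

# Inference on a fresh sensor reading happens entirely on-device.
score = infer([1.2, 0.7, -0.4])
```

Real deployments would use an optimized runtime rather than hand-written Python, but the division of labor is the same: heavy training stays in the cloud, lightweight inference runs at the edge.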

Edge AI can run on many types of hardware, including microcontrollers, central processing units (CPUs), and advanced neural processing devices. 

Edge AI devices use ML algorithms to monitor the device’s behavior and collect and process the device data. These algorithms can run directly at the edge of a given network, close to where the data and information needed to run the system are generated, such as an IoT device or machine equipped with an edge computing device.

This allows the device to make decisions, automatically correct problems, and predict future performance. 

Here's how Edge AI works:

  • Data acquisition: Sensor devices collect raw data from the environment.
  • Data preprocessing: The raw data is preprocessed locally to remove noise and extract relevant features.
  • Edge AI inference: The preprocessed data is fed into the Edge AI model on the device.
  • Local decision-making: The Edge AI model analyzes the data and makes decisions or predictions.
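The four steps above can be sketched as a minimal loop. All names, readings and thresholds below are illustrative assumptions, and the thresholding stands in for a real trained model:

```python
def acquire():
    """Data acquisition: read raw samples from a sensor (stubbed here)."""
    return [21.0, 21.2, 35.9, 21.1]  # e.g. temperature readings in Celsius

def preprocess(raw):
    """Data preprocessing: smooth sensor noise with a 3-point moving average."""
    smoothed = []
    for i in range(len(raw)):
        window = raw[max(0, i - 1):i + 2]
        smoothed.append(sum(window) / len(window))
    return smoothed

def infer(features, threshold=25.0):
    """Edge AI inference: flag readings above a threshold (model stand-in)."""
    return [x > threshold for x in features]

def decide(flags):
    """Local decision-making: act immediately, without the cloud."""
    return "raise_alert" if any(flags) else "normal"

action = decide(infer(preprocess(acquire())))
```

The key property is that every stage runs on the device itself; the cloud is never in the loop between sensing and acting.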

 

- Micro-Data Centers

Edge computing plays a vital role in the efficient implementation of several embedded applications such as artificial intelligence (AI), machine learning (ML), deep learning (DL), and the Internet of Things (IoT). However, today's data centers cannot meet the requirements of these types of applications. This is where the Edge-Micro Data Center (EMDC) comes into play.

By moving intelligence closer to the embedded system (i.e., the edge), it is possible to create systems with a high degree of autonomy and decision-making capabilities. In this way, reliance on the cloud (typically centralized systems) is reduced, resulting in benefits in terms of energy savings, reduced latency, and lower costs.

Self-driving cars, robotic surgery, augmented reality in manufacturing, and drones are a few examples of early applications of edge computing. Current data centers with "cloud services" (hyperscale, mega, and colocation) cannot meet the requirements of these applications, which therefore require complementary edge infrastructure such as EMDC and "edge services".

This edge infrastructure, hardware, and edge services must meet the following requirements:

  • High computational speed, requiring data to be processed as locally as possible (i.e., at the edge)
  • High elasticity
  • High efficiency

 

[Bruges, Belgium]

- Edge AI Is The Next Wave of AI

Edge AI is the next wave of AI, removing the dependency on cloud systems. It processes information closer to the users and devices that require it, rather than sending that data to central cloud locations for processing.

Over the last few years, AI implementations have changed in companies around the world. As enterprise-wide efforts came to dominate, cloud computing became an essential component of the AI evolution. 

As customers spend more time on their devices, businesses increasingly realize the need to bring essential computation onto the device to serve more customers. This is why the edge computing market will continue to accelerate over the next few years.

Today, depending on the size and scale of the edge site and the specific systems used, it is easier than ever to run AI and ML workloads and perform analytics at the edge. 

Although edge site computing systems are much smaller than those in central data centers, they have matured and can now successfully run many workloads, thanks to the tremendous growth in processing power of today's x86 commodity servers.

  

- Distributed Edge Computing and Edge AI

Distributed edge computing and edge AI are two popular paradigms. 

Distributed edge computing delegates computational workloads to autonomous devices located at the data source. It goes a step beyond edge computing, which simply moves computation and data storage closer to the data source. 

Edge AI uses AI techniques to enable a data-gathering device in the field to provide actionable intelligence. Edge AI chips typically have three main parts: 

  • Scalar engines: Run Linux-class applications and safety-critical code
  • Adaptable engines: Process data from sensors
  • Intelligence engines: Run common edge workloads such as AI

 

- The Benefits of Moving AI to the Edge

There are several drivers for moving AI processing to the edge. In an edge AI environment, AI computations are done at the edge of a network, usually on the device where the data is created. 

This is different from using cloud infrastructure, where AI computations are done in a centralized facility. Here are some benefits of edge AI: 

  • Reduced latency: Processing data locally avoids network round trips, reducing latency and bandwidth requirements
  • Faster analysis: Edge devices perform computations locally, which can lead to faster analysis
  • Real-time data processing: Edge AI enables real-time data processing and analysis without reliance on cloud infrastructure
  • Reduced reliance on external resources: Edge devices can reduce reliance on external resources
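The latency benefit in the list above can be made concrete with back-of-the-envelope arithmetic. The numbers below are purely illustrative assumptions: a cloud call pays the network round trip on top of server inference, while an edge device pays only its (often slower) local inference time.

```python
def cloud_latency_ms(network_rtt_ms, server_infer_ms):
    """Total user-visible latency when inference runs in the cloud."""
    return network_rtt_ms + server_infer_ms

def edge_latency_ms(device_infer_ms):
    """Total latency when inference runs on the device: no network hop."""
    return device_infer_ms

# Illustrative figures: a fast server behind a typical mobile round trip
# versus a slower on-device model with zero network cost.
cloud = cloud_latency_ms(network_rtt_ms=80.0, server_infer_ms=5.0)  # 85.0 ms
edge = edge_latency_ms(device_infer_ms=30.0)                        # 30.0 ms
```

Even when the edge processor is several times slower than a data-center GPU, removing the network hop can still make the edge path faster end to end.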

 

Edge AI doesn't require constant connectivity or integration with external systems, allowing data to be processed on the device in real time.

Here are some examples of how edge AI can be used: 

  • Security cameras: Edge AI can detect suspicious activity in real time.  
  • Smartwatches: Edge AI can monitor vital signs like heart rate and oxygen levels.  
  • Medical imaging: Edge AI can analyze medical scans like X-rays and MRIs to provide instant results.  
  • Predictive maintenance: Edge AI can use sensor data to detect anomalies and predict when a machine will fail.  
  • Energy generation: Edge AI can combine historical data, weather patterns, and other information to create simulations that help manage energy resources.
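The predictive-maintenance bullet above can be sketched with a simple statistical check. This is a hedged illustration, not a production method: the vibration values and the 3-sigma rule are assumptions, and the baseline is a window recorded while the machine was known to be healthy.

```python
import statistics

def detect_anomalies(readings, baseline, sigma=3.0):
    """Flag indices of readings that deviate more than `sigma` standard
    deviations from a baseline window recorded during healthy operation."""
    mean = statistics.fmean(baseline)
    std = statistics.pstdev(baseline)
    return [i for i, x in enumerate(readings) if abs(x - mean) > sigma * std]

# Illustrative vibration data: a stable healthy baseline, then a spike.
healthy = [1.0, 1.1, 0.9, 1.0, 1.2, 0.8, 1.1, 1.0]
latest = [1.0, 1.05, 9.5]
anomalies = detect_anomalies(latest, healthy)  # flags the spike at index 2
```

A real system would use a trained model rather than a fixed sigma rule, but the shape is the same: the check runs on the device next to the sensor, so a developing fault can be flagged without waiting on a cloud round trip.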

 

 


