
Computer Vision, AI, and VR/AR

[Image: California Institute of Technology - Los Angeles Times]

- Overview

Augmented reality (AR) – A blend of the physical and digital environments: technology that overlays data on physical reality using a fusion of sensor data from cameras, accelerometers, and other sources.

Virtual reality (VR) – A computer-generated 3D simulation that lets a person interact with an entirely digital environment.

The AR/VR field has traditionally relied on classical, non-learning computer vision techniques to drive innovation. But many businesses are discovering that these technologies and AI have a deep, complementary connection. AI excels at many tasks that benefit AR/VR: it can track objects, build detailed models of the 3D world, recognize the features in those models, and make judgments about them.

Deep learning models are particularly useful here: they can identify vertical and horizontal planes, track an object's movement and position, and estimate object depth, among other capabilities. In other words, deep learning can help an AR/VR system interpret complex environments. An auto mechanic could, in principle, use an AI-powered AR system to view a vehicle's engine and be shown which parts need repair and how to fix them.
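The plane detection mentioned above is learned by deep models in practice, but the underlying geometry can be illustrated with a classical baseline: a least-squares fit of a plane to 3D points, as an AR system might do over a sensed point cloud. This is a minimal sketch, not a production method; the synthetic points and coefficients are invented for illustration.

```python
import numpy as np

def fit_plane(points):
    """Least-squares fit of a plane z = a*x + b*y + c to Nx3 points.

    Returns (a, b, c). A real AR system would use a learned model or
    RANSAC over noisy sensor data; this shows only the core geometry.
    """
    A = np.c_[points[:, 0], points[:, 1], np.ones(len(points))]
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    return coeffs  # (a, b, c)

# Synthetic points on the plane z = 0.5x - 0.2y + 1.0, with mild noise
rng = np.random.default_rng(0)
xy = rng.uniform(-1, 1, size=(200, 2))
z = 0.5 * xy[:, 0] - 0.2 * xy[:, 1] + 1.0 + rng.normal(0, 0.01, 200)
a, b, c = fit_plane(np.c_[xy, z])
```

Given clean enough data, the recovered coefficients closely match the generating plane; deep models earn their keep when the scene is cluttered and the points are far noisier than this.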

As a result of these complementary characteristics, AI is starting to replace traditional computer vision methods in AR/VR, and a number of industry leaders project that AI will help drive adoption of immersive technology in both consumer and business segments. Specifically, AI can enhance AR/VR experiences by enabling more realistic models and giving people richer ways to interact with a scene.

This powerful partnership of AR/VR and AI is due in part to advances in deep learning that apply to 3D model building, increased availability of data and data storage options like the cloud, and increasing levels of computing power. Regardless of the reasons, the integration is expected to provide exciting opportunities across many industries.

 

- Image Processing, Computer Vision, and Neural Networks

Computer vision is a field of machine learning focused on interpreting and understanding images and videos. It is used to teach computers to "see" and to use visual information to perform the kinds of tasks human vision handles.

Computer vision models interpret visual data using features and contextual information identified during training, which lets them apply those interpretations of images and videos to predictive or decision-making tasks.

While both deal with visual data, image processing is not the same as computer vision. Image processing modifies or enhances images to produce new images: optimizing brightness or contrast, increasing resolution, blurring sensitive information, or cropping. The key difference is that image processing does not necessarily need to recognize the image's content.
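The operations listed above need no understanding of what the image shows, which is exactly the distinction being drawn. A minimal sketch of three of them on a grayscale image stored as a NumPy array (function names and the test image are illustrative, not from any particular library):

```python
import numpy as np

def adjust_brightness(img, delta):
    # Shift pixel values, clamping to the valid 0-255 range.
    return np.clip(img.astype(np.int16) + delta, 0, 255).astype(np.uint8)

def adjust_contrast(img, factor):
    # Scale pixel values around mid-gray (128) to stretch or flatten contrast.
    out = (img.astype(np.float32) - 128) * factor + 128
    return np.clip(out, 0, 255).astype(np.uint8)

def crop(img, top, left, height, width):
    # Plain array slicing; no knowledge of the image's content required.
    return img[top:top + height, left:left + width]

img = np.full((100, 100), 100, dtype=np.uint8)  # flat gray test image
bright = adjust_brightness(img, 50)
stretched = adjust_contrast(img, 2.0)
small = crop(img, 10, 10, 20, 20)
```

Each function maps pixels to pixels by arithmetic or slicing alone; a computer vision model, by contrast, would need training data to decide, say, which region of the image contains a face worth blurring.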

Modern computer vision algorithms are based on convolutional neural networks (CNNs), which offer significant performance improvements over traditional image processing algorithms. A CNN is a neural network with a multi-layered architecture that progressively reduces the input to its most relevant features; those features are then compared against patterns learned from training data to identify or classify the input. CNNs are most common in computer vision but can also be applied to text and audio analysis.
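The "progressive reduction" a CNN performs can be sketched with its two core layer types in plain NumPy: a convolution that responds to a local pattern, followed by max pooling that shrinks the feature map while keeping the strongest responses. The edge-detecting kernel and toy image below are illustrative; a trained CNN learns its kernels from data.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation (the 'convolution' used in CNNs)."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(x, size=2):
    """Non-overlapping max pooling: halves each dimension, keeps peak responses."""
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

# An 8x8 image with a vertical step edge, and a hand-made edge kernel
img = np.zeros((8, 8))
img[:, 4:] = 1.0
edge_kernel = np.array([[-1.0, 1.0]] * 3)  # responds to left-dark/right-bright

feat = conv2d(img, edge_kernel)  # 6x7 feature map, peaks at the edge
pooled = max_pool(feat)          # 3x3 map: smaller, but the peak survives
```

The 8x8 input shrinks to a 3x3 map in two steps, yet the pooled output still carries the one feature that matters here, the location of the edge. Stacking many such stages, with learned kernels, is what lets a CNN distill an image down to a classification.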

 

- Vision-Language Models (VLMs)

Vision-language models (VLMs) are multimodal architectures that combine computer vision (CV) and natural language processing (NLP) components to understand both image and text data. A VLM aims to connect visual semantics with textual representations.
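One common way VLMs make that connection is a joint embedding space: an image encoder and a text encoder each project into the same vector space, where cosine similarity scores how well an image and a caption match. The sketch below stands in for real encoders with random linear projections (all dimensions and weight matrices are invented), showing only the shape of the idea.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for trained encoders: random projections from
# each modality's feature space into a shared 8-dimensional space.
IMG_DIM, TXT_DIM, JOINT_DIM = 16, 12, 8
W_img = rng.normal(size=(IMG_DIM, JOINT_DIM))
W_txt = rng.normal(size=(TXT_DIM, JOINT_DIM))

def embed(features, W):
    """Project a feature vector into the shared space and L2-normalize it."""
    z = features @ W
    return z / np.linalg.norm(z)

def similarity(img_feats, txt_feats):
    """Cosine similarity between an image embedding and a text embedding."""
    return float(embed(img_feats, W_img) @ embed(txt_feats, W_txt))

img_features = rng.normal(size=IMG_DIM)   # would come from an image encoder
txt_features = rng.normal(size=TXT_DIM)   # would come from a text encoder
score = similarity(img_features, txt_features)  # always in [-1, 1]
```

In a trained VLM, the two encoders are optimized so that matching image-text pairs score high and mismatched pairs score low; the random matrices here obviously carry no such meaning.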

   

[More to come ...]
