The AI Stack
The AI Stack: A Blueprint for Developing and Deploying AI
- Overview
The AI Stack is a framework for the development and deployment of Artificial Intelligence (AI), and for guiding the strategic investment in research, technology, and organizational resources required to achieve asymmetric capability.
Over the past few years, the development of AI has accelerated dramatically, fueled by exponential increases in computational power and advances in machine learning. As a result, corporations, institutions, and nation-states have vastly accelerated their investment in AI to:
- perceive and synthesize massive amounts of data,
- understand the contextual importance of the data and potential tactical/strategic impacts,
- accelerate and optimize decision-making, and
- enable human augmentation and deploy autonomous systems.
From a national security and defense perspective, AI is a crucial technology for enhancing situational awareness and accelerating the delivery of timely, actionable intelligence that can save lives. For many current defense applications, this often requires processing visual data, such as images or full-motion video, from legacy platforms and sensors designed decades before recent advances in machine learning, computer vision, and AI.
The AI Stack -- and the fusion of the interdependent technology layers contained within it -- provides a streamlined approach to visualize, plan, and prioritize strategic investments in commercial technologies and transformational research, to leverage and continuously advance AI across operational domains, and to achieve asymmetric capability through human augmentation and autonomous systems.
- Data Collection Layer
AI is dependent on the data it is given. Just as our brains take in huge amounts of information from the world around us and use it to make observations and draw conclusions, AI cannot function without information to learn from.
In the AI tech stack, this data can come from a number of places. Thanks to the ongoing rollout of the Internet of Things, millions of devices worldwide are connected and able to talk to each other, from industrial-scale machinery to the smartphones we carry everywhere we go.
The data collection layer of an AI stack is composed of software that interfaces with these devices, as well as web-based services which supply third-party data, from marketing databases containing contact information to news, weather and social media APIs.
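As a minimal sketch of this layer, the snippet below polls a hypothetical third-party REST endpoint for weather observations using Python's requests library; the URL, parameters, and API key are placeholders, since any real provider defines its own schema and authentication.

```python
import requests

# Hypothetical third-party data service; a real provider documents
# its own endpoint, query parameters, and authentication scheme.
WEATHER_API_URL = "https://api.example.com/v1/weather"
API_KEY = "YOUR_API_KEY"  # placeholder credential

def collect_weather(city: str) -> dict:
    """Fetch one observation record from an external weather API."""
    response = requests.get(
        WEATHER_API_URL,
        params={"city": city, "key": API_KEY},
        timeout=10,
    )
    response.raise_for_status()  # surface HTTP errors early
    return response.json()       # raw record, ready for the storage layer

if __name__ == "__main__":
    print(collect_weather("London"))
```

The same pattern applies whether the source is a marketing database, a news feed, or a sensor gateway: the collection layer normalises each source into records the rest of the stack can store and process.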
Virtual personal assistants allow data to be collected from human speech: speech recognition converts spoken language into data, whether it is ambient background noise or commands issued directly to a machine.
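One way to sketch speech-to-data collection is with the open-source SpeechRecognition package for Python; the audio file name below is hypothetical, and the free Google web API it calls is just one of several back ends the package supports.

```python
import speech_recognition as sr  # pip install SpeechRecognition

recognizer = sr.Recognizer()

# "command.wav" is a hypothetical recording of a spoken command.
with sr.AudioFile("command.wav") as source:
    audio = recognizer.record(source)  # read the entire file

try:
    # Send the audio to a speech-to-text service and get text back.
    text = recognizer.recognize_google(audio)
    print("Recognised command:", text)
except sr.UnknownValueError:
    print("Speech was unintelligible (e.g. background noise).")
```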
- Data Storage Layer
Once you’ve collected data, or set up streams so it is pouring into your AI-enabled organisation in real time, you need somewhere to put it. Because AI data is usually Big Data, it needs a lot of storage space, and that storage must be accessible very quickly.
Often this is where cloud technology will play a leading role. Some organisations have the capability and resources to establish their own distributed data centres, using technologies such as Hadoop or Spark that can cope with vast amounts of information. Often, however, third-party cloud infrastructure – such as Amazon Web Services or Microsoft Azure – provides a more suitable solution.
Storage can be scaled up or down as needed, saving money, and these platforms also provide a host of methods for integrating with analytics services.
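As a hedged illustration of this layer, the sketch below pushes a collected data file into Amazon S3 using the boto3 library; the bucket and file names are placeholders, and AWS credentials are assumed to be configured in the environment.

```python
import boto3  # pip install boto3; AWS credentials assumed configured

# Placeholder bucket name; a real deployment would use its own.
BUCKET = "my-ai-data-lake"

s3 = boto3.client("s3")

# Upload a locally collected file into scalable cloud storage,
# keyed by source and date so downstream analytics can find it.
s3.upload_file(
    Filename="weather_2024-01-01.json",
    Bucket=BUCKET,
    Key="raw/weather/2024-01-01.json",
)
```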
- Data Processing and Analytics Layer
This is probably what most people consider to be the most important element when they talk about artificial intelligence – though without the rest of the stack (collection, storage and output) any insights are going to be severely limited.
AI processing encompasses machine learning, deep learning, image recognition, natural language processing, sentiment analytics, recommendation engines – all the hot-topic buzzwords we’re used to hearing when organisations wax lyrical about how smart and cognitive their technology is.
These algorithms are often provided in the form of services which are either accessed through a third-party API, deployed on a public or private cloud, or run “on the metal” in a private data centre or data lake or, in the case of edge analytics, at the point of data collection itself (for example, within sensor or data capture hardware).
The power, flexibility and self-learning capabilities of these algorithms are what really differentiate the current wave of artificial intelligence from what has come before – together with the increase in the amount of data available.
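To make the self-learning point concrete, here is a minimal scikit-learn sketch: the classifier derives its own decision rules from labelled examples rather than being explicitly programmed with them. The bundled iris dataset is used purely for illustration.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Toy dataset bundled with scikit-learn, used purely for illustration.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The model learns its own decision rules from the examples;
# nothing about the classes is hand-coded.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

print("Held-out accuracy:", model.score(X_test, y_test))
```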
Today the increase in raw power comes largely from the deployment of GPUs – processors originally designed for the heavy-duty task of generating sophisticated computer graphics. Their mathematical prowess makes them ideal for repurposing as data-crunchers, and a new wave of processing units designed specifically for AI-related tasks should provide a further leap in AI performance in the very near future.
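As a small sketch of how this looks in practice, the snippet below (using PyTorch, one common framework) runs a large matrix multiplication on a GPU when one is available, falling back to the CPU otherwise.

```python
import torch

# Use a GPU if the machine has one; otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# A large matrix multiplication: exactly the kind of parallel
# arithmetic GPUs were built for, repurposed here as data-crunching.
a = torch.randn(4096, 4096, device=device)
b = torch.randn(4096, 4096, device=device)
c = a @ b

print(f"Computed a {tuple(c.shape)} product on {device}")
```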
- Data Output and Reporting
If the aim of your AI strategy is to get machines working together more efficiently and effectively (perhaps for predictive maintenance purposes, or for minimising power or resource usage), then this layer is the technology which communicates the insights from your operational AI processing to the systems which will benefit from them.
Other insights may be intended for humans to take action on – for example, sales assistants using handheld terminals to read insights and recommendations relating to customers who are standing in front of them. In some cases the output may be in the form of charts, graphics and dashboards.
Virtual personal assistants – technology such as Apple’s Siri and Microsoft’s Cortana – can often play a role here, too. These use natural language generation to convert digital information into human language, which, alongside visuals, is the most easily understood and acted-upon form of data output for a human.
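A hedged sketch of this output layer: the function below turns a model's prediction into both a machine-readable action flag and a human-readable sentence via simple template-based natural language generation. The field names and the 0.8 threshold are invented for illustration.

```python
def report(machine_id: str, failure_prob: float) -> dict:
    """Turn a model prediction into machine and human outputs."""
    # Machine-to-machine output: a flag another system can act on,
    # e.g. scheduling predictive maintenance. Threshold is illustrative.
    needs_maintenance = failure_prob > 0.8

    # Human-facing output: template-based natural language generation,
    # a stand-in for a full assistant such as Siri or Cortana.
    action = "schedule maintenance now" if needs_maintenance else "no action needed"
    summary = (
        f"Machine {machine_id} has a {failure_prob:.0%} estimated "
        f"failure risk; {action}."
    )
    return {
        "machine_id": machine_id,
        "needs_maintenance": needs_maintenance,
        "summary": summary,
    }

print(report("PUMP-7", 0.86)["summary"])
```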
[More to come ...]