
DL Architectures

Deep Learning Architecture_052324A
[Deep Learning Architecture - IBM]
 

- Overview

The architecture of a deep learning (DL) model typically consists of several interconnected layers, each serving a specific purpose in processing and transforming input data to generate useful outputs. 

DL architecture is a neural network structure that uses multiple layers to process data and create complex representations: 

  • Layers: DL architecture consists of multiple layers of artificial neurons that process data. These layers include an input layer, hidden layers, and an output layer.
  • Training: DL models are trained to learn hierarchical representations of data and make predictions.
  • Applications: DL architectures are used in many fields, including computer vision, natural language processing, speech recognition, and more.
  • Inspiration: DL is inspired by the neural networks in the human brain. However, current neural networks are not intended to model the brain's function.

 

DL isn’t a single approach but rather a class of algorithms and topologies that you can apply to a broad spectrum of problems. While DL is certainly not new, it is experiencing explosive growth because of the intersection of deeply layered neural networks and the use of GPUs to accelerate their execution. 

Big data has also fed this growth. Much of DL relies on supervised learning algorithms (those that train neural networks on example data and score them based on their success), so the more data that is available, the better these DL structures can be built.

Please refer to the following sections for more information.

 

- Layers 

The individual layers of a neural network can be thought of as a sort of filter that works from coarse to fine, increasing the likelihood of detecting and outputting a correct result.

The human brain works similarly. Whenever we receive new information, the brain tries to compare it with known objects. The same concept is also used by deep neural networks.

Neural networks enable us to perform many tasks, such as clustering, classification, or regression. With neural networks, we can group or sort unlabeled data according to similarities among the samples.

Or, in the case of classification, we can train the network on a labeled dataset in order to classify the samples into different categories. Artificial neural networks (ANNs) have capabilities that enable DL models to solve tasks that classical ML models struggle to solve.

 

- Layer Types and Their Functions

In deep learning, each layer serves a specific function in processing data: the input layer receives the raw data, the hidden layers perform complex feature extraction, and the output layer produces the final prediction. Different layer types, such as convolutional, pooling, and fully connected layers, are designed for different data types and tasks, extracting the relevant patterns and features from the input data.

Key layer types and their functions:

  • Input Layer: Takes the raw data as input, initiating the processing within the neural network.
  • Hidden Layers: Multiple layers that progressively extract more complex features from the data, building upon the information from previous layers.
  • Output Layer: Produces the final prediction or classification based on the processed information from the hidden layers.


Specific layer types and their uses (a minimal sketch follows this list):

  • Dense (Fully Connected) Layer: Every neuron in this layer connects to every neuron in the previous layer, capturing global patterns in the data.
  • Convolutional Layer: Primarily used for image analysis, detecting edges and spatial patterns by applying convolution operations.
  • Pooling Layer: Reduces the dimensionality of data by taking the maximum or average value within a local region, helping to focus on important features.
  • Recurrent Layer: Designed for sequential data like text, where information from previous time steps is considered using feedback loops.
  • Normalization Layer: Adjusts the output from previous layers to a more regular distribution, improving training stability.
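
To make these roles concrete, here is a minimal sketch in PyTorch (an assumed framework choice; the 1x28x28 input size and 10 output classes are illustrative assumptions, not from the text) composing convolutional, normalization, pooling, dense, and output layers:

    import torch
    import torch.nn as nn

    model = nn.Sequential(
        # Convolutional layer: detects edges and local spatial patterns.
        nn.Conv2d(in_channels=1, out_channels=16, kernel_size=3, padding=1),
        # Normalization layer: rescales activations for more stable training.
        nn.BatchNorm2d(16),
        nn.ReLU(),
        # Pooling layer: keeps the max of each 2x2 region (28x28 -> 14x14).
        nn.MaxPool2d(kernel_size=2),
        nn.Flatten(),
        # Dense (fully connected) layer: every unit sees every input feature.
        nn.Linear(16 * 14 * 14, 64),
        nn.ReLU(),
        # Output layer: one score per class.
        nn.Linear(64, 10),
    )

    x = torch.randn(8, 1, 28, 28)   # a batch of 8 dummy grayscale images
    print(model(x).shape)           # torch.Size([8, 10])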

 

- DL Algorithms (Architectures)

Deep learning (DL) is a subset of ML. It's a type of AI that uses neural networks to learn from large amounts of data. 

Neural networks are made up of interconnected nodes, or neurons, that are layered to resemble the human brain. They mimic how neurons in the brain signal each other, which is why they're called "neural". 

DL models can be taught to perform classification tasks and recognize patterns in text, photos, audio, and other data. 

Here are some DL algorithms (architectures): 

  • Convolutional neural network (CNN): Uses learned filters to extract the features of an image, such as edges and important objects.
  • Recurrent neural network (RNN): Uses a sequential approach, performing its calculations step by step and carrying information forward through a hidden state.
  • Generative adversarial network (GAN): A class of algorithms that consists of two adversarial networks. One network generates synthetic data, and the other tries to differentiate real from generated data (see the sketch after this list).
  • Autoencoder: A neural network with a bottleneck layer that is trained to reconstruct its input with minimal error.
  • Multilayer perceptron (MLP): A stack of fully connected layers; the basic feedforward DL architecture used for complex computations and to increase prediction accuracy.

The following are classical ML methods rather than DL architectures, but they are often used alongside deep networks:

  • Decision tree: A tree-structured model that splits data on feature values; often used to automate complex business decisions.
  • k-Nearest Neighbor (kNN) classification algorithm: A simple classification algorithm that labels a sample by measuring the distance between the feature values of its nearest neighbors.
  • Logistic regression: A fast, simple classifier, often used as the final layer of a deep network and on large datasets.
  • Cluster analysis: Clustering algorithms that divide data based on similarities, so that samples within a group are more similar to each other than to samples in other groups.
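
As a concrete illustration of the adversarial setup, here is a hedged sketch of one GAN training step in PyTorch (an assumed framework; the 2-D data, network sizes, and learning rates are illustrative assumptions):

    import torch
    import torch.nn as nn

    G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 2))  # noise -> sample
    D = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))   # sample -> logit

    bce = nn.BCEWithLogitsLoss()
    opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
    opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)

    real = torch.randn(32, 2) + 3.0        # stand-in for real 2-D data
    noise = torch.randn(32, 16)

    # Discriminator step: label real samples 1 and generated samples 0.
    fake = G(noise).detach()
    loss_d = bce(D(real), torch.ones(32, 1)) + bce(D(fake), torch.zeros(32, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator step: try to make D score generated samples as real.
    loss_g = bce(D(G(noise)), torch.ones(32, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()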

 

- Recurrent Neural Network (RNN)

A recurrent neural network (RNN) is a deep learning model that converts sequential input data into sequential output data. RNNs are a type of artificial neural network that uses sequential data to solve temporal problems. They are characterized by their memory: they use information from previous inputs to influence the current output.

RNNs are commonly used for language translation, natural language processing (NLP), speech recognition, and image captioning; they power applications such as Siri, voice search, and Google Translate.

RNNs are derived from feedforward networks and behave in ways loosely analogous to the human brain. They produce predictive results on sequential data that other algorithms struggle with. RNNs were created because feedforward neural networks cannot handle sequential data: they consider only the current input and cannot remember previous inputs.
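
A minimal sketch of this recurrence in PyTorch (an assumed framework; the sequence length, feature size, and hidden size are illustrative assumptions) shows the hidden state carrying information from earlier steps into later ones:

    import torch
    import torch.nn as nn

    rnn = nn.RNN(input_size=8, hidden_size=16, batch_first=True)
    head = nn.Linear(16, 4)            # e.g., 4 output classes (assumed)

    x = torch.randn(2, 5, 8)           # (batch, time steps, features per step)
    outputs, h_n = rnn(x)              # outputs: the hidden state at every step
    logits = head(h_n[-1])             # classify from the final hidden state
    print(logits.shape)                # torch.Size([2, 4])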

 

- Convolutional Neural Network (CNN)

A convolutional neural network (CNN) is a network architecture for deep learning (DL) that learns directly from data.

CNNs are particularly useful for finding patterns in images to recognize objects, classes, and categories. They can also be quite effective for classifying audio, time-series, and signal data.

A CNN can have tens or hundreds of layers that each learn to detect different features of an image. Filters are applied to each training image at different resolutions, and the output of each convolved image is used as the input to the next layer. The filters can start as very simple features, such as brightness and edges, and increase in complexity to features that uniquely define the object.

A CNN is an extended version of the artificial neural network (ANN), used predominantly to extract features from grid-like datasets, such as images and videos, in which spatial patterns play an extensive role.

A CNN consists of multiple layers: an input layer, convolutional layers, pooling layers, and fully connected layers.

The convolutional layers apply filters to the input image to extract features, the pooling layers downsample the image to reduce computation, and the fully connected layer makes the final prediction. The network learns the optimal filters through backpropagation and gradient descent.
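
The pipeline just described can be sketched in PyTorch (an assumed framework; the image size, channel counts, and learning rate are illustrative assumptions), including one backpropagation step that updates the filters:

    import torch
    import torch.nn as nn

    cnn = nn.Sequential(
        nn.Conv2d(3, 8, kernel_size=3, padding=1),   # learnable filters
        nn.ReLU(),
        nn.MaxPool2d(2),                             # downsample 32x32 -> 16x16
        nn.Flatten(),
        nn.Linear(8 * 16 * 16, 10),                  # final prediction
    )

    images = torch.randn(4, 3, 32, 32)               # dummy RGB images
    labels = torch.randint(0, 10, (4,))
    loss = nn.CrossEntropyLoss()(cnn(images), labels)

    # One step of backpropagation and gradient descent on the filters.
    opt = torch.optim.SGD(cnn.parameters(), lr=0.01)
    opt.zero_grad(); loss.backward(); opt.step()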

 

- Deep Belief Networks (DBNs)

Deep Belief Networks (DBNs) are sophisticated artificial neural networks used in the field of deep learning, a subset of machine learning. They are designed to discover and learn patterns within large sets of data automatically. Imagine them as multi-layered networks, where each layer is capable of making sense of the information received from the previous one, gradually building up a complex understanding of the overall data. 

DBNs are composed of multiple layers of stochastic (randomly determined) units. These layers are typically Restricted Boltzmann Machines (RBMs) or similar structures. Each layer in a DBN aims to extract different features from the input data, with lower layers identifying basic patterns and higher layers recognizing more abstract concepts.

This structure allows DBNs to effectively learn complex representations of data, which makes them particularly useful for tasks like image and speech recognition, where the input data is high-dimensional and requires a deep level of understanding. 

The architecture of DBNs also makes them good at unsupervised learning, where the goal is to understand and label input data without explicit guidance. This characteristic is particularly useful in scenarios where labeled data is scarce or when the goal is to explore the structure of the data without any preconceived labels.
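
To ground the idea, here is a rough NumPy sketch of a single Restricted Boltzmann Machine layer trained with one step of contrastive divergence (CD-1), the building block that DBNs stack; the layer sizes, data, and learning rate are illustrative assumptions:

    import numpy as np

    rng = np.random.default_rng(0)
    n_visible, n_hidden, lr = 6, 3, 0.1
    W = rng.normal(0, 0.1, (n_visible, n_hidden))    # visible-to-hidden weights
    b_v = np.zeros(n_visible)                        # visible bias
    b_h = np.zeros(n_hidden)                         # hidden bias

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    v0 = rng.integers(0, 2, (10, n_visible)).astype(float)  # dummy binary data

    # Positive phase: hidden activations driven by the data.
    p_h0 = sigmoid(v0 @ W + b_h)
    h0 = (rng.random(p_h0.shape) < p_h0).astype(float)

    # Negative phase: one Gibbs step reconstructs the visible units.
    p_v1 = sigmoid(h0 @ W.T + b_v)
    p_h1 = sigmoid(p_v1 @ W + b_h)

    # CD-1 update: move toward data statistics, away from model statistics.
    W += lr * (v0.T @ p_h0 - p_v1.T @ p_h1) / len(v0)
    b_v += lr * (v0 - p_v1).mean(axis=0)
    b_h += lr * (p_h0 - p_h1).mean(axis=0)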

 

Machine Learning Algorithms_121124A
[Machine Learning Algorithms]

- Autoencoders

An autoencoder is a type of artificial neural network that uses unsupervised learning to learn an efficient coding (representation) of unlabeled data, typically for the purpose of dimensionality reduction.

The network is trained to compress the input into a lower-dimensional code and then reconstruct the output from this representation to match the original input as closely as possible, hence the name 'autoencoder'. 

The architecture of an autoencoder is designed to be symmetrical with a bottleneck in the middle. The network consists of two main parts:

  • Encoder: The part of the network that compresses the input into a latent-space representation of reduced dimension; it maps the input data to the hidden (bottleneck) layer.
  • Decoder: The part that reconstructs the input data from the latent-space representation; it maps the hidden layer back to a reconstruction of the input.

 

The bottleneck, which is the layer that contains the code, represents the compressed knowledge of the input data. The key idea is that the autoencoder learns to ignore the noise and capture the most salient features of the data.
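
A minimal sketch of this symmetric design in PyTorch (an assumed framework; the 32-dimensional input and 4-dimensional bottleneck code are illustrative assumptions):

    import torch
    import torch.nn as nn

    encoder = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 4))
    decoder = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 32))

    x = torch.randn(64, 32)        # dummy unlabeled data
    code = encoder(x)              # compressed bottleneck representation
    x_hat = decoder(code)          # reconstruction of the input

    # Train by minimizing reconstruction error against the input itself.
    loss = nn.MSELoss()(x_hat, x)
    opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()))
    opt.zero_grad(); loss.backward(); opt.step()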

 

- Self-Organizing Maps

The concept of a self-organizing map (SOM) was first put forth by Kohonen. A SOM is an unsupervised neural network that reduces data dimensionality by building a low-dimensional, discretized representation of the input space of the training samples. This representation is known as a map.

The SOM was influenced by biological models of neural systems from the 1970s. It employs an unsupervised learning methodology and trains its network through a competitive learning algorithm.

To reduce complex problems to straightforward interpretations, SOMs are used for mapping and clustering (or dimensionality reduction), projecting multidimensional data onto lower-dimensional spaces. A SOM is made up of two layers: the input layer and the output layer. It is also known as a Kohonen map.
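
A rough NumPy sketch of one SOM training pass (the grid size, learning rate, and neighborhood radius are illustrative assumptions): find the best matching unit (BMU) for an input, then pull it and its grid neighbors toward that input:

    import numpy as np

    rng = np.random.default_rng(0)
    grid_h, grid_w, dim = 5, 5, 3           # a 5x5 map of 3-D weight vectors
    weights = rng.random((grid_h, grid_w, dim))
    coords = np.stack(np.meshgrid(np.arange(grid_h), np.arange(grid_w),
                                  indexing="ij"), axis=-1)  # node grid positions

    def train_step(x, lr=0.5, radius=1.5):
        # Best matching unit: the node whose weights are closest to the input.
        dists = np.linalg.norm(weights - x, axis=-1)
        bmu = np.unravel_index(dists.argmin(), dists.shape)
        # Neighborhood: Gaussian falloff with grid distance from the BMU.
        grid_dist = np.linalg.norm(coords - np.array(bmu), axis=-1)
        influence = np.exp(-(grid_dist ** 2) / (2 * radius ** 2))
        # Competitive update: nearby nodes move toward the input.
        weights[...] += lr * influence[..., None] * (x - weights)

    for x in rng.random((100, 3)):          # dummy 3-D training inputs
        train_step(x)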
 
 

[More to come ...]

 
