DNNs and Applications
- Overview
Deep neural networks (DNNs) offer statisticians considerable value, especially in improving the accuracy of machine learning (ML) models. Conceptually, DNNs sit at the end of a progression: artificial intelligence -> machine learning -> artificial neural networks -> deep neural networks.
Neural networks mimic the brain in that the network acquires knowledge from its environment through a learning process. The acquired knowledge is then stored in intervening connection strengths called synaptic weights. During learning, the synaptic weights of the network are modified sequentially until the desired goal is achieved.
Another reason to compare neural networks to the human brain is that they operate as non-linear, parallel information-processing systems that can quickly perform computations such as pattern recognition and perception. As a result, these networks perform very well in domains such as speech, audio, and image recognition, where the input signal is inherently non-linear.
- Neural Networks in Deep Learning (DL)
Neural networks are the core machinery that makes DL so powerful. Neural networks are a set of algorithms, modeled loosely after the human brain, that are designed to recognize patterns. They interpret sensory data through a kind of machine perception, labeling or clustering raw input. The patterns they recognize are numerical, contained in vectors, into which all real-world data, be it images, sound, text, or time series, must be translated.
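To make the vector idea concrete, here is a minimal sketch of two such translations, one for an image and one for a piece of text. The 8x8 image size, the toy vocabulary, and the bag-of-words encoding are illustrative assumptions, not details from the text above.

```python
# A minimal sketch of translating real-world data into numeric vectors,
# using only NumPy. The image and text below are toy stand-ins.
import numpy as np

# An 8x8 grayscale "image" (random pixels here) becomes a flat 64-dim vector.
image = np.random.rand(8, 8)
image_vector = image.flatten()          # shape: (64,)

# A short text becomes a bag-of-words count vector over a tiny vocabulary.
vocabulary = ["deep", "neural", "network", "brain"]
text = "a deep neural network is loosely modeled after the brain"
text_vector = np.array([text.split().count(word) for word in vocabulary])

print(image_vector.shape)   # (64,)
print(text_vector)          # [1 1 1 1]
```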
A neural network is a computer system designed to classify information in much the same way a human brain does. It can be taught to recognize, for example, images, and to classify them according to the elements they contain. The development of neural networks has been key to teaching computers to think about and understand the world the way we do, while retaining the innate advantages they hold over us, such as speed, accuracy, and lack of bias.
Neural networks help us cluster and classify. You can think of them as a clustering and classification layer on top of the data you store and manage. They help to group unlabeled data according to similarities among the example inputs, and they classify data when they have a labeled dataset to train on.
Neural networks can also extract features that are fed to other algorithms for clustering and classification; so you can think of deep neural networks as components of larger machine-learning applications involving algorithms for reinforcement learning, classification and regression.
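As a rough illustration of this feature-extraction idea, the sketch below passes raw inputs through a single hidden layer and hands the resulting activations to a simple nearest-centroid clustering step. The random data, the layer sizes, and the use of untrained random weights (a trained network would supply learned ones) are all assumptions made for illustration.

```python
# A minimal sketch of a hidden layer acting as a feature extractor whose
# outputs feed a separate, simpler clustering algorithm.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 16))          # 100 raw inputs, 16 dims each
W = rng.normal(size=(16, 8))            # hidden-layer weights (untrained here)
b = np.zeros(8)                         # hidden-layer biases

features = np.tanh(X @ W + b)           # hidden activations serve as features

# Feed the extracted features to another algorithm: assign each input to the
# nearer of two randomly chosen centroids (a single k-means-style step).
centroids = features[rng.choice(len(features), size=2, replace=False)]
distances = ((features[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
labels = np.argmin(distances, axis=1)

print(labels[:10])                      # cluster assignment of first 10 inputs
```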
- Deep Neural Networks and Living Brains
Researchers at the Cyber-Physical Systems Group at the USC Viterbi School of Engineering, in conjunction with the University of Illinois at Urbana-Champaign, have developed a new computational model of how information deep inside the brain could flow from one network to another, and of how these neuronal network clusters self-optimize over time.
Their work, chronicled in the paper “Network Science Characteristics of Brain-Derived Neuronal Cultures Deciphered From Quantitative Phase Imaging Data,” is believed to be the first study to observe this self-optimization phenomenon in in vitro neuronal networks, and counters existing models.
Their findings could open new research directions in biologically inspired artificial intelligence and in the detection and diagnosis of brain cancer, and may contribute to or inspire new treatment strategies for Parkinson’s disease.
Similarly, researchers have demonstrated that the deep neural networks (DNNs) most proficient at classifying speech, music and simulated scents have architectures that seem to parallel the brain’s auditory and olfactory systems. Such parallels also show up in DNNs that can look at a 2D scene and infer the underlying properties of the 3D objects within it, which helps to explain how biological perception can be both fast and incredibly rich. All these results hint that the structures of living neural systems embody certain optimal solutions to the tasks they have taken on.
- Synaptic Weight
Synaptic weight is a term used in neuroscience and computer science to describe the strength of a connection between two nodes; it appears in both biological and artificial neural network research.
In biological neurons, synaptic weight refers to the amount of influence the firing of one neuron has on another. In artificial neural networks (ANNs), synaptic weight refers to the connection strength between neurons in different layers of the network. Each input in an ANN has an associated weight that can be modified, modeling synaptic learning.
The value of a weight indicates the strength of the connection. During the training phase of a deep neural network (DNN), the synaptic weights are updated to minimize the output error. This is done with a back-propagation algorithm, iterating until the DNN reaches acceptable accuracy.
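Here is a minimal sketch of that weight-update loop, assuming a one-hidden-layer network trained on the XOR problem with a squared-error-style signal. The layer sizes, learning rate, and iteration count are illustrative choices, not prescribed by the text above.

```python
# A minimal sketch of synaptic-weight updates via back-propagation.
import numpy as np

rng = np.random.default_rng(42)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)   # XOR inputs
y = np.array([[0], [1], [1], [0]], dtype=float)               # XOR targets

W1 = rng.normal(size=(2, 4))   # input -> hidden synaptic weights
b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1))   # hidden -> output synaptic weights
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(10000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: propagate the output error back through each layer.
    d_out = (out - y) * out * (1 - out)     # error through output sigmoid
    d_h = (d_out @ W2.T) * h * (1 - h)      # error through hidden sigmoid

    # Update the synaptic weights (and biases) to reduce the output error.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(out.round(2))   # should approach [[0.], [1.], [1.], [0.]]
```

The update rule is plain gradient descent: each weight moves a small step against the gradient of the output error, which is exactly the "sequential modification of synaptic weights" described above.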
Please refer to the following for more information:
- Wikipedia: Synaptic Weight