
Neural Networks in the Brain and AI

[Hoover Tower, Stanford University]
 
 

- Overview

 

- Size

Our brain consists of about 86 billion neurons, while a typical artificial neural network contains on the order of 10 to 1,000 "neurons." On its own, that is a substantial gap, but artificial neurons are more powerful in certain respects.

Perceptrons, the predecessors of artificial neurons, work in a linear fashion: they take inputs on their "dendrites" and generate an output on their "axon branches." Several perceptrons make up a single layer of a perceptron network, but the perceptrons within a layer are not interconnected.
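As a rough sketch of this linear behavior, the following Python example (the weights and bias are hand-picked, illustrative values) implements a single perceptron that behaves as a logical AND gate:

    import numpy as np

    def perceptron(x, w, b):
        # Weighted sum of the inputs ("dendrites"), thresholded to
        # produce a binary output (the "axon branch")
        return 1 if np.dot(w, x) + b > 0 else 0

    # Hand-picked, illustrative weights that make the unit an AND gate
    w = np.array([1.0, 1.0])
    b = -1.5

    for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
        print(x, "->", perceptron(np.array(x), w, b))  # fires only for (1, 1)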

Deep neural networks, by contrast, usually consist of input neurons, output neurons, and neurons in the hidden layers in between. Each layer is usually fully connected to the next, which means that an artificial neuron typically has as many connections as there are neurons in the preceding and following layers combined.
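To make the connection count concrete, here is a minimal sketch in Python; the layer sizes are arbitrary assumptions, not a reference architecture:

    # Arbitrary example: input layer, two hidden layers, output layer
    layers = [4, 8, 8, 2]

    # In a fully connected network, every neuron in one layer connects
    # to every neuron in the next layer.
    for size_in, size_out in zip(layers, layers[1:]):
        print(f"{size_in} x {size_out} = {size_in * size_out} connections")

    # A neuron in the second hidden layer has 8 incoming + 2 outgoing
    # connections: the sizes of the preceding and following layers combined.
    total = sum(a * b for a, b in zip(layers, layers[1:]))
    print("total connections:", total)  # 32 + 64 + 16 = 112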

 

- Speed

Biological neurons typically fire at most about 200 times a second, and their signals travel at speeds that depend on the type of nerve fiber, ranging from about 0.6 m/s up to 120 m/s. In artificial neurons, information is instead carried by continuous floating-point values: the synaptic weights that encode the strength (amplitude) of a connection between two nodes.

In an artificial network, computation speed carries no information of its own; it only makes the model's execution and training faster. Artificial neurons also do not experience 'fatigue.' An artificial neural network can be understood as a stack of matrix operations plus derivative computations, and these calculations can be highly optimized for vector processors and sped up using GPUs (Graphics Processing Units) or dedicated hardware.
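As a sketch of this "matrix operations" view, the forward pass of a small two-layer network reduces to two matrix multiplications and an elementwise nonlinearity; the shapes and the ReLU activation below are illustrative assumptions (NumPy shown, but these are exactly the operations GPUs accelerate):

    import numpy as np

    rng = np.random.default_rng(0)

    X = rng.standard_normal((32, 10))   # a batch of 32 inputs, 10 features each
    W1 = rng.standard_normal((10, 16))  # input -> hidden weights
    W2 = rng.standard_normal((16, 1))   # hidden -> output weights

    hidden = np.maximum(0, X @ W1)      # matrix multiply + ReLU nonlinearity
    output = hidden @ W2                # second matrix multiply

    print(output.shape)                 # (32, 1): one output per input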

- Fault Tolerance

Biological neural networks are fault-tolerant: because information is stored redundantly, minor failures do not result in memory loss, and the brain can recover and heal to some extent. Artificial neural networks are not designed for fault tolerance or self-regeneration, although a trained model can be recovered by saving its current state (its weight values).
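One way to see the contrast: in an artificial network, nothing is stored redundantly by design, so damaging even a single weight directly alters the computation. The sketch below (illustrative values, not a benchmark) zeroes one weight and compares the outputs:

    import numpy as np

    rng = np.random.default_rng(1)
    x = rng.standard_normal(10)
    W = rng.standard_normal((10, 4))

    before = x @ W

    # Simulate the "failure" of a single connection by zeroing one weight
    W_damaged = W.copy()
    W_damaged[3, 2] = 0.0
    after = x @ W_damaged

    print("largest output change:", np.abs(after - before).max())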

 

- Learning

It is still largely a mystery how the brain learns and how redundant connections store and retrieve information. Fibers in the brain grow and reach out to connect to other neurons; neuroplasticity allows new connections to form and brain areas to shift and change function; and synapses are strengthened or weakened based on their importance. When we learn, we build on information that is already stored in the brain.

Our knowledge deepens through repetition and sleep, and tasks that once required focused attention can be performed automatically once mastered. Artificial neural networks, on the other hand, have a predefined architecture to which no further neurons or connections can be added, and from which none can be removed.

During training, only the weights of the connections can change. Networks begin with random weight values and attempt to reach a point where further weight changes would no longer improve performance.
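A minimal sketch of that process, assuming a single linear neuron trained by gradient descent on toy data (all values and hyperparameters here are illustrative):

    import numpy as np

    rng = np.random.default_rng(42)

    # Toy data: targets follow a known linear rule plus a little noise
    X = rng.standard_normal((100, 3))
    true_w = np.array([2.0, -1.0, 0.5])
    y = X @ true_w + 0.01 * rng.standard_normal(100)

    w = rng.standard_normal(3)  # start from random weight values
    lr = 0.1

    for step in range(200):
        pred = X @ w
        grad = 2 * X.T @ (pred - y) / len(y)  # gradient of mean squared error
        w -= lr * grad                        # only the weights change

    print("learned weights:", np.round(w, 2))  # converges near [2.0, -1.0, 0.5]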

 

 

[More to come ...]
 