
Neural Network Activation Functions

Harvard_001
(Harvard University - Joyce Yang)


- Overview

Activation functions are a vital component of neural networks. They govern how each neuron transforms and passes on the signals it receives, which is what allows a network to analyze data and solve complex problems.

Activation functions decide whether a neuron should be activated or not. In other words, the activation function determines whether a neuron's contribution to the network is important in the process of making a prediction.
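
A minimal sketch of this idea, using hypothetical weights and the sigmoid function as the activation: the neuron computes a weighted sum of its inputs, and the activation function decides how strongly it "fires".

```python
import math

def sigmoid(x):
    # Squashes any real-valued input into the range (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def neuron_output(inputs, weights, bias):
    # Weighted sum of inputs plus bias, passed through the
    # activation function to decide how strongly the neuron fires.
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return sigmoid(z)

# A strongly positive weighted sum fires near 1;
# a strongly negative one barely fires (output near 0).
print(neuron_output([1.0, 2.0], [0.5, 0.8], 0.1))
print(neuron_output([1.0, 2.0], [-0.5, -0.8], 0.1))
```

The weights and inputs here are arbitrary illustrations, not values from any particular trained network.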

Activation functions add non-linearity to the neural network, enabling it to learn non-linear relationships in the data. This greatly increases the flexibility and power of neural networks to model complex and nuanced data.
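
A small sketch (with hypothetical weights) of why this matters: stacking two linear layers collapses into a single linear map, while placing a nonlinear activation (here ReLU) between them does not.

```python
def linear(x, w, b):
    # A one-input "layer": just a scaled and shifted value.
    return w * x + b

def relu(x):
    # Rectified linear unit: zero for negative inputs.
    return max(0.0, x)

def two_linear_layers(x):
    # Composing two linear maps simplifies to 6*x + 2: still linear.
    return linear(linear(x, 2.0, 1.0), 3.0, -1.0)

def linear_relu_linear(x):
    # The ReLU in between makes the composite piecewise, not linear.
    return linear(relu(linear(x, 2.0, 1.0)), 3.0, -1.0)

print(two_linear_layers(-1.0))    # -4.0
print(linear_relu_linear(-1.0))   # -1.0 (ReLU clipped the negative value)
```

However many linear layers are stacked, the result stays linear; the nonlinearity is what lets a network bend its decision surface.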

Activation functions also determine the output of each neuron, and in many cases its range: depending on the function, the resulting values are mapped into a bounded interval such as (0, 1) or (-1, 1).

Activation functions can be divided into two types: 

  • Linear activation functions: the output is directly proportional to the input. Because composing linear functions yields another linear function, networks using only linear activations cannot model non-linear relationships.
  • Nonlinear activation functions: these are the most commonly used activation functions. They make it easier for the model to generalize or adapt to a variety of data and to differentiate between outputs.
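
Illustrative implementations of three widely used nonlinear activations, showing the output ranges mentioned above; these are textbook formulas, not tied to any particular library.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))   # output in (0, 1)

def tanh(x):
    return math.tanh(x)                 # output in (-1, 1)

def relu(x):
    return max(0.0, x)                  # output in [0, +inf)

for x in (-3.0, 0.0, 3.0):
    print(f"x={x:+.1f}  sigmoid={sigmoid(x):.4f}  "
          f"tanh={tanh(x):+.4f}  relu={relu(x):.1f}")
```

Sigmoid and tanh are bounded and saturate for large inputs, while ReLU is unbounded above; this difference is one reason ReLU variants are popular in deep networks.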


In biologically inspired neural networks, the activation function is usually an abstraction representing the rate of action potential firing in the cell.

[More to come ...]