Implementing ANNs


 

- Overview

Neural networks are general-purpose models that can solve problems without being programmed with explicit rules and conditions. They are inspired by biological neural networks and are most commonly trained with supervised machine learning. The goal of an ANN is to map inputs to outputs, and ANNs can be used to solve both regression and classification problems.

Neural networks typically have different layers, including:

  • Input layer: Picks up input signals and passes them to the next layer
  • Hidden layer: Performs calculations and feature extractions
  • Output layer: Delivers the final result

Some examples of neural network architectures include feedforward neural networks, multilayer perceptrons, convolutional neural networks, and recurrent neural networks; the backpropagation algorithm is the standard method used to train them.

Here are some steps for building a neural network (a code sketch follows the list):

  • Create an approximation model
  • Configure data set
  • Set network architecture
  • Train neural network
  • Improve generalization performance
  • Test results
  • Deploy model
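
As a rough illustration of how these steps map to code, here is a minimal sketch using the Keras API (Keras is not mentioned in the original text; the layer sizes, optimizer, and toy data set are illustrative assumptions, not a definitive recipe):

import numpy as np
from tensorflow import keras

# Configure a toy data set: three binary inputs, one binary output
X = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 1]], dtype=float)
y = np.array([[1], [0], [1]], dtype=float)

# Set the network architecture: one hidden layer and one output neuron
model = keras.Sequential([
    keras.Input(shape=(3,)),
    keras.layers.Dense(4, activation="sigmoid"),
    keras.layers.Dense(1, activation="sigmoid"),
])

# Train the neural network against a cost (loss) function
model.compile(optimizer="sgd", loss="mean_squared_error")
model.fit(X, y, epochs=1000, verbose=0)

# Test the results on an unseen example
print(model.predict(np.array([[1, 0, 1]], dtype=float)))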

 

- An Example

As mentioned above, neural networks work like this: we send in a set of inputs, process them through a series of hidden layers and activation functions, and finally obtain a set of outputs. We train these neural networks by adjusting their parameters to minimize some cost function.

By building neural networks, we are teaching computers how to process data to produce the output we want. This has proven to yield powerful results - it helps with language translation, image processing, handwriting recognition, and more.

In Computer Science, we model this process by creating “networks” on a computer using matrices. These networks can be understood as abstractions of neurons, without all the biological complexities taken into account. To keep things simple, we will model a simple neural network (NN) with two layers, capable of solving a linearly separable classification problem.

Let’s say we have a problem where we want to predict an output given a set of inputs and outputs as training examples, like so:

 

- The Training Process

 

Training Examples:

Input 1   Input 2   Input 3   Output
   0         1         1        1
   1         0         0        0
   1         0         1        1

 

Now we predict the output for the following set of inputs.

 

Test Example:

Input 1   Input 2   Input 3   Output
   1         0         1        ?

 

Note that the output is directly tied to the third column: in every training example above, the output equals the value of Input 3. So for the test example, the output should be 1.
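
To make the training steps below concrete, we can encode these examples as matrices (here using NumPy; the variable names are illustrative choices, not from the original text):

import numpy as np

# Each row is one training example: [Input 1, Input 2, Input 3]
training_inputs = np.array([[0, 1, 1],
                            [1, 0, 0],
                            [1, 0, 1]])

# One expected output per training example, as a column vector
training_outputs = np.array([[1, 0, 1]]).T

# The unseen test example whose output we want to predict
test_input = np.array([1, 0, 1])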

The training process consists of the following steps: 

  • Forward Propagation
  • Back Propagation

 

- Forward Propagation

Take the inputs and multiply them by the weights (initialized with random numbers).

Let Y = Σ WᵢIᵢ = W₁I₁ + W₂I₂ + W₃I₃
 
Pass the result through the sigmoid function to calculate the neuron’s output. The sigmoid function is used to normalise the result to a value between 0 and 1:

Output = 1 / (1 + e^(-Y))
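
A minimal sketch of this forward pass in Python (continuing from the NumPy arrays defined earlier; the random seed and weight range are illustrative choices):

import numpy as np

def sigmoid(y):
    # Squashes any real-valued input into the range (0, 1)
    return 1 / (1 + np.exp(-y))

np.random.seed(1)                            # arbitrary seed, for reproducibility
weights = 2 * np.random.random((3, 1)) - 1   # random starting weights in (-1, 1)

# Forward propagation: weighted sum of the inputs, then the sigmoid
output = sigmoid(np.dot(training_inputs, weights))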

 

- Back Propagation

Calculate the error, i.e. the difference between the expected output and the actual output. Depending on the error, adjust the weights by multiplying the error by the input and again by the gradient of the sigmoid curve:

 
Weight += Error × Input × Output × (1 − Output), where Output × (1 − Output) is the derivative of the sigmoid curve.

Note: Repeat the whole process for a few thousand iterations.
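
Putting forward and back propagation together, the whole training process might look like the following self-contained sketch (the iteration count, seed, and variable names are illustrative choices, not a definitive implementation):

import numpy as np

def sigmoid(y):
    return 1 / (1 + np.exp(-y))

# Training data from the tables above
training_inputs = np.array([[0, 1, 1],
                            [1, 0, 0],
                            [1, 0, 1]])
training_outputs = np.array([[1, 0, 1]]).T

np.random.seed(1)
weights = 2 * np.random.random((3, 1)) - 1

for _ in range(10000):  # repeat for a few thousand iterations
    # Forward propagation
    output = sigmoid(np.dot(training_inputs, weights))

    # Back propagation: Error x Input x Output x (1 - Output)
    error = training_outputs - output
    weights += np.dot(training_inputs.T, error * output * (1 - output))

# Predict the output for the test example [1, 0, 1]
print(sigmoid(np.dot(np.array([1, 0, 1]), weights)))

Because the weight on Input 3 grows large during training, the predicted value for the test example converges toward 1, matching the reasoning above.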

[More to come ...]
 