Backpropagation Neural Network Design

I am trying to make a digit recognition program. I will feed it a black-and-white image of a digit, and the output layer should fire the corresponding digit (exactly one of the 0 -> 9 neurons in the Output Layer should fire). I have finished implementing a two-dimensional backpropagation neural network. My topology sizes are [5][3] -> [3][3] -> [1][10], i.e. one 2-D Input Layer, one 2-D Hidden Layer and one 1-D Output Layer. However, I am getting weird and wrong results (Average Error and Output Values).
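Just to make the topology explicit, this is the shape I mean, written as {rows, columns} per layer (an illustrative representation only, not my actual data structure):

```cpp
#include <vector>
#include <utility>

// Layer sizes as {rows, columns}; the input layer presumably matches the image size.
const std::vector<std::pair<int, int>> topology = {
    {5, 3},   // 2-D input layer
    {3, 3},   // 2-D hidden layer
    {1, 10},  // 1-D output layer, one neuron per digit 0..9
};
```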

Debugging at this stage is quite time consuming, so I would love to hear whether this is the correct design before I continue debugging. Here are the steps of my implementation:

1) Build the Network: one Bias on each Layer except the Output Layer (no Bias there). A Bias's output value is always 1.0, but its connection weights are updated on each pass like those of every other neuron in the network. All weights range from 0.000 to 1.000 (no negatives).
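To make step 1 concrete, here is a rough sketch of what I mean by the bias and the weight range (the structures and names below are only illustrative placeholders, not my actual code):

```cpp
#include <cstdlib>
#include <vector>

struct Connection {
    double weight;
    double deltaWeight;   // kept around for the momentum term in step 5
};

struct Neuron {
    double outputVal;
    std::vector<Connection> outputWeights;  // one connection per neuron in the next layer
};

using Layer = std::vector<Neuron>;

// Weights initialized in [0.0, 1.0] only -- this is the "no negatives" choice described above.
double randomWeight() { return std::rand() / double(RAND_MAX); }

void addBias(Layer &layer, std::size_t numOutputs) {
    Neuron bias;
    bias.outputVal = 1.0;                       // the bias always outputs 1.0
    bias.outputWeights.resize(numOutputs);
    for (auto &c : bias.outputWeights) {
        c.weight = randomWeight();              // its weights are trained like any other neuron's
        c.deltaWeight = 0.0;
    }
    layer.push_back(bias);
}
```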

2) Get the input data (0 or 1) and set the nth value as the nth Neuron's Output Value in the Input Layer.
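Step 2 is basically just a copy, something along these lines (flattened to one vector here; names are placeholders):

```cpp
#include <cstddef>
#include <vector>

// pixels are 0.0 or 1.0; the nth pixel becomes the nth input neuron's output value
void setInputs(std::vector<double> &inputLayerOutputs, const std::vector<double> &pixels) {
    for (std::size_t n = 0; n < pixels.size() && n < inputLayerOutputs.size(); ++n)
        inputLayerOutputs[n] = pixels[n];
}
```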

3) Feed Forward: for each Neuron 'n' in every Layer (except the Input Layer), as sketched below:
- Get the SUM of (Output Value * Connection Weight) over the Neurons in the previous layer connected to this nth Neuron.
- Apply the TanH transfer function to this SUM to get the Result.
- Set the Result as the Output Value of this nth Neuron.
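Here is step 3 sketched as a single-neuron update (simplified types, not my real classes):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

struct Neuron {
    double outputVal;
    std::vector<double> outputWeights;  // weight [n] goes to neuron n in the next layer
};
using Layer = std::vector<Neuron>;

// 'myIndex' is the position of the neuron being updated within the current layer.
void feedForward(Neuron &me, std::size_t myIndex, const Layer &prevLayer) {
    double sum = 0.0;
    // sum of (previous neuron's output value * connection weight), bias included
    for (const Neuron &prev : prevLayer)
        sum += prev.outputVal * prev.outputWeights[myIndex];
    me.outputVal = std::tanh(sum);      // transfer function
}
```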

4) Get Results: take the Output Values of the Neurons in the Output Layer.
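For step 4, I then pick the strongest output as the recognized digit, roughly like this (illustrative only):

```cpp
#include <algorithm>
#include <vector>

int predictedDigit(const std::vector<double> &outputVals) {
    auto it = std::max_element(outputVals.begin(), outputVals.end());
    return static_cast<int>(it - outputVals.begin());   // index 0..9 of the neuron that fires hardest
}
```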

5) BackPropagation (a sketch follows after this list):
- Calculate the Network Error: on the Output Layer, get the SUM of (Target Value - Output Value)^2 over the Neurons, divide this SUM by the size of the Output Layer, and take its square root as the Result (RMS error). Then compute Average Error = (OldAverageError * SmoothingFactor + Result) / (SmoothingFactor + 1.00).
- Calculate Output Layer Gradients: for each Output Neuron 'n', nth Gradient = (nth Target Value - nth Output Value) * TanH Derivative of the nth Output Value.
- Calculate Hidden Layer Gradients: for each Neuron 'n', get the SUM of (weight of the connection going from this nth Neuron * Gradient of the destination Neuron) as the Result, then assign (Result * TanH Derivative of this nth Output Value) as the Gradient.
- Update all Weights: starting from the Hidden Layer and going back to the Input Layer, for the nth Neuron compute NewDeltaWeight = (NetLearningRate * nth Output Value * nth Gradient + Momentum * OldDeltaWeight), then assign the New Weight as (OldWeight + NewDeltaWeight).
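To summarize step 5, here is a simplified sketch of what the error, gradient and weight-update passes are meant to do (flat vectors, placeholder names, example learning-rate/momentum values, and bias handling omitted for brevity, so this is not my actual code):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

struct Connection { double weight; double deltaWeight; };
struct Neuron {
    double outputVal;
    double gradient;
    std::vector<Connection> outputWeights;   // connections to the next layer
};
using Layer = std::vector<Neuron>;

double tanhDerivative(double outputVal) {
    return 1.0 - outputVal * outputVal;      // derivative of tanh expressed via its output
}

// RMS error over the output layer.
double rmsError(const Layer &outputLayer, const std::vector<double> &targets) {
    double sum = 0.0;
    for (std::size_t n = 0; n < targets.size(); ++n) {
        double delta = targets[n] - outputLayer[n].outputVal;
        sum += delta * delta;
    }
    return std::sqrt(sum / targets.size());
}

// Running average with the smoothing factor.
double updateAverageError(double oldAverage, double error, double smoothingFactor) {
    return (oldAverage * smoothingFactor + error) / (smoothingFactor + 1.0);
}

// Output-layer gradient: (target - output) * tanh'(output).
void calcOutputGradients(Layer &outputLayer, const std::vector<double> &targets) {
    for (std::size_t n = 0; n < targets.size(); ++n) {
        double delta = targets[n] - outputLayer[n].outputVal;
        outputLayer[n].gradient = delta * tanhDerivative(outputLayer[n].outputVal);
    }
}

// Hidden-layer gradient: sum(weight to next neuron * next neuron's gradient) * tanh'(output).
void calcHiddenGradients(Layer &hiddenLayer, const Layer &nextLayer) {
    for (Neuron &n : hiddenLayer) {
        double dow = 0.0;
        for (std::size_t k = 0; k < nextLayer.size(); ++k)
            dow += n.outputWeights[k].weight * nextLayer[k].gradient;
        n.gradient = dow * tanhDerivative(n.outputVal);
    }
}

// Weight update with momentum:
// new delta = eta * source output * destination gradient + alpha * old delta.
void updateWeights(Layer &prevLayer, const Layer &layer,
                   double eta = 0.15, double alpha = 0.5) {
    for (Neuron &prev : prevLayer) {
        for (std::size_t n = 0; n < layer.size(); ++n) {
            Connection &c = prev.outputWeights[n];
            double newDelta = eta * prev.outputVal * layer[n].gradient
                            + alpha * c.deltaWeight;
            c.deltaWeight = newDelta;
            c.weight += newDelta;
        }
    }
}
```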

6) Repeat process.

Sorry for the long post. If you are familiar with this topic, you probably know how cool it is and how much material it is to fit into a single post. Thank you in advance.
You should use a toy problem, for instance XOR, to check that your construction and training are correct.
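For example, a tiny XOR training set could look something like this (just an illustration, to be adapted to however the net is fed):

```cpp
#include <array>

// XOR truth table: 2 inputs, 1 target output, looped over during training.
struct Sample { std::array<double, 2> inputs; double target; };

const Sample xorSamples[] = {
    { {0.0, 0.0}, 0.0 },
    { {0.0, 1.0}, 1.0 },
    { {1.0, 0.0}, 1.0 },
    { {1.0, 1.0}, 0.0 },
};
```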
I converted it to the XOR problem. It's still not working, but you are right that it helps to keep it simple. Thanks for the advice, ne555.