Since I've been bitten by the A.I. bug, I'm experimenting with all kinds of neural networks and genetic algorithms.
So I created a little simulator to visualize the networks and experiment with them.
For the visualization I used GLEE, a 'new' framework from Microsoft Research that can draw flowcharts, hierarchical diagrams, and so on.
I created the tool so that I can choose which network type I want to simulate.
In this case I'll choose the Back Propagation network.
The BP network is a network that 'learns' from its mistakes:
it is trained supervised, meaning the network is given input values together with the expected output values for those inputs.
In the diagram below you can see the layout of a BP network.
Nodes 0 and 1 are the input nodes, which accept numeric values.
Nodes 2, 3, and 4 form the hidden layer, expanding the learning possibilities.
Node 5 is the output node, which produces a number; you'll probably have to round it to get a meaningful result.
We can adjust the number of input, hidden, and output nodes.
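To make the layout concrete, here is a minimal sketch in Python of a forward pass through a 2-3-1 network like the one above. This is only an illustration, not the simulator's actual code; the weights are made up, and bias terms are left out for brevity:

```python
import math

def sigmoid(x):
    # Squashes any number into the range (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs, w_hidden, w_output):
    # Each of the 3 hidden nodes sums its weighted inputs and applies the sigmoid.
    hidden = [sigmoid(sum(w * v for w, v in zip(ws, inputs))) for ws in w_hidden]
    # The single output node does the same over the hidden activations.
    return sigmoid(sum(w * h for w, h in zip(w_output, hidden)))

# Made-up weights, just to show the shapes: 3 hidden nodes x 2 inputs, 1 output x 3 hidden.
w_hidden = [[0.5, -0.4], [0.3, 0.8], [-0.6, 0.2]]
w_output = [0.4, -0.7, 0.9]
print(forward([0.0, 1.0], w_hidden, w_output))  # some value between 0 and 1
```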
After defining the network parameters, we have to provide some input data.
The upper group box contains the input values, the lower box the expected output values. The number of input and output patterns must match, e.g. 4 input patterns require 4 output patterns. (Pretty normal, I think.)
So in the example below we provide input and output values for the famous XOR-problem.
Input 1 | Input 2 | Output
--------|---------|-------
   0    |    0    |   0
   0    |    1    |   1
   1    |    0    |   1
   1    |    1    |   0
The table shows the training data for the XOR problem; this pattern is also entered in the application below.
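In code form, that training set is just four input/output pairs (written here as Python lists, purely as an illustration):

```python
# The four XOR patterns from the table above: each input pair maps to its expected output.
training_data = [
    ([0, 0], [0]),
    ([0, 1], [1]),
    ([1, 0], [1]),
    ([1, 1], [0]),
]
```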
After entering the data, we move on to the training part.
Here we train the network until all inputs give the right output,
re-training until the data error reaches 0, meaning the output values get close to 0.0 or 1.0 (or whatever output values you trained it against).
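For those curious what training the network down to a small error looks like under the hood, here is a minimal, hand-rolled back-propagation loop in Python. This is only an illustrative sketch: the layer sizes, learning rate, seed, and epoch count are my own assumptions, not the simulator's settings.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs, w_hidden, w_output):
    x = inputs + [1.0]                       # append a constant bias input
    hidden = [sigmoid(sum(w * v for w, v in zip(ws, x))) for ws in w_hidden]
    h = hidden + [1.0]                       # bias for the output layer
    out = sigmoid(sum(w * v for w, v in zip(w_output, h)))
    return hidden, out

random.seed(42)
# 2 inputs (+ bias) -> 3 hidden nodes -> 1 output, small random starting weights.
w_hidden = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(3)]
w_output = [random.uniform(-1, 1) for _ in range(4)]

data = [([0.0, 0.0], 0.0), ([0.0, 1.0], 1.0), ([1.0, 0.0], 1.0), ([1.0, 1.0], 0.0)]
rate = 0.5  # assumed learning rate

def total_error():
    return sum((t - forward(i, w_hidden, w_output)[1]) ** 2 for i, t in data)

initial_error = total_error()

for epoch in range(20000):
    for inputs, target in data:
        hidden, out = forward(inputs, w_hidden, w_output)
        x = inputs + [1.0]
        h = hidden + [1.0]
        # Deltas come from the derivative of the sigmoid: s(x) * (1 - s(x)).
        delta_out = (target - out) * out * (1.0 - out)
        delta_hidden = [delta_out * w_output[j] * hidden[j] * (1.0 - hidden[j])
                        for j in range(3)]
        # Update the weights (hidden deltas were computed with the old output weights).
        for j in range(4):
            w_output[j] += rate * delta_out * h[j]
        for j in range(3):
            for k in range(3):
                w_hidden[j][k] += rate * delta_hidden[j] * x[k]

final_error = total_error()
print(initial_error, "->", final_error)  # the error shrinks as the network learns
```

The key idea is the backward pass: the output error is pushed back through the weights to tell each hidden node how much it contributed to the mistake.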
Once training is finished, we test the network.
As you can see below, the application lets you provide your own input pattern.
In this example we provide 0 and 1, and the output is 0.95, which rounds to 1.
If you look at the table above, the output is indeed 1 when the inputs are 0 and 1.
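Turning the raw output into a final answer is just rounding to the nearest integer; with the 0.95 from the example:

```python
raw_output = 0.95             # what the trained network returned for inputs 0 and 1
prediction = round(raw_output)
print(prediction)             # -> 1, matching the XOR table
```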
Currently I'm working on the three other network types to simulate: BAM, SON, and Adaline.
When that part is finished, I'll make the tool public so that more people can play with it and get a better understanding of how neural networks work.
I'm also going to add the possibility to remove node links, so that you can train an unbalanced network, which sometimes trains quicker and can give more accurate output. But that process is more like trial and error :).
Hope you enjoyed this introduction. If you have any questions or comments, don't hesitate to leave them below or to mail them to me.
Regards,
F.