Starting in version 1.5, ShapeLogic combines its declarative programming framework with machine learning techniques. The first machine learning technique is a multi-layer feed forward neural network that is trained externally but run internally.
ShapeLogic 1.6 implements a multi-layer feed forward neural network for the color particle analyzer and for letter matching.
The default network is very simple: it marks particles as Tall or Flat based on their aspect ratio.
This categorizer setup file creates the same effect:
==========
FEATURES
aspect
==========
RESULTS
tall
flat
==========
WEIGHTS
1. -1.
-1. 1.
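For example, a particle with aspect ratio 2.0 gets the raw scores tall = 1. + (-1.)*2.0 = -1. and flat = -1. + 1.*2.0 = 1. (bias plus weight times the input, following the weight layout shown in the code below), so it ends up categorized as flat, assuming the category with the larger output wins.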
This is the code for it in ColorParticleAnalyzer.java.
public void defineNeuralNetwork() {
    String[] objectHypotheses = new String[] {"Tall", "Flat"};
    String[] inputStreamName = {StreamNames.ASPECT};
    double[][] weights =
        ExampleNeuralNetwork.makeSmallerThanGreaterThanNeuralNetwork(1.);
    _neuralNetworkStream = new FFNeuralNetworkStream(
        inputStreamName, objectHypotheses, weights, this);
    _categorizer = _neuralNetworkStream.getOutputStream();
}
The method makeSmallerThanGreaterThanNeuralNetwork(double limit) is very simple and returns:
new double[][] {
    { //First layer
      //First output  Second output
        1,   -1,   //Bias first hidden layer
        -1.,  1    //First and only input is multiplied with this
    },
};
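For reference, a plausible reconstruction of that method (the real ShapeLogic source may differ) simply scales the bias by the limit, so the first output fires for inputs below the limit and the second for inputs above it:

//Hypothetical reconstruction, not the actual ShapeLogic source.
//For limit = 1. this returns the same values as the matrix shown above.
public static double[][] makeSmallerThanGreaterThanNeuralNetwork(double limit) {
    return new double[][] {
        {
            //First output  Second output
            limit, -limit,  //Bias first hidden layer
            -1.,    1.      //First and only input is multiplied with this
        },
    };
}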
There is a linked override code example that shows how to override this with your own neural network.
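As a rough sketch of what such an override might look like (the subclass name is made up, and it assumes the fields _neuralNetworkStream and _categorizer are accessible to subclasses), here the threshold on the aspect ratio is moved from 1 to 2:

//Hypothetical subclass; mirrors defineNeuralNetwork() in ColorParticleAnalyzer.java above.
public class MyParticleAnalyzer extends ColorParticleAnalyzer {
    @Override
    public void defineNeuralNetwork() {
        String[] objectHypotheses = new String[] {"Tall", "Flat"};
        String[] inputStreamName = {StreamNames.ASPECT};
        //Same helper as above, but with the limit moved to 2.
        double[][] weights =
            ExampleNeuralNetwork.makeSmallerThanGreaterThanNeuralNetwork(2.);
        _neuralNetworkStream = new FFNeuralNetworkStream(
            inputStreamName, objectHypotheses, weights, this);
        _categorizer = _neuralNetworkStream.getOutputStream();
    }
}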
Starting in ShapeLogic 1.6, you can load the neural network from external text files.
ShapeLogic has an example file, shapelogic/src/test/resources/data/neuralnetwork/default_particle_nn.txt, with this content:
==========
FEATURES
aspect
==========
RESULTS
tall
flat
==========
WEIGHTS
1. -1.
-1. 1.
It produces the same effect as the default color particle analyzer neural network, except that the categories are not capitalized.
Here is a configuration file, shapelogic/src/test/resources/data/neuralnetwork/particle_nn_with_print.txt, that does the same, except that it explicitly sets the columns written to the result table to Category and AspectRatio. When selecting columns you can use both feature columns and result columns:
==========
FEATURES
aspect
==========
RESULTS
tall
flat
==========
PRINTS
Category
AspectRatio
==========
WEIGHTS
1. -1.
-1. 1.
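To make the file format concrete, here is a small stand-alone sketch (hypothetical, not ShapeLogic's own loader) that reads the sections: blocks are separated by lines of equal signs, each block starts with a header (FEATURES, RESULTS, PRINTS or WEIGHTS), and the following lines hold the values:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

/** Hypothetical stand-alone parser for the text format shown above.
 *  ShapeLogic has its own loader; this only illustrates the structure. */
public class NeuralNetworkFileSketch {
    public static Map<String, List<String>> readSections(String path) throws IOException {
        Map<String, List<String>> sections = new LinkedHashMap<>();
        List<String> current = null;
        boolean expectHeader = false;
        for (String line : Files.readAllLines(Paths.get(path))) {
            String trimmed = line.trim();
            if (trimmed.isEmpty()) continue;
            if (trimmed.startsWith("=====")) { expectHeader = true; continue; }
            if (expectHeader) {        //the line after "==========" is the section name
                current = new ArrayList<>();
                sections.put(trimmed, current);
                expectHeader = false;
            } else if (current != null) {
                current.add(trimmed);  //feature names, result names, print columns or weight rows
            }
        }
        return sections;
    }
}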
Here are a few more examples of the format of the neural network weights, taken from FFNeuralNetworkTest.java:
public final static double[][] WEIGHTS_FOR_OR = {
    { //First and only layer
      //First output
        -0.5, //Bias first hidden layer
        1.,   //First input is multiplied with this
        1.    //Second input is multiplied with this
    }
};

public final static double[][] WEIGHTS_FOR_NOT = {
    {
        0.5, //Bias first hidden layer
        -1.
    }
};

/** Weights found using the Joone Neural Networks. */
public final static double[][] WEIGHTS_FOR_XOR = {
    { //First layer: 2 input 3 output
      //Output 0,           1,                    2
        2.7388686085992333,  5.505721328606976,   4.235258932026585,   //bias
        -6.598582463774703,  -3.678198637390036,  -2.9604962169635076, //Input 0
        -6.59030690954159,   -3.7790406961228347, -2.845930422442215   //Input 1
    },
    { //Second layer: 3 input and 1 output
      //Output 0
        -5.27100082610628,  //Bias for hidden second layer
        -10.45330943056037, //Input 0
        6.582922049952558,  //Input 1
        4.7139611039662945  //Input 2
    }
};
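To make the weight layout concrete, here is a minimal stand-alone forward pass over weights stored in this format (a sketch, not ShapeLogic's own code; the logistic sigmoid activation is an assumption). In each layer's flat array, the first block holds the bias for every output and each following block holds one input's weights for every output:

//Sketch only: evaluates layers in the layout shown above.
//Assumption: logistic sigmoid activation; ShapeLogic's actual activation may differ.
public class ForwardPassSketch {

    static double sigmoid(double x) {
        return 1.0 / (1.0 + Math.exp(-x));
    }

    //weights = {bias for each output, then one block per input with that input's weight for each output}
    static double[] layer(double[] input, double[] weights) {
        int numOutputs = weights.length / (input.length + 1);
        double[] output = new double[numOutputs];
        for (int j = 0; j < numOutputs; j++) {
            double sum = weights[j]; //bias
            for (int i = 0; i < input.length; i++) {
                sum += input[i] * weights[(i + 1) * numOutputs + j];
            }
            output[j] = sigmoid(sum);
        }
        return output;
    }

    static double[] feedForward(double[] input, double[][] layers) {
        double[] signal = input;
        for (double[] weights : layers) {
            signal = layer(signal, weights);
        }
        return signal;
    }
}

Under the sigmoid assumption, feeding WEIGHTS_FOR_XOR through feedForward gives an output close to 1 for the inputs (0,1) and (1,0) and close to 0 for (0,0) and (1,1), matching XOR.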
Joone is a very user friendly system that takes input and produces output in several different formats: database, Excel and flat file. The weights it produces need to be put into the format described above.
Joone separates the weights from the bias, while ShapeLogic uses the bias as the first weight of each layer.
You can export the weights from Joone by going in and inspecting both the synapses and the layers, then copying and pasting them into a spreadsheet or a text file.
The copy does not work under all versions of Linux, but it works fine under Windows.
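Since Joone keeps the bias separate, a small helper along these lines (just a sketch, not part of either library; it assumes the copied matrix is laid out with one row per input and one column per output) can merge a bias vector and a weight matrix into the single flat layer array ShapeLogic expects, with the biases first:

//Sketch: combine a separate bias vector and weight matrix into one
//ShapeLogic-style layer array: biases first, then one block per input.
//bias[j]       = bias for output j
//weights[i][j] = weight from input i to output j
public class JooneToShapeLogicSketch {
    static double[] toShapeLogicLayer(double[] bias, double[][] weights) {
        int numOutputs = bias.length;
        int numInputs = weights.length;
        double[] layer = new double[(numInputs + 1) * numOutputs];
        for (int j = 0; j < numOutputs; j++) {
            layer[j] = bias[j];
        }
        for (int i = 0; i < numInputs; i++) {
            for (int j = 0; j < numOutputs; j++) {
                layer[(i + 1) * numOutputs + j] = weights[i][j];
            }
        }
        return layer;
    }
}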
The implementation of feed forward neural networks in ShapeLogic's declarative programming framework is called FFNeuralNetworkStream.
It consists of 3 linked streams; the last 2 streams are created by attaching a Calc1 to the stream before them:
Dependent streams should be defined in the database or flat file.
Machine learning is by now a well-established field that has been around for decades. Here are a few techniques: