#include <vector>
#include <iostream>
#include <ctime>
#include <cstdlib> // for system("pause")

using std::vector;

// One training sample: an input pattern and its expected output.
struct learn {
    vector<double> value;
    vector<double> target;
};

int main()
{
    // Topology for both networks: 6 inputs, hidden layers of 6 and 4, 1 output.
    vector<int> topologyMine = { 6, 6, 4, 1 };
    vector<unsigned> topologyHis = { 6, 6, 4, 1 };

    nn::network n;            // my implementation
    n.init(&topologyMine);
    Net myNet(topologyHis);   // his implementation (the tutorial's Net)
    // Four training samples: six binary inputs each, one target output.
    learn l[4];
    l[0].value = { 0.0, 0.0, 1.0, 1.0, 0.0, 0.0 };  l[0].target = { 1.0 };
    l[1].value = { 1.0, 1.0, 0.0, 0.0, 1.0, 1.0 };  l[1].target = { 0.0 };
    l[2].value = { 1.0, 0.0, 0.0, 1.0, 1.0, 0.0 };  l[2].target = { 0.0 };
    l[3].value = { 0.0, 1.0, 1.0, 0.0, 0.0, 1.0 };  l[3].target = { 1.0 };
    vector<double> resultVals;
    learn c;

    // Time his network: 60000 passes, cycling through all four samples.
    auto start = std::clock();
    for (int a = 0; a < 60000; a++)
    {
        c = l[a % 4]; // cycle over all four samples
        myNet.feedForward(c.value);
        myNet.getResults(resultVals);
        myNet.backProp(c.target);
    }
    auto end = std::clock();
    std::cout << "his: " << (end - start) * 1000.0 / CLOCKS_PER_SEC << " milliseconds\n";
    // Time my network on the identical workload, including the
    // per-iteration sample copy, so the comparison is fair.
    start = std::clock();
    for (int a = 0; a < 60000; a++)
    {
        c = l[a % 4];
        n.feed(&c.value);
        n.results(&resultVals);
        n.learn(&c.target);
    }
    end = std::clock();
    std::cout << "mine: " << (end - start) * 1000.0 / CLOCKS_PER_SEC << " milliseconds\n";
    system("pause");
    return 0;
}
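One note on the measurement itself: std::clock() counts CPU time at fairly coarse resolution. For wall-clock timings, std::chrono::steady_clock is the safer choice. A minimal sketch of the same timing scaffold (independent of either network class):

#include <chrono>
#include <iostream>

int main()
{
    auto start = std::chrono::steady_clock::now();
    // ... run the 60000 training iterations here ...
    auto end = std::chrono::steady_clock::now();
    auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(end - start).count();
    std::cout << "elapsed: " << ms << " milliseconds\n";
}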
Inside the namespace nn I tried:
- replacing pointers with references
- using struct connection instead of class connection
- defining the variables eta, alpha and error_smoothing_factor as static

The last one made things a little better, but my version is still slower.
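For illustration, the pointer-to-reference change on the input interface would look roughly like this (a sketch only; feed_ptr and feed_ref are hypothetical names, and this change by itself should not affect speed):

#include <vector>
using std::vector;

// Pointer version, as in the feed() shown below; called as feed_ptr(&c.value).
void feed_ptr(const vector<double> *in) { /* ... */ }

// Reference version: same cost in practice, but it cannot be null
// and the call site is cleaner: feed_ref(c.value).
void feed_ref(const vector<double> &in) { /* ... */ }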
void feed(vector<double> *in)
{
    for (size_t n = 0; n < layers[0].neuron.size(); n++) {
        layers[0].neuron[n].outputvalue = (*in)[n];
    }
}
The sizes don't match: *in is smaller than layers[0].neuron, so (*in)[n] reads past the end of the input vector on the last iterations.
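A fixed version loops over the input's size and asserts that the shapes agree. A sketch, assuming (as in the tutorial's Net) that layer 0 carries one extra bias neuron, which would explain why *in is smaller:

// Inside the same network class; needs #include <cassert>.
void feed(const vector<double> &in)
{
    // Layer 0 is expected to hold in.size() inputs plus one bias neuron.
    assert(in.size() + 1 == layers[0].neuron.size());
    for (size_t i = 0; i < in.size(); i++) {
        layers[0].neuron[i].outputvalue = in[i];
    }
}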
About the things you've tried: why did you do that? They are largely irrelevant to performance. The exception is defining the variables as static, which is more a conceptual question: are those attributes per neuron, or common to the whole class?
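Since eta, alpha and the smoothing factor are the same for every neuron, static members are the natural fit: one copy per class instead of one per neuron. A sketch, assuming a Neuron class along the lines of the tutorial's (the values are just examples):

class Neuron
{
public:
    double outputvalue;    // per neuron: each neuron has its own output
private:
    static double eta;     // per class: the learning rate is shared
    static double alpha;   // per class: the momentum term is shared
};

// Static members are defined once, outside the class.
double Neuron::eta   = 0.15;
double Neuron::alpha = 0.5;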
I'm recoding the whole thing because of that mistake, and I bet there's something more I can't find. I also structured it this way because I'm going to extend it: as you can see in the code, the layer is a class and so is the connection, meaning they will get functions of their own later. There will also be an extra class that creates network objects, the same way the network class creates its neurons and connections. A rough sketch of that idea follows.
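The names here are hypothetical (network_factory is not in the existing code), and it assumes nn::network is copyable and that init() copies the topology it is given:

#include <vector>
using std::vector;

// Builds configured networks, the same way nn::network builds
// its own neurons and connections.
class network_factory
{
public:
    nn::network create(vector<int> topology) const
    {
        nn::network net;
        net.init(&topology);
        return net;
    }
};

// Usage: network_factory f; nn::network n = f.create({6, 6, 4, 1});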
It's really interesting to mess around with this stuff, but seeing my code run slower feels quite bad. I guess it is slower right now because some of the loops used the wrong size.