What is slowing my code down?

Hello everyone!

Here's my code:
http://pastebin.com/r9shpjbF


and here's the code I rewrote it from:
http://pastebin.com/p9w7zd5T


This is how I run them:
#include <ctime>
#include <iostream>
#include <vector>
using namespace std;

struct learn{
	vector<double> value;
	vector<double> target;
};


int _tmain(int argc, _TCHAR* argv[])
{
	vector<int> nnn;
	vector<unsigned> nn;

	nnn.push_back(6);
	nnn.push_back(6);
	nnn.push_back(4);
	nnn.push_back(1);

	nn.push_back(6);
	nn.push_back(6);
	nn.push_back(4);
	nn.push_back(1);
	nn::network n;
	n.init(&nnn);



	Net myNet(nn);
	learn l[5];
	l[0].value.push_back(0.0);
	l[0].value.push_back(0.0);
	l[0].value.push_back(1.0);
	l[0].value.push_back(1.0);
	l[0].value.push_back(0.0);
	l[0].value.push_back(0.0);

	l[0].target.push_back(1.0);


	l[1].value.push_back(1.0);
	l[1].value.push_back(1.0);
	l[1].value.push_back(0.0);
	l[1].value.push_back(0.0);
	l[1].value.push_back(1.0);
	l[1].value.push_back(1.0);

	l[1].target.push_back(0.0);


	l[2].value.push_back(1.0);
	l[2].value.push_back(0.0);
	l[2].value.push_back(0.0);
	l[2].value.push_back(1.0);
	l[2].value.push_back(1.0);
	l[2].value.push_back(0.0);

	l[2].target.push_back(0.0);


	l[3].value.push_back(0.0);
	l[3].value.push_back(1.0);
	l[3].value.push_back(1.0);
	l[3].value.push_back(0.0);
	l[3].value.push_back(0.0);
	l[3].value.push_back(1.0);

	l[3].target.push_back(1.0);



	vector<double> inputVals, targetVals, resultVals;
	int trainingPass = 0;
	learn c;
	int u = 0;
	auto start = std::clock();
	auto end = std::clock();

	start = std::clock();
	for (int a = 0; a < 60000; a++)
	{
		c = l[a % 3];	// note: a % 3 means l[3] is initialized above but never used
		myNet.feedForward(c.value);
		myNet.getResults(resultVals);
		myNet.backProp(c.target);
	}
	end = std::clock();
	std::cout << "his: " << (end - start) * 1000.0 / CLOCKS_PER_SEC << " milliseconds\n";

	start = std::clock();
	for (int a = 0; a < 60000; a++)
	{
		// note: c is never reassigned here, so every iteration feeds the
		// value left over from the last pass of the loop above
		n.feed(&c.value);
		n.results(&resultVals);
		n.learn(&c.target);
	}
	end = std::clock();
	std::cout << "mine: " << (end - start) * 1000.0 / CLOCKS_PER_SEC << " milliseconds\n";

	system("pause");
	return 0;
}


Inside the namespace nn I tried:
replacing pointers with references
using a struct for connection instead of a class
defining the variables eta, alpha, and error_smoothing_factor as static

The last one made things a little better, but mine is still slower.

his: ~90 ms
mine: ~150 ms

Where did I go wrong?
Thanks!
I got a segmentation fault in your code:
		void feed(vector<double> *in)
		{
			int n;
			for (n = 0; n < layers[0].neuron.size(); n++){
				layers[0].neuron[n].outputvalue = (*in)[n];
			}
The sizes don't match; *in is smaller than the input layer.


About the things you've tried: why did you do that?
They are mostly irrelevant to performance, except for defining the variables as static, which is really a conceptual question of whether those attributes belong to each neuron or are common to the whole class.

Get a profiler and work on what matters.
I'm recoding the whole thing because of that mistake.
I bet there's something more I can't find.

Also, I did it that way because I'm going to try adding more to it.
As you can see in the code, layer is a class and so is connection, meaning they will get member functions in the future.

There will also be an extra class that creates network objects, the same way the network class creates neurons and connections.

It's just really interesting to mess around with this stuff, but seeing my code run slower makes me feel quite bad. I guess it's slower right now because I used the wrong size in some loops.