I have a somewhat beginner question; hopefully some of you can give a meaningful answer.
Consider the following two samples, both of which do the same thing:
int external = /* some runtime value */;
// here we precompute factor
int factor = 5 + external;
int x = 3;
int y = 3;
int result_x = x * factor;
int result_y = y * factor;
// here we don't precompute factor
int x = 3;
int y = 3;
int result_x = x * (5 + external);
int result_y = y * (5 + external);
I understand that the benefit of the first example scales linearly, that is, storing factor in a local variable pays off more the more times factor is used. The question is: how many times does factor have to be used for the stack allocation of this factor to be worthwhile?
I know stack allocation is very fast and that the actual allocation time depends on the data type, but in this example we are working with PODs.
Basically, which of the two samples is more efficient, and what would be the general rule for stack allocations? That is, when should I and when should I not introduce a local variable like factor for simple code such as this?
edit:
By the way, I don't care whether the code looks nicer or is easier to read; I care only about efficiency.
Also, I know the computation time in this sample is so small that it makes practically no difference, but consider putting this code into a for loop that runs for about 5 minutes. I'm sure the times for the two samples won't be the same.
I could test this with a for loop to find out, but the point is to be able to tell the difference without testing, because such small calculations are very common as you write code, and it isn't possible to benchmark every one.
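For illustration, this is a minimal sketch of the kind of loop test I mean (the iteration count, reading external from argc, and the use of <chrono> are just assumptions I made up so the compiler can't fold everything at compile time; I haven't actually measured anything with it):

#include <chrono>
#include <iostream>

int main(int argc, char**) {
    int external = argc;                    // stand-in for "some runtime value"
    const long long iterations = 100000000; // arbitrary count, just for illustration

    // Variant 1: precompute factor once per iteration
    long long sum1 = 0;
    auto t1 = std::chrono::steady_clock::now();
    for (long long i = 0; i < iterations; ++i) {
        int factor = 5 + external;
        int x = 3, y = 3;
        sum1 += x * factor + y * factor;    // use the results so they aren't optimized away
    }
    auto t2 = std::chrono::steady_clock::now();

    // Variant 2: recompute (5 + external) each time
    long long sum2 = 0;
    auto t3 = std::chrono::steady_clock::now();
    for (long long i = 0; i < iterations; ++i) {
        int x = 3, y = 3;
        sum2 += x * (5 + external) + y * (5 + external);
    }
    auto t4 = std::chrono::steady_clock::now();

    std::cout << "variant 1: "
              << std::chrono::duration_cast<std::chrono::milliseconds>(t2 - t1).count()
              << " ms (checksum " << sum1 << ")\n"
              << "variant 2: "
              << std::chrono::duration_cast<std::chrono::milliseconds>(t4 - t3).count()
              << " ms (checksum " << sum2 << ")\n";
}

Of course, with optimizations enabled the compiler may well turn both loops into the same code anyway, which is exactly why I'd like a rule of thumb instead of measuring every time.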