I'm doing some scientific programming (dynamic mesh relaxation). My best language is Python, but I'm converting my code to C++ for speed. I've got it working, but I have some questions about how to optimize it for speed. My first question is this: if I'm doing a lot of vector operations, does it hurt me to allocate new memory often? E.g., here's a simple vector function:
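(The original snippet is missing here; judging from the answers below, it was something like the following hypothetical reconstruction: a vAdd that heap-allocates a fresh 3-element result on every call.)

```cpp
// Hypothetical reconstruction of the function being discussed:
// adds two 3-vectors and returns a freshly heap-allocated result.
// The caller is responsible for delete[]-ing the returned pointer.
float* vAdd(const float* a, const float* b) {
    float* n = new float[3];  // <-- the allocation the question is about
    for (int i = 0; i < 3; ++i)
        n[i] = a[i] + b[i];
    return n;
}
```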
In an outer function, I may call this 4 times, storing the results in variables n1, n2, n3, n4. Would it be smarter to make n1, n2, n3, n4 global variables so as not to have to reallocate memory for each call? I.e., also pass in a pointer to the appropriate n.
You are right. If it were only 4 calls it wouldn't affect speed. But I call THAT function once for each of my 1000 edges, and then all the edges once for each of my 100,000 iterations! So we're talking 400 million calls, which does matter. I guess my question is whether the memory allocation time is on the order of the calculation time, or whether it is much smaller (or larger!)?
Avoid dynamic memory allocation if you can. new and delete are extremely slow in C++: every call to new typically costs 80-300 CPU cycles on modern processors if you are lucky (= no cache miss). So you might be much better off placing your vectors on the stack and copying them (even if it required copying 32 B, it would still be faster). The float calculations are negligible compared to the allocation time.
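A minimal sketch of the stack-and-copy approach (the Vec3 alias and signature are my own names, not from your code): return a small fixed-size value instead of a pointer to a new'd array.

```cpp
#include <array>

// A 3-float value type; 12 bytes, trivially copyable.
using Vec3 = std::array<float, 3>;

// Returns by value: the result lives on the stack (or in registers),
// so there is no heap allocation and nothing for the caller to free.
Vec3 vAdd(const Vec3& a, const Vec3& b) {
    return { a[0] + b[0], a[1] + b[1], a[2] + b[2] };
}
```

The copy on return is cheap, and in practice compilers usually elide it entirely (return-value optimization).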
Dynamic allocation of arrays is needed when either the number of elements is not known at compile time or the size isn't fixed for the lifetime of the array. Since your vAdd function always allocates a fixed-length array of 3 elements, dynamic memory allocation is not needed.
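For example (keeping a free-function style close to what you describe; the out-parameter convention is just one option), the caller can own fixed-size storage and vAdd can write into it:

```cpp
// The caller supplies the 3-element output array;
// no allocation happens inside the function.
void vAdd(const float a[3], const float b[3], float out[3]) {
    for (int i = 0; i < 3; ++i)
        out[i] = a[i] + b[i];
}
```

With this shape, n1..n4 can simply be local arrays in the outer function (float n1[3]; etc.): stack storage, reused on every iteration at no cost, and no need for globals.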
Dynamic allocation of arrays is needed when either the number of elements is not known at compile time or the size isn't fixed for the lifetime of the array.
...or if the data is so large it wouldn't fit on the stack, or you need to pass the objects up the stack (which would otherwise require copying).
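A sketch of that run-time-sized case (vAddN is an illustrative name): when the element count is only known at run time, std::vector is the idiomatic way to get the heap allocation managed for you.

```cpp
#include <vector>

// Element count known only at run time, so the storage must be
// heap-allocated; std::vector owns and frees that allocation.
std::vector<float> vAddN(const std::vector<float>& a,
                         const std::vector<float>& b) {
    std::vector<float> out(a.size());
    for (std::size_t i = 0; i < a.size(); ++i)
        out[i] = a[i] + b[i];
    return out;  // moved out, not copied, in C++11 and later
}
```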