I guess this is a heap/stack question: from the outset, should I write the code so an object is instantiated and its members accessed with the (.) member selector, or so the object is accessed through a pointer with (->)? I am mainly concerned with speed.
Camera camera;
camera.phi = 7;
-or-
Camera camInstance;
Camera * pCam = &camInstance;
pCam->phi = 7;
. is the operator used to access a member of an object.
-> is the operator to access a member of an object being pointed to.
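To make that concrete, here is a minimal sketch (the Camera type is assumed to have a phi member, as in the question): p->phi is defined to mean (*p).phi, so both spellings name the same member of the same object.

#include <iostream>

struct Camera {
    double phi = 0.0; // assumed member, as in the question
};

int main() {
    Camera camera;          // object on the stack
    Camera* pCam = &camera; // pointer to that same object

    camera.phi = 7;  // access through the object itself
    pCam->phi = 8;   // access through a pointer; same as (*pCam).phi = 8;

    std::cout << camera.phi << '\n'; // prints 8: both names refer to one object
}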
Nothing to do with heap and stack. Your own example has everything on the stack and nothing on the heap.
As for which is faster: I expect the key here is whether or not the object is in the cache, or whether the access has to go all the way out to RAM. What puts an object in the cache is that you used it recently, or that it happens to sit next to something you used recently.
Enter your code into godbolt (the Compiler Explorer at godbolt.org) and see what the assembly looks like for . vs ->; but in terms of speed, this kind of micro-optimisation is basically 100% meaningless.
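For instance, this is the kind of snippet you might paste into godbolt.org to compare (the function names are mine, for illustration). With optimisations enabled, e.g. -O2, mainstream compilers typically emit the same store instruction for both forms, since a reference is compiled the same way as a pointer.

struct Camera { double phi; };

// Compile with e.g. g++ -O2 and compare the assembly for the two functions.
void viaDot(Camera& cam)   { cam.phi = 7; }  // uses .
void viaArrow(Camera* cam) { cam->phi = 7; } // uses ->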
Yes, micro-optimisation is meaningless at this level; but do you think that at a lower level, such as embedded or a Raspberry Pi, such micro-optimisations are amplified into real factors? I don't know. Thanks for the godbolt referral.
If you're dereferencing a pointer, there is one extra step: reading that pointer. So by using a pointer you have created that extra step. How long will it take to read that pointer? It depends where it is: a register is faster than cache, which is faster than the slower cache levels, which are faster than RAM. Depending on what you're doing, accessing an object can be hundreds of times faster if you code carefully. I suppose it might be possible to sprinkle your pointers all over memory so that you've deliberately made them expensive to read, but that's hardly a fair test. The answer is still basically no.
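A rough way to see the locality effect described above (not the . vs -> distinction itself) is to iterate over objects stored contiguously versus the same objects reached through shuffled pointers. This is just a sketch; the element count, padding, and timing method are my own choices for illustration, and real numbers will vary by machine.

#include <algorithm>
#include <chrono>
#include <cstddef>
#include <iostream>
#include <random>
#include <vector>

struct Camera { double phi = 0.0; double pad[7]; }; // padded to 64 bytes, an assumed cache-line size

int main() {
    const std::size_t n = 1'000'000;
    std::vector<Camera> cams(n);   // contiguous storage
    std::vector<Camera*> ptrs(n);
    for (std::size_t i = 0; i < n; ++i) ptrs[i] = &cams[i];
    std::shuffle(ptrs.begin(), ptrs.end(), std::mt19937{42}); // scatter the access order

    auto time = [](auto&& body) {
        auto t0 = std::chrono::steady_clock::now();
        body();
        return std::chrono::duration<double, std::milli>(
                   std::chrono::steady_clock::now() - t0).count();
    };

    double sum = 0.0;
    double tDirect = time([&] { for (auto& c : cams) sum += c.phi; });  // sequential, cache-friendly
    double tPtr    = time([&] { for (auto* p : ptrs) sum += p->phi; }); // pointer-chasing, cache-hostile

    std::cout << "sequential: " << tDirect << " ms, shuffled pointers: "
              << tPtr << " ms (sum=" << sum << ")\n";
}

The point is that the slowdown in the second loop comes from where the objects are relative to the access pattern, not from the -> operator itself.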
do you think [sometimes] such micro optimizations are amplified into real factors
The answer is "rarely". In any event, it's mostly the compiler's job to micro-optimize.
IOW, the cases where micro-optimizations will bring a benefit large enough to solve the problem (i.e., to meet an otherwise unsatisfied design constraint) are so rare that there is no point in making such changes until you have measured.
Example:
You've written your code with the best known algorithm and data structures, your problem cannot be simplified any further, purchasing better hardware is not feasible, and your program consistently executes at only some n < 100 percent of the required speed. In a case like this, maybe you could profile your code, optimize a hot inner loop somewhere, and not be wasting your time.
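If you do reach that point, measurement comes first. A minimal timing harness along these lines (my own sketch; serious profiling would use a tool such as perf) at least tells you whether the loop you suspect is actually hot, and whether a change moved the needle. The hotLoop function here is a hypothetical stand-in; the pattern of repeating the run and keeping the best time is the point.

#include <chrono>
#include <cstdio>

// Hypothetical hot loop: sum of squares. Replace with the code under suspicion.
static double hotLoop(const double* data, int n) {
    double s = 0.0;
    for (int i = 0; i < n; ++i) s += data[i] * data[i];
    return s;
}

int main() {
    constexpr int n = 1 << 20;
    static double data[n];
    for (int i = 0; i < n; ++i) data[i] = i * 0.001;

    // Repeat the measurement and keep the best run, so a one-off cache miss
    // or scheduler hiccup doesn't mislead you.
    double best = 1e300, sink = 0.0;
    for (int run = 0; run < 5; ++run) {
        auto t0 = std::chrono::steady_clock::now();
        sink += hotLoop(data, n);
        auto ms = std::chrono::duration<double, std::milli>(
                      std::chrono::steady_clock::now() - t0).count();
        if (ms < best) best = ms;
    }
    std::printf("best of 5: %.3f ms (sink=%g)\n", best, sink);
}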
Remember: programmer time is expensive; computer time is cheap. Write your code for strict correctness and readability, and your programs will usually perform well for free, and require only a fraction of the programmer time that a "clever" solution would.