But isn't ms just the inverse of FPS?
FPS = 1000 / msPerFrame
ms = 1000 / FPS
If a frame takes 20ms to render, it means in one second I can render 50 frames, or 50 FPS.
If my program can output 1500 FPS, it takes about 0.67ms to render one frame.
If one method gives me 500 FPS and the other 1500 FPS, my program runs 3 times slower with the first one. How I measure it doesn't matter: 2ms per frame with one method versus 0.67ms with the other is still 3 times slower.
But of course, generally you are right, and I should switch to ms. FPS counting only shows me how fast the whole process of rendering one frame is, not the individual parts inside it.
Anyway, I have now put measuring code (nanoseconds, not FPS) around the call to sprite->Draw(), and it seems to run "only" 40% slower instead of 3 times slower. My guess is that the difference comes from the overhead of the measuring itself...
But if that is not the problem, why does my program suddenly run so much slower, when I change this ONE line?
I tried it both in release and debug mode, by the way. Same slowdown.
I managed to upload the code to GitHub now, in case someone wants to take a quick look without downloading something:
https://github.com/TheHorscht/test/blob/master/Renderer.h#L113
The line directly below it is the other, faster method.