gtm wrote:
> Say the game is running through loop n, and the timer function determines that it ran the calculations in < 1/60 of a second (aiming for 60fps max), so it decides not to render the frame and instead jump to game loop n+1.
>
> Now, in loop n, we performed everything just under 1/60 of a second, i.e. 0.016s; let's say we did it in 0.012s. Let's imagine loop n+1 also runs in 0.012s: logically we would have some sort of logic to draw the frame for sure, since the previous one wasn't drawn, but this brings up an issue: we just spent 0.024s to draw 1 frame, i.e. we dropped our frame rate to ~41fps rather than capping it.
>
> This is where my mind goes blurry and I imagine some sort of multithreaded system where one thread draws whatever game data is currently available, and another does all the game calculations.
You're welcome, gtm. In the case where loop n doesn't render a frame, you should not update the "prevTime" variable that stores the time at which the last frame was rendered. That way, in loop n+1, your currentTime - prevTime calculation gives you the time since the last *rendered* frame, not since the last loop iteration (n). So in your example you would get a rendered frame in n+1.
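Here's a minimal sketch of that idea, assuming a C++/std::chrono setup; update() and render() are hypothetical stand-ins for your own game code:

```cpp
#include <chrono>

void update() { /* advance the game state (stub) */ }
void render() { /* draw the current state (stub) */ }

int main() {
    using clock = std::chrono::steady_clock;
    const std::chrono::duration<double> frameTime(1.0 / 60.0);  // 60fps cap

    auto prevTime = clock::now();  // time of the last *rendered* frame
    while (true) {
        update();  // game calculations run every iteration

        auto now = clock::now();
        if (now - prevTime >= frameTime) {
            render();
            prevTime = now;  // advance only when a frame was actually drawn
        }
        // otherwise prevTime stays put, so loop n+1 measures elapsed time
        // since the last rendered frame, not since loop n
    }
}
```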
Indeed, in the case of your example, the elapsed time would be greater than 0.016s for n+1. However, your frame rate for that second wouldn't be 41fps, because that would assume you are rendering a frame every 0.024s at the least. What actually happened is that one frame rendered 0.008s later than the minimum interval you intended between frames (on average; in practice, as your example shows, it can be exceeded), while the other frames in that second still arrive roughly on schedule. That is a very small amount of time, and keep in mind that most loop iterations will complete much faster, and will notice that they have exceeded their 1/60th of a second much sooner.

Also keep in mind that animation above 24 frames per second is considered more-or-less "smooth"; the additional frames make the overall result even more seamless and, in the case of real-time rendering, absorb abrupt changes in computational requirements such as loading new environments, calculating a considerable amount of NPC behavior, etc. So setting your frame limit at exactly 60 means you may occasionally lose the time one extra loop iteration needs to complete and notice that the 0.016s budget has been exceeded. The difference is negligible in effect, though; try it out and see the visual and interactive result. The 60fps limit is essentially chosen so that this occasional stagger doesn't create visual discomfort. You may set it a little higher if you wish, although keep in mind that your monitor also has a physical refresh rate that your program cannot exceed.
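As a back-of-the-envelope check, using the numbers from your example and assuming the other 59 frames in that second land right on the 1/60s cap:

```cpp
#include <cstdio>

int main() {
    double cap = 1.0 / 60.0;            // ~0.0167s per frame at the limit
    double total = 59 * cap + 0.024;    // one frame arrives 0.008s late
    std::printf("effective fps: %.1f\n", 60 / total);  // ~59.6, not 41
}
```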
I don't think there's any need for multithreading for this purpose. I suggest you write a small test application and try frame limiting at 60fps. You may notice that your fps counter reports something like 59-60fps as it updates every second; that is the result of exactly what you describe above. Try it out to see the actual effect, though.
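Something like this sketch could serve as that test application (assumed C++/std::chrono again; the 1ms sleep stands in for one fast loop iteration's worth of work):

```cpp
#include <chrono>
#include <cstdio>
#include <thread>

int main() {
    using clock = std::chrono::steady_clock;
    const std::chrono::duration<double> frameTime(1.0 / 60.0);

    int frames = 0;
    auto prevTime = clock::now();   // last rendered frame
    auto windowStart = prevTime;    // start of the 1-second counter window

    while (true) {
        // stand-in for one loop iteration's worth of game calculations
        std::this_thread::sleep_for(std::chrono::milliseconds(1));

        auto now = clock::now();
        if (now - prevTime >= frameTime) {
            ++frames;               // a frame would be rendered here
            prevTime = now;
        }
        if (now - windowStart >= std::chrono::seconds(1)) {
            // typically prints a value just under the cap, e.g. 58-60,
            // depending on how late each frame lands past the 1/60s mark
            std::printf("fps: %d\n", frames);
            frames = 0;
            windowStart = now;
        }
    }
}
```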
Also, you should update your animations parametrically, based on elapsed seconds, to ensure that they advance smoothly: the actual number of frames per second will vary, and this variation needs to be reflected in the per-frame updates of the animation. If you're doing 3D graphics, or rectangles moving across the screen, interpolate through your animation curves by the elapsed time since the last rendered frame. If you're doing 2D pre-rendered spritesheets, update the sprite based on its own intended frame rate (e.g., if your spritesheet is designed for 30fps, with one explicit sprite per frame, and your app has a 60fps limit, advance the spritesheet after every 1/30th of a second, so that your character's animation is timed as originally designed/intended).
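A rough sketch of both ideas; the Sprite type, its 8-frame sheet, and the 120-units-per-second movement are illustrative assumptions, and the 30fps sheet rate matches the example above:

```cpp
#include <chrono>

// A spritesheet authored at 30fps, advanced by wall-clock time rather than
// by rendered-frame count, so it plays at its designed speed even when the
// app's actual fps fluctuates under its 60fps cap.
struct Sprite {
    int frameCount = 8;                  // frames in the sheet (assumed)
    int current = 0;
    double frameDuration = 1.0 / 30.0;   // the sheet's own frame rate
    double accumulator = 0.0;

    void advance(double dtSeconds) {
        accumulator += dtSeconds;
        while (accumulator >= frameDuration) {
            accumulator -= frameDuration;
            current = (current + 1) % frameCount;
        }
    }
};

int main() {
    using clock = std::chrono::steady_clock;
    Sprite walkCycle;
    double x = 0.0;                      // e.g., a rectangle moving right

    auto lastRender = clock::now();
    while (true) {
        auto now = clock::now();
        std::chrono::duration<double> dt = now - lastRender;
        lastRender = now;

        x += 120.0 * dt.count();         // parametric: 120 units per second
        walkCycle.advance(dt.count());   // sprite keeps its 30fps timing
    }
}
```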