My experience in games has always been to do things by frames rather than by real time, the way it was done on retro systems like the NES.
The framerate would be the measurement of time, rather than the clock. So instead of a frame of animation lasting X milliseconds, it would last Y frames. AI and physics logic would iterate every frame, rather than every X milliseconds.
For this purpose, sleeping between frames (provided you're running fast enough) works just fine. Worst case, if the sleep is too coarse the video gets a little jerky, but it won't adversely affect any logic, and since frames are ~16.67 ms apart at 60 FPS that's hardly ever a problem (in my experience).
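Roughly, the loop I'm describing looks like this. Just a sketch: Tick() is the same hypothetical millisecond counter as in the snippet further down, and RunOneFrame()/RenderFrame() are stand-ins for whatever your game actually does each frame.

const unsigned FRAME_MS = 1000 / 60;   // ~16 ms per frame at 60 FPS
unsigned next = Tick() + FRAME_MS;
while( game_running )
{
    RunOneFrame();                     // logic/AI/animation advance by exactly one frame
    RenderFrame();
    unsigned now = Tick();
    if( now < next )
        Sleep( next - now );           // hand the CPU back until the next frame is due
    next += FRAME_MS;                  // if we overslept, worst case is a jerky frame or two
}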
The biggest problem with the framerate approach is running on systems where the user has their monitor set to a conflicting refresh rate, i.e. a 60 Hz game running on a 75 Hz monitor isn't going to look as good.
"sleep(5) [snip] it's entirely possible that it'll stall for upwards of 100-150msec, even without any other major processes demanding cpu time from the system."
I don't know about this. I mean, possible, sure, but how realistic is that? I use Sleep regularly in emulators and other games I've written in the past and have never had a small sleep call take that long to return.
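For what it's worth, it's easy enough to measure. Here's a quick Win32-only check I'd throw together (assuming timeGetTime() for millisecond timing, which means linking winmm.lib; the exact numbers will obviously vary by machine and scheduler settings):

#include <stdio.h>
#include <windows.h>
#pragma comment(lib, "winmm.lib")   // timeGetTime() lives in winmm

int main(void)
{
    DWORD worst = 0;
    for (int i = 0; i < 100; ++i)
    {
        DWORD before  = timeGetTime();
        Sleep(5);
        DWORD elapsed = timeGetTime() - before;
        if (elapsed > worst)
            worst = elapsed;        // track the longest a nominal 5 ms sleep actually took
    }
    printf("worst Sleep(5) out of 100 calls: %lu ms\n", (unsigned long)worst);
    return 0;
}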
I've been meaning to experiment with a real-time approach rather than a framerate approach, but it's always seemed way more complicated and more error-prone.
How do games do it, anyway? I was picturing something like this:
unsigned lastclock = Tick();
while( game_running )
{
    unsigned x = Tick() - lastclock;
    RunGameLogicAndAnimations( x );   // run the game forward by 'x' milliseconds
    lastclock += x;
}
But this has a host of problems:
1) It sucks up 100% CPU time even if nothing is really going on in the game, and afaik, the only way around that is to sleep, so basically you have an "unreliable" timing mechanism or you have a CPU hog. I'm not sure which is worse.
2) 'x' is inconsistent. It will be really small sometimes and possibly much larger other times, so the AI, animation, etc. code has to be able to update in intervals of anywhere between 1-20 ms. This could adversely affect physics and AI calculations, or at the very least make them waaay more complicated.
3) The inconsistent update makes things like movie recording, rewinding, etc. much more difficult, since to play back a recorded movie you'd have to run the iterations with the same values for 'x' as when the movie was recorded. (One possible compromise is sketched right after this list.)
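One compromise that might address 2) and 3) is to keep measuring real time as above but only feed the logic fixed frame-sized steps, so 'x' is always the same value and the sleep only hurts smoothness, not correctness. Again just a sketch, reusing the hypothetical Tick()/Sleep()/RunGameLogicAndAnimations() from above plus a stand-in RenderFrame():

const unsigned STEP_MS = 1000 / 60;             // logic always advances in ~16 ms steps
unsigned lastclock = Tick();
unsigned accumulated = 0;
while( game_running )
{
    unsigned now = Tick();
    accumulated += now - lastclock;             // bank however much real time has passed
    lastclock = now;
    while( accumulated >= STEP_MS )
    {
        RunGameLogicAndAnimations( STEP_MS );   // 'x' is always STEP_MS, so replays stay deterministic
        accumulated -= STEP_MS;
    }
    RenderFrame();
    Sleep( 1 );                                 // hand the CPU back; a coarse sleep just delays a step, it doesn't change it
}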
I don't know... what do you guys think?