How can I make C++ wait without stopping other functions?

Would someone explain why you shouldn't use a sleep() style function in a game? It's all well and good to make these assertions, but no one has yet shown evidence why not to use it.

sleep() can be useful when used right. Obviously doing something like
/* [...] */

sleep(2000); /* Sleep for 2 seconds to make it look like we're doing something important */

/* [...] */

is pointless, and sticking sleep() in code just to slow it down is useless, but it can be used well. For example, what if you need to wait for all of your threads to die before you exit? You could keep a counter that all of the threads increment when they initialize and decrement when they exit, and then spin on the thread counter until it's 0, using sleep() to avoid CPU wastage (you're not doing any serious work, so you might as well yield your timeslice when you don't need it). Or, you could just exit and hope the OS is smart enough to clean up after you.
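A minimal sketch of that counter idea, using C++11 threads and atomics for brevity (not what code of this era would have used; all names here are mine):

#include <atomic>
#include <chrono>
#include <thread>
#include <vector>

std::atomic<int> g_liveThreads(0); // how many workers are still running

void Worker()
{
    // ... do some work ...
    --g_liveThreads; // signal "I'm done" on the way out
}

int main()
{
    std::vector<std::thread> pool;
    for (int i = 0; i < 4; ++i)
    {
        ++g_liveThreads;           // count the thread *before* it starts, so
        pool.emplace_back(Worker); // main can never see 0 while workers exist
    }
    for (auto& t : pool)
        t.detach(); // detached on purpose: we poll the counter instead of joining

    // Spin on the counter, yielding our timeslice each pass instead of
    // burning CPU on work we don't have.
    while (g_liveThreads > 0)
        std::this_thread::sleep_for(std::chrono::milliseconds(10));
    return 0;
}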
For example, what if you need to wait for all of your threads to die before you exit?
join(). Called WaitForSingleObject() on Win32.

Or, you could just exit and hope the OS is smart enough to clean up after you.
Unless you were very careful, that's almost never a good idea. What if one thread was in the middle of saving state when main() returned? Then the file will be invalid.
And how do pthread_join() and WaitForSingleObject() work? Probably in a similar way.

Unless you were very careful, that's almost never a good idea. What if one thread was in the middle of saving state when main() returned? Then the file will be invalid.

That was my point.
And how do pthread_join() and WaitForSingleObject() work? Probably in a similar way.
They work through notification, not polling. Basically, a thread tells the OS "let me know when that other thread returns; until then, allocate no time slices for me".
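In C++ terms, a sketch using C++11's std::thread as a stand-in for the raw APIs:

#include <thread>

void Worker()
{
    // ... finish work, flush/save any state ...
}

int main()
{
    std::thread t(Worker);

    // join() blocks by notification, not polling: this thread gets no
    // timeslices until Worker returns. It is typically built on
    // pthread_join() on POSIX and WaitForSingleObject() on Win32.
    t.join();

    return 0; // safe to exit: Worker has definitely finished
}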
I guess for non-action oriented casual games, sleep() could be just fine. I meant for my original statement to be aimed more at the games with high framerates, complex animations, physics simulation of some type (doesn't even really have to be advanced simulation), non-trivial opponent AI, etc.

All of those things in a game can be adversely affected by using sleep() anywhere else in the game loop itself, with few exceptions. One area where it's fine to use sleep() is in a pause function, where cpu usage should be negligible, and none of those game systems are even running.

sleep() can cause animations to be choppy/jerky (especially if they use some form of interpolation between keyframes).
sleep() can cause bizarre behaviors in the physics simulation (especially if you are using any sort of spring damping constants).
sleep() can cause flaky and unpredictable AI behavior (For example, some AI schemes rely on snapshots of their environment state that get updated on regular intervals. If those intervals aren't regular, the AI routines can be adversely affected).

And last but not least, sleep() is inaccurate. In a game where you want a minimum of 30 frames per second for smooth animations, and ideally 60 frames per second, you're working with 16-30 msec of time per frame to do everything you need to do to display that frame. The only thing that is guaranteed with something like sleep(5) is that your thread will stall for a minimum of 5 msec. Not only is there no guarantee that it'll start processing again in 5 msec; it's actually quite unlikely. It's more likely it'll come back sometime within 10-15 msec, and it's entirely possible that it'll stall for upwards of 100-150 msec, even without any other major processes demanding CPU time from the system. Even sleep(0) can stall the process for unpredictable periods of time. In applications where timing isn't important, that's not really an issue. In all but the most casual of games, though, timing is very important.
@helios,
Ok, then.

@jRaskell,
Thanks for explaining.
My experience in games has always been to do things by frames, rather than by real time. Like how it was done on retro systems like the NES and such.

The framerate would be the measurement of time, rather than the clock. So instead of a frame of animation lasting X milliseconds, it would last Y frames. AI and physics logic would iterate every frame, rather than every X milliseconds.

For this purpose, sleeping between frames (provided you're running fast enough) works just fine. Worst case scenario is if sleep is too coarse the video will get a little jerky, but it won't adversely affect any logic, and since frames are ~16.67 ms apart at 60 FPS that's hardly ever a problem (in my experience).

The biggest problem with the framerate approach is running on systems where the user has their monitor set to a conflicting refresh rate, e.g. a 60 Hz game running on a 75 Hz monitor isn't going to look as good.
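A frame-locked loop in that style might look something like this (a sketch only; the game hooks are placeholders I've invented, and Sleep()/GetTickCount() are the Win32 calls used elsewhere in the thread):

#include <windows.h> // Sleep(), GetTickCount()

// Placeholder hooks, not from the thread itself:
bool game_running = true;
void UpdateLogic() { /* step AI/physics/animation by exactly one frame */ }
void Render()      { /* draw the current frame */ }

int main()
{
    const DWORD FRAME_MS = 1000 / 60;       // ~16 ms per logic frame
    DWORD next = GetTickCount() + FRAME_MS; // deadline for the next frame

    while (game_running)
    {
        UpdateLogic(); // logic always advances exactly one whole frame
        Render();

        int remaining = int(next - GetTickCount());
        if (remaining > 0)
            Sleep(remaining); // a coarse sleep only delays the display;
                              // the logic never sees an irregular timestep
        next += FRAME_MS;     // schedule against the deadline, so small
                              // oversleeps don't accumulate into drift
    }
    return 0;
}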

sleep(5) [snip] it's entirely possible that it'll stall for upwards of 100-150 msec, even without any other major processes demanding CPU time from the system.


I don't know about this. I mean possible, sure, but how realistic is this? I use Sleep regularly in emulators and other games I've written in the past and never had a small sleep call take so long to return.



I've been meaning to experiment/research with a realtime approach rather than a framerate approach, but it always seemed way more complicated and more prone to errors.

How do games do it, anyway? I was picturing something like this:

unsigned lastclock = Tick();
while( game_running )
{
  unsigned x = Tick() - lastclock;
  RunGameLogicAndAnimations( x  ); // run game for 'x' milliseconds
  lastclock += x;
}


But this has a host of problems:

1) It sucks up 100% CPU time even if nothing is really going on in the game, and afaik, the only way around that is to sleep, so basically you have an "unreliable" timing mechanism or you have a CPU hog. I'm not sure which is worse.

2) 'x' is inconsistent. It will be really small sometimes, and possibly much larger other times. AI, animation etc code has to be able to update in intervals of anywhere between 1-20 ms. This could adversely affect physics and AI calculations, or at the very least make them waaay more complicated.

3) The inconsistent update makes things like movie recording, rewinding, etc much more difficult, since to playback a recorded movie, you'd have to run the iterations with the same values for 'x' as when the movie was recorded.


I don't know... what do you guys think?
You could compromise and "sleep" with an empty loop that continuously checks the time. That would still use 100% of a core, but ensure that a frame takes no less than a set period, simplifying time-dependent calculations a bit.
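Something like this, say (SpinUntil is my own name for it; as noted, it pins a core):

#include <windows.h>
#include <iostream>

// Busy-wait on the high-resolution counter until 'deadline' passes.
// Burns a full core, but returns with far finer granularity than Sleep().
void SpinUntil(__int64 deadline)
{
    __int64 now;
    do {
        QueryPerformanceCounter((LARGE_INTEGER*)&now);
    } while (now < deadline);
}

int main()
{
    __int64 freq, start, end;
    QueryPerformanceFrequency((LARGE_INTEGER*)&freq);
    QueryPerformanceCounter((LARGE_INTEGER*)&start);

    SpinUntil(start + freq / 100); // hold for ~10 ms

    QueryPerformanceCounter((LARGE_INTEGER*)&end);
    std::cout << (end - start) * 1000.0 / freq << " ms\n"; // prints ~10.0
    return 0;
}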
How do games do it, anyway? I was picturing something like this:


This is one of the best articles I've read on the topic: http://gafferongames.com/game-physics/fix-your-timestep/
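The heart of that article is a fixed timestep fed from an accumulator: real elapsed time is banked, then drained in fixed-size slices, so the simulation always steps by the same dt no matter how irregular the frames are. A minimal sketch of the idea (helper names are mine, not the article's):

#include <windows.h>

// Seconds since some arbitrary start, via the performance counter.
double Now()
{
    static __int64 freq = 0;
    if (!freq) QueryPerformanceFrequency((LARGE_INTEGER*)&freq);
    __int64 ctr;
    QueryPerformanceCounter((LARGE_INTEGER*)&ctr);
    return double(ctr) / double(freq);
}

void UpdatePhysics(double dt) { /* step the simulation by a fixed dt */ }
void Render() { /* draw */ }

int main()
{
    const double DT = 1.0 / 60.0; // physics always steps in fixed 1/60 s slices
    double accumulator = 0.0;
    double last = Now();

    for (;;)
    {
        double now = Now();
        accumulator += now - last; // bank however much real time passed
        last = now;

        while (accumulator >= DT)  // drain it in fixed-size steps
        {
            UpdatePhysics(DT);     // deterministic: same dt every time, which
            accumulator -= DT;     // also makes recording/replay feasible
        }
        Render();
    }
}

That sidesteps problems 2 and 3 above: the logic always sees the same interval, and a replay only needs the recorded inputs, not the frame timings.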

I don't know about this. I mean possible, sure, but how realistic is this? I use Sleep regularly in emulators and other games I've written in the past and never had a small sleep call take so long to return.


It's been a very long time since I've really done anything with Sleep() (I'm talking Windows 98 days), so I ran a quick and rather trivial test here and ran into some interesting results.

My test code and sample output:
#include <windows.h>
#include <iostream>
#include <cstdlib>
#include <ctime>

using namespace std;

int main()
{
	srand(time(NULL));
	
	__int64 ctr1 = 0, ctr2 = 0, freq = 0;
	
	// Ticks per second of the high-resolution performance counter
	if(!QueryPerformanceFrequency((LARGE_INTEGER *)&freq)){
		cout << "QueryPerformanceFrequency (" << GetLastError() << ")" << endl;
		return 1;
	}
	
	cout << "Performance Frequency: " << freq << endl;

	if(!QueryPerformanceCounter((LARGE_INTEGER *)&ctr1)){
		cout << "QueryPerformanceCounter (" << GetLastError() << ")" << endl;
		return 1;
	}
		
	for(int x = 0; x < 20; x++){
		int delay = rand() % 50 + 1; // request a random 1-50 ms sleep
		
		Sleep(delay);
		
		if(!QueryPerformanceCounter((LARGE_INTEGER *)&ctr2)){
			cout << "QueryPerformanceCounter2 (" << GetLastError() << ")" << endl;
			return 1;
		}
		
		// Actual elapsed time in milliseconds since the last measurement
		float delta = ((ctr2 - ctr1)*1000)/(float)freq;
		ctr1 = ctr2;
		cout << delay << " " << delta << endl;
	}
	
	return 0;
}
Performance Frequency: 3192040000
Start count: 2543269012439936
20 31.2772
18 31.2414
39 46.9055
48 62.4705
21 31.2477
11 15.6349
2 15.6158
29 31.2502
40 46.9135
15 15.6018
4 15.6107
35 46.8764
26 31.2587
44 46.8904
43 46.85
50 62.5197
7 15.6094
32 46.8859
46 46.8814
24 31.2342

On this PC, Sleep() has a resolution of roughly 15.6 msec; every delta in the output above is a multiple of it. Toss in some actual processing and it could appear to have up to 15 msec of variability. Perhaps it has better resolution on other systems. This is an Intel P4 system running at 3.2 GHz with Windows XP, one of my work PCs. I can try it on my home PC tonight (Intel Core i5 & Windows 7), but I doubt it'll be any different. Perhaps it performs better on Linux systems.
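For what it's worth, that ~15.6 msec is the default Windows timer tick, and the multimedia timer API can lower it system-wide, which usually tightens Sleep() considerably (worth testing rather than taking my word for it):

#include <windows.h>
#pragma comment(lib, "winmm.lib") // timeBeginPeriod/timeEndPeriod live in winmm

int main()
{
    timeBeginPeriod(1); // request 1 ms scheduler granularity (a global setting)
    Sleep(5);           // now far more likely to wake in ~5-6 ms than ~15
    timeEndPeriod(1);   // always pair with a matching timeEndPeriod
    return 0;
}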
Is vsync more reliable? Does this even exist on LCD monitors?
Vsync is about the refresh rate, which comes in at an earlier stage: specifically, how often the display device sweeps its memory and sends a new screen to the monitor. "Tearing" occurs when that update starts while the program was in the middle of drawing the new frame, so you get the new frame for part of the picture and the old one for the rest.
The problem with vsync is that delaying redraws until an update is completed tends to send frames later, particularly when the unsynchronized framerate is much higher than the refresh rate; say 100 vs. 60. The result is apparent sluggishness from the controls.