The value returned is the current value of the internal representation, expressed in terms of the class's period interval, which is not necessarily seconds.
In your case countdown doesn't need to be a duration, since Wait(...) already works in seconds. So you can write it like so:
sleep_for is not precise. Looping as above is going to accumulate errors and, if you are waiting for, say, an hour, it's going to (potentially) be way off.
Are you looking for a high-precision timer or just a toy?
I'd like something functional. I'm just practicing this stuff for use in a game someday, so I need something that's reliable and can be used for a game mechanic that runs in the background. It's something that could run for hours or days at a time. I made a little something with that code. It's not finished, but you can get an idea of what I was going for by looking at it:
It is functional and reliable, it's just not precise :) If you run it for days, the accumulated error will probably be a few minutes (it could be a few seconds; I really don't know how current-generation thread schedulers work at the nanosecond level). Obnoxious, hard-core gamers may notice this, but it would just be a quirk to give them something to talk about. If you are ok with running it for a week and seeing it be < 1/2 an hour off (again, it could be 5 seconds, for all I know), then no need to poke at it more.
What exactly am I talking about? (I realize I assumed you knew what I was going on about.) sleep_for works off 'at least'. If you say sleep for 1 second, it could do 1 second, or 1.0001 sec, or 1.5, etc. It basically takes the system a moment to realize it's time to restart your process, a moment for whatever is currently running on the CPU to finish up and be rotated out, a moment for the context switching, etc.
If you do for(... a couple of days, one second at a time), then the first one sleeps 1.0001, the next 1.0003, the next 1.0002, ... and those tiny fractions of a second (or larger values; one last time, I do NOT know this value and am making up an example here) add up, because the first one slept 1.0001, the second one then starts at 1.0003 (time for things to happen in your code before cycling back here) and adds 'at least' a second to that... It's not a LOT, but you won't be matching any atomic clocks with this approach!
The fewer calls to sleep, the smaller the error. So if you need to do something once every 60 seconds, make one call to sleep_for(60s) instead of 60 calls to sleep_for(1s), if you care.
Ahhh, I see... yeah, that amount of time isn't too bad. I'm sure there could be some sort of time-correction function that takes the system time and the timer's time and syncs them somehow. idk, I'm just guessing as well haha. I have no idea how to do that, but perhaps something like that would work.
So is my little game above ok so far? I'm trying to start using chrono, as I'm sure there could be some interesting game mechanics made with it. But I also need stuff to happen while the timer counts down. I just made it visible for testing purposes, but you won't be able to see the countdown in game, so could I use a thread for each countdown timer to run in the background while the player does other stuff?
Well, apart from calling sleep the least number of times possible, another 'fix' is to sleep for some amount less than you need to and then 'busy wait' the rest.
something like this (a sketch -- timerActive and doEventThing are whatever your game uses):

auto master = std::chrono::steady_clock::now();  // master timer
while (timerActive)
{
    auto start = std::chrono::steady_clock::now();
    // sleep some % of the way to the next event (say 850 ms if it is 1 sec away)
    std::this_thread::sleep_for(std::chrono::milliseconds(850));
    // busy-wait: just do nothing until you actually hit the event time
    while (std::chrono::steady_clock::now() - start < std::chrono::seconds(1))
        ;
    doEventThing();
}
It's crude, but it would remove most of the error. The 'until next event' part also corrects for error: it can be kept in sync with the master time from the original start and correct for the rest of the error (about as well as can be done without going into assembly language).
Does that make sense? The above is probably good enough for most things -- it should be within milliseconds of the expected results even months later -- and the only reason to try to do better still would be something like medical-equipment software.
Yeah, that makes sense, thank you. The master time maybe could be system time, and I could base something off of that. But I'm literally just getting into chrono, so what would be the most important stuff to look up and study when it comes to that? I'm not sure where to begin. The main thing I'd like to know is how to convert a time to be able to output it to cout.
It's not that complex :)
cout << 1.0e-6*(chrono::duration_cast<chrono::microseconds>(end - start).count()) << " seconds\n";
It tends to work in integer increments of the chosen type. So if you ask for seconds, you get something like 10; if you ask for milliseconds, you get something like 10420, which you can output as 10.42.
Awesome, thank you! I really want to learn this stuff. The issue is that a lot of the time it's not explained well enough for me when I search on Google, so I leave with no more info than I came in with :/
That is much better for general-purpose use. It can still 'oversleep' a tiny bit, but only the most critical software would care about that (it looks like it's around 20 ms -- most humans can't even register that small a delay). I finally looked it up: the M$ docs say the minimum time slice is close to 20 ms, so the longest you could oversleep is hopefully about 20 ms, provided the CPUs are not overwhelmed (and they would not be -- if you care THIS much about timing, you would keep the loads light and buy more CPUs if need be). Which points to what you said: if you really want to learn this stuff, you have to look up what the OS is doing, what the hardware is doing, and what the code is doing, and assemble all that in a way to get what you want out of it.