### Implicit type conversion and time measurement

I'm implementing a game loop in C++ using the `timeGetTime` function, like below:

```cpp
// ...
DWORD oldtime = 0, newtime = 0, delta = 0;
// ...
while (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE))
{
    if (msg.message == WM_QUIT) { exit(0); }
    TranslateMessage(&msg);
    DispatchMessage(&msg);
}
newtime = timeGetTime();
delta = newtime - oldtime;
oldtime = newtime;
while (delta >= MS_PER_UPDATE)
{
    //update();
    delta -= MS_PER_UPDATE;
    g_loopCnt++;
}
//render();
// ...
```

I know that the value returned by the `timeGetTime` function wraps around to 0 every 2^32 milliseconds, which is about 49.71 days according to the official docs. So is it possible (however unlikely) that `delta` will be a negative number, when `newtime` passes the "magic" 2^32-millisecond barrier? Let's take as an example a VERY unlikely but theoretically possible case:

> `newtime` == 304(dec)
> `oldtime` == 0x7FFFF82F (2147481647 dec)

when the game is running longer than those 49.71 days. We'll get:

```cpp
delta = 304 (dec) - 2147481647 (dec) == -2147481343 (dec, signed!);
// ...
while (-2147481343 >= MS_PER_UPDATE) { ... }
```

So there is of course a resulting negative number (`-2147481343`), so at first look it won't work here. BUT the compiler will make an implicit conversion from signed to unsigned here, resulting in:

```cpp
while (2147485953 >= MS_PER_UPDATE) { ... }
```

and finally the code WILL work (thanks to this implicit conversion)!
And here is my question: is it OK/safe to keep the code simple like this and just rely on the implicit conversion made automatically, OR do I have to take care of the wrap-around problem myself with some extra checking/coding?

P.S.

If I cannot count on that implicit conversion happening automatically, is it a good solution to just change the operand order when the delta is about to be negative, like below?

```cpp
if (newtime < oldtime)
    delta = oldtime - newtime;
```
When a signed value is compared to an unsigned one, the signed value will be 'promoted' to unsigned - possibly with a compiler warning.

A code comment would help here - or an explicit cast.
Have you considered using a different time function? The C++ standard library has `<chrono>`, which pretty much fits what you're doing directly, without the restriction that concerns you.

Your time function goes back to Windows 2000, which was developed in the '90s. We have done better since.
https://docs.microsoft.com/en-us/windows/win32/api/timeapi/nf-timeapi-timegettime
Just because function has existed for a long time doesn't always mean it's bad ;-)

timeGetTime() roughly is the equivalent of clock_gettime(CLOCK_MONOTONIC) on Linux and *BSD.

Also, timeGetTime() has higher resolution than GetTickCount(), especially with timeBeginPeriod() enabled:
https://randomascii.wordpress.com/2013/05/09/timegettime-versus-gettickcount/

I'm pretty certain that std::chrono::steady_clock in MSVC uses these functions under the hood too.

If you need even higher precision, QueryPerformanceCounter() is the way to go...
The basic thing is: if you use an unsigned 64-bit timer, it will cycle to zero much less often.
If the counter were in CPU clocks and ran at 3 billion ticks per second (roughly 3 GHz), you get approximately 6 billion seconds before it wraps. I think that is about 70K days of high-res ticks, and of course if you measure ms or something other than CPU ticks, it will run for many years instead of many days... orders of magnitude more.
Just figure out how long you want to run between restarts/reboots/whatever and do the math on how to deal with it. Systems that run a few years without a reboot are rare, but they do exist... VERY few do this for many decades on end, though. 70K days is about 200 years.
** Pardon if I botched any of the rough math there. I did it all in my head, so it's very rough.
A further consideration in favor of `<chrono>` is cross-platform portability, in case the code ever needs to run on something other than a Windows-based PC.