### Differing (Bad) Behavior Between unsigned int And long With -= Operator

Hi All,

The following section of code was designed to show a wrong bit of programming, but it has a weird side effect that I cannot explain. It seems that when using
 -= 1.0

on an 'unsigned int', the variable wraps around from zero to the maximum value as expected and then keeps decrementing. But an 'unsigned long' oscillates between 0 and the
maximum value for an unsigned long.

I suspect this has something to do with the bit representation of the variables, or of -1.0 when cast into an unsigned int or long, but cannot figure out why. Any ideas what makes the difference in this otherwise
wrong code?

My compiler is GNU g++ on a MacBook.

UPDATE: It seems any optimization level above -O0 causes both the unsigned int and the unsigned long to ignore the -= 1.0 (which is smarter, maybe).
Thanks!

```
#include <iostream>
using namespace std;

int main () {
    cout << endl;

    // This is weird: intMinusOne decrements and overflows,
    // acting as the maximum number, and then keeps
    // decrementing. longMinusOne seems to alternate:
    // decrementing to the largest value, then incrementing
    // back to zero:
    unsigned int intMinusOne = 0;
    unsigned long longMinusOne = 0;

    cout << "unsigned int = " << intMinusOne
         << " - unsigned long = " << longMinusOne << endl;

    for (int count = 0; count < 5; ++count) {
        intMinusOne -= 1.0;
        longMinusOne -= 1.0;
        cout << "unsigned int -1.0 = " << intMinusOne
             << " - unsigned long -1.0 = " << longMinusOne << endl;
    }

    cout << endl;
    return 0;
}
```

Here is the output:
```
unsigned int = 0 - unsigned long = 0
unsigned int -1.0 = 4294967295 - unsigned long -1.0 = 18446744073709551615
unsigned int -1.0 = 4294967294 - unsigned long -1.0 = 0
unsigned int -1.0 = 4294967293 - unsigned long -1.0 = 18446744073709551615
unsigned int -1.0 = 4294967292 - unsigned long -1.0 = 0
unsigned int -1.0 = 4294967291 - unsigned long -1.0 = 18446744073709551615
```

Try
-= 1
rather than
-= 1.0

Then it won't do the operation in doubles first, before converting to unsigned int.

Thanks, lastchance.

That would fix it, but my question is "What makes the difference in the bad behavior between the unsigned ints and the unsigned longs?" i.e. Why do the two variable types behave differently?

Thanks!
`longMinusOne -= 1.0` is equivalent to `longMinusOne = longMinusOne - 1.0`, and `longMinusOne - 1.0` is of type double.

If longMinusOne == 0 then longMinusOne - 1.0 == -1.0, which when converted to unsigned long becomes 2^64 - 1: the truncated value -1 is represented in memory as 0xFFFFFFFFFFFFFFFF, which is also the in-memory representation of 2^64 - 1. (Strictly speaking, converting an out-of-range double to an unsigned type is undefined behavior in C++; this is what x86-64 hardware happens to do.)

But if longMinusOne == 2^64 - 1, longMinusOne - 1.0 == (double)2^64. This is because double doesn't have enough significant bits to represent 2^64 - 2, so the floating-point implementation is forced to use the closest representable value, which is 2^64. In fact, for values of this magnitude, double can only represent multiples of 2048. When this value is converted back to unsigned long, only the least significant 64 bits can be kept, and those all happen to be zero. Thus, longMinusOne returns to zero.

Thanks for the detailed answer, helios!