-0 doesn't seem possible with integers (although I can get floats to be -0.0 in some cases), but what exactly is the value of an int with only the sign bit set? I tried this:
int i = 1 << 31;
and printed the value with cout. It prints -2147483648, which, after popping it into a calculator, is exactly the value I suggested, except that it's negative. It's as if the computer is taking the numerical value as if it were unsigned, yet also treating the highest-order bit as a sign indicator.
I presume this is undefined behavior, but if anyone has any insight into this, I'd be interested to hear it. I don't really have a question; I just thought this was neat.
-0 doesn't seem possible with integers (although I can get floats to be -0.0 in some cases).
The representation of integer values is implementation-defined. Some encodings, such as ones' complement and sign-and-magnitude, allow a negative zero.
but what exactly is the value of an int with only the signed bit set?
Integer encodings don't necessarily have a "sign bit". Two's complement, for example, is defined as follows:
1. A string of n zeroes is the integer 0.
2. If x ≠ 2^(n-1) - 1, then adding 1 to the representation of x yields a representation of y such that y = x + 1.
3. If x = 2^(n-1) - 1, then adding 1 to the representation of x yields a representation of y such that y = -(x + 1).
The sign of a value is implied by where its bit pattern falls in the encoding as a whole; there isn't a sign bit you can flip to get the negative of a number, the way there is with floats.
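To make that concrete, here's a minimal sketch assuming a 32-bit two's complement int (the usual case today, though the language doesn't require it): negation is "flip every bit, then add one", and flipping only the top bit gives a completely different value, not the negation.

#include <climits>
#include <iostream>

int main()
{
    int x = 5;

    // Two's complement negation is "flip every bit, then add 1":
    std::cout << (~x + 1) << '\n';      // prints -5

    // Flipping only the top bit does NOT negate the value; it subtracts 2^31:
    std::cout << (x ^ INT_MIN) << '\n'; // prints -2147483643, not -5
}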
I thought this was neat. I presume this is undefined behavior
Yep. 1<<31 is a 1 followed by 31 zeroes, which happens to correspond to how your platform represents the value -2^31.
By the way, a 1 followed by 30 zeroes and then another 1 is -2^31 + 1.
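You can check both bit patterns directly (a small sketch, assuming a 32-bit two's complement int):

#include <bitset>
#include <climits>
#include <iostream>

int main()
{
    std::cout << std::bitset<32>(INT_MIN) << '\n';     // 1 followed by 31 zeroes
    std::cout << std::bitset<32>(INT_MIN + 1) << '\n'; // 1, then 30 zeroes, then 1
}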
That is correct. Left-shifting a signed integer so far that it overflows is undefined behavior. It doesn't matter what two's complement arithmetic says; the compiler is allowed to drop code that depends, for example, on a signed integer changing sign after a left shift, because it "can't happen".
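For instance, here's a sketch of the kind of code that rule lets an optimizer throw away (the function is made up for illustration):

// Every x for which (x << 1) could come out negative has already hit
// undefined behavior (negative left operand or signed overflow), so a
// conforming compiler may assume that never happens and compile this
// as if it were simply "return false;".
bool becomesNegative(int x)
{
    return (x << 1) < 0;
}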
I didn't think it was an overflow; unless I'm rusty with my bitwise operations, 1<<31 sets the leftmost bit in the 32-bit integer, which I had always thought was the bit that denotes a negative (signed) integer.
And floats, you say they have a sign bit, helios? I attempted to figure out how floats work at the bit level once; it was pretty complicated, really.
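(For reference, floats do carry a dedicated sign bit. A small sketch, assuming IEEE-754 single-precision floats (1 sign bit, 8 exponent bits, 23 fraction bits), which is also where the -0.0 mentioned in the first post comes from:)

#include <cstdint>
#include <cstring>
#include <iostream>

int main()
{
    float zero = 0.0f;
    std::uint32_t bits;
    std::memcpy(&bits, &zero, sizeof bits); // copy out the representation
    bits ^= 0x80000000u;                    // toggle just the sign bit
    std::memcpy(&zero, &bits, sizeof zero);

    std::cout << zero << '\n';              // prints -0
    std::cout << (zero == 0.0f) << '\n';    // prints 1: -0.0 compares equal to +0.0
}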
What causes the undefined behavior is the storing of the result into an int.
1 and 31 are of type int, and the result of 1<<31 is of type int. It seems counter-intuitive to me that assigning an int to an int would result in undefined behavior.
If std::numeric_limits<int>::digits == 31 and std::numeric_limits<int>::max() == 2^31 - 1 (i.e. a 32-bit int), then 1 << 31 is undefined.
The behavior is undefined if the right operand is negative, or greater than or equal to the length in bits of the promoted left operand.
...
The value of E1 << E2 is E1 left-shifted E2 bit positions...
if E1 has a signed type and non-negative value, and E1 × 2^E2 is representable in the result type, then that is the resulting value; otherwise, the behavior is undefined.
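If the goal is just that bit pattern, there are well-defined ways to get it (a small sketch, assuming a 32-bit int): do the shift on an unsigned operand using the u suffix, or name the value directly.

#include <climits>
#include <iostream>

int main()
{
    unsigned int u = 1u << 31; // defined: the left operand is unsigned
    std::cout << u << '\n';    // prints 2147483648

    int i = INT_MIN;           // the most negative int, without any shifting
    std::cout << i << '\n';    // prints -2147483648
}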
Wow. Of all the books, articles, conversations, etc., this is the first time I've ever even heard of numeric literal suffixes other than f or F for float. In fact, I wasn't even sure what to call them when I googled it.
So shifting also considers the type of the left operand supplied?
unsigned int x = (unsigned short int)1 << 24; // undefined behavior because of the cast?
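For what it's worth, here's a small sketch of what the "promoted left operand" wording means for that line, assuming the usual case where short is narrower than int: the cast operand is promoted to int before the shift, so the shift is done in int rather than in a 16-bit type.

#include <iostream>

int main()
{
    unsigned short one = 1;
    int shifted = one << 24;                        // "one" is promoted to int first
    std::cout << shifted << '\n';                   // prints 16777216
    std::cout << (sizeof(one << 24) == sizeof(int)) // prints 1: the result type is int
              << '\n';
}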