#include <iostream>
#include <limits>
using std::cout;
using std::endl;
using std::hex;
using std::showbase;
using std::numeric_limits;
int main(int argc, char** argv, char** envp)
{
/* Conversion from unsigned integer */
// Most significant bit is 1, ui = 0xFFFFFFFF
unsigned int ui = numeric_limits<unsigned int>::max();
// Prints 0xffffffff
cout << "ui = " << showbase << hex << ui << endl;
// Most significant bit of "ui" is extended in "ull" ?
// -> In my platform, NO, ull = 0x00000000FFFFFFFF
unsigned long long ull = ui;
// Prints 0xffffffff
cout << "ull = " << showbase << hex << ull << endl;
// Most significant bit of "ui" is extended in "sll" ?
// -> In my platform, NO, sll = 0x00000000FFFFFFFF
long long sll = ui;
// Prints 0xffffffff
cout << "sll = " << showbase << hex << sll << endl;
/* Conversion from signed integer */
// Most significant bit is 1, si = 0xFFFFFFFF
int si = -1;
// Prints 0xffffffff
cout << "si = " << showbase << hex << si << endl;
// Most significant bit of "si" is extended in "ull" ?
// -> In my platform, YES, ull = 0xFFFFFFFFFFFFFFFF
ull = si;
// Prints 0xffffffffffffffff
cout << "ull = " << showbase << hex << ull << endl;
// Most significant bit of "si" is extended in "sll" ?
// -> In my platform, YES, sll = 0xFFFFFFFFFFFFFFFF
sll = si;
// Prints 0xffffffffffffffff
cout << "sll = " << showbase << hex << sll << endl;
return 0;
}
This code, compiled with Microsoft Visual C++ 2010 Express, prints this output:

ui = 0xffffffff
ull = 0xffffffff
sll = 0xffffffff
si = 0xffffffff
ull = 0xffffffffffffffff
sll = 0xffffffffffffffff
So, on my platform, conversion from an unsigned integer type to any bigger integer type never extends the most significant bit of the source, while conversion from a signed integer type to any bigger integer type always extends it. This is convenient for maintaining the same value when converting between integer types of the same signedness (i.e., both signed or both unsigned).
But does the C++ standard guarantee that this behaviour is the same on all platforms?
2 If the destination type is unsigned, the resulting value is the least unsigned integer congruent to the source integer (modulo 2^n where n is the number of bits used to represent the unsigned type). [ Note: In a two's complement representation, this conversion is conceptual and there is no change in the bit pattern (if there is no truncation). — end note ]
3 If the destination type is signed, the value is unchanged if it can be represented in the destination type (and bit-field width); otherwise, the value is implementation-defined.
Of course, this is assuming I have interpreted your question correctly, but that is probably the relevant part of the standard. Also, you do realize that you can download draft versions of the standard? Though it can be a bit hard to find things sometimes...