The issue is that on most systems char is signed, so when it is promoted to int the sign bit gets extended; hence all the ff's. ANDing with 0xff masks off the extended bits, leaving just the value of the low byte. An alternative is:
#include <iostream>
#include <vector>
#include <iomanip>
int main()
{
    unsigned char c1 { '@' };

    // assigning a signed int (-1 is an int) to an unsigned char is a conversion,
    // so you can't use uniform initialization without casting
    unsigned char c2 = -1; // you should get a warning that there is a conversion mismatch
    unsigned char c3 { static_cast<unsigned char>(-12) };

    std::vector<unsigned char> vc { c1, c2, c3 };

    for (auto& c : vc)
    {
        // the unary + 'fools' std::cout into printing c as a number, not a character;
        // the sign isn't changed
        std::cout << "0x" << std::hex << +c << ' ';
    }
    std::cout << '\n';
}
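On a typical implementation (ASCII, 8-bit char), this should print: 0x40 0xff 0xf4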
Integer literals are of type int, so 0xff is actually 0x000000ff (for 32-bit ints). ANDing this with a char (signed or unsigned) first promotes the char to int, then masks it, giving an int whose low bits represent the char. The result of the & is itself an int, so no further promotion is involved and the sign bit doesn't get extended again. I was a bit lax with the explanation in my previous post.
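To make that concrete, here is a minimal sketch of the masking approach (assuming a signed 8-bit char and 32-bit ints; the variable name is just illustrative):

#include <iostream>

int main()
{
    char c = -1; // on most systems char is signed, so c holds the bit pattern 0xff

    // Without the mask: c is promoted to int and the sign bit is extended,
    // so (with 32-bit ints) this prints ffffffff.
    std::cout << std::hex << static_cast<int>(c) << '\n';

    // With the mask: the promotion (and sign extension) still happens,
    // but & 0xff keeps only the low 8 bits, so this prints ff.
    // The result of the & is itself an int, so nothing gets promoted
    // or sign-extended afterwards.
    std::cout << (c & 0xff) << '\n';
}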
Thank you for your time, it was very helpful. I had to do some thinking, but I learned something, or at least re-remembered something I had forgotten about bitwise operators.