I'm trying to make a basic adder. Why? Just for fun.
I thought this would be rather trivial, but I've stumbled a little. Firstly, when I try to print (x << 7) it prints 384?? I was expecting it to print 128. My guess is that ofstream doesn't have a method that takes an int8_t, so it instead converts it to an int (4 bytes). Would this be correct?
But the next part leaves me a little puzzled. I shift x by 7 places (x << 7) and y by 7 places (y << 7). In theory (well, at least in my theory) this should leave me with 10000000 and 00000000. Then I XOR the two numbers and should get 10000000, or 128. So I expect 128 to be printed, yet I get some weird question-mark ASCII symbol. Any idea what's going on?
Firstly, when I try to print (x << 7) it prints 384?? I was expecting it to print 128. My guess is that ofstream doesn't have a method that takes an int8_t, so it instead converts it to an int (4 bytes). Would this be correct?
No.
7 is an int, so x is promoted to int before << is applied; the result is an int, so bit 8 is preserved instead of being shifted out of an 8-bit value.
I shift x by 7 places (x << 7) and y by 7 places (y << 7). In theory (well, at least in my theory) this should leave me with 10000000 and 00000000. Then I XOR the two numbers and should get 10000000, or 128. So I expect 128 to be printed, yet I get some weird question-mark ASCII symbol. Any idea what's going on?
There are operator<<() overloads for std::ostream for both signed char and unsigned char, which both do the same thing: print the character that corresponds to that byte. (std::int8_t and std::uint8_t are signed char and unsigned char respectively.)
To print an un/signed char as if it were a number, first cast it to int:
std::cout << (int)add;
In your example above, casting directly to int to print will sign-extend the value, causing -128 to be printed. To treat it as an unsigned value, first remove the signedness:
std::cout << (int)(std::uint8_t)add;
Or better yet, don't do bit twiddling on signed values. That's generally the best strategy to avoid weird behavior.
@OP For this sort of stuff it's often better to use bitsets from the start and use them throughout. Convert to int etc. only at the end of any calculation.
As you said, the literal 7 is an int by default, so x is converted (promoted) to an int in this case.
std::cout << (int)(std::uint8_t)add;
Never came across this syntax before. Casting like this makes sense: std::cout << (int)add;
But what effect does putting two data types in sequence have on the variable being cast?
If int8_t is essentially a signed char, why do so many people use it rather than just char? I mean, it doesn't seem like there's any benefit other than readability, and it also requires an extra include.
But what effect does putting two data types in sequence have on the variable being cast?
The casts apply right to left: (std::uint8_t)add first reinterprets the byte as unsigned, and (int) then widens it. Without the outer cast, cout treats uint8_t as a character type and shows the char, not the integer value. For cout to show the required integer value, as opposed to the char representation, the additional cast to int (or unsigned) is required.
And the source of that can be seen first-hand by reading the header file stdint.h.
Go figure the original naming logic of those 8 bits, given they meant 'char' in the first place.