I'm making a function that uses bitwise operators to multiply a number by two. I figured out that for positives you just shift left by 1 (x << 1), but how would you do it for negatives?
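(For reference, a minimal sketch of the shift-by-one doubling mentioned above. The function name is made up for illustration, and it uses unsigned operands, since left-shifting a negative signed value is undefined behavior in C.)

    #include <stdio.h>

    /* Doubles x by shifting its bits left one position.
       Correct as long as the top bit is clear (no overflow). */
    static unsigned times_two(unsigned x) {
        return x << 1;
    }

    int main(void) {
        printf("%u\n", times_two(21u));   /* prints 42 */
        return 0;
    }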
I'm not telling you to change the signature of the function. Just cast the value inside the function. Otherwise, what you're doing doesn't make sense, because you want positive values to behave as negative values; it's like expecting 2 + 1 to equal 1.
And it still fails with this error:
ERROR: Test float_twice(-2139095040[0x80800000]) failed...
...Gives 16777216[0x1000000]. Should be -2130706432[0x81000000]
Gah! I'm so stupid. Now I see it. The test vector is not using two's complement at all!
To check if the number is negative, bitwise-AND with 0x80000000.
To invert the sign, bitwise-XOR with 0x80000000.
Don't use signed types anywhere.
Leave the rest unchanged.
EDIT: I changed the first of the values. Also note that this representation has two zeros: 0 and 0x80000000.
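(A minimal sketch of the two operations above, assuming the value is the raw bit pattern of a 32-bit float carried in an unsigned, as in the test output; the variable name uf and the sample value are borrowed from elsewhere in the thread, and the printout is only illustrative.)

    #include <stdio.h>

    int main(void) {
        unsigned uf = 0x80800000u;   /* the failing test input above */

        /* Sign test: nonzero exactly when bit 31 (the sign bit) is set. */
        if ((uf & 0x80000000u) != 0)
            printf("0x%08X encodes a negative float\n", uf);

        /* Sign flip: XOR toggles bit 31, leaving exponent and fraction alone. */
        printf("sign flipped: 0x%08X\n", uf ^ 0x80000000u);   /* 0x00800000 */

        /* The two zero patterns mentioned in the edit: +0 and -0. */
        printf("+0 is 0x%08X, -0 is 0x%08X\n", 0x00000000u, 0x80000000u);
        return 0;
    }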
elseif((uf & 0x8000000) > 0){
I know this is very, very wrong, but I can't figure out how I'm supposed to use the bitwise AND to check whether it's negative without using signed types.
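(Sketch of the same branch with the full mask from the answer above: 0x80000000 has eight hex digits, whereas the snippet drops one and tests bit 27 instead of the sign bit, and C spells the keyword as two words.)

    else if ((uf & 0x80000000) != 0) {   /* sign bit set, so uf encodes a negative float */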