I made a simple program (x86) for displaying the binary representation of 32-bit floats, but its output somehow does not match the IEEE-754 definition at all. I thought I might be failing to account for little-endianness, but even so, that does not seem to be all there is to the issue. E.g. the sign bit is completely out of place.
#include <iostream>
#include <cstdio>   // printf
#include <cstdlib>  // atof, system
using namespace std;

int main() {
    char lInput[256] = " ";
    float lFloat = 0.0;
    bool lExit = false;
    // Simple exit condition
    while(!lExit) {
        printf("Enter float (%i bytes): ", (int)sizeof(float));
        cin.getline(lInput, 256);
        if(lInput[0] == 'x') {
            lExit = true;
        } else {
            lFloat = ((float)(atof(lInput)));
            printf("Binary representation: ");
            for(int i = 1; i <= (int)(sizeof(float) * 8); i++) {
                // Odd back-and-forth casting of lFloat because the compiler does not allow the &-operator on a float.
                printf("%c", ((1 << (sizeof(float) - i)) & (*((unsigned int *)&lFloat))) ? '1' : '0');
                // Space between sign, exponent and mantissa.
                /*if(i == 1 || i == 9) {
                    printf(" ");
                }*/
            }
            printf(" (%f)\n\n", lFloat);
        }
    }
    printf("Hit any key to exit this program.\n");
    system("@pause>nul");
    return 0;
}
Now I get output that makes sense, but somehow the last 4 bits of the float end up as the first 4.
E.g. 12.345 outputs as
11110100000101000101100001010001
when it should be
01000001010001011000010100011111
Notice how the 1111 at the beginning actually belongs at the end.
No idea where this oddity is coming from now.
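To see what the loop's index arithmetic actually works out to, here is a small diagnostic sketch I can run on its own (assuming sizeof(float) == 4, as on my machine); it only prints the value my shift expression evaluates to in each iteration, cast to int so it shows up as a signed number:

#include <cstdio>

int main() {
    // Diagnostic sketch (assumes sizeof(float) == 4): print the shift amount
    // the expression from my loop computes for each i, cast to int so that
    // negative values are visible (in the real loop the subtraction happens
    // in size_t arithmetic).
    for(int i = 1; i <= (int)(sizeof(float) * 8); i++) {
        printf("i = %2d, shift = %d\n", i, (int)sizeof(float) - i);
    }
    return 0;
}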
I thought about endianness, but that convention reverses the order of the bytes; it does not take a nibble from the end and put it at the front. Hell, nobody even works in nibbles these days. The only thing I can imagine is the dirty casting job going wrong somewhere, but unsigned int and float are the same size, at least on my setup, and nothing more should matter.
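To rule the endianness/casting theory in or out, here is a quick cross-check sketch (assuming unsigned int is 32 bits and float is IEEE-754; memcpy is used here only for the check, not as a proposed replacement) that copies the float's bytes into an integer and prints it from bit 31 down to bit 0:

#include <cstdio>
#include <cstring>

int main() {
    // Cross-check sketch (assumes 32-bit unsigned int and IEEE-754 float):
    // copy the raw bytes of the float into an integer and print the bits
    // from most significant to least significant.
    float f = 12.345f;
    unsigned int bits = 0;
    memcpy(&bits, &f, sizeof(bits));
    for(int i = 31; i >= 0; i--) {
        printf("%c", ((bits >> i) & 1u) ? '1' : '0');
    }
    printf("\n");
    return 0;
}

If that prints the expected pattern, the stored bits are fine and the problem has to be somewhere in my printing loop.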
Thanks for the suggestions, but rather than just replacing my solution with a different one, I'd like to know what's wrong with mine in the first place.