IEEE-754 irritation

Hello.

I made a simple program (x86) for displaying the binary representation of 32-bit floats, but its output does not match the IEEE-754 layout at all. I thought I might not be accounting for little-endian byte order, but even so, that does not seem to be all there is to the problem. E.g. the sign bit is completely out of place.

What am I missing?

#include <iostream>
#include <cstdio>   // for printf
#include <cstdlib>  // for atof and system

using namespace std;

int main() {
	char lInput[256];
	float lFloat;
	
	ACTUALPROGRAM:
	
	printf("Enter float (%i bytes): ", sizeof(float));
	cin.getline(lInput, 256);
	
	// Simple exit condition
	if(lInput[0] == 'x') {
		goto EXIT;
	}
	
	lFloat = ((float)(atof(lInput)));
	
	printf("Binary representation: ");
	
	for(int i = 0; i < (sizeof(float) * 8); i++) {
		// Odd back-and-forth casting of lFloat because the compiler won't allow the &-operator on a float directly.
		printf("%c", ((1 << (sizeof(float) - i)) & (*((int*)&lFloat))) ? '1' : '0');
		
		// Space between sign, exponent and mantissa.
		if(i == 0 || i == 8) {
			printf(" ");
		}
	}
	
	printf("\n\n");
		
	goto ACTUALPROGRAM;
	
	EXIT:
	
	printf("Hit any key to exit this program.\n");
	system("@pause>nul");
	
	return 0;
}


Thanks in advance!
My implementation of float seems to conform to http://en.wikipedia.org/wiki/Binary32
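For reference, working through a simple value by hand: 0.15625 = 1.25 * 2^-3, so the sign bit is 0, the exponent field is -3 + 127 = 124 = 01111100, and the mantissa field is 01000000000000000000000, giving 0 01111100 01000000000000000000000 (0x3E200000).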

Rather than trying to implement the whole thing in two lines, break it out a bit so you can be clear on what you're coding. And get rid of that goto.
My main mistake appears to be the for loop.

Changed it from
for(int i = 0; i < (sizeof(float) * 8); i++) {
to
for(int i = 1; i <= (sizeof(float) * 8); i++) {

#include <iostream>
#include <cstdio>   // for printf
#include <cstdlib>  // for atof and system

using namespace std;

int main() {
	char lInput[256] = " ";
	float lFloat = 0.0;
	bool lExit = false;
	
	// Simple exit condition
	while(!lExit) {
		
		printf("Enter float (%i bytes): ", sizeof(float));
		cin.getline(lInput, 256);
		
		if(lInput[0] == 'x') {
			lExit = true;
		} else {
		
			lFloat = ((float)(atof(lInput)));
			
			printf("Binary representation: ");
			
			for(int i = 1; i <= (sizeof(float) * 8); i++) {
				
				// Odd back-and-forth casting of lFloat because the compiler does not allow the &-operator on a float.
				printf("%c", ((1 << (sizeof(float) - i)) & (*((unsigned int *)&lFloat))) ? '1' : '0');
				
				// Space between sign, exponent and mantissa.
				/*if(i == 1 || i == 9) {
					printf(" ");
				}*/
			}
			
			printf(" (%f)\n\n", lFloat);
		}
	}
	
	printf("Hit any key to exit this program.\n");
	system("@pause>nul");
	
	return 0;
}


Now I get output that makes sense, but somehow the last 4 bits of the float show up as the first 4.
E.g. 12.345 outputs as
11110100000101000101100001010001
when it should be
01000001010001011000010100011111

Notice how the 1111 at the beginning actually belongs to the end.
No idea where this oddity is coming from now.
Endianness?
I can't see how that expression is correct, or maybe it just went over my head. I just can't see where you pick out the bit values.

I would have thought you'd need to iterate over the bytes in a float, and for each byte print the bit values.
I thought about endianness, but that would put the bytes in reverse order, not take a nibble from the end and put it at the front. Hell, hardly anyone even knows that format today. The only thing I can imagine is the dirty casting job getting something wrong, but unsigned int and float are the same size, at least in my case, and nothing else should matter.
This is more what I had in mind:
#include <iostream>

int main()
{
	float f = 0.15625;  // known representation
	for (size_t idx = 0; idx != sizeof(float); ++idx)
	{
		unsigned char &byte = reinterpret_cast<unsigned char*>(&f)[idx];

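		// Print each byte's bits least-significant first; bytes are visited in memory order,
		// so this shows the in-memory bit layout rather than the usual MSB-first notation.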
		for (size_t pos = 0; pos != 8; ++pos)
		{
			bool bit = (byte >> pos) & 0x01;
			std::cout << (bit ? "1" : "0");
		}
	}
	std::cout << std::endl;

	return 0;
}
What about this?
std::cout << std::bitset<32> ( *reinterpret_cast<unsigned long*>(&f) ).to_string();
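That prints the conventional MSB-first string, but it assumes unsigned long is 32 bits (it is 64 bits on many platforms), it reads the float through an unrelated pointer type (technically a strict-aliasing violation), and it needs <bitset>. A slightly more defensive sketch of the same idea, copying the bytes into a fixed-width integer first (just an illustration, not code from this thread):

#include <bitset>
#include <cstdint>
#include <cstring>
#include <iostream>

int main()
{
	float f = 0.15625f;                   // the same known value as above
	std::uint32_t bits = 0;
	std::memcpy(&bits, &f, sizeof bits);  // well-defined way to reuse the float's object representation
	std::cout << std::bitset<32>(bits).to_string() << std::endl;

	return 0;
}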
Thanks for the suggestions, but rather than just replacing my solution with a different one, I'd like to know what's wrong with mine in the first place.
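For the record, the likely culprit in the posted loop is the shift count: 1 << (sizeof(float) - i) shifts by 4 - i, not 32 - i, so from i == 5 onward the count is negative. Shifting by a negative amount is undefined behaviour, and x86 masks the count modulo 32 in practice, which matches the four-bit rotation seen above. The pointer cast is a separate (strict-aliasing) concern, but it is not what causes the rotation. A minimal sketch of the loop with the shift count fixed, reading the bits through a copied unsigned int instead of the cast (illustrative only; lBits and bitcount are names made up here):

#include <cstdio>
#include <cstring>

int main() {
	float lFloat = 12.345f;                      // the example value from above
	unsigned int lBits = 0;
	std::memcpy(&lBits, &lFloat, sizeof lBits);  // assumes unsigned int and float are both 32 bits, as in the thread

	const int bitcount = sizeof(float) * 8;      // 32 -- not sizeof(float), which is only 4
	for(int i = 1; i <= bitcount; i++) {
		// Shift count runs 31..0 and never goes negative.
		std::printf("%c", ((lBits >> (bitcount - i)) & 1u) ? '1' : '0');

		// Space between sign, exponent and mantissa.
		if(i == 1 || i == 9) {
			std::printf(" ");
		}
	}
	std::printf(" (%f)\n", lFloat);

	return 0;
}

This prints 0 10000010 10001011000010100011111 (12.345000) for the example value.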