Trouble converting binary int to float

Apr 13, 2011 at 5:48am
I have written a parser in Java that generates a binary data file via java.io.DataOutputStream's writeFloat / writeChar methods. It is worth noting that writeFloat converts the float argument to an int using the floatToIntBits method in class Float, and then writes that int value to the underlying output stream as a 4-byte quantity, high byte first. The .dat file output is as follows (in hex):

0067 40B00000 C0000000 00000000 40000000 3F800000 3F800000 3FC8F5C3 000A

(which corresponds to : 'g' 5.5f -2.0f 0.0f 2.0f 1.0f 1.0f 1.57f '\n')

I then stream the file in C++ using ifstream::read. The first character is obtained as follows:

char type[2];
m_StreamingLevelFile.read((char*)type, sizeof(char)*2); //read two bytes
//type[1] represents the character we want 


The next seven floats are read using:

int intBuffer;
m_StreamingLevelFile.read((char*)&intBuffer, sizeof(int)); //read 4 bytes
intBuffer = intToBigEndianess(intBuffer); //change endianess
float concernedFloat = intBitsToFloat(intBuffer); //convert the int back to an IEEE 754 single-precision float 


Where intToBigEndianess is:

//Convert little endian int to big endian int
int intToBigEndianess(const int x)
{
    return  ( x >> 24 ) |  // Move first byte to the end,
            ( ( x << 8 ) & 0x00FF0000 ) | // move 2nd byte to 3rd,
            ( ( x >> 8 ) & 0x0000FF00 ) | // move 3rd byte to 2nd,
            ( x << 24 ); // move last byte to start.
}


And intBitsToFloat is an NVIDIA function I stole that is supposedly equivalent to Java's Float.intBitsToFloat:

//returns the float value corresponding to a given bit representation of an int value
float intBitsToFloat(const int x)
{
    union {
       float f;  // assuming 32-bit IEEE 754 single-precision
       int i;    // assuming 32-bit 2's complement int
    } u;
        
    u.i = x;
    return u.f;
}
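For what it's worth, reading through a union like this is really a C idiom; in C++ the strictly well-defined way to reinterpret the bits is std::memcpy, which compilers optimize down to the same single move. A sketch under the same 32-bit IEEE 754 assumption (function name is mine, not NVIDIA's):

```cpp
#include <cstdint>
#include <cstring>

// memcpy-based equivalent: copies the object representation of the int into
// a float, which is well-defined in C++ (union type punning strictly is not).
float intBitsToFloatPortable(std::uint32_t bits)
{
    static_assert(sizeof(float) == sizeof bits, "requires a 32-bit float");
    float f;
    std::memcpy(&f, &bits, sizeof f);
    return f;
}
```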


The final character is read the same way as the first.

This code functions correctly except on the last float group (3FC8F5C3), and on any group whose 3rd/4th bytes are non-zero. It may be of interest that the hexadecimal output in C++ of the concerned group is ffffffc3, while the rest of the float groups match the hex editor.

Can anybody see where I am going wrong?

[EDIT: SOLVED]

I found where the problem was stemming from: my intToBigEndianess needed to operate on an unsigned int. Changing it to this solved my problem:

//Convert little endian int to big endian int
unsigned int intToBigEndianess(const unsigned int x)
{
    return  ( x >> 24 ) |  // Move first byte to the end,
            ( ( x << 8 ) & 0x00FF0000 ) | // move 2nd byte to 3rd,
            ( ( x >> 8 ) & 0x0000FF00 ) | // move 3rd byte to 2nd,
            ( x << 24 ); // move last byte to start.
}


Last edited on Apr 13, 2011 at 5:55am