Float to HEX and vice versa

Mar 5, 2012 at 11:34pm
Can someone point me to a good reference about this, please?
I've tried Google, but the results I've gotten keep telling me to loop through a *= 16 conversion, which did not give the expected result.
My test number is 32.76, which is supposed to come out as 42030A3D, but my attempts keep giving me something that starts with 208. I'd like a reference that shows me the process involved, not just some function.
Mar 5, 2012 at 11:58pm
Here is an extraordinarily ugly and dangerous thing.


#include <iostream>
using namespace std;
int main()
{
  float a = 32.76f;
  int* q = (int*)&a;          // reinterpret the float's bytes as an int (formally this cast breaks aliasing rules, but it works on common compilers)
  cout << hex << *q << endl;  // prints 42030a3d on an IEEE 754 machine
}

Last edited on Mar 5, 2012 at 11:59pm
Mar 6, 2012 at 2:36am
32.76 decimal == 20.c28f5c28f5c2 hexadecimal but when I convert the decimal part manually it == .4609375 ????
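For the record, .4609375 is what the digits "76" evaluate to if they are read as hex digits: 7/16 + 6/256 == 0.4609375. The 20.c28f5c28f5c2 expansion comes from the *= 16 digit loop, which converts the value's magnitude to positional hex rather than to the IEEE bit pattern. A minimal sketch of that loop, for reference:

#include <iostream>

int main()
{
    // digit-by-digit conversion of the fractional part: each *= 16
    // pushes the next hex digit into the integer part
    double frac = 0.76 ;
    const char* digits = "0123456789ABCDEF" ;

    std::cout << "20." ;                  // 32 decimal == 0x20
    for ( int i = 0 ; i < 12 ; ++i )
    {
        frac *= 16 ;
        int d = static_cast<int>(frac) ;
        std::cout << digits[d] ;
        frac -= d ;
    }
    std::cout << '\n' ;                   // 20.C28F5C28F5C2 (approximately)
}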
Mar 6, 2012 at 8:08am
That's pretty much along the same lines as the *= 16 crap; I wanted the version that produces the IEEE representation(s).
Mar 6, 2012 at 9:24am
#include <iostream>
#include <sstream>
#include <string>
#include <limits>
#include <cstdint>

std::string ieee_float_to_hex( float f )
{
    static_assert( std::numeric_limits<float>::is_iec559,
                   "native float must be an IEEE float" ) ;

    // write the float member, then read the same bytes back as a 32-bit integer
    union { float fval ; std::uint32_t ival ; };
    fval = f ;

    std::ostringstream stm ;
    stm << std::hex << std::uppercase << ival ;

    return stm.str() ;
}

int main()
{
    std::cout << ieee_float_to_hex( 32.76 ) << '\n' ; // 42030A3D
}

Last edited on Mar 6, 2012 at 9:26am
Mar 6, 2012 at 10:10am
Thank you, but this is just the function. I wanted to be able to study the process involved as well, so that I would not be restricted to using the std namespace or other namespaces. There are cases where I don't have access to those and need an alternative solution, which I cannot produce if I don't know or understand the process involved.
Mar 6, 2012 at 11:02am
What do you think it does? What magic do you think is happening? JLB's code and my code do the same thing: you take the float, which ultimately is JUST some numbers in memory, and you INSIST to the compiler that actually it's an integer and you want to see the hex value of that integer. That's it. I do it by casting; he does it (a lot more safely, and with a bonus static_assert checking that it's the encoding system you wanted) with a union.

There is no "process" involved, no magic. You just take the variable, force the compiler to consider it an integer, and then output it in hex.
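
For what it's worth, a third spelling of the same reinterpretation is std::memcpy, which copies the float's bytes into an integer without the cast or the union and is well-defined in standard C++; a minimal sketch:

#include <cstdint>
#include <cstring>
#include <iostream>

int main()
{
    float f = 32.76f ;

    // copy the float's four bytes into a 32-bit integer, then print in hex
    std::uint32_t bits ;
    std::memcpy( &bits, &f, sizeof bits ) ;

    std::cout << std::hex << std::uppercase << bits << '\n' ; // 42030A3D
}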


Last edited on Mar 6, 2012 at 11:09am
Mar 6, 2012 at 11:19am
I get the reinterpreting of the value, but it does not answer my original question of how to convert the float to begin with. E.g. does the leftmost hex digit indicate the sign, or the number of decimal places? Or is it the rightmost? How does the machine actually get the value that is to be reinterpreted as an unsigned int? I'm not talking about just reinterpreting the value, but how it gets that original string of hex to begin with, and how it is subsequently read back to reproduce the float.
Mar 6, 2012 at 11:40am
Inside the machine, there are no floats, no ints, no char or string or anything. It's just memory locations, and a number stored in that location.

Let's say that we have a memory location, and it has a value stored in it. It's the number 72.

Is that the integer 72? Or is it the char 'H'? They are both represented in memory by the number 72. There is no difference in memory. The only difference is how we choose to interpret it.
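
A two-line illustration of that choice of interpretation:

#include <iostream>

int main()
{
    int n = 72 ;
    std::cout << n << ' ' << static_cast<char>(n) << '\n' ; // prints: 72 H
}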

We can choose to interpret it any way we like. We could choose to interpret it as here:
http://en.wikipedia.org/wiki/Single_precision#IEEE_754_single-precision_binary_floating-point_format:_binary32

That wiki page explains how a four byte value is interpreted as a floating point number under one possible way of doing so.
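
As a concrete sketch of that layout (field widths taken from the wiki page), the bits of 32.76f can be pulled apart and the value reassembled like this:

#include <cstdint>
#include <cstring>
#include <cmath>
#include <iostream>

int main()
{
    float f = 32.76f ;
    std::uint32_t bits ;
    std::memcpy( &bits, &f, sizeof bits ) ;        // bits == 0x42030A3D

    // binary32 layout: 1 sign bit, 8 exponent bits, 23 fraction bits
    std::uint32_t sign     = bits >> 31 ;          // 0 (positive)
    std::uint32_t exponent = (bits >> 23) & 0xFF ; // 0x84 == 132; the bias is 127
    std::uint32_t fraction = bits & 0x7FFFFF ;     // 0x030A3D

    // for normal numbers: value == (-1)^sign * 1.fraction * 2^(exponent - 127)
    double value = ( sign ? -1.0 : 1.0 )
                 * ( 1.0 + fraction / 8388608.0 )  // 8388608 == 2^23
                 * std::pow( 2.0, int(exponent) - 127 ) ;

    std::cout << value << '\n' ;                   // ~32.76
}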

how it gets that original string of hex to begin with

It's just a number, in memory. It got put there when you set a value to a variable.
Last edited on Mar 6, 2012 at 11:40am
Mar 6, 2012 at 12:05pm
Thank you, I should be able to use that.
Mar 6, 2012 at 12:12pm
Whilst it is an entertaining intellectual exercise to take the individual bits and write a function to manually calculate what floating point number they represent, do remember that using that function in anything other than demonstration code is just silly :)
Mar 6, 2012 at 12:57pm
I know. I'm just writing JavaScript at the moment (part of an effort to create dynamic help pages for a tool I made), and using only C++-style code isn't going to get me anywhere, but knowing how it is originally done in the machine will help me do it right in the JavaScript itself.
Mar 6, 2012 at 5:23pm
> Whilst it is an entertaining intellectual exercise to take the individual bits
> and write a function to manually calculate what floating point number

Take four bits at a time and it will trivially map to a hex digit. The order in which the nibbles are considered would be governed by endianness.
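
A minimal sketch of that nibble walk, operating on the value once it is in a uint32_t (so the machine's byte order no longer matters):

#include <cstdint>
#include <cstring>
#include <iostream>

int main()
{
    float f = 32.76f ;
    std::uint32_t bits ;
    std::memcpy( &bits, &f, sizeof bits ) ;

    // one hex digit per four-bit group, most significant nibble first
    const char* digits = "0123456789ABCDEF" ;
    for ( int shift = 28 ; shift >= 0 ; shift -= 4 )
        std::cout << digits[ (bits >> shift) & 0xF ] ;
    std::cout << '\n' ;                            // 42030A3D
}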
Mar 6, 2012 at 5:35pm
Moschops' reply was more helpful. Anyway, I'm almost there with the JavaScript function; however, decimals preceded by "0." seem to be producing a slightly incorrect binary sequence, rendering the resulting HEX wrong. For anyone who's curious, here's the code I have so far (it might not be very readable, though).

Edit: I finished the function, for anyone interested I left the code at a more suitable location:
http://www.webdeveloper.com/forum/showthread.php?s=9da39b0389dccadebcc23e476beb9f61&t=257745
Last edited on Mar 6, 2012 at 10:03pm
Topic archived. No new replies allowed.