Can someone point me to a good reference about this please?
I've tried Google, but the results keep telling me to loop with some *= 16 approach, which did not give the expected result.
My test number is 32.76, which is supposed to come out as 42030A3D, but my attempts keep giving me something that starts with 208. I'd like a reference that shows me the process involved, not just some function.
Thank you, but this is just the function. I wanted to be able to study the process involved as well, so that I would not be restricted to using the std namespace or other namespaces. There are cases where I don't have access to those and need an alternative solution, which I cannot produce if I don't know or understand the process involved.
What do you think it does? What magic do you think is happening? JLB's code and my code do the same thing: you take the float, which is ultimately JUST some numbers in memory, and you INSIST to the compiler that it's actually an integer and that you want to see the hex value of that integer. That's it. I do it by casting; he does it (a lot more safely, and with a bonus static_assert checking that it's the encoding system you wanted) with a union.
There is no "process" involved, no magic. You just take the variable and force the compiler to consider it an integer, and then output it in hex.
I get the reinterpreting of the value, but that does not answer my original question of how the float is encoded to begin with. E.g., does the leftmost hex digit indicate the sign, or how many decimal places there are? Or is it the rightmost? How does the machine actually get the value that is then reinterpreted as an unsigned int? I'm not talking about just reinterpreting the value, but about how the machine produces that original string of hex in the first place, such that it can subsequently be read back to reproduce the float.
Inside the machine, there are no floats, no ints, no char or string or anything. It's just memory locations, and a number stored in that location.
Let's say that we have a memory location, and it has a value stored in it. It's the number 72.
Is that the integer 72? Or is it the char 'H'? They are both represented in memory by the number 72. There is no difference in memory. The only difference is how we choose to interpret it.
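To illustrate with that very value, here is the number 72 printed under two different interpretations:

#include <iostream>

int main()
{
    int n = 72;
    std::cout << n << '\n';                    // interpreted as an integer: 72
    std::cout << static_cast<char>(n) << '\n'; // the same number interpreted as a character: H
}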
Whilst it is an entertaining intellectual exercise to take the individual bits and write a function to manually calculate what floating point number they represent, do remember that using that function in anything other than demonstration code is just silly :)
I know. I'm just writing JavaScript at the moment (part of an effort to create dynamic help pages for a tool I made), so a C++-only approach isn't going to get me anywhere, but knowing how it is originally done in the machine will help me do it right in the JavaScript itself.
> Whilst it is an entertaining intellectual exercise to take the individual bits
> and write a function to manually calculate what floating point number
Take four bits at a time and each group will trivially map to a hex digit. The order in which the nibbles are considered is governed by endianness.
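As a sketch, assuming the 32 bits are already sitting in an unsigned integer, a hypothetical to_hex helper could take the most-significant nibble first, so the printed order does not depend on how the bytes happen to be laid out in memory:

#include <cstdint>
#include <string>

// Map 32 bits to 8 hex digits, one 4-bit nibble at a time.
std::string to_hex(std::uint32_t bits)
{
    const char digits[] = "0123456789ABCDEF";
    std::string out;
    for (int shift = 28; shift >= 0; shift -= 4) // most-significant nibble first
        out += digits[(bits >> shift) & 0xF];    // isolate one nibble, look up its digit
    return out;
}

// to_hex(0x42030A3D) == "42030A3D"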
Moschops' reply was more helpful. Anyway, I'm almost there with the JavaScript function; however, decimals preceded by "0." (i.e. values less than 1) seem to produce a slightly incorrect binary sequence, rendering the resulting hex wrong. For anyone who's curious, here's the code I have so far (it might not be very readable, though).
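For anyone who wants the process itself rather than the JavaScript, below is a rough C++ sketch of encoding a positive, normal value by hand, assuming the standard IEEE-754 single-precision layout (1 sign bit, 8 exponent bits with bias 127, 23 mantissa bits; zero, infinities, NaNs and denormals are deliberately left out). Values below 1, such as 0.76, are a classic stumbling block: normalisation has to drive the exponent negative, not just pad zeros.

#include <cstdint>
#include <iostream>

// Manually build the IEEE-754 single-precision bit pattern of a
// positive, normal value (no zero/inf/NaN/denormal handling).
std::uint32_t encode(double value)
{
    // Normalise: scale into [1, 2) and count the power of two.
    int exponent = 0;
    while (value >= 2.0) { value /= 2.0; ++exponent; }
    while (value < 1.0)  { value *= 2.0; --exponent; } // values like 0.76 need a negative exponent

    // The leading 1 of the significand is implicit, so drop it
    // and extract 23 fraction bits by repeated doubling.
    value -= 1.0;
    std::uint32_t mantissa = 0;
    for (int i = 0; i < 23; ++i) {
        value *= 2.0;
        mantissa <<= 1;
        if (value >= 1.0) { mantissa |= 1; value -= 1.0; }
    }
    if (value >= 0.5) ++mantissa; // round to nearest (simplified: a carry out of 23 bits would need to bump the exponent)

    std::uint32_t sign = 0;                // positive only in this sketch
    std::uint32_t biased = exponent + 127; // single-precision exponent bias
    return (sign << 31) | (biased << 23) | mantissa;
}

int main()
{
    std::cout << std::hex << std::uppercase << encode(32.76) << '\n'; // prints 42030A3D
}

Running it on the thread's test value reproduces 42030A3D: 32.76 normalises to 1.02375 × 2^5, the biased exponent is 132 (10000100), and the 23 extracted fraction bits are 00000110000101000111101.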