Anyway, not sure if this will help, but make sure you're doing all the shifting on unsigned types.
So change char to unsigned char.
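A quick sketch of why that matters (illustrative values only): with plain char, which is signed on most platforms, a byte like 240 comes out as -16, and it drags all the high bits along when it gets promoted for the shift/OR.

#include <cstdio>

int main()
{
    char sc = static_cast<char>(240);   // typically -16 where char is signed
    unsigned char uc = 240;

    // the signed char gets sign-extended on conversion, the unsigned char does not
    std::printf("%08X\n", static_cast<unsigned>(sc)); // FFFFFFF0
    std::printf("%08X\n", static_cast<unsigned>(uc)); // 000000F0
}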
And as lastchance suggested, some example input & output would be nice.
Given an input of your example {240, 79, 3, 62}, what do you expect the output to be?
Edit: The issue with your post is that on line 5, the result of those buffer ORs is going to be an integer. You're just assigning that integer's value to a float, not its actual bits.
You'll need to do pointer hackery to get what you want to work.
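By "pointer hackery" I mean reinterpreting the bit pattern rather than converting the number; the safe way to do that is std::memcpy. Roughly like this (a sketch; bitsToFloat is just a name I picked, and it assumes 32-bit IEEE floats):

#include <cstdint>
#include <cstring>

// copy a 32-bit pattern into a float instead of converting its numeric value
float bitsToFloat(std::uint32_t bits)
{
    float f;
    std::memcpy(&f, &bits, sizeof f);
    return f;
}

Pass it the result of your ORs once the bytes are treated as unsigned.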
Sadly, I do not know the expected result value.
In reality I am decoding a picture from a binary file.
The unsigned tip did the trick!
Now everything works fine, but I am a little bit confused.
I do not know the effective bit depth of the picture.
If I read the minimum and maximum pixel intensity values from the image, I get e.g. the following: Min: 1.02101e+09, Max: 1.05957e+09.
If I normalize all the values (Min = 0, Max = 255), the picture looks fine!
But I have no clue why the values are that high.
Do you have any idea?
So might this be a casting issue, or is it probably the original data?
It would be really nice to know the image's real format and how you got those 2 values.
Are you SURE it's not a standard RGB or RGBA or HSL type image in raw bytes, where someone's funky code just manhandles the types awkwardly? Or perhaps it's undoing the image's compression, which often does involve floating point types, but only as an intermediate between the compressed data and RGB or similar formats?
The original image is a hyperspectral image stored as ENVI (uncompressed binary file + header). From it I have selected one spectral channel, but all channels have these high values, which makes limited sense.
I determined these values with OpenCV, and I also normalized the image with it.
I suspect the software which created the ENVI files applied some unknown preprocessing.
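For reference, the OpenCV part is roughly this (a sketch from memory; "channel" stands for the single-band CV_32F cv::Mat I build from the decoded values):

#include <opencv2/core.hpp>

// sketch: read the value range of one band and stretch it to 8 bit for display
void inspectAndNormalize(const cv::Mat& channel, cv::Mat& out8bit)
{
    double minVal = 0.0, maxVal = 0.0;
    cv::minMaxLoc(channel, &minVal, &maxVal);            // e.g. Min: 1.02101e+09, Max: 1.05957e+09

    // map [min, max] onto [0, 255] and convert to 8-bit
    cv::normalize(channel, out8bit, 0, 255, cv::NORM_MINMAX, CV_8U);
}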
In that case yea, who knows what the expected values are going to be... maybe you have the right answer and just can't verify it easily? Do you have a baseline image with known data you can check against?
I did a very small amount of HS imagery long ago. I don't remember much of it; I just helped someone else a little... seems like we mixed and matched them into 24-bit images until we saw something interesting... it was from something NASA took 15 years or so back...
In the char array I have the binary representations of the unsigned integers as follows: buffer[0] = 222, buffer[1] = 216, buffer[2] = 247, buffer[3] = 60.
This converts incorrectly.
char buffer[4] = "";
// now filled with the data from above
unsigned char *uBuf32bit = (unsigned char *)buffer;
float value_32 = uBuf32bit[0] | (uBuf32bit[1] << 8) | (uBuf32bit[2] << 16) | (uBuf32bit[3] << 24);
// outcome: value_32 = 1.02288e+09
// correct result would be: value_32 = 0.0302548
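For what it's worth, here is a sketch of the corrected conversion (unsigned bytes plus a memcpy to reinterpret the bit pattern, as suggested above); on a little-endian machine with 32-bit IEEE floats it should reproduce the expected value:

#include <cstdint>
#include <cstring>
#include <cstdio>

int main()
{
    unsigned char buffer[4] = {222, 216, 247, 60};   // the bytes from above

    // assemble the 32-bit pattern from unsigned bytes (little-endian order)
    std::uint32_t bits = buffer[0] | (buffer[1] << 8) | (buffer[2] << 16)
                       | (static_cast<std::uint32_t>(buffer[3]) << 24);

    // reinterpret the bit pattern as a float instead of assigning the integer value
    float value_32;
    std::memcpy(&value_32, &bits, sizeof value_32);

    std::printf("%g\n", value_32);   // expected: 0.0302548 (the value given above)
}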