Right now I have a program that takes data from a machine and writes it to a binary file on the fly, so that I don't run out of memory and the machine's data buffer doesn't overflow. I now want to convert this data to a text file. My problem is that when I write the data in binary, it gets written in little-endian, but when I read it back for the conversion, it comes out in big-endian. Huh? I'm not even playing around with anything here.
For the conversion I have this:
Bin1.read(temp, 2);
value = (unsigned char)temp[0] * (unsigned char)temp[1];
csv << "," << value;
Where csv and Bin1 are obviously my file streams. Writing the data the first time around is exactly the same, except it's "Bin1.write(temp,2)". It's a two-byte unsigned integer, so the plan was to just multiply the first byte by the second, since it's like being in base-256 instead of base-10. Anyway, like I said, when I write the file in binary, it's little-endian: I know this because when I open the data with an external program and tick "little-endian", it shows me the correct values. After conversion to text, though, the output looks like what I'd get if I opened the binary file as big-endian.
Everything I've found about switching endian mode only swaps the byte order. Since I'm multiplying the two bytes anyway, byte order doesn't matter to me; what I'd need to switch is the bit order, and I don't know how to do that easily.
EDIT: Never mind, I did a stupid thing. It works now that I changed the code to this:
Bin1.read(temp, 2);
value = (unsigned char)temp[0] + (unsigned char)temp[1] * 256;
csv << "," << value;