Integer sizes with cstdint...

I am currently working on a Windows and Linux program that will share configuration files. It will be compiled as both x86 and x64 on both platforms. With that in mind I have started to use cstdint. My question is whether the integer types inside it (primarily uint16_t and uint32_t) will always be two and four bytes. If they are, then I am fine and my binary configuration files can be shared between platforms and architectures. If not, what are my options?
uint16_t will always be exactly 16 bits.
uint32_t will always be exactly 32 bits.
The C++ standard does not guarantee that a byte is 8 bits, but Windows and Linux are designed around 8-bit bytes, so a compiler targeting them with a different byte size would be both problematic and pointless. You are safe to assume 8-bit bytes on those platforms.
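
If you want the build to fail loudly rather than silently misbehave on some exotic target, you can encode those assumptions as compile-time checks. A minimal sketch, assuming a C++11 compiler (the messages are just illustrative):

#include <climits>
#include <cstdint>

// Fail compilation if the platform breaks the assumptions the
// binary configuration format relies on.
static_assert(CHAR_BIT == 8, "config format assumes 8-bit bytes");
static_assert(sizeof(std::uint16_t) == 2, "uint16_t must be exactly 2 bytes");
static_assert(sizeof(std::uint32_t) == 4, "uint32_t must be exactly 4 bytes");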
Alright, how about the other sizes? My concern is that I write a configuration file using my Linux laptop while out, come home, copy that file to my Windows 7 box, and then things do not read in correctly. Currently I am saving uint16_t as two bytes.
 
fwrite(&some_uint16_t_variable, 2, 1, pFile);

In other words I am not calling "sizeof(uint16_t)" during the writing. This ensures that a "short" is written and read as two bytes, but I need to know that this holds on Windows, Linux, and Mac, for both x86 and x86_64.
closed account (Dy7SLyTq)
Yes, they will. I couldn't find the thread, but I asked this question a while back and everyone assured me that, while it is not strictly guaranteed by the standard, it is kept consistent across Mac -> Linux -> Windows -> Unix.
If the types are defined, then yes, they are exactly the specified number of bits.

POSIX guarantees that a byte is eight bits. (It has to, in order to break the least amount of existing code.) Mac and Windows have eight-bit bytes. (You would have to be playing with fairly exotic hardware not to have eight-bit bytes.)


fwrite( (const char*)&something_that_isnt_a_char_array, n, 1, pFile ) is almost ALWAYS WRONG.

The safest option for binary files is to serialize properly, meaning that you convert everything to individual bytes yourself. Google "serialization" for more; here are some links that might help (a small byte-writing sketch follows them).

Binary I/O helpers:
http://www.cplusplus.com/forum/beginner/31584/#msg171056

Binary I/O on standard streams:
http://www.cplusplus.com/forum/beginner/11431/#msg53963

Why endianness matters:
http://www.cplusplus.com/forum/general/11554/#msg54622
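
As a concrete illustration of that advice, here is a minimal sketch (the helper names are made up for the example) that writes a uint16_t and a uint32_t one byte at a time in a fixed little-endian order, so the file means the same thing regardless of the host's endianness:

#include <cstdint>
#include <cstdio>

// Emit each value low byte first, so the on-disk layout is the
// same on every platform.
void write_u16_le(std::uint16_t v, std::FILE* f)
{
    unsigned char b[2] = {
        static_cast<unsigned char>(v & 0xFF),
        static_cast<unsigned char>((v >> 8) & 0xFF)
    };
    std::fwrite(b, 1, 2, f);
}

void write_u32_le(std::uint32_t v, std::FILE* f)
{
    unsigned char b[4] = {
        static_cast<unsigned char>(v & 0xFF),
        static_cast<unsigned char>((v >> 8) & 0xFF),
        static_cast<unsigned char>((v >> 16) & 0xFF),
        static_cast<unsigned char>((v >> 24) & 0xFF)
    };
    std::fwrite(b, 1, 4, f);
}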

If you are serializing floating point numbers, convert them to string and serialize the string.
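
If it helps, a tiny sketch of that idea (the precision is an arbitrary but round-trip-safe choice for IEEE doubles):

#include <cstdio>

// Write the double as text with enough digits to survive a round
// trip; read it back with strtod or fscanf("%lf", ...) on the other end.
void write_double_as_text(double d, std::FILE* f)
{
    std::fprintf(f, "%.17g\n", d);
}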

Hope this helps.
Can you explain to me why my fwrite example is wrong? I will look up serialization later this evening when I get home, but I have been storing binary data that way since DOS on a Vendex Headstart 286 and have never had an issue with it. I even found that example in an old C book from the late '80s or early '90s.

*EDIT*

I am now starting to use iostream since I am using C++. Would this information still matter?

*EDIT*

I took some time to read up on serialization, and I am seeing material about how it lets you save an entire class to a file and read it back in properly, but I am not doing that at this time. Are you saying I should serialize normal variables somehow?
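
For what it is worth, "serializing a normal variable" just means doing by hand what the byte-at-a-time sketch above does, plus the matching read side. A sketch of reading a uint32_t back, again assuming the little-endian layout used above (the helper name is hypothetical):

#include <cstdint>
#include <cstdio>

// Counterpart to write_u32_le: rebuild the value from four bytes
// stored low byte first, independent of host endianness.
bool read_u32_le(std::uint32_t& out, std::FILE* f)
{
    unsigned char b[4];
    if (std::fread(b, 1, 4, f) != 4)
        return false;
    out = static_cast<std::uint32_t>(b[0])
        | (static_cast<std::uint32_t>(b[1]) << 8)
        | (static_cast<std::uint32_t>(b[2]) << 16)
        | (static_cast<std::uint32_t>(b[3]) << 24);
    return true;
}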