Size of unsigned long and unsigned long long cross-platform

Hello,

I am working on two different machines. One is a Mac with an Intel Core 2 Duo processor, and the other is a quad-core Intel Xeon machine running Ubuntu 10.5. Both are 64-bit processors.

On the Mac, the size of an unsigned long is 4 bytes and the size of an unsigned long long is 8 bytes, whereas on the Ubuntu machine both types come out to 8 bytes. Note that I used sizeof() to get the sizes.
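For reference, the check was essentially just a couple of sizeof calls along these lines:

#include <iostream>

int main()
{
    std::cout << "unsigned long:      " << sizeof(unsigned long) << " bytes\n"
              << "unsigned long long: " << sizeof(unsigned long long) << " bytes\n";
    return 0;
}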

I was wondering if someone could explain why the Ubuntu machine is doing this?
The C++ standard only requires that sizeof(int) <= sizeof(long) <= sizeof(long long); beyond that, the exact sizes are up to the implementation.

So that is just another way they could do it.
Hmmm... That's weird. I wrote code that relies on 64 bits using the data type unsigned long (since on the Ubuntu machine it came out as 8 bytes). That same code probably wouldn't work on the Mac, then, since its unsigned long is only 4 bytes.

What if I only needed 4 bytes of memory? If I were working on the Ubuntu machine, would I always be wasting 4 bytes every time I created an unsigned long? That doesn't seem right.
If you want 4 bytes *exactly*, use an array of chars, which are required to be 1 byte each.

The sizes of int and long are whatever the compiler/platform decides they should be, so you shouldn't depend on them being a specific length.
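For instance, a minimal sketch of the char-array approach:

#include <iostream>

int main()
{
    unsigned char buf[4];              // sizeof(char) is 1 by definition, so this is exactly 4 bytes
    std::cout << sizeof(buf) << '\n';  // prints 4 on every platform
    return 0;
}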
I'll definitely keep that in mind! However, what if I needed to bit-shift those 4 bytes? I can't bit-shift an array of chars. This is crazy. Bit-shifting must not be very portable.

If you need to bit-shift, you could try using a std::bitset.
stdint.h defines integral types of fixed sizes and sizes with certain guarantees (like int32_t, int_fast32_t, int_least32_t).
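For example (a sketch, assuming your compiler provides <stdint.h>, which most do even in C++98 mode):

#include <stdint.h>
#include <bitset>
#include <iostream>

int main()
{
    uint32_t value = 0x12345678;    // exactly 32 bits wherever uint32_t exists
    uint32_t shifted = value << 8;  // ordinary bit-shifting on a fixed-width type

    std::bitset<32> bits(value);    // or use a fixed-size bitset, which can also be shifted
    bits <<= 8;

    std::cout << std::hex << shifted << '\n' << bits << '\n';
    return 0;
}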
The C++ standard only requires that sizeof(int) <= sizeof(long) <= sizeof(long long).

ISO C++ 1998 does not support long long at all, though. (Where it is supported, sizeof(long) <= sizeof(long long) does hold.)
^Whoops, yeah that's right. Basically everyone supports it though, and I think C++0x would follow the same rules anyway when they add it.
I don't have an OS X installation, but FWIW:

On 32-bit OS X installations, sizeof(long) should be 4 (32 bits), and on 64-bit OS X installations (only available with OS X Tiger v10.4 and newer), sizeof(long) should be 8 (64 bits). 32-bit OS X uses ILP32 model, and 64-bit OS X uses the LP64 model just like GNU/Linux, Solaris and virtually any other available Unix and Unix-like system that is 64-bit. If you're reasonably interested in the reason behind the change for UNIX (not OS X-specific), check out http://www.unix.org/version2/whatsnew/lp64_wp.html.

Regardless of whether your processor is 64-bit capable or not, if you are using a 32-bit kernel then you will be limited to 32-bit programs that use the ILP32 data model. Otherwise you are using the LP64 data model (as on most 64-bit Unix and Unix-like installations). Of course, this shouldn't be a problem on a Mac: certain processors only ship with certain machines, and since you are running an Intel Core 2 Duo, you should be able to run a 64-bit kernel.
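If you want to see which data model your own build is actually using, a quick sketch like this will show it:

#include <iostream>

int main()
{
    // LP64:  int is 32-bit, long and pointers are 64-bit.
    // ILP32: int, long and pointers are all 32-bit.
    std::cout << "int:   " << sizeof(int)   << " bytes\n"
              << "long:  " << sizeof(long)  << " bytes\n"
              << "void*: " << sizeof(void*) << " bytes\n";

    if (sizeof(long) == 8 && sizeof(void*) == 8)
        std::cout << "Looks like LP64\n";
    else if (sizeof(long) == 4 && sizeof(void*) == 4)
        std::cout << "Looks like ILP32\n";
    return 0;
}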

Warning about OS X Snow Leopard (v10.6):
If you're running Snow Leopard, it apparently defaults to a 32-bit kernel. Click on the Apple menu and choose "About This Mac", then click the "More Info..." button. Click on the "Software" category and in the "System Software Overview" find the "64-bit Kernel and Extensions" item. If it says "No" then you are running in 32-bit mode, in which case you should reboot and hold down the "6" and "4" keys during the boot. Go back to the item and it should say "Yes".
- Source: http://www.askdavetaylor.com/snow_leopard_running_32_bit_64_bit_32bit_64bit.html

For more details on the specifics of the data type models and sizes used in OS X, check out http://developer.apple.com/library/mac/#documentation/Darwin/Conceptual/64bitPorting/transition/transition.html#//apple_ref/doc/uid/TP40001064-CH207-CHDGGBDA
As pointed out by the Apple document referenced above, 64-bit Mac OS X uses the LP64 size model, just like 64-bit Linux. Basically, this means "int" and "unsigned" stay 32-bit values, while "long" and pointers become 64-bit. If you explicitly want a 32-bit integer or a 64-bit integer, use the typedef types int32_t, uint32_t, int64_t, and uint64_t, usually declared in sys/types.h.
softweyr wrote:
usually declared in sys/types.h.

kbw in his earlier post was right -- they are in <stdint.h> per C99 (and <cstdint> in C++0x). This should be preferred over <sys/types.h> as the more portable solution.

http://en.wikipedia.org/wiki/Stdint.h

Virtually all modern C++ compilers ship the C99 <stdint.h> header, so availability isn't really a problem.
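A minimal sketch of the portable approach:

#include <stdint.h>   // <cstdint> in C++0x
#include <iostream>

int main()
{
    uint32_t small = 0xDEADBEEF;             // exactly 32 bits wherever these typedefs exist
    uint64_t big   = (uint64_t)small << 32;  // shift done in 64 bits, so no bits are lost

    std::cout << sizeof(small) << ' ' << sizeof(big) << '\n';  // prints "4 8"
    return 0;
}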