I think the title makes my question clear enough :P. So, what is the difference? I know that pointers are 64 bits wide on 64-bit systems and 32 bits wide on 32-bit systems, but is that the only difference? I always thought you almost had to recreate the whole application ;). Let's see.
One pops up in my mind: the usable address space is vastly larger on 64-bit systems, whereas a 32-bit process typically gets only about 2 GB of user address space (roughly 31 bits' worth). This means it's possible for a single application to use more than 2 GB of RAM. ;)
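To make that concrete, here is a minimal C++ sketch (the 3 GB figure is just an illustrative number, and the "about 2 GB per 32-bit process" assumption matches typical Windows defaults). It prints the pointer width and then attempts one large allocation, which will usually throw std::bad_alloc in a 32-bit build and succeed in a 64-bit build if enough memory is available:

```cpp
#include <cstdio>
#include <new>

int main() {
    // Pointer width tells you what the build targets: 4 bytes for 32-bit, 8 for 64-bit.
    std::printf("sizeof(void*) = %zu\n", sizeof(void*));

    // A single allocation larger than 2 GB: typically impossible in a 32-bit
    // process (user address space is roughly 2 GB), usually fine in a 64-bit
    // build when enough physical memory or swap is available.
    try {
        char* big = new char[3ull * 1024 * 1024 * 1024];  // 3 GB
        std::puts("3 GB allocation succeeded");
        delete[] big;
    } catch (const std::bad_alloc&) {
        std::puts("3 GB allocation failed");
    }
}
```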
OK, so that means that if we haven't done any mathematical operations on pointers in our code, then we don't have to change that code?
If I am wrong, please correct me!
I think you are wrong, though, and the opposite is true: if you do have mathematical operations on pointers, you most probably have to adjust your code. ;-)
There is plenty of room for things that worked on 32-bit machines without actually being guaranteed to work from a pure standard-C++ point of view, and which may or may not work on 64-bit machines. Take sizeof(some_struct_containing_pointer) and reinterpret_cast (or the old-style C cast) between pointers and integers, for example.
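To illustrate both pitfalls with a made-up Node struct (the struct and the exact sizes are just an example): the sizeof of a pointer-containing struct changes between builds, and the old habit of parking a pointer in a 32-bit integer either stops compiling or silently truncates on 64-bit. std::uintptr_t from <cstdint> is the portable way to round-trip a pointer through an integer, assuming your compiler provides it (practically all do):

```cpp
#include <cstdint>
#include <cstdio>

struct Node {
    int   id;     // 4 bytes
    Node* next;   // 4 bytes on 32-bit, 8 bytes on 64-bit
};

int main() {
    // Typically 8 in a 32-bit build, 16 in a 64-bit build (the 8-byte pointer
    // also forces 4 bytes of padding after 'id'). Anything that hard-codes
    // this size, such as file formats or memcpy offsets, is in trouble.
    std::printf("sizeof(Node) = %zu\n", sizeof(Node));

    Node n{};
    // The 32-bit-era habit:
    //   unsigned int h = (unsigned int)&n;   // truncates the address on 64-bit
    // Depending on compiler and cast style that is either a hard error or a
    // silent truncation. std::uintptr_t round-trips a pointer safely:
    std::uintptr_t h = reinterpret_cast<std::uintptr_t>(&n);
    Node* back = reinterpret_cast<Node*>(h);
    std::printf("round-trip ok: %d\n", back == &n ? 1 : 0);
}
```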
Is sizeof(int) now 8 bytes on 64-bit machines? (I don't know, but it would sound reasonable to me.) If it is, you can probably go and inspect all your sloppily written networking code. If it isn't, you can spend the time looking at all the HANDLE-style code that used to store pointers in integers.
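For what it's worth, on the common 64-bit data models sizeof(int) stays 4; it's long and pointers that change, and that is exactly what bites serialization code that writes raw types to the wire. A hedged sketch, using toy message structs of my own invention (real code would also serialize field by field with an explicit byte order):

```cpp
#include <cstdint>
#include <cstdio>

// The "sloppy" version: the wire size of this struct depends on the data
// model (long is 4 bytes on 64-bit Windows but 8 bytes on 64-bit Linux/macOS),
// plus whatever padding the compiler adds.
struct SloppyMsg {
    long  length;
    short type;
};

// The robust version: fixed-width types, so both ends of the connection
// agree on the field sizes whether they are 32- or 64-bit builds.
struct PortableMsg {
    std::uint32_t length;
    std::uint16_t type;
};

int main() {
    std::printf("sizeof(SloppyMsg)   = %zu (varies by platform)\n", sizeof(SloppyMsg));
    std::printf("sizeof(PortableMsg) = %zu\n", sizeof(PortableMsg));
}
```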
OK, so I have to watch out for anything whose size isn't pinned down by the C++ standard, but I already guessed that :P. Anyway, a new question pops up: I'm actually using (unsigned) __int8, __int16, __int32 and __int64 in my application. I #define them to names like uint8, because otherwise I would have to type a lot :P. My question: that wouldn't cause any problems, right?
'int' has no meaning to the machine. The one who assigns meaning to types is the compiler. In other words, a compiler for x86-64 might make sizeof(int)==2 and still be compliant.
Generally speaking, though, an int is typically the natural word size of the target. In practice, most 64-bit compilers keep int at 32 bits and only widen long and/or pointers (LP64 on 64-bit Unix-likes, LLP64 on 64-bit Windows).
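If you want to see which data model your own compiler uses, a plain sizeof dump is enough; the expected values in the comments below are what the common ILP32/LP64/LLP64 models specify:

```cpp
#include <cstdio>

int main() {
    // ILP32 (32-bit):         int=4, long=4, long long=8, void*=4
    // LP64  (64-bit Unix):    int=4, long=8, long long=8, void*=8
    // LLP64 (64-bit Windows): int=4, long=4, long long=8, void*=8
    std::printf("int:       %zu\n", sizeof(int));
    std::printf("long:      %zu\n", sizeof(long));
    std::printf("long long: %zu\n", sizeof(long long));
    std::printf("void*:     %zu\n", sizeof(void*));
}
```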
I'm actually using (unsigned) __int8, __int16, __int32 and __int64 in my application. I #define them to names like uint8, because otherwise I would have to type a lot :P. My question: that wouldn't cause any problems, right?
__int# are compiler extensions, but they have the advantage that their size is unambiguously defined, as opposed to, say, 'int'. Unlike with standard types, it's safe to assume their size without using sizeof.
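If your compiler ships <cstdint> (C++11) or <stdint.h> (C99), the standard offers the same guarantee portably, so the #define layer isn't needed. A sketch of swapping the macros for typedefs; the short names uint8, uint32, int64 and so on just mirror the ones mentioned above:

```cpp
#include <cstdint>

// Instead of:
//   #define uint8  unsigned __int8
//   #define int64  __int64
// typedefs (or C++11 'using' aliases) of the standard fixed-width types give
// the same short names without tying the code to one compiler's extensions.
typedef std::uint8_t  uint8;
typedef std::uint16_t uint16;
typedef std::uint32_t uint32;
typedef std::uint64_t uint64;
typedef std::int64_t  int64;

// The widths are guaranteed, so assumptions like these hold on both
// 32- and 64-bit builds:
static_assert(sizeof(uint32) == 4, "uint32 must be 4 bytes");
static_assert(sizeof(int64)  == 8, "int64 must be 8 bytes");

int main() {
    uint8  flags  = 0x0F;
    uint64 offset = 0;
    (void)flags; (void)offset;
}
```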