The size of pointers on a 64-bit system

According to cppreference, a pointer on a 64-bit system is 8 bytes = 64 bits, enough to distinguish 2^64 addresses.
However, the addresses shown in the debugger fit in 8 hexadecimal digits, e.g. 0x0095F24E. Since 16^8 = 2^32, 32 bits = 4 bytes of memory would be sufficient to represent them.

I am confused right now: did my calculation go wrong? If not, then what are the other 4 bytes of the pointer doing?
A pointer in a program compiled as 64-bit is 8 bytes (64 bits). However, if the program is compiled as 32-bit, then the pointer is 4 bytes (32 bits). It looks like the program was compiled as 32-bit?

Pointers are more or less directly an offset into your system's RAM.
If your system is 64-bit, you probably have 32 or 64 or so GB of memory. Memory is measured in base 2, of course, so a GB of memory is really 2^30 bytes; with 64 GB you have 64 * 2^30 ≈ 6.9e10 bytes.

A 32-bit pointer cannot address that many locations: it can only address 2^32 ≈ 4.3e9 bytes (4 GB of memory). Every extra bit doubles the number of addressable locations.

Windows can run a 32-bit program on a 64-bit machine, but the 32-bit program cannot use the extra memory because its pointers are too small. The OS can even give it high RAM offsets and manage that behind the scenes (so the 32-bit pointer is offset by a 64-bit number to get the true location), but the program can use at most a fraction of the RAM available (which is fine; it was designed for those smaller sizes).

Fun history lesson: DOS was a 16-bit OS, and later versions had a 32-bit overlay that used the higher parts of RAM (the 386 and up were 32-bit machines, maybe the 286 too; memory is fading a bit). So DOS could talk to many MB of RAM, but only in 16-bit chunks; the overlay had to compute offsets and swap things around to let it use all your memory :)

Thanks for the fast reply.
I "guess" I am using a 64-bit g++ to compile; the following is the output of the terminal:

C:\Users\Simon>g++ --version
g++ (GCC) 9.2.0
Copyright (C) 2019 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO

C:\Users\Simon>gcc -v
Using built-in specs.
Target: x86_64-w64-mingw32
Configured with: ../src/configure --enable-languages=c,c++ --build=x86_64-w64-mingw32 --host=x86_64-w64-mingw32 --target=x86_64-w64-mingw32 --disable-multilib --prefix=/c/temp/gcc/dest --with-sysroot=/c/temp/gcc/dest --disable-libstdcxx-pch --disable-libstdcxx-verbose --disable-nls --disable-shared --disable-win32-registry --with-tune=haswell --enable-threads=posix --enable-libgomp
Thread model: posix
gcc version 9.2.0 (GCC)

Google says gcc defaults to 64-bit, and the target in gcc -v says x86_64-w64-mingw32. (So many numbers, I am confused XD)

Wow, it is like magic! How does that overlay even work?

Here is my understanding, and correct me if I am wrong: it is as if a seller is selling 6 boxes labeled 1-6 (unique memory addresses), and even though one person can only speak numbers in the range 1-3 (unique pointer combinations), somehow the seller knows whether the person is referring to boxes 4, 5, 6 or boxes 1, 2, 3???
There was what was called extended and expanded memory, and much fun was had in assembler code with TSRs etc.!
> According to cpp reference, a pointer in 64 system is 8 bytes

cppreference is talking about the data models in common use;
the size of pointer referred to is the size of a pointer to object.

Sizes of pointers to (non-static) member functions can be (and typically are) larger.

For example:
#include <iostream>
#include <cstring>

struct A
{
    virtual void foo() {} // definition added so the example links
    void bar() {}
    int v = 9 ;
};

#define print_size(x) ( std::cout << "sizeof( " #x " ) == " << sizeof(x) << '\n' )

int main()
{
    const int* ptr_object = nullptr ; // pointer to object
    auto ptr_fun = &std::strlen ; // pointer to function
    auto ptr_mem_object = &A::v ; // pointer to member object
    auto ptr_mem_fun = &A::bar ; // pointer to member function

    print_size(ptr_object) ;
    print_size(ptr_fun) ;
    print_size(ptr_mem_object) ;
    print_size(ptr_mem_fun) ;
}

sizeof( ptr_object ) == 8
sizeof( ptr_fun ) == 8
sizeof( ptr_mem_object ) == 8
sizeof( ptr_mem_fun ) == 16

Thanks for the info. At my current level, I do not understand a thing, and it feels like magic right now XD. Maybe I will come back later and be able to understand it eventually.
Correct me if I am wrong: even though the typical memory address needs only 4 bytes of memory to store, a "good" pointer carries additional information for addressing more memory, and hence the additional 4 bytes?

So, for example, here is what I see in VS 2019 Community -> debug -> the memory of a pointer (the this pointer):

0x0092F7DC || 98 ee b9 00 --> ingredients for some black magic?
0x0092F7E0 || 58 1b ba 00 --> memory of the object in 0x00ba1b58

(p.s. I am not used to this reading-backward practice yet...)
> Correct me if I am wrong: even though the typical memory address needs only 4 bytes of memory to store ...
Yeah, that's wrong. There's no such thing as a typical address.

In the beginning, there was just the computer word. That was the size of memory cells, of registers, and of how much the ALU would process at a time.

The word size varied, but in the Unix world a crisis happened when a computer came along (the PDP-11) that had 16-bit words but 8-bit memory. That spawned a whole rethink of the systems programming language, because it could no longer address individual bytes, only words (every other byte). So the memory model was revised and a new systems programming language was born to deal with it. The new language was C (replacing the old one, B), and the new types were:
char = byte
int = word
pointer = word
a signed/unsigned qualifier
float / double

There was a similar sort of thing again when Intel made their first 16-bit processor, the 8086. There was no affordable 16-bit memory to go with it, so they also released the 8088, a variant of the 8086 with an 8-bit external bus that could use 8-bit memory and support chips.

Those 16-bit processors had an innovative addressing scheme to get more than 2^16 addresses by specifying an address with two registers. It was a bit too early in time to go full 32-bit, so they had this segment/offset scheme, where 64k segments started every 16 bytes and overlapped each other heavily, giving 2^20 = 1 MB of address space. Why 64k? Because the design was based on 8-bit tech. Later on, spare address space was used to install expanded memory cards that would bank-switch additional memory in and out through a small page window. Bonkers.

When Windows moved from 16 to 32-bit, they continued to support 16-bit apps in a Windows-on-Windows layer (WoW), and did pretty much the same again when they moved from 32 to 64-bit. Windows ran the 32-bit processor in segment/offset mode with the segment always zero, giving a "flat" 32-bit address space. I don't know what they did for 64-bit, as they took forever to start using 64-bit apps.

The original idea behind C++'s allocator was to encapsulate these different addressing schemes in a portable way. I have no idea what allocators are used for now, other than custom heaps.

The main points are:
1. Addressing is invented; it can be pretty much anything.
2. The software must support whatever the underlying hardware does, or you can't use it.
3. It would be nice if 32-bit always meant 2^32 addresses, and so on, but it's pretty much arbitrary, and hardware isn't always nice.
> The original ideal behind C++'s allocator was to encapsulate these different addressing schemes in a portable way.

An example of a custom allocator that encapsulates some messy details is Boost.Interprocess's allocator (essentially, the pointer it uses holds a relative address).