||=== Build: Debug in chomework (compiler: GNU GCC Compiler) ===|
main.c||In function ‘main’:|
main.c|18|warning: cast from pointer to integer of different size [-Wpointer-to-int-cast]|
main.c|18|warning: cast from pointer to integer of different size [-Wpointer-to-int-cast]|
main.c|6|warning: unused variable ‘a2’ [-Wunused-variable]|
main.c|5|warning: unused variable ‘a’ [-Wunused-variable]|
||=== Build finished: 0 error(s), 4 warning(s) (0 minute(s), 0 second(s)) ===|
What's interesting is that with my VS2019, compiling as x64 (debug and release), the casts to int fail:
1>C:\Programming\My Projects\Project2\Project2\Source.cpp(17,28): error C2440: 'type cast': cannot convert from 'char **' to 'int'
1>C:\Programming\My Projects\Project2\Project2\Source.cpp(17,13): message : The target is not large enough
1>C:\Programming\My Projects\Project2\Project2\Source.cpp(17,45): error C2440: 'type cast': cannot convert from 'char **' to 'int'
1>C:\Programming\My Projects\Project2\Project2\Source.cpp(17,30): message : The target is not large enough
That's because you used a C++ compiler, which is stricter about conversions than a C compiler. My C++ compiler also generates errors, but when the same code is compiled as C, "only" warnings are emitted.
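We can't see the offending line itself, but judging from the diagnostics (a cast from char** to int), a minimal sketch of the problem and of a cast both compilers accept might look like this (the variable names are invented for illustration):

#include <cstdio>
#include <cstdint>

int main(int argc, char** argv)
{
    (void)argc;

    // Truncating a pointer into an int is what draws gcc's
    // -Wpointer-to-int-cast warning in C and MSVC's error C2440 in C++:
    // int bad = (int)argv;

    // Casting to an integer type wide enough to hold a pointer is
    // accepted by both languages.
    std::uintptr_t ok = (std::uintptr_t)argv;
    std::printf("%ju\n", (std::uintmax_t)ok);
}

If you genuinely need a pointer's value as an integer, uintptr_t from <stdint.h>/<cstdint> is the type intended for that.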
A pointer has the same storage requirements no matter what data type it points to: a char* and an int* have the same size. The only difference is whether the code is compiled as 32-bit or 64-bit.
#include <stdio.h>
int main()
{
printf("%zu, %zu\n", sizeof(char*), sizeof(int*));
}
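On a typical 64-bit build that prints 8, 8; on a 32-bit build it prints 4, 4.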
A pointer has the same storage requirements no matter what data type it points to: a char* and an int* have the same size. The only difference is whether the code is compiled as 32-bit or 64-bit.
Perhaps I'm being too pedantic, but I want to clarify that this restriction is actually from the Visual C++ compiler, and not from the C++ standard itself.
And pointers to member functions can be a size other than 4 or 8 bytes, even in MSVC.
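A minimal sketch of that, assuming one class with single inheritance and one with multiple inheritance (none of this is from the thread):

#include <cstdio>

struct Single { void f(); };
struct Base1  { void g(); };
struct Base2  { void h(); };
struct Multi : Base1, Base2 { void f(); };

int main()
{
    // With MSVC's default representation, a pointer to member of a
    // multiply-inherited class carries extra adjustment data, so the two
    // sizes typically differ; GCC/Clang use two machine words for both.
    std::printf("%zu %zu\n",
                sizeof(void (Single::*)()),
                sizeof(void (Multi::*)()));
}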
The standard may not mandate the size of pointers, but it isn't just MSVC++ that makes pointers the same size regardless of the fundamental data type being pointed to. MinGW/GCC has the same behavior as MSVC.
A pointer is basically an array index into memory.
If you are on a 32-bit machine (OS, if not hardware), then you can only address 2^32 units of memory, so a pointer does not need to be 64 bits, and 16 bits won't do the job.
This is why a 32-bit OS on your new 64 GB RAM powerhouse can only use 4 GB of the memory it holds (!!!).
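A quick sketch of the arithmetic behind that limit, just the powers of two involved:

#include <cstdio>

int main()
{
    // Addressable bytes for a given pointer width: 2^N.
    std::printf("16-bit: %llu bytes (64 KiB)\n", 1ULL << 16);
    std::printf("32-bit: %llu bytes (4 GiB)\n",  1ULL << 32);
    // 2^64 overflows a 64-bit unsigned integer, so show it as a double:
    std::printf("64-bit: about %.3e bytes (16 EiB)\n", 18446744073709551616.0);
}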
In theory there is an optimal size for a pointer on a given box: just enough bits to address the RAM the machine has. But for portability, implementations use the widest size the OS/hardware can support, which happily matches common integer sizes and is currently a 64-bit value.
Oddity: with 64-bit, it may actually come to pass that 128-bit machines appear long before we reach even what 64 bits can address in memory, and they may decide to stick with 8-byte (64-bit) pointers if that happens. Before now, machines could always reach their maximum capacity (see the 32-bit and 4 GB issue again). You can also use a memory manager to go over the limit (see DOS): it pairs a second pointer with the first, plus an extra program/service to manage it; this is how DOS got past the 16-bit limit in its late days on 32-bit machines.
To me, the premise of the question is wrong, because the OP's code relies on the two variables being placed in adjacent memory locations. Sure, that is very likely to happen when the two definitions appear on consecutive lines of code, but one shouldn't rely on it. It would be a different story if they were two items in an array.
Basically, IMO, the question is an obfuscated way of asking what the sizeof of a pointer is.
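To make the distinction concrete, here is a sketch reusing the variable names a and a2 from the OP's warnings (the rest is invented for illustration):

#include <cstdio>

int main()
{
    // Two separate locals: the compiler may place them anywhere, in any
    // order, with any padding; nothing guarantees they are adjacent.
    int a  = 1;
    int a2 = 2;
    std::printf("&a = %p, &a2 = %p\n", (void*)&a, (void*)&a2);

    // Two elements of an array are guaranteed to be contiguous, so
    // pointer arithmetic between them is well defined.
    int arr[2] = { 1, 2 };
    std::printf("arr[1] sits %td element(s) after arr[0]\n", &arr[1] - &arr[0]);
}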
Furry Guy wrote:
Something to remember in the future... Compile C code AS .c code. Not as .cpp code.
I am sure I read somewhere (the Core Guidelines?) that one should compile C code with a C++ compiler, because C++ is better at dealing with types. I am not sure whether VS uses different compilers based on the file extension, conceptually similar to gcc for C and g++ for C++?
Either way, warnings or errors, it's an indication the code is probably wrong.