I know there are many reasons why const is better than #define for declaring a constant. However, wouldn't const actually create a variable in RAM on some platforms, while a #define would simply put the data in ROM? If I used const to replace #define, would I use a lot more RAM?
wouldn't const actually create a variable in RAM on some platforms
It's not possible to know what kind of memory device an object will end up in without knowing internal details about the OS. Maybe it will go to RAM, maybe it will go to the swap file.
Regardless, the compiler will usually only allocate storage for a const if its address is taken or if it's of a user-defined type. Otherwise, most compilers will optimize the constant away.
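A minimal sketch of the two situations (all names here are invented for illustration): a const whose address is never taken can be folded into each use, while taking its address forces the compiler to give it a real location:

```cpp
#include <cassert>

const int max_items = 64;           // address never taken: typically folded
                                    // into each use, no storage allocated

const int table_size = 128;         // its address is taken on the next line,
const int* size_ptr = &table_size;  // so the compiler must give it storage

int capacity() { return max_items * 2; } // compiles as if it were 'return 128;'
```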
while a #define will simply put the data in ROM?
ROM cannot be modified, which means a running program cannot load data into it.
Macros are text replacements on the source, which means that these two snippets compile to the same binary code:
#define PI 3.141592
double x = PI;

double x = 3.141592;
In short, under most circumstances a const and a macro perform equivalently, but a const has added benefits, such as having a well-defined type.
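A small sketch of those added benefits (the names are illustrative): unlike a macro, a const has a real type the compiler can check, and it obeys normal C++ scoping rules:

```cpp
#include <cstddef>

#define BUFFER_SIZE 512                   // plain text substitution: no type, no scope

const std::size_t buffer_size = 512;      // a real std::size_t the compiler can check

namespace net {
    const std::size_t buffer_size = 1024; // a const respects namespaces; a macro
}                                         // with the same name would clash everywhere
```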
Not on a modern computer, but you are right: #define will hard-code the value into the compiled program, possibly making the executable a bit bigger, whereas defining a variable as const will store it in RAM during execution. However, in most cases the variable will take no more than a handful of bytes.
const int i = 5; // ~4 bytes, about 0.0039 KB, or about 3.7*10^-7 % of a GB of RAM
#define i 5      // this should not change the size of the executable,
                 // though your source file may be around 5 bytes smaller
Thanks for the replies. So it is true that using #define to replace constants is more deterministic than using const in terms of memory usage, although some compilers may optimize the const away anyway. In some programs there may be hundreds or even thousands of constants, so if all the consts are loaded into RAM, it could be a problem.
In some programs there may be hundreds or even thousands of constants, so if all the consts are loaded into RAM, it could be a problem.
If you have a program with hundreds or thousands of constants, then the program is also absurdly large, and the memory the constants use is negligible. Even if that's not the case, there are only two possible cases:
1. The constant is of a built-in type.
2. The constant is of a user type.
For case #1, the constant will probably not be larger than 8 bytes. 8*1000 bytes (and 1000, I remind you, is an absurd number of constants) is less than 8 KiB, which is negligible even in small programs.
For case #2, you can't use macros anyway.
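To illustrate case #2 (the types here are invented for the example): a constant of user-defined type is a single shared object, possibly with a constructor, which a textual macro cannot provide:

```cpp
#include <string>

struct Color { unsigned char r, g, b; };

// One shared object with an address; a macro could only paste the
// initializer text into every use site.
const Color forest_green = {34, 139, 34};

// This constant runs std::string's constructor once; a #define would
// create a fresh temporary at every expansion instead.
const std::string greeting = "hello";
```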
In any case, you can't make the value 3.141592f magically go away. You either store it in some memory location so the CPU can read it, or you embed it into an instruction. Either way, you have to keep it somewhere.
Yes, 1000 constants is probably excessive, but 100-200 is very reasonable, which can take about 400 bytes. 8 KiB is negligible in a PC environment, but not in an embedded app where the micro may only have 4 KiB of RAM and 64 KiB of ROM. 400 bytes eat up ~10% of the RAM (and you still need a copy in ROM to initialize the RAM copy).
No. Not for constants of the form const T x = y, anyway. Something like CRC32's lookup table uses 256 elements and 1 KiB of memory.
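As a sketch of that kind of table (a tiny 16-entry parity table stands in here for the full 256-entry CRC-32 one): declared const with a constant initializer, the toolchain can place it in a read-only section, so it need not occupy writable RAM:

```cpp
#include <cstdint>

// const + constant initializer: eligible for a read-only data section.
const std::uint32_t nibble_parity[16] = {
    0, 1, 1, 0, 1, 0, 0, 1,
    1, 0, 0, 1, 0, 1, 1, 0
};

// Combine 4 bits at a time through the table to get the parity of a word.
std::uint32_t parity_of(std::uint32_t x) {
    std::uint32_t p = 0;
    while (x) { p ^= nibble_parity[x & 0xF]; x >>= 4; }
    return p;
}
```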
the micro may only have 4k RAM and 64k ROM
Like I said, you'll have to keep the value around no matter what. I don't see how a+0xdeadbeef would translate to a smaller memory footprint than a+b.