Generally speaking, a program fragment like this:
    int a[5];
    for (int i = 0; i < 10; ++i) a[i] = i;
will often appear to work. It may crash sometimes, and it may crash reliably in debug builds, but in most release builds the out-of-bounds writes will simply go through. Still, the program containing the fragment will usually fail. It may happen to work by chance, but in general you should expect it to fail. So this is a bug.
To summarize: the rule that you must allocate the proper amount of memory exists not because this particular fragment will crash the moment it writes outside its bounds, but because the program as a whole will misbehave. And that is what makes such bugs so hard to debug: the error shows up where you least expect it, far from where the cause is.
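For reference, the fragment is fixed simply by keeping every index inside the declared size, for example by declaring the array as large as the loop needs:

    int a[10];                              // room for every index the loop touches
    for (int i = 0; i < 10; ++i) a[i] = i;  // i stays in 0..9, inside the array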
Sometimes an illegal access will crash the program. In hosted environments, i.e. those running under an OS, there is usually virtual memory management in place. This means that programs access one big virtual address space instead of physical memory itself, without being aware of it. The code appears to access memory directly, but the instructions actually go through data structures that the CPU itself maintains. One consequence is that if you have not declared your intention to store something in a particular memory range, the CPU will signal the OS that something wrong is happening and the OS will terminate your program (or do other things, but let's keep it simple). This does not happen every time, because the CPU's bookkeeping is not fine grained: it splits the address space into pages, typically 4 KB each. So an illegal modification inside a page the program already uses will probably corrupt data, but will not trigger the CPU's alarm.
There are also freestanding environments, like those used for software in cars, washing machines and such. There the memory access is direct (let's assume). This means nobody checks what the program accesses, because the device runs just one program. It then becomes an internal issue for your code to coordinate its own activities, so that two program fragments do not step on each other's toes. Library routines like malloc and free help you do exactly that. Calling malloc tells the run-time: give me a memory range that I can use to store information, and don't let anyone else use it for a different purpose until I free it. Of course, your code has to cooperate with the memory management routines and behave properly, or the trick won't work.
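A minimal sketch of that contract, assuming the standard C allocation functions (available in C++ through <cstdlib>): request a range, stay inside it, then give it back.

    #include <cstdlib>

    int main() {
        // Ask the run-time for a range big enough for 5 ints; nothing else
        // will be handed this range until we release it.
        int *p = static_cast<int*>(std::malloc(5 * sizeof(int)));
        if (p == nullptr) return 1;     // the request can fail

        for (int i = 0; i < 5; ++i)     // stay within the 5 ints we asked for
            p[i] = i;

        std::free(p);                   // give the range back; never touch p afterwards
        return 0;
    }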
Regarding realloc: the proper way to allocate memory in C++ is to use new, new[], delete and delete[]. That guarantees the initialization code in constructors always runs before the storage is used and the clean-up code in destructors always runs before the storage is released. If you work with fundamental types (char, int, ...) or with POD types (C-like structures) this is not an issue, but you should still try to be consistent. However, on some systems realloc is a faster alternative to allocating a new block and copying the data from the old one. First, realloc can use free space after the current block to let it grow in place, and the old contents are preserved. There is no guarantee that such trailing free space is available, though, and realloc may have to find a spot large enough for the new, bigger block and copy the contents of the old one there. So you cannot expect the location of the data to be preserved: after realloc you have to update all pointers and references to the new location. Still, realloc can perform better than doing malloc and a copy by hand on some OSes. This has to do with the virtual memory management I mentioned before.
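A sketch of the usual pattern (the sizes here are arbitrary): assign the result to a temporary first, so the old block is not lost if the call fails, then treat the returned pointer as the block's new location.

    #include <cstdio>
    #include <cstdlib>

    int main() {
        int *p = static_cast<int*>(std::malloc(5 * sizeof(int)));
        if (p == nullptr) return 1;
        for (int i = 0; i < 5; ++i) p[i] = i;

        // Grow the block. realloc may move it, so keep the old pointer
        // until we know the call succeeded.
        int *q = static_cast<int*>(std::realloc(p, 10 * sizeof(int)));
        if (q == nullptr) { std::free(p); return 1; }   // old block still valid on failure
        p = q;                                          // any other copies of p are now stale

        for (int i = 5; i < 10; ++i) p[i] = i;          // old contents 0..4 were preserved
        for (int i = 0; i < 10; ++i) std::printf("%d ", p[i]);
        std::putchar('\n');
        std::free(p);
        return 0;
    }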
In summary: realloc will provide a block of memory of the size you request. The old data is preserved, but the location is likely to change; realloc returns the new location. You should not use realloc to grow a buffer by small amounts, because it is generally not a cheap operation, but when you do need to grow, prefer realloc over a manual malloc and memcpy.
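One common way to avoid growing by small amounts, shown here only as an illustrative sketch for trivially-copyable elements (the struct and function names are made up): double the capacity whenever the buffer fills up, so realloc is called rarely.

    #include <cstddef>
    #include <cstdlib>

    // Tiny growable buffer of ints; illustrative only.
    struct IntBuf {
        int        *data = nullptr;
        std::size_t size = 0;
        std::size_t cap  = 0;
    };

    bool push(IntBuf &b, int value) {
        if (b.size == b.cap) {
            std::size_t new_cap = b.cap ? b.cap * 2 : 8;   // grow geometrically, not by 1
            // realloc(nullptr, n) acts like malloc, so this also handles the first push
            int *p = static_cast<int*>(std::realloc(b.data, new_cap * sizeof(int)));
            if (p == nullptr) return false;                // old block is still usable
            b.data = p;                                    // the block may have moved
            b.cap  = new_cap;
        }
        b.data[b.size++] = value;
        return true;
    }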
Regards