I have noticed that heap corruption is one of the hardest things to fix. A common mistake for me is deleting heap memory that has already been deleted, like:
int * myPtr = new int[100];
//code
delete [] myPtr;
//more stuff, forget I already deleted
delete [] myPtr;
//But it is still pointing somewhere on the heap!
I guess I have a couple questions. Would it be a bad idea to just set every pointer to NULL after I delete the memory it points to?
int * myPtr = new int[100];
//code
delete [] myPtr;
myPtr = NULL;
//more stuff, forget I already deleted
delete [] myPtr;
//Now it's not pointing anywhere, and deleting a null pointer is completely legal.
Is this a bad idea? Also, is it bad to always use an assert() to make sure I'm in bounds when accessing the elements of an array?
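Something like this is what I have in mind with the assert, just a rough sketch where the size is passed alongside the pointer:

#include <cassert>
#include <cstddef>

// Sketch of the assert-based bounds check I mean: the size travels
// with the pointer and is checked before every element access.
void setElement(int* arr, std::size_t size, std::size_t index, int value)
{
    assert(index < size && "array index out of bounds");
    arr[index] = value;
}

int main()
{
    int* myPtr = new int[100];
    setElement(myPtr, 100, 42, 7);     // in bounds, fine
    //setElement(myPtr, 100, 100, 7);  // would trip the assert in a debug build
    delete [] myPtr;
    myPtr = NULL;
    return 0;
}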
An even better idea is to encapsulate memory allocation in classes, so that allocation, release, and in some cases the range of access can be controlled.
The current standard provides the half-baked auto_ptr template for wrapping allocations, and boost provides a variety of more suitable wrappers, including shared_ptr for reference-counted resources.
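A minimal sketch of that idea (IntBuffer is just a made-up name for illustration, the real thing is what the standard and boost wrappers already give you):

#include <cassert>
#include <cstddef>

// Rough sketch of an allocation wrapper: the destructor guarantees the
// release happens exactly once, and operator[] controls the range of access.
class IntBuffer
{
public:
    explicit IntBuffer(std::size_t size) : data_(new int[size]), size_(size) {}
    ~IntBuffer() { delete [] data_; }

    int& operator[](std::size_t i)
    {
        assert(i < size_ && "index out of bounds");
        return data_[i];
    }

private:
    IntBuffer(const IntBuffer&);            // non-copyable, so no double delete
    IntBuffer& operator=(const IntBuffer&);

    int* data_;
    std::size_t size_;
};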
Use a managed pointer class instead of raw pointers: std::auto_ptr<>, boost::scoped_ptr<>, boost::shared_ptr<>,
boost::weak_ptr<>, boost::intrusive_ptr<>, or std::unique_ptr<> (C++0x). I work on a multi-million-line project
with tons of memory allocation, but in the code I write I very rarely need to actually delete anything; all
my pointers are managed.
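To make that concrete, here is a small sketch of what managed pointers look like in practice (Widget is just a placeholder type, and it assumes boost is available):

#include <boost/scoped_ptr.hpp>
#include <boost/shared_ptr.hpp>

struct Widget { int value; };

// No explicit delete anywhere: each wrapper's destructor releases the
// Widget when the owner (or the last owner) goes out of scope.
void example()
{
    boost::scoped_ptr<Widget> local(new Widget());   // sole owner, non-copyable
    boost::shared_ptr<Widget> shared(new Widget());  // reference counted
    boost::shared_ptr<Widget> alias = shared;        // count goes to 2
}   // alias and shared are destroyed, the count hits 0, the Widget is deleted once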
But if you do need to use raw pointers, then setting them to 0 after deleting is a good practice, except in destructors
(because the object instance that contains the pointers is going away anyway). The standard allows deleting a NULL
pointer (it does nothing), and any good OS with memory protection will cause your program to crash on the
first dereference (read or write) of a NULL pointer.
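As a rough illustration of both points (Holder is a made-up example class, and copying is left out of the sketch):

class Holder
{
public:
    Holder() : buffer_(new int[100]) {}

    void reset()
    {
        delete [] buffer_;
        buffer_ = 0;       // nulled, so a later delete of buffer_ is harmless
    }

    ~Holder()
    {
        delete [] buffer_; // no need to null it here -- the object is going away
    }

private:
    int* buffer_;
};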