I do not know how to solve this issue. When I use an array of pointers to objects and try to release all the memory used, I do not succeed. Here is a simple example with objects of class Integer defined as:
class Integer
{
public:
    int value;
    Integer(int a) { value = a; }
};
If, instead, I delete each object right after creating it, all the memory is released.
I am not sure whether this is a problem with the minimum allocation size used by malloc, but I think the memory should be completely deallocated when delete is called.
I have tried with Red Hat gcc 4.1.2 and 4.4.1. The same problem happened when I used an STL vector of pointers to Integer (or any other class) instead of an array, deleting the objects one by one before deleting the whole vector.
I checked the memory consumed by the process using ps (inserting a time delay) before creating the objects and after deleting them and the whole array.
It's most probably held in the heap. It's unlikely that it'll go back to the OS immediately unless it's a large block that isn't fragmented.
The heap takes big chunks from the OS, on Windows the minimum block size you can ask for is 4K. The heap suballocates smaller blocks from the larger chunk and can only return the larger chunk back to the OS when all the smaller suballocated blocks are back. It may choose to hang onto blocks as it sees fit.
Yes, I think it is held in the heap. But isn't this a bug? Once I have deleted all the objects, shouldn't all the smaller blocks be returned, and therefore the larger chunk too? This is a big problem when many little objects have to be allocated. In a program where the code above runs at the beginning, it seems the memory taken by these many objects is not released until the whole process finishes.
About the bug: my feeling is that a block is returned to the OS only if it was completely used and is then completely deallocated. But if a block was only partially used, it will not be returned to the OS until the process finishes. I say this because the code above uses many objects, and it seems that a different block is used for each object. If instead of many "little objects" I create 2 or 3 very large objects (each one containing a large array), there is no such problem.
Any suggestion? Is there a way to force a chunk to be truly released?
It could be that the OS gives you memory and only takes it back once your process dies, so that if you are repeatedly new-ing and delete-ing things you won't be wasting its time.
Perhaps it is a problem related to what firedraco says: once the program asks the OS for a memory block, the OS won't get it back if only a sub-block of it was used (allocated and deallocated). But the big problem is that each object seems to require a new block (or more), and that block apparently cannot be reused to allocate another sub-block, so for a program creating many little objects the wasted memory is very significant.
Is there a way to force the program to allocate more than one object in the same block, or to release the whole block even if the object used only a part of it?
Or is there another way to proceed (perhaps I completely misunderstand the way the heap is used in C++)?
Perhaps it is a problem related to what firedraco says: once the program asks the OS for a memory block, the OS won't get it back if only a sub-block of it was used (allocated and deallocated).
This isn't true of the Microsoft C heap. I do realise this is a Unix forum, but I have spent a lot of time on exactly this issue on Windows and know this to be fact in that environment.
But the big problem is that it seems that each object requires a new block (or more), and the block cannot be reused to allocate another sub-block in it.
I don't believe this to be the case.
Is there a way to force the program to allocate more than one object in the same block, or to release the whole block even if the object used only a part of it?
You can use your own heap if you're unhappy with the standard one.
perhaps I completely misunderstand the way the heap is used in C++
I don't think that's the case, you seem to understand it well.
new and malloc (and any other memory allocation under Linux) use sbrk(2) to get a "huge" chunk of memory; this is expensive since it is a system call. The allocator then partitions the chunk and gives you as much as you requested. It does this until the chunk is used up and then requests a new one.
Memory in Linux is generally only released back to the OS when the application quits. If the system runs out of memory, an application is killed to free its memory. (I'm not kidding about this: the OOM killer really kills an app to get more memory.) There is no "unsbrk" system call.
To check for leaks use valgrind(1).
To your case:
In the first case you have 50000 instances at once -> ~50000 * 8 KB = 400000 KB.
In the second case you only have one instance at a time -> 8 KB.
Ok, so I am convinced now. The memory is not released until the application quits, but a block should be available for reuse by the same process once the objects stored in it have been released. I have just made some tests and this seems to be the case.