Removing Data Stored on the Heap (Preventing Memory Leaks)

Mar 20, 2011 at 6:08am
Hi all,

For those of you who don't know me, you can check out this link if you're interested: http://www.cplusplus.com/forum/general/38389/.

For those of you who are interested, I'm now doing incremental compilation (it turns out that our textbook actually has a whole five or so pages dedicated to incremental building, so it was really useful — as opposed to our Year 11 textbook, which mentioned it but never explained it).

Currently I'm using MS VC++ 2010, but am thinking of returning to Code::Blocks as VC++ seems to rebuild the entire solution upon each build instead of an incremental one.

===END OF INTRO===

Note that I'm assuming that you've read the above. Sorry.

Like every game, mine has an initialisation subroutine, a game loop and an exiting subroutine. Unlike most that I've seen, however, mine changes the first and last from void subroutines into a Boolean function and an integer function respectively.

The ultimate goal of this is to safeguard the code and prevent someone running an Intel 386 with Windows 3.11 from launching the game. Basically, if the requirements aren't met, the game will never run (and of course, the user will be told why).

Conversely, the exiting function is designed to release ALL data that is still in use. It also unloads files, etc. My main concern is memory leaks from pointers and objects that were never deleted. Basically, what I want to do is find any objects that may still exist and delete them before shutting down.

The silver lining to this is that you don't need to know which objects exist and which don't. Basically, it's a sweeper that finds everything out there and deletes it.

I know classes have their own destructors, which probably take care of them, but raw pointers are still an issue for programmers.

Cheers
Last edited on Mar 20, 2011 at 6:09am
Mar 20, 2011 at 7:18am
Currently I'm using MS VC++ 2010, but am thinking of returning to Code::Blocks as VC++ seems to rebuild the entire solution upon each build instead of an incremental one.



Build with the /Gm option:
Configuration Properties -> C/C++ -> Code Generation -> Enable Minimal Rebuild.
Mar 20, 2011 at 7:47am
On a contemporary OS, the dynamic memory you allocate will be freed automatically when the program exits. The C++ run-time won't do jack, but the OS doesn't even care which part of the memory is static, which is stack, which is heap, etc. All the OS cares about is that your app has claimed so many pages of virtual memory and is now done. It destroys the pages indiscriminately, like judgment day sweeping over the world.

On the other hand, if you still want to call the destructors of all dynamically allocated objects that have leaked, you should know that this is painful. It is what Java does when it launches its garbage collector. Do you really want to write a garbage collector?

First, there is no way (at least no portable way) of walking the allocated blocks in the heap. You can overload the global new, new[], delete, delete[] and their nothrow versions. You would implement them using malloc and maintain some global structure that memorizes the allocated blocks, so that you can reclaim them later.

Problem 1. This does not solve the issue of calling the destructors at all, because malloc/free know nothing about the clean-up business. For that you will additionally need to create generic, type-aware smart pointers, in which case you may not need a garbage collector at all.

Problem 2. If you call the destructors, the order in which you do so may matter. One dynamically allocated object may access another in its destructor — but what if you have already destroyed that one (even if you have not freed its memory, it is still a dead object)? Imagine a circular linked list: which node should you destroy first, second, and so on? Reverse construction order sounds plausible, but it is no panacea. With dynamically allocated objects, ownership and lifetime are independent, and do not nest as they do with scoped objects.

Alternatively, you can focus on preventing the leaks in the first place, using RAII (std::auto_ptr, std::vector, etc.) and reference counting (boost::shared_ptr). Then, when the app has to exit, throw an exception that you catch only in the body of your main function. This triggers the destructors of all automatic objects, and unless you have cyclic references, it also triggers the destruction of the dynamically allocated objects managed through RAII and reference counting.

Regards
Mar 20, 2011 at 8:19am
simeonz, does that mean that performing cleanup code at the end of a program is unnecessary, since the OS will do it anyway? Does that extend to unloading DLLs in memory?
Mar 20, 2011 at 8:56am
Xander314 wrote:
simeonz, does that mean that performing cleanup code at the end of a program is unnecessary, since the OS will do it anyway?
Not necessarily. There are things like database connections and other services that may have to be notified that you release your claims. I was referring to memory. The remainder of my reply was about handling clean-up in the more general sense.

Does that extend to unloading DLLs in memory?
To be honest, I don't know. It probably does. My guess is that the OS would close the open files and unload the loaded DLLs, memory maps and various system objects (handles and such). I am certain about the memory part, and it is very likely that it does the other stuff too. Of course, unloading a DLL from one process does not unload it from memory entirely — other programs may still need it.

The first time I discovered virtual memory management, I felt like Columbus discovering the New World. The OS is not a thin mediator between your program and the system resources; it is very intelligent, very proactive, and sometimes very intrusive. Here is one article that I recently discovered:
http://msdn.microsoft.com/en-us/magazine/cc301727.aspx
Just from skimming it I can see how complicated the DLL machinery is under the hood. I haven't read it in detail, but it still kind of blows me away.

Regards
Mar 20, 2011 at 11:13am
Sorry, when I said "unloading DLLs" I did in fact mean "decrementing the DLL's reference count", or whatever else it might be that happens inside FreeLibrary.

And thanks for the article link - I haven't read all of it yet either, but it's interesting stuff!
Mar 20, 2011 at 11:57am
My guess is that the OS would close the open files, unload the loaded dlls, memory maps and various system objects (handles and such).


You are wrong, at least in a Windows environment. Using functions like CreateFile(), OpenSCManager() and such without closing the handles manually before the program exits produces the "file still in use" error when you try to delete/move the file from another program (that's why programs like Unlocker exist). Do that in a loop and watch the system crash.
Some handles are cleaned up automatically, but not all.
Mar 20, 2011 at 12:56pm
@modoran
I am not sure. This would mean that there are orphaned handles in Win32, and I think that problem would've been solved by now. Specifically, since you mention Unlocker: it shows the process that owns the handle, which means the handle is not orphaned yet. Or am I wrong?

EDIT: I admit that the used handles pile up as if they were leaking. On the other hand, I am not sure this is because the internal structures remain in use. It may just be that the allocation of handles is inefficient and old IDs are not recycled.
Last edited on Mar 20, 2011 at 1:00pm
Mar 20, 2011 at 1:11pm
No. It shows the processes that created the handle. If you call TerminateProcess(), or write your own program that opens a file with CreateFile() but never calls CloseHandle(), the handle will never be freed until the next reboot.

Unlocker implements a kernel-mode driver which can read/write the memory location where the "handle" is stored (which is just a number, I think).
Unlocker also uses a DLL which is injected into every process to keep track of every allocation.
Mar 20, 2011 at 1:23pm
OK, I believe you. As I said, I was only guessing, and you seem to know this stuff. Unlocker has always shown processes that currently exist, so I assumed that was no accident. Sometimes it shows nonsense though — like sndvol32, which arguably does not hold my folders open.

The OS could easily dispose of all resources allocated to a process when that process terminates. There is no logic in leaving them hanging. But apparently, some things remain in the '80s.
Mar 20, 2011 at 1:31pm
Actually, looking back at Unlocker, it has a "Kill Process" button, which means we are talking about some running process to which the handle belongs. Why would you want to kill the process if not to free its handle? Unless it is possible to have an empty process column in Unlocker — is it? I haven't encountered that yet.
Mar 20, 2011 at 7:25pm
I googled and found this link:
http://msdn.microsoft.com/en-us/library/aa364225%28v=vs.85%29.aspx

Here is a quote from it:
When a file is opened by a process using the CreateFile function, a file handle is associated with it until either the process terminates or the handle is closed using the CloseHandle function.

I think it is very probable that all handles and resources are treated in this manner; it sounds very unlikely to be otherwise. I wish I could find an article on the topic, but I am too lazy to fish for it at the moment.

Regards
Topic archived. No new replies allowed.