If dynamic memory is deallocated when program terminates, what is the point of destructors?

Destructors, in my understanding, are used to deallocate any dynamic memory once an object is destroyed. They are used to prevent a memory leak from affecting a program before it terminates. When are objects destroyed? Is it when the compiler knows they will never be called or accessed again? I am just confused by how it all works. If the compiler doesn't know the future, how could it know when to deallocate the memory reserved for an object? What is the point of a destructor if it would never be invoked? There is clearly something I am misunderstanding.
The larger point, that memory should be deallocated even though it is freed at the end of execution, is not just about a few one-time allocations. Many ambitious applications allocate and deallocate memory continually and repeatedly at run time. If those allocations are not released, the accumulation of allocated memory could quickly consume all available RAM. It is therefore important for such applications to carefully deallocate memory that is no longer required.

Yet it goes beyond merely exhausting available RAM. On most modern systems, applications must share the resources of a single computer. As such, it is a burden on the user's machine to consume RAM with abandon. In a poorly behaved application, of which many examples can be found, the careless consumption of resources until the application terminates wastes the capacity of the whole computer. Who wants to use software that crowds out every other application one might be running at the same time?

As to when, the answer is about scope. Objects are destroyed under a few circumstances. In C++, an object may be created on the stack, which means it exists and consumes resources only while the function in which it is declared is executing. When that function ends, all objects allocated on its stack frame are automatically destroyed. This requires no special knowledge from the compiler, per se; it is simply part of the machine's design. The other situations where objects are destroyed involve an explicit call for destruction: the destruction of an object that contains them, the management of a container that destroys what it contains, or an explicit call to delete. There is no need to see into the future; these things happen at well-defined moments, at least in the context of C++ applications. Other languages, like Java and C#, use other means which do seem a bit more "magical", but are completely explained in their designs.
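A minimal sketch of both cases (the Tracer type here is made up just for illustration):

#include <iostream>

struct Tracer {
    ~Tracer() { std::cout << "destroyed\n"; }
};

void demo()
{
    Tracer onStack;               // destroyed automatically when demo() returns
    Tracer* onHeap = new Tracer;  // destroyed only when we say so
    delete onHeap;                // explicit call: destructor runs here
}   // onStack's destructor runs here

int main()
{
    demo();  // prints "destroyed" twice
}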

Also, it is important to realize that destruction does not always mean deallocation. There are a few complex situations you'll eventually study, starting with placement construction and explicit calls to destructors, but they are a special case, one made common by smart pointers in C++. The point here is that deallocation is separate from destruction. Destruction is not actually about deallocation, but about releasing the resources an object owns (which could be RAM, or file handles, or GUI objects). Whatever resources the object manages, and thus owns, are why destructors are called. When a destructor is called, deallocation will usually follow except in special circumstances, so the two are closely related. The destructor must complete before deallocation begins, however.
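To make the separation concrete, here is a minimal sketch using placement new, which pulls the four steps apart (the Widget type is hypothetical):

#include <cstdlib>   // std::malloc, std::free
#include <new>       // placement new

struct Widget {
    // imagine a resource acquired in the constructor...
    ~Widget() { /* ...and released here */ }
};

int main()
{
    void* raw = std::malloc(sizeof(Widget));  // 1. allocation only
    Widget* w = new (raw) Widget;             // 2. construction, no allocation
    w->~Widget();                             // 3. destruction, no deallocation
    std::free(raw);                           // 4. deallocation only
}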

In C++, optimization may create situations where destructor calls are elided, in limited cases where resources require no effort to release, but otherwise, in C++ language theory, there is no situation in which a destructor is never invoked. If an object is created, it is destroyed. That is a solid, reliable guarantee of the language. It is maintained even in the presence of exceptions. One might assume that some kinds of singletons never actually invoke a destructor at program termination, but that would actually be abnormal.
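For instance, destructors run even while an exception unwinds the stack; a small sketch (the Guard type is made up for illustration):

#include <iostream>
#include <stdexcept>

struct Guard {
    ~Guard() { std::cout << "Guard destroyed\n"; }
};

void risky()
{
    Guard g;
    throw std::runtime_error("boom");  // g's destructor still runs during unwinding
}

int main()
{
    try { risky(); }
    catch (const std::exception& e) { std::cout << e.what() << '\n'; }
    // prints "Guard destroyed", then "boom"
}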

You may benefit from searching for the term RAII. The name is unfortunate, as its inventor admits, but the concept is simple. The constructor/destructor relationship is a powerful way to enforce the release of related resources. If an object represents a file, it is reasonable to assume the destructor will close any file handle it has open. If the object represents memory, as many containers do, including smart pointers, the destructor is always called to release the memory being controlled. It is the very heart of C++. Everyone writing in C++ should thoroughly understand this relationship.
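A minimal sketch of the idea, assuming C stdio (the File class name is made up for illustration):

#include <cstdio>

class File {
    std::FILE* handle;
public:
    explicit File(const char* path) : handle(std::fopen(path, "r")) {}
    ~File() { if (handle) std::fclose(handle); }  // resource released with the object
    File(const File&) = delete;             // forbid copies: no double-close
    File& operator=(const File&) = delete;
    std::FILE* get() const { return handle; }
};

int main()
{
    File f("data.txt");  // opened here (if the file exists)...
    // ... read via f.get() ...
}                        // ...and closed here, even if an exception were thrown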

Perhaps a history lesson, too.
Older OSes from the '80s did NOT free memory when a program closed. Little memory leaks over time from many programs would cause system instability / memory loss and require frequent reboots. Rebooting big machines is a big deal; it's all hands on deck, done at screwy off-hours (early morning on a weekend) when few customers will be impacted (not for memory leaks, but for OS patches or other reboot-required things).

I don't know for sure, but some embedded and small systems may still not reclaim memory. Not everything, even today, even HAS an OS to babysit the programs.

On top of this, imagine something like this web site, which is up and running nearly 24/7. What if the core code behind this site leaked a few KB per hour and the programmer just lazily let the OS fix that when the program ends ... oh wait, the program never ends, it's always running! Many services and such run nonstop!

The bottom line is that if you just let the OS clean up, you are letting something nice it does for you (it's actually self-defense against bad programmers) control your approach to coding. You assume there is an OS, that it has this feature, and that your program will terminate before its leaks become a problem. That may be a fair assumption, but it's not good practice. There is always one doofus in every crowd who leaves your program up for the weekend and does not want to come back to 1000000000000000000 pop-up windows saying the system is running low on memory. Want to see it?

#include <chrono>
#include <thread>

int main()
{
    int* ip;
    while (true)
    {
        ip = new int[1000];  // allocated every pass, never deleted: a leak
        std::this_thread::sleep_for(std::chrono::seconds(5));
    }
}

fire that up and let it run on your system all week.
Thank you both for your thorough responses!

My follow-up question would be: aside from the management of resources by the OS, would a destructor ever be useful in the context of a single program? It seems like every object's destructor wouldn't be called until the program ended, making destructors useless for managing resources while the program is running.
There are tons of other uses for destructors apart from dynamic memory, which is largely not used directly anymore (though your vectors and strings and such use it inside, and their dtors are cleaning up for you too).

Not every object has a program-long lifetime; most do not in larger programs. Almost every time you have a function with a local string or vector, you have a new/delete pair behind the scenes (small string optimization etc. can kick in and avoid it).
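For example (a sketch; the exact allocation behavior is up to the library implementation):

#include <vector>

void process()
{
    std::vector<int> data(100000);  // the vector allocates its storage on the heap
    // ... use data ...
}   // data's destructor runs here and frees that heap block

Every call to process() quietly performs that allocate/free pair for you.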

But consider this: your program detects an unfixable problem: the network is down and all communications (required to operate) are lost. It wants to crash out gracefully and close. Destructors can close files, making sure they are saved and in a good state, and release other resources (network connections, threads; there are many other non-memory resources that you can acquire and need to return to the OS). Dunno if current Windows is the same, but Windows used to have finite pools of some of these resources. Many, many libraries have explicit cleanup methods that the user must call as well. Dtors can tap those.

If you do not need them, leave them off. Most classes lack a user-written dtor, but some need it. When you need one, you have the tools to write it. I think this is what you are sort of discovering but not quite seeing the whole picture... I can't recall the last time I needed to write a dtor. So you are not 'wrong', you are just 'not there yet' on this stuff.
I guess I was thinking about how if you initialized a bunch of separate instances of a class in main() and the entire program ran a long time, that would take up a lot of RAM. Can destructors help with that? Objects created within functions will be destroyed when the function returns, so destructors would obviously be handy in those situations, since they free up the memory taken by the local variables. But what if there were a ton of objects created in main() that would just be sitting there waiting days, weeks, etc. for them to be deallocated? Or is that just a situation where creating that many instances of a single class is a bad idea?
But what if there were a ton of objects created in main() that would just be sitting there waiting days, weeks, etc. for them to be de-allocated?

To make the claim that a program doesn't leak resources, those resources must be returned on time. A program that settles for "resources are returned eventually" doesn't qualify.

- A "resource" is anything that exists in limited supply. Free store space (RAM) is one (typically plentiful) resource.
- "On time" means "shortly after the resources are no longer needed".

Typically the scope of objects should be limited as much as possible. This helps ensure that objects that aren't required don't hang around for too long.
Presumably if an object exists, it's because it's needed. It's the programmer's job to decide when an object should be created and when it should be destroyed. There are cases where allocating all the program's resources up front is better, and cases where allocating as necessary is better. Again, it's the programmer's job to decide when and why to choose one or the other. Choosing the wrong strategy will sometimes not make much of a difference, and other times may seriously hurt performance (either of the application or of the system as a whole).
It isn't just about freeing memory. It is about restoring the execution environment to its proper shape. This can be as simple as returning a referenced variable to its initial value.
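A minimal sketch of that idea, restoring a variable when a scope ends (the ValueRestorer name is made up for illustration):

#include <iostream>

struct ValueRestorer {
    int& ref;
    int saved;
    explicit ValueRestorer(int& r) : ref(r), saved(r) {}
    ~ValueRestorer() { ref = saved; }  // put the original value back
};

int verbosity = 1;

void noisySection()
{
    ValueRestorer guard(verbosity);
    verbosity = 3;  // temporarily raise verbosity
    // ... do chatty work ...
}   // guard's destructor restores verbosity to 1

int main()
{
    noisySection();
    std::cout << verbosity << '\n';  // prints 1
}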
you can destroy objects in main.

#include <iostream>

struct object {  // stub type so the example runs
    void use() {}
    ~object() { std::cout << "destroyed\n"; }
};

int main()
{
    {
        object foo;
        foo.use();
    } // destroyed here!!
}


You can also use a pointer to allocate and destroy in main, or (better) a pointer-using container like a vector; push_back and pop_back will create/destroy a copy of the thing on demand (possibly with some optimization discretion under the hood).
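Something like this (a sketch; the string contents are arbitrary):

#include <string>
#include <vector>

int main()
{
    std::vector<std::string> names;
    names.push_back("example");  // a string is constructed inside the vector
    names.pop_back();            // and its destructor runs right here, on demand
}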


Allocation and deallocation are expensive, so there is a juggling act between the principle given by Mbozzi, destroying things in a timely manner after use, and thinking ahead. That is, this is kind of derpy:

struct thing {  // stub type so the example compiles
    explicit thing(int) { /* imagine expensive setup here */ }
    void use() {}
};

int stuff = 42;  // stand-in constructor argument

int main()
{
    while (true)
    {
        // I need a thing!
        thing x(stuff);  // constructed every pass...
        x.use();
        // oh, looks like I am done with that sucker
    } // ...and destroyed every pass
}

You are just wasting time creating it and destroying it in a loop. Create ONE of them, reuse it in the loop, and destroy it AFTER the loop. (Which shows that you were not truly done with it, but it's easy to make this kind of mistake in a large block of code. I have seen this kind of thing and fixed it quite a few times over my career.)
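Hoisted out of the loop, reusing the thing stub from the snippet above, it would look like this:

int main()
{
    thing x(stuff);  // created ONE time, up front
    while (true)
    {
        x.use();     // reused every pass
    }
}                    // destroyed once, when main ends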

Thank you for all of your responses!

I think I am still confused about something. Say an object exists in main() and just sits there for a long time, having memory reserved for it while the program goes on. Is there ever a time an object that exists in main() will be destroyed before the end of program execution? That is why I was asking about the compiler "telling the future". Or would it be better to initialize as few instances of a class in main() as possible for this reason?
@vaderboi,

Your view of an object instantiated in the main function, and its operational lifespan, is not a typical use case or design. It could apply to some situations, and indeed there are some, but it isn't an important point to contemplate.

For one thing, an object that, as you put it, "exists in main", which is to say it is on the stack, fashioned in the main function and thus scoped to the main function, is not dynamically allocated anyway. It is on the stack, not allocated out of the heap. By its nature, it should be small (the stack is a limited resource).

The more likely pattern is that an object which manages the application is fashioned on the stack in main, or is a singleton in global storage. That object, however, is not particularly large. It isn't a significant memory burden. It may, in turn, track and manage a huge amount of storage, all dynamically allocated by what the program does.
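A minimal sketch of that pattern (the DataManager name and sizes are made up for illustration):

#include <vector>

struct DataManager {
    std::vector<double> samples;  // the heap storage it manages, possibly huge
    void load() { samples.resize(10'000'000); }
};

int main()
{
    DataManager mgr;  // a tiny object on main's stack
    mgr.load();       // the big allocation lives on the heap
}                     // mgr's destructor releases all of it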

Still, it is not typical that a program starts, allocates huge amounts of memory that hangs around, then exits.

What is typical is that large allocations are made to manage large volumes of information, and are then destroyed to make room for others. This is typical of, say, a spreadsheet program like Excel. The spreadsheet(s) may take a lot of room. When one closes the current spreadsheet and opens a new one, the old spreadsheet's RAM is released.

It is clear you're early in your studies. This is the kind of knowledge you'll gain through practice. At this point you've asked the same basic question repeatedly, and you've been given similar answers more than once. It isn't forming a strong image in your mind yet, though, and that will only happen as you try it.

You will then see the patterns emerge, and your practice will make much clearer to you what discussions like this simply can't. Nothing works quite as well as doing it.

@Niccolo - Actually, you answered my question quite well by essentially saying "object initialization in main() is not worth considering". I thought it was something that one should consider. But thank you for clarifying. I find asking questions in these forums helps me out immensely. Muddling through on my own, textbooks, or documentation can sometimes be more frustrating than helpful. It just depends on where I'm at.

Thank you everyone. I appreciate your thorough responses.