Avoiding stack overflow from deeply-nested destructors

I have a relatively large DAG (several million nodes). Since each node has (shared) ownership of its children, the stack overflows when large sub-graphs are destroyed, thanks to recursive destruction through std::shared_ptr.

I know I can walk the graph depth-first and destroy nodes iteratively, but I would prefer to have this done without having to manage memory manually -- which essentially defeats the point of shared_ptr.

Are there any ways I can limit stack consumption without having to do my memory management by hand?
Perhaps you could increase the stack size:
https://linux.die.net/man/2/setrlimit (RLIMIT_STACK)
Herb Sutter's keynote at CppCon 2016 was about exactly this: automatic cleanup of lists, trees, and arbitrary graphs.

but I would prefer to have this done without having to manage memory manually -- which essentially defeats the point of shared_ptr.

The purpose of shared_ptr is to model shared ownership, not memory management. If it's really a DAG, what are the multiple equivalent owners of any given node?

I think in my case I'll be able to get away with increasing the stack size, but that's a relatively brittle solution -- even a change of compiler flags might alter the stack consumed per destructor frame by some constant factor, not to mention changes to the Node destructor (which is defaulted at this point). My stack is already quite large (8 MiB maximum).

Cubbi wrote:
The purpose of shared_ptr is to model shared ownership, not memory management. If it's really DAG, what are the multiple equivalent owners of any given node?

Smart pointers exist to model ownership, but they manage object lifetime and (by default) memory, too.
The parent or parents of any given node share equivalent ownership of that node.

And thanks for the tip -- I'll have to watch it ( https://www.youtube.com/watch?v=JfmTagWcqoE ).