One example is a container of pointers to polymorphic objects, where the number of objects isn't known in advance. Polymorphism only works through pointers or references, and you can't store references in an array. If, at some point during the program, you create a polymorphic object within a limited scope, you might add it to your array with something like this:
DerivedClass obj;      // automatic storage: lives only in this scope
BaseClass* p = &obj;   // base-class pointer to the derived object
arr[i] = p;            // store the pointer in the container
But once obj goes out of scope, arr[i] is a dangling pointer: it points at memory that no longer contains a valid object. It's really hard to avoid this problem without using dynamic lifetimes.
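As a minimal sketch of what the dynamic-lifetime version looks like (arr, BaseClass, and DerivedClass are placeholder names for illustration, with arr assumed to be a fixed-size array of base pointers):

struct BaseClass { virtual ~BaseClass() = default; };
struct DerivedClass : BaseClass { int payload = 0; };

BaseClass* arr[10];   // hypothetical container of base-class pointers

void store(int i)
{
    // Heap allocation: the object's lifetime is not tied to this scope.
    arr[i] = new DerivedClass;
}   // arr[i] still points at a live object after we return

int main()
{
    store(0);
    // ... use arr[0] ...
    delete arr[0];   // manual cleanup; see the note on smart pointers below
}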
Another example is a vector of some really big object type. Even if the stack had unlimited space, whenever the vector has to reallocate after a push_back, every single object must be copied (or moved) to the new storage. With a vector of pointers, a reallocation is relatively cheap, since only the pointers are copied, but now you're dealing with pointers again and run into the same lifetime issues as in the last example.
Note: Using raw "new" and "delete" is usually discouraged in modern C++. The standard library provides smart pointers (std::unique_ptr and std::shared_ptr) that delete the object automatically when its owner goes away.
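For instance, here's a sketch that addresses both examples above (BaseClass/DerivedClass are the same placeholder names): a std::vector of std::unique_ptr owns each object, so nothing dangles, and a reallocation only moves the pointers, not the objects.

#include <memory>
#include <vector>

struct BaseClass { virtual ~BaseClass() = default; };
struct DerivedClass : BaseClass { /* big payload */ };

int main()
{
    std::vector<std::unique_ptr<BaseClass>> objs;

    // The object is heap-allocated, so it outlives any local scope,
    // and the vector owns it: the delete happens automatically.
    objs.push_back(std::make_unique<DerivedClass>());

    // If push_back triggers a reallocation, only the pointers move;
    // the objects themselves stay put.
}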
keskiverto wrote:
"Size of stack allocations is static constant that must be known at compile-time"
Can you explain this more? I'm not sure I understand. Certainly there's a static limit to how much stack can be allocated (otherwise you'd get a stack overflow), but the amount actually allocated must still be partially dynamic, or you couldn't do things like conditional recursion:
#include <iostream>

void recurse()
{
    int a;
    std::cin >> a;
    if (a == 42)
        recurse();  // recursion depth depends on input at run time
}
Edit: Of course, the size needed for each function call's stack frame is still known at compile time, regardless of the number of invocations. I suppose that's what was meant. Guess I answered my own question.
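A small sketch of that distinction: each stack object's size must be a compile-time constant in standard C++, so a runtime-sized buffer has to live on the heap (the names here are made up for illustration).

#include <vector>

void demo(int n)
{
    int fixed[42];          // OK: 42 is a compile-time constant
    fixed[0] = n;

    // int flexible[n];     // not standard C++: n is only known at run time

    std::vector<int> v(n);  // fine: the n elements live on the heap
}

int main() { demo(5); }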