1) When you don't know how much memory you will need at compile time. For example, allocating an array whose length depends on something that happens at run time (see the first sketch below).
2) When you need to control the lifetime of the thing being allocated, rather than just allowing it to be destroyed when it drops out of scope.
If, instead, you dynamically allocate the object, then it persists until you call delete. That means you can carry on using that object even outside the scope in which it was created, as sketched below.
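To make reason 1 concrete, a minimal sketch (reading the length from std::cin is just a stand-in for any run-time source of the size):

```cpp
#include <cstddef>
#include <iostream>
#include <vector>

int main() {
    std::size_t n = 0;
    std::cin >> n;   // length only known at run time

    // A stack array would need a compile-time constant size, so the
    // storage has to come from the heap. std::vector allocates it
    // dynamically and frees it automatically at end of scope.
    std::vector<int> values(n, 0);

    std::cout << "allocated " << values.size() << " ints\n";
}
```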
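And for reason 2, a sketch of the new/delete lifetime described above (Session and openSession are made-up names for illustration):

```cpp
#include <iostream>

struct Session {
    int id;
};

// The Session created here outlives openSession's scope: it sits on
// the heap until someone explicitly deletes it.
Session* openSession(int id) {
    return new Session{id};
}

int main() {
    Session* s = openSession(42);
    std::cout << "session " << s->id << " is still alive here\n";
    delete s;   // the lifetime ends when we say so, not at end of scope
}
```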
Whether you know it at compile time or not, very large things usually need to go on the heap. E.g. if you are making a 1000x1000 matrix of std::complex, even if it would fit on the stack, it's probably better if it doesn't go there. The stack is (artificially, and by the OS) a limited resource (per program, so it does not hurt other software running at the same time if you fill YOUR stack).
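To put a number on that: 1000x1000 elements of std::complex<double> is roughly 16 MB on typical platforms, comfortably over common 1-8 MB default stack limits, so a sketch like this keeps the storage on the heap:

```cpp
#include <complex>
#include <cstddef>
#include <vector>

int main() {
    constexpr std::size_t dim = 1000;

    // ~16 MB of element data (dim * dim * sizeof(std::complex<double>)).
    // Declaring `std::complex<double> m[dim][dim];` here would likely
    // blow the stack, so the buffer lives on the heap instead.
    std::vector<std::complex<double>> matrix(dim * dim);

    matrix[0 * dim + 1] = {1.0, -1.0};   // element (0,1), row-major indexing
}
```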
So I would boil the answer down to size of the thing and lifespan of the thing as the two biggest criteria. The size part can be broken down into {too big} and {size unknown}.
something like:
is this huge? if yes, heap. (what counts as huge varies by system, era, and use case, but more than a few MB isn't a bad starting place)
is the size unknown? if yes, heap.
is the lifespan longer than the scope where it was created? if yes, heap.
else stack is ok.
there may be a few more criteria in specialized code, but the above may help decide.
-- it may not matter: in most code you should be using containers (and smart pointers) that handle this for you, not a lot of DIY memory management.
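For example, both heap cases from the checklist fall out of standard library types with no explicit new/delete anywhere (Thing is a stand-in type for illustration):

```cpp
#include <cstddef>
#include <memory>
#include <vector>

struct Thing { int value = 0; };   // stand-in type for illustration

int main() {
    std::size_t n = 500000;                  // pretend this arrived at run time

    std::vector<double> samples(n);          // heap storage, freed automatically
    auto thing = std::make_unique<Thing>();  // heap object, deleted automatically

    // No DIY memory management: the container and the smart pointer
    // release everything when they go out of scope.
}
```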