I don't really see a great reason to learn smart pointers, but I think I should know how to use them so that at least when I read someone else's code I don't get confused.
I seem to be using smart pointers the wrong way,
because after pause two is printed the program still uses around 4 MB.
It should be using only around 200 KB or less...
So perhaps std::shared_ptr is what you should use instead, just to be safe.
I don't really see a great reason to learn smart pointers, but I think I should know how to use them so that at least when I read someone else's code I don't get confused.
They are good to use because you don't have to worry about memory leaks so much.
void func()
{
    int* p = new int[100];
    // code, code, code
    // exception!
    // code that is never reached
    delete[] p; // never reached, memory leak
    // if p was a smart pointer, it would have a destructor
    // which would release the memory even in case of an exception
}
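For comparison, here is a minimal sketch of the smart-pointer version the comments above describe, using std::unique_ptr (a std::vector<int> would work just as well for this case):

#include <memory>

void func()
{
    std::unique_ptr<int[]> p(new int[100]); // unique_ptr owns the array

    // code, code, code

    // exception! stack unwinding destroys p, and its destructor calls delete[]

    // no manual delete[] needed, so no leak even if this point is never reached
}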
I have seen code like that, saying raw pointers are not exception safe..
It just feels strange to me that that kind of code would happen.
I mean,
why not just use a vector instead if that is the case?
or perhaps a raw array if I already know the size at compile time...
I haven't used exceptions yet...
I will probably learn about them for the same reason I'm learning smart pointers..
but I don't know, perhaps they will be very useful and perhaps not.
I just think there has to be a definite situation where I must use smart pointers...
I think you are right about the vector size,
though it can't revert to the starting memory usage,
and neither can shared_ptr.
My program starts at 196 KB of memory.
At peak allocation it takes around 55 MB.
After deleting everything it takes about 1.3 MB.
About this:
I thought that new could only allocate 16 B or more,
making every allocation ( new float(i) ) take 20 B:
4 ( the pointer ) + 16 ( the allocated memory on the heap ) = 20 B
so I expected 1000000 * 20 B ≈ 19.07 MB,
but well, 55 MB is way beyond 19.07; there seems to be a major inefficiency in doing so many allocations
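(For reference, a minimal sketch of the kind of test being described here; the actual test program isn't shown in the thread, so the vector of raw pointers is an assumption:)

#include <vector>

int main()
{
    std::vector<float*> ptrs;
    ptrs.reserve(1000000);

    // a million tiny heap allocations, each holding one 4-byte float;
    // every allocation typically also pays allocator bookkeeping and alignment
    // overhead, and the container stores a pointer per element on top of that
    for (int i = 0; i < 1000000; ++i)
        ptrs.push_back(new float(static_cast<float>(i)));

    // ... measure memory here ...

    for (float* p : ptrs)
        delete p;

    return 0;
}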
I mean, why not just use a vector instead if that is the case?
or perhaps a raw array if I already know the size at compile time...
You can. Those are very common approaches.
Most of the time, if you can avoid dynamically allocating with new, you should.
Smart pointers are nice for when dynamic allocation is necessary (i.e., when you need polymorphism). Since any manual allocation with new requires a delete, it's much easier, much safer, and much less error-prone to let the smart pointer automatically take care of the cleanup for you, rather than doing it manually.
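For instance, a minimal sketch of that polymorphism case (the Shape/Circle hierarchy is just a made-up illustration):

#include <iostream>
#include <memory>
#include <vector>

struct Shape {
    virtual ~Shape() = default;
    virtual double area() const = 0;
};

struct Circle : Shape {
    double r;
    explicit Circle(double r) : r(r) {}
    double area() const override { return 3.14159265 * r * r; }
};

int main()
{
    // storing derived objects through a base pointer requires dynamic allocation;
    // unique_ptr deletes each Shape automatically when the vector goes out of scope
    std::vector<std::unique_ptr<Shape>> shapes;
    shapes.push_back(std::make_unique<Circle>(2.0));

    for (const auto& s : shapes)
        std::cout << s->area() << '\n';
}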
My program starts at 196 KB of memory.
At peak allocation it takes around 55 MB.
After deleting everything it takes about 1.3 MB.
How are you measuring memory usage? If you're using something like Task Manager, that is a very rough estimate of how much memory your program is using. It's closer to the memory allocated to your process (i.e., the OS may decide to give your program more memory than it actually needs, in anticipation of your program requiring more memory later).
It might also be caused by memory fragmentation.
there seems to be a major inefficiency in doing so many allocations
There is. Again... it's best practice to not dynamically allocate unless you have to. This is one of the reasons why.
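For comparison, a minimal sketch of the same data kept in a single std::vector<float> instead of a million separate allocations: one contiguous block, no per-allocation overhead, and no manual delete.

#include <vector>

int main()
{
    std::vector<float> values;
    values.reserve(1000000); // one allocation of roughly 1000000 * 4 B ≈ 3.8 MB

    for (int i = 0; i < 1000000; ++i)
        values.push_back(static_cast<float>(i));

    // memory is released automatically when values goes out of scope
    return 0;
}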