Smart pointers

Nov 23, 2013 at 11:18am
I don't really see a great reason to learn smart pointers, but I think I should at least know how to use them so that I don't get confused when I read someone else's code.

So I wrote something here:
#include <memory>
#include <iostream>
#include <vector>

using namespace std;

int main()
{
    vector<unique_ptr<int>> arr;

    cout << "before operation" << endl;
    cin.get();

    // one heap allocation per element, each owned by a unique_ptr
    for (int i = 0; i < 1000000; ++i) {
        arr.push_back(unique_ptr<int>(new int(i)));
    }

    cout << "pause one\n";
    cin.get();

    // destroys every unique_ptr, freeing each int;
    // the vector itself may keep its capacity
    arr.clear();

    cout << "pause two\n";
    cin.get();

    return 0;
}


I seem to be using smart pointers the wrong way,
because after "pause two" is printed the program still uses around 4 MB.
It should be using only around 200KB or less...

So which part of the code is wrong?
Nov 23, 2013 at 12:27pm
http://www.cplusplus.com/reference/vector/vector/clear/

Reference wrote:
A reallocation is not guaranteed to happen, and the vector capacity is not guaranteed to change due to calling this function.

So this means that, even though its contents are emptied, the vector may still hold on to the memory.
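
If you want the capacity released as well, you can ask for it explicitly. A minimal sketch, assuming C++11:

#include <memory>
#include <vector>

int main()
{
    std::vector<std::unique_ptr<int>> arr;
    for (int i = 0; i < 1000000; ++i)
        arr.push_back(std::unique_ptr<int>(new int(i)));

    arr.clear();         // destroys the elements; capacity may remain
    arr.shrink_to_fit(); // C++11: non-binding request to drop the capacity

    // pre-C++11 idiom: swap with an empty temporary, which takes the
    // old buffer with it and destroys it immediately
    std::vector<std::unique_ptr<int>>().swap(arr);
}

Note that shrink_to_fit is only a request; the swap idiom is the traditional way to force the buffer to actually be freed.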

About your code: note that std::unique_ptr cannot be copied, only moved; its copy assignment is deleted:

http://www.cplusplus.com/reference/memory/unique_ptr/operator=/

So a container of std::unique_ptr is fine as long as the elements are never copied. Your push_back of a temporary works because temporaries can be moved. If you need copyable, shared ownership, std::shared_ptr is the alternative.
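
For example, here is how a move-only unique_ptr goes into a vector (a minimal sketch, assuming C++11):

#include <memory>
#include <utility>
#include <vector>

int main()
{
    std::vector<std::unique_ptr<int>> v;

    v.push_back(std::unique_ptr<int>(new int(1))); // temporary: moved in, OK

    std::unique_ptr<int> p(new int(2));
    v.push_back(std::move(p)); // a named pointer must be moved explicitly
    // v.push_back(p);         // error: unique_ptr's copy constructor is deleted
}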

I don't really see a great reason to learn smart pointers, but I think I should at least know how to use them so that I don't get confused when I read someone else's code.

They are good to use because you don't have to worry about memory leaks so much.

void func()
{
    int *p = new int[100];

    // code, code, code
    // exception!
    // code that is never reached

    delete[] p; // never reached, memory leak
    // if p was a smart pointer, it would have a destructor
    // which would release the memory even in case of exception
}
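
For comparison, the same function with a smart pointer (a sketch; std::unique_ptr has an array specialization that calls delete[]):

#include <memory>

void func()
{
    std::unique_ptr<int[]> p(new int[100]); // array form: freed with delete[]

    // code, code, code
    // exception!
    // code that is never reached

    // no delete needed: p's destructor runs during stack unwinding
    // and releases the memory even when the exception propagates
}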

Last edited on Nov 23, 2013 at 12:27pm
Nov 23, 2013 at 3:51pm
I have seen code like that, showing that raw pointers are not exception-safe..
It just feels strange to me that that kind of code would happen.

I mean, why not just use a vector instead, if that is the case?
Or perhaps a raw array, if I already know the size at compile time...

I haven't used exceptions yet...
I will probably learn about them for the same reason I'm learning smart pointers..
but I don't know, perhaps they will be very useful and perhaps not.

I just think there has to be some definite situation where I must use smart pointers...

I think you are right about the vector capacity:
the program can't revert to its starting memory usage,
and the same happens with shared_ptr.

My program starts at 196KB of memory.
At peak allocation it takes around 55MB.
After deleting everything it takes about 1.3MB.


About this:
I thought new could only allocate 16B or more,
making every allocation (new float(i)) take 20B:
4 (the pointer) + 16 (the allocated memory on the heap) = 20B,
so I expected 1000000 * 20B = 19.07 MB.
But 55MB is way beyond 19.07; there seems to be a major inefficiency in making so many allocations.


Nov 23, 2013 at 4:48pm
I mean, why not just use a vector instead, if that is the case?
Or perhaps a raw array, if I already know the size at compile time...


You can. Those are very common approaches.

Most of the time, if you can avoid dynamically allocating with new, you should.
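
For example, both of these manage their own memory with no new or delete at all (a minimal sketch):

#include <array>
#include <vector>

int main()
{
    std::vector<int> values(1000000); // one contiguous heap allocation,
                                      // freed automatically by the destructor

    std::array<int, 100> fixed = {};  // size known at compile time,
                                      // lives on the stack, nothing to free
}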

Smart pointers are nice for when dynamic allocation is necessary (i.e. when you need polymorphism). Since every manual allocation with new requires a matching delete, it's much easier, safer, and less error-prone to let the smart pointer take care of the cleanup automatically, rather than doing it manually.
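
For example, polymorphism needs a pointer (or reference) to the base class, and a smart pointer cleans it up for you. A minimal sketch, assuming C++11 (the Shape/Circle names are just for illustration):

#include <iostream>
#include <memory>
#include <vector>

struct Shape {
    virtual ~Shape() {}            // virtual destructor: deleting through
                                   // a base pointer is safe
    virtual void draw() const = 0;
};

struct Circle : Shape {
    void draw() const { std::cout << "circle\n"; }
};

int main()
{
    std::vector<std::unique_ptr<Shape>> shapes;
    shapes.push_back(std::unique_ptr<Shape>(new Circle));

    for (const auto& s : shapes)
        s->draw();
    // no delete anywhere: each unique_ptr frees its Shape when the vector dies
}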

My program starts at 196KB of memory.
At peak allocation it takes around 55MB.
After deleting everything it takes about 1.3MB.



How are you measuring memory usage? If you're using something like Task Manager, that is a very rough estimate of how much memory your program is using. It's closer to the memory the OS has allocated to your process (the OS may decide to give your program more memory than it actually needs, in anticipation of your program requiring more later).

It might also be caused by memory fragmentation.

there seems to be a major inefficiency in making so many allocations


There is. Again... it's best practice to not dynamically allocate unless you have to. This is one of the reasons why.
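
To see the difference, compare a million separate allocations against one contiguous block (a sketch; the exact overhead per new depends on the allocator):

#include <memory>
#include <vector>

int main()
{
    // one heap allocation per int: every block carries allocator bookkeeping
    // and padding, plus the unique_ptr stored inside the vector
    std::vector<std::unique_ptr<int>> slow;
    for (int i = 0; i < 1000000; ++i)
        slow.push_back(std::unique_ptr<int>(new int(i)));

    // a single contiguous allocation: about 4MB of payload, almost no overhead
    std::vector<int> fast(1000000);
    for (int i = 0; i < 1000000; ++i)
        fast[i] = i;
}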
Nov 23, 2013 at 5:27pm
Thanks everyone for your answers...
Topic archived. No new replies allowed.