Difference in efficiency between pointers and nonpointers

Is there any difference?

I read somewhere that memory allocation was expensive.

example:

string *temps(new string("Hello World"));
cout << *temps << endl;
delete temps;


string temps("Hello World");
cout << temps << endl;


AFAIK there is no difference... Please, only experienced programmers respond with answers.
The stack is last-in-first-out, while the heap allocator has to search for an available block first, so generally speaking the stack should be faster. The stack, though, has only a limited, reserved amount of space.
Of course there's a difference, but it doesn't have anything to do with pointers-vs-nonpointers. It has to do with spurious memory allocation and the propensity for errors to propagate in the code.

Both factors favor the second snippet of code.
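
To illustrate the "propensity for errors" point, here is a minimal sketch of my own (not from the original posts, may_throw is a hypothetical helper): if something throws between new and delete, the raw-pointer version leaks, while the automatic object is still destroyed during stack unwinding.

#include <stdexcept>
#include <string>

void may_throw() { throw std::runtime_error( "oops" ) ; } // hypothetical helper

void leaky()
{
    std::string *temps = new std::string( "Hello World" ) ;
    may_throw() ;      // throws before delete is reached: the string is leaked
    delete temps ;
}

void safe()
{
    std::string temps( "Hello World" ) ;
    may_throw() ;      // temps is still destroyed during stack unwinding
}

int main()
{
    try { safe() ; }  catch( const std::exception& ) {}
    try { leaky() ; } catch( const std::exception& ) {}
}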


I read somewhere that memory allocation was expensive.

It can be.
Functionally, as long as you dereference your pointers and clean up with delete what you have new'd, there is no difference.

However, I can think of no good reason to go to the heap if you do not have to (e.g., you need the object to persist beyond the lifetime automatic storage will give you, or you need a dynamic amount of memory). At best, it simply adds another level of indirection (one more dereference you must make) before you get to the actual data you are trying to read.
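
For instance, here is a minimal sketch (make_greeting is a hypothetical name, not from this thread) of the one legitimate case mentioned above: the object must outlive the scope that creates it, so it is handed back through a smart pointer instead of being an automatic variable.

#include <memory>
#include <string>

// hypothetical factory: the string must outlive the scope that creates it,
// so automatic storage will not do and dynamic allocation is justified
std::unique_ptr<std::string> make_greeting()
{
    return std::unique_ptr<std::string>( new std::string( "Hello World" ) ) ;
}

int main()
{
    auto greeting = make_greeting() ;
    return greeting->size() ;   // one extra dereference compared to a plain std::string
}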
> I read somewhere that memory allocation was expensive

std::string itself allocates memory dynamically.
std::string temps("Hello World"); // one dynamic memory allocation
std::string *temps(new std::string("Hello World")); // two dynamic memory allocations 

So, one would expect the version using new to take roughly twice as long.
It is easy to measure on the platform that is used.

#include <iostream>
#include <iomanip>
#include <memory>
#include <string>
#include <cstdlib>
#include <ctime>

int main()
{
    constexpr std::size_t TESTSZ = 32 * 1000 * 1000 ;
    constexpr std::size_t UBSZ = 21 ;

    std::srand( std::time(0) ) ;
    std::size_t nauto = 0 ;
    std::size_t ndyn = 0 ;
    std::cout << std::fixed << std::setprecision(2) ;
    
    auto begin_auto = clock() ;

    // create, access and destroy 32 million strings (average size: 10 characters)
    {
        for( std::size_t i = 0 ; i < TESTSZ ; ++i )
        {
            std::string str( std::rand() % UBSZ, ' ' ) ;
            nauto += str.size() ;
        }
    }
    auto end_auto = clock() ;

    // create, access and destroy 32 million strings (average size: 10 characters)
    {
        for( std::size_t i = 0 ; i < TESTSZ ; ++i )
        {
            std::unique_ptr<std::string> ptr( new std::string( std::rand() % UBSZ, ' ' ) ) ;
            ndyn += ptr->size() ;
        }
    }
    auto end_dynamic = clock() ;
    
    std::cout << "automatic: " << double( end_auto - begin_auto ) / CLOCKS_PER_SEC << " secs  "
              << "average size: " <<  double(nauto) / TESTSZ << '\n'
              << "  dynamic: " << double( end_dynamic - end_auto ) / CLOCKS_PER_SEC << " secs  " 
              << "average size: " <<  double(ndyn) / TESTSZ << '\n' ;
}
automatic: 1.47 secs  average size: 10.00
  dynamic: 2.87 secs  average size: 10.00

http://coliru.stacked-crooked.com/a/a5a28c68fbd8b22d
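
If one wants to see the allocation count itself rather than the time, here is a minimal sketch of my own (not from the thread) that counts calls to the global operator new; the counts of roughly 1 and 2 match the claim above.

#include <cstdlib>
#include <iostream>
#include <new>
#include <string>

static std::size_t new_calls = 0 ;

// replace the global allocation function just to count how often it is called
void* operator new( std::size_t sz )
{
    ++new_calls ;
    if( void* p = std::malloc(sz) ) return p ;
    throw std::bad_alloc() ;
}

void operator delete( void* p ) noexcept { std::free(p) ; }

int main()
{
    // the literal is made long enough to defeat any small string optimization
    const char* text = "Hello World, made long enough to defeat the small string optimization" ;

    new_calls = 0 ;
    { std::string temps( text ) ; }
    const std::size_t count_automatic = new_calls ;

    new_calls = 0 ;
    {
        std::string* temps = new std::string( text ) ;
        delete temps ;
    }
    const std::size_t count_dynamic = new_calls ;

    std::cout << "automatic: " << count_automatic << " allocation(s)\n"   // typically 1
              << "  dynamic: " << count_dynamic << " allocation(s)\n" ;   // typically 2
}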
So, then, what about a cstring compared to a regular string:

char *ch = new char[6];
//i forget if assignment allocated memory for it or not...
*ch = "Hello\0";
cout<< *ch;
delete[] ch;
Nothing's stopping you from testing it yourself (aside from the errors in that code snippet.)

One would expect them to take about the same time, although an implementation of std::string which uses a small string optimization would edge out the cstring version given the average size of the strings in JLBorges' code.
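
For the curious, here is a minimal sketch of such a test, loosely modelled on JLBorges' benchmark above (the string literal and loop count are arbitrary assumptions of mine):

#include <cstring>
#include <ctime>
#include <iostream>
#include <string>

int main()
{
    constexpr std::size_t TESTSZ = 32 * 1000 * 1000 ;
    std::size_t nc = 0, ns = 0 ;

    auto begin = clock() ;
    for( std::size_t i = 0 ; i < TESTSZ ; ++i ) // dynamically allocated char arrays
    {
        char* p = new char[6] ;
        std::strcpy( p, "Hello" ) ;
        nc += std::strlen(p) ;
        delete[] p ;
    }
    auto mid = clock() ;
    for( std::size_t i = 0 ; i < TESTSZ ; ++i ) // std::string (may use small string optimization)
    {
        std::string s( "Hello" ) ;
        ns += s.size() ;
    }
    auto end = clock() ;

    std::cout << "char array : " << double( mid - begin ) / CLOCKS_PER_SEC << " secs\n"
              << "std::string: " << double( end - mid ) / CLOCKS_PER_SEC << " secs\n" ;
    return nc == ns ? 0 : 1 ; // use the totals so the loops are not optimized away
}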