This is a memory management question. Which of the following is more efficient from a running-time point of view?
Suppose you have an array that will never hold more than 1000 integers (i.e. max size = 1000). Certainly it may hold only a few of them at one time and the full size at another, but you can always figure out exactly when this happens, and you always know a priori what the size of the array will be. You are going to use a dynamic array. In this situation, which is more efficient?
1) Allocate a dynamic array at the max size (i.e. arr = new int[1000]) and then work with this array the whole time (don't worry about whether the content of the array is lost and/or overwritten).
2) Say that in the first use of the array its size will be 10, so you allocate a dynamic array for 10 integers (arr = new int[10]). After you have used it, you deallocate it (delete [] arr). Now you know that the next use of arr will need 500 integers, so you repeat the steps: arr = new int[500], and after it has been used, delete [] arr. In the next step you know it will have the full size (1000 ints), so you repeat the process again, and so on.
I am not sure, but I feel 1) is more efficient. Could someone comment on this?
Well, from a CPU-utilization/runtime standpoint, one allocation is certainly more efficient than multiple allocations/deallocations. From a memory standpoint, strictly speaking, the second option is better, although 1000 integers is less than 4 KB with 32-bit integers and less than 8 KB with 64-bit, both of which are negligible in today's world.
"Fast" is different from "efficient". In programming, something may be efficient in terms of time, RAM, or executable size. Speed only refers to the first form of efficiency.
That's not a dynamic array. That's a dynamically allocated array. It's not the same.
Option 1 is obviously faster. It couldn't be slower; that wouldn't make sense.
Putting the array on the stack is even faster, and statically allocating the array is faster still, I think.
Thanks for the insights. The above is just an example; I am actually working with large arrays. So you say that pushing onto the stack is faster, but since my arrays are large I want to put them on the heap. In any case, I see that option 1 is still faster even on the heap. Another question regarding memory management: which of the following would be faster and more efficient when working with the vector class?
a) Declare a vector ("vector<int> vec"), reserve some memory using vec.reserve(size), where size is a large number, and then use push_back to add elements to the vector.
or
b) Declare a vector using "vector<int> vec(size)", where size is a large number, and then assign the final values through the [] operator, just as we would with push_back in a) (i.e., from vec[0] to vec[size-1]).
I think option (a) is faster because reserve only allocates raw storage without constructing the integers, whereas (b) also zero-initializes every element. Regardless, I would use (a), as helios said, for other reasons.
I would use iterators for two reasons:
1) Using operator[] has the drawback that it allows you to access beyond the end of the vector,
which is much harder to do if you use iterators;
2) Iterators allow for more generic programming.
As an example of #2, consider this code:
vector<int> v( 10, 4 );
for( vector<int>::size_type i = 0; i < v.size(); ++i )
cout << v[ i ] << ' ';
Simple enough, but now suppose I want to change my data structure
from vector to list. list does not have operator[], which means I
have to rewrite my code: