Striving to write code that is as dynamic and adaptive as possible poses a problem with arrays. Let's say I have an array of ints and want to be able to access the length of the array at any time.
There are 3 ways to handle this potential problem, in my eyes:
Separate variables
int length_of_array; // make sure this gets the right value
int* Array = new int[length_of_array];
// Access provided by length_of_array
Adding it within the array itself
int* Array = new int[length_of_array+1];
Array[0] = length_of_array; // of course not really legit, but this is just to show the point
// Access provided through the first int of the array itself or just by referring to the array as an int rather than an int*.
Comparing total size with individual sizes
int* Array = new int[length_of_array];
// Access provided through calculating sizeof(Array)/sizeof(Array[0]) or similar expressions
All have at least one bad side. The first requires keeping a separate variable in sync with the array; the second only works for int pointers (or similar), and therefore causes some heavy inconsistency (and thus unreadability) when used with other types; and the last requires performing a calculation every time you check the size.
They all have downsides, but which should be used, mostly for performance's sake (taking the storage of the values into account, too)?
Why not just use a std::vector?
vector<int> Array(length_of_array);
// Access provided through Array.size();
Anyway, the 'Separate variables' approach is the only one that makes sense to me here.
I'm not sure what you mean by the 2nd example, and the 3rd example wouldn't work at all, as sizeof(Array) would give you the size of a single pointer, not the size of the entire array.
By the second way, I mean that I add the length as the first element of the array/vector. I HAVE seen the third way implemented properly, I just don't know exactly how it was done. Finally, I tend to stick to arrays whenever possible. Call it personal preference. :P
I mean that I add the length as the first element of the array/vector
That is nearly identical (performance-wise) to approach #1, though it would make your array more awkward to index, so I wouldn't recommend it.
I HAVE seen the third way being implemented properly,
No you haven't. It only works if you have an array name; it does not work with pointers or dynamically allocated arrays.
char foo[100];
cout << sizeof(foo); // this will print '100' as you expect
// but only because 'foo' is an array name
char* bar = foo;
cout << sizeof(bar); // this will print '4' (or really, sizeof(char*))
// because 'bar' is not an array name
bar = new char[100];
cout << sizeof(bar); // same -- will print '4'
// 'bar' is still not an array name
Finally, I tend to stick to arrays whenever possible. Call it personal preferences. :P
I'll call it foolhardy. There are situations where allocating the arrays yourself is the right approach, granted. But the tradeoff of doing it yourself is higher maintenance, a greater risk of bugs/leaks, and having to re-solve problems that have already been solved (i.e., this thread).
I just want to become comfortable with using arrays (or dynamically allocated arrays) whenever possible. The performance win is slight, but is there nonetheless.
Edit:
Thanks for pointing out the right usage; that little example code explains some mistakes I made in the past.
Just be aware that basically what you're saying is you prefer to do something by hand rather than to automate it, in order to gain an insignificant performance advantage. I mean, if you're not letting the vector take care of the bookkeeping, you'll have to do it yourself. If applied to everything, you'll be unrolling loops and copying and pasting code rather than using, say, virtual functions. You know what's been said about premature optimization:
Donald Knuth wrote:
Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil.