My project requires a ton of multidimensional vectors. My original solution was to define them all at the start of the program so they'd be available at all times.
Now, I'm at the final stage of the project and I'm looping the program's functionality over a large number of instances, which have nothing to do with each other. They're solved sequentially, so all data from the previous instance can be discarded and the variables reused. To play it safe, I .clear() all lists to prevent any carryover from previous instances. However, I was wondering:
If I have a vector<vector<int>> variable (let's call it "numbers") of dimension L*K, and I simply use numbers.clear(), will it get rid of all variables? Or will the contents of the 1D vectors remain but "get lost" since they don't exist by themselves? Should I loop through numbers and call .clear() on each of its numbers.size() 1D vectors?
If the latter: what would be more efficient, looping through everything to .clear() it, or simply redefining all variables in a "per-instance" scope so the variables are destroyed after each instance finishes?
This is probably a silly question, but I'm confused due to the fact that the 2D vector isn't actually a single variable, but a container that contains a ton of other variables (or am I looking at this all wrong?).
std::vector takes care of all the memory it allocates. That is, if you clear a vector<vector<int>>, all the memory is freed. (Note that if you have a vector<vector<int*>>, you'll have to delete each int* yourself.)
Thanks for clearing (hah) that up for me, hamsterman!
On the article you posted:
I had read it before, but decided to go with it anyway.
Firstly, considering the first few notes of the author: I don't find them confusing and they're very easy to work with [to the extent that I use them]. Utilizing a [w*d] vector instead would be simply impossible for me to debug, considering my indices actually have a meaning (generally an 'id' or 'order' with significance).
I'm not sure what the author meant when he said "If you need to pass this array to a function, none of the above functions will work."; all I know is I've been passing vector<vector<int>>s, and even vectors of those, to functions (by reference, obviously) without problem. Any decent programmer would probably cringe at the sight of this, but it works wonders for me.
As said before, the 1D transformation just isn't an option for me. 99% of my "programming time" is spent debugging to check whether my logic is solid. The MVStudio Watch functionality works perfectly with vectors of vectors [of vectors], providing me with quick access to the stored values so I can easily do manual checks on the calculations. Having to calculate what the location would be in a [w*d] vector would make this much harder, especially since both 'w' and 'd' vary strongly depending on the instance I'm using to test things.
Writing my own class for MD arrays is quite the hassle; quite frankly, there's nothing easier than accessing elements of a vector of vectors. Add to that built-in functions like .size(), copy(), sort() and a constructor that allows for default size and values, and you've got a golden deal for a newbie programmer like myself.
I'm fully aware that my program won't win any prizes for its coding, but considering I've only been doing this for less than a year, in my limited spare time, I'll be very happy if the end result of my work produces the results I need. If I can continue in this field of study, I'll be happy to buy me a book on proper coding conduct and squeeze out that extra performance by avoiding MD vectors. :)
Honestly, if I'd send you my code, I'm certain the use of MD vectors will be one of the last of your worries. :p
The problem with vectors of vectors is the slight memory overhead (for a 10*10 array of ints you'll need sizeof(int)*100 + sizeof(vector)*11 bytes of memory, since there's one outer vector object plus ten inner ones).
Also, every inner vector can have a different size, so there is always the danger that they will.
I do understand your reasoning though.
By the way, stuff in <algorithm> can be used for normal arrays too.