"well i am working on v.size() around 10^5"
While a hundred thousand elements may, on the surface, sound like a lot of data to work with, the reality is that in the eyes of a multi-gigahertz CPU it's not much data at all.
That being said, the next question is: what, in general, are you doing with all that data? As my first post said, if you are only accessing it trivially (say, doing a simple sort), then you may see some increase in performance by going to an array; as helios said, on the order of 10% or less, and that's whether reading or writing (likely a combination of both).
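If you want to see the gap for yourself, here's a minimal timing sketch (not a rigorous benchmark; the element count, the sum-loop workload, and the steady_clock timing are choices I made for illustration, and results will vary heavily with compiler and optimization level, since the optimizer may well emit identical code for both loops):

#include <chrono>
#include <cstdio>
#include <vector>

int main()
{
    const std::size_t n = 100000;      // ~10^5 elements, as in the question
    std::vector<int> v(n, 1);
    int* a = new int[n];
    for (std::size_t i = 0; i < n; ++i) a[i] = 1;

    using clock = std::chrono::steady_clock;

    long long sum1 = 0;
    auto t0 = clock::now();
    for (std::size_t i = 0; i < n; ++i) sum1 += v[i];   // vector access
    auto t1 = clock::now();

    long long sum2 = 0;
    auto t2 = clock::now();
    for (std::size_t i = 0; i < n; ++i) sum2 += a[i];   // raw array access
    auto t3 = clock::now();

    using std::chrono::duration_cast;
    using std::chrono::nanoseconds;
    // Printing the sums keeps the loops from being optimized away entirely.
    std::printf("vector: %lld ns (sum %lld)\n",
        (long long)duration_cast<nanoseconds>(t1 - t0).count(), sum1);
    std::printf("array:  %lld ns (sum %lld)\n",
        (long long)duration_cast<nanoseconds>(t3 - t2).count(), sum2);

    delete[] a;
}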
On the other hand, if you're also performing non-trivial calculations on all that data (say, processing analog input, morphing image data, etc.), then those calculations will account for the bulk of the CPU usage. Access time becomes trivial by comparison, which makes the difference between vector and array negligible.
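To illustrate that last point, here's a sketch where each element gets some deliberately expensive per-element math (the sqrt/atan combination is just a stand-in I picked for "real work"). The transcendental calls dominate the loop, so swapping the vector for a raw array here would change the total time by next to nothing:

#include <chrono>
#include <cmath>
#include <cstdio>
#include <vector>

int main()
{
    const std::size_t n = 100000;          // ~10^5 elements again
    std::vector<double> v(n, 2.0);

    double acc = 0.0;
    auto t0 = std::chrono::steady_clock::now();
    for (std::size_t i = 0; i < n; ++i)
        acc += std::sqrt(v[i]) * std::atan(v[i]);  // stand-in for non-trivial per-element work
    auto t1 = std::chrono::steady_clock::now();

    auto ns = std::chrono::duration_cast<std::chrono::nanoseconds>(t1 - t0).count();
    std::printf("%lld ns total, acc = %f\n", (long long)ns, acc);
}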