I'm implementing a container with special access to its elements. Under the hood I store the elements in a std::array<T, N>. Since the project I'm working on requires efficiency and high performance, I was wondering whether to switch from std::array<T, N> to a plain old C-style array, considering that accessing an element of a std::array<T, N> costs as much as a function call: a call to std::array<T, N>::operator[](size_type).
I generally dislike using plain C-style arrays, because Stroustrup repeatedly advises against them (in both Programming: Principles and Practice Using C++ and TC++PL): they don't know their own size, and they decay into a pointer to their first element at the slightest provocation. But here the underlying array is wrapped in a class, with a member variable that keeps track of its size, and access to elements goes through a member function that range-checks the index.
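Roughly, the wrapper looks like this (a simplified sketch; the name checked_array and the exact interface are illustrative, not my real code):

```cpp
#include <array>
#include <cstddef>
#include <stdexcept>

template <typename T, std::size_t N>
class checked_array {
public:
    // Access goes through a member function that range-checks the index.
    T& operator[](std::size_t i) {
        if (i >= size_)
            throw std::out_of_range("checked_array: index out of range");
        return data_[i];
    }

    std::size_t size() const { return size_; }

private:
    std::array<T, N> data_{};
    std::size_t size_ = N;  // member variable that keeps track of the size
};
```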
Will I gain a performance boost if I switch to a plain C array?
std::array has no member variable to track its size*. The size is a template parameter, fixed at compile time. It wouldn't be surprising if the generated object code were identical to that of a C-style array once the compiler inlines operator[].
*What it does have are the member typedefs in the classic STL style (value_type, iterator, and so on), which make the class work automatically with the STL algorithms and enable some snazzy static-analysis techniques. Take that, evil C-style arrays!
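A quick sketch to convince yourself (the sizeof equality holds on mainstream implementations, though the standard doesn't strictly mandate it):

```cpp
#include <algorithm>
#include <array>

int main() {
    // No hidden size member: on mainstream implementations the object is
    // exactly as big as the raw array it wraps.
    static_assert(sizeof(std::array<int, 10>) == sizeof(int[10]),
                  "std::array carries no per-object overhead");

    // The member typedefs (value_type, iterator, ...) let it plug straight
    // into STL algorithms, which a decayed pointer cannot do by itself.
    std::array<int, 10> a{};
    std::fill(a.begin(), a.end(), 42);
}
```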
**Also, std::array does no range checking; are you thinking of std::vector::at(size_type)?
> std::array has no member variable to track its size*
There is a size() member function (and also max_size(), which for std::array returns the same value) that returns the N you gave when you defined the array.
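For example (a minimal sketch):

```cpp
#include <array>
#include <cassert>

int main() {
    std::array<int, 8> a{};
    assert(a.size() == 8);      // the N given at the definition
    assert(a.max_size() == 8);  // for std::array, always equal to size()
}
```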
> **Also, std::array does no range checking
While this is true of operator[], much like std::vector, there is an at() member function that does perform range checking, throwing std::out_of_range when the index is out of bounds.
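For instance (a minimal sketch):

```cpp
#include <array>
#include <iostream>
#include <stdexcept>

int main() {
    std::array<int, 4> a{1, 2, 3, 4};

    // a[10] would compile, but operator[] does no check: undefined behaviour.
    try {
        a.at(10);  // at() checks the index first
    } catch (const std::out_of_range& e) {
        std::cout << "caught: " << e.what() << '\n';
    }
}
```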