How to use a non-const int to assign array size?

This is purely a conceptual question. I am aware I can use dynamic containers like std::vector, but I want to know if this is possible. I am compiling as C++14.

If I have some array size:
int array_size = 4; 
std::array <int, array_size> arr; // error, array size is a non-const  


okay, so I make the following change:
const int array_size = 4; 
std::array <int, array_size> arr; // compiles fine  


If I do:
int array_size = 4;
const int j = array_size; 
std::array <int, j> arr; // initializer of j is not a constant expr.  


My question is, how can I take a variable x of type T, extract the value it is holding, and convert it to a const variable cx of type T?

I would appreciate any advice you have for me.
The major problem is that array sizes must be compile-time constants, not runtime variables. If you must have a runtime variable set the size, you should consider std::vector instead of std::array.

Thank you for your response, @jlb. I understand that, but I was wondering if I can take a variable of type int and convert it to const int to initialize my arrays?
No, that conversion is a runtime conversion, not a compile-time conversion.

I recommend you check out std::vector; it doesn't need a compile-time constant to set the size.
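To illustrate the difference, here is a minimal sketch (the variable names are mine, just for illustration): a value genuinely known at compile time can be declared constexpr and used as the array size, while a value only known at run time cannot, and std::vector is the usual tool for that case.

#include <array>
#include <iostream>
#include <vector>

int main()
{
   // Known at compile time: constexpr makes it usable as a template argument.
   constexpr int compile_time_size = 4;
   std::array<int, compile_time_size> arr {};

   // Only known at run time: no cast or copy can turn it into a constant expression.
   int runtime_size = 0;
   std::cin >> runtime_size;
   // std::array<int, runtime_size> bad;  // error: not a constant expression

   std::vector<int> vec(runtime_size);    // a vector can be sized at run time
   std::cout << arr.size() << ' ' << vec.size() << '\n';
}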



Got it, thank you @jlb.
Many compilers do allow this (as a variable-length array) if not put into stricter compile modes. It is frowned upon, though, as it is a language extension that is not to be trusted in serious code.
It is legal in C, so one way to do it is to use a C file in your project for this one thing.
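For what it's worth, here is a minimal sketch of what that extension looks like. GCC and Clang will compile this in C++ mode unless you use stricter flags such as -pedantic-errors; MSVC will not, so it is not portable C++.

#include <iostream>

int main()
{
   int n = 4;
   int arr[n];   // variable-length array: standard in C99, only a compiler extension in C++

   for (int i = 0; i < n; ++i)
      arr[i] = i;

   for (int i = 0; i < n; ++i)
      std::cout << arr[i] << ' ';
   std::cout << '\n';
}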
C++17 added deduction guides for the standard containers, including std::array. With them you don't need to specify the element type or the size of the container, only an initializer list when instantiating.
https://en.cppreference.com/w/cpp/container/array/deduction_guides

C++20 added std::to_array.
https://en.cppreference.com/w/cpp/container/array/to_array

#include <iostream>
#include <array>
#include <vector>

int main()
{
   std::array arr1 { 1, 2, 3, 4, 5 };
   std::cout << "arr1 size: " << arr1.size() << '\n';

   std::vector vec1 { 1, 2, 3, 4, };
   std::cout << "vec1 size: " << vec1.size() << "\n\n";

   std::array arr2 { 'A', 'B', 'C', 'D', 'E', 'F' };
   std::cout << "arr2 size: " << arr2.size() << "\n\n";

   std::array arr3 { "ABCD" };   // deduces std::array<const char*, 1>, not an array of char
   std::cout << "arr3 size: " << arr3.size() << "\n\n";

   double c_arr[] { 1.2, 3.4, 5.6, 7.8 };

   std::array arr4 { std::to_array(c_arr) };   // std::to_array copies the C array into a std::array<double, 4>

   std::cout << "arr4 size: " << arr4.size() << '\n';

   for (const auto& itr : arr4)
   {
      std::cout << itr << ' ';
   }
   std::cout << "\n\n";

   std::array arr5 { std::to_array("Foo") };  // includes the null terminator '\0'
   std::cout << "arr5 size: " << arr5.size() << '\n';

   for (const auto& itr : arr5)
   {
      std::cout << itr << ' ';
   }
   std::cout << '\n';
}
arr1 size: 5
vec1 size: 4

arr2 size: 6

arr3 size: 1

arr4 size: 4
1.2 3.4 5.6 7.8

arr5 size: 4
F o o

A std::array's size still has to be known at compile time.

Having a runtime-sized regular array is possible using a pointer and manual new[]/delete[] memory management.
#include <iostream>

int main()
{
   int size {};

   std::cout << "What size do you want? ";
   std::cin >> size;

   int* arr { new int[size] };

   for (int i { }; i < size; ++i)
   {
      arr[i] = i;
   }

   for (int i { }; i < size; ++i)
   {
      std::cout << arr[i] << ' ';
   }
   std::cout << '\n';

   delete[] arr;
}

A std::vector is still a better choice for a generic, runtime-sizable array-like container. The container manages its memory without you having to worry about it.
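For comparison, here is the same program sketched with std::vector; there is no delete[] to forget because the vector frees its storage when it goes out of scope.

#include <iostream>
#include <vector>

int main()
{
   int size {};

   std::cout << "What size do you want? ";
   std::cin >> size;

   std::vector<int> arr(size);   // sized at run time, elements value-initialized to 0

   for (int i { }; i < size; ++i)
   {
      arr[i] = i;
   }

   for (int val : arr)
   {
      std::cout << val << ' ';
   }
   std::cout << '\n';
}  // arr's destructor releases the memory here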
Thank you for your response @GeorgeP! The reason I want to stick to statically sized arrays is because they seem to be more efficient when it comes to memory allocation. Vectors seem to push efficiency down (increases runtime) slightly... however slight it might be, the little inefficiencies add up if the software is running a large number of iterations.
An aggravation of OOP, and that includes the STL containers, is that it is not always cheap to allocate and deallocate small temporary objects. You can mitigate this with some memory management of your own, such as making the temporary a private class member that always exists and can be used by any of the class's functions, rather than being created and destroyed nonstop; tricks like that keep the item around, trading wasted memory for speed. However, be sure to profile and confirm that the creation/destruction cycle is actually a problem. Other options: if a tight loop in a function does the creating, factor it out of the loop; if a loop spams a function, pass the item in rather than creating it inside; and so on. Just moving the create/destroy pair around a little in key places, often just a small number of places, can have big rewards.
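Here is a minimal sketch of that hoist-out-of-the-loop idea, using std::vector as the temporary (the sizes and names are made up purely for illustration):

#include <vector>

int main()
{
   constexpr int iterations = 100000;

   // Creating the buffer inside the loop allocates and frees storage on every pass.
   for (int i = 0; i < iterations; ++i)
   {
      std::vector<int> scratch(1024);     // allocate ...
      scratch[0] = i;                     // ... use ...
   }                                      // ... and destroy, every iteration

   // Hoisting the buffer out of the loop allocates once and reuses the storage.
   std::vector<int> scratch(1024);        // one allocation
   for (int i = 0; i < iterations; ++i)
   {
      scratch[0] = i;                     // reuse the same storage every pass
   }
}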
https://www.learncpp.com/cpp-programming/6-ways-to-write-better-code/
“Premature optimization is the root of all evil.” - Donald Knuth

One of the biggest problems new or overzealous programmers run into is trying to write code that is as fast as possible at the expense of things like code readability. This is almost always a bad idea. It IS a good idea to pick an algorithm that’s right for the problem you’re trying to solve -- for example, if you’re doing lots of element insertions and deletions, a linked list is probably going to be a better choice than an array. But that doesn’t mean you have to design an algorithm that squeezes out every last bit of performance out of the linked list. Efficiency generally comes at the expense of legibility, and honestly, with a few exceptions, legibility is more important, because at some point, you’re going to have to fix a bug, or expand your code, and code that’s tricked-out to be as efficient as possible isn’t going to be conducive to either of those things.

Once your code is written, you can always profile it to find out where the ACTUAL bottlenecks are, rather than prematurely act on where you perceive the bottlenecks may be (emphasis added). With properly implemented code that utilizes concepts such as encapsulation, swapping out one algorithm for a better one when needed is often no problem.
AND
Don’t be too clever by half.

This is sort of along the same lines as [previous]. It’s almost always a better idea to write code that is clean, straightforward, and legible than code that is as efficient as possible.

The efficiency you think you are creating by doing things manually is more than likely a mirage. Current C++ compilers are damned good at creating code that works well for all but the most peculiar edge cases. Lots of really smart programmers have spent years creating/updating the C++ standard specifications to not be a sluggard.
The reason I want to stick to statically sized arrays is because they seem to be more efficient when it comes to memory allocation. Vectors seem to push efficiency down (increases runtime) slightly... however slight it might be, the little inefficiencies add up if the software is running a large number of iterations.

It is possible that std::vector is the root cause of a performance problem, but experience shows this to be quite unlikely. Instead, it is more likely that inefficiencies surrounding std::vector are caused by less-than-ideal usage patterns.

Some less-than-ideal usage patterns include the following (a small sketch of the reserve point comes after the list):
- allocating many vectors in a loop when one or several could be reused instead;
- incurring unneeded copies of elements;
- forgetting to reserve memory space for elements ahead-of-time;
- forgetting to mark the element type's move constructor noexcept;
- not making use of allocators.
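
As a rough illustration of the reserve point (the element type and count are arbitrary): a vector that grows one push_back at a time may reallocate and move its elements many times, while reserving up front does a single allocation.

#include <cstddef>
#include <string>
#include <vector>

int main()
{
   constexpr std::size_t count = 10000;

   // Without reserve(): the vector may reallocate and move its elements
   // several times as it grows.
   std::vector<std::string> a;
   for (std::size_t i = 0; i < count; ++i)
   {
      a.push_back(std::to_string(i));
   }

   // With reserve(): one up-front allocation, no reallocations during the loop.
   std::vector<std::string> b;
   b.reserve(count);
   for (std::size_t i = 0; i < count; ++i)
   {
      b.push_back(std::to_string(i));
   }
}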

Don't follow anyone's advice blindly. Just do what's measurably best for your product.

Oh right, and you have to turn on the compiler optimizer. Otherwise the compiler may insert extra code to help catch bugs, which can slow things down by an order of magnitude.
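For example, with GCC or Clang that typically means building with something like the following (exact flags vary by toolchain; MSVC's equivalent is /O2):

g++ -std=c++17 -O2 main.cpp -o main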