Hello,
I'm trying to set up automated parameter testing and want to keep it modular. To that end, I've modeled each parameter as an object:
template<class T>
struct RANGE {
const T _start;
const T _end;
const T _inc;
RANGE (T _s, T _e, T _i) : _start(_s), _end(_e), _inc(_i) {}
COUNT getSize(void) const { return (COUNT)((_end-_start)/_inc) +1; }
};
So it tests the range [_start, _end] in steps of _inc.
Now, to allocate memory for the data points, I need to know the total number of entries I'll be getting. That's easy: multiply the sizes of the ranges.
RANGE<float> fr(-0.5, 1.51, 0.1);
RANGE<float> gr(-1.0, 5.01, 0.2);
const COUNT entriesPerFile = fr.getSize() * gr.getSize();
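(With these values, fr.getSize() works out to (1.51 - (-0.5))/0.1 + 1 = 21 and gr.getSize() to (5.01 - (-1.0))/0.2 + 1 = 31, so entriesPerFile should be 21 * 31 = 651, give or take float rounding in the division.)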
(Note that the actual parameter values are numeric constants; they appear on one of the first lines of the main class. There's no reason the compiler should have any doubt about the value getSize() returns.)
Yet, when I do this:
ENTRY entries [files*entriesPerFile];
I get "expression must have a constant value" complaints.
Any way to get around this?
(I'm not looking for "use a vector/dynamic memory"; I'm mostly curious about the nature of constness here.)