Pardon me for piling on here, but I'm going to expand on @jonnin's and @Repeater's points about defines.
Once in a while someone picks up a bad idea that just seems to stick in their mind. I offer this in the spirit of counseling someone studying the language, in order to be helpful, but I can't help using a tone that sounds scolding. I ask your pardon for that; I'm an old hand at this, and the technique is rather offensive :)
I note that it was "handed" to you, through your review, in a contest.
It sounds like you picked up a technique from a contest where the original "professional" (and we've all used that term ironically) was probably motivated by typing speed, which echoes Repeater's point: it comes at the expense of readability, and of common sense if not logic and reason.
Let me focus on this:
FOR(i,1,row)
In this use, no one reading the code can tell how the loop's test is implemented, how "i" is incremented, or whether "i" is indeed local to the loop. It also only works for the "long long int" type, which, I admit, is a lot of typing for an integer. It obscures what is being said.
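For reference, with the definition quoted further down, that single line expands to the following; the comparison, the increment, and the type are all hidden behind the macro name:

for (long long int i = 1; i <= row; i++)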
I've been a developer for decades, and I operate my own firm. I've hired hundreds of programmers in my career. In case there is any question about the wisdom of using a define in this way, let me spell it out in stark terms. This is a job ending practice. I might issue one and only one warning, but a persistent reliance upon macros (the defines) is a reason to fire a programmer. I just could not accept the corruption this puts in the code base, for the reasons given by Repeater and jonnin; it ends up wasting everyone else's time - and that's money.
There are better approaches, too. Modern C++ constructs express these ideas far better than this.
Take "lli":
#define lli long long int
The old fashioned, built in C types are a problem. They are not clearly defined; they change size depending on the compiler and the target platform. Even when targeting a 64 bit build, one compiler may make "long" 64 bits while another makes it 32 bits.
It isn't that these are "bad", but they are not explicit. Even the humble "char" is defined only as exactly one byte, and a byte is not guaranteed to be 8 bits. Frankly, in 40+ years I've never seen common use where it isn't, but there are "exotic" platforms where a byte (and thus a "char") is 16 bits. To combat this, there are newer fixed width types which remove the ambiguity.
int64_t, for example, is always a 64 bit signed integer. To know the actual size of the "long long int", I'd have to look up how a particular compiler set for a particular target interprets that, but I can rely on the fact that "int32_t" will be 32 bits, and "int8_t" will be 8 bits.
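These fixed width types live in the <cstdint> header. A quick sketch of the contrast:

#include <cstdint>

long long int a;   // at least 64 bits; the exact width depends on compiler and target
std::int64_t  b;   // exactly 64 bits, signed, wherever it is provided
std::int32_t  c;   // exactly 32 bits
std::int8_t   d;   // exactly 8 bits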
So, even the short acronym of the define, "lli", echoes the ambiguity known in the language.
It could also have been done with a typedef:
typedef long long int lli;
The difference, however, operates on several levels. First, this isn't mere text substitution; "lli" becomes a recognizable type to the compiler. The technique is also in wide use, without defines. In many math oriented systems, some kind of "Real" type is created, like:
typedef double Real;
This is done because the code itself can now be written to use the "Real" type to represent some kind of floating point (or even fixed point) underlying type, and mutate as required, merely by changing the typedef.
While the "define"
could seem to do that, it fails in important ways.
Macros (the defines) do not respect namespaces, but typedefs do.
Thus, if I combine libraries for different math purposes, and if they both define a typedef for "Real", there would be no confusion because the typedefs can be wrapped within namespaces:
namespace A
{
    typedef double Real;
}

namespace B
{
    typedef float Real;
}

// A::Real is a different type than B::Real
Note the comment at the end of that example.
This is how professional developers do this kind of thing. I've used the older style here; the newer "using" alias declarations offer more features, including alias templates - effectively typedefs that take parameters to finalize the actual type expression.
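A quick sketch of the newer spelling, including an alias template (the names are only illustrative):

#include <vector>

using Real = double;   // same effect as the typedef above

template <typename T>
using Grid = std::vector< std::vector<T> >;   // an alias that takes a parameter

Grid<Real> g;   // std::vector< std::vector<double> >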
This is why a macro like "lli" in your example is so egregious to us: the exact same "lli" text could be so much more feature rich, and language compliant, if it were defined within the language. A "define" is, essentially, a text substitution performed before the language ever sees the code. Where a typedef is a C/C++ language construct, "define" actually isn't. "#define" is a brutal, coarse, "rock tied to a stick" kind of hammer.
So, now, back to:
#define FOR(i,k,n) for (lli i=k;i<=n;i++)
Clearly, the "pro" you observed was trying to avoid repeated "<=" and "=" and "++" keystrokes, as well as that repeated "long long int" text. If under a timed contest, this could be a speed asset. Fine. Just, don't learn anything else from that.
Next, however, is the storage approach. This code allocates its room on the stack. The larger that gets, the worse this is. The stack should never be used for such large storage; it is a limited resource, and this is... just lazy.
More to the point, however, is that the code accepts parameters that will likely use a small subset of this (who would sit there and type a million entries to fill up to 1,000 rows with 1,000 columns each?).
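For scale, and assuming sizes like the 1,000 by 1,000 grid just mentioned, a declaration along these lines asks for roughly 8 MB of stack storage (assuming an 8 byte "long long"), which exceeds many default stack limits:

lli arr[1000][1000];   // 1,000,000 entries of 8 bytes each, about 8 MB on the stack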
There are various ways to do this dynamically.
vector< vector< lli > > arr;
There are slight inconveniences, like having to create each row, but it uses only the standard STL.
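A sketch of that approach, sized up front so every row already exists (the sizes here are only placeholders):

#include <cstddef>
#include <vector>

typedef long long int lli;

int main()
{
    std::size_t rows = 1000, cols = 1000;   // whatever the input actually requires
    std::vector< std::vector<lli> > arr(rows, std::vector<lli>(cols, 0));

    arr[2][3] = 42;   // heap storage, sized exactly as requested
}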
However, there really is no need to use 2d (or other multi-d) containers in most cases. Linear algebra uses typical 3x3 and 4x4 matrices, and those have specific math algorithms associated with them, so there's a valid exception. If, however, you're just making a 2d grid, it can be fashioned out of a simple array or vector with a touch of math.
unsigned pos = r * rowsize + c;
This little thing gives a pos such that a single dimension array operates like a 2d array of rows and columns. It is "row major", meaning the storage is organized as a series of rows. It could be reversed:
unsigned pos = c * colsize + r;
This is less common, but organizes storage as a series of columns.
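A minimal sketch of that idea, wrapping the row-major arithmetic in a tiny accessor (the type and names are hypothetical, just to show the shape of it):

#include <cstddef>
#include <vector>

typedef long long int lli;

struct Grid2D
{
    std::size_t rows, cols;
    std::vector<lli> data;

    Grid2D(std::size_t r, std::size_t c) : rows(r), cols(c), data(r * c, 0) {}

    // row major: element (r, c) lives at r * cols + c
    lli & at(std::size_t r, std::size_t c) { return data[r * cols + c]; }
};

One flat allocation, contiguous storage, and the little bit of math preserves the 2d illusion.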
The point here is that there are multiple approaches, many of which are quite liberating.
Boost has multi-dimensional containers, which can have "views" in them (a rectangular sub-region of the original 2d matrix).
You fashion a kind of view by taking in "a, b, c, d" from which you form the sum.
Imagine a container that let you lift a view of this "rectangular sub-region" of the data, then just ask for the sum.
Boost has one. If you used that, it would be much more feature rich.
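If I recall the Boost.MultiArray interface correctly (treat this as a sketch, not gospel), lifting and summing such a view looks roughly like this; a, b, c, d stand in for the bounds you read:

#include <cstddef>
#include <boost/multi_array.hpp>

int main()
{
    typedef boost::multi_array<long long, 2>      Grid;
    typedef boost::multi_array_types::index_range range;

    Grid grid(boost::extents[1000][1000]);

    long long a = 0, b = 10, c = 0, d = 10;   // stand-ins for the actual inputs

    // a view over rows [a, b) and columns [c, d) of the same storage - no copy
    Grid::array_view<2>::type view =
        grid[ boost::indices[range(a, b)][range(c, d)] ];

    long long sum = 0;
    for (std::size_t r = 0; r < view.shape()[0]; ++r)
        for (std::size_t col = 0; col < view.shape()[1]; ++col)
            sum += view[r][col];
}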
There are so many ways to do this that I dare not launch into example (I was about to, but I have to get back to work on my car).