A small array is too big??

Oct 20, 2009 at 6:01pm
Hi everybody,

I apologize if the question is really easy... but I cannot figure out why I get an error message from my operating system (not from the compiler; the program works if the array is, say, [50][50])

double x[1000][1000];

for (int i = 0; i < 100; i++)
{
    for (int j = 0; j < 100; j++)
    {
        x[i][j] = i + j;
    }
}


It is really weird since other programs like Matlab handle big 1000x1000 matrices without difficulties...

I am writing some code where I need to make use of 2-dimensional arrays of that size (i.e. [1000][1000]) but I can't make it work... (though it works perfectly if I use 50x50 arrays)

Where is the problem??

Thanx

giulio
Oct 20, 2009 at 6:07pm
Well, your for loops are wrong since they only loop to 100 instead of 1000. What is the error that you are getting? What compiler? What OS? 1000*1000 = 1,000,000, which is one million doubles. That is pretty big for a stack array, but I don't know how big your stack is.
Oct 20, 2009 at 6:16pm
Yeah, that's roughly an 8 MB array (1,000,000 doubles at 8 bytes each). Way too large for just about any stack.
http://www.cplusplus.com/doc/tutorial/dynamic/
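
For example, something along these lines (just a sketch of the kind of thing the tutorial shows, using one flat heap block indexed as row*1000 + col) keeps the data off the stack:

#include <iostream>

int main()
{
    // 1000 x 1000 doubles is about 8 MB -- too big for the stack,
    // but fine on the heap. Only the pointer lives on the stack.
    double* x = new double[1000 * 1000];

    for (int i = 0; i < 1000; i++)
        for (int j = 0; j < 1000; j++)
            x[i * 1000 + j] = i + j;   // treat the flat block as a 1000x1000 matrix

    std::cout << x[999 * 1000 + 999] << '\n';   // prints 1998

    delete[] x;   // give the memory back when you are done
    return 0;
}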
Oct 20, 2009 at 7:17pm
I am sorry, the loops were meant to go up to 1000, but that's not quite the point, since even the program

double x[1000][1000];

produces the same error....!!

The compiler is Dev-C++ and the error I get is not from the compiler but from Windows itself...
I get the usual message "an error has occurred with Untitled1.exe" and at the end I have the option to report the problem to Microsoft, and I choose "Don't send"...

Let me ask a question then... how come Matlab can easily handle a 1000x1000 matrix whereas C++, which is much more powerful, cannot do such a thing?
Oct 20, 2009 at 7:42pm
Because it allocates it on the heap. Read the link I posted.
Oct 21, 2009 at 4:06am
All of the array's elements, a million doubles, are on the stack. The reason Matlab handles matrices that size is the way it is coded: it allocates them on the heap. A million doubles is way too much for the stack.
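
If you don't want to manage the memory yourself, something like this (a rough sketch using std::vector, whose elements are allocated on the heap) gives you the same x[i][j] syntax:

#include <vector>

int main()
{
    // A vector of vectors: the elements live on the heap, so the size is not a problem.
    std::vector< std::vector<double> > x(1000, std::vector<double>(1000));

    for (int i = 0; i < 1000; i++)
        for (int j = 0; j < 1000; j++)
            x[i][j] = i + j;

    return 0;
}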
Oct 21, 2009 at 4:28am
Why are you using doubles? Try short int, unless your numbers are outside the range -32768 to 32767 (or 0 to 65535 for unsigned short).

Would that cut the memory requirements down by 75% (2 bytes per element instead of 8)?
Oct 21, 2009 at 2:54pm
Ok, I am starting to understand... thank you.

I am using double because in the code I am writing those are decimals... actually they are probabilities, i.e. reals between 0 and 1... do you think there is a better type I can use?

Btw... I think that if I use the vector class in the standard library I can handle much bigger arrays.

In fact the line

double x[1000000]; // one million

causes the program to fail... whereas the line

std::vector<double> x(1000000); // one million

is ok...

Am I right?
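
E.g. this is more or less the little test I ran (just a quick sketch; the commented-out line is the stack version that fails):

#include <iostream>
#include <vector>

int main()
{
    // double x[1000000];             // one million doubles on the stack -> crashes
    std::vector<double> x(1000000);   // one million doubles on the heap -> fine

    x[999999] = 0.5;
    std::cout << x[999999] << '\n';
    return 0;
}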
Oct 21, 2009 at 3:20pm
http://www.cplusplus.com/doc/tutorial/variables/

scroll down to "Fundamental data types"
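
For example, a quick way to see what each type costs on your machine (the exact sizes are implementation-dependent):

#include <iostream>

int main()
{
    // Sizes depend on the platform, but this is a quick way to check yours.
    std::cout << "short : " << sizeof(short)  << " bytes\n";
    std::cout << "int   : " << sizeof(int)    << " bytes\n";
    std::cout << "float : " << sizeof(float)  << " bytes\n";
    std::cout << "double: " << sizeof(double) << " bytes\n";

    // e.g. a 1000x1000 matrix of floats takes half the memory of one of doubles
    std::cout << 1000 * 1000 * sizeof(double) << " bytes for 1000x1000 doubles\n";
    return 0;
}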