In the following code I get an exception that disappears when I comment out the line:
std::cout << "Element at m[" << i << "][" << j << "][" << k << "] = " << m[i][j][k] << endl;
from this code:
std::cout << "begin allocation:" << std::endl;
int m[4000][7000][20] = { {{0}} };
// output each element's value
for (int i = 0; i < 4000; ++i)
{
for (int j = 0; j < 7000; ++j)
{
for (int k = 0; k < 20; ++k)
{
std::cout << "Element at m[" << i << "][" << j << "][" << k << "] = " << m[i][j][k] << endl;
}
}
}
std::cout << "end allocation:" << std::endl;
I just increased the dimension values and now, at the time of posting, I am getting buffer overrun errors:
buffer 'm' of size 0 bytes will be overrun; -2055527296 bytes will be written starting at offset 560000
I want to understand why the exception happens in the first place and also why I am getting this buffer overrun. Would it be better to manage this with a vector?
m has over half a billion elements and a total size well over 2 gigabytes.
It's unlikely to fit on the stack.
And it's insane to print out all the elements.
If you really, really want to have 2,240,000,000 bytes (over 2GB) of data (assuming 32 bit int) then this will not fit on the stack. You definitely need to use dynamic memory from the heap.
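For example, a std::vector keeps the storage on the heap, so the limit is available RAM rather than the stack. A minimal sketch only, with flat indexing instead of m[i][j][k] (this can still throw std::bad_alloc if the system cannot supply roughly 2.1 GiB):

#include <cstddef>
#include <iostream>
#include <vector>

int main()
{
    const std::size_t NX = 4000, NY = 7000, NZ = 20;

    // One contiguous heap allocation, zero-initialized.
    // Index manually as m[(i*NY + j)*NZ + k].
    std::vector<int> m(NX * NY * NZ, 0);

    std::cout << "Element at m[1][2][3] = " << m[(1 * NY + 2) * NZ + 3] << '\n';
}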
But why do you need to store this many elements? What are you trying to do? Does each element have a value, or is this some sparse array?
When you comment out the cout line, the compiler can see that you're not doing anything with m - and so doesn't create it! With the cout, the array is created as it's used and hence the error.
//std::cout << "Element at m[" << i << "][" << j << "][" << k << "] = " << m[i][j][k] << std::endl;
As you stated, seeplus, uncommenting that line results in the exception being thrown. But I still don't understand why.
Dizzy Don, the print option for the 2 GB was only to see whether smoke came out of my machine.
I was able to reduce the memory to 700 x 400 x 2 = 560,000 elements, down from 560,000,000 (above). Assuming 32-bit ints, that is 2,240,000 bytes, a factor of 1000 smaller than the first estimate.
Is that now heap memory or stack? (I don't mean this in a snarky way; which is better, really?)
I am creating a 3D lookup table in which a two-part index accesses two values unique to that index. I am experimenting with this to see how fast I can make the look-up for such a large array in real time.
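One way to structure that kind of table, as a rough sketch only (the Entry struct, the names, and the dimensions here are placeholders, not the actual design): a flat, heap-backed array of two-value entries indexed by the pair (i, j), which gives O(1) look-up.

#include <cstddef>
#include <iostream>
#include <vector>

struct Entry { int a, b; };   // the two values stored per (i, j) index

int main()
{
    const std::size_t NI = 700, NJ = 400;            // placeholder dimensions
    std::vector<Entry> table(NI * NJ, Entry{0, 0});  // contiguous heap storage

    // Row-major index computed from the two-part key (i, j).
    std::size_t i = 10, j = 20;
    Entry e = table[i * NJ + j];
    std::cout << e.a << ' ' << e.b << '\n';
}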
You could mark it static, and then it will be in yet another area of storage.
But you have yet to supply an adequate reason for an array of this size. What exactly is the look-up table (which you have spent several different threads trying to construct) supposed to be looking up?
If the cout statement is removed, all you have are 3 loops with the values not used within the loops. ie the code does nothing. Most compilers in release mode with optimisations turned on are smart enough to recognise this and not generate any code for the loops. The compiler can also see that m isn't then used - so it doesn't try to allocate space for it.
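A small illustration of that point, using the reduced 400 x 700 x 2 size so it runs safely: once the loop result is actually used, the compiler can no longer discard the array as dead code (although a very aggressive optimizer might still fold the zero sum to a constant).

#include <iostream>

int main()
{
    static int m[400][700][2] = {};   // static: lives in .bss, not on the stack

    // Summing the elements makes m "used", so the loops and the array
    // cannot simply be optimised away as they can when nothing reads m.
    long long sum = 0;
    for (int i = 0; i < 400; ++i)
        for (int j = 0; j < 700; ++j)
            for (int k = 0; k < 2; ++k)
                sum += m[i][j][k];

    std::cout << "sum = " << sum << '\n';   // prints 0
}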
Per the first code posted at the top, the uncommented line does not cause an exception if the memory load is reduced from [4000][7000][20] to [400][70][2], so it is a memory issue. How do I know what dimensions, and what memory load, won't cause overflow? (See the size calculation after the code below.)
Lastchance's code below is able to handle a higher memory load, for example [700][400][20], without any exception. Is this due to the static/const use? How does this code eke out more memory?
const int NX = 400, NY = 700, NZ = 2;
static int m[NX][NY][NZ] = {};   // Note static
for ( int i = 0; i < NX; i++ )
{
    for ( int j = 0; j < NY; j++ )
    {
        for ( int k = 0; k < NZ; k++ )
        {
            cout << "Element at m[" << i << "][" << j << "][" << k << "] = " << m[i][j][k] << '\n';
        }
    }
}
cout << "Done!!!\n";
I was intrigued to find out just how large static memory could be on my system. I tried searching but to no avail. This is the output of ulimit -a on my Linux system:
real-time non-blocking time (microseconds, -R) unlimited
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 62428
max locked memory (kbytes, -l) 8192
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 62428
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
Or does the linker decide how much is needed and take it from the available RAM?
@TheIdeasMan,
I'm afraid that I don't know. I know that static memory is in two separate data segments, one for initialised and one for uninitialised static/global variables. I presume it is set by your RAM, what you can actually address, and what your operating system is prepared to give you.
One of the difficulties in trying to find out is that a lot of web links are confusing stack and static memory.
The absolute address addressing mode can only be used with static variables, because those are the only kinds of variables whose location is known by the compiler at compile time. When the program (executable or library) is loaded into memory, static variables are stored in the data segment of the program's address space (if initialized), or the BSS segment (if uninitialized), and are stored in corresponding sections of object files prior to loading.
Right, so it talks about the data segment and BSS segment, just as you mentioned :+) On my system the data segment reports as unlimited (output of the ulimit -a command above), so presumably that means it takes what it wants at compile time, and the rest is heap memory. I guess it will be quicker at runtime because it was allocated at compile time, as opposed to the bookkeeping required to allocate at runtime.
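A small way to see the two segments, assuming a Linux toolchain where the binutils size command is available (the file name segments.cpp is just an example):

// segments.cpp -- build and inspect with: g++ segments.cpp -o segments && size segments
#include <iostream>

int zeroed[1000000];            // zero-initialized: typically placed in .bss
int filled[1000000] = {42};     // non-zero initializer: typically placed in .data

int main()
{
    // Touch both arrays so they are genuinely part of the program.
    std::cout << zeroed[0] + filled[0] << '\n';
}

The .bss array adds essentially nothing to the executable's size and is just zero-filled when the program is loaded, whereas the .data array's contents are stored in the binary itself.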
If I remove the static from lastchance's code below I get a buffer overflow error, which disappears after I put the static back in. Am I right to conclude that this is because static allocates memory differently? Is static going to allocate on the stack?
static int m[NX][NY][NZ] = {};
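If you would rather not rely on static storage at all, the same table can live on the heap. A sketch with lastchance's dimensions, using nested vectors so the m[i][j][k] syntax is kept (a single flat vector, as shown earlier, is usually faster because the storage is contiguous):

#include <iostream>
#include <vector>

int main()
{
    const int NX = 400, NY = 700, NZ = 2;

    // All storage is heap-allocated and zero-initialized; nothing large
    // lives on the stack or in static storage.
    std::vector<std::vector<std::vector<int>>> m(
        NX, std::vector<std::vector<int>>(NY, std::vector<int>(NZ, 0)));

    std::cout << "Element at m[1][2][1] = " << m[1][2][1] << '\n';
}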
seeplus
With MS VS, the default stack size is 1MB.
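With 4-byte ints, the reduced array needs 400 x 700 x 2 x 4 = 2,240,000 bytes, more than double that 1 MB default, which is consistent with the overflow disappearing once the array is made static and moved out of the stack.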
I've got some reading to do. In the meantime, can you please tell me: is the default stack size for the program only, or does it also include memory size?