matrix allocation never throws bad_alloc!!

If I try to allocate more memory than I have (RAM + swap) for a vector, a bad_alloc exception is thrown.
However, if I do the same for a two-dimensional array, no exception is thrown and my Linux machine becomes very slow (top shows that not all of the swap is freed).
Any idea why?
I attach the code.
#include <iostream>
#include <new>      // std::bad_alloc
using namespace std;

char ** allocate_matrix(long int n, long int m) {
    char ** ptr = 0;
    long int i = 0;
    try { ptr = new char * [n]; }
    catch (const bad_alloc & ba) {
        cerr << "bad_alloc caught in allocate_matrix: " << ba.what() << endl;
        throw 0;
    }

    try { for (i = 0; i < n; i++) ptr[i] = new char[m]; }
    catch (const bad_alloc & ba) {
        cerr << "bad_alloc caught in allocate_matrix: " << ba.what() << endl;
        // free only the rows that were successfully allocated
        for (long int j = i; j > 0; j--) delete [] ptr[j-1];
        delete [] ptr;
        throw 0;
    }
    return ptr;   // was missing: without it the caller receives garbage
}

char * allocate_matrix(long int n) {
    char * ptr = 0;
    try { ptr = new char[n]; }
    catch (const bad_alloc & ba) {
        cerr << "bad_alloc caught in allocate_matrix: " << ba.what() << endl;
        throw 0;
    }
    return ptr;   // was missing
}



int main() {

    char ** ptr = 0;
    // char * ptr = 0;
    long int n;
    long int m;
    cout << "input line number n: ";
    cin >> n;
    cout << endl;
    cout << "input column number m: ";
    cin >> m;
    /*
    try { ptr = allocate_matrix(n); }
    catch (int & a) {
        cout << "a: " << a << endl;
    }
    delete [] ptr;
    */
    try { ptr = allocate_matrix(n, m); }
    catch (int & a) {
        cout << "a: " << a << endl;
    }
    if (ptr) {   // skip cleanup if the allocation failed
        for (long int i = n; i > 0; i--) delete [] ptr[i-1];
        delete [] ptr;
    }

    return 0;
}
I believe that when you allocate memory on Linux, it simply gives you a bunch of "virtual memory" until you actually start writing to it. std::vector, however, constructs each element, so it actually touches the memory.
std::vector needs to be used nested in order to produce a 2-dimensional array (matrix). I've tried that and it gives the same problem.
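To make the nested-vector point concrete, here is a minimal sketch (the function name make_matrix is my own, not from the thread): because std::vector value-initializes every element, the whole matrix is written to at construction time, so every page has to be backed by real memory up front.

```cpp
#include <cstddef>
#include <vector>

// A 2-D matrix as nested vectors. Unlike a bare `new char[m]`,
// the vector constructor value-initializes every element, so each
// page is written to immediately and must be backed by the kernel.
std::vector<std::vector<char>> make_matrix(std::size_t n, std::size_t m) {
    return std::vector<std::vector<char>>(n, std::vector<char>(m, 0));
}
```

A failure here surfaces as std::bad_alloc thrown from the vector constructor, and any partially built rows are cleaned up automatically, with no manual delete[] bookkeeping.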
I have been playing with this the whole day. Apparently it has something to do with the number of lines. If you allocate char ** s with one line only, it works perfectly: as soon as you try to allocate more than the available memory, it throws an exception.
The problem arises when the matrix has more lines (a million in my case, although with doubles). I guess that with the first allocation,
s = new char * [n], memory is given for the n pointers. But the computer does not yet know how much memory each of those pointers has to point to!! And maybe there is not enough contiguous memory for each of the
s[i] = new char [m]
allocations. This is a beginner's guess! Can anybody confirm? And if so, is there no other way than to use a 1-dimensional vector and treat it as a 2-dimensional one???
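For what it's worth, the 1-dimensional approach from the closing question is straightforward and also avoids the n separate row allocations entirely. A minimal sketch (the class name FlatMatrix is mine, not from the thread): one contiguous buffer, indexed as row * cols + col.

```cpp
#include <cstddef>
#include <vector>

// A flat 2-D matrix: a single contiguous allocation indexed as
// row * cols_ + col. One new/delete pair instead of n + 1, and no
// per-row cleanup needed if the allocation throws partway through.
class FlatMatrix {
public:
    FlatMatrix(std::size_t rows, std::size_t cols)
        : cols_(cols), data_(rows * cols, 0) {}

    char& at(std::size_t r, std::size_t c) { return data_[r * cols_ + c]; }

private:
    std::size_t cols_;
    std::vector<char> data_;
};
```

Because the storage is a std::vector, the elements are value-initialized (touched) at construction, and cleanup is automatic.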
By default the linux kernel uses an optimistic allocation strategy which works as firedraco said... all mmap() does is allocate more virtual pages. The virtual pages have no physical pages mapped to them until the memory is first accessed (which probably occurs when the constructors are run).

What is happening is the kernel swap thread is waking up and consuming virtually all of the CPU to try to reclaim enough pages to backfill your allocation. This is a known and unfortunate behavior of kswapd.

With the optimistic allocation strategy in place, the only way you'll get bad_alloc is if you attempt to allocate a block of memory that is either larger than your process' virtual address space or there is no contiguous block of virtual memory space large enough for the allocation. *Most* of the time, with this allocation strategy, the end result is the OOM killer waking up and nuking another process to backfill your allocation. Which basically means that it is totally pointless to catch bad_alloc.

You can change Linux's default allocation strategy. There is a kernel parameter in /proc/sys/vm named "overcommit_memory". By default it is set to 0 (heuristic overcommit). Set it to 2 (strict accounting). This changes mmap() behavior such that the kernel accounts for the full allocation immediately and fails the mmap() call if it cannot commit the memory. Therefore, if mmap() returns a valid pointer (and therefore malloc or new return a valid pointer), you know you have the memory.
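For reference, a sketch of inspecting and changing the setting on a current kernel (assuming the standard sysctl tooling; requires root):

```shell
# Show the current overcommit policy
# 0 = heuristic (default), 1 = always overcommit, 2 = strict accounting
cat /proc/sys/vm/overcommit_memory

# Switch to strict accounting for the running kernel
sudo sysctl -w vm.overcommit_memory=2

# Persist the setting across reboots
echo 'vm.overcommit_memory=2' | sudo tee -a /etc/sysctl.conf
```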

thanks jsmith.
I have found that I should change /etc/sysctl.conf, adding vm.overcommit_memory=2.
I'm not an expert... do I need to reboot afterwards? Is it dangerous?
thanks again
I don't think you need to reboot. The worst-case scenario is that the kernel caches the value in the process control block when a process is created and uses the cached value for the lifetime of the process (in which case only new processes would use the new value). But that would seem silly to do, so I'm guessing the kernel just always looks at the kernel parameter directly, in which case any change to the value takes effect immediately.

I don't think it will be dangerous.

What you might see is processes appearing to consume a lot more memory. For example, a process might malloc 10MB for some reason. If it never uses it, or only uses the first 1MB, then with overcommit enabled the buffer will only consume 1MB of physical memory. With strict accounting (overcommit_memory=2), the full 10MB is charged against the commit limit regardless of usage. Effectively wasting memory...
Topic archived. No new replies allowed.