In the course of my program, I am trying to allocate an array of doubles about 20,000 elements long. The program is unable to do so and throws a bad_alloc exception. From what I've read, UNIX has no per-program heap size limits; does Linux behave the same way? I tested the code that does the array allocation in a separate program and it runs perfectly, so it seems to me that there just isn't enough memory available. I tried increasing my per-user limits with the unlimit command in the C shell, but that didn't help. Is there a better way to do this? I found here that I can edit the limits.conf file, but I'm not certain which entry I should edit or what value to set.
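To double-check whether any limit is actually being hit, I assume the soft limits the process sees can be read with getrlimit; here is a minimal sketch (picking RLIMIT_DATA and RLIMIT_AS as the resources that seem relevant, which may not be the right ones):

    #include <sys/resource.h>
    #include <cstdio>

    int main() {
        struct rlimit lim;

        // RLIMIT_DATA limits the size of the data segment (heap).
        if (getrlimit(RLIMIT_DATA, &lim) == 0) {
            if (lim.rlim_cur == RLIM_INFINITY)
                std::printf("RLIMIT_DATA soft limit: unlimited\n");
            else
                std::printf("RLIMIT_DATA soft limit: %llu bytes\n",
                            (unsigned long long)lim.rlim_cur);
        }

        // RLIMIT_AS limits the total virtual address space.
        if (getrlimit(RLIMIT_AS, &lim) == 0) {
            if (lim.rlim_cur == RLIM_INFINITY)
                std::printf("RLIMIT_AS soft limit: unlimited\n");
            else
                std::printf("RLIMIT_AS soft limit: %llu bytes\n",
                            (unsigned long long)lim.rlim_cur);
        }
        return 0;
    }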
I can post code if necessary, but the call is buried deep in a program that uses a lot of objects and pointer references.
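It boils down to something roughly like this, though (a stripped-down sketch; the real names and surrounding classes are different):

    #include <iostream>
    #include <new>

    int main() {
        try {
            // One block of ~20,000 doubles, i.e. roughly 160 KB.
            double* samples = new double[20000];
            samples[0] = 0.0;      // touch the memory so it isn't optimized away
            delete[] samples;
        } catch (const std::bad_alloc& e) {
            std::cerr << "allocation failed: " << e.what() << '\n';
            return 1;
        }
        return 0;
    }

As a standalone program this runs fine; inside the full program the equivalent allocation is what throws.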
Is this failing on the first attempt, or does the program allocate a large number of other objects elsewhere? How much memory is your application consuming? Is it 32-bit or 64-bit?
It allocates other objects elsewhere, but I wouldn't call it a large number: maybe 10 or so custom-written objects, each holding around 20 doubles, plus a few others that hold pointers to them, for a total of about 15 to 20 objects. By the time it shows up in the task manager it is using 188 KB; at that point the program pauses because the child processes are not transferring data to the parent process. The system is 64-bit, with 3 GB of RAM and 6 GB of swap space (which isn't being touched by anything).
The pointers are all stored in a custom object. At the moment I haven't used a single STL container (although I'm sure one would have made a few things easier).
There also seems to be a bug involving another 20,000-element array at the same time, but my test program was able to allocate a std::vector with far more elements than I will ever need, so I am just going to convert over to that.
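Roughly the conversion I have in mind (a sketch; DataBlock is a made-up stand-in for whatever class currently owns the raw array):

    #include <cstddef>
    #include <vector>

    // The raw double* member becomes a std::vector<double>, so the
    // allocation and cleanup are handled by the container.
    class DataBlock {
    public:
        explicit DataBlock(std::size_t n) : values_(n, 0.0) {}   // was: new double[n]

        double&       operator[](std::size_t i)       { return values_[i]; }
        const double& operator[](std::size_t i) const { return values_[i]; }
        std::size_t   size() const                    { return values_.size(); }

    private:
        std::vector<double> values_;   // replaces double* plus the manual delete[]
    };

If the allocation still fails, the vector's constructor will throw the same std::bad_alloc, but at least the ownership bookkeeping goes away.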