Memory Release Problem

I have a problem with releasing memory, and I have written a test program that demonstrates it. I have a class with a private member: a vector whose individual elements are themselves vectors of char *. In other words:

vector < vector < char * > > all_seqs;

When I iterate through this vector and allocate memory with malloc for the char pointers, I can see that the memory is taken up. When I iterate through the vectors again and free the memory, the OS doesn't show the memory as being released. When I do all of this allocation and freeing directly in main, the memory correctly shows up as released.

I'm running this code on a Linux machine running CentOS. I don't think the problem depends on the OS, but in case anyone is wondering, that is what I'm on. I just think there is some fundamental characteristic of classes in C++ that I'm not understanding here.

Below is the test code that shows the problem I'm running into. The calls to sleep are there so that I have time to see that the memory is actually being used up and released before the program continues or finishes executing. The header includes unistd.h, which is where sleep lives on Unix-based machines, so if you try this on a Windows box you will need to include the equivalent header for the sleep function. I have tried to put all the necessary code in a single file to simplify compilation.

Also, you might want to change the TOTAL value in the class definition to a smaller value so that the code doesn't eat up all the memory on your machine while it is running. I was running this code on a machine with 32 gigabytes of memory, so I needed a large value in order to actually *see* that the memory was being taken up. A value like 100000 or so should be enough to recreate the problem.

#include <vector>
#include <iostream>
#include <cstdlib>   // malloc, free
#include <cstring>   // memset
#include <unistd.h>  // sleep
using namespace std;

class mem {
   public:
      mem();
      void memTest();
   private:
      static const int TOTAL = 10000000;
      vector < vector < char * > > all_seqs;
};

int main(int argc, char *argv[]){
   mem *f = new mem();
   f->memTest();
   delete f;   // release the mem object itself before exiting
   return 0;
}

mem::mem(){
   all_seqs.resize(TOTAL);
}

void mem::memTest(){

   cout << "Building memory array" << std::endl;
   for (int x = 0; x < TOTAL; x++) {
      for (int y = 0; y < 10; y++) {
         char *val = (char *)malloc(16);
         memset(val, 'A', 16);
         this->all_seqs.at(x).push_back(val);
      }
   }
   cout << "Done building memory array" << std::endl;

   sleep(5);

   cout << "Destroying memory array" << std::endl;
   for (int x = 0; x < TOTAL; x++) {
      for (int y = 0; y < 10; y++) {
         free(this->all_seqs.at(x).at(y));
      }
      this->all_seqs.at(x).resize(0);  // drops the elements, though the vector keeps its capacity
   }
   cout << "Done destroying memory array" << std::endl;

   sleep(10);
}
With Windows XP SP2, it runs out of memory (I think), and it force-quits to preserve the stability of the system. Unfortunately, the system is left nearly unusable.

Note that 1000000 works just fine, but 10000000, as in the original program above, is horrible.

In any case, the memory isn't released because the process doesn't complete. I'm not sure if larger swapfiles or swap partitions would help or not.

Have you considered using something other than vectors? If that doesn't work for you, maybe your inner vector could be a plain array: after all, you're only using 10 elements per row, and the strings are allocated dynamically anyway, so they remain in memory either way. Create something like a vector< char** >. That may save quite a bit of memory.
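
A minimal sketch of that idea (not the original poster's code): it assumes a fixed inner size of 10 and the same 16-byte buffers as the test program, and the ROWS, COLS, and CHUNK names are made up for illustration.

#include <vector>
#include <cstdlib>
#include <cstring>

// Illustration-only sizes; the original test used 10000000 rows of 10 x 16-byte buffers.
static const int ROWS  = 100000;
static const int COLS  = 10;
static const int CHUNK = 16;

int main(){
   std::vector<char**> all_seqs(ROWS);

   // Allocate one char** per row, plus COLS small buffers per row.
   for (int x = 0; x < ROWS; x++) {
      all_seqs[x] = (char**)malloc(COLS * sizeof(char*));
      for (int y = 0; y < COLS; y++) {
         all_seqs[x][y] = (char*)malloc(CHUNK);
         memset(all_seqs[x][y], 'A', CHUNK);
      }
   }

   // Free everything again: inner buffers first, then the row arrays.
   for (int x = 0; x < ROWS; x++) {
      for (int y = 0; y < COLS; y++)
         free(all_seqs[x][y]);
      free(all_seqs[x]);
   }
   return 0;
}

Compared to vector< vector<char*> >, each row is then a single fixed-size block instead of a vector object with its own bookkeeping and growth slack.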

Another alternative is to create your own Vector class.
Note that I said in the description that 10000000 probably wouldn't work on another system if you run this code, but for the program I'm writing the vector needs to be this large, not to mention I was working on a machine with 32 gigabytes of RAM, which makes this amount of allocation no problem. Working with entire genomes is no small project, so yes, I will need something this large. I suggested a smaller value so you can see the problem on your own system; the right value depends on each system's setup, so I left the original value in place. Please, if you want to help, read my description more carefully.

I don't think it is because the system doesn't finish running the program: I can run this same setup directly inside a main function without making a class, and after it finishes freeing all the allocated memory, my OS shows that the memory actually has been freed *before* the program exits. I am also using a two-dimensional vector because, while I know how large the outer vector is, I won't necessarily know how large each inner vector of char pointers will be in the actual program. This program was meant *only* as a test to show the problem I was running into, since it has the *exact* same problem as my actual program.
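
For reference, a minimal sketch of that main-only comparison might look like the following; it assumes the same allocation pattern as the class version above, and the TOTAL value here is just a placeholder you would shrink to fit your machine.

#include <vector>
#include <cstdlib>
#include <cstring>
#include <unistd.h>
using namespace std;

int main(){
   const int TOTAL = 10000000;   // placeholder; reduce to suit your RAM
   vector< vector<char*> > all_seqs(TOTAL);

   // Same allocation pattern as mem::memTest, but with the vector local to main.
   for (int x = 0; x < TOTAL; x++) {
      for (int y = 0; y < 10; y++) {
         char *val = (char *)malloc(16);
         memset(val, 'A', 16);
         all_seqs.at(x).push_back(val);
      }
   }

   sleep(5);   // time to observe the memory in use

   for (int x = 0; x < TOTAL; x++) {
      for (int y = 0; y < 10; y++)
         free(all_seqs.at(x).at(y));
      all_seqs.at(x).resize(0);
   }

   sleep(10);  // time to observe whether the OS reports the memory as released
   return 0;
}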

Thank you for your suggestions though, and if anyone can make sense of why this program doesn't show the memory being freed up, it would definitely help tremendously.