iostream memory usage

I have a question about how iostream uses memory when several stream operators are chained on the same line of code. It's a complex question, but I will try to keep it as simple as possible.

Suppose I have a 10 MB file named "filein", opened by an input stream "fis". I also have three filters, F1, F2, and F3, which I defined to inherit from iostream so that I can overload the stream operators (>> and <<). I plan to output the final data to a file called "fileout", opened by an output stream called "fos". So here is the code:

 
fis >> F1 >> F2 >> F3 >> fos;


Each filter uses a buffer limit based on its complexity: F1 will only process 2 MB at a time, F2 3 MB, and F3 1 MB. Each filter creates an input and an output buffer. After grabbing the first chunk it alters the data, puts it in the output buffer, and then flushes the output buffer. Then it grabs the next chunk, and so on until there is no more data. These buffers are built at run time: if the input data is smaller than the buffer limit, the buffer is set to that smaller size; if it is larger, the buffer is capped at the maximum size.
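For concreteness, here is a rough sketch of the loop each filter runs; the function and variable names are hypothetical, just to illustrate the chunking described above:

#include <istream>
#include <ostream>
#include <vector>

// Sketch only: read up to bufLimit bytes, alter them, write and flush,
// and repeat until the input is exhausted. The last chunk may be smaller.
void runFilter(std::istream& in, std::ostream& out, std::streamsize bufLimit)
{
    std::vector<char> buf(static_cast<std::size_t>(bufLimit));
    while (in.read(buf.data(), bufLimit) || in.gcount() > 0)
    {
        std::streamsize n = in.gcount(); // actual bytes grabbed this pass
        // ... alter the n bytes in buf here ...
        out.write(buf.data(), n);
        out.flush();
    }
}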

Does iostream pull the entire 10 MB file into memory to send to F1? After F1 flushes its output buffer (2 MB), does iostream send that 2 MB on to F2, and so on down to fos, before returning to grab the next chunk, or does it dynamically allocate memory to hold the output until F1 is done? If it sends it to F2 first, then F2 will never get its full 3 MB chunks, because it is only receiving 2 MB at a time from F1. Likewise, when F2 flushes its 2 MB, F3 will only be able to take 1 MB of it, so iostream would have to hold the remaining 1 MB in memory until F3 can process both halves.

To throw one more wrench in it, we use mlockall, which locks all of the process's memory into RAM at once. Since our buffers are built dynamically at run time, and the compiler could never know how big the input file might be, I doubt this changes anything, but without knowing how iostream handles the flow and how it uses memory for such a task, I can't be sure.
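For reference, the call looks like this (I'm assuming the usual POSIX flags here; MCL_FUTURE is what extends the lock to buffers allocated later at run time):

#include <sys/mman.h>
#include <cstdio>

int main()
{
    // MCL_CURRENT locks every page mapped right now; MCL_FUTURE also locks
    // pages from future allocations, e.g. buffers built at run time.
    if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0)
        perror("mlockall");
    // ... run the filter chain ...
}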

Any feedback here would be greatly appreciated. You can see how this process could greatly affect our performance, both by bottlenecking the chunks and in how much memory is needed. For example, if we push the data through one filter at a time we will be fine, but what happens if we chain four filters together? Five? We need to know what is happening with the memory to make the right decisions about our limits.

Thank you for your time!

I don't believe the extraction operator works the way you are thinking: with the standard streams, an extraction can be performed from an input stream into a stream buffer, but not into an output stream directly. The closest thing you could do is:
 
fis >> F1.rdbuf();

for each operation...

In this case, the extraction operation performs a succession of snextc() and sputc() calls between the input stream and the stream buffer, so what happens with memory depends on each buffer's implementation. Either way, each filter operation will have finished before you begin the next one, so there is no chaining.
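So to push the data through all three filters you would end up with a sequence of separate whole-stream copies, something like the following; this assumes your F1..F3 really are iostreams whose buffers hold their output so it can be read back out:

fis >> F1.rdbuf(); // drain all of filein through F1's stream buffer
F1 >> F2.rdbuf();  // then everything F1 produced into F2
F2 >> F3.rdbuf();  // then into F3
fos << F3.rdbuf(); // finally copy F3's result out to fileout

Each line runs to completion before the next one starts, so whatever a filter produces has to sit in its buffer (or wherever your implementation puts it) until the following line drains it.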