Is it possible to buffer overrun cout?

Dec 27, 2011 at 4:18pm
Hello all, I have a project that I have been working on for a couple of weeks now: a multi-threaded application that listens across multiple ports for any incoming TCP connections. Any data received from a TCP connection is then directed to the standard output stream, cout. This program works like a champ, but I thought of a possible problem. Over the course of time I have the potential to overrun the standard cout buffer.

I need to keep that functionality (sending TCP data to cout) because I am invoking the above program from within another application that is looking at the standard output stream to process data so that it can be stored in our database. You've got to love legacy software!

I am wondering if it is possible to overrun the standard cout buffer. Here is the code snippet that I am questioning...
/* This is just a snippet, so keep in mind that all of my variables
   are being declared correctly and that the program works as
   designed except for the potential problem I am asking about */

if (Bytes_Read > 0)
{
    cout << buffer << '\n';
    cout.clear();
}
close(socket_descriptor);


Will the cout.clear() call actually "initialize" the buffer, or does it essentially just wipe the data from the screen? If not, is there a way to programmatically keep the standard output buffer from encountering an overrun scenario?
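In other words, what I'm unsure about is whether clear() touches the buffer at all, or whether what I actually want is flush(). A toy sketch of the two calls I'm comparing:

#include <iostream>

int main()
{
    std::cout << "some data";
    std::cout.clear();   // resets the stream's error-state flags (failbit/badbit);
                         // does nothing unless a previous write failed
    std::cout.flush();   // explicitly pushes the buffered characters out to stdout
}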

Thanks!
Dec 27, 2011 at 4:36pm
closed account (S6k9GNh0)
This may be defined by "std::numeric_limits<streamsize>::max()". Although that's pretty large, I can sort of see what you're worrying about. Unfortunately, I don't have the answer.
Dec 27, 2011 at 4:44pm
Yeah, I ran a test against it last week where I simulated 7 different machines, each on its own port, all simultaneously sending data at periodic intervals of 5-7 seconds. It had done almost 250,000 records by the time I got back to the office to stop it. That is a LONG time in the real world, where records arrive between 90 seconds and 5 minutes apart. They're small packets, maybe 40 bytes at a time, but I want this to be robust enough that anyone can operate it without the need for intervention. I don't want to have to maintain this my entire career either... I've learned that the extra effort put in by the developer to handle situations like this FAR outweighs the "just make it work" approach.
Last edited on Dec 27, 2011 at 4:46pm
Dec 27, 2011 at 4:48pm
closed account (S6k9GNh0)
On second thought, look into the maximum packet size... if I'm not mistaken, even large contiguous data is broken into smaller packets. As long as the buffer is flushed after each packet, wouldn't it be okay?
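Something along these lines, I mean (the names are hypothetical, just to match your snippet):

#include <iostream>
#include <sys/types.h>
#include <sys/socket.h>   // recv()

// Hypothetical receive loop: flush after every packet so the stream's
// buffer never holds more than one packet's worth of data.
void pump_to_stdout(int socket_descriptor)
{
    char buffer[4096];
    ssize_t Bytes_Read;

    while ((Bytes_Read = recv(socket_descriptor, buffer, sizeof buffer - 1, 0)) > 0)
    {
        buffer[Bytes_Read] = '\0';          // recv() does not null-terminate
        std::cout << buffer << std::endl;   // std::endl = '\n' plus a flush
    }
}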
Dec 27, 2011 at 4:56pm
That's what I'm thinking, but I don't want to make that assumption and put this into production. It's MUCH harder to fix once it's out in the "real world". I've been trying to find any information on a "buffer overrun" of cout through Google, but strangely it has been unproductive. Almost everything I found deals with arrays and strings, and the solution every time is that the developer didn't understand the indexing. Anyway......

I am currently running another brute-force test on it, having 4 machines send data every 750 milliseconds. I'm going to let this run for a few days and see if I can crash the application. 15,000 strong and still going.


Dec 27, 2011 at 6:42pm
Over the course of time I have the potential to overrun the standard cout buffer.

What do you mean by that? There is no buffer to "overrun".
Dec 27, 2011 at 6:58pm
Buffer may be the wrong term here...

It's my understanding that cout directs whatever is given to it to the standard output stream. Since it's an I/O stream, wouldn't there be a point at which said stream becomes completely filled with data and just goes blah? This program needs to be able to operate for long periods of time, possibly 6 months or more, without interruption.
Dec 27, 2011 at 7:09pm
Formatted output on cout directs whatever is given to it to stdout, the C output stream, which goes to the screen, to a file, to /dev/null, or wherever it has been redirected.

If it has been redirected to a file and the file grows to fill the entire available disk space after 6 months, other programs that write to disk may be affected. Otherwise, there is nothing to worry about (except perhaps interleaved characters when output comes from multiple threads at the same time).
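If the interleaving does show up in your testing, the usual fix is to serialize the writes yourself. A minimal sketch using pthreads (assuming that's what your threads already use):

#include <iostream>
#include <pthread.h>

// One process-wide lock so that only one thread at a time can be
// formatting into cout; each record then comes out as an unbroken line.
static pthread_mutex_t cout_lock = PTHREAD_MUTEX_INITIALIZER;

void write_record(const char* buffer)
{
    pthread_mutex_lock(&cout_lock);
    std::cout << buffer << '\n';
    pthread_mutex_unlock(&cout_lock);
}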
Dec 27, 2011 at 7:16pm
Otherwise, there is nothing to worry about
Well that's good news.

(except perhaps interleaved characters ... from multiple threads
I had also thought about the "interleaved" characters that you mentioned. I'm hoping to catch this with some of the brute-force testing that I'm running now. I'm well above 50,000 successful "sends" without any loss or criss-crossing of data. Still have a ways to go before I'm at the 250,000 from my previous trial.

Thanks for the help!
Dec 28, 2011 at 12:28am
That's not how streams work. The data "flows" through a stream; it doesn't fill up.
Dec 28, 2011 at 12:37am
closed account (S6k9GNh0)
But cout still uses an underlying buffer (a streambuf). Given that the nature of that buffer (or of streams, for that matter) isn't well explained, he's worried. The stream may deal with this just fine, but I'm not sure.
Last edited on Dec 28, 2011 at 12:38am
Dec 28, 2011 at 12:57am
Output operations on std::cout will simply block until all the data has been entered into its output buffer. Data will not just be discarded.
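You can observe the blocking behaviour directly by piping a fast writer into a deliberately slow reader: the writer stalls, but no output is ever lost. A trivial test program:

// Run as:  ./writer | (while read line; do sleep 1; done)
// Once the OS pipe buffer fills, the << below simply blocks inside the
// underlying write(); the loop runs at the reader's pace, dropping nothing.
#include <iostream>

int main()
{
    for (unsigned long i = 0; ; ++i)
        std::cout << "record " << i << '\n';
}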
Last edited on Dec 28, 2011 at 12:57am
Topic archived. No new replies allowed.