Hello all, I have a project I've been working on for a couple of weeks now: a multi-threaded application that listens on multiple ports for incoming TCP connections. Any data received from a TCP connection is directed to the standard output stream, cout. The program works like a champ, but I thought of a possible problem: over a long enough run, I could potentially overrun the standard cout buffer.
I need to keep that functionality (sending TCP data to cout) because I am invoking the above program from within another application that watches the standard output stream and processes the data so it can be stored in our database. You've got to love legacy software!
I am wondering if it is possible to overrun the standard cout buffer. Here is the code snippet that I am questioning...
/* This is just a snippet, so keep in mind that all of my variables
   are declared correctly and that the program works as designed,
   except for the potential problem I am asking about. */
if (Bytes_Read > 0)
{
    cout << buffer << '\n';
    cout.clear();   // intended to reset/clear the output buffer
}
close(socket_descriptor);
Will the cout.clear() call actually "initialize" the buffer, or does it essentially just wipe the data from the screen? If not, is there a way to programmatically keep the standard output buffer from encountering an overrun scenario?
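For reference, the snippet above sits inside a per-connection receive loop shaped roughly like this (a simplified sketch, assuming blocking POSIX sockets; the real code has more error handling):

#include <iostream>
#include <sys/socket.h>
#include <unistd.h>

using namespace std;

// Simplified per-connection loop: read until the peer closes,
// echoing each chunk to stdout for the legacy consumer.
void handle_connection(int socket_descriptor)
{
    char buffer[256];
    ssize_t Bytes_Read;

    while ((Bytes_Read = recv(socket_descriptor, buffer,
                              sizeof(buffer) - 1, 0)) > 0)
    {
        buffer[Bytes_Read] = '\0';   // recv() data is not NUL-terminated
        cout << buffer << '\n';
        cout.clear();
    }
    close(socket_descriptor);
}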
This may be defined by std::numeric_limits<streamsize>::max(). Although that's pretty large, I can sort of see what you're worrying about. Unfortunately, I don't have the answer.
Yeah, I ran a test against it last week where I simulated 7 different machines, each on its own port, all sending data simultaneously at periodic intervals of 5-7 seconds. It had logged almost 250,000 records by the time I got back to the office to stop it. That is a LONG time in the real world, where each record actually arrives every 90 seconds to 5 minutes. They're small packets, maybe 40 bytes at a time, but I want this to be robust enough that anyone can operate it without the need for intervention. I don't want to have to maintain this my entire career either... I've learned that the extra effort put in by the developer to handle situations like this FAR outweighs the "just make it work" approach.
On second thought, look into the maximum packet size... if I'm not mistaken, even large contiguous data is broken into smaller packets. As long as the buffer is flushed after each packet, wouldn't it be okay?
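For example, flushing right after each record means at most one record's worth of data ever sits in cout's internal buffer (a minimal sketch):

#include <iostream>

using namespace std;

// Write one received record and push it through immediately.
void emit(const char* buffer)
{
    cout << buffer << '\n';
    cout.flush();   // or: cout << buffer << endl; (endl = newline + flush)
}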
That's what I'm thinking, but I don't want to make that assumption and put this into production. It's MUCH harder to fix once it's out in the "real world". I've been trying to find information on "buffer overrun cout" through Google, but strangely it has been unproductive. Almost everything found deals with arrays and strings, and the solution every time is that the developer didn't understand the indexing. Anyway......
I am currently running another brute-force test, having 4 machines send data every 750 milliseconds. I'm going to let this run for a few days and see if I can crash the application. 15,000 strong and still going.
It's my understanding that cout directs whatever is given to it to the standard output stream. Since it's an I/O stream, wouldn't there be a point at which said stream becomes completely filled with data and just goes blah? This program needs to be able to operate for long periods of time, possibly 6 months or more, without interruption.
Formatted output to cout is directed to stdout, the C output stream, which goes to the screen, to a file, to /dev/null, or wherever it has been redirected.
If it has been redirected to a file and the file grows to fill the entire available disk space after 6 months, other programs that write to disk may be affected. Otherwise, there is nothing to worry about (except perhaps interleaved characters when multiple threads output at the same time).
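If the disk-full case is ever a worry, the failure is detectable: a failed write or flush sets badbit on the stream, so the stream tests false. A minimal sketch (the recovery policy here is just one possibility, not something the standard prescribes):

#include <iostream>

using namespace std;

// Returns false if the record could not be pushed out to stdout
// (e.g. the file it was redirected to ran out of disk space).
bool write_record(const char* buffer)
{
    cout << buffer << '\n' << flush;
    if (!cout) {
        cerr << "stdout write failed\n";
        cout.clear();   // reset badbit so later writes can be retried
        return false;
    }
    return true;
}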
(except perhaps interleaved characters ... from multiple threads)
I had also thought about the "interleaved" characters that you mentioned. I'm hoping to catch that with some of the brute-force testing I'm running now. I'm well above 50,000 successful "sends" without any loss or criss-crossing of data. Still have a ways to go before I'm at the 250,000 from my previous trial.
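If interleaving ever does turn up, one fix would be to serialize the writes with a single lock shared by all the listener threads; a minimal sketch, assuming C++11 <mutex> is available:

#include <iostream>
#include <mutex>

using namespace std;

mutex cout_mutex;   // shared by every listener thread

// Holding the lock for the whole record guarantees two threads
// can never interleave characters mid-line.
void emit_record(const char* buffer)
{
    lock_guard<mutex> lock(cout_mutex);
    cout << buffer << '\n';
}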
But cout still uses an underlying buffer (a streambuf). Given that the nature of that buffer (or of streams in general) isn't well explained, he's worried. The stream may deal with this just fine, though; I'm not sure.
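That said, the put area doesn't accumulate output forever: when it fills, the streambuf's overflow() is called to drain it, and the same fixed-size area is reused. A toy streambuf makes this visible (purely illustrative; how cout's real buffer is sized and managed is implementation-defined):

#include <cstdio>
#include <iostream>
#include <streambuf>

using namespace std;

// A streambuf with a deliberately tiny 8-byte put area. When it fills,
// overflow() drains it to stdout and resets the pointers: the buffer
// recycles instead of growing, so there is nothing to "overrun".
class TinyBuf : public streambuf {
    char buf_[8];
    int drains_;
public:
    TinyBuf() : drains_(0) { setp(buf_, buf_ + sizeof(buf_)); }
    int drains() const { return drains_; }
protected:
    int overflow(int ch)
    {
        ++drains_;
        fwrite(pbase(), 1, pptr() - pbase(), stdout);   // drain
        setp(buf_, buf_ + sizeof(buf_));                // reuse the area
        if (ch != traits_type::eof()) { *pptr() = char(ch); pbump(1); }
        return traits_type::not_eof(ch);
    }
    int sync() { overflow(traits_type::eof()); return 0; }
};

int main()
{
    TinyBuf tb;
    ostream out(&tb);
    out << "a line much longer than the 8-byte put area" << '\n';
    out.flush();
    fprintf(stdout, "buffer drained %d times\n", tb.drains());
    return 0;
}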