Basically I was too lazy to include the <chrono> header, but I need to wait for some time. The thing is, I don’t know how long to wait, so I just made a “busy loop” (I think that’s what it’s called?) instead, like so:
for (int i = 0; i < 100000000; i++);
Now, here’s the thing I made: a server that has a couple of things going on at once.
There’s a thread pool to handle connections,
a thread to accept connections,
and the main thread of execution for whatever (idk what that’s gonna do).
The thread pool just tries to grab a file descriptor from a queue, which is added to by the thread that accepts connections. To make absolutely sure the thread pool actually has a chance to grab a file descriptor, I made the thread adding to the queue wait a little every 20 times it adds a connection, so the pool has a chance to actually get some of the file descriptors. I was wondering whether someone thinks this is a weird thing to do, and if there is a better way of doing all this, because what I’m doing right now seems kinda weird, but that might just be me...?
Thank you for taking the time to read through that mess :)
"Too lazy to include <chrono>" makes no sense! And a busy-wait doesn't give up the CPU, so it's not really what you want here since you definitely want to relinguish the CPU so the other thread can use it.
You should just sleep the thread for a definite time.
// Sleep for 1 millisecond:
std::this_thread::sleep_for( std::chrono::milliseconds( 1 ) );
Since I don't understand your description of why you are doing this, I can't say whether it is a good idea or not.
I have 5 worker threads that are all just sitting there waiting to lock a queue of file descriptors. Each file descriptor is a socket (connection) to a client.
Then there is a 6th thread that simply accepts clients over and over and adds them to the queue.
Then there is the main thread of execution, which is irrelevant.
To handle data races between the 5 worker threads and the 6th thread that’s just accepting connections, I use a mutex. Now, since all the 6th thread is doing is continually locking the mutex, adding a connection, and unlocking, only to lock again, I’m afraid there might be times when the 5 worker threads never even get a chance to lock the mutex and extract a connection from the queue. So I made it so the 6th thread occasionally waits a bit instead of locking the queue, to allow the worker threads to get some of the connections.
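Roughly, my producer thread does something like this (just a sketch of what I described, with made-up names; the wait is the busy loop from my first post):

#include <queue>
#include <mutex>

std::mutex queue_mutex;     // made-up names, just to illustrate
std::queue<int> fd_queue;   // file descriptors waiting for a worker

void accept_loop()
{
    int accepted = 0;
    while (true)
    {
        int fd = /* accept() a new client connection */ -1;
        {
            std::lock_guard<std::mutex> lk(queue_mutex);
            fd_queue.push(fd);
        }
        // the hack in question: every 20 connections, busy-wait a while
        // so the workers get a chance to lock the mutex themselves
        if (++accepted % 20 == 0)
            for (int i = 0; i < 100000000; i++);
    }
}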
So the "20 iterations" and "wait" part is all about the delay you're trying to put in? Taking that out, I don't see what the problem is. Surely the producer is stuck in the "accept connection" state most of the time. I don't see how it will block out the consumers. The only problem I see is that the consumer will busy-wait on an empty queue. You should probably use a condition variable to signal when the queue has data.
Producer:
    while true:
        accept connection
        lock mutex
        push connection
        unlock mutex
        notify waiting threads that queue has data
Consumer:
    while true:
        lock mutex
        pop connection
        unlock mutex
        if there was a connection
            handle connection
        else
            wait until notified that queue is not empty
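In C++ that might look something like this (an untested sketch; the names and the accept placeholder are made up). The predicate overload of wait folds the "if empty, wait" branches into one call:

#include <queue>
#include <mutex>
#include <condition_variable>

std::mutex mtx;                       // protects fd_queue
std::condition_variable queue_ready;  // signalled when fd_queue gains data
std::queue<int> fd_queue;

void producer()
{
    while (true)
    {
        int fd = /* accept a connection */ -1;   // placeholder for accept()
        {
            std::lock_guard<std::mutex> lk(mtx);
            fd_queue.push(fd);
        }
        queue_ready.notify_one();   // wake one waiting consumer
    }
}

void consumer()
{
    while (true)
    {
        int fd;
        {
            std::unique_lock<std::mutex> lk(mtx);
            // wait() releases the mutex while blocked and re-acquires it
            // before returning; the predicate also copes with spurious wakeups
            queue_ready.wait(lk, []{ return !fd_queue.empty(); });
            fd = fd_queue.front();
            fd_queue.pop();
        }
        // ... handle the connection on fd ...
    }
}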
Oh yeah, I kinda have something like that; I just return -1 (not a valid socket descriptor) if the queue is empty. It seems kinda strange to add a thing that waits for a flag that then allows you to check for a connection, though. Can I send you my code? It’s Unix sockets and I’ll heavily comment it if needed.
Edit: well not the entire code, just the actual worker thread function and the producer function.
I don't think just returning -1 if the queue is empty is the same thing at all.
The point here is to use a condition variable to notify the consumer when the queue has data, that way the consumer doesn't need to busy-wait, checking the queue over and over.
You can post your code somewhere and put a link here.
Your queue is pretty wacky. I suggest you use std::queue if possible. It looks like your queue may have an actual error since it doesn't store the first fd passed to push.
As for the thread coordination, I'm no expert but maybe something like this:
#include <iostream>
#include <iomanip>
#include <vector>
#include <queue>
#include <atomic>
#include <chrono>
#include <thread>
#include <mutex>
#include <condition_variable>
using namespace std;

mutex mtx;                      // protects q and cout
condition_variable queue_ready;
queue< int > q;
atomic<bool> loop{ true };      // atomic: written by main, read by workers
void func( int n )
{
    while ( loop )
    {
        int seconds = -1;
        {
            unique_lock< mutex > lk( mtx );
            // Sleep until there's work or we're shutting down. The predicate
            // guards against spurious wakeups and missed notifications.
            queue_ready.wait( lk, []{ return !q.empty() || !loop; } );
            if ( !q.empty() )
            {
                seconds = q.front();
                q.pop();
            }
        }
        if ( seconds >= 0 )
        {
            {
                lock_guard< mutex > lk( mtx );
                cout << setw(n * 4) << "W" << seconds << '\n';
            }
            this_thread::sleep_for( chrono::seconds( seconds ) );
            {
                lock_guard< mutex > lk( mtx );
                cout << setw(n * 4) << "F" << seconds << '\n';
            }
        }
    }
}
int main(int argc, char** argv)
{
    int NumThreads = 3;
    if (argc == 2) NumThreads = stoi( argv[1] );

    vector<thread> threads( NumThreads );
    for ( int i = 0; i < NumThreads; ++i )
        threads[i] = thread( func, i + 1 );

    for ( int n; cin >> n; )
    {
        {
            lock_guard< mutex > lk( mtx );  // q is shared with the workers
            q.push( n );
        }
        queue_ready.notify_one();
    }

    loop = false;               // tell the workers to quit before waking them
    queue_ready.notify_all();

    for ( int i = 0; i < NumThreads; ++i )
        threads[i].join();
}
In the main thread you enter non-negative integers that represent the number of seconds for a thread to wait. The ints are pushed to the queue and the threads pop them off and wait for that amount of time (that's their "work"). They print Wn and Fn to say they are Waiting or have Finished waiting n seconds. Each thread's output appears in its own column (i.e., the output is indented proportionally to the thread number).
Example output with 3 "worker" threads (input values are not indented):
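For instance, if you type 2, 1, then 3, one plausible run looks like this, assuming threads 1, 2, and 3 happen to grab the values in that order (the exact interleaving depends on scheduling, so your run may differ):

2
   W2
1
       W1
3
           W3
       F1
   F2
           F3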