Thread synchronization in a client-server-like application - C

I have been working on a file download application where the server continuously waits for new connection requests from clients. When a new connection arrives, the server accepts it and creates a new process to serve the newly connected client. Clients can request to download multiple files from the server. For each file, the client and server sides create a new thread, and the data transfer for each file should be carried out between the matching thread pair on the server and the client. I'm using C and pthreads. I have a stable socket connection and successful process creation for each client so far.
For the threaded file transfer, my attempt so far is as follows:
On the client side, I create threads, each of which runs a function that receives a file:
int k;
for (k = 0; k < fNameCounter; k++)
{
    pthread_t thread_id;
    int status = pthread_create(&thread_id, NULL, &receiveFile, fName);
    if (status != 0)
    {
        printf("Thread Creation Failed \n");
        exit(0);
    }
}
Similarly, on the server side, I create the same number of threads:
int k;
for (k = 0; k < fnameCounter; k++)
{
    pthread_t thread_id;
    int status = pthread_create(&thread_id, NULL, &sendFile, fName);
    if (status != 0)
    {
        printf("Thread Creation Failed \n");
        exit(0);
    }
}
The sendFile and receiveFile functions simply write and read the bytes of the file specified by fName (as passed to pthread_create) over the socket, and that is where I hit a major problem:
As far as I can tell, the files' contents may come out wrong after all the threads finish receiving data from the server, because sendFile and receiveFile just read from and write to the same socket.
How can I guarantee that each client thread gets the proper data from the proper server thread, as illustrated below:
             receive                  send
cthread1 ------------> a.txt <------------ sthread1
cthread2 ------------> b.txt <------------ sthread2
cthread3 ------------> c.txt <------------ sthread3
P.S. I'm aware that creating many threads on one socket doesn't make much sense, but it's my homework and I need to do it this way :/
Regards.

The easiest way is to open a new socket for each file.
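A minimal sketch of that approach on the client side (connect_to_server() and SERVER_PORT are placeholders, not from the question): each receiveFile thread opens its own connection, so one socket carries exactly one file and nothing needs to be demultiplexed.

/* Sketch only: each client thread opens its own TCP connection for its file.
 * connect_to_server() and SERVER_PORT are hypothetical placeholders. */
#include <string.h>
#include <unistd.h>

void *receiveFile(void *arg)
{
    const char *fname = arg;
    int sock = connect_to_server(SERVER_PORT);   /* socket() + connect() wrapper */
    if (sock < 0)
        return NULL;

    /* tell the server which file this connection is for */
    write(sock, fname, strlen(fname) + 1);

    /* ... read the file contents from sock and write them to disk ... */

    close(sock);
    return NULL;
}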

To complete zvrba's very valid point (one socket, one file):
I think the threads are useless here: you have one network card, and you won't get data faster by listening to the same resource several times. Your threads are actually likely to block each other and end up slower.
If you want to give feedback to the user, you may create one background thread for all the socket IO.
You should handle your several connections in the same thread using select to detect which socket may be read from / written to.
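A rough sketch of that single-threaded select() loop (conns[], nconns and handle_io() are assumed names): one thread watches every connection and services whichever socket is ready.

/* Sketch only: conns[], nconns and handle_io() are assumed names. */
#include <sys/select.h>

void io_loop(int *conns, int nconns)
{
    for (;;) {
        fd_set rfds;
        int i, maxfd = -1;

        FD_ZERO(&rfds);
        for (i = 0; i < nconns; i++) {
            FD_SET(conns[i], &rfds);
            if (conns[i] > maxfd)
                maxfd = conns[i];
        }

        /* wait until at least one connection is readable */
        if (select(maxfd + 1, &rfds, NULL, NULL, NULL) < 0)
            break;                      /* handle EINTR/errors as needed */

        for (i = 0; i < nconns; i++)
            if (FD_ISSET(conns[i], &rfds))
                handle_io(conns[i]);    /* read/write without blocking for long */
    }
}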

Considering the restriction of only one socket, you might want to add some header info to the bytes being transmitted. This header might contain information about which pair of threads is responsible for that bit of information.
For example, sthread1 will send 100 bytes from file a.txt. You might add one byte to the beginning of that stream, containing the number 1. When receiving that chunk of data, cthread1 needs to check the first byte: if it is 1, then OK, this chunk is for cthread1, proceed with processing. If the header isn't 1, then cthread1 should ignore this chunk and keep waiting, giving the other threads an opportunity to run too.
If you don't identify your chunks of data, it will be impossible to determine which thread should process which chunk.
Please note that this adds a lot of complexity to deal with:
You will have to decide how to assign those identifiers to your pairs of threads
If you use only one byte, keep in mind that the maximum value it holds is 255.
You might have to use the MSG_PEEK flag in your recv call inside the client threads (so you can ignore the chunk if needed). This flag alters the behaviour of recv; I advise you to read the recv documentation (MSDN if you're programming on Windows, the man page if you're on Linux). A sketch of the idea follows.
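Here is a rough sketch of the header-byte idea, assuming MAX_CHUNK and the thread IDs are defined elsewhere; note that real code would also need a length field and some locking around the peek/consume pair, since several threads share the socket.

/* Sketch only: MAX_CHUNK and the ID values are assumptions. */
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>

/* server side: sthreadN prefixes every chunk with its ID byte */
ssize_t send_chunk(int sock, unsigned char id, const void *buf, size_t len)
{
    unsigned char pkt[1 + MAX_CHUNK];
    pkt[0] = id;
    memcpy(pkt + 1, buf, len);
    return send(sock, pkt, len + 1, 0);
}

/* client side: cthreadN peeks at the header and only consumes its own chunks */
ssize_t recv_chunk_for(int sock, unsigned char id, void *buf, size_t len)
{
    unsigned char hdr;

    if (recv(sock, &hdr, 1, MSG_PEEK) != 1)
        return -1;
    if (hdr != id)
        return 0;                  /* not ours: leave it for another thread */

    recv(sock, &hdr, 1, 0);        /* consume the header byte */
    return recv(sock, buf, len, 0);
}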

Related

C: connection between threads with pipe, synchronization needed?

I'm dealing with pipe communication between threads in C.
I have 2 threads:
- thread 1 just manages some events,
- thread 2 communicates with a serial port;
Threads 1 and 2 communicate through a pipe.
When certain conditions occur, the "events manager" thread should send a string to the "serial manager" through e.g. pipe[1]; the serial manager polls both the serial port and pipe[0].
Then, if there's a string on pipe[0], it should do its work.
The problem is that thread 1 writes faster than thread 2 reads.
So my question is: how do I properly read from pipe[0]? How do I get a queue? Because if I simply do a blocking read in thread 2:
read(pipe[0], string, sizeof(string)-1)
thread 2 reads all of thread 1's accumulated messages at once.
The only solution I found is to create another pipe that blocks thread 1 (thread 1 does a blocking read right after writing), so thread 1 waits until thread 2 has done its work (this is also useful because I can get a response from thread 2). But my question is: is this the correct way? My conviction is that I'm missing something about the read function.
"[I]s this the correct way [to read variable-length asynchronous messages over a FIFO, processing them one at a time]?"
No, you do not need to synchronize a single producer sending variable-length messages over a FIFO to a single consumer to process the messages one at a time.
As you documented in your own answer, you could add a record terminator to the messages. You could also implement a simple protocol that describes the length of each message (cf. netstrings).
There's a lot of prior practice you can borrow here. For example, you don't need to read record-terminated messages one character at a time, but could locally buffer partial messages — think of what stdio does to turn bytestreams into lines. The constraint that there's only one consumer makes some things easy.
"[I]s this the correct way [to send variable-length asynchronous messages between threads]?"
It's serviceable but perhaps not ideal.
A message-oriented, queuing channel might be better suited here: message queues or a datagram socket pair.
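For illustration, a socketpair(AF_UNIX, SOCK_DGRAM, ...) is one way to get such a message-oriented channel between the two threads. This is only a sketch built on the question's code (work_with_message() comes from the asker's own snippet):

/* Sketch only: sv[] is a datagram socketpair replacing the pipe;
 * each write is delivered to the reader as one whole message. */
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

static int sv[2];   /* sv[1]: events manager writes, sv[0]: serial manager reads */

static void setup_channel(void)
{
    socketpair(AF_UNIX, SOCK_DGRAM, 0, sv);   /* instead of pipe() */
}

static void send_event(const char *string)
{
    write(sv[1], string, strlen(string) + 1); /* one write, one message */
}

static void poll_events(void)
{
    char msg[256];
    ssize_t n = read(sv[0], msg, sizeof msg); /* returns exactly one message */
    if (n > 0)
        work_with_message(msg);
}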
Thank you all for the answers.
I've found the solution. I don't know if it is stylistically correct, but it works very well: use the read function in a non-blocking way. Just configure the pipe in main:
fcntl(pipe_fd[0], F_SETFL, fcntl(pipe_fd[0], F_GETFL) | O_NONBLOCK);
Then just make sure thread 1 writes the string plus the terminating '\0':
write(pipe_fd[1], string, strlen(string) + 1);
And finally, the read in thread 2 should look like this:
int n_bytes, offset;
int pres;
char ch[2];
char string[256];

pres = poll (....);
if (pres > 0){
    ...
    ...
    /* if it's the pipe_fd[0] ...*/
    offset = 0;
    do{
        n_bytes = read(pipe_fd[0], ch, 1);
        string[offset] = ch[0];
        offset += n_bytes;
    }while(n_bytes > 0 && ch[0] != '\0' && offset < sizeof(string));
    work_with_message(string);
}
that's it, tell me what you think ;-)

epoll IO with worker threads in C

I am writing a small server that will receive data from multiple sources and process it. The sources and the amount of data received are significant, but no more than epoll should be able to handle quite well. However, all received data must be parsed and run through a large number of tests, which is time consuming and will block a single thread despite epoll multiplexing. Basically, the pattern should be something like this: the IO loop receives data and bundles it into a job, sends it to the first available thread in the pool, the bundle is processed by the job, and the result is passed back to the IO loop for writing to file.
I have decided to go for a single IO thread and N worker threads. The IO thread for accepting TCP connections and reading data is easy to implement using the example provided at:
http://linux.die.net/man/7/epoll
Threads are also usually easy enough to deal with, but I am struggling to combine the epoll IO loop with a thread pool in an elegant manner. I have been unable to find any "best practice" for using epoll with a worker pool online either, but there are quite a few questions on the same topic.
I therefore have some questions I hope someone can help me answer:
Could (and should) eventfd be used as a mechanism for 2-way synchronization between the IO thread and all the workers? For instance, is it a good idea for each worker thread to have its own epoll routine waiting on a shared eventfd (with a struct pointer, containing data/info about the job), i.e. using the eventfd as a job queue somehow? Also, perhaps have another eventfd to pass results back into the IO thread from the multiple worker threads?
After the IO thread is signaled about more data on a socket, should the actual recv take place on the IO thread, or should the workers recv the data on their own in order not to block the IO thread while parsing data frames etc.? In that case, how can I ensure safety, e.g. in case recv reads 1.5 frames of data in a worker thread and another worker thread receives the last 0.5 frame of data from the same connection?
If the worker thread pool is implemented through mutexes and such, will waiting for locks block the IO thread if N+1 threads are trying to use the same lock?
Are there any good practice patterns for how to build a worker thread pool around epoll with two way communication (i.e. both from IO to workers and back)?
EDIT: Could one possible solution be to update a ring buffer from the IO loop and, after the update, send the ring buffer index to the workers through a pipe shared by all workers (thus handing control of that index to the first worker that reads it off the pipe), let that worker own the index until the end of processing, and then send the index number back into the IO thread through a pipe again, giving back control?
My application is Linux-only, so I can use Linux-only functionality in order to achieve this in the most elegant way possible. Cross-platform support is not needed, but performance and thread safety are.
In my tests, one epoll instance per thread outperformed complicated threading models by far. If listener sockets are added to all epoll instances, the workers would simply accept(2) and the winner would be awarded the connection and process it for its lifetime.
Your workers could look something like this:
for (;;) {
    nfds = epoll_wait(worker->efd, evs, 1024, -1);

    for (i = 0; i < nfds; i++)
        ((struct socket_context*)evs[i].data.ptr)->handler(
            evs[i].data.ptr,
            evs[i].events);
}
And every file descriptor added to an epoll instance could have a struct socket_context associated with it:
void listener_handler(struct socket_context* ctx, int ev)
{
    struct socket_context* conn = malloc(sizeof(*conn));

    conn->fd = accept(ctx->fd, NULL, NULL);
    conn->handler = conn_handler;

    /* add to calling worker's epoll instance or implement some form
     * of load balancing */
}
void conn_handler(struct socket_context* ctx, int ev)
{
    /* read all available data and process. if incomplete, stash
     * data in ctx and continue next time handler is called */
}

void dummy_handler(struct socket_context* ctx, int ev)
{
    /* handle exit condition async by adding a pipe with its
     * own handler */
}
I like this strategy because:
very simple design;
all threads are identical;
workers and connections are isolated--no stepping on each other's toes or calling read(2) in the wrong worker;
no locks are required (the kernel gets to worry about synchronization on accept(2));
somewhat naturally load balanced since no busy worker will actively contend on accept(2).
And some notes on epoll:
use edge-triggered mode, non-blocking sockets and always read until EAGAIN (a sketch of this follows these notes);
avoid dup(2) family of calls to spare yourself from some surprises (epoll registers file descriptors, but actually watches file descriptions);
you can epoll_ctl(2) other threads' epoll instances safely;
use a large struct epoll_event buffer for epoll_wait(2) to avoid starvation.
Some other notes:
use accept4(2) to save a system call;
use one thread per core (1 for each physical core if CPU-bound, or 1 for each logical core if I/O-bound);
poll(2)/select(2) is likely faster if connection count is low.
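As an illustration of the edge-triggered note above, the body of a handler like conn_handler() could drain the socket roughly like this (buffer handling is deliberately simplified, and the ctx fields come from the struct socket_context assumed above):

/* Sketch only: expands the conn_handler() stub above. */
#include <errno.h>
#include <unistd.h>

void conn_handler(struct socket_context *ctx, int ev)
{
    char buf[4096];

    for (;;) {
        ssize_t n = read(ctx->fd, buf, sizeof buf);
        if (n > 0) {
            /* process or stash the n bytes just read */
        } else if (n == 0) {
            /* peer closed: remove fd from epoll and free ctx */
            break;
        } else if (errno == EAGAIN || errno == EWOULDBLOCK) {
            break;   /* socket drained: safe to wait for the next edge */
        } else if (errno != EINTR) {
            /* real error: close and clean up */
            break;
        }
    }
}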
I hope this helps.
With this model, because we only know the packet size once we have fully received a packet, unfortunately we cannot offload the receive itself to a worker thread. Instead, the best we can do is a dedicated thread that receives the data and passes off pointers to fully received packets.
The data itself is probably best held in a circular buffer; however, we will want a separate buffer for each input source (if we get a partial packet we can continue receiving from other sources without splitting up the data). The remaining question is how to inform the workers when a new packet is ready and give them a pointer to the data in that packet. Because there is little data here, just some pointers, the most elegant way of doing this would be with POSIX message queues. These allow multiple senders and multiple receivers to write and read messages, always ensuring every message is received, by exactly one thread.
You will want a struct resembling the one below for each data source; I shall go through the fields' purposes now.
struct DataSource
{
    int   SourceFD;
    char  DataBuffer[MAX_PACKET_SIZE * (THREAD_COUNT + 1)];
    char *LatestPacket;
    char *CurrentLocation;
    int   SizeLeft;
};
SourceFD is obviously the file descriptor of the data stream in question. DataBuffer is where packet contents are held while being processed; it is a circular buffer. The LatestPacket pointer temporarily holds a pointer to the most recent packet, in case we receive a partial packet and move on to another source before passing the packet off. CurrentLocation stores where the latest packet ends, so that we know where to place the next one, or where to carry on in case of a partial receive. SizeLeft is the room left in the buffer; this is used to tell whether we can fit the packet or need to circle back around to the beginning.
The receiving function will thus effectively:
Copy the contents of the packet into the buffer
Move CurrentLocation to point to the end of the packet
Update SizeLeft to account for the now decreased buffer
If we cannot fit the packet at the end of the buffer, we cycle around to the beginning
If there is no room there either, we try again a bit later, going to another source meanwhile
If we had a partial receive, set the LatestPacket pointer to the start of the packet and go to another stream until we get the rest
Send a message using a POSIX message queue to a worker thread so it can process the data. The message will contain a pointer to the DataSource structure so the worker can work on it; it also needs a pointer to the packet it is working on, and its size, both of which can be calculated when we receive the packet (a sketch follows this list)
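One possible shape of that last step, assuming a queue name of "/job_queue" and a small Job struct (both invented for this sketch); passing raw pointers through the queue only works because the workers are threads of the same process.

/* Sketch only: "/job_queue" and struct Job are assumptions. */
#include <fcntl.h>
#include <mqueue.h>
#include <stddef.h>

struct Job {
    struct DataSource *src;     /* which source owns the buffer */
    char              *packet;  /* start of the packet inside DataBuffer */
    size_t             size;    /* packet length */
};

mqd_t open_job_queue(void)
{
    struct mq_attr attr = { .mq_maxmsg = 8, .mq_msgsize = sizeof(struct Job) };
    return mq_open("/job_queue", O_CREAT | O_RDWR, 0600, &attr);
}

/* IO thread: describe the finished packet and hand it to exactly one worker */
int notify_worker(mqd_t q, struct DataSource *src, char *packet, size_t size)
{
    struct Job job = { src, packet, size };
    return mq_send(q, (const char *)&job, sizeof job, 0);
}

Workers would then sit in mq_receive() on the same queue; on most Linux systems this needs linking with -lrt.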
The worker thread will do its processing using the received pointers and then increase SizeLeft so the receiver thread knows it can carry on filling the buffer. The atomic functions will be needed to work on the size value in the struct so we don't get race conditions on the size property (as it may be written by a worker and the IO thread simultaneously, causing lost writes, see my comment below); they are listed here and are simple and extremely useful.
Now, I have given some general background, but I will address the points given specifically:
Using eventfd as a synchronization mechanism is largely a bad idea: you will find yourself using a fair amount of unneeded CPU time, and it is very hard to perform any synchronization. In particular, if you have multiple threads pick up the same file descriptor you could have major problems. This is in effect a nasty hack that will work sometimes, but it is no real substitute for proper synchronization.
It is also a bad idea to try to offload the receive, as explained above. You can get around the issue with complex IPC, but frankly it is unlikely that receiving IO will take enough time to stall your application; your IO is also likely much slower than the CPU, so receiving with multiple threads will gain little (this assumes you do not, say, have several 10-gigabit network cards).
Using mutexes or locks is a silly idea here; it fits much better into lock-free coding given the low amount of (simultaneously) shared data: you are really just handing off work and data. This will also boost the performance of the receive thread and make your app far more scalable. Using the functions mentioned here http://gcc.gnu.org/onlinedocs/gcc-4.1.2/gcc/Atomic-Builtins.html you can do this nicely and easily. If you did do it this way, what you would need is a semaphore: it can be posted every time a packet is received and waited on by each thread which starts a job, allowing dynamically more threads in if more packets are ready. That has far less overhead than a homebrew solution with mutexes (see the sketch below).
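A tiny sketch of that semaphore-plus-atomic-builtins combination (the function names are invented; SizeLeft is the field from the DataSource struct above):

/* Sketch only: announce_packet()/release_packet() are assumed names. */
#include <semaphore.h>

sem_t packets_ready;   /* sem_init(&packets_ready, 0, 0) at startup */

/* IO thread, after a complete packet has been placed in src->DataBuffer */
void announce_packet(struct DataSource *src, int packet_size)
{
    __sync_sub_and_fetch(&src->SizeLeft, packet_size);  /* atomically reserve space */
    sem_post(&packets_ready);                           /* wake exactly one worker */
}

/* worker thread, after it has finished processing the packet */
void release_packet(struct DataSource *src, int packet_size)
{
    __sync_add_and_fetch(&src->SizeLeft, packet_size);  /* atomically give space back */
}

/* worker thread main loop: sem_wait(&packets_ready) before taking a job */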
There is not really much difference here from any thread pool: you spawn a lot of threads and then have them all block in mq_receive on the data message queue, waiting for messages. When they are done, they send their result back to the main thread, which adds the results message queue to its epoll list. It can then receive results this way; it is simple and very efficient for small data payloads like pointers. This will also use little CPU and not force the main thread to waste time managing workers.
Finally, your edit is fairly sensible, except that, as I have suggested, message queues are far better than pipes here, as they very efficiently signal events, guarantee a full message read, and provide automatic framing.
I hope this helps, however it is late so if I missed anything or you have questions feel free to comment for clarification or more explanation.
I posted the same answer in another post: I want to wait on both a file descriptor and a mutex, what's the recommended way to do this?
==========================================================
This is a very commonly seen problem, especially when you are developing network server-side programs. Most Linux server-side programs' main loop will look like this:
epoll_add(serv_sock);
while(1){
    ret = epoll_wait();
    foreach(ret as fd){
        req = fd.read();
        resp = proc(req);
        fd.send(resp);
    }
}
It is a single-threaded (the main thread), epoll-based server framework. The problem is that it is single-threaded, not multi-threaded. It requires that proc() should never block or run for a significant time (say, 10 ms for common cases).
If proc() will ever run for a long time, WE NEED MULTIPLE THREADS, and execute proc() in a separate thread (the worker thread).
We can submit a task to the worker thread without blocking the main thread, using a mutex-based message queue; it is fast enough.
Then we need a way to obtain the task result from a worker thread. How? If we just check the message queue directly, before or after epoll_wait(), the check can only run once epoll_wait() ends, and epoll_wait() usually blocks for 10 milliseconds (common cases) if none of the file descriptors it waits on are active.
For a server, 10 ms is quite a long time! Can we signal epoll_wait() to end immediately when task result is generated?
Yes! I will describe how it is done in one of my open source project.
Create a pipe shared with all worker threads, and make epoll wait on that pipe as well. Once a task result is generated, the worker thread writes one byte into the pipe, and epoll_wait() will end at nearly the same instant! - a Linux pipe has 5 us to 20 us latency.
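A minimal sketch of that pipe trick (names are placeholders): the read end lives in the epoll set, so a single byte written by a worker wakes epoll_wait() immediately.

/* Sketch only: wake_pipe, watch_results() etc. are assumed names. */
#include <sys/epoll.h>
#include <unistd.h>

int wake_pipe[2];   /* pipe(wake_pipe) at startup */

/* main thread setup: watch the read end alongside the server socket */
void watch_results(int epfd)
{
    struct epoll_event ev = { .events = EPOLLIN, .data.fd = wake_pipe[0] };
    epoll_ctl(epfd, EPOLL_CTL_ADD, wake_pipe[0], &ev);
}

/* worker thread: a result is ready, wake the main loop now */
void signal_result(void)
{
    char one = 1;
    write(wake_pipe[1], &one, 1);
}

/* main loop, when wake_pipe[0] is readable: drain it, then pop the results */
void on_wakeup(void)
{
    char buf[64];
    read(wake_pipe[0], buf, sizeof buf);
    /* worker->pop_result() / sock.send(resp) as in the pseudocode below */
}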
In my project SSDB (a Redis-protocol-compatible on-disk NoSQL database), I create a SelectableQueue for passing messages between the main thread and worker threads. Just like its name says, SelectableQueue has a file descriptor, which can be waited on by epoll.
SelectableQueue: https://github.com/ideawu/ssdb/blob/master/src/util/thread.h#L94
Usage in main thread:
epoll_add(serv_sock);
epoll_add(queue->fd());
while(1){
    ret = epoll_wait();
    foreach(ret as fd){
        if(fd is worker_thread){
            sock, resp = worker->pop_result();
            sock.send(resp);
        }
        if(fd is client_socket){
            req = fd.read();
            worker->add_task(fd, req);
        }
    }
}
Usage in worker thread:
fd, req = queue->pop_task();
resp = proc(req);
queue->add_result(fd, resp);

UDP: receive data from multiple sockets and process it efficiently - C & Linux

I have to receive data from 15 different clients, each of them sending on 5 different ports: 15 * 5 sockets in total.
For each client the port numbers are defined and fixed, e.g. client 1 uses ports 3001 to 3005, client 2 uses ports 3051 to 3055, etc. They have one thing in common: the first port (3001, 3051, ...) is used to send commands; the other ports send data.
After receiving the data I have to verify the checksum, keep track of received packets, re-request packets that were lost, and also write the data to files on the hard disk.
Restriction: I cannot change the above design, and I cannot change from UDP to TCP.
The two methods I'm aware of after reading are:
asynchronous multiplexing using select().
Thread per socket.
I tried the first one and I'm stuck at the point where I get the data. I'm able to receive data, but I have some processing to do, so I want to start a thread for each socket (or for groups of sockets, say all first ports, all second ports, etc., i.e. 3001, 3051, ...).
But here, if a client sends any data, FD_ISSET becomes true, so if I start a thread there, it becomes a thread for every message.
Question:
How do I add the thread code here? If I put pthread_create inside if (FD_ISSET ...), then I create a thread for every message I receive, but I wanted a thread per socket.
while(1)
{
    int nready = 0;
    read_set = active_set;

    if ((nready = select(fdmax+1, &read_set, NULL, NULL, NULL)) == -1)
    {
        printf("Select Error\n");
        perror("select");
        exit(EXIT_FAILURE);
    }

    printf("number of ready desc=%d\n", nready);

    for (index = 1; index <= 15*5; index++)
    {
        if (FD_ISSET(sock_fd[index], &read_set))
        {
            rc = recvfrom(sock_fd[index], recv_client_message, MSG_SIZE, 0,
                          (struct sockaddr *)&client_sockaddr_in,
                          &sockaddr_in_length);
            if (rc < 0)
                printf("socket %d down\n", sock_fd[index]);

            printf("Received packet from %s: %d\nData: %s\n\n",
                   inet_ntoa(client_sockaddr_in.sin_addr),
                   ntohs(client_sockaddr_in.sin_port),
                   recv_client_message);
        }
    } //for
} //while
Create the threads at program startup and divide them to handle data, commands, etc.
How?
1. Let's say you create 2 threads, one for data and another for commands.
2. Make them sleep in the thread handler, or let them wait on a lock that the main thread has acquired; say the main thread holds two locks, one for each of them.
3. When any client data or command arrives at the recvfrom in the main thread, then depending on the type of the buffer (data or command), copy the buffer into the data shared between the main thread and the other threads, and unlock the mutex.
4. In the threads, lock the mutex so the main thread won't corrupt the data; once processing is done in the threads, unlock it and sleep again.
The better option would be to have a queue that the main thread fills and the other threads can access element-wise.
I assume that each client context is independent of the others, i.e. one client's socket group can be managed on its own, and the data pulled from the sockets can be processed alone.
You express two possibilities of handling the problem:
Asynchronous multiplexing: in this setting, the sockets are all managed by one single thread. This thread selects which socket must be read next and pulls the data out of it.
Thread per socket: in this scenario, you have as many threads as there are sockets, or more probably groups of sockets, i.e. clients - this is the interpretation I will build on.
In both cases, threads must keep ownership of their respective resources, meaning sockets. If you start moving sockets around between threads, you will make things more difficult than they need to be.
Outside the work that needs to be done, you will need to handle thread management:
How do threads get started?
How and when are they stopped?
What are the error handling policies?
Your question doesn't cover these issues, but they might play a significant role in your final design.
Scenario (2) seems simpler: you have one main "template" (I use the word in a general sense here) for handling a group of sockets using select on them, and in the same thread you receive and process the data. It's quite straightforward to implement, with a struct to contain the context-specific data (socket ports, pointer to the packet-processing function), and a single function looping on select and process, plus perhaps some other checks for errors and thread life management; see the sketch below.
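A rough sketch of such a per-client thread (the struct fields and process_packet() are assumptions, and error handling is omitted):

/* Sketch only: one thread per client, owning that client's five sockets. */
#include <pthread.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <sys/types.h>

struct client_ctx {
    int  socks[5];                              /* this client's five sockets */
    void (*process_packet)(int sock, void *buf, ssize_t len);
};

void *client_thread(void *arg)
{
    struct client_ctx *ctx = arg;
    char buf[2048];

    for (;;) {
        fd_set rfds;
        int i, maxfd = -1;

        FD_ZERO(&rfds);
        for (i = 0; i < 5; i++) {
            FD_SET(ctx->socks[i], &rfds);
            if (ctx->socks[i] > maxfd)
                maxfd = ctx->socks[i];
        }
        if (select(maxfd + 1, &rfds, NULL, NULL, NULL) < 0)
            break;                              /* add error/shutdown handling here */

        for (i = 0; i < 5; i++)
            if (FD_ISSET(ctx->socks[i], &rfds)) {
                ssize_t n = recvfrom(ctx->socks[i], buf, sizeof buf, 0, NULL, NULL);
                if (n > 0)
                    ctx->process_packet(ctx->socks[i], buf, n);
            }
    }
    return NULL;
}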
Scenario (1) requires a different setup: one I/O thread reads all the packets and passes them on to specialized worker threads for processing. If a processing error occurs, the worker threads have to generate the ad hoc packet to be sent to the client and pass it to the I/O thread for sending. You will need packet queues both ways to allow communication between I/O and workers, and have the I/O thread check the worker queues somehow for resend requests. So this solution is a bit more expensive in terms of development, but it reduces the I/O contention to one single point. It's also more flexible, in case some processing must be done on data coming from several clients, or if you want to chain up processing somehow. For instance, you could instead have one thread per client socket, and then another thread per client group of sockets further down the work pipeline, with each step of the pipeline interconnected by a packet queue.
A blend of both solutions can of course be implemented, with one I/O thread per client, and pipelined worker threads.
The advantage of both outlined solutions is the fixed number of threads: no need to spawn and destroy threads on demand (although you could design a thread pool to handle that as well).
For a solution involving moving sockets between threads, the questions are:
When should these resources be passed on? What happens after a worker thread has read a packet? Should it return the socket to the I/O thread, or risk a blocking read on the socket for the next packet? If it does a select to poll the socket for more packets, we fall back into scenario (2), where each client will have its own I/O thread whenever there is network traffic from all of them, in which case what is the gain of the initial I/O thread doing the select?
If it passes the socket back, should the I/O thread wait for all workers to give back their sockets before initiating another select? If it waits, it risks making unserved clients wait for packets already sitting in the network buffers, inducing processing lag. If it does not wait, and returns to select to avoid lag on unserved sockets, then the served ones will have to wait for the next wake-up to see their sockets back in the select pool.
As you can see, the problem is difficult to handle. That's the reason why I recommend exclusive socket ownership by threads, as described in scenarios (1) and (2).
Your solution requires a fixed, relatively small, number of connections.
Create a helper procedure that creates thread procedures, each of which listens to one of the five ports, blocks on recvfrom(), processes the data, and blocks again. You can then call the helper 15 times to create the threads.
This avoids all polling, and allows Linux to schedule each thread when the I/O completes. No CPU used while waiting, and this can scale to somewhat larger solutions.
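A minimal sketch of that helper (names are invented; error handling omitted): one thread per socket, blocking in recvfrom(), with no polling anywhere.

/* Sketch only: port_thread() and start_client_threads() are assumed names. */
#include <pthread.h>
#include <sys/socket.h>
#include <sys/types.h>

void *port_thread(void *arg)
{
    int sock = *(int *)arg;
    char buf[2048];

    for (;;) {
        ssize_t n = recvfrom(sock, buf, sizeof buf, 0, NULL, NULL);
        if (n <= 0)
            break;
        /* checksum, packet bookkeeping, re-request, write to disk ... */
    }
    return NULL;
}

/* call once per client with its 5 bound sockets; the array must outlive the threads */
void start_client_threads(int *socks)
{
    int i;
    for (i = 0; i < 5; i++) {
        pthread_t tid;
        pthread_create(&tid, NULL, port_thread, &socks[i]);
    }
}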
If you need to scale massively, why not use a single set of ports and get the partner address from the client_sockaddr_in structure? If the processing takes a material amount of time, you could extend this by keeping a pool of threads available, assigning one each time a message is received, continuing to process the message on it, and adding the thread back to the pool after the response is sent.

Non-blocking FIFO: detect if a reader exists?

I have created a FIFO that I can do non-blocking writes into, in this way:
// others, searching for a non-blocking FIFO-writer may copy this ;-)
mkfifo("/tmp/myfifo", S_IRWXU);
int fifo_fd = open("/tmp/myfifo", O_RDWR);
fcntl(fifo_fd, F_SETFL, fcntl(fifo_fd, F_GETFL) | O_NONBLOCK);
// and then in a loop:
LOGI("Writing into fifo.");
if (write(fifo_fd, data, count) < 0) {
    LOGE("Failed to write into fifo: %s", strerror(errno));
}
The non-blocking write works perfectly.
On the other side, I open the FIFO for read and do the same fcntl() to make the read() non-blocking.
I now would like to do several (CPU-intensive) calculations on the write side, but ONLY if there is a reader attached.
Therefore I need a way, on the write side, to detect whether the FIFO is opened for reading somewhere else.
Has anyone an idea how to achieve this?
I now would like to make several (cpu-intensive) calculations on the write side, but ONLY if there is a reader attached.
For that you can simply create a socket, and when a consumer connects to it, do some work and write back.
But I think a better solution is to have calculation results ready for consumers before they connect (or open the FIFO). However, you don't want the producer working if the work is not being consumed. So define N, the number of work results you're willing to keep available for consumption, and let the producer (or producers) work and save results in a queue of size N until it's full.
You could implement this with threads: one thread listens for connections, pops from the queue and writes to the consumer, and one or more producer threads work and push to the queue.
Or you could use POSIX message queues to avoid threading headaches. Create a queue of size N; independent producers (multiple processes, possibly written in different languages) can push to the queue until it's full, and multiple independent consumers can pop from it.
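A sketch of that bounded queue with a POSIX message queue (the queue name, N and RESULT_SIZE are assumptions): mq_send() blocks once N results are waiting, so the producer naturally stops working when nothing is being consumed.

/* Sketch only: "/results", N and RESULT_SIZE are assumed names/values. */
#include <fcntl.h>
#include <mqueue.h>
#include <sys/types.h>

#define N            8
#define RESULT_SIZE  512

mqd_t open_results(void)
{
    struct mq_attr attr = { .mq_maxmsg = N, .mq_msgsize = RESULT_SIZE };
    return mq_open("/results", O_CREAT | O_RDWR, 0600, &attr);
}

/* producer: compute, then publish; blocks once N results sit unconsumed */
void publish(mqd_t q, const char *result, size_t len)
{
    mq_send(q, result, len, 0);
}

/* consumer (any process): take one result */
ssize_t take(mqd_t q, char *buf)
{
    return mq_receive(q, buf, RESULT_SIZE, NULL);
}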

Writing logs to a file in multithreaded application

I've written a server-client application. Now I have to write what happens on the server to a log file. The server is written in C. I can already write what happens to the screen using printf.
So I'll just have to use fprintf instead of printf. My question is how I should handle the file.
I have a Server.c source file containing the main function.
Here is the basic structure of my Server application:
Server.c
//.. some code
int main(...) {
    //some code
    //initialize variables
    //bind server
    //listen server on port
    while(1)
    {
        //accept client
        int check = pthread_create(&thread, NULL, handle_client, &ctx); //create new thread
        //..
    } //end while
    return EXIT_SUCCESS;
} //end main
handle_client is a function which handles clients in a new thread.
How should I do the server logging? I will have one text file, for example SERVERLOG.log, but there are many clients on the server. How should I handle multiple accesses to this file?
One way is to create the file when I start the server, open it, write to it, and close it.
If a client wants to write to the file, it should open the file, write to it, and then close it.
But there is still a problem when several clients want to write to this file at the same time...
A common solution is to have a printf-like function that writes all output first to a buffer, then locks a semaphore, does the actual writing to the file, and unlocks the semaphore. If you are worried about the actual writing being slow, you can instead have a queue into which all log messages get inserted, and let another thread take items from the queue and write them to the file; you still have to protect the queue with e.g. a semaphore, but it should be quicker than doing the I/O there.
As for the actual file: either open it in the main thread and leave it open, or, if you have a special logging thread with a queue, let that thread do the opening. Either way, you don't need to keep opening/closing it every time you want to write something; the important part is to protect it from being written to by multiple threads simultaneously.
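A minimal sketch of such a printf-like logger, using a pthread mutex in place of the semaphore mentioned above (log_fp and server_log() are assumed names):

/* Sketch only: format into a private buffer, lock only around the write. */
#include <pthread.h>
#include <stdarg.h>
#include <stdio.h>

static FILE *log_fp;   /* opened once at server start */
static pthread_mutex_t log_lock = PTHREAD_MUTEX_INITIALIZER;

void server_log(const char *fmt, ...)
{
    char buf[1024];
    va_list ap;

    va_start(ap, fmt);
    vsnprintf(buf, sizeof buf, fmt, ap);   /* format outside the lock */
    va_end(ap);

    pthread_mutex_lock(&log_lock);
    fputs(buf, log_fp);
    fflush(log_fp);
    pthread_mutex_unlock(&log_lock);
}

Each handle_client thread then just calls server_log(...) and never touches the FILE pointer directly.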
Just leave it open. Open the log file at server start.
A simple way to avoid badly interleaved output is to use a separate logging process, connected by a pipe (or a named pipe). The logger just sits blocked on the read() from the pipe and writes whatever it gets to the file (the logger's stdin and stdout can actually point to the pipe and the file). The clients just write to the pipe (which can have been dup()'d over stderr). Writes to a pipe (up to PIPE_BUF) are guaranteed to be atomic.
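A sketch of that separate-logger idea (function name and setup details are assumptions): fork a child that copies the pipe to SERVERLOG.log, then dup2() the write end over the server's stderr so each fprintf(stderr, ...) under PIPE_BUF arrives intact.

/* Sketch only: fork a logger child, point the server's stderr at the pipe. */
#include <fcntl.h>
#include <unistd.h>

void start_logger(void)
{
    int p[2];
    pipe(p);

    if (fork() == 0) {                  /* logger process */
        int logfd = open("SERVERLOG.log", O_WRONLY | O_CREAT | O_APPEND, 0644);
        char buf[4096];
        ssize_t n;
        close(p[1]);
        while ((n = read(p[0], buf, sizeof buf)) > 0)
            write(logfd, buf, n);
        _exit(0);
    }

    close(p[0]);
    dup2(p[1], STDERR_FILENO);          /* server threads just fprintf(stderr, ...) */
    close(p[1]);
}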
