Writing logs to a file in a multithreaded application - C

I've written a server-client application. Now I have to write what happens on the server to a log file. The server is written in C, and I can already print what happens to the screen using printf, so I'll just have to use fprintf instead. My question is how I should handle the file.
I have a Server.c source file that contains the main function.
Here is the basic structure of my Server application:
Server.c
// .. some code
int main(...) {
    // some code
    // initialize variables
    // bind server
    // listen on port
    while (1)
    {
        // accept client
        int check = pthread_create(&thread, NULL, handle_client, &ctx); // create new thread
        // ..
    } // end while
    return EXIT_SUCCESS;
} // end main
handle_client is a function which handles clients in a new thread.
How should I make the server log? I will have one log file, for example SERVERLOG.log, but there are many clients on the server. How should I handle multiple access to this file?
One way is to create the file when I start the server, and then open it, write to it, and close it each time.
If a client wants to write to the file, it opens the file, writes, and closes it again.
But there is still a problem when several clients want to write to this file at the same time...

A common solution is to have a printf-like function that first formats the output into a buffer, then locks a mutex or semaphore, does the actual write to the file, and unlocks it again. If you are worried about the actual writing being slow, you can instead have a queue into which all log messages get inserted, and let another thread take items from the queue and write them to the file. You still have to protect the queue with e.g. a semaphore, but it should be quicker than doing the I/O under the lock.
As for the actual file: either open it in the main thread and leave it open, or, if you have a dedicated logging thread with a queue, let that thread do the opening. Either way, you don't need to keep opening and closing it every time you want to write something; the important part is to protect it from being written to by multiple threads simultaneously.
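A minimal sketch of the mutex-protected approach in C. The names g_log, log_open, and log_msg are illustrative, not from the question:

```c
#include <pthread.h>
#include <stdarg.h>
#include <stdio.h>

static FILE *g_log = NULL;
static pthread_mutex_t g_log_lock = PTHREAD_MUTEX_INITIALIZER;

/* Open the log once, at server start-up. Returns 0 on success. */
int log_open(const char *path)
{
    g_log = fopen(path, "a");
    return g_log ? 0 : -1;
}

/* printf-like function; the mutex makes each message atomic,
   so lines from different threads never interleave. */
void log_msg(const char *fmt, ...)
{
    va_list ap;
    va_start(ap, fmt);
    pthread_mutex_lock(&g_log_lock);
    vfprintf(g_log, fmt, ap);
    fputc('\n', g_log);
    fflush(g_log);              /* so a crash loses as little as possible */
    pthread_mutex_unlock(&g_log_lock);
    va_end(ap);
}

/* Close once, at server shutdown. */
void log_close(void)
{
    if (g_log)
        fclose(g_log);
}
```

Every thread (including each handle_client thread) would simply call log_msg(); only log_open() and log_close() run in main().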

Just leave it open. Open the log file at server start.

A simple way to avoid badly interleaved output is to use a separate logging process, connected by a pipe (or a named pipe). The logger just sits blocked in read() on the pipe and writes whatever it gets to the file (the logger's stdin and stdout could actually point to the pipe and the file). The clients just write to the pipe (which can have been dup()'d over stderr). Writes to a pipe of up to PIPE_BUF bytes are guaranteed to be atomic.

Related

Should I close file while writing/reading concurrently?

I'm making a server that reads from and writes to a file concurrently (from goroutines spawned by net/http handlers). I've created the following struct:
type transactionsFile struct {
    File  *os.File
    Mutex *sync.RWMutex
}
I'm initializing the file once in the init() function. Should I close it somehow after each write operation?
You can't write to a closed file, so if you close it after each write operation, you also have to (re)open it before each write.
This would be quite inefficient. So instead leave it open, and only close it once your app is about to terminate (this is required because File.Write() does not guarantee that when it returns the data is written to disk). Since you're writing the file from HTTP handlers, you should implement graceful server termination, and close the file after that. See Server.Shutdown() for details.
Also, if the purpose of your shared file writing is to create some kind of logger, you could take advantage of the log package, so you would not have to use a mutex. For details, see net/http set custom logger.

How should I handle file descriptor 'dependencies' when using epoll?

I'm writing an HTTP/2 server in C, using epoll. Let's say a client asks for /index.html - I need to open a file descriptor pointing to that file and then send it back to the socket whenever I read a chunk of it. So I'd have an event loop that looks something like this:
while (true)
    events = epoll_wait()
    for event in events
        if event is on a socket
            handle socket i/o
        else if event is on a disk file
            read as much as possible, and send to associated socket
However, this poses a problem. If the socket closes (for whatever reason), the file descriptor for index.html will also get closed. But it's possible that the index.html FD has already been queued for reading (i.e., it's already in events, because the socket was closed between calls to epoll_wait), and so when the for loop gets to processing that FD I'll be accessing a 'dangling' FD.
If this were a single-threaded program I'd try to hack around the issue by tracking file descriptor numbers, but unfortunately I'm running the same epoll loop on multiple threads, which means I can't predict what FD numbers will be in use at a given moment. It's entirely possible that by the time the invalid read on the file comes around, another thread will have claimed that FD, so the call to read won't explicitly fail, but I'll probably get a use-after-free anyway by trying to send the data on a socket that doesn't exist anymore.
What's the best way of dealing with this issue? Maybe I should take an entirely different approach and not have file I/O on the same epoll loop at all.

Unix named pipe, multiple writers or multiple pipes

I am currently measuring performance of named pipe to compare with another library.
I need to simulate an n-clients / 1-server situation, where the server reads messages and performs a simple action for every message written. So the clients are the writers.
My code works now, but if I add a second writer, the reader (server) never sees the data and never receives anything. The file is still filled with the unread data at the end, and the read method returns 0.
Is it OK for a single named pipe to be written to by multiple processes? Do I need to initialize it with a special flag for multi-process use?
I am not sure I can or should use multiple writers on a single pipe. But I am also not sure it would be a good design to create one pipe for each client.
Would it be a more standard design to use one named pipe per client connection?
I know about Unix domain sockets and will use them later; for now I need to make the named pipes work.

C - Block others from writing to a file

I have multiple process writing to a FIFO file. While one of them is writing, I want to prevent other processes from writing to the file.
I am using read and write calls to access the FIFO file.
Example: there are two C programs, Server.c and Client.c. Multiple clients write to the FIFO file created by the server program. The server should be able to read and write the file at any time, whereas only one client can access the file at a time.
Try using a locking mechanism: a concurrent-read, exclusive-write (CREW) implementation.
void ReaderThread::run()
{
    ...
    semaphore.acquire(1);           // take one of MaxReaders read slots
    read_file();
    semaphore.release(1);
    ...
}

void WriterThread::run()
{
    ...
    semaphore.acquire(MaxReaders);  // take every slot: no reader can enter
    write_file();
    semaphore.release(MaxReaders);
    ...
}
With this solution, up to MaxReaders threads can read from the file at the same time. As soon as a writer thread wants to modify the file, it tries to allocate all the semaphore's resources, thus making sure that no other thread can access the file during the write operation.

Multi-threaded reads from one pipe

I'm implementing a system that runs game servers. I have a process (the "game controller") that creates two pipe pairs and forks a child. The child dups its STDIN to one pipe, and dups its STDOUT and STDERR to the other, then runs execlp() to run the game code.
The game controller will have two threads. The first will block in accept() on a named UNIX socket receiving input from another application, and the second thread is blocking read()ing the out and error pipe from the game server.
Occasionally, the first thread will receive a command to send a string to the stdin pipe of the game server. At this point, somehow I need to stop the second thread from read()ing so that the first thread can read the reply from the out and error pipe.
(It is worth noting that I will know how many characters/lines long the reply is, so I will know when to stop reading and let the second thread resume reading, resetting the process.)
How can I temporarily switch the read control to another thread, as above?
There are a couple of options. One would be to have the second thread handle all of the reading, and give the first thread a way to signal it when it wants the input passed back. But this will be somewhat complicated: you will need to set up a method of signalling between the threads, and make sure that the first thread tells the second thread it wants the reply before the second thread reads and handles it itself. There will be potential for various race conditions that could make your code unpredictable.
Another option is to avoid using threads at all. Just use select(2) (or poll(2)) to allow one thread to wait for activity on several file descriptors at once. select allows you to indicate the set of file descriptors that you are interested in. As soon as any activity happens on one of them (a connection is available to accept, data is available to read), select will return, indicating the set of file descriptors that are ready. You can then accept or read on the appropriate descriptors, and when you are done, loop and call select again to wait for the next I/O event.
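A sketch of one select() pass under those assumptions. The helper name wait_ready is illustrative, and the fds array stands in for the UNIX socket plus the game server's out and error pipes:

```c
#include <sys/select.h>
#include <unistd.h>

/* One select() pass over a small set of descriptors: blocks until at least
   one is readable, then returns the first ready fd, or -1 on error. */
int wait_ready(const int *fds, int nfds)
{
    fd_set rfds;
    FD_ZERO(&rfds);
    int maxfd = -1;
    for (int i = 0; i < nfds; i++) {
        FD_SET(fds[i], &rfds);      /* watch each descriptor for reading */
        if (fds[i] > maxfd)
            maxfd = fds[i];
    }
    if (select(maxfd + 1, &rfds, NULL, NULL, NULL) == -1)
        return -1;                  /* real code would retry on EINTR */
    for (int i = 0; i < nfds; i++)
        if (FD_ISSET(fds[i], &rfds))
            return fds[i];
    return -1;
}
```

The game controller's single loop would call this repeatedly and dispatch on the returned descriptor: accept() when the socket is ready, read() when one of the pipes is ready. Because one thread owns all the descriptors, "switching read control" becomes a plain if/else rather than cross-thread signalling.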
