How does Nginx prevent race conditions? - c

When the thread pool mechanism is enabled in Nginx, some of the AIO tasks are offloaded to a thread pool, which notifies the main thread when they are done.
But what if a request times out while it is being processed by a pool thread? ngx_event_expire_timers invokes ev->handler(ev) when an event times out. How does Nginx prevent such a race condition? Please help me out.
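For context, here is a minimal sketch of the serialization pattern in question: pool threads never invoke event handlers themselves; they post completions back to the event-loop thread, so a completion handler and a timeout handler can never run concurrently. The names below are illustrative, not Nginx's actual symbols.

#include <pthread.h>

void wake_event_loop(void);  /* assumed: wakes the poll loop, e.g. via a pipe/eventfd it watches */

typedef struct completion completion_t;
struct completion {
    void (*handler)(void *ctx);
    void *ctx;
    completion_t *next;
};

static pthread_mutex_t q_lock = PTHREAD_MUTEX_INITIALIZER;
static completion_t *q_head;

/* called from a pool thread when its task finishes */
void post_completion(completion_t *c)
{
    pthread_mutex_lock(&q_lock);
    c->next = q_head;
    q_head = c;
    pthread_mutex_unlock(&q_lock);
    wake_event_loop();
}

/* called only from the main event-loop thread, the same thread that
   expires timers, so completion and timeout handlers are serialized */
void drain_completions(void)
{
    pthread_mutex_lock(&q_lock);
    completion_t *c = q_head;
    q_head = NULL;
    pthread_mutex_unlock(&q_lock);

    while (c) {
        completion_t *next = c->next;
        c->handler(c->ctx);
        c = next;
    }
}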

Related

How to handle shared memory in a multi threaded environment?

I have a client-server model. A multithreaded client sends a message to the server over TCP sockets. The server is also multithreaded, with each request handled by a thread from the worker pool.
Now, the server must send back the message to the client via shared-memory IPC. For example:
multithreaded client --- GET /a.png --> server
                                           |
                                      one worker
                                           |
                                           v
                     puts the file descriptor into the shared memory
When a worker thread adds the information to the shared memory, how do I make sure that it is read by the same client that requested it?
I feel clueless here as to how to proceed. Currently, I have created one segment of shared memory and there are 20 threads on the server and 10 threads on the client.
While you can use IPC between threads, it's generally not a good idea. Threads share all memory anyway, since they are part of the same process, and there are very efficient mechanisms for communication between threads.
It might just be easier to have the same thread handle a request all the way through. That way, you don't have to hand off a request from thread to thread. However, if you have a pool of requests that are being worked on, it often makes sense to have a thread be able to "put down" a request and then later be able to have that thread or a different thread "pick up" the request.
The easiest way to do this is to make all the information related to the request live in a single structure or object. Use standard thread synchronization tools (like mutexes) to control finding the object, taking ownership of it, and so on.
So when an I/O thread receives a request, it creates a new request object, acquires a mutex, and adds it to the global collection of requests the server is working on. Worker threads can check this global collection to see which requests need work or they can be explicitly dispatched by the thread that created the request.
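As a rough illustration of that pattern (a sketch only; the names and fields are invented for the example), the request object and its mutex-protected global collection might look like this with pthreads:

#include <pthread.h>
#include <stdlib.h>

typedef enum { REQ_NEW, REQ_IN_PROGRESS, REQ_DONE } req_state_t;

typedef struct request {
    int client_fd;          /* everything about the request lives here */
    req_state_t state;
    struct request *next;
} request_t;

static pthread_mutex_t reqs_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  reqs_cond = PTHREAD_COND_INITIALIZER;
static request_t *reqs_head;

/* I/O thread: create a request object and publish it */
void submit_request(int client_fd)
{
    request_t *r = calloc(1, sizeof *r);
    r->client_fd = client_fd;
    r->state = REQ_NEW;

    pthread_mutex_lock(&reqs_lock);
    r->next = reqs_head;
    reqs_head = r;
    pthread_cond_signal(&reqs_cond);
    pthread_mutex_unlock(&reqs_lock);
}

/* worker thread: find a request that needs work and take ownership */
request_t *claim_request(void)
{
    request_t *r;
    pthread_mutex_lock(&reqs_lock);
    for (;;) {
        for (r = reqs_head; r; r = r->next)
            if (r->state == REQ_NEW)
                break;
        if (r)
            break;
        pthread_cond_wait(&reqs_cond, &reqs_lock);
    }
    r->state = REQ_IN_PROGRESS;   /* ownership transferred to this worker */
    pthread_mutex_unlock(&reqs_lock);
    return r;
}

A worker can later "put down" the request by setting its state back under the lock, and any thread can "pick it up" again the same way.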

Windows ThreadPool API - wait until all tasks finished

I'm trying to use the Windows Thread Pool API as part of my C program. In my scenario, I want to send many tasks to the thread pool and wait until all tasks are finished, with no more pending tasks. After all tasks have finished I want to close the thread pool and continue execution of my program.
I believe the way to achieve this is with the wait objects described in the link above, but I can't really figure out the right way to do it. Does anyone know whether it's possible, and what the right way to do it is?
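One approach that may fit, sketched below under the assumption that all tasks can share a single work object: submit the same PTP_WORK once per task, then WaitForThreadpoolWorkCallbacks blocks until every outstanding callback has completed.

#include <windows.h>

static VOID CALLBACK work_cb(PTP_CALLBACK_INSTANCE inst, PVOID ctx, PTP_WORK work)
{
    (void)inst; (void)ctx; (void)work;
    /* one task's worth of processing goes here */
}

int main(void)
{
    PTP_WORK work = CreateThreadpoolWork(work_cb, NULL, NULL);
    if (!work)
        return 1;

    for (int i = 0; i < 100; i++)
        SubmitThreadpoolWork(work);       /* queue 100 executions of work_cb */

    /* blocks until every queued callback has completed;
       FALSE = do not cancel pending callbacks, let them all run */
    WaitForThreadpoolWorkCallbacks(work, FALSE);
    CloseThreadpoolWork(work);

    /* ...continue with the rest of the program... */
    return 0;
}

If each task needs its own context, you would instead create one work object per task (or pass a per-task structure some other way) and wait on each before closing.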

mutex lock in multi-tasking system

I know a lot has been written about mutex implementations, but I couldn't find a solution to my problem.
I am working on a single-core multitasking system running under an RTOS.
Task_1() // lower priority preemptive task
Task_2() // higher priority task
I am trying to implement a mutex to lock the UART communication port, but I am facing the following problem:
Task_1 locks the mutex and starts sending a message over the UART port.
In the meantime it is preempted by the higher-priority Task_2, which also attempts to send data over the UART.
Task_2, however, cannot lock the mutex: the lower-priority task holding it has been preempted and, because Task_2 has higher priority, never gets to run and unlock it. This blocks both tasks (the classic priority-inversion problem).
Is there any good solution to avoid or resolve this situation?
The goal is also not to corrupt the UART data sent by Task_1.
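The usual remedy is a mutex with priority inheritance: while Task_2 waits, Task_1 temporarily runs at Task_2's priority, finishes the transfer, and releases the lock. A minimal sketch, assuming FreeRTOS (whose mutexes implement priority inheritance) purely as an example RTOS:

#include "FreeRTOS.h"
#include "semphr.h"

static SemaphoreHandle_t uart_mutex;

void uart_send(const char *msg, size_t len)
{
    /* blocks until the port is free; the current holder inherits our priority */
    xSemaphoreTake(uart_mutex, portMAX_DELAY);
    /* ... write len bytes of msg to the UART here ... */
    (void)msg; (void)len;
    xSemaphoreGive(uart_mutex);
}

void Task_1(void *arg)   /* lower-priority task */
{
    (void)arg;
    for (;;)
        uart_send("low-priority message\n", 21);
}

void Task_2(void *arg)   /* higher-priority task */
{
    (void)arg;
    for (;;)
        uart_send("high-priority message\n", 22);
}

void uart_mutex_init(void)
{
    uart_mutex = xSemaphoreCreateMutex();  /* priority-inheritance mutex */
}

Because each UART message is sent entirely under the lock, Task_1's data is never interleaved with Task_2's, so it is not corrupted.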

What does an asynchronous server mean?

I am reading a journal, it stated
Lighttpd is an asynchronous server, and Apache2 is a process-based server.
What does this actually mean?
Which server would you recommend for a RasPi for monitoring purposes?
Thanks.
See this website for a detailed explanation.
In the traditional thread-based (synchronous) model, each client gets one thread that is completely separate and dedicated to serving that client. This can cause I/O blocking problems, because a thread waiting for an operation to complete keeps holding its resources (memory, CPU). Creating separate processes also consumes more resources.
Asynchronous servers do not create a new process or thread for each request. Instead, a worker process accepts requests and handles thousands of them through highly efficient event loops. Asynchronous means the connections can be serviced concurrently without blocking each other; resources are shared rather than dedicated to, and blocked by, a single client.
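To make "event loop" concrete, here is a minimal single-threaded sketch in C using poll(2). Production servers such as Lighttpd use epoll/kqueue and fully non-blocking I/O, but the shape is the same: one thread, many sockets, no per-client thread.

#include <poll.h>
#include <sys/socket.h>
#include <unistd.h>

#define MAX_FDS 1024

void event_loop(int listen_fd)
{
    struct pollfd fds[MAX_FDS];
    int nfds = 1;
    fds[0].fd = listen_fd;
    fds[0].events = POLLIN;

    for (;;) {
        poll(fds, nfds, -1);              /* sleep until any socket is ready */
        for (int i = 0; i < nfds; i++) {
            if (!(fds[i].revents & POLLIN))
                continue;
            if (fds[i].fd == listen_fd) { /* new client connection */
                int c = accept(listen_fd, NULL, NULL);
                if (c >= 0 && nfds < MAX_FDS) {
                    fds[nfds].fd = c;
                    fds[nfds].events = POLLIN;
                    nfds++;
                }
            } else {
                char buf[4096];
                ssize_t n = read(fds[i].fd, buf, sizeof buf);
                if (n <= 0) {             /* client closed: reuse the slot */
                    close(fds[i].fd);
                    fds[i] = fds[--nfds];
                    i--;
                }
                /* else: handle the request without blocking the loop */
            }
        }
    }
}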

C - simultaneous receiving and handling data from unix sockets

I have a C program which communicates with PHP through Unix sockets. The procedure is as follows: PHP accepts a file upload from the user, then sends a "signal" to C, which dispatches another process (fork) to unzip the file (I know PHP could handle this alone; this is just an example, and the whole problem is more complex).
The problem is that I don't want more than, say, 4 processes running at the same time. I think this could be solved like this: when C gets a new "task" from PHP, it puts the task on a queue and handles the tasks one by one (ensuring no more than 4 are running) while still listening on the socket.
I'm unsure how to achieve this, though, as I can't do that in the same process (or can I?). I thought I could have another child process manage the queue, accessible to the parent through shared memory, but that seems overly complicated. Is there any other way around this?
Thanks in advance.
If you need a separate process for each task handler, then you might consider having five separate processes. The first is the listener: it accepts new incoming tasks and places them into a task queue. Each task handler sends a request for work when it starts and again whenever it finishes processing a task. When the listener receives such a request, it delivers the next task in the queue to that handler, or, if the task queue is empty, places the handler on a handler queue. When the task queue goes from empty to non-empty, the listener checks whether a ready handler is waiting in the handler queue; if so, it takes that handler off the queue and delivers the task to it.
The PHP process would put tasks to the listener, while the task handlers would get tasks from the listener. The listener simply waits for put or get requests, and processes them. You can think of the listener as a simple web server, but each of the socket connections to the PHP process and to each task handler can be persistent.
Since the number of sockets is small and they are persistent, any of the multiplexing calls could work (select, poll, epoll, kqueue, or whatever is best and/or available on your system), but it may be easiest to use a separate thread to handle each socket synchronously. The ready-task-handler queue would then be a semaphore or a condition variable on the task queue. The thread that handles puts from the PHP process would place tasks on the task queue and up the semaphore. Each thread that handles ready tasks would down the semaphore, then take a task off the task queue. The task queue itself may need mutual-exclusion protection, depending on how it is implemented.
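A sketch of that queue (names invented for the example): a POSIX semaphore counts ready tasks and a mutex guards the list, so the PHP-facing thread calls task_put() and each handler thread loops on task_get().

#include <pthread.h>
#include <semaphore.h>

typedef struct task {
    int fd;                  /* e.g. the socket carrying the unzip request */
    struct task *next;
} task_t;

static pthread_mutex_t q_lock = PTHREAD_MUTEX_INITIALIZER;
static sem_t q_ready;        /* counts tasks waiting in the queue */
static task_t *q_head, *q_tail;

void queue_init(void)
{
    sem_init(&q_ready, 0, 0);
}

void task_put(task_t *t)     /* listener thread: "put" a task */
{
    t->next = NULL;
    pthread_mutex_lock(&q_lock);
    if (q_tail)
        q_tail->next = t;
    else
        q_head = t;
    q_tail = t;
    pthread_mutex_unlock(&q_lock);
    sem_post(&q_ready);      /* "up": one more task available */
}

task_t *task_get(void)       /* handler thread: "get" a task */
{
    sem_wait(&q_ready);      /* "down": sleep until a task exists */
    pthread_mutex_lock(&q_lock);
    task_t *t = q_head;
    q_head = t->next;
    if (!q_head)
        q_tail = NULL;
    pthread_mutex_unlock(&q_lock);
    return t;
}

With exactly 4 handler threads (or handler processes fed by such threads), at most 4 tasks are ever in flight, which gives you the concurrency cap for free.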
