Server architecture puzzle, C programming

In my program, I use a single thread pool to dispatch all my tasks: timer tasks, non-blocking socket I/O, and so on. A task is really a callback function that is executed when a specific event is received.
The architecture is:
The main thread calls epoll() to harvest I/O events, then dispatches the I/O callbacks to the thread pool.
The main thread also handles timer timeouts, and dispatches the timeout callbacks to the thread pool.
In an I/O callback, a timer task may be cancelled, depending on the result of the I/O processing.
While an I/O callback is running, the corresponding socket is not monitored for further events of the same kind.
While a timer callback is running, that timer task is temporarily removed from the timer task queue.
Here is the problem:
Thread A in the pool is running a timer callback T.
Thread B in the pool may be running another callback (registered for a socket I/O read event). After processing the request it received, thread B decides to delete timer task T, but T is being executed by thread A right now.
I could add a lock for the timer task, but where should I place it? I can't put the lock object in the timer task structure, because by the time I decide to free the task object I must already hold the lock, and freeing the object while holding the lock inside it may lead to undefined behaviour:
pthread_mutex_lock(T->mutex);
free(T);
/* without a pthread_mutex_unlock(T->mutex); */
And what happens if another thread is blocked on:
pthread_mutex_lock(T->mutex);
Until these problems are addressed, I can't continue my work. Please help me out!
Should I use a separate thread pool for each type of task in my single process? Or just use a single thread?
Any suggestion is appreciated!

You can use a global table of timers protected by its own mutex. The table does not actually need to be global but can belong to some collection, such as whatever owns all the things you are doing I/O on.
Then use this logic:
To create a timer:
Lock the global table.
Add the timer to the global table in state "pending".
Unlock the global table.
Schedule the timer with the thread pool.
To fire a timer:
Lock the global table.
Check the timer's state. If it's not "pending", delete the timer, unlock the table, and stop.
Change the timer's state to "firing".
Unlock the global table.
Perform the timer operation.
Lock the global table.
Remove the timer from the table.
Unlock the global table.
To cancel a timer:
Lock the global table.
Find the timer. If its state is "pending", change it to "cancelled". Do not delete it.
Unlock the global table.
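The state machine above can be sketched in C roughly like this. This is a hypothetical sketch, not production code: the names (timer_table, timer_fire, and so on) are invented for illustration, and the "table" is a plain linked list. The key property is that only the table's mutex is ever locked, never a lock inside the task, so freeing the task is always safe:

```c
#include <pthread.h>
#include <stdlib.h>

typedef enum { TIMER_PENDING, TIMER_FIRING, TIMER_CANCELLED } timer_state;

typedef struct timer_task {
    timer_state state;
    void (*callback)(void *arg);
    void *arg;
    struct timer_task *next;
} timer_task;

typedef struct {
    pthread_mutex_t mutex;   /* the only lock in the scheme */
    timer_task *head;        /* the "global table" of timers */
} timer_table;

/* Example callback: increments a counter. */
static void count_cb(void *arg) { ++*(int *)arg; }

/* Create: lock, add in state "pending", unlock, then schedule it. */
timer_task *timer_create_task(timer_table *tbl, void (*cb)(void *), void *arg)
{
    timer_task *t = malloc(sizeof *t);
    t->state = TIMER_PENDING;
    t->callback = cb;
    t->arg = arg;
    pthread_mutex_lock(&tbl->mutex);
    t->next = tbl->head;
    tbl->head = t;
    pthread_mutex_unlock(&tbl->mutex);
    return t;
}

/* Unlink t from the table; the caller must hold the mutex. */
static void table_remove(timer_table *tbl, timer_task *t)
{
    timer_task **pp = &tbl->head;
    while (*pp && *pp != t) pp = &(*pp)->next;
    if (*pp) *pp = t->next;
}

/* Fire: if not pending, delete and stop; else mark "firing", run the
 * callback outside the lock, then remove and free the task. */
void timer_fire(timer_table *tbl, timer_task *t)
{
    pthread_mutex_lock(&tbl->mutex);
    if (t->state != TIMER_PENDING) {     /* cancelled in the meantime */
        table_remove(tbl, t);
        pthread_mutex_unlock(&tbl->mutex);
        free(t);
        return;
    }
    t->state = TIMER_FIRING;
    pthread_mutex_unlock(&tbl->mutex);

    t->callback(t->arg);                 /* runs with no lock held */

    pthread_mutex_lock(&tbl->mutex);
    table_remove(tbl, t);
    pthread_mutex_unlock(&tbl->mutex);
    free(t);
}

/* Cancel: find the timer and flip its state; never free it here. */
void timer_cancel(timer_table *tbl, timer_task *t)
{
    pthread_mutex_lock(&tbl->mutex);
    timer_task *p = tbl->head;
    while (p && p != t) p = p->next;     /* "find the timer" */
    if (p && p->state == TIMER_PENDING)
        p->state = TIMER_CANCELLED;      /* do not delete it */
    pthread_mutex_unlock(&tbl->mutex);
}
```

Note that a cancelled timer is always freed by the fire path, which the thread pool will eventually run anyway, so there is exactly one owner of the memory at the moment of free().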

Can't you reference count the tasks? When thread A in the pool is running a timer callback T, you mark it as in use (e.g. using an interlocked increment). When it finishes, you decrement the usage count. The task cannot be freed until the usage count is zero.
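A minimal sketch of that reference-counting idea, assuming C11 atomics stand in for Win32-style interlocked increment/decrement (all names here are illustrative):

```c
#include <stdatomic.h>
#include <stdlib.h>

typedef struct task {
    atomic_int refcount;         /* starts at 1 for the owner */
    void (*callback)(void *arg);
    void *arg;
} task;

/* Example callback: increments a counter. */
static void count_cb(void *arg) { ++*(int *)arg; }

task *task_new(void (*cb)(void *), void *arg)
{
    task *t = malloc(sizeof *t);
    atomic_init(&t->refcount, 1);
    t->callback = cb;
    t->arg = arg;
    return t;
}

void task_retain(task *t)
{
    atomic_fetch_add(&t->refcount, 1);   /* "interlocked increment" */
}

/* Whichever thread drops the last reference frees the task, so no
 * thread ever frees memory another thread may still be touching. */
void task_release(task *t)
{
    if (atomic_fetch_sub(&t->refcount, 1) == 1)
        free(t);
}

/* A pool thread retains the task before running it, releases after. */
void task_run(task *t)
{
    task_retain(t);
    t->callback(t->arg);
    task_release(t);
}
```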

Related

Kafka Consumer - Reset Consumer Poll time

I have a Kafka consumer with the poll time given like this:
kafkaConsumer.poll(polltimeinmilliseconds);
I would like to update the poll time dynamically. Right now I read it from a static variable, so the poll time does update.
The problem is that the consumer waits for the old timer to complete first. That is, if the old timer was 5 minutes and I update it to 10 minutes dynamically, it duly waits out the first 5 minutes before switching to the 10-minute interval.
How do I reset it immediately, i.e. have the timer reset and switch to 10 minutes right away?
You can abort a long poll using the wakeup method.
Wakeup the consumer. This method is thread-safe and is useful in particular to abort a long poll. The thread which is blocking in an operation will throw WakeupException. If no thread is blocking in a method which can throw WakeupException, the next call to such a method will raise it instead.

Activiti / Camunda change boundary timer with variable

I have a specific question regarding timer boundary events on a user task in Activiti/Camunda:
When starting the process, I set the timer duration with a process variable and use an expression in the boundary definition to resolve the variable. The boundary event is defined on a user task.
<bpmn2:timerEventDefinition id="_TimerEventDefinition_11">
<bpmn2:timeDuration xsi:type="bpmn2:tFormalExpression">${hurry}</bpmn2:timeDuration>
</bpmn2:timerEventDefinition>
In some cases, when the timer is already running, it can happen that the deadline (dueDate) should be extended because the assignee has requested more time. For this purpose I want to change the value of the process variable that defines the deadline.
As it happens, the variable is already resolved at process start and set on the boundary event.
Any further changes to the variable do not affect the dueDate of the boundary timer, because it is stored in the database and is not updated when the value of the variable changes.
I know how to update the dueDate of the job element via the Java API, but I want to provide a generic approach, like setting it by changing the value of the variable.
The most common use case for extending the deadline will be when the boundary timer is already running.
Any ideas how to cope with this problem?
Any tips are very appreciated.
Cheers, Chris
After some time of thinking I came up with a workaround like this:
I start the process with two variables. "hurry" is evaluated for the boundary timer, and "extendDeadline" is initialized to false. If the timer triggers and the process advances to the exclusive gateway, the value of "extendDeadline" is evaluated.
If a user changed the value of "extendDeadline" to true while the timer was running, the process returns to the user task again, where the boundary timer is set to the value of "hurry".
If "extendDeadline" is still set to false, the process can proceed.
If the timer is running, you can change its dueDate by executing a signal. If an assignee has requested more time, set a new value of "hurry" and execute the signal. The old timer will be cancelled and a new timer will be created with the new due date.
runtimeService.setVariable(execution.getId(), "hurry", newDueDate);
runtimeService.signalEventReceived(signalName, execution.getId());
A solution is to have two outgoing sequence flows: one from the boundary timer on the task, and one from the task itself, as shown in the diagram added by #theFriedC.
Then you can use an exclusive gateway on the second sequence flow and reroute it back to the same task with a new timer value.

Differences between events and semaphores

I already searched for this subject but couldn't understand it very well. What are the main differences between events and semaphores?
An event generally has only two states, unsignaled or signaled. A semaphore has a count, and is considered unsignaled if the count is zero, and signaled if the count is not zero. In the case of Windows, ReleaseSemaphore() increments a semaphore count, and WaitForSingleObject(...) with a handle of a semaphore will wait (unless the timeout parameter is set to zero) for a non-zero count, then decrement the count before returning.
Do you need to know it in a specific context? That would help make it more understandable.
Typically a semaphore is a token that must be obtained to execute an action, e.g. a lock on an execution unit that is protected from concurrent access.
Events are functions in a message/subscriber pattern.
So they are somewhat related, but not really comparable.
A typical confusing/complex scenario you may face is that one event triggers two different subscribers that then both want simultaneous access to some resource. They should each request the semaphore token and release it after use to let the other subscriber have a go.
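That scenario can be sketched with POSIX primitives, under the assumption that a condition variable stands in for the "event" (POSIX has no native event object, unlike Windows) and a counting semaphore hands out the token; the names are illustrative:

```c
#include <pthread.h>
#include <semaphore.h>

/* "Event": two states (unsignaled/signaled), emulated with a
 * condition variable plus a flag. */
static pthread_mutex_t ev_mu = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  ev_cv = PTHREAD_COND_INITIALIZER;
static int ev_signaled = 0;

/* Semaphore: the token both subscribers must obtain. */
static sem_t token;
static int shared = 0;             /* the resource they both touch */

static void event_set(void)
{
    pthread_mutex_lock(&ev_mu);
    ev_signaled = 1;               /* stays signaled */
    pthread_cond_broadcast(&ev_cv);/* wakes every waiter at once */
    pthread_mutex_unlock(&ev_mu);
}

static void event_wait(void)
{
    pthread_mutex_lock(&ev_mu);
    while (!ev_signaled)
        pthread_cond_wait(&ev_cv, &ev_mu);
    pthread_mutex_unlock(&ev_mu);
}

static void *subscriber(void *arg)
{
    (void)arg;
    event_wait();                  /* both threads wake on one event... */
    sem_wait(&token);              /* ...but only one token exists, */
    ++shared;                      /* so access here is serialized */
    sem_post(&token);              /* release for the other subscriber */
    return NULL;
}

int run_scenario(void)
{
    pthread_t a, b;
    sem_init(&token, 0, 1);        /* one token: mutual exclusion */
    pthread_create(&a, NULL, subscriber, NULL);
    pthread_create(&b, NULL, subscriber, NULL);
    event_set();                   /* one event triggers both */
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    sem_destroy(&token);
    return shared;
}
```

The event fans out to all waiters; the semaphore then serializes them, which is exactly the division of labour described above.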

How does the UV_RUN_NOWAIT mode work in libuv?

When running an event loop in libuv using the uv_run function, there's a "mode" parameter that is used with the following values:
UV_RUN_DEFAULT
UV_RUN_ONCE
UV_RUN_NOWAIT
The first two are obvious. UV_RUN_DEFAULT runs the event loop until there are no more events, and UV_RUN_ONCE processes a single event from the loop. However, UV_RUN_NOWAIT doesn't seem to be a separate mode, but rather a flag that can be ORed with one of the other two values.
By default, this function blocks until events are done processing, and UV_RUN_NOWAIT makes it nonblocking, but any documentation I can find on it ends there. My question is, if you run the event loop nonblocking, how are callbacks handled?
The libuv event model is single-threaded (reactor pattern), so I'd assume it needs to block to be able to call the callbacks, but if the main thread is occupied, what happens to an event after it's processed? Will the callback be "queued" until libuv gets control of the main thread again? Or will the callbacks be dispatched on another thread?
Callbacks are handled in the same manner. They will run within the thread that is in uv_run().
Per the documentation:
UV_RUN_DEFAULT: Runs the event loop until the reference count drops to zero. Always returns zero.
UV_RUN_ONCE: Poll for new events once. Note that this function blocks if there are no pending events. Returns zero when done (no active handles or requests left), or non-zero if more events are expected (meaning you should run the event loop again sometime in the future).
UV_RUN_NOWAIT: Poll for new events once but don't block if there are no pending events.
Consider the case where a program has a single watcher listening to a socket. In this scenario, an event would be created when the socket has received data.
UV_RUN_DEFAULT will block the caller even if the socket does not have data. The caller will return from uv_run(), when either:
The loop has been explicitly stopped, via uv_stop()
No more watchers are running in the loop. For example, the only watcher has been stopped.
UV_RUN_ONCE will block the caller even if the socket does not have data. The caller will return from uv_run(), when any of the following occur:
The loop has been explicitly stopped, via uv_stop()
No more watchers are running in the loop. For example, the only watcher has been stopped.
It has handled a max of one event. For example, the socket received data, and the user callback has been invoked. Additional events may be ready to be handled, but will not be handled in the current uv_run() call.
UV_RUN_NOWAIT will return if the socket does not have data.
Oftentimes, running an event loop in a non-blocking manner is done to integrate it with other event loops. Consider an application that has two event loops: libuv for backend work and the Qt UI (which is driven by its own event loop). Being able to run an event loop in a non-blocking manner allows a single thread to dispatch events on both loops. Here is a simplistic overview showing two libuv loops being handled by a single thread:
uv_loop_t *loop1 = uv_loop_new();
uv_loop_t *loop2 = uv_loop_new();
// create, initialize, and start a watcher for each loop.
...
// Handle two event loops with a single thread. Note the single '|'
// (not '||'), so loop2 is still polled while loop1 has pending events.
while (uv_run(loop1, UV_RUN_NOWAIT) | uv_run(loop2, UV_RUN_NOWAIT));
Without using UV_RUN_NOWAIT, loop2 would only run once loop1 or loop1's watchers have been stopped.
For more information, consider reading the Advanced Event Loops and Processes sections of An Introduction to libuv.

Locking dispatcher

Is it necessary to lock a code snippet where multiple threads access the same WPF component via the Dispatcher?
Example:
void ladder_OnIndexCompleted(object sender, EventArgs args)
{
lock (locker)
{
pbLadder.Dispatcher.Invoke(new Action(() => { pbLadder.Value++; }));
}
}
pbLadder is a progress bar and this event can be raised from multiple threads at the same time.
You should not acquire a lock if you're then going to marshal to another thread in a synchronous fashion - otherwise if you try to acquire the same lock in the other thread (the dispatcher thread in this case) you'll end up with a deadlock.
If pbLadder.Value is only used from the UI thread, then you don't need to worry about locking for thread safety - the fact that all the actions occur on the same thread isolates you from a lot of the normal multi-threading problems. The fact that the original action which caused the code using pbLadder.Value to execute occurred on a different thread is irrelevant.
All actions executed on the Dispatcher are queued up and executed in sequence on the UI thread. This means that data races like that increment cannot occur. The Invoke method itself is thread-safe, so also adding the action to the queue does not require any locking.
From MSDN:
Executes the specified delegate with the specified arguments
synchronously on the thread the Dispatcher is associated with.
and:
The operation is added to the event queue of the Dispatcher at the
specified DispatcherPriority.
Even though this question is pretty old, it was at the top of my search results, and I'm pretty new (4 months since I graduated), so after reading other people's comments I went and spoke with my senior coder. What the others are saying above is accurate, but I felt the answers didn't provide a solution, just information. Here's the feedback from my senior coder:
"It's true that the Dispatcher is running on its own thread, but if another thread is accessing an object that the dispatcher wants to access then all UI processing stops while the dispatcher waits for the access. To solve this ideally, you want to make a copy of the object that the dispatcher needs to access and pass that to the dispatcher, then the dispatcher is free to edit the object and won't have to wait on the other thread to release their lock."
Cheers!
