Prevent ROS2 Timer Callback Queuing

A ROS2 timer adds entries to the callback queue. In a single-threaded (C++) node, when a timer callback's execution time exceeds the timer period, the spinner gets flooded with timer callbacks, which degrades the node's responsiveness to other (e.g., subscriber) callbacks.
Is there a way to make a ROS node hold only one instance of a timer's callback in the queue at a time?
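For reference, a minimal rclcpp sketch of the setup being described (node name, topic, and timings are made up): a 100 ms wall timer whose callback takes roughly 500 ms, spun by the default single-threaded executor next to a subscription.
#include <chrono>
#include <memory>
#include <thread>
#include <rclcpp/rclcpp.hpp>
#include <std_msgs/msg/string.hpp>

using namespace std::chrono_literals;

class SlowTimerNode : public rclcpp::Node {
public:
  SlowTimerNode() : Node("slow_timer_node") {
    // Timer period (100 ms) is much shorter than the callback's run time
    // (~500 ms), so timer work keeps piling up in the callback queue.
    timer_ = create_wall_timer(100ms, [this]() {
      RCLCPP_INFO(get_logger(), "timer tick");
      std::this_thread::sleep_for(500ms);  // simulate long-running work
    });
    // Subscriber callbacks compete with the queued timer callbacks.
    sub_ = create_subscription<std_msgs::msg::String>(
        "chatter", 10, [this](const std_msgs::msg::String::SharedPtr msg) {
          RCLCPP_INFO(get_logger(), "got: %s", msg->data.c_str());
        });
  }

private:
  rclcpp::TimerBase::SharedPtr timer_;
  rclcpp::Subscription<std_msgs::msg::String>::SharedPtr sub_;
};

int main(int argc, char **argv) {
  rclcpp::init(argc, argv);
  rclcpp::spin(std::make_shared<SlowTimerNode>());  // single-threaded executor
  rclcpp::shutdown();
  return 0;
}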

Related

Event driven simulation: When inserting new events into a priority queue, do old events become redundant?

I created a binary heap based priority queue in C. I'm trying to create a discrete event simulation.
Here's what I understand about event simulation:
Suppose I have 10 values in my priority queue, each value representing an event. For each value in the PQ, the program will dequeue a value and insert 10 more values. In other words, the program is making new calculations for those 10 events.
But what happens to the old values in the PQ? Since new values are being enqueued for every event, shouldn't the previous values become redundant? Shouldn't they be removed from the PQ so that the PQ doesn't get too large?
Pending events in your priority queue event list remain there until 1) they are polled and become the active event, or 2) they are cancelled (explicitly deleted from the priority queue) due to the logic of a different active event.
For example, consider a simplistic air traffic simulation. A take-off event will schedule an arrival event at the target destination at some specified time. However, a weather event or an emergency event might cancel the scheduled arrival, and either reschedule it with an additional delay or divert the airplane to arrive at a different destination with a different time. However, unless you explicitly cancelled the originally scheduled arrival, that event would be pending on the event list until its scheduled time rolled around.
Bottom line, there's no magic. It's up to you as the modeler to schedule or cancel events in a way that reflects the correct logic of your model. The priority queue just does the bookkeeping to handle the execution order.
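To make that bookkeeping concrete, here is a small sketch (in C++ rather than the question's C, for brevity): events sit in a time-ordered priority queue, an active event can schedule new events, and cancellation here just flags the event so it is skipped when it reaches the front (lazy deletion) rather than deleting it from the heap immediately.
#include <cstdio>
#include <functional>
#include <memory>
#include <queue>
#include <vector>

// One scheduled event: a firing time plus an action. A cancelled event stays
// in the queue but is skipped when it reaches the front.
struct Event {
    double time;
    bool cancelled = false;
    std::function<void(double)> action;
};

// Order the heap so the earliest event is on top (min-heap on time).
struct Later {
    bool operator()(const std::shared_ptr<Event> &a,
                    const std::shared_ptr<Event> &b) const {
        return a->time > b->time;
    }
};

int main() {
    std::priority_queue<std::shared_ptr<Event>,
                        std::vector<std::shared_ptr<Event>>, Later> pq;

    auto schedule = [&](double t, std::function<void(double)> act) {
        auto e = std::make_shared<Event>();
        e->time = t;
        e->action = std::move(act);
        pq.push(e);
        return e;  // keep the handle if the event might be cancelled later
    };

    // A take-off at t=1 schedules an arrival at t=5; a storm at t=3 cancels
    // that arrival and schedules a diverted arrival at t=7 instead.
    std::shared_ptr<Event> arrival;
    schedule(1.0, [&](double t) {
        std::printf("t=%.1f take-off\n", t);
        arrival = schedule(5.0, [](double t) { std::printf("t=%.1f arrival\n", t); });
    });
    schedule(3.0, [&](double t) {
        std::printf("t=%.1f storm, divert\n", t);
        arrival->cancelled = true;  // explicit cancellation by another event
        schedule(7.0, [](double t) { std::printf("t=%.1f diverted arrival\n", t); });
    });

    // Main simulation loop: pop events in time order, skip cancelled ones.
    while (!pq.empty()) {
        auto e = pq.top();
        pq.pop();
        if (!e->cancelled) e->action(e->time);
    }
    return 0;
}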

Kafka Consumer - Reset Consumer Poll time

I have a Kafka consumer with the poll timeout specified like this:
kafkaConsumer.poll(polltimeinmilliseconds);
I would like to update the poll timeout dynamically. Right now I set it to a static variable, and the poll time updates.
The problem is, the consumer waits for the old timeout to complete. I.e., if the old timeout was 5 minutes and I update it to 10 (dynamically), it duly waits for the first 5 minutes before switching to the 10-minute interval.
How do I reset it immediately, i.e., so that the timeout resets to 10 minutes right away?
You can abort a long poll using the wakeup method.
Wakeup the consumer. This method is thread-safe and is useful in particular to abort a long poll. The thread which is blocking in an operation will throw WakeupException. If no thread is blocking in a method which can throw WakeupException, the next call to such a method will raise it instead.

Callback function to be called whenever a hardware timer elapses a specified time in STM32F101

Hi, I want to toggle an LED with the following timing:
100 ms ON1, 250 ms Off1
1250 ms ON2, 1500 ms Off2
and this cycle repeats (both the ON1/Off1 and ON2/Off2 pairs repeat).
For this I have planned to use a hardware timer with elapse times of 100, 250, 1250 and 1500 ms, repeating.
I am pretty new to the embedded field.
My questions are as follows:
How do I trigger this using a hardware timer? (How do I enable and alternate the timings dynamically?)
How do I set a callback function that toggles the LED when the timer elapses?
Note: this is not standalone code; an application will be running. The callback will be triggered from the OS in the background, so the normal application is not affected during this process.
Use the OS's software timer service. Create four software timers with the specified periods. Configure the timers to count once and then stop each time they are started (i.e., they should be "one-shot", not "auto-reloading" or "continuous", or whatever terminology your OS uses). Each software timer will have a unique callback function that you specify when you create it.
From main, or wherever is appropriate, start the first timer once to get things started. Then start the second timer from the first timer's callback function. And start the third timer from the second timer's callback function. And so on until the last timer's callback function restarts the first timer and the circle of timers repeats.
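For illustration, here is a minimal sketch assuming FreeRTOS software timers (any RTOS with one-shot software timers works the same way). led_toggle() is a hypothetical board-support call, and a single callback keyed by the timer ID stands in for the four separate callbacks described above; each timer's ID holds the index of the timer to start next.
#include <stddef.h>
#include "FreeRTOS.h"
#include "timers.h"

extern void led_toggle(void);  /* hypothetical board-support routine */

static TimerHandle_t phase[4];

/* Toggle the LED, then start the next one-shot timer in the chain, so the
 * four phases repeat as 100 ms on, 250 ms off, 1250 ms on, 1500 ms off. */
static void phase_callback(TimerHandle_t xTimer)
{
    led_toggle();
    size_t next = (size_t) pvTimerGetTimerID(xTimer);  /* index of next timer */
    xTimerStart(phase[next], 0);
}

void start_led_pattern(void)
{
    static const TickType_t period_ms[4] = { 100, 250, 1250, 1500 };

    for (size_t i = 0; i < 4; i++) {
        phase[i] = xTimerCreate("led-phase",                 /* name (debug only)        */
                                pdMS_TO_TICKS(period_ms[i]), /* phase duration           */
                                pdFALSE,                     /* one-shot, no auto-reload */
                                (void *) ((i + 1) % 4),      /* ID = next timer's index  */
                                phase_callback);
    }

    led_toggle();              /* LED on for the first 100 ms phase (assumes it starts off) */
    xTimerStart(phase[0], 0);  /* kick off the chain */
}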
Use a timer interrupt for it.
There's a ready-made example here:
https://www.diymat.co.uk/arm-blinking-led-driver/
It does what you need and a bit more as well :)

How does the UV_RUN_NOWAIT mode work in libuv?

When running an event loop in libuv using the uv_run function, there's a "mode" parameter that is used with the following values:
UV_RUN_DEFAULT
UV_RUN_ONCE
UV_RUN_NOWAIT
The first two are obvious. UV_RUN_DEFAULT runs the event loop until there are no more events, and UV_RUN_ONCE processes a single event from the loop. However, UV_RUN_NOWAIT doesn't seem to be a separate mode, but rather a flag that can be ORed with one of the other two values.
By default, this function blocks until events are done processing, and UV_RUN_NOWAIT makes it nonblocking, but any documentation I can find on it ends there. My question is, if you run the event loop nonblocking, how are callbacks handled?
The libuv event model is single-threaded (reactor pattern), so I'd assume it needs to block to be able to call the callbacks, but if the main thread is occupied, what happens to an event after it's processed? Will the callback be "queued" until libuv gets control of the main thread again? Or will the callbacks be dispatched on another thread?
Callbacks are handled in the same manner. They will run within the thread that is in uv_run().
Per the documentation:
UV_RUN_DEFAULT: Runs the event loop until the reference count drops to zero. Always returns zero.
UV_RUN_ONCE: Poll for new events once. Note that this function blocks if there are no pending events. Returns zero when done (no active handles or requests left), or non-zero if more events are expected (meaning you should run the event loop again sometime in the future).
UV_RUN_NOWAIT: Poll for new events once but don't block if there are no pending events.
Consider the case where a program has a single watcher listening to a socket. In this scenario, an event would be created when the socket has received data.
UV_RUN_DEFAULT will block the caller even if the socket does not have data. The caller will return from uv_run() when either:
The loop has been explicitly stopped, via uv_stop()
No more watchers are running in the loop. For example, the only watcher has been stopped.
UV_RUN_ONCE will block the caller even if the socket does not have data. The caller will return from uv_run() when any of the following occurs:
The loop has been explicitly stopped, via uv_stop()
No more watchers are running in the loop. For example, the only watcher has been stopped.
It has handled a max of one event. For example, the socket received data, and the user callback has been invoked. Additional events may be ready to be handled, but will not be handled in the current uv_run() call.
UV_RUN_NOWAIT will return if the socket does not have data.
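As a self-contained illustration of that last point (using a one-shot timer instead of a socket watcher, so no peer is needed), each uv_run() call below returns immediately whether or not the timer is due, leaving the thread free to do other work between polls:
#include <stdio.h>
#include <uv.h>

static void on_timer(uv_timer_t *handle) {
    (void) handle;
    printf("timer fired\n");
}

int main(void) {
    uv_loop_t loop;
    uv_loop_init(&loop);

    uv_timer_t timer;
    uv_timer_init(&loop, &timer);
    uv_timer_start(&timer, on_timer, 1000 /* ms */, 0 /* no repeat */);

    /* Non-blocking polling: uv_run() returns right away when nothing is
     * ready. The loop exits once the one-shot timer has fired, because no
     * active handles remain and uv_run() then returns 0. */
    while (uv_run(&loop, UV_RUN_NOWAIT)) {
        /* do other work here, e.g. drive a GUI event loop */
    }

    uv_close((uv_handle_t *) &timer, NULL);
    uv_run(&loop, UV_RUN_DEFAULT);  /* let the close complete */
    uv_loop_close(&loop);
    return 0;
}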
Oftentimes, running an event loop in a non-blocking manner is done to integrate with other event loops. Consider an application that has two event loops: libuv for backend work and a Qt UI (which is driven by its own event loop). Being able to run the event loop in a non-blocking manner allows a single thread to dispatch events on both loops. Here is a simplistic overview showing two libuv loops being handled by a single thread:
uv_loop_t *loop1 = uv_loop_new();
uv_loop_t *loop2 = uv_loop_new();
// create, initialize, and start a watcher for each loop.
...
// Handle two event loops with a single thread. Poll both loops on every
// iteration (|| would short-circuit and starve loop2 for as long as loop1
// still has active handles) and keep going while either loop has work left.
int more;
do {
    more  = uv_run(loop1, UV_RUN_NOWAIT);
    more |= uv_run(loop2, UV_RUN_NOWAIT);
} while (more);
Without using UV_RUN_NOWAIT, loop2 would only run once loop1 or loop1's watchers have been stopped.
For more information, consider reading the Advanced Event Loops and Processes sections of An Introduction to libuv.

Server architecture puzzle, C programming

In my program, I use a single thread pool to dispatch all my tasks, like timer tasks, non-blocking socket I/O, etc. A task is actually a callback function, which will be executed when a specific event is received.
The architecture is:
The main thread calls epoll() to harvest I/O events, then dispatches the I/O callbacks to the thread pool.
The main thread also handles timer timeouts, and dispatches the timeout callbacks to the thread pool.
In an I/O callback, a timer task may be cancelled, depending on the I/O processing result.
While an I/O callback is running, the corresponding socket is not monitored for further identical events.
While a timer callback is running, that timer task is temporarily removed from the timer task queue.
Here is the problem:
While thread A in the pool is running a timer callback T,
thread B in the pool may be running another callback (registered for a socket I/O read event). After processing the received request, thread B decides to delete the timer task T, but that timer task T is being executed by thread A right now.
I can add a lock for the timer task, but where should I place it? I can't place the lock object in the timer task structure, because when I decide to free the task object, I must have already acquired the lock; freeing while holding the lock may lead to undefined behaviour:
pthread_mutex_lock(T->mutex);
free(T);
/*without a pthread_mutex_unlock(T->mutex);*/
And what happens if another thread is blocked on:
pthread_mutex_lock(T->mutex);
Without these problems being addressed, I can't continue my work. Please help me out!
Should I use separate thread pools for tasks of different types in my single process? Or just use a single thread?
Any suggestion is appreciated!
You can use a global table of timers protected by its own mutex. The table does not actually need to be global but can belong to some collection, such as whatever owns all the things you are doing I/O on.
Then use this logic (a code sketch follows the steps):
To create a timer:
Lock the global table.
Add the timer to the global table in state "pending".
Unlock the global table.
Schedule the timer with the thread pool.
To fire a timer:
Lock the global table.
Check the timer's state. If it's not "pending", delete the timer, unlock the table, and stop.
Change the timer's state to "firing".
Unlock the global table.
Perform the timer operation.
Lock the global table.
Remove the timer from the table.
Unlock the global table.
To cancel a timer:
Lock the global table.
Find the timer. If its state is "pending", change it to "cancelled". Do not delete it.
Unlock the global table.
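Putting those steps together, here is a minimal sketch of the idea (the type and function names are made up, the "table" is an intrusive linked list, and handing the fired timer to the thread pool is left out):
#include <pthread.h>
#include <stdlib.h>

enum timer_state { TIMER_PENDING, TIMER_FIRING, TIMER_CANCELLED };

struct timer_task {
    enum timer_state state;
    void (*work)(void *arg);
    void *arg;
    struct timer_task *next;  /* next entry in the global table */
};

static pthread_mutex_t table_lock = PTHREAD_MUTEX_INITIALIZER;
static struct timer_task *table_head;

/* Create: register the timer as pending, then let the caller schedule it. */
struct timer_task *create_timer_task(void (*work)(void *), void *arg)
{
    struct timer_task *t = (struct timer_task *) malloc(sizeof *t);
    t->state = TIMER_PENDING;
    t->work = work;
    t->arg = arg;

    pthread_mutex_lock(&table_lock);
    t->next = table_head;
    table_head = t;
    pthread_mutex_unlock(&table_lock);
    return t;
}

static void table_remove(struct timer_task *t)  /* caller holds table_lock */
{
    struct timer_task **p = &table_head;
    while (*p && *p != t)
        p = &(*p)->next;
    if (*p)
        *p = t->next;
}

/* Fire: runs in a pool thread when the timeout expires. */
void fire_timer_task(struct timer_task *t)
{
    pthread_mutex_lock(&table_lock);
    if (t->state != TIMER_PENDING) {  /* cancelled while still queued */
        table_remove(t);
        pthread_mutex_unlock(&table_lock);
        free(t);
        return;
    }
    t->state = TIMER_FIRING;
    pthread_mutex_unlock(&table_lock);

    t->work(t->arg);  /* run the callback outside the lock */

    pthread_mutex_lock(&table_lock);
    table_remove(t);
    pthread_mutex_unlock(&table_lock);
    free(t);
}

/* Cancel: only flips the state and never frees, so a callback that is
 * already firing on another thread keeps a valid object. */
void cancel_timer_task(struct timer_task *t)
{
    pthread_mutex_lock(&table_lock);
    for (struct timer_task *p = table_head; p != NULL; p = p->next) {
        if (p == t) {  /* still in the table, safe to touch */
            if (p->state == TIMER_PENDING)
                p->state = TIMER_CANCELLED;
            break;
        }
    }
    pthread_mutex_unlock(&table_lock);
}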
Can't you reference count the tasks? When thread A in the pool is running a timer callback T, you mark it (e.g., using an interlocked increment). When it finishes, you decrement the usage count. It cannot be freed until the usage count is zero.

Resources