Scenario: a client sends data and the server receives it over Ethernet (UDP). When the data arrives at the IP layer, the network interface card interrupts the kernel, and the kernel has to process the data from the client. I therefore want to write an interrupt service routine that catches the interrupt from the network interface card.
I am using the InterruptAttach() API to handle the interrupt from the network interface card, and a sigevent structure to call a specific function.
http://www.qnx.com/developers/docs/6.3.0SP3/neutrino/lib_ref/i/interruptattach.html#HandlerFunction
Is this the right way to handle interrupts in QNX?
volatile int id1, id2, id3;

const struct sigevent *handler1(void *area, int id1)
{
    volatile double KernelStartExecutionTime;
    KernelStartExecutionTime = GetTimeStamp(); // time at which the kernel starts executing
    TASK1(Task2ms_Raster);
    return (NULL);
}

const struct sigevent *handler2(void *area, int id2)
{
    volatile double KernelStartExecutionTime;
    KernelStartExecutionTime = GetTimeStamp(); // time at which the kernel starts executing
    TASK2(Task10ms_Raster);
    return (NULL);
}

const struct sigevent *handler3(void *area, int id3)
{
    volatile double KernelStartExecutionTime;
    KernelStartExecutionTime = GetTimeStamp(); // time at which the kernel starts executing
    TASK3(Task100ms_Raster);
    return (NULL);
}

/* InterruptAttach() attaches an interrupt handler to the hardware interrupt
   source specified by intr (i.e. the IRQ). In this example, handler1, handler2
   and handler3 are all attached to the same source. */
void ISR(void)
{
    volatile int irq = 0; // 0: the clock that runs at the resolution set by ClockPeriod()
    ThreadCtl(_NTO_TCTL_IO, NULL);
    id1 = InterruptAttach(irq, &handler1, NULL, 0, 0);
    id2 = InterruptAttach(irq, &handler2, NULL, 0, 0);
    id3 = InterruptAttach(irq, &handler3, NULL, 0, 0);
}

int main(int argc, char *argv[])
{
    Xcp_Initialize();
    CreateSocket();
    ISR(); // function call for ISR
    return 0;
}
Another question: if I want to call another function via the sigevent structure, do I need a separate ISR for it (i.e. how do I handle multiple functions from one interrupt)?
I modified my code as shown above. Is this efficient: one ISR function that uses InterruptAttach() for three different handlers?
This is a bad approach: interrupt (IRQ) handlers are not interruptible. That means: 1. your computer will lock up when you do a lot of work in them, and 2. you can't call every function from them.
The correct approach is to receive the IRQ and call a handler. The handler should create a memory structure, fill it with the details of what needs to be done, and add this "task data" to a queue. A background thread can then wait for elements in the queue and do the work.
That way, IRQ handlers stay small and fast. Your background thread can be as complex as you like. If the thread has a bug, the worst that can happen is that it breaks (make the IRQ handler throw away events when the queue is full).
Note that the queue must be implemented in such a way that adding elements to it never blocks. Check the documentation; there should already be something that allows several threads to exchange data, and the same can be used for IRQ handlers. A QNX-flavoured sketch of this "small ISR, worker thread" split follows below.
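In QNX Neutrino specifically, one idiomatic way to get this "tiny ISR, worker thread does the real work" split is InterruptAttachEvent() plus a thread blocked in InterruptWait(), instead of attaching C handler functions. The following is only a minimal sketch under that assumption; the IRQ number is a placeholder, and the commented-out TASK1() call stands in for whatever work the poster's handlers were doing.

#include <sys/neutrino.h>
#include <sys/siginfo.h>

#define MY_IRQ 0            /* placeholder IRQ number */

static int irq_id;

/* Worker thread: all the real work happens here, at ordinary thread priority. */
static void *irq_worker(void *arg)
{
    struct sigevent event;

    ThreadCtl(_NTO_TCTL_IO, 0);           /* I/O privileges needed for the interrupt calls */
    SIGEV_INTR_INIT(&event);              /* wake this thread directly when the IRQ fires */
    irq_id = InterruptAttachEvent(MY_IRQ, &event, _NTO_INTR_FLAGS_TRK_MSK);

    for (;;) {
        InterruptWait(0, NULL);           /* block until the interrupt occurs */
        /* do the heavy work here, e.g. the poster's TASK1(Task2ms_Raster); */
        InterruptUnmask(MY_IRQ, irq_id);  /* re-enable the level the kernel masked */
    }
    return NULL;
}

The thread itself would be started with pthread_create() at a suitable priority. Because nothing user-written runs at interrupt time, the work in the loop can take as long as it needs without locking up the system.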
I am currently trying to develop a proxy program that moves data from an SPI bus to TCP and vice versa. I would like to know whether the approach I intend to use is a good/intended way of using the FreeRTOS library. The program runs as an SPI master, with a GPIO pin triggered by the slave when it wants to send data, since SPI transactions can only be initiated by the master.
char buffer1[128];
char buffer2[128];
static SemaphoreHandle_t rdySem1; // semaphore
static SemaphoreHandle_t rdySem2; // semaphore
volatile atomic_int GPIO_interrupt_pin;

void SPI_task(void* arg)
{
    while (1)
    {
        if (GPIO_interrupt_pin)
        {
            //TODO: read data from SPI bus and place in buffer1
            xSemaphoreGive(rdySem1);
            GPIO_interrupt_pin = 0;
        }
        xSemaphoreTake(rdySem2, portMAX_DELAY);
        //TODO: send data from buffer2[] to SPI bus
    }
}

void tcp_task(void* arg)
{
    while (1)
    {
        int len;
        char rx_buffer[128];
        len = recv(sock, rx_buffer, sizeof(rx_buffer) - 1, 0);
        if (len > 0)
        {
            //TODO: process data from TCP socket and place in buffer2
            xSemaphoreGive(rdySem2);
        }
        xSemaphoreTake(rdySem1, portMAX_DELAY);
        //TODO: send data from buffer1[] to TCP
    }
}

//only run when the GPIO pin interrupt triggers
static void isr_handler(void* arg)
{
    GPIO_interrupt_pin = 1;
}
Also, I am not very familiar with how FreeRTOS works, but I believe xSemaphoreTake() is a blocking call, so it would not work in this context unless I use a non-blocking version of xSemaphoreTake(). Any kind soul who can point me in the right direction? Much appreciated.
Your current pattern has a few problems, but the fundamental idea of using semaphores can solve part of them. However, it's worth restructuring your code so that each thread waits only on its respective receive and performs the complementary transmit itself upon reception, instead of trying to hand the data off to the other thread. Making one thread wait on both TCP recv and "SPI packet ready to send over TCP" doesn't work well unless you are guaranteed a strict request/response ordering; truly asynchronous communication means being ready to wake on either event (i.e. tcp_task can't be blocked in recv() when an SPI packet comes in, or it may never send that SPI packet over TCP until something is received over TCP).
Instead, let each task wait only on its own receiving function and send the data to the other side immediately. If there are mutual-exclusion concerns, use a mutex to guard the actual transactions. Also note that even though GPIO_interrupt_pin is atomic, without a test-and-set it might incorrectly be set back to 0 if an interrupt arrives between testing and clearing the variable. Fortunately, FreeRTOS provides a nicer mechanism for this in the form of task notifications (and the API used here behaves very much like a semaphore).
void SPI_task(void *arg)
{
    while (1) {
        // Wait on the SPI data-ready pin
        ulTaskNotifyTake(0, portMAX_DELAY);
        // There is SPI data; grab a mutex to avoid conflicting SPI transactions
        xSemaphoreTake(spi_mutex, portMAX_DELAY);
        char data[128];
        spi_recv(data, sizeof(data)); // whatever the SPI receive function is; maybe get a length out of it
        xSemaphoreGive(spi_mutex);
        // Send the data from this thread; no need to have the other thread handle it (probably)
        send(sock, data, sizeof(data), 0);
    }
}

void tcp_task(void *arg)
{
    while (1) {
        char data[128];
        int len = recv(sock, data, sizeof(data), 0);
        if (len > 0) {
            // Just grab the SPI mutex and do the transfer here
            xSemaphoreTake(spi_mutex, portMAX_DELAY);
            spi_send(data, len);
            xSemaphoreGive(spi_mutex);
        }
    }
}

static void isr_handler(void *arg)
{
    vTaskNotifyGiveFromISR(SPI_task_handle, NULL);
}
The above is a simplified example, and there's a bit more depth to go into for task notifications which you can read about here:
https://www.freertos.org/RTOS-task-notifications.html
I have a timer that runs at regular intervals. I create the timer using timer_create() using the SIGEV_THREAD option. This will fire a callback on a thread when the timer expires, rather than send a SIGALRM signal to the process. The problem is, every time my timer expires, a new thread is spawned. This means the program spawns potentially hundreds of threads, depending on the frequency of the timer.
What would be better is to have one thread that handles all the callbacks. I can do this when using timer_create() with signals (by using sigaction), but not with the thread-only notification.
Is there any way to not use signals, but still have the timer notify the process in a single existing thread?
Or should I even worry about this from a performance perspective (threads vs signals)?
EDIT:
My solution was to use SIGEV_SIGNAL and pthread_sigmask(). So, I continue to rely on signals to know when my timer expires, but I can be 100% sure only a single thread (created by me) is being used to capture the signals and execute the appropriate action.
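For reference, a minimal sketch of that arrangement might look like the following; the choice of SIGRTMIN, CLOCK_MONOTONIC, the one-second period, and the handler body are illustrative assumptions, not details from the question (link with -lrt and -lpthread on older glibc):

#include <pthread.h>
#include <signal.h>
#include <stdio.h>
#include <time.h>

static void *timer_thread(void *arg)
{
    sigset_t set;
    sigemptyset(&set);
    sigaddset(&set, SIGRTMIN);

    for (;;) {
        siginfo_t si;
        if (sigwaitinfo(&set, &si) > 0)          /* blocks until the timer signal arrives */
            printf("timer expired; handling it in this single thread\n");
    }
    return NULL;
}

int main(void)
{
    sigset_t set;
    sigemptyset(&set);
    sigaddset(&set, SIGRTMIN);
    pthread_sigmask(SIG_BLOCK, &set, NULL);      /* block before creating threads so all inherit the mask */

    struct sigevent sev = { .sigev_notify = SIGEV_SIGNAL, .sigev_signo = SIGRTMIN };
    timer_t timerid;
    timer_create(CLOCK_MONOTONIC, &sev, &timerid);

    struct itimerspec its = { .it_value = { 1, 0 }, .it_interval = { 1, 0 } };
    timer_settime(timerid, 0, &its, NULL);

    pthread_t tid;
    pthread_create(&tid, NULL, timer_thread, NULL);
    pthread_join(tid, NULL);
    return 0;
}

Every expiration is accepted by the same dedicated thread via sigwaitinfo(), so no new threads are spawned and no asynchronous signal handler runs.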
tl;dr: The basic premise that SIGEV_THREAD doesn't work based on signals is false - signals are the underlying mechanism through which new threads are spawned. glibc has no support for reusing the same thread for multiple callbacks.
timer_create doesn't behave exactly the way you think - its second parameter, struct sigevent *restrict sevp, contains the field sigev_notify, which has the following documentation:
SIGEV_THREAD
Notify the process by invoking sigev_notify_function "as
if" it were the start function of a new thread. (Among the
implementation possibilities here are that each timer notification
could result in the creation of a new thread, or that a single thread
is created to receive all notifications.) The function is invoked
with sigev_value as its sole argument. If sigev_notify_attributes is
not NULL, it should point to a pthread_attr_t structure that defines
attributes for the new thread (see pthread_attr_init(3)).
And indeed, if we look at glibc's implementation:
else
  {
    /* Create the helper thread.  */
    pthread_once (&__helper_once, __start_helper_thread);
    ...
    struct sigevent sev =
      { .sigev_value.sival_ptr = newp,
        .sigev_signo = SIGTIMER,
        .sigev_notify = SIGEV_SIGNAL | SIGEV_THREAD_ID,
        ._sigev_un = { ._pad = { [0] = __helper_tid } } };

    /* Create the timer.  */
    INTERNAL_SYSCALL_DECL (err);
    int res;
    res = INTERNAL_SYSCALL (timer_create, err, 3,
                            syscall_clockid, &sev, &newp->ktimerid);
And we can see __start_helper_thread's implementation:
void
attribute_hidden
__start_helper_thread (void)
{
  ...
  int res = pthread_create (&th, &attr, timer_helper_thread, NULL);
And follow along to timer_helper_thread's implementation:
static void *
timer_helper_thread (void *arg)
{
  ...
  /* Endless loop of waiting for signals.  The loop is only ended when
     the thread is canceled.  */
  while (1)
    {
      ...
      int result = SYSCALL_CANCEL (rt_sigtimedwait, &ss, &si, NULL, _NSIG / 8);
      if (result > 0)
        {
          if (si.si_code == SI_TIMER)
            {
              struct timer *tk = (struct timer *) si.si_ptr;
              ...
              (void) pthread_create (&th, &tk->attr,
                                     timer_sigev_thread, td);
So - at least at the glibc level - when using SIGEV_THREAD you are necessarily using signals to tell a helper thread to create the thread that runs your function anyway - and it seems like your primary motivation to begin with was avoiding the use of alarm signals.
At the Linux source code level, timers seem to work on signals alone - the posix_timer_event function in kernel/time/posix_timers.c (called by alarm_handle_timer in kernel/time/alarmtimer.c) goes straight to code in signal.c that necessarily sends a signal. So it doesn't seem possible to avoid signals when working with timer_create, and this statement from your question - "This will fire a callback on a thread when the timer expires, rather than send a SIGALRM signal to the process." - is false (though it's true that the signal doesn't have to be SIGALRM).
In other words - there seem to be no performance benefits to be gained from SIGEV_THREAD as opposed to signals. Signals will still be used to trigger the creation of threads, and you're adding the additional overhead of creating new threads.
I am currently working on a project that interfaces an ADC with a Raspberry Pi over SPI. In the project I trigger the SPI transfer with a timer, which fires a signal handler. In the signal handler the SPI transmission takes place and the value is stored in a variable; I access this variable in a thread and store the received value in an array.
The code runs, but the program never comes out of the signal handler. I want the handler to hand over to the thread to store the received value every time it processes a value.
Can someone point me to something reliable?
void getSPIvalues() { // A new thread which runs in parallel and gets the values from the ADC over SPI
    printf("inside thread function\n");
    timer_useconds(100, 1);
    spiValues[i] = rawData;
    printf("from thread, value = %d\n", spiValues[i]);
    i++;
}

void signalHandler(int sig) {
    printf("inside handler function\n");
    PWMGenerate(0, 26, 2); // cycle = 960 ns, frequency = 1.1 MHz, duty cycle = 8 %
    char data[2];
    bcm2835_spi_transfern(data, sizeof(data));
    rawData = (int)(data[0] << 8 | data[1]);
    bcm2835_gpio_write(PIN, LOW);
}

// Handler installation
memset(&sa, 0, sizeof(sa));
sigemptyset(&sa.sa_mask);
sa.sa_handler = &signalHandler;
sigaction(SIGVTALRM, &sa, NULL);
If I understand correctly, you want a "status update" every x useconds of process execution (rather than of wall clock time, as SIGVTALRM implies ITIMER_VIRTUAL to me).
The safest, simplest way to do this will be to accept a pending signal, instead of delivering that signal to a signal handler.
Before spawning any threads, use pthread_sigmask to SIG_BLOCK at least SIGVTALRM. All new threads will inherit that signal mask. Then spawn your status thread, detached; it sets an interval virtual-clock timer and loops, waiting to accept SIGVTALRM:
static void *
my_status_thread(void *ignored) {  // spawn me with VTALRM blocked
    sigset_t desired;              // for me and everyone else!
    sigemptyset(&desired);
    sigaddset(&desired, SIGVTALRM);
    set_itimer_virtual(100, 1);    // setitimer()
    while (1) {
        int s;
        (void)sigwait(&desired, &s);
        // we got VTALRM, pull the data
        PWMGenerate(...);
        ....
        printf("value is %d\n", ...);
    }
    return NULL; // not reached
}
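To make the "block before spawning any threads" step concrete, a minimal sketch of the corresponding main() could look like this; the detach handling and the idle loop at the end are assumptions, not part of the answer above:

#include <pthread.h>
#include <signal.h>
#include <unistd.h>

int main(void)
{
    sigset_t block;
    sigemptyset(&block);
    sigaddset(&block, SIGVTALRM);
    pthread_sigmask(SIG_BLOCK, &block, NULL);  /* every thread created afterwards inherits this mask */

    pthread_attr_t attr;
    pthread_attr_init(&attr);
    pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED);

    pthread_t tid;
    pthread_create(&tid, &attr, my_status_thread, NULL);

    /* ... the rest of the application; SIGVTALRM is only ever accepted in the status thread ... */
    for (;;)
        pause();
}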
Aside
It is possible to do this correctly with signal handlers.
It's quite nuanced, and the nuances matter. You should probably be aware that sigaction is preferred over signal and why. That signal disposition (a registered "handler" or "behavior") is a global process attribute, though signal delivery per se and signal masking are per-thread. That sig_atomic_t doesn't necessarily mean volatile, and why you'd care. That very, very few functions can be safely invoked within a signal handler. That sigemptyset(&sa.sa_mask) is, in my opinion, a bit cargo-culty, and you almost certainly want a full mask inside any consequential handlers.
Even then, it's just not worth it. Signal acceptance is a superior idiom to delivery: you react to signals when and where it is safe for you to do so.
I'm working on an embedded Linux ARM system that needs to react to a power failure signal by turning off some power supplies (via GPIO control) in a specific sequence. This process needs to start as soon as possible, so I've installed an interrupt handler to detect this power failure.
The problem is that we need to introduce a little bit of delay between turning each supply off. I understand that delays are not usually allowed in an interrupt handler, but it's totally okay if this handler never returns (power is failing!).
I'm trying to introduce a delay by using the method described in this post, but I can't for the life of me actually cause a measurable delay (observed on an oscilloscope).
What am I doing wrong, and how can I do it right?
What follows is the relevant code.
/* This function sets en_gpio low, then waits until pg_gpio goes low. */
static inline void powerdown(int en_gpio, int pg_gpio)
{
    /* Bring the enable line low. */
    gpio_set_value(en_gpio, 0);
    /* Loop until power good goes low. */
    while (gpio_get_value(pg_gpio) != 0);
}

/* This is my attempt at a delay function. */
#define DELAY_COUNT 1000000000
static void delay(void)
{
    volatile u_int32_t random;
    volatile u_int32_t accum;
    volatile u_int32_t i;

    get_random_bytes((void *)&random, 4);
    accum = 0;
    for (i = 0; i < DELAY_COUNT; i++)
        accum = accum * random;
}

/* This is the interrupt handler. */
static irqreturn_t power_fail_interrupt(int irq, void *dev_id)
{
    powerdown(VCC0V75_EN, VCC0V75_PG);
    delay();
    powerdown(DVDD15_EN, DVDD15_PG);
    delay();
    powerdown(DVDD18_EN, DVDD18_PG);
    delay();
    powerdown(CVDD1_EN, CVDD1_PG);
    delay();
    powerdown(CVDD_EN, CVDD_PG);
    /* It doesn't matter if we get past this point. Power is failing. */
    /* I'm amazed this printk() sometimes gets the message out before power drops! */
    printk(KERN_ALERT "egon_power_fail driver: Power failure detected!\n");
    return IRQ_HANDLED;
}
Using delay functions in hard IRQ handlers is usually a bad idea, because interrupts are disabled in the hard IRQ handler and the system will hang until your hard IRQ function finishes. On the other hand, you can't use sleep functions in a hard IRQ handler either, since hard IRQ is atomic context.
Taking all that into account, you may want to use a threaded IRQ. This way the hard IRQ handler only wakes the bottom-half IRQ handler (which is executed in a kernel thread). In this threaded handler you can use regular sleep functions.
To implement a threaded IRQ instead of a regular IRQ, just replace your request_irq() call with request_threaded_irq(). E.g. if you are requesting the IRQ like this:
ret = request_irq(irq, your_irq_handler, IRQF_SHARED,
                  dev_name(&dev->dev), chip);
You can replace it with something like this:
ret = request_threaded_irq(irq, NULL, your_irq_handler,
                           IRQF_ONESHOT | IRQF_SHARED,
                           dev_name(&dev->dev), chip);
Here NULL means that the default hard IRQ handler will be used (it does nothing but wake the threaded IRQ handler), and your_irq_handler() will be executed in a kernel thread (where you can call sleep functions). Also, the IRQF_ONESHOT flag should be used when requesting a threaded IRQ. A sketch of what the threaded handler body could look like for this case follows below.
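For illustration only, the threaded handler for the power-sequencing case could then use ordinary kernel sleep helpers between the steps. This reuses the poster's powerdown() helper and rail names, and the 100 µs sleep range is an assumption:

/* Runs in a kernel thread (usleep_range() is from <linux/delay.h>), so sleeping is allowed. */
static irqreturn_t power_fail_thread_fn(int irq, void *dev_id)
{
    powerdown(VCC0V75_EN, VCC0V75_PG);
    usleep_range(100, 200);            /* sleep instead of busy-waiting */
    powerdown(DVDD15_EN, DVDD15_PG);
    usleep_range(100, 200);
    powerdown(DVDD18_EN, DVDD18_PG);
    usleep_range(100, 200);
    powerdown(CVDD1_EN, CVDD1_PG);
    usleep_range(100, 200);
    powerdown(CVDD_EN, CVDD_PG);
    return IRQ_HANDLED;
}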
It should also be mentioned that there is a managed version of request_threaded_irq(), called devm_request_threaded_irq(). Using it (instead of regular request_threaded_irq()) allows you to omit the free_irq() call in your driver's exit function (and also in the error path). I would recommend using the devm_* function (if your kernel version already has it). But don't forget to remove all the free_irq() calls in your driver if you decide to go with devm_*.
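As a rough sketch, assuming this is called from a probe() function with a struct device at hand (the variable names are just placeholders):

ret = devm_request_threaded_irq(&dev->dev, irq, NULL, your_irq_handler,
                                IRQF_ONESHOT | IRQF_SHARED,
                                dev_name(&dev->dev), chip);
/* No matching free_irq() is needed: the IRQ is released automatically
   when the device is unbound or the probe fails. */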
TL;DR
Replace your request_irq() with request_threaded_irq() (as shown above) and you will be able to use sleep in your IRQ handler.
I would rearchitect this into two parts:
An interrupt handler
An application which waits for the interrupt handler, and then does the timed logic.
As you have experienced, sleeping in an IRQ handler is bad. So is any significant busy waiting as it kills the responsiveness of the rest of the system.
The specific mechanism for the interaction could be any of several means.
If a Linux device driver were used, it could accept read() operations and return something (like how long the wait was, or even a single byte of zero) when an interrupt occurs. The application would open the device, do a blocking read(), and when it returns successfully (without error) do whatever logic is required, all in user mode at (maybe) normal priority. A user-space sketch of this pattern is shown below.
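A minimal user-space sketch of that pattern might look like the following; the /dev/powerfail node name and the single-byte protocol are hypothetical, and the shutdown sequence is left as a comment:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/powerfail", O_RDONLY);  /* hypothetical device node */
    if (fd < 0) {
        perror("open");
        return 1;
    }
    char byte;
    /* Blocks until the driver completes a read after its interrupt fires. */
    if (read(fd, &byte, 1) == 1) {
        /* power failure detected: run the timed power-down sequence here,
           e.g. toggle the GPIOs with usleep() between the steps */
    }
    close(fd);
    return 0;
}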
It turns out the root cause of my issue was a misconfigured pin (the one the interrupt signal was on), and my interrupt wasn't even occurring... I was watching the rails come down by themselves, uncontrolled. I'm guessing I messed this up while I was working on another part of the system...
I ended up using the following function to implement my delay in the hard interrupt. It's not sexy, but it does work and it's simple, and I believe the shift operation avoids overflow, as pointed out in a comment by @specializt.
This code is very specific to a single piece of equipment, and the testing I've done today shows it to be pretty stable.
/* This is my attempt at a delay function. */
/* A count of 8 is approximately 100 microseconds. */
static void delay(int delay_count)
{
    volatile u_int32_t random;
    volatile u_int64_t accum;
    volatile u_int32_t i;

    accum = 0;
    for (i = 0; i < delay_count; i++)
    {
        get_random_bytes((void *)&random, 4);
        accum = accum * random;
        accum = accum >> 32;
    }
}
My main function is based on libevent, but it contains a long-running task, so I want to start N threads to run such tasks. Is this idea OK? And how do I use libevent and pthreads together in C?
Bumping an old question that may have already been solved, but posting the answer just in case someone else needs it.
Yes, it is okay to do threading in this case. I recently used libevent with pthreads, and it seems to be working just fine. Here's the code:
#include <stdint.h>
#include <pthread.h>
#include <event.h>
#include <event2/thread.h>  // for evthread_use_pthreads()

void *thread_func(void *);
void callback_func(evutil_socket_t, short, void *);    // body omitted (see note below)
void thread_callback(evutil_socket_t, short, void *);  // body omitted (see note below)

int main(void)
{
    pthread_t tid;
    int32_t ret = -1;
    struct event_base *evbase;
    struct event *timer;
    struct timeval tv;

    // 1. initialize libevent for pthreads
    evthread_use_pthreads();
    ret = pthread_create(&tid, NULL, thread_func, NULL);
    // check ret for error

    // 2. allocate event base
    evbase = event_base_new();

    // 3. allocate event object
    timer = event_new(evbase, -1, EV_PERSIST, callback_func, NULL);

    // 4. add event
    tv.tv_sec = 0;
    tv.tv_usec = 1000;
    evtimer_add(timer, &tv);

    // 5. start the event loop
    event_base_dispatch(evbase); // event loop

    // join pthread...

    // 6. free resources
    event_free(timer);
    event_base_free(evbase);
    return 0;
}

void *thread_func(void *arg)
{
    struct event *ev;
    struct event_base *base;

    base = event_base_new();
    ev = event_new(base, -1, EV_PERSIST, thread_callback, NULL);
    event_add(ev, NULL);        // wait forever
    event_base_dispatch(base);  // start event loop
    event_free(ev);
    event_base_free(base);
    pthread_exit(0);
}
As you can see, in my case the event for the main thread is a timer. The basic logic followed is as below:
Call evthread_use_pthreads() to initialize libevent for pthreads on Linux (my case). For Windows it is evthread_use_windows_threads(). Check out the documentation given in event.h itself.
Allocate an event_base structure on the heap, as instructed in the documentation. Make sure to check the return value for errors.
Same as above, but allocate the event structure itself. In my case I am not waiting on any file descriptor, so -1 is passed as the fd argument. Also, I want my event to persist, hence EV_PERSIST. The code for the callback functions is omitted.
Schedule the event for execution.
Start the event loop.
Free the resources when done.
The libevent version used in my case is libevent2 5.1.9, and you will also need the libevent_pthreads.so library for linking.
cheers.
That would work.
In the I/O callback, delegate the time-consuming job to another thread from a thread pool. The exact mechanics depend on the interface of the worker thread or the thread pool.
To communicate the result back from the worker thread to the I/O thread, use a pipe. The worker thread writes a pointer to the result object into the pipe, and the I/O thread wakes up and reads the pointer from the pipe. A sketch of this pipe mechanism follows below.
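A minimal sketch of that pipe mechanism, assuming libevent 2; the result_t type and the publish_result()/setup_result_event() names are placeholders of my own, not a specific library API:

#include <unistd.h>
#include <stdlib.h>
#include <event2/event.h>

typedef struct { int value; } result_t;   /* placeholder result type */

static int notify_pipe[2];                /* [0] = read end (I/O thread), [1] = write end (workers) */

/* Worker thread: when the long-running job is done, hand the result pointer to the I/O thread. */
static void publish_result(result_t *res)
{
    write(notify_pipe[1], &res, sizeof(res));    /* writes of a few bytes are atomic (<= PIPE_BUF) */
}

/* I/O thread: libevent calls this when the pipe becomes readable. */
static void result_ready_cb(evutil_socket_t fd, short events, void *arg)
{
    result_t *res;
    if (read(fd, &res, sizeof(res)) == (ssize_t)sizeof(res)) {
        /* use res->value here, e.g. send a response on the client socket */
        free(res);
    }
}

/* During setup, in the I/O thread: */
static void setup_result_event(struct event_base *base)
{
    pipe(notify_pipe);
    struct event *ev = event_new(base, notify_pipe[0], EV_READ | EV_PERSIST,
                                 result_ready_cb, NULL);
    event_add(ev, NULL);
}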
There is a multithreaded libevent example in this blog post:
http://www.roncemer.com/multi-threaded-libevent-server-example
His solution is, to quote:
The solution is to create one libevent event queue (AKA event_base) per active connection, each with its own event pump thread. This project does exactly that, giving you everything you need to write high-performance, multi-threaded, libevent-based socket servers.
NOTE: This is for libev, not libevent, but the idea may apply.
Here I present an example for the community. Please comment and let me know if there are any noticeable bugs. This example could include a signal handler for thread termination and graceful exit in the future.
//This program is a demo for using pthreads with libev.
//Try using timeout values as large as 1.0 and as small as 0.000001
//and notice the difference in the output
//(c) 2009 debuguo
//(c) 2013 enthusiasticgeek for stack overflow
//Free to distribute and improve the code. Leave credits intact
//compile using: gcc -g test.c -o test -lpthread -lev
#include <ev.h>
#include <stdbool.h>  // for false
#include <stdio.h>    // for puts
#include <stdlib.h>
#include <pthread.h>

pthread_mutex_t lock;
double timeout = 0.00001;
ev_timer timeout_watcher;
int timeout_count = 0;

ev_async async_watcher;
int async_count = 0;

struct ev_loop* loop2;

void* loop2thread(void* args)
{
    // now wait for events to arrive on the inner loop
    ev_loop(loop2, 0);
    return NULL;
}

static void async_cb (EV_P_ ev_async *w, int revents)
{
    //puts ("async ready");
    pthread_mutex_lock(&lock);    //Don't forget locking
    ++async_count;
    printf("async = %d, timeout = %d \n", async_count, timeout_count);
    pthread_mutex_unlock(&lock);  //Don't forget unlocking
}

static void timeout_cb (EV_P_ ev_timer *w, int revents) // Timer callback function
{
    //puts ("timeout");
    if (ev_async_pending(&async_watcher) == false) { //the event has not yet been processed (or even noted) by the event loop? (i.e. has it been serviced? If yes then proceed)
        ev_async_send(loop2, &async_watcher); //Sends/signals/activates the given ev_async watcher, that is, feeds an EV_ASYNC event on the watcher into the event loop.
    }
    pthread_mutex_lock(&lock);    //Don't forget locking
    ++timeout_count;
    pthread_mutex_unlock(&lock);  //Don't forget unlocking
    w->repeat = timeout;
    ev_timer_again(loop, &timeout_watcher); //Start the timer again.
}

int main (int argc, char** argv)
{
    if (argc < 2) {
        puts("Timeout value missing.\n./demo <timeout>");
        return -1;
    }
    timeout = atof(argv[1]);

    struct ev_loop *loop = EV_DEFAULT;  //or ev_default_loop (0);

    //Initialize pthread
    pthread_mutex_init(&lock, NULL);
    pthread_t thread;

    // This loop sits in the pthread
    loop2 = ev_loop_new(0);

    //This block is specifically used for pre-empting the thread (i.e. temporary interruption and suspension of a task,
    //without asking for its cooperation, with the intention to resume that task later) in a thread-safe way
    ev_async_init(&async_watcher, async_cb);
    ev_async_start(loop2, &async_watcher);
    pthread_create(&thread, NULL, loop2thread, NULL);

    ev_timer_init (&timeout_watcher, timeout_cb, timeout, 0.); // Non-repeating timer. The timer starts repeating in the timeout callback function
    ev_timer_start (loop, &timeout_watcher);

    // now wait for events to arrive on the main loop
    ev_loop(loop, 0);

    //Wait on threads for execution
    pthread_join(thread, NULL);

    pthread_mutex_destroy(&lock);
    return 0;
}