I am currently trying to develop a proxy program that passes data from an SPI bus to TCP and vice versa. I would like to know if the method I intend to use is a good/intended way of using the FreeRTOS library. The program runs as the SPI master, with a GPIO pin triggered by the slave when the slave wants to send data, since SPI transactions can only be initiated by the master.
#include <stdatomic.h>

char buffer1[128];
char buffer2[128];
static SemaphoreHandle_t rdySem1; // semaphore
static SemaphoreHandle_t rdySem2; // semaphore
volatile atomic_int GPIO_interrupt_pin;
void SPI_task(void* arg)
{
    while (1)
    {
        if (GPIO_interrupt_pin)
        {
            //TODO: read data from SPI bus and place in buffer1
            xSemaphoreGive(rdySem1);
            GPIO_interrupt_pin = 0;
        }
        xSemaphoreTake(rdySem2, portMAX_DELAY);
        //TODO: send data from buffer2[] to SPI bus
    }
}
void tcp_task(void* arg)
{
    while (1)
    {
        int len;
        char rx_buffer[128];
        len = recv(sock, rx_buffer, sizeof(rx_buffer) - 1, 0);
        if (len > 0)
        {
            //TODO: process data from TCP socket and place in buffer2
            xSemaphoreGive(rdySem2);
        }
        xSemaphoreTake(rdySem1, portMAX_DELAY);
        //TODO: send data from buffer1[] to TCP
    }
}
//only runs when the GPIO pin interrupt triggers
static void isr_handler(void* arg)
{
    GPIO_interrupt_pin = 1;
}
Also, I am not very familiar with how FreeRTOS works, but I believe xSemaphoreTake() is a blocking call, and it would not work in this context unless I used a non-blocking version of xSemaphoreTake(). Can any kind soul point me in the right direction? Much appreciated.
Your current pattern has a few problems, but the fundamental idea of using semaphores can solve part of this. However, it is worth restructuring your code so that each task only waits on its respective receive and performs the complementary transmit upon reception, instead of trying to hand the data off to the other task. Having one task wait on both TCP recv and SPI-packet-to-TCP sending only works if you are guaranteed a strict ping-pong pattern (first data arrives over TCP to send to SPI, then data comes back); truly asynchronous communication requires being ready to wake on either event. As written, tcp_task can be stuck in recv when an SPI packet comes in, and may never send that SPI packet over TCP until something is received over TCP.
Instead, let each task wait only on its respective receiving function and send the data to the other side immediately. If there are mutual-exclusion concerns, use a mutex to guard the actual transactions. Also note that even though GPIO_interrupt_pin is atomic, without a test-and-set there is a risk of it being cleared incorrectly: an interrupt could arrive between the test and the write of 0, and that event would be lost. Fortunately, FreeRTOS provides a nicer mechanism for this in the form of task notifications (and the API used below behaves very much like a binary semaphore).
void SPI_task(void *arg)
{
    while (1) {
        // Wait for the SPI data-ready notification from the ISR
        // (pdFALSE: decrement the count, so no give is ever lost)
        ulTaskNotifyTake(pdFALSE, portMAX_DELAY);
        // There is SPI data; grab a mutex to avoid conflicting SPI transactions
        xSemaphoreTake(spi_mutex, portMAX_DELAY);
        char data[128];
        spi_recv(data, sizeof(data)); // whatever the SPI function is; ideally get a length out of it
        xSemaphoreGive(spi_mutex);
        // Send the data from this task; no need to have the other task handle it (probably)
        send(sock, data, sizeof(data), 0);
    }
}
void tcp_task(void *arg)
{
    while (1) {
        char data[128];
        int len = recv(sock, data, sizeof(data), 0);
        if (len > 0) {
            // Just grab the SPI mutex and do the transfer here
            xSemaphoreTake(spi_mutex, portMAX_DELAY);
            spi_send(data, len);
            xSemaphoreGive(spi_mutex);
        }
    }
}
static void isr_handler(void *arg)
{
    BaseType_t xHigherPriorityTaskWoken = pdFALSE;
    vTaskNotifyGiveFromISR(SPI_task_handle, &xHigherPriorityTaskWoken);
    // Request a context switch if the notification woke a higher-priority task
    portYIELD_FROM_ISR(xHigherPriorityTaskWoken);
}
The above is a simplified example; there's a bit more depth to task notifications, which you can read about here:
https://www.freertos.org/RTOS-task-notifications.html
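For completeness, here is a minimal sketch of the setup the example assumes. The handle name, stack sizes, and priorities are placeholders, and the GPIO ISR registration is platform-specific:

static TaskHandle_t SPI_task_handle;   // notified by isr_handler
static SemaphoreHandle_t spi_mutex;    // guards SPI transactions

void app_init(void)
{
    spi_mutex = xSemaphoreCreateMutex();
    xTaskCreate(SPI_task, "spi", 2048, NULL, 5, &SPI_task_handle);
    xTaskCreate(tcp_task, "tcp", 2048, NULL, 5, NULL);
    // Register isr_handler on the slave's data-ready GPIO here using
    // your platform's GPIO interrupt API (e.g. gpio_isr_handler_add on ESP-IDF).
}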
I am learning about FreeRTOS and would like to understand more about queues and semaphores, and when it is better to use one or the other.
For example, I have multiple tasks trying to talk to my I2C GPIO expander (toggling pins ON and OFF).
For this to work properly, I have decided to use a semaphore: while an I2C communication is in progress, the semaphore is held, and it is given back only when the I2C communication is fully complete. An example function:
void write_led1_state(bool state, bool current_state){
    if(state != current_state){
        if(xSemaphoreTake(i2c_mutex, 1000) == pdTRUE){
            i2c_expander(led1, state);
            xSemaphoreGive(i2c_mutex);
        }
        else{
            printf("semaphore is busy, write LED1 state failed\n");
        }
    }
}

void write_led2_state(bool state, bool current_state){
    if(state != current_state){
        if(xSemaphoreTake(i2c_mutex, 1000) == pdTRUE){
            i2c_expander(led2, state);
            xSemaphoreGive(i2c_mutex);
        }
        else{
            printf("semaphore is busy, write LED2 state failed\n");
        }
    }
}
Let's imagine a scenario where I have two FreeRTOS tasks running. One task is trying to write_led1_state and the other is trying to write_led2_state at the same time. The potential issue is that one of those functions will fail because the semaphore is held by the other task. How do I ensure that once the function is called, even though the semaphore is not free at that particular moment, the LED will still be set once the semaphore frees?
For this particular example, is it better to implement a queue instead? For example, write_led1_state and write_led2_state would both write a value to the queue: value = 1 would mean that led1 needs to be turned ON, and value = 2 that led2 needs turning ON. This way, both functions can be called at once, and the queue-handler task ensures that the required LEDs are turned ON based on the values received.
These are the two different approaches I can think of. Could someone suggest which one is more appropriate?
UPDATE
Method 2 (Using queues)
From what I understand, the solution using queues could look something like this (keep in mind that this is just pseudo code):
// The task below is created when we initialise the I2C interface. It runs
// in the background, continuously waiting for messages.
void I2C_queue_handler(void *parameters) {
    uint8_t instruction;
    while (1) {
        // Block until something arrives on the queue (portMAX_DELAY avoids busy-polling)
        if (xQueueReceive(i2c_queue, (void *)&instruction, portMAX_DELAY) == pdTRUE) {
            // Something has been received; determine what to do based on the instruction value
            if (instruction == 0) {
                i2c_expander(led1, 1); // this toggles led1
            }
            else if (instruction == 1) {
                i2c_expander(led2, 1); // this toggles led2
            }
        }
    }
}
And then my two functions from above can be modified to use a queue instead of a semaphore:
void write_led1_state(bool state, bool current_state){
    if(state != current_state){
        uint8_t instruction = 0;
        xQueueSend(i2c_queue, (void *)&instruction, 10);
    }
}

void write_led2_state(bool state, bool current_state){
    if(state != current_state){
        uint8_t instruction = 1;
        xQueueSend(i2c_queue, (void *)&instruction, 10);
    }
}
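One detail the pseudo code leaves out is creating the queue before any task uses it. A minimal sketch (queue depth, stack size, and priority chosen arbitrarily):

QueueHandle_t i2c_queue;

void i2c_queue_init(void)
{
    // Room for 8 pending instructions, each one byte wide
    i2c_queue = xQueueCreate(8, sizeof(uint8_t));
    xTaskCreate(I2C_queue_handler, "i2c_handler", 2048, NULL, 5, NULL);
}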
Is my queue implementation above correct? @Some programmer dude mentioned that I will need some sort of synchronization, but I am not sure why that is required, since my I2C_queue_handler ensures that even when multiple tasks try to toggle the LEDs, only one toggle happens at a time. If both tasks try to toggle an LED, the first instruction sent to the queue is acted on while the other is saved in the queue and waits its turn. Is my understanding correct?
Scenario: a client is sending data, and the server is receiving the data from the client via the Ethernet layer (UDP). When the server receives data from the client at the IP layer (kernel), it interrupts the kernel, and the kernel has to process the data from the client, so I want to create an interrupt service function to catch the interrupt from the network interface card.
I am using the InterruptAttach API to handle the interrupt from the network interface card, and a sigevent structure to call the specific function.
http://www.qnx.com/developers/docs/6.3.0SP3/neutrino/lib_ref/i/interruptattach.html#HandlerFunction
Is this the right way to handle interrupts in QNX?
volatile int id1, id2, id3;

const struct sigevent *handler1(void *area, int id)
{
    volatile double KernelStartExecutionTime;
    KernelStartExecutionTime = GetTimeStamp(); // record when the handler starts executing
    TASK1(Task2ms_Raster);
    return (NULL);
}

const struct sigevent *handler2(void *area, int id)
{
    volatile double KernelStartExecutionTime;
    KernelStartExecutionTime = GetTimeStamp(); // record when the handler starts executing
    TASK2(Task10ms_Raster);
    return (NULL);
}

const struct sigevent *handler3(void *area, int id)
{
    volatile double KernelStartExecutionTime;
    KernelStartExecutionTime = GetTimeStamp(); // record when the handler starts executing
    TASK3(Task100ms_Raster);
    return (NULL);
}
/* InterruptAttach() attaches an interrupt handler to the hardware interrupt
   specified by intr (i.e. the IRQ); here handler1/2/3 are all attached to irq 0 */
void ISR(void)
{
    volatile int irq = 0; // 0: a clock that runs at the resolution set by ClockPeriod()
    ThreadCtl(_NTO_TCTL_IO, NULL);
    id1 = InterruptAttach(irq, &handler1, NULL, 0, 0);
    id2 = InterruptAttach(irq, &handler2, NULL, 0, 0);
    id3 = InterruptAttach(irq, &handler3, NULL, 0, 0);
}
int main(int argc, char *argv[])
{
    Xcp_Initialize();
    CreateSocket();
    ISR(); // attach the interrupt handlers
    return 0;
}
Another question: if I want to call another function via the sigevent structure, should I use another ISR for that (i.e. how do I handle multiple functions from one interrupt)?
I modified my code as above. Is it efficient to do it this way: one ISR function using the InterruptAttach API with three different handlers?
This is a bad approach: interrupt (IRQ) handlers are not interruptible. That means: 1. your computer will lock up when you do a lot of work in them, and 2. you can't call every function from them.
The correct approach is to receive the IRQ and call a handler. The handler should create a memory structure, fill it with the details of what needs to be done, and add this "task data" to a queue. A background thread can then wait for elements in the queue and do the work.
That way, IRQ handlers stay small and fast. Your background thread can be as complex as you like. If the thread has a bug, the worst that can happen is that it breaks (have the IRQ handler throw away events when the queue is full).
Note that the queue must be implemented in such a way that adding elements never blocks. Check the documentation; there should already be something that allows several threads to exchange data, and the same mechanism can be used from IRQ handlers.
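A rough sketch of that split, in plain C (names invented; on QNX the handler would typically return a sigevent so the worker can block in InterruptWait() or on a pulse rather than spinning):

#define QLEN 64

struct work_item { int task_id; };

static struct work_item workq[QLEN];
static volatile unsigned q_head;   /* written only by the IRQ handler */
static volatile unsigned q_tail;   /* written only by the worker thread */

/* Called from the IRQ handler: never blocks, drops the event if full. */
static int enqueue_work(int task_id)
{
    unsigned next = (q_head + 1) % QLEN;
    if (next == q_tail)
        return -1;                 /* queue full: throw the event away */
    workq[q_head].task_id = task_id;
    q_head = next;
    return 0;
}

/* Background worker thread: does the real (possibly slow) work. */
static void *worker(void *arg)
{
    for (;;) {
        while (q_tail == q_head)
            ;                      /* in real code, block here (InterruptWait/pulse) */
        int task_id = workq[q_tail].task_id;
        q_tail = (q_tail + 1) % QLEN;
        /* dispatch: e.g. run TASK1/TASK2/TASK3 based on task_id */
    }
    return NULL;
}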
I am working on a project in which I need to read from 80 or more clients, write their output into a file continuously, and then read that new data for another task. My question is: should I use select or multithreading?
I also tried multithreading using read/fgets and write/fputs calls, but as they are blocking calls, only one operation can be performed at a time, so it is not feasible. Any idea is much appreciated.
Update 1: I tried to implement this using a condition variable. I was able to get it working, but it writes and reads only one at a time: when another client tries to write, it cannot do so until I quit the first thread. I do not understand this; it should work now. What mistake am I making?
Update 2: Thanks all. I succeeded in implementing this model using a mutex and condition variable.
The updated code is below:
/* header file */
char *mailbox;
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t writer = PTHREAD_COND_INITIALIZER;
int main(int argc, char *argv[])
{
    pthread_t t1, t2;
    pthread_attr_t attr;
    int fd, sock, *newfd;
    struct sockaddr_in cliaddr;
    socklen_t clilen;
    void *read_file(void *);
    void *update_file(void *);

    // making a server socket
    if((fd = make_server(atoi(argv[1]))) == -1)
        oops("Unable to make server", 1)

    // detaching threads
    pthread_attr_init(&attr);
    pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED);

    // opening thread for reading
    pthread_create(&t2, &attr, read_file, NULL);
    while(1)
    {
        clilen = sizeof(cliaddr);
        // accepting request
        sock = accept(fd, (struct sockaddr *)&cliaddr, &clilen);
        // error comparison against failure of request and INT
        if(sock == -1 && errno != EINTR)
            oops("accept", 2)
        else if(sock == -1 && errno == EINTR)
            oops("Pressed INT", 3)
        newfd = (int *)malloc(sizeof(int));
        *newfd = sock;
        // creating a thread per request; the thread frees newfd
        pthread_create(&t1, &attr, update_file, (void *)newfd);
    }
    return 0; // never reached
}
void *read_file(void *m)
{
    pthread_mutex_lock(&lock);
    while(1)
    {
        printf("Waiting for lock.\n");
        pthread_cond_wait(&writer, &lock); // releases the mutex while waiting
        printf("I am reading here.\n");
        printf("%s", mailbox);
        mailbox = NULL;
        pthread_cond_signal(&writer);
    }
}
void *update_file(void *m)
{
    int sock = *(int *)m;
    free(m); // this thread owns the fd pointer malloc'd in main
    int fs;
    int nread;
    char buffer[BUFSIZ];
    if((fs = open("database.txt", O_RDWR)) == -1)
        oops("Unable to open file", 4)
    while(1)
    {
        pthread_mutex_lock(&lock);
        write(1, "Waiting to get writer lock.\n", 28);
        while(mailbox != NULL) // re-check after wakeup (guards against spurious wakeups)
            pthread_cond_wait(&writer, &lock);
        lseek(fs, 0, SEEK_END);
        printf("Reading from socket.\n");
        nread = read(sock, buffer, BUFSIZ);
        printf("Writing in file.\n");
        write(fs, buffer, nread);
        mailbox = buffer;
        pthread_cond_signal(&writer);
        pthread_mutex_unlock(&lock);
    }
    close(fs);
}
I think for the networking portion of things, either thread-per-client or multiplexed single-threaded would work fine.
As for the disk I/O, you are right that disk I/O operations are blocking operations, and if your data throughput is high enough (and/or your hard drive is slow enough), they can slow down your network operations if the disk I/O is done synchronously.
If that is an actual problem for you (and you should measure first to verify that it really is a problem; no point complicating things if you don't need to), the first thing I would try to ameliorate the problem would be to make your file's output-buffer larger by calling setbuffer. With a large enough buffer, it may be possible for the C runtime library to hide any latency caused by disk access.
If larger buffers aren't sufficient, the next thing I'd try is creating one or more threads dedicated to reading and/or writing data. That is, when your network thread wants to save data to disk, rather than calling fputs()/write() directly, it allocates a buffer containing the data it wants written, and passes that buffer to the IO-write thread via a (mutex-protected or lockless) FIFO queue. The I/O thread then pops that buffer out of the queue, writes the data to the disk, and frees the buffer. The I/O thread can afford to be occasionally slow in writing because no other threads are blocked waiting for the writes to complete. Threaded reading from disk is a little more complex, but basically the IO-read thread would fill up one or more buffers of in-memory data for the network thread to drain; and whenever the network thread drained some of the data out of the buffer, it would signal the IO-read thread to refill the buffer up to the top again. That way (ideally) there is always plenty of input-data already present in RAM whenever the network thread needs to send some to a client.
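To make that concrete, here is a minimal sketch of the IO-write side: a mutex-protected FIFO of malloc'd buffers (all names here are invented):

#include <pthread.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

typedef struct wbuf { struct wbuf *next; size_t len; char data[]; } wbuf;

static wbuf *q_head, *q_tail;
static pthread_mutex_t q_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  q_cond = PTHREAD_COND_INITIALIZER;

/* Called by the network thread instead of write(): O(1), never touches disk. */
void queue_write(const char *data, size_t len)
{
    wbuf *b = malloc(sizeof(*b) + len);
    b->next = NULL;
    b->len = len;
    memcpy(b->data, data, len);
    pthread_mutex_lock(&q_lock);
    if (q_tail) q_tail->next = b; else q_head = b;
    q_tail = b;
    pthread_cond_signal(&q_cond);
    pthread_mutex_unlock(&q_lock);
}

/* Dedicated IO thread: pops buffers and does the (slow) disk writes. */
void *io_writer(void *arg)
{
    int fd = *(int *)arg;
    for (;;) {
        pthread_mutex_lock(&q_lock);
        while (q_head == NULL)
            pthread_cond_wait(&q_cond, &q_lock);
        wbuf *b = q_head;
        q_head = b->next;
        if (q_head == NULL) q_tail = NULL;
        pthread_mutex_unlock(&q_lock);
        write(fd, b->data, b->len);
        free(b);
    }
    return NULL;
}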
Note that the multithreaded method above is a bit tricky to get right, since it involves inter-thread synchronization and communication; so don't do it unless there isn't any simpler alternative that will suffice.
Either select/poll or multithreading is OK if your program solves the problem.
I guess your program will be I/O-bound as the number of clients grows, since you read/write the disk frequently. So multiple threads doing the I/O will not speed things up; polling may be the better choice then.
You can set a socket that you get from accept to be non-blocking. Then it is easy to use select to find out when there is data, read the number of bytes that are available and process them.
With (only) 80 clients, I see no reason to expect any significant difference from using threads unless you get very different amounts of data from different clients.
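For example, making the accepted socket non-blocking and waiting for readability might look like this (a sketch, error handling omitted; handle_client is an invented name):

#include <fcntl.h>
#include <stdio.h>
#include <sys/select.h>
#include <unistd.h>

void handle_client(int sock)
{
    /* Make the accepted socket non-blocking. */
    int flags = fcntl(sock, F_GETFL, 0);
    fcntl(sock, F_SETFL, flags | O_NONBLOCK);

    /* Wait until data is available, then read what is there. */
    fd_set rfds;
    FD_ZERO(&rfds);
    FD_SET(sock, &rfds);
    if (select(sock + 1, &rfds, NULL, NULL, NULL) > 0 && FD_ISSET(sock, &rfds)) {
        char buf[BUFSIZ];
        ssize_t n = read(sock, buf, sizeof(buf));
        /* n > 0: process n bytes; n == 0: client closed; n < 0: error */
        (void)n;
    }
}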
In my app I have a pthread running a while(1) loop that reads from a client socket, plus a serial callback function. My app receives messages from a serial port (like /dev/ttyS0) and from a socket. The problem: the app crashes after receiving some messages from the serial port, while the socket is receiving nothing. But if I comment out the thread creation, the app works fine.
code draft:
// client, Serial, Thread_Socket and SERIAL_PORT are declared elsewhere

// Socket thread
static void *Socket(void *arg)
{
    char buffer[256];
    int ret;
    // socket initialization
    while (1)
    {
        ret = read(client, buffer, sizeof(buffer) - 1);
        // do something
    }
}

// serial callback
static void SerialCallback(int id, unsigned char *buffer, int length)
{
    // do something
}

// main
int main(void)
{
    // start and configure the serial callback
    cssl_start();
    Serial = cssl_open(SERIAL_PORT, SerialCallback, 0, 115200, 8, 0, 1);

    // create the pthread
    // If I comment out the line below, the app works fine
    pthread_create(&Thread_Socket, NULL, Socket, NULL);

    while (1)
    {
    }
}
Notes:
I use the cssl library (http://sourceforge.net/projects/cssl/) to deal with the serial port. This library uses a real-time signal.
For test purposes I use socat to generate pseudo-terminals (like /dev/pts/XX).
The serial callback is called each time the serial port receives one or more bytes.
I am using cutecom to send messages to the serial port.
Added new test information on 2012-07-16:
First test: I replaced the read call with a while(1); and the problem persisted (so the problem is not related to the read function).
Second test: using the full code (the example above) with two external USB/serial converters connected in loopback, it worked correctly.
As @Nikolai N Fetissov said, the program was breaking because blocking calls were being interrupted by the signal (EINTR). I looked into the cssl library code and changed the signal flags from sa.sa_flags = SA_SIGINFO; to sa.sa_flags = SA_SIGINFO | SA_RESTART;. That worked.
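For reference, the change inside the library looks roughly like this (a sketch; the handler name and signal number are assumptions):

#include <signal.h>

static void install_serial_handler(void (*serial_sig_handler)(int, siginfo_t *, void *))
{
    struct sigaction sa;
    sa.sa_sigaction = serial_sig_handler;
    sigemptyset(&sa.sa_mask);
    /* SA_RESTART makes blocking calls like read() restart transparently
       instead of failing with EINTR when the serial signal arrives. */
    sa.sa_flags = SA_SIGINFO | SA_RESTART;
    sigaction(SIGRTMIN, &sa, NULL);
}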
I contacted Marcin Siennicki, a cssl project developer, and sent him the link to this post.
Thanks for the comments.
I have a fairly basic TCP server keeping track of a couple of connections and recv'ing data when it's available. However, I'd like to artificially trigger an event from within the program itself, so I can send my TCP server data as if it came from sock1 or sock2 but in reality came from somewhere else. Is this possible, or at all clear?
struct pollfd fds[2];
fds[0].fd = sock1;
fds[1].fd = sock2;

while (true) {
    int res = poll(fds, 2, timeout);
    if ((fds[0].revents & POLLIN)) {
        //ready to recv data from sock1
    }
    if ((fds[1].revents & POLLIN)) {
        //ready to recv data from sock2
    }
}
Create a pair of connected sockets (see socketpair(2)), and wait for events on one of the sockets in your poll loop. When you want to wake up the poll thread, write a single byte to the other socket. When the polling loop wakes up, read the byte, do whatever was required and continue.
This is more like a design question -- your polling loop should probably abstract the poll method to allow trapping on other external signals, like from kill -USR1.
If you really want to trigger port traffic, you'll likely want to use netcat to send data to the socket.
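A sketch of the socketpair approach, extending the poll loop above with a third descriptor:

#include <poll.h>
#include <sys/socket.h>
#include <unistd.h>

int wakeup[2];   /* [0] is polled; writing a byte to [1] wakes the loop */

void poll_loop(int sock1, int sock2, int timeout)
{
    socketpair(AF_UNIX, SOCK_STREAM, 0, wakeup);

    struct pollfd fds[3];
    fds[0].fd = sock1;     fds[0].events = POLLIN;
    fds[1].fd = sock2;     fds[1].events = POLLIN;
    fds[2].fd = wakeup[0]; fds[2].events = POLLIN;

    for (;;) {
        if (poll(fds, 3, timeout) <= 0)
            continue;
        if (fds[0].revents & POLLIN) { /* recv data from sock1 */ }
        if (fds[1].revents & POLLIN) { /* recv data from sock2 */ }
        if (fds[2].revents & POLLIN) {
            char b;
            read(wakeup[0], &b, 1);    /* consume the wake-up byte */
            /* handle the injected event here */
        }
    }
}

/* From any other thread: write(wakeup[1], "x", 1); to wake the loop. */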
I would consider something like this:
struct pollfd fds[2];
fds[0].fd = sock1;
fds[0].events = POLLIN;
fds[1].fd = sock2;
fds[1].events = POLLIN;

for (;;) {
    int result = poll(fds, 2, timeout);
    if (result > 0) {
        if ((fds[0].revents & POLLIN)) {
            /* Process data from sock1. */
        }
        if ((fds[1].revents & POLLIN)) {
            /* Process data from sock2. */
        }
    } else if (result == 0) {
        /* Timed out: do anything else you like, including
           processing data that wasn't from a real socket. */
    }
}
Notes:
don't forget to initialise your events field
for(;;) is more idiomatic C than while(true) and doesn't require true to be defined