Polling task in VxWorks - loops

I want to write a task that polls some IOs. It must not block the CPU, but it needs to check the IOs every microsecond or so.
I'm a relative VxWorks newbie and just realized that inserting a usleep(1); into my polling loop probably won't do what I need it to do. How do I best go about this?
I have figured out that sysClkRateGet() returns 60, which isn't good enough for me. I need to poll and react fast, but I can't block the other things going on in the CPU, so I guess taskDelay() won't do it for me. Is there anything else that allows my task to sleep for less than 1/60 of a second?
Edit:
I think I've figured out that it's much smarter to have a timer that kicks in every 1 us and executes my short polling function.
I set up the timer like this:
#include <time.h>    /* timer_create(), timer_settime() */
#include <stdio.h>
#include <string.h>
#include <errno.h>

timer_t polltimerID;
struct itimerspec poll_time;

poll_time.it_value.tv_sec     = 0;
poll_time.it_value.tv_nsec    = 1000;   // first expiry after 1us
poll_time.it_interval.tv_sec  = 0;
poll_time.it_interval.tv_nsec = 1000;   // then execute it every 1us

if (timer_create(CLOCK_REALTIME, NULL, &polltimerID))
    printf("problem in timer_create(): %s\n", strerror(errno));
if (timer_connect(polltimerID, MyPollFunction, 0))   /* VxWorks extension */
    printf("problem in timer_connect(): %s\n", strerror(errno));
if (timer_settime(polltimerID, 0, &poll_time, NULL))
    printf("problem in timer_settime(): %s\n", strerror(errno));
But I'm not exactly sure yet what the priority of the timer is, or whether (and how) it can preempt the currently running task. Anyone?

The POSIX timer won't do what you want, as it's driven off the system clock (which, as you pointed out, runs at 60 Hz).
There is no "built-in" OS function that will give you a 100 kHz timer.
You will have to find some unused hardware timer on your board (the CPU reference manual is useful here).
You will have to configure the timer registers for your 100 kHz rate (again, the reference manual is your friend).
You will have to hook the timer's interrupt line up to your function: intConnect (vector, fn, arg).
The VxWorks Kernel Programmer's Guide has information about writing interrupt service routines.
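For illustration, here is a rough sketch of what hooking a hardware timer interrupt can look like. The vector number, the timer-register setup, and the intEnable() call are board-specific placeholders, so check your BSP and the CPU reference manual for the real values:
#include <vxWorks.h>
#include <intLib.h>
#include <iv.h>

#define POLL_TIMER_INT_VEC 0x20   /* hypothetical vector number - board-specific */

void myPollIsr(int arg)
{
    /* keep this short: sample the IOs and acknowledge the timer interrupt */
}

STATUS hookPollTimer(void)
{
    if (intConnect(INUM_TO_IVEC(POLL_TIMER_INT_VEC), (VOIDFUNCPTR)myPollIsr, 0) != OK)
        return ERROR;
    /* program the board's timer registers for 100 kHz here, then enable
       the interrupt level, e.g. intEnable(POLL_TIMER_INT_LVL) */
    return OK;
}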

Related

Watchdog timeout is too short

I have a conceptual question. I'm currently working on a project that has to implement a watchdog timer to ensure that the code works properly; I'm using an STM32F4. From the datasheet I can see that the maximum timeout allowed by the IWDG (independent watchdog) is 32768 ms. I'm using a SIM800L for communication via GPRS, and some communications take longer than that. During this process the MCU is busy waiting for the answers, so it cannot reset the IWDG. So I was thinking of deactivating the watchdog in those parts, or implementing my own watchdog with a timer and a simple reset function so I can have longer timeout periods.
My question is:
Is this a sign of a flaw in my code design? Should I instead adapt my code to reset the IWDG every 30 seconds or so and never deactivate it? Is implementing my own WDG with a timer bad practice?
Is this a sign of a flaw in my code design? Should I instead adapt my code to renew the IWDG every 30 seconds or so?
No, you simply need to write the key register or load a new value into the downcounter before the downcounter reaches zero. That tells the watchdog that your software is alive and no reset is needed.
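For reference, on the STM32 refreshing the IWDG is a single key write (0xAAAA is the documented refresh key); a minimal sketch using the CMSIS register interface:
#include "stm32f4xx.h"

/* Reload the IWDG downcounter from the reload register before it reaches zero. */
static inline void iwdg_kick(void)
{
    IWDG->KR = 0xAAAA;   /* the documented IWDG refresh key */
}
Call iwdg_kick() from your main loop (or between non-blocking steps of the GPRS exchange) more often than the configured timeout.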
during this process the UC is busy waiting for the answers, so it cannot reset the IWDG
This means that your implementation is bad. You need to implement it in a non-blocking way. It is not difficult.
Is implementing my own WDG with a timer bad practice?
It is a very bad idea. What will happen if your program hard-faults? Your own watchdog will be useless. The hardware watchdog is also clocked from its own independent clock source - so even if your program does something wrong with the clocks, it will still work.
Programs should never deactivate the watchdog at run-time, as that defeats the purpose of having a watchdog in the first place. Many watchdog peripherals don't even allow you to disable them once enabled.
You cannot implement your own watchdog using timers, because the watchdog hardware deliberately runs from a different timer than those available to the application programmer. If your program halts for whatever reason, a timer-based software solution halts with it. Forget about implementing watchdogs using on-chip timers or software. You can only implement your own watchdog using external hardware, such as a binary counter IC or a monostable multivibrator IC.
Is this a sign of a flaw in my code design?
It is - you should not busy-wait for external resources to become available. Rather than
while(some_serial_bus == BUSY) {} // bad, busy wait
you should be doing:
for (;;)
{
    kick_wdog();
    if (some_serial_bus != BUSY)  // good, polling
    {
        do_stuff();
    }
}
When implementing the driver for the external serial bus, you should provide a method to check whether data is available, and then let the caller decide whether to busy-wait on it or not. An ideal, properly written driver should never contain any busy-waits, nor should it contain any "sleep/delay" calls.
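For instance, a driver interface along these lines - all the names here are purely illustrative - leaves the waiting policy to the caller:
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical non-blocking driver interface. */
bool serial_data_available(void);                 /* returns immediately */
size_t serial_read(uint8_t *buf, size_t maxlen);  /* reads what is buffered, never waits */

/* The caller decides how to wait - and can keep the watchdog happy meanwhile: */
while (!serial_data_available())
{
    kick_wdog();
    /* ... run other tasks ... */
}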
I don't think you can stop the IWDG once it starts (nor would you want to). I'm not familiar with the SIM800L, but your best bet would be to find a way to kick the watchdog intermittently while GPRS is operating. You want to do this in firmware, not hardware. (Don't use a HW timer to kick the WDT, because if your SW crashes, the HW timer could keep doing its thing.) Alternatively, the STM32F4 also has a window watchdog (WWDG) timer you could use. You might be able to configure longer window times with the WWDG.

Linux timer interval

I want to run a timer with an interval of 5 ms. I created a Linux timer, and when sigalrm_handler is called I check the elapsed time since the previous call. I'm getting times like 4163, 4422, 4266, 4443, 4470, 4503, 4288 microseconds when I want intervals of about 5000 microseconds with the least possible error. I don't know why the interval is not constant, and why it is noticeably shorter than it should be.
Here is my code:
#include <signal.h>
#include <sys/time.h>

static int time_count;
static int counter;
struct itimerval timer = {0};

void sigalrm_handler(int signum)
{
    Serial.print("SIGALRM received, time: ");
    Serial.println(micros() - time_count);
    time_count = micros();
}

void setup() {
    Serial.begin(9600);
    timer.it_value.tv_sec = 1;
    timer.it_interval.tv_usec = 5000;
    signal(SIGALRM, &sigalrm_handler);
    setitimer(ITIMER_REAL, &timer, NULL);
    time_count = micros();
}
I want to run a timer with interval of 5 ms.
You probably cannot get that period reliably, because it is smaller than what reasonable PC hardware can handle.
As a rule of thumb, 50 Hz (or perhaps 100 Hz) is probably the highest reliable frequency you can get. And it is not a matter of software, but of hardware.
Think of your typical processor cache (a few megabytes): you could need a few milliseconds to refill it. Or think of the time needed to handle a page fault; that would probably take more than a millisecond.
And the Intel Edison is not a top-fast processor. I wouldn't be surprised if converting a number to a string and displaying that string on some screen took about a millisecond (but I leave that to you to check). This could explain your figures.
Regarding software, see also time(7) (or consider perhaps some busy-waiting approach inside the kernel; I don't recommend that).
Look also into /proc/interrupts (see proc(5)) by running cat /proc/interrupts a few times in a shell. You'll probably see that the kernel gets interrupted less frequently than once every one or a few milliseconds.
BTW, your signal handler calls non-async-signal-safe functions, so the behavior is undefined. Read signal(7) & signal-safety(7).
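The usual async-signal-safe pattern is to do nothing in the handler except set a flag, and do the measuring and printing in the main loop; a minimal sketch:
#include <signal.h>

static volatile sig_atomic_t alarm_fired = 0;

void sigalrm_handler(int signum)
{
    (void)signum;
    alarm_fired = 1;   /* writing a volatile sig_atomic_t is async-signal-safe */
}

/* in the main loop:
   if (alarm_fired) { alarm_fired = 0; measure and print here }
*/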
So it looks like your entire approach is wrong.
Maybe you want some RTOS, at least if you need hard real-time guarantees (and then you might consider upgrading your hardware to something faster and more costly).

Contiki delay in seconds

I am trying to develop a piece of Contiki code in which I need to wait three seconds for a transducer output. Although this may sound quite un-transducer-like, at development time I want to simulate the behavior at human-readable speeds, and hence I need to set the timer to, say, 3 seconds.
The Contiki timer library is pretty well documented and has a good series of examples covering the creation, setting and resetting of a timer. However, if I have code like the following:
timer_set(&transducerOutputWaitTimer, 3 * CLOCK_SECOND); /* timer_set() takes the interval too */
bool if_blk_executed = false;
if (timer_expired(&transducerOutputWaitTimer)) {
    if_blk_executed = true;
    // do something
}
if (if_blk_executed)
    printf("Sunrise");
else
    printf("It has not dawned yet");
Now the expiry is not triggered immediately after the timer is set, so the if block is never executed. Effectively, it never dawns.
Now there are two ways in which I can get the system to wait. One, by adding a while loop on the timer like this:
while (!timer_expired(&transducerOutputWaitTimer)) {};
//do something
or
cpu_delay_usecs(mytimerdur_in_secs * 1000000);
//do something
I do not see that either approach is elegant. While one wastes CPU cycles, the other forces an unnecessarily long delay.
Is there any better way? I know that clock
Attached to this question are the following two:
How can I trigger one Contiki protothread from another? If I can do this, I can get the interrupt that causes the transducerOutput to invoke a process from which I can trigger the etimer and process events.
What exactly does the CPU delay mean? Does this mean that the entire CPU clock cycles get held back for the given duration? If yes, how are other processes running currently in the system affected?
Updates
Update 1: the while method did not work. The code possibly went into an infinite loop.
Update 2: I tried with a clock_delay(3*CLOCK_SECOND) approach after setting my timer. It worked. However, in this case, why do I need the timer method at all?
Update 3 (This changes the context of the answer, hence adding this comment as per the suggestion)
My timer needs to be used outside a process, in a different void function. In that case, I need to use the timer library rather than an etimer (which is specific to a process). So how do I get my method to wait for the desired time in such cases?
There are several Contiki abstractions built on top of the timer library - there is usually no need to use the struct timer structures directly. Instead, there are event timers (struct etimer), which play well together with Contiki processes, and callback timers (struct ctimer), both of which internally use struct timer. For a less RAM-hungry option with a more limited API there are second timers (struct stimer). Finally, there is a single real-time timer in the system (struct rtimer).
An example event timer usage: set and wait for 3 seconds:
static struct etimer timer;
etimer_set(&timer, 3 * CLOCK_SECOND);
PROCESS_WAIT_EVENT_UNTIL(etimer_expired(&timer));
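And since Update 3 asks about waiting outside a process: a callback timer fires a function after a delay without belonging to any process. A minimal sketch (the function names are illustrative):
#include "sys/ctimer.h"

static struct ctimer transducer_timer;

static void transducer_timeout(void *ptr)
{
    /* runs ~3 s after ctimer_set(), outside any particular process */
}

void start_transducer_wait(void)
{
    ctimer_set(&transducer_timer, 3 * CLOCK_SECOND, transducer_timeout, NULL);
}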
How can I trigger one Contiki protothread from another?
There is process_poll - see the process API documentation for more examples.
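A minimal sketch, assuming my_process is declared elsewhere with PROCESS():
/* from an interrupt handler or another process: */
process_poll(&my_process);

/* inside my_process's PROCESS_THREAD, wait for the poll event: */
PROCESS_WAIT_EVENT_UNTIL(ev == PROCESS_EVENT_POLL);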
What exactly does the CPU delay mean?
It's a form of busy waiting. Don't use the delay functions or other forms of busy waiting for delays longer than microseconds - it's not energy efficient, it doesn't allow other processes to run, and in the worst case the watchdog might expire.

Digital Clock program in C - use interrupt?

I am writing a monitoring program for a computer cluster that displays a lot of data onto an LCD screen. As part of the display, I would like to have a digital clock running showing the current date, hour, minute, and second. The problem is, I have a bunch of tasks going on in a big loop (ping requests, A/D conversions, file scanning) and I just don't have time to update the clock during the loop.
I'm using C under Linux (Debian).
Any suggestions of how I should go about solving this? I was thinking of maybe using an interrupt to update the clock every second, but how does one go about doing that under Linux? I've only really used interrupts on microcontrollers before now.
You need to run some code in parallel to do that. The best way I can think of for this application is to use threads, created with pthread_create() and a function like this:
void *clock_routine(void *args)
{
    while (1) {
        // update clock
    }
}
Call it like this in your main function:
pthread_t clock_tid; // Clock thread handle
int rc = pthread_create(&clock_tid, NULL, clock_routine, NULL);
And link with the -lpthread flag (or just compile with -pthread).
The code in clock_routine will run in parallel to your other code, and since it's only a clock, you don't have any dependency/synchronization issues to deal with.
P.S. (kinda useless, but here goes): I'm not sure, but adding a sleep call inside your while(1) loop "might" improve your program, since you don't need the clock updated more often than about once a second. Constantly updating the clock without sleeping "might" be considered busy-waiting. If anyone knows more on this, please correct me.
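To make that concrete, here is a sketch of what the clock routine could look like with a sleep in the loop; display_on_lcd() stands in for whatever actually draws to the screen:
#include <pthread.h>
#include <time.h>
#include <unistd.h>

void *clock_routine(void *args)
{
    char buf[32];
    struct tm tm;
    for (;;) {
        time_t now = time(NULL);
        localtime_r(&now, &tm);   /* thread-safe variant of localtime() */
        strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S", &tm);
        /* display_on_lcd(buf);  -- placeholder for the real display call */
        sleep(1);   /* one-second resolution is all a digital clock needs */
    }
    return NULL;
}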

Loops/timers in C

How does one create a timer in C?
I want a piece of code that continuously fetches data from a GPS parser's output.
Are there good libraries for this, or should it be self-written?
Simplest method available:
#include <pthread.h>
#include <unistd.h>   /* usleep() */

void *do_smth_periodically(void *data)
{
    int interval = *(int *)data;
    for (;;) {
        do_smth();
        usleep(interval);
    }
}

int main()
{
    pthread_t thread;
    int interval = 5000;
    pthread_create(&thread, NULL, do_smth_periodically, &interval);
    ...
}
On POSIX systems you can create (and catch) an alarm. alarm() is simple, but it works in whole seconds. If you need finer resolution than seconds, use setitimer():
struct itimerval tv;
tv.it_interval.tv_sec = 0;
tv.it_interval.tv_usec = 100000; // when timer expires, reset to 100ms
tv.it_value.tv_sec = 0;
tv.it_value.tv_usec = 100000; // 100 ms == 100000 us
setitimer(ITIMER_REAL, &tv, NULL);
And catch the timer at a regular interval by setting up a handler with sigaction().
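Putting the two together, a minimal self-contained sketch; the handler only bumps a counter, which keeps it async-signal-safe:
#include <signal.h>
#include <string.h>
#include <sys/time.h>
#include <unistd.h>

static volatile sig_atomic_t ticks = 0;

static void on_alarm(int signum)
{
    (void)signum;
    ticks++;   /* just count; do the real work in the main loop */
}

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = on_alarm;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGALRM, &sa, NULL);

    struct itimerval tv = {0};
    tv.it_interval.tv_usec = 100000;   /* re-arm every 100 ms */
    tv.it_value.tv_usec = 100000;      /* first expiry after 100 ms */
    setitimer(ITIMER_REAL, &tv, NULL);

    for (;;)
        pause();   /* sleep until the next signal arrives */
}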
One doesn't "create a timer" in C. There is nothing about timing or scheduling in the C standard, so how that is accomplished is left up to the operating system.
This is probably a reasonable question for a C noob, as many languages do support things like this. Ada does, and I believe the next version of C++ probably will too (Boost has support for it now). I'm pretty sure Java can do it as well.
On Linux, probably the best way would be to use pthreads. In particular, you need to call pthread_create() and pass it the address of your routine, which presumably contains a loop with a sleep() (or usleep()) call at the bottom.
Note that if you want something that approximates real-time scheduling, a dumb usleep() isn't good enough, because it doesn't account for the execution time of the loop body itself. For those applications you will need to set up a periodic timer and wait on that, as sketched below.
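One drift-free approach on POSIX systems is to sleep until absolute deadlines with clock_nanosleep(); a minimal sketch:
#include <time.h>

/* Run the loop body every period_ns nanoseconds without accumulating drift. */
void periodic_loop(long period_ns)
{
    struct timespec next;
    clock_gettime(CLOCK_MONOTONIC, &next);
    for (;;) {
        next.tv_nsec += period_ns;
        while (next.tv_nsec >= 1000000000L) {   /* normalize the timespec */
            next.tv_nsec -= 1000000000L;
            next.tv_sec++;
        }
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
        /* do_work();  -- placeholder for the periodic job */
    }
}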
SDL provides a cross platform timer in C.
http://www.libsdl.org/cgi/docwiki.cgi/SDL_AddTimer
If you're using Windows, you can use SetTimer(); otherwise you can build a timer out of timeGetTime() and _beginthreadex(), along with a queue of timers with callbacks.
The question about a timer is quite unspecific, though there are two functions that come to mind that will help you:
sleep() - This function causes execution to stop for a specified number of seconds. You can also use usleep() or nanosleep() if you want to specify the sleep time more precisely.
gettimeofday() - Using this function you can measure the elapsed time between two points (see the sketch below).
See manpages for further explanation :)
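A quick sketch of timing a stretch of code with gettimeofday():
#include <stdio.h>
#include <sys/time.h>

int main(void)
{
    struct timeval start, end;
    gettimeofday(&start, NULL);
    /* ... the work you want to time ... */
    gettimeofday(&end, NULL);
    long elapsed_us = (end.tv_sec - start.tv_sec) * 1000000L
                    + (end.tv_usec - start.tv_usec);
    printf("elapsed: %ld us\n", elapsed_us);
    return 0;
}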
If the gps data is coming from some hardware device, like over a serial port, then one thing that you may consider is changing the architecture around so that the parser kicks off the code that you are trying to run when more data is available.
It could do this through a callback function or it could send an event - the actual implementation would depend on what you have available.
