I am writing a monitoring program for a computer cluster that displays a lot of data onto an LCD screen. As part of the display, I would like to have a digital clock running showing the current date, hour, minute, and second. The problem is, I have a bunch of tasks going on in a big loop (ping requests, A/D conversions, file scanning) and I just don't have time to update the clock during the loop.
I'm using C under Linux (Debian).
Any suggestions on how I should go about solving this? I was thinking of maybe using an interrupt to update the clock every second, but how does one go about doing that under Linux? I've only really used interrupts with microcontrollers before now.
You need to run some code in parallel to do that. The best way I can think of for this application is to use a thread, created with pthread_create(), running a function like this:
void *clock_routine(void *args) {
    while (1) {
        // update the clock display here
    }
    return NULL;
}
Call it like this in your main function:
pthread_t clock_tid; // Clock thread handle
int rc = pthread_create(&clock_tid, NULL, clock_routine, NULL);
And build with the -pthread flag (or at least link with -lpthread).
The code in clock_routine will run in parallel to your other code, and since it's only a clock, you don't have any dependency/synchronization issues to deal with.
P.S.: (Kinda useless, but here goes:) I'm not sure, but using a sleep call inside your while(1) loop "might" improve your program: you don't need hard real-time clock updates, just roughly second-level accuracy, so constantly updating the clock without sleeping amounts to busy-waiting. If anyone knows more on this, please correct me.
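To make this concrete, a minimal sketch of such a clock routine might look like the following (draw_clock_on_lcd() is only a placeholder for however you write to your LCD; localtime_r() and strftime() do the formatting, and sleep(1) keeps the thread from busy-waiting):

#include <stdio.h>
#include <time.h>
#include <unistd.h>

void *clock_routine(void *args) {
    char buf[32];
    while (1) {
        time_t now = time(NULL);                  // current calendar time
        struct tm tm_now;
        localtime_r(&now, &tm_now);               // thread-safe conversion to broken-down time
        strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S", &tm_now);
        printf("%s\n", buf);                      // or: draw_clock_on_lcd(buf);
        sleep(1);                                 // one update per second is plenty
    }
    return NULL;
}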
Related
In my application I need to write a function in C that provides a specific time delay in nanoseconds. This delay must be done in software as I don't have any hardware timers left in my AVR MCU. My problem is that I would like to be able to set the value in nanoseconds. My MCU clock is 20 MHz (50 ns period). I thought of a quick "for" loop, like:
for (n=0; n<value; n++)
but that won't take into account how many cycles each pass of the loop actually takes once compiled. Has anyone got any suggestions? I really don't want to write the code in assembler.
You've given us rather little information, by the way, but I think I can answer anyway; it just makes the answer longer. Let's start with the easier problem: your action needs to execute less often than your most frequent ISR fires. For example, you need to send a byte every 1 s, but your ISR executes every 1 ms. In short, you need to send the byte once every 1000 ISR executions, so you keep a counter that is incremented on every ISR; when it reaches 1000, you send the byte and reset the counter to 0:
#include <avr/interrupt.h>          // ISR() macro
#include <stdint.h>

volatile uint16_t cnt;              // incremented on every ISR execution

ISR(TIMER1_COMPA_vect)              // example vector: use whichever timer interrupt you have configured
{
    cnt++;
    if (cnt >= 1000)                // 1000 x 1 ms = 1 s
    {
        execute(Z);                 // your periodic action
        cnt = 0;
    }
}
When you have the opposite problem, i.e. the ISR fires less often than you want your action to execute, then I'd suggest redesigning your use of timers: make the ISR execute faster and then divide the rate down by counting executed ISRs as described above. This was mentioned in the comments.
My suggestion is that you rethink the way you use timers.
Since you are using an AVR, you should look into using the AVR-Libc delay functions, _delay_us and _delay_ms, which are documented here:
https://www.nongnu.org/avr-libc/user-manual/group__util__delay.html
They are standard in the context of AVRs, but not standard for all C environments in general.
Some example code to get you started:
#define F_CPU 20000000UL        // 20 MHz clock, as stated in the question
#include <util/delay.h>

int main(void) {
    while (1) {
        _delay_us(0.05);        // 0.05 us = 50 ns, i.e. one clock period at 20 MHz
    }
}
Note that even though the _delay_us and _delay_ms functions each take a double as an argument, all the floating-point arithmetic is done at compile time when the argument is a compile-time constant, so the generated delay code is efficient.
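If you would rather express the delay in nanoseconds directly, a thin wrapper macro can do the unit conversion. This is just a suggestion layered on top of AVR-Libc (DELAY_NS is not a library macro), and like _delay_us it only works well when the argument is a compile-time constant:

#define F_CPU 20000000UL
#include <util/delay.h>

// Hypothetical helper: delay for a compile-time-constant number of nanoseconds.
// The division happens in floating point at compile time, just like _delay_us itself.
#define DELAY_NS(ns) _delay_us((ns) / 1000.0)

// Usage: DELAY_NS(50); delays roughly 50 ns, i.e. about one cycle at 20 MHz.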
I am trying to develop a piece of Contiki code in which I need to wait for three seconds for a transducer output. Although this may sound quite un-transducer-like, at development time I want to simulate the behavior at human-readable speeds, and hence I need to set the timer to, say, 3 seconds.
The contiki timer library is pretty well documented and has a good series of examples which mention the creation, setting and resetting of the timer. However, if I have code like the following:
timer_set(&transducerOutputWaitTimer, 3 * CLOCK_SECOND);
bool if_blk_executed = false;
if(timer_expired(&transducerOutputWaitTimer)){
    if_blk_executed = true;
    //do something
}
if(if_blk_executed)
    printf("Sunrise");
else
    printf("It has not dawned yet");
Now the expiry is not immediately triggered after the timer is set. So the if block is never executed. Effectively it will never dawn.
Now there are two ways in which I can get the system to wait. One, by adding a while loop on the timer like this:
while(!timer_expired(&transducerOutputWaitTimer)){};
//do something
or
cpu_delay_usecs(mytimerdur_in_secs * 1000000);
//do something
I do not see either approach as elegant. One wastes CPU cycles busy-waiting on the timer, while the other forces an unnecessarily long blocking delay.
Is there any better way? I know that clock
Attached to this question are the following two:
How can I trigger one Contiki protothread from another? If I can do this, I can have the interrupt that produces the transducer output invoke a process, from which I can trigger the etimer and process events.
What exactly does the CPU delay mean? Does this mean that the entire CPU clock cycles get held back for the given duration? If yes, how are other processes running currently in the system affected?
Updates
Update 1: the while method did not work. The code possibly went into an infinite loop.
Update 2: I tried with a clock_delay(3*CLOCK_SECOND) approach after setting my timer. It worked. However, in this case, why do I need the timer method at all?
Update 3 (This changes the context of the answer, hence adding this comment as per the suggestion)
My timer needs to be used outside a process, in a different void function. In that case, I need to use the timer library rather than the etimer (which is specific to a process). So how do I get my method to wait for the desired time in such cases?
There are several Contiki abstractions built on top of the timer library - there is usually no need to use the struct timer structures directly. Instead, there are event timers (struct etimer), which play well together with Contiki processes, and callback timers (struct ctimer), both of which internally use struct timer. For a less RAM-hungry option with a more limited API there are second timers (struct stimer). Finally, there is a single real-time timer in the system (struct rtimer).
An example event timer usage: set and wait for 3 seconds:
static struct etimer timer;
etimer_set(&timer, 3 * CLOCK_SECOND);
PROCESS_WAIT_EVENT_UNTIL(etimer_expired(&timer));
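For context, here is how that might look inside a complete process (the process name below is just a placeholder, not something from the question):

#include "contiki.h"
#include <stdio.h>

PROCESS(transducer_wait_process, "Transducer wait process");
AUTOSTART_PROCESSES(&transducer_wait_process);

PROCESS_THREAD(transducer_wait_process, ev, data)
{
  static struct etimer timer;             /* static: protothread locals don't survive a yield */

  PROCESS_BEGIN();

  etimer_set(&timer, 3 * CLOCK_SECOND);   /* arm the 3-second timer */
  PROCESS_WAIT_EVENT_UNTIL(etimer_expired(&timer));
  printf("Sunrise\n");                    /* three seconds have passed */

  PROCESS_END();
}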
How can I trigger one Contiki protothread from another?
There is process_poll - see the process API documentation for more examples.
What exactly does the CPU delay mean?
It's a form of busy waiting. Don't use the delay functions or other forms of busy waiting for delays longer than microseconds: it is not energy efficient, it does not allow other processes to run, and in the worst case the watchdog might expire.
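Regarding Update 3 (needing the timer outside a process): a callback timer is usually the natural fit there, since it does not require process context. A minimal sketch, with the function names below being placeholders of mine:

#include "contiki.h"
#include "sys/ctimer.h"
#include <stdio.h>

static struct ctimer transducer_ctimer;

/* Called by the Contiki system 3 seconds after ctimer_set(). */
static void transducer_timeout(void *ptr)
{
  printf("Sunrise\n");
  /* handle the transducer output here */
}

/* Can be called from any function, not necessarily from inside a process. */
static void start_transducer_wait(void)
{
  ctimer_set(&transducer_ctimer, 3 * CLOCK_SECOND, transducer_timeout, NULL);
}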
I want to implement a real-time clock and timer that prints the current time on screen like this: "HOURS:MINUTES:SECONDS".
Is it safe to use :
while (1) {
    // ... do something
    sleep(1);
    // ... do something
}
and then increment
seconds += 1;
to count that one second has passed?
You will have to check whether in your particular embedded system, sleep(1) will sleep the system for 1 second. In many of the embedded boards I have used, sleep takes the argument in milliseconds. So for 1 second sleep you would have to use sleep(1000).
If you are not too worried about accuracy, then yes, you can use this method. However, it will not be as accurate as using a timer or an RTC: sleep(1) guarantees at least one second, and the time spent in the rest of the loop adds up, so the count drifts over time. So, for example, if you want your system to do something when seconds reaches 30, a better way might be to set up a timer or an RTC alarm (based on what your embedded platform has) to measure out that time more accurately.
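If your platform has a working time()/localtime() (not every embedded environment does), an alternative sketch that avoids accumulating drift is to re-read the wall clock on every iteration instead of counting sleeps:

#include <stdio.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    while (1) {
        time_t now = time(NULL);            /* wall-clock time, so no drift accumulates */
        struct tm *t = localtime(&now);
        printf("%02d:%02d:%02d\n", t->tm_hour, t->tm_min, t->tm_sec);
        sleep(1);                           /* roughly one update per second */
    }
}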
I want to write a task that does some polling on some IOs. Now, I need it to not block the CPU, but to check the IOs every microsecond or so.
I'm a relative VxWorks newbie and just realized that inserting a usleep(1); into my polling loop probably won't do what I need it to do. How do I best go about this?
I have figured out that sysClkRateGet() returns 60 which isn't good enough for me. I need to poll and react fast but can't block the other things that are going on in the CPU, so I guess taskDelay() won't do it for me... is there anything else that allows for a shorter downtime of my task (than 1/60 seconds)?
edit
I think I've figured out that it's much smarter to have a timer kicking in every 1us that executes my short polling function.
I set up the timer like this:
timer_t polltimerID;
struct itimerspec poll_time;

poll_time.it_value.tv_sec     = 0;
poll_time.it_value.tv_nsec    = 1000;
poll_time.it_interval.tv_sec  = 0;
poll_time.it_interval.tv_nsec = 1000;   // execute it every 1 us

if (timer_create(CLOCK_REALTIME, NULL, &polltimerID))
    printf("problem in timer_create(): %s", strerror(errno));
if (timer_connect(polltimerID, MyPollFunction, 0))
    printf("problem in timer_connect(): %s", strerror(errno));
if (timer_settime(polltimerID, 0, &poll_time, NULL))
    printf("problem in timer_settime(): %s", strerror(errno));
But I'm not exactly sure yet, what the priority of the timer is and if (and how) it is able to preempt a current task, anyone?
The POSIX timer won't do what you want, as it's driven off the system clock (which, as you pointed out, runs at 60 Hz).
There is no "built-in" OS function that will give you a 100 kHz timer.
You will have to find some unused hardware timer on your board (the CPU reference manual is useful).
You will have to configure the timer registers for your 100 kHz rate (again, the reference manual is your friend).
You will have to hook up the timer interrupt line to your function with intConnect (vector, fn, arg) - a rough sketch follows below.
The VxWorks Kernel programmers manual has information about writing Interrupt Service Routines.
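A very rough sketch of that last step (the interrupt number, ISR name, and timer-register setup here are board-specific placeholders, not real values for your hardware):

#include <vxWorks.h>
#include <intLib.h>
#include <iv.h>

#define POLL_TIMER_INT_NUM 25           /* placeholder: use your board's timer interrupt number */

/* Runs in interrupt context on every timer tick: keep it short and non-blocking. */
static void pollIsr(int arg)
{
    /* sample the IOs here, or give a semaphore to wake a high-priority task */
}

static STATUS pollTimerInit(void)
{
    if (intConnect(INUM_TO_IVEC(POLL_TIMER_INT_NUM), (VOIDFUNCPTR)pollIsr, 0) != OK)
        return ERROR;
    /* program the hardware timer registers for a 100 kHz rate here (board-specific), */
    /* then enable the interrupt at the interrupt controller (also board-specific).   */
    return OK;
}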
I need a way to get the elapsed time (wall-clock time) since a program started, in a way that is resilient to users meddling with the system clock.
On Windows, the non-standard clock() implementation doesn't do the trick, as it appears to work simply by calculating the difference from the time sampled at start-up, so I get negative values if I "move the clock hands back".
On UNIX, clock()/getrusage() measure CPU time rather than elapsed wall time, whereas sampling timestamps with a function such as gettimeofday() has the same problem as using clock() on Windows.
I'm not really interested in precision, and I've hacked a solution by having a half-second-resolution timer spinning in the background to counter the clock skews when they happen (if the difference between the sampled time and the expected time exceeds 1 second, I use the expected time as the new baseline), but I think there must be a better way.
I guess you can always start some kind of timer. For example, under Linux, a thread that would have a loop like this:
#include <time.h>   /* nanosleep(), struct timespec */

static void *timer_thread(void *arg)   /* pthread start routines must return void * */
{
    struct timespec delay;
    unsigned int msecond_delay = ((app_state_t *)arg)->msecond_delay;

    delay.tv_sec  = 0;
    delay.tv_nsec = msecond_delay * 1000000L;   /* must stay below 1e9, i.e. msecond_delay < 1000 */

    while (1) {
        some_global_counter_increment();
        nanosleep(&delay, NULL);
    }
    return NULL;   /* never reached */
}
Where app_state_t is an application structure of your choice where you store variables. If you want to prevent tampering, you need to be sure no one has killed your thread.
For POSIX, use clock_gettime() with CLOCK_MONOTONIC.
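A minimal sketch of that approach (CLOCK_MONOTONIC is not affected by anyone setting the system clock; on older glibc versions you may need to link with -lrt):

#include <stdio.h>
#include <time.h>

static struct timespec start;

void elapsed_init(void)
{
    clock_gettime(CLOCK_MONOTONIC, &start);   /* call once at program start-up */
}

double elapsed_seconds(void)
{
    struct timespec now;
    clock_gettime(CLOCK_MONOTONIC, &now);
    return (now.tv_sec - start.tv_sec) + (now.tv_nsec - start.tv_nsec) / 1e9;
}

int main(void)
{
    elapsed_init();
    /* ... the rest of the program runs here ... */
    printf("elapsed: %.3f s\n", elapsed_seconds());
    return 0;
}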
I don't think you'll find a cross-platform way of doing that.
On Windows what you need is GetTickCount (or maybe QueryPerformanceCounter and QueryPerformanceFrequency for a high resolution timer). I don't have experience with that on Linux, but a search on Google gave me clock_gettime.
Wall-clock time can be calculated with the time() call.
If you have a network connection, you can always acquire the time from an NTP server. This will obviously not be affected in any way by the local clock.
/proc/uptime on Linux maintains the number of seconds that the system has been up (and the number of seconds it has been idle), which should be unaffected by changes to the clock as it's maintained by the system tick interrupt (jiffies / HZ). Perhaps Windows has something similar?
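Reading it from C is straightforward; a small sketch (sample the value once at start-up, then subtract that from later readings to get elapsed time):

#include <stdio.h>

/* Returns the system uptime in seconds, or a negative value on failure. */
double read_uptime(void)
{
    double up = -1.0;
    FILE *f = fopen("/proc/uptime", "r");
    if (f != NULL) {
        if (fscanf(f, "%lf", &up) != 1)
            up = -1.0;
        fclose(f);
    }
    return up;
}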