proper use of time_is_before_jiffies() - timer

My Linux device driver has some obstinate logic which twiddles with some hardware and then waits for a signal to appear. The seemingly proper way is:
ulong timeout, waitcnt = 0;
...
/* 2. Establish programming mode */
gpio_bit_write(MPP_CFG_PROGRAM, 0); /* assert */
udelay(3); /* one microsecond should be long enough */
gpio_bit_write(MPP_CFG_PROGRAM, 1); /* de-assert */

/* 3. Wait for the FPGA to initialize. */
/* 100 ms timeout should be nearly 100 times too long */
timeout = jiffies + msecs_to_jiffies(100);
while (gpio_bit_read(MPP_CFG_INIT) == 0 &&
       time_is_before_jiffies(timeout))
    ++waitcnt; /* do nothing */
if (!time_is_before_jiffies(timeout)) /* timed out? */
{
    /* timeout error */
}
This always exercises the "timeout error" path and doesn't increment waitcnt at all. Perhaps I don't understand the meaning of time_is_before_jiffies(), or it is broken. When I replace it with the much more understandable direct comparison of jiffies:
while (gpio_bit_read(MPP_CFG_INIT) == 0 &&
       jiffies <= timeout)
    ++waitcnt; /* do nothing */
It works just fine: it loops for a while (1600 µs), sees the INIT bit come on, and then proceeds without triggering a timeout error.
The comment for time_is_before_jiffies() is:
/* time_is_before_jiffies(a) return true if a is before jiffies */
#define time_is_before_jiffies(a) time_after(jiffies, a)
As the sense of the comparison seemed nonsensically backward, I replaced both with time_is_after_jiffies(), but that doesn't work either.
What am I doing wrong? Maybe I should replace use of this confusing macro with the straightforward jiffies <= timeout logic, though that seems less portable.

The jiffies <= timeout comparison does not work when the jiffies are wrapping around, so you must not use it.
The condition you want to use can be described as "has not yet timed out".
This means that the current time (jiffies) has not yet reached the timeout time (timeout), i.e., jiffies is before the variable you are comparing it to, which means that your variable is after jiffies.
(All the time_is_ functions have jiffies on the right side of the comparison.)
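For reference, these are the actual definitions in include/linux/jiffies.h:
#define time_is_before_jiffies(a) time_after(jiffies, a)
#define time_is_after_jiffies(a) time_before(jiffies, a)
#define time_is_before_eq_jiffies(a) time_after_eq(jiffies, a)
#define time_is_after_eq_jiffies(a) time_before_eq(jiffies, a)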
So you have to use time_is_after_jiffies() in the while loop.
(And the <= implies that you actually want to use time_is_after_eq_jiffies().)
The timeout check is better done by reading the GPIO bit again, because it would be a shame if your code reported a timeout although it got the signal right at the end.
Furthermore, busy-looping for a hundred milliseconds is extremely evil; you should release the CPU if you don't need it:
unsigned long timeout = jiffies + msecs_to_jiffies(100);
bool ok = false;

for (;;) {
    ok = gpio_bit_read(MPP_CFG_INIT) != 0;
    if (ok || time_is_before_eq_jiffies(timeout))
        break;
    /* you should do msleep(...) or cond_resched() here, if possible */
}
if (!ok) /* timed out? */
    ...
(This loop uses time_is_before_eq_jiffies() because the condition is reversed.)
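As a concrete sketch of that suggestion (assuming this runs in process context, where sleeping is allowed), the same loop with msleep() added:
unsigned long timeout = jiffies + msecs_to_jiffies(100);
bool ok = false;

for (;;) {
    ok = gpio_bit_read(MPP_CFG_INIT) != 0;
    if (ok || time_is_before_eq_jiffies(timeout))
        break;
    msleep(1); /* release the CPU for about a millisecond per poll */
}
if (!ok)
    /* timeout error */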

Related

Need some help in C code for optimization (Poll + delay/sleep)

Currently I'm polling a register to get an expected value, and now I want to reduce the CPU usage and improve the performance.
So my idea is: poll for a short time (say 10 ms), and if we didn't get the expected value, sleep for a while (e.g. udelay(10*1000) or usleep(10*1000), i.e. a 10 ms delay/sleep), then poll again for a longer time (say 100 ms), then sleep for 100 ms, and so on, until a maximum timeout value is reached.
Please let me know if anything is unclear.
This is the old code:
#include <stdio.h>    /* for printf, perror */
#include <stdlib.h>   /* for exit */
#include <sys/time.h> /* for setitimer */
#include <unistd.h>   /* for pause */
#include <signal.h>   /* for signal */

#define INTERVAL 500 /* timeout in ms */

static volatile sig_atomic_t timedout = 0;
struct itimerval it_val; /* for setting itimer */
char temp_reg[2];

void DoStuff(int sig); /* forward declaration */

int main(void)
{
    /* Upon SIGALRM, call DoStuff().
     * Set interval timer. We want frequency in ms,
     * but the setitimer call needs seconds and useconds. */
    if (signal(SIGALRM, DoStuff) == SIG_ERR)
    {
        perror("Unable to catch SIGALRM");
        exit(1);
    }
    it_val.it_value.tv_sec = INTERVAL / 1000;
    it_val.it_value.tv_usec = (INTERVAL * 1000) % 1000000;
    it_val.it_interval = it_val.it_value;
    if (setitimer(ITIMER_REAL, &it_val, NULL) == -1)
    {
        perror("error calling setitimer()");
        exit(1);
    }
    do
    {
        /* Read the register and copy the value into the char array (temp_reg) */
        temp_reg[0] = read_reg();
        if (timedout == 1)
            return -1; /* timed out */
    } while (temp_reg[0] != 0); /* if not the expected value, poll again */
}

/*
 * DoStuff
 */
void DoStuff(int sig)
{
    timedout = 1;
    printf("Timer went off.\n");
}
Now I want to optimize this to reduce the CPU usage and improve performance.
Can anyone help me with this issue?
Thanks for your help on this.
Currently I'm polling the register to get the expected value [...]
Wow, hold on a moment here: there is a huge story hidden behind this sentence. What is "the register"? What is "the expected value"? What does read_reg() do? Are you polling some external hardware? Well then, it all depends on how your hardware behaves.
There are two possibilities:
Your hardware buffers the values that it produces. This means that the hardware will keep each value available until you read it; it will detect when you have read the value, and then it will provide the next value.
Your hardware does not buffer values. This means that values are being made available in real time, for an unknown length of time each, and they are replaced by new values at a rate that only your hardware knows.
If your hardware is buffering, then you do not need to be afraid that some values might be lost, so there is no need to poll at all: just try reading the next value once and only once, and if it is not what you expect, sleep for a while. Each value will be there when you get around to reading it.
If your hardware is not buffering, then there is no strategy of polling and sleeping that will work for you. Your hardware must provide an interrupt, and you must write an interrupt-handling routine that will read every single new value as quickly as possible from the moment that it has been made available.
Here is some pseudocode that might help:
do
{
    /* pseudocode */
    start_time = get_current_time();
    do
    {
        /* Read the register and copy the value into the char array (temp_reg) */
        temp_reg[0] = read_reg();
        if (timedout == 1)
            return -1; /* timed out */
        /* pseudocode */
        stop_time = get_current_time();
        if (stop_time - start_time > some_limit)
            break;
    } while (temp_reg[0] != 0);

    if (temp_reg[0] != 0)
    {
        usleep(some_time);
        start_time = get_current_time();
    }
} while (temp_reg[0] != 0);
To turn the pseudo code into real code, see https://stackoverflow.com/a/2150334/4386427
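For instance, here is a minimal userspace sketch of that pseudocode using clock_gettime(CLOCK_MONOTONIC); read_reg() and the window/sleep parameters are placeholders from the question, not a tested implementation:
#include <time.h>
#include <unistd.h>

extern char read_reg(void); /* hardware access, from the question */

/* Returns 0 when the expected value (0) is read, -1 on overall timeout. */
int poll_reg(long poll_window_ms, long sleep_ms, long max_timeout_ms)
{
    struct timespec t0, now, start;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    start = t0;

    for (;;) {
        if (read_reg() == 0)
            return 0; /* got the expected value */

        clock_gettime(CLOCK_MONOTONIC, &now);
        long total_ms = (now.tv_sec - t0.tv_sec) * 1000
                      + (now.tv_nsec - t0.tv_nsec) / 1000000;
        if (total_ms >= max_timeout_ms)
            return -1; /* overall timeout */

        long window_ms = (now.tv_sec - start.tv_sec) * 1000
                       + (now.tv_nsec - start.tv_nsec) / 1000000;
        if (window_ms > poll_window_ms) {
            usleep(sleep_ms * 1000); /* release the CPU for a while */
            clock_gettime(CLOCK_MONOTONIC, &start); /* new polling window */
        }
    }
}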

How does linux handle overflow in jiffies?

Suppose we have the following code:
if (timeout > jiffies)
{
    /* we did not time out, good ... */
}
else
{
    /* we timed out, error ... */
}
This code works fine as long as the jiffies value does not overflow.
However, when jiffies overflows and wraps around to zero, this code no longer works properly.
Linux apparently provides macros for dealing with this overflow problem:
#define time_before(unknown, known) ((long)(unknown) - (long)(known) < 0)
and the code above is supposed to be safe against overflow when replaced with this macro:
// SAFE AGAINST OVERFLOW
if (time_before(jiffies, timeout))
{
    /* we did not time out, good ... */
}
else
{
    /* we timed out, error ... */
}
But, what is the rationale behind time_before (and other time_ macros?
time_before(jiffies, timeout) will be expanded to
((long)(jiffies) - (long)(timeout) < 0)
How does this code prevent overflow problems?
Let's actually give it a try:
#define time_before(unknown, known) ((long)(unknown) - (long)(known) < 0)
I'll simplify things down a lot by saying that a long is only two bytes, so in hex it can have a value in the range [0, 0xFFFF].
Now, it's signed, so the range [0, 0xFFFF] can be broken into two separate ranges [0, 0x7FFF], [0x8000, 0xFFFF]. Those correspond to the values [0, 32767], [ -32768, -1]. Here's a diagram:
[0x0 - - - 0xFFFF]
[0x0 0x7FFF][0x8000 0xFFFF]
[0 32,767][-32,768 -1]
Say timeout is 32,000. We want to check if we're inside our timeout, but in truth we overflowed, so jiffies is -31,000. So if we naively tried to evaluate jiffies < timeout we'd get True. But, plugging in the values:
time_before(jiffies, timeout)
== ((long)(jiffies) - (long)(timeout) < 0)
== (-31000 - 32000 < 0) // WTF is this? Clearly NOT -63000
== (-31000 - 1768 - 1 - 30231 < 0) // simply expanded 32000
== (-32768 - 1 - 30231 < 0) // this -1 causes an underflow
== (32767 - 30231 < 0)
== (2536 < 0)
== False
jiffies is 4 bytes (or 8 on 64-bit systems), not 2, but the same principle applies. Does that help at all?
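To see this in action, here is a small standalone demo using 16-bit types to match the example above (illustrative only; the macro name time_before16 is mine, not the kernel's):
#include <stdio.h>
#include <stdint.h>

/* 16-bit version of the kernel's time_before() for demonstration */
#define time_before16(a, b) ((int16_t)((a) - (b)) < 0)

int main(void)
{
    uint16_t timeout = 32000;
    uint16_t jiffies = 34536; /* == -31000 as int16_t: the counter "wrapped" */

    printf("naive signed:  %d\n", (int16_t)jiffies < (int16_t)timeout); /* 1: wrong */
    printf("time_before16: %d\n", time_before16(jiffies, timeout));     /* 0: right */
    return 0;
}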
See for example here: http://fixunix.com/kernel/266713-%5Bpatch-1-4%5D-fs-autofs-use-time_before-time_before_eq-etc.html
Code that checked for overflow against some fixed small constant was converted to use time_before. Why?
I'm just summarizing the comment that goes with the definition of the
time_after etc functions:
include/linux/jiffies.h:93
93 /*
94 * These inlines deal with timer wrapping correctly. You are
95 * strongly encouraged to use them
96 * 1. Because people otherwise forget
97 * 2. Because if the timer wrap changes in future you won't have to
98 * alter your driver code.
99 *
100 * time_after(a,b) returns true if the time a is after time b.
101 *
So, time_before and time_after are the better way of handling overflow.
Your test case is more likely to be timeout < jiffies (without overflow) than timeout > jiffies (with overflow):
unsigned long jiffies = 2147483658;
unsigned long timeout = 10;
And if you change timeout to
unsigned long timeout = -2146483000;
what will the answer be?
Or you can change the check from
printf("%d",time_before(jiffies,timeout));
to
printf("%d",time_before(jiffies,old_jiffles+timeout));
where old_jiffles is saved value of jiffles at the timer's start.
So, I think the usage of time_before can be like:
old_jiffies = jiffies;
timeout = 10; // or even 10*HZ for ten seconds
do_a_long_work_or_wait();
// check whether the timeout has been reached
if (time_before(jiffies, old_jiffies + timeout)) {
    do_another_long_work_or_wait();
} else {
    printk("ERROR: the timeout was reached; here is a problem");
    panic("timeout");
}
Given that jiffies is an unsigned value, a simple comparison is safe across one wraparound point (where signed values would jump from positive to negative) but not safe across the other point (where signed values would jump from negative to positive, and where unsigned values jump from high to low). It's protection against this second point that the macro is intended to solve.
There is a fundamental assumption that timeout was initially calculated as jiffies + some_offset at some prior recent point in time -- specifically, less than half the range of the variables. If you're trying to measure times longer than this then things break down and you'll get the wrong answer.
If we pretend that jiffies is 16-bit wide for convenience in the explanation (similar to the other answers):
timeout > jiffies
This is an unsigned comparison that is intended to return true if we have not yet reached the timeout. Some examples:
timeout == 0x0300, jiffies == 0x0100: result is true, as expected.
timeout == 0x8100, jiffies == 0x7F00: result is true, as expected.
timeout == 0x0100, jiffies == 0xFF00: oops, result is false, but we haven't really reached the timeout, it just wrapped the counter.
timeout == 0x0100, jiffies == 0x0300: result is false, as expected.
timeout == 0x7F00, jiffies == 0x8100: result is false, as expected.
timeout == 0xFF00, jiffies == 0x0100: oops, result is true, but we did pass the timeout.
time_before(jiffies, timeout)
This does a signed comparison on the difference of the values rather than the values themselves, and again is expected to return true if the timeout has not yet been reached. Provided that the assumption above is upheld, the same examples:
timeout == 0x0300, jiffies == 0x0100: result is true, as expected.
timeout == 0x8100, jiffies == 0x7F00: result is true, as expected.
timeout == 0x0100, jiffies == 0xFF00: result is true, as expected.
timeout == 0x0100, jiffies == 0x0300: result is false, as expected.
timeout == 0x7F00, jiffies == 0x8100: result is false, as expected.
timeout == 0xFF00, jiffies == 0x0100: result is false, as expected.
If the offset you used when calculating timeout is too large or you allow too much time to pass after calculating timeout, then the result can still be wrong. eg. if you calculate timeout once but then just keep testing it repeatedly, then time_before will initially be true, then change to false after the offset time has passed -- and then change back to true again after 0x8000 time has passed (however long that is; it depends on the tick rate). This is why when you reach the timeout, you're supposed to remember this and stop checking the time (or recalculate a new timeout).
In the real kernel, jiffies is longer than 16 bits so it will take longer for it to wrap, but it's still possible if the machine is run for long enough. (And typically it's set to wrap shortly after boot, to catch these bugs more quickly.)
I couldn't easily understand the above answers, so hoping to help with my own:
#define time_after(a,b) ((long)((b) - (a)) < 0)
Here the cast to long makes the difference signed.
Example overflow:
For convenience, imagine 8-bit integers: jiffy1 is changing, and timeout is fixed and greater than jiffy1,
e.g. jiffy1 = 252, timeout = 254, and jiffy2 becomes 0 or 1 after the overflow.
When we use unsigned:
jiffy1 < timeout and
jiffy2 < timeout (mistakenly due to overflow which we need to fix via MACRO)
When we use signed:
jiffy1 < timeout (more-negative < less-negative)
and
jiffy2 > timeout (positive > negative)
(because it will consider MSB as the sign bit, hence timeout will appear negative while our jiffy2 has become positive due to the overflow)
Do correct me if there is something wrong

Creating a timeout using time and difftime

gcc (GCC) 4.6.0 20110419 (Red Hat 4.6.0-5)
I am trying to record the start and end times and get the difference between them.
The function I have is for creating an API for our existing hardware.
The API wait_events takes one argument, a timeout in milliseconds. I get the start time before the while loop, using time() to get the number of seconds. Then, after each iteration of the loop, I get the current time, compute the difference, and compare that difference with the timeout.
Many thanks for any suggestions,
/* Wait for an event up to a specified time out.
 * If an event occurs before the time out, return 0.
 * If it times out before an event occurs, return -1. */
int wait_events(int timeout_ms)
{
    time_t start = 0;
    time_t end = 0;
    double time_diff = 0;
    /* convert to seconds */
    int timeout = timeout_ms / 100;

    /* Get the initial time */
    start = time(NULL);
    while (TRUE) {
        if (open_device_flag == TRUE) {
            device_evt.event_id = EVENT_DEV_OPEN;
            return TRUE;
        }
        /* Get the end time after each iteration */
        end = time(NULL);
        /* Get the difference between times */
        time_diff = difftime(start, end);
        if (time_diff > timeout) {
            /* timed out before getting an event */
            return FALSE;
        }
    }
}
The calling function will look like this:
int main(void)
{
#define TIMEOUT 500 /* 1/2 sec */
    while (TRUE) {
        if (wait_events(TIMEOUT) != 0) {
            /* Process incoming event */
            printf("Event fired\n");
        }
        else {
            printf("Event timed out\n");
        }
    }
    return 0;
}
=============== EDIT with updated results ==================
1) With no sleep -> 99.7% - 100% CPU
2) Setting usleep(10) -> 25% CPU
3) Setting usleep(100) -> 13% CPU
4) Setting usleep(1000) -> 2.6% CPU
5) Setting usleep(10000) -> 0.3 - 0.7% CPU
You're overcomplicating it - simplified:
time_t start = time(NULL);
for (;;) {
    // try something
    if (time(NULL) > start + 5) {
        printf("5s timeout!\n");
        break;
    }
}
time_t is in general just an int or long int, depending on your platform, counting the number of seconds since January 1st, 1970.
Side note:
int timeout = timeout_ms / 1000;
One second consists of 1000 milliseconds.
Edit - another note:
You'll most likely have to ensure that the other thread(s) and/or event handling can happen, so include some kind of thread inactivity (using sleep(), nanosleep() or whatever).
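If you need finer granularity than time()'s whole seconds, here is a sketch using clock_gettime(CLOCK_MONOTONIC) with a short nanosleep between polls; check_event() is a placeholder for your real event test. (Note also that difftime(end, start) computes end - start, so the argument order in the question's code is reversed.)
#include <time.h>

extern int check_event(void); /* placeholder for the real event test */

/* Sketch: wait up to timeout_ms for check_event() to succeed.
 * Returns 0 on an event, -1 on timeout. */
int wait_events_ms(int timeout_ms)
{
    struct timespec start, now;
    struct timespec nap = { 0, 10 * 1000 * 1000 }; /* poll every 10 ms */

    clock_gettime(CLOCK_MONOTONIC, &start);
    for (;;) {
        if (check_event())
            return 0;

        clock_gettime(CLOCK_MONOTONIC, &now);
        long elapsed_ms = (now.tv_sec - start.tv_sec) * 1000
                        + (now.tv_nsec - start.tv_nsec) / 1000000;
        if (elapsed_ms >= timeout_ms)
            return -1;

        nanosleep(&nap, NULL); /* yield the CPU between polls */
    }
}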
Without calling a Sleep() function, this is a really bad design: your loop will use 100% of the CPU. Even if you are using threads, your other threads won't have much time to run, as this thread will use many CPU cycles.
You should design something like that:
while (true) {
    Sleep(100); // let's say you want a precision of 100 ms
    // do the time-compare stuff here
}
If you need precise timing and are using different threads/processes, use mutexes (semaphores with an increment/decrement of 1) or critical sections to make sure the time comparison in your function is not interrupted by another one of your own processes/threads.
I believe your Red Hat is a System V, so you can synchronize using IPC.

Assign delays for 1 ms or 2 ms in C?

I'm using code to configure a simple robot. I'm using WinAVR, and the code used there is similar to C, but without libraries such as stdio.h, so code for simple tasks must be written manually (for example, converting decimal numbers to hexadecimal is a multi-step procedure involving ASCII character manipulation).
Example of code used is (just to show you what I'm talking about :) )
.
.
.
DDRA = 0x00;
A = adc(0); // Right-hand sensor
u = A>>4;
l = A&0x0F;
TransmitByte(h[u]);
TransmitByte(h[l]);
TransmitByte(' ');
.
.
.
For some circumstances, I must use WinAVR and cannot external libraries (such as stdio.h). ANYWAY, I want to apply a signal with pulse width of 1 ms or 2 ms via a servo motor. I know what port to set and such; all I need to do is apply a delay to keep that port set before clearing it.
Now, I know how to create delays: we write empty for loops such as
int value = /* ?? */;
for (i = 0; i < value; i++)
    ;
What value am I supposed to put in value for a 1 ms loop?
Chances are you'll have to calculate a reasonable value, then look at the signal that's generated (e.g., with an oscilloscope) and adjust your value until you hit the right time range. Given that you apparently have a 2:1 margin, you might hit it reasonably close the first time, but I wouldn't bet much on it.
For your first approximation, generate an empty loop and count the instruction cycles for one loop, and multiply that by the time for one clock cycle. That should give at least a reasonable approximation of time taken by a single execution of the loop, so dividing the time you need by that should get you into the ballpark for the right number of iterations.
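For example, a rough calculation under assumed numbers: with an 8 MHz clock, one cycle is 125 ns, so a loop body of 4 cycles takes 500 ns, and 1 ms needs about 2000 iterations:
/* Illustrative only: assumes an 8 MHz clock and 4 cycles per iteration,
 * so 1 ms / (4 * 125 ns) = 2000 iterations. Verify on a scope.
 * 'volatile' keeps the compiler from optimizing the empty loop away. */
volatile unsigned int i;
for (i = 0; i < 2000; i++)
    ;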
Edit: I should also note, however, that (at least most) AVRs have on-board timers, so you might be able to use them instead. This can 1) let you do other processing and/or 2) reduce power consumption for the duration.
If you do use delay loops, you might want to use AVR-libc's delay loop utilities to handle the details.
If the program is simple enough that there is no need for explicit timer programming, but it should still be portable, one choice for a defined delay would be AVR-libc's delay routines:
#include <util/delay.h>
_delay_ms(2); // sleeps 2 ms (requires F_CPU to be defined)
Is this going to run on a real robot? Do you have only a CPU, with no other integrated circuits that can give you a measure of time?
If both answers are 'yes', well... if you know the exact timing of the operations, you can use a loop to create precise delays. Compile your code to assembly and look at the exact sequence of instructions used. Then check the processor's manual; it will have the cycle counts.
If you need a more precise time value, you should employ an interrupt service routine based on an internal timer. Remember that a for loop is a blocking construct, so while it is iterating, the rest of your program is blocked. You could set up a timer-based ISR with a global variable that counts up by 1 every time the ISR runs, and then use that variable in an if statement to set the pulse width. Also, that core probably supports PWM for use with RC-type servos, so that may be a better route.
This is a really neat little tasker that I use sometimes. It's for an AVR.
************************Header File***********************************
// Scheduler data structure for storing task data
typedef struct
{
    // Pointer to task
    void (*pTask)(void);
    // Initial delay in ticks
    unsigned int Delay;
    // Periodic interval in ticks
    unsigned int Period;
    // RunMe flag (indicating when the task is due to run)
    unsigned char RunMe;
} sTask;
// Function prototypes
//-------------------------------------------------------------------
void SCH_Init_T1(void);
void SCH_Start(void);
// Core scheduler functions
void SCH_Dispatch_Tasks(void);
unsigned char SCH_Add_Task(void (*)(void), const unsigned int, const unsigned int);
unsigned char SCH_Delete_Task(const unsigned char);
// Maximum number of tasks
// MUST BE ADJUSTED FOR EACH NEW PROJECT
#define SCH_MAX_TASKS (1)
************************Header File***********************************
************************C File***********************************
#include "SCH_AVR.h"
#include <avr/io.h>
#include <avr/interrupt.h>
// The array of tasks
sTask SCH_tasks_G[SCH_MAX_TASKS];
/*------------------------------------------------------------------*-
SCH_Dispatch_Tasks()
This is the 'dispatcher' function. When a task (function)
is due to run, SCH_Dispatch_Tasks() will run it.
This function must be called (repeatedly) from the main loop.
-*------------------------------------------------------------------*/
void SCH_Dispatch_Tasks(void)
{
    unsigned char Index;

    // Dispatches (runs) the next task (if one is ready)
    for (Index = 0; Index < SCH_MAX_TASKS; Index++)
    {
        if ((SCH_tasks_G[Index].RunMe > 0) && (SCH_tasks_G[Index].pTask != 0))
        {
            (*SCH_tasks_G[Index].pTask)(); // Run the task
            SCH_tasks_G[Index].RunMe -= 1; // Reset / reduce RunMe flag

            // Periodic tasks will automatically run again
            // - if this is a 'one shot' task, remove it from the array
            if (SCH_tasks_G[Index].Period == 0)
            {
                SCH_Delete_Task(Index);
            }
        }
    }
}
/*------------------------------------------------------------------*-
SCH_Add_Task()
Causes a task (function) to be executed at regular intervals
or after a user-defined delay
pFunction - The name of the function which is to be scheduled.
NOTE: All scheduled functions must be 'void, void' -
that is, they must take no parameters, and have
a void return type.
DELAY - The interval (TICKS) before the task is first executed
PERIOD - If 'PERIOD' is 0, the function is only called once,
at the time determined by 'DELAY'. If PERIOD is non-zero,
then the function is called repeatedly at an interval
determined by the value of PERIOD (see below for examples
which should help clarify this).
RETURN VALUE:
Returns the position in the task array at which the task has been
added. If the return value is SCH_MAX_TASKS then the task could
not be added to the array (there was insufficient space). If the
return value is < SCH_MAX_TASKS, then the task was added
successfully.
Note: this return value may be required, if a task is
to be subsequently deleted - see SCH_Delete_Task().
EXAMPLES:
Task_ID = SCH_Add_Task(Do_X,1000,0);
Causes the function Do_X() to be executed once after 1000 sch ticks.
Task_ID = SCH_Add_Task(Do_X,0,1000);
Causes the function Do_X() to be executed regularly, every 1000 sch ticks.
Task_ID = SCH_Add_Task(Do_X,300,1000);
Causes the function Do_X() to be executed regularly, every 1000 ticks.
Task will be first executed at T = 300 ticks, then 1300, 2300, etc.
-*------------------------------------------------------------------*/
unsigned char SCH_Add_Task(void (*pFunction)(void), const unsigned int DELAY, const unsigned int PERIOD)
{
    unsigned char Index = 0;

    // First find a gap in the array (if there is one);
    // check the index *before* reading the array element
    while ((Index < SCH_MAX_TASKS) && (SCH_tasks_G[Index].pTask != 0))
    {
        Index++;
    }
    // Have we reached the end of the list?
    if (Index == SCH_MAX_TASKS)
    {
        // Task list is full, return an error code
        return SCH_MAX_TASKS;
    }
    // If we're here, there is a space in the task array
    SCH_tasks_G[Index].pTask = pFunction;
    SCH_tasks_G[Index].Delay = DELAY;
    SCH_tasks_G[Index].Period = PERIOD;
    SCH_tasks_G[Index].RunMe = 0;

    // return position of task (to allow later deletion)
    return Index;
}
/*------------------------------------------------------------------*-
SCH_Delete_Task()
Removes a task from the scheduler. Note that this does
*not* delete the associated function from memory:
it simply means that it is no longer called by the scheduler.
TASK_INDEX - The task index. Provided by SCH_Add_Task().
RETURN VALUE: RETURN_ERROR or RETURN_NORMAL
-*------------------------------------------------------------------*/
unsigned char SCH_Delete_Task(const unsigned char TASK_INDEX)
{
    // Return_code can be used for error reporting, NOT USED HERE THOUGH!
    unsigned char Return_code = 0;

    SCH_tasks_G[TASK_INDEX].pTask = 0;
    SCH_tasks_G[TASK_INDEX].Delay = 0;
    SCH_tasks_G[TASK_INDEX].Period = 0;
    SCH_tasks_G[TASK_INDEX].RunMe = 0;

    return Return_code;
}
/*------------------------------------------------------------------*-
SCH_Init_T1()
Scheduler initialisation function. Prepares scheduler
data structures and sets up timer interrupts at required rate.
You must call this function before using the scheduler.
-*------------------------------------------------------------------*/
void SCH_Init_T1(void)
{
    unsigned char i;

    for (i = 0; i < SCH_MAX_TASKS; i++)
    {
        SCH_Delete_Task(i);
    }
    // Set up Timer 1
    // Values for 1ms and 10ms ticks are provided for various crystals
    OCR1A = 15000;  // 10ms tick, Crystal 12 MHz
    //OCR1A = 20000; // 10ms tick, Crystal 16 MHz
    //OCR1A = 12500; // 10ms tick, Crystal 10 MHz
    //OCR1A = 10000; // 10ms tick, Crystal 8 MHz
    //OCR1A = 2000;  // 1ms tick, Crystal 16 MHz
    //OCR1A = 1500;  // 1ms tick, Crystal 12 MHz
    //OCR1A = 1250;  // 1ms tick, Crystal 10 MHz
    //OCR1A = 1000;  // 1ms tick, Crystal 8 MHz
    TCCR1B = (1 << CS11) | (1 << WGM12); // Timer clock = system clock/8
    TIMSK |= 1 << OCIE1A; // Timer 1 Output Compare A Match Interrupt Enable
}
/*------------------------------------------------------------------*-
SCH_Start()
Starts the scheduler, by enabling interrupts.
NOTE: Usually called after all regular tasks are added,
to keep the tasks synchronised.
NOTE: ONLY THE SCHEDULER INTERRUPT SHOULD BE ENABLED!!!
-*------------------------------------------------------------------*/
void SCH_Start(void)
{
    sei();
}
/*------------------------------------------------------------------*-
SCH_Update
This is the scheduler ISR. It is called at a rate
determined by the timer settings in SCH_Init_T1().
-*------------------------------------------------------------------*/
ISR(TIMER1_COMPA_vect)
{
    unsigned char Index;

    for (Index = 0; Index < SCH_MAX_TASKS; Index++)
    {
        // Check if there is a task at this location
        if (SCH_tasks_G[Index].pTask)
        {
            if (SCH_tasks_G[Index].Delay == 0)
            {
                // The task is due to run: increment the 'RunMe' flag
                SCH_tasks_G[Index].RunMe += 1;

                if (SCH_tasks_G[Index].Period)
                {
                    // Schedule periodic tasks to run again
                    SCH_tasks_G[Index].Delay = SCH_tasks_G[Index].Period;
                    SCH_tasks_G[Index].Delay -= 1;
                }
            }
            else
            {
                // Not yet ready to run: just decrement the delay
                SCH_tasks_G[Index].Delay -= 1;
            }
        }
    }
}
// ------------------------------------------------------------------
************************C File***********************************
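For completeness, here is a hypothetical usage sketch; the task Toggle_LED and this main() are mine, not part of the original scheduler code:
#include <avr/io.h>
#include "SCH_AVR.h"

// Example task: must be 'void, void' as noted above (hypothetical)
void Toggle_LED(void)
{
    PORTB ^= (1 << PB0);
}

int main(void)
{
    SCH_Init_T1();                    // clear the task array, start the tick timer
    SCH_Add_Task(Toggle_LED, 0, 100); // run every 100 ticks (1 s at a 10 ms tick)
    SCH_Start();                      // enable interrupts

    for (;;)
    {
        SCH_Dispatch_Tasks();         // run tasks marked ready by the ISR
    }
}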
Most ATmega AVR chips, which are commonly used to make simple robots, have a feature known as pulse-width modulation (PWM) that can be used to control servos. Any of the many introductions to AVR PWM can get you started. If you were to look at the Arduino platform's servo control library, you would find that it also uses PWM.
This is likely a better choice than relying on running a loop a constant number of times, as changes to compiler optimization flags or the chip's clock speed could break such a simple delay function.
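As a sketch of the idea (assuming an ATmega328P-style Timer1 clocked at 16 MHz; check the register names and values against your part's datasheet):
#include <avr/io.h>

int main(void)
{
    // Timer1, Fast PWM mode 14 (ICR1 = TOP), prescaler 8:
    // 16 MHz / 8 = 2 MHz timer clock, so ICR1 = 40000 gives a 20 ms (50 Hz) frame
    DDRB |= (1 << PB1); // OC1A as output
    TCCR1A = (1 << COM1A1) | (1 << WGM11);
    TCCR1B = (1 << WGM13) | (1 << WGM12) | (1 << CS11);
    ICR1 = 40000;

    OCR1A = 2000; // 1 ms pulse (2000 * 0.5 us); 4000 would give 2 ms

    for (;;)
        ; // the hardware generates the pulses; no busy-wait needed
}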
You should almost certainly have an interrupt configured to run code at a predictable interval. If you look in the example programs supplied with your CPU, you'll probably find an example of such.
Typically, one will use a word/longword of memory to hold a timer, which will be incremented each interrupt. If your timer interrupt runs 10,000 times/second and increments "interrupt_counter" by one each time, a 'wait 1 ms' routine could look like:
extern volatile unsigned long interrupt_counter;
unsigned long temp_value = interrupt_counter;

do {} while ((interrupt_counter - temp_value) < 10);
Note that as written the code will wait between 900 µs and 1000 µs. If one changed the comparison to less-than-or-equal, it would wait between 1000 µs and 1100 µs. If one needs to do something five times at 1 ms intervals, waiting some arbitrary time up to 1 ms for the first one, one could write the code as:
extern volatile unsigned long interrupt_counter;
unsigned long temp_value = interrupt_counter;

for (int i = 0; i < 5; i++)
{
    do {} while (!((temp_value - interrupt_counter) & 0x80000000)); /* wait until interrupt_counter passes temp_value */
    temp_value += 10;
    do_action_thing();
}
This should run the do_action_thing() calls at precise intervals even if they take several hundred microseconds to complete. If they sometimes take over 1 ms, the system will try to run each one at the "proper" time (so if one call takes 1.3 ms and the next one finishes instantly, the following one will happen 700 µs later).

Using hardware timer in C

Okay, so I've got some C code that performs a mathematical operation which could take pretty much any length of time (depending on the operands supplied to it, of course). I was wondering if there is a way to register some kind of method to be called every n seconds to report the state of the operation, i.e. what iteration it is currently at, possibly using a hardware timer interrupt or something?
The reason I ask is that I know the common way to implement this is to keep track of the current iteration in a variable, say an integer called progress, and have an if statement like this in the code:
if ((progress % 10000) == 0)
    printf("Currently at iteration %d\n", progress);
but I believe that a mod operation takes a relatively long time to execute, so the idea of having it inside a loop which will be run many, many times scares me, from an optimisation point of view.
So I get the feeling that having an external way of signalling a progress print is nice and efficient. Are there any great ways to perform this, or is the simple 'mod check' the best (in terms of optimising)?
I'd go with the mod check, but maybe with subtractions instead :-)
icount = 0;
progress = 10000;

/* ... */

if (--progress == 0) {
    progress = 10000;
    printf("Currently at iteration %d0000\n", ++icount);
}

/* ... */
While mod operations are usually slow, the compiler should be able to optimize and predict this really well, only mispredicting once every 10,000 ifs and burning one mod operation and ~20 cycles (for the misprediction) on it, which is fine. So you are trying to optimize away one mod operation every 10,000 iterations. Of course this assumes you are running it on a modern, typical CPU and not some embedded system with unknown specs. This should even be faster than having a counter variable.
Suggestion: Test it with and without the timing code, and figure out a complex solution if there is really a problem.
Premature optimisation is the root of all evil. -Knuth
mod is about the same speed as division; on most CPUs these days that means about 5-10 cycles... in other words hardly anything, slower than multiply/add/subtract, but not enough to really worry about.
However, you are right to want to avoid sitting in a loop spinning if you're doing work in another thread or something like that. If you're on a unixish system there's timer_create(), or on Linux the much easier to use timerfd_create().
But for single-threaded code, just putting that if in is enough.
Use alarm or setitimer to raise SIGALRM signals at regular intervals.
#include <stdio.h>
#include <signal.h>
#include <unistd.h>
#include <sys/time.h>

struct itimerval interval;

void handler(int x) {
    write(STDOUT_FILENO, ".", 1); /* Defined in POSIX, not in C */
}

int main() {
    signal(SIGALRM, &handler);

    interval.it_value.tv_sec = 5;    /* display after 5 seconds */
    interval.it_interval.tv_sec = 5; /* then display every 5 seconds */
    setitimer(ITIMER_REAL, &interval, NULL);

    /* do computations */

    interval.it_value.tv_sec = 0;    /* don't display progress any more */
    interval.it_interval.tv_sec = 0;
    setitimer(ITIMER_REAL, &interval, NULL);

    printf("\n"); /* done with the dots! */
}
Note, only a smattering of functions are safe to call inside a signal handler: the async-signal-safe functions listed by POSIX. If you want to communicate anything for a fancier printout, do it through a volatile sig_atomic_t variable.
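For example, one safe pattern is to have the handler only set a flag and let the computation loop do the printing; this sketch combines the timer above with a cheap flag test (the iteration loop is a stand-in for the real computation):
#include <stdio.h>
#include <signal.h>
#include <sys/time.h>

static volatile sig_atomic_t report_due = 0;

static void handler(int sig) {
    report_due = 1; /* only touch the flag: this is async-signal-safe */
}

int main(void) {
    struct itimerval interval = { { 5, 0 }, { 5, 0 } }; /* every 5 s */
    long progress = 0;

    signal(SIGALRM, &handler);
    setitimer(ITIMER_REAL, &interval, NULL);

    for (;;) { /* stand-in for the real computation */
        progress++;
        if (report_due) { /* a cheap flag test instead of a mod check */
            report_due = 0;
            printf("Currently at iteration %ld\n", progress);
        }
    }
}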
You could have a global variable for the iterations, which you could monitor from an external thread:
while (true) {
    print(iteration);
    sleep(1000);
}
You may need to watch out for data races though.
