Using timers in ARM embedded C programming

I'm writing a pong-type game in C, which will run on an ARM board over an LCD screen. Part of the requirements for the game is something called 'magic time'.
A "magic time" period occurs at random intervals between 5 and 10 seconds - i.e, between 5 and 10 seconds after the last "magic time" period, and lasts for a random duration of 2 to 10 seconds.

I don't really understand your question (do you execute this code every second via a timer interrupt, or something else?), but there are some errors that I can see at first sight:
while (magicTime == true) {
    magicTimeLength++;
    magicTime == magicTimeLength;
}
The last line (magicTime == magicTimeLength;) doesn't do anything: it merely evaluates whether magicTime is equal to magicTimeLength and discards the result, so you will enter an infinite loop.
I think that you want to do this:
1. Init magicTimeOccurence with a random value between 5 and 10.
2. Init magicTimeLength with a random value between 2 and 10.
3. Every second, if magicTimeOccurence is greater than 0, decrease its value by one.
4. Once magicTimeOccurence hits 0, it is a magic time period: set the magicTime flag to true and decrement magicTimeLength once per second.
5. Once magicTimeLength hits 0, set magicTime to false and go to step 1.
You should initialize your timer0 interrupt with a period of 1 s. I think that you accomplished that with
/* Set timer 0 period */
T0PR = 0;
T0MR0 = SYS_GetFpclk(TIMER0_PCLK_OFFSET)/(TIMER0_TICK_PER_SEC);
but make sure that it is triggered every second.
Here is sample code; it should show you what I mean.
/* In void InitTimer0Interrupt() */
...
    T0TCR_bit.CE = 1; /* Counting Enable */
    magicTimeOccurence = 5 + (rand() % 6); /* 5..10 seconds until next magic time */
    magicTimeLength = 2 + (rand() % 9);    /* 2..10 seconds of magic time */
    magicTime = false;
    __enable_interrupt();
}
/* In void Timer0IntrHandler (void) */
void Timer0IntrHandler (void) {
    /* clear interrupt */
    T0IR_bit.MR0INT = 1;
    VICADDRESS = 0;

    if(magicTimeOccurence > 0)
    {
        magicTimeOccurence--;
    }
    else if(magicTimeLength > 0){
        magicTime = true;
        magicTimeLength--;
    }
    else{
        magicTime = false;
        magicTimeOccurence = 5 + (rand() % 6);
        magicTimeLength = 2 + (rand() % 9);
    }
    /* take action on timer interrupt */
}
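One detail the fragments don't show (so this is an assumption about your missing declarations): variables shared between Timer0IntrHandler() and the main loop should be declared volatile, otherwise the compiler may cache them in registers and the game loop will never see the ISR's updates. A minimal sketch:
#include <stdbool.h>

/* Shared between the timer ISR and the game loop; volatile forces
   the compiler to re-read them on every access. */
static volatile bool magicTime;
static volatile unsigned int magicTimeOccurence;
static volatile unsigned int magicTimeLength;

/* In the game loop, simply read the flag: */
/* if (magicTime) { apply magic-time behaviour } */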

Related

Improvements in led indication with RTOS code

I want to make certain effects with an RGB LED which changes colour using PWM signals.
|¯¯¯¯¯¯¯¯¯¯||¯¯¯¯¯¯¯¯¯¯||¯¯¯¯¯¯¯¯¯¯|
|  Green   ||  Yellow  ||   Pink   |
|__________||__________||__________|
    15ms        10ms        25ms
|¯¯¯¯¯¯¯¯¯¯||¯¯¯¯¯¯¯¯¯¯||¯¯¯¯¯¯¯¯¯¯|
|   Blue   ||  Magenta ||   Blue   |
|__________||__________||__________|
    20ms        10ms        20ms
Based on an interrupt, these colours should be displayed on the LED.
I made a task specifically for this purpose and run these effects in that task.
while(1)
{
    if (indication_type == event_a)
    {
        led_color_green();
        osDelay(15);
        led_color_yellow();
        osDelay(10);
        led_color_pink();
        osDelay(25);
    }
    else if (indication_type == event_b)
    {
        led_color_blue();
        osDelay(20);
        led_color_magneta();
        osDelay(10);
        led_color_blue();
        osDelay(20);
    }
    else
    ..
    ..
}
The code is working, but there are two issues:
1. Each LED effect is visible for 50 ms, and if the event changes, the LED keeps executing the previous state; the new state only shows once the task has exited the if/else block with its delays, as per the current implementation. How can I change the LED indication immediately after receiving an interrupt?
2. I think there should be a cleaner way to make such effects with an LED. How can I improve this code? Any code reference you can share?
I am using an STM32F4 and CMSIS RTOS v2.
You need to be able to make a decision about the "indication type" at every time quantum (tick), and switch the sequence immediately.
The following is an outline of one way to do that:
// Color/time descriptor for each sequence
static const struct
{
    void (*setcolor)(void) ; // Pointer to color set function
    int showtime ;           // Time to display color (ms)
} indication_descriptor[][3] =
{
    { {led_color_green, 15}, {led_color_yellow, 10},  {led_color_pink, 25} }, // event_a sequence
    { {led_color_blue, 20},  {led_color_magneta, 10}, {led_color_blue, 20} }, // event_b sequence
    ...
} ;
// Sequence control variables
int last_indication_type = indication_type - 1 ; // force "change" on first iteration
int current_indication = 0 ;
int current_step = 0 ;
int step_time = 0 ;
while(1)
{
    // On indication change...
    if( indication_type != last_indication_type )
    {
        // Set the indication descriptor...
        if( indication_type == event_a )
        {
            current_indication = 0 ;
        }
        else if( indication_type == event_b )
        {
            current_indication = 1 ;
        }
        ...
        last_indication_type = indication_type ;

        // Set initial indication color and time for this sequence
        current_step = 0 ;
        step_time = indication_descriptor[current_indication][0].showtime ;
        indication_descriptor[current_indication][0].setcolor() ;
    }

    // Wait one tick
    osDelay(1) ;

    // Decrement time in step
    step_time-- ;

    // If end of step...
    if( step_time == 0 )
    {
        // Set colour and time for next step
        current_step = (current_step + 1) % 3 ;
        step_time = indication_descriptor[current_indication][current_step].showtime ;
        indication_descriptor[current_indication][current_step].setcolor() ;
    }
}
It is likely that further improvements can be made, but from the fragment you have provided it is not possible to advise. For example, the data type and range of indication_type, event_a etc. are not shown; it is possible that a switch-case, a lookup table or even arithmetic could replace the if/else if/.../else selection block, but more information would be required.
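For instance, if the event values happened to be small consecutive integers (an assumption; their definitions are not shown), the selection could collapse to arithmetic with a bounds check, where NUM_SEQUENCES is a hypothetical count of the rows in indication_descriptor:
// Hypothetical: assumes event_a, event_b, ... are consecutive integer values
int index = indication_type - event_a ;
if( index >= 0 && index < NUM_SEQUENCES )
{
    current_indication = index ;
}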
If you need faster and more deterministic sequence interruption than 1 ms, you would have to use an osTimer with a callback that sets a thread flag, and have the task wait on thread flags rather than osDelay(). The interrupter can then set a different thread flag to wake the task, abort the timer and switch indication. That is somewhat more complex, however, and likely not necessary in this case; it is a pattern that is useful when hard real-time response on the order of microseconds is required. If you did that, the sequence code might change thus:
...
    // Set initial indication color and time for this sequence
    current_step = 0 ;
    osTimerStart( step_timer, indication_descriptor[current_indication][0].showtime ) ;
    indication_descriptor[current_indication][0].setcolor() ;
}

// Wait for timer expiry or event
uint32_t flags = osThreadFlagsWait( STEP_COMPLETE_FLAG | SEQUENCE_CHANGE_FLAG, osFlagsWaitAny, step_time ) ;

// If end of step...
if( (flags & STEP_COMPLETE_FLAG) != 0 )
{
    // Set colour and time for next step
    current_step = (current_step + 1) % 3 ;
    step_time = indication_descriptor[current_indication][current_step].showtime ;
    indication_descriptor[current_indication][current_step].setcolor() ;
}
else
{
    osTimerStop( step_timer ) ;
}
It has the advantage of far fewer task context switches, which may be useful if you need to reduce CPU load to meet real-time deadlines, or to reduce power consumption by sleeping for longer periods.
osWait allows you to wait for either an event or a timeout (note that osWait belongs to the CMSIS-RTOS v1 API; in RTOS v2 the closest equivalent is osThreadFlagsWait). So if you can generate a proper signal when you get an interrupt, the thread will wake up and you can immediately switch the LED state.
See https://www.keil.com/pack/doc/cmsis/RTOS/html/group__CMSIS__RTOS__Wait.html#details
Put the RTOS to work and use an RTOS timer. From the interrupt that triggers the change in indication_type, you can call your function to set the color and then start the timer for the desired color duration (this can also be done in a task context after the interrupt notifies the task). On the timeout, set the next color and restart the timer with the next color duration. When indication_type changes, you can simply set the color again and restart the timer with the new indication_type color pattern.
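A minimal sketch of that timer-driven approach using the CMSIS-RTOS v2 timer API (the indication_descriptor table and color functions are borrowed from the answer above; the timer is started from task context, since osTimerStart must not be called from an ISR):
#include "cmsis_os2.h"

static osTimerId_t step_timer ;
static int current_indication ;
static int current_step ;

// Timer callback: advance to the next step of the current sequence
static void step_timeout( void *arg )
{
    (void)arg ;
    current_step = (current_step + 1) % 3 ;
    indication_descriptor[current_indication][current_step].setcolor() ;
    osTimerStart( step_timer, indication_descriptor[current_indication][current_step].showtime ) ;
}

// Called (in task context) when indication_type changes
static void start_sequence( int indication )
{
    current_indication = indication ;
    current_step = 0 ;
    indication_descriptor[indication][0].setcolor() ;
    osTimerStart( step_timer, indication_descriptor[indication][0].showtime ) ;
}

// Once, at initialization:
// step_timer = osTimerNew( step_timeout, osTimerOnce, NULL, NULL ) ;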

Need some help in C code for optimization (Poll + delay/sleep)

Currently I'm polling the register to get the expected value, and now I want to reduce the CPU usage and increase the performance.
So I think that if we poll for a particular time (say 10 ms) and we don't get the expected value, we should then wait for some time (like udelay(10*1000) or usleep(10*1000), a delay/sleep in ms), then continue polling for some more time (say 100 ms), and if we still don't get the expected value, sleep/delay for 100 ms, and so on, until we reach the maximum timeout value.
Please let me know if I've missed anything.
This is the old code:
#include <stdio.h>    /* for printf */
#include <stdlib.h>   /* for exit */
#include <sys/time.h> /* for setitimer */
#include <unistd.h>   /* for pause */
#include <signal.h>   /* for signal */

#define INTERVAL 500 /* timeout in ms */

/* volatile sig_atomic_t: written from the signal handler, read in the loop */
static volatile sig_atomic_t timedout = 0;
struct itimerval it_val; /* for setting itimer */
char temp_reg[2];

void DoStuff(void); /* forward declaration for the signal handler */

int main(void)
{
    /* Upon SIGALRM, call DoStuff().
     * Set interval timer. We want frequency in ms,
     * but the setitimer call needs seconds and useconds. */
    if (signal(SIGALRM, (void (*)(int)) DoStuff) == SIG_ERR)
    {
        perror("Unable to catch SIGALRM");
        exit(1);
    }
    it_val.it_value.tv_sec = INTERVAL/1000;
    it_val.it_value.tv_usec = (INTERVAL*1000) % 1000000;
    it_val.it_interval = it_val.it_value;
    if (setitimer(ITIMER_REAL, &it_val, NULL) == -1)
    {
        perror("error calling setitimer()");
        exit(1);
    }
    do
    {
        /* Read the register and copy the value into the char array temp_reg */
        temp_reg[0] = read_reg();
        if (timedout == 1)
            return -1; /* timed out */
    } while (temp_reg[0] != 0); /* not the expected value yet, so poll again */
    return 0;
}

/*
 * DoStuff
 */
void DoStuff(void)
{
    timedout = 1;
    printf("Timer went off.\n");
}
Now I want to optimize this to reduce the CPU usage and improve the performance.
Can anyone help me with this issue?
Thanks for your help.
Currently I'm polling the register to get the expected value [...]
wow wow wow, hold on a moment here, there is a huge story hidden behind this sentence: what is "the register"? What is "the expected value"? What does read_reg() do? Are you polling some external hardware? Well then, it all depends on how your hardware behaves.
There are two possibilities:
Your hardware buffers the values that it produces. This means that the hardware will keep each value available until you read it; it will detect when you have read the value, and then it will provide the next value.
Your hardware does not buffer values. This means that values are being made available in real time, for an unknown length of time each, and they are replaced by new values at a rate that only your hardware knows.
If your hardware is buffering, then you do not need to be afraid that some values might be lost, so there is no need to poll at all: just try reading the next value once and only once, and if it is not what you expect, sleep for a while. Each value will be there when you get around to reading it.
If your hardware is not buffering, then there is no strategy of polling and sleeping that will work for you. Your hardware must provide an interrupt, and you must write an interrupt-handling routine that will read every single new value as quickly as possible from the moment that it has been made available.
Here is some pseudo code that might help:
do
{
    // Pseudo code
    start_time = get_current_time();
    do
    {
        /* Read the register and copy the value into the char array temp_reg */
        temp_reg[0] = read_reg();
        if (timedout == 1)
            return -1; // timed out
        // Pseudo code
        stop_time = get_current_time();
        if (stop_time - start_time > some_limit) break;
    } while (temp_reg[0] != 0);

    if (temp_reg[0] != 0)
    {
        usleep(some_time);
        start_time = get_current_time();
    }
} while (temp_reg[0] != 0);
To turn the pseudo code into real code, see https://stackoverflow.com/a/2150334/4386427
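For instance, a get_current_time() returning monotonic milliseconds might look like this (a sketch using POSIX clock_gettime(), as in the linked answer):
#include <time.h>

/* Monotonic time in milliseconds; unaffected by wall-clock adjustments */
static long long get_current_time(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (long long)ts.tv_sec * 1000 + ts.tv_nsec / 1000000;
}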

How to force interrupt to restart main loop instead of resuming? (timing issue!)

For the last two days I have been writing a program that, in basic terms, generates a fairly accurate, user-adjustable pulse signal (both frequency and duty cycle adjustable). It basically uses the micros() function to keep track of time in order to pull the 4 digital output channels low or high.
These 4 channels always need to have a phase difference of 90 degrees (think of a 4-cylinder engine). In order for the user to change settings, an ISR is implemented which returns a flag to the main loop to re-initialise the program. This flag is defined as a boolean 'set4'. When it is false, a 'while' statement in the main loop will run the outputs. When it is true, an 'if' statement will perform the necessary recalculations and reset the flag so that the 'while' statement will resume.
The program works perfectly with the initial values; the phase is perfect. However, when the ISR is called and control comes back to the main loop, as I understand it the program resumes in the 'while' statement from where it was originally interrupted, runs until the end of the pass, and only then re-checks the flag 'set4', sees that it is now true, and stops.
Then, even though the 'if' statement afterwards resets and recalculates all the necessary variables, the phase between these 4 output channels is lost. Testing manually, I see that depending on when the ISR is called it will give different results, usually having all 4 output channels synchronised together!
This happens even if I don't change any values (so the 'if' routine resets the variables to exactly the same ones as when you first power up the Arduino!). However, if I comment out this routine and just leave the line which resets the flag 'set4', the program continues normally as if nothing ever happened!
I'm pretty sure that this is somehow caused by the micros() timer, because the loop resumes from where the ISR was called. I've tried to do it differently by checking for and disabling interrupts using cli() and sei(), but I couldn't get it to work; it would just freeze when the arguments for cli() were true. The only solution I can think of (I've tried everything and spent the whole day searching and trying things out) is to force the ISR to resume from the start of the loop so that the program may initialize properly. Another solution that comes to mind is to somehow reset the micros() timer, but I believe this would mess up the ISR.
To help you visualise what is going on, here's a snippet of my code (please don't mind the 'Millis' names in the micros variables, and any missing curly brackets, since it is not a pure copy-paste :p):
void loop()
{
    while(!set4)
    {
        currentMillis = micros();
        currentMillis2 = micros();
        currentMillis3 = micros();
        currentMillis4 = micros();
        if(currentMillis - previousMillis >= interval) {
            // save the last time you blinked the LED
            previousMillis = currentMillis;
            // if the LED is off turn it on and vice-versa:
            if (ledState == LOW)
            {
                interval = ONTIME;
                ledState = HIGH;
            }
            else
            {
                interval = OFFTIME;
                ledState = LOW;
            }
            // set the LED with the ledState of the variable:
            digitalWrite(ledPin, ledState);
        }
        .
        .
        //similar code for the other 3 output channels
        .
        .
    }
    if (set4){
        //recalculation routine - exactly the same as when declaring the variables initially
        currentMillis = 0;
        currentMillis2 = 0;
        currentMillis3 = 0;
        currentMillis4 = 0;
        //Output states of the output channels, forced low separately when the ISR is called (without messing with the 'ledState' variables)
        ledState = LOW;
        ledState2 = LOW;
        ledState3 = LOW;
        ledState4 = LOW;
        previousMillis = 0;
        previousMillis2 = 0;
        previousMillis3 = 0;
        previousMillis4 = 0;
        //ONTIME is the HIGH time interval of the pulse wave (i.e. dwell time), OFFTIME is the LOW time interval
        //Note the calculated phase/timing offset at each channel
        interval = ONTIME+OFFTIME;
        interval2 = interval+interval/4;
        interval3 = interval+interval/2;
        interval4 = interval+interval*3/4;
        set4=false;
    }
}
Any idea what is going wrong?
Kind regards,
Ken
The problem is here:
previousMillis = 0;
previousMillis2 = 0;
previousMillis3 = 0;
previousMillis4 = 0;
All the if statements will be true on the next loop pass: micros() keeps running, so with previousMillis reset to 0 the expression currentMillis - previousMillis equals the total time since power-up, which exceeds every interval at once and destroys the phase offsets.
Try with:
previousMillis = micros();
previousMillis2 = micros();
previousMillis3 = micros();
previousMillis4 = micros();

Creating a timeout using time and difftime

gcc (GCC) 4.6.0 20110419 (Red Hat 4.6.0-5)
I am trying to get the start and end times and the difference between them.
The function I have is for creating an API for our existing hardware.
The API wait_events takes one argument, a time in milliseconds. So what I am trying to do is get the start time before the while loop, using time() to get the number of seconds, and then, after each iteration of the loop, get the time difference and compare that difference with the timeout.
Many thanks for any suggestions,
/* Wait for an event up to a specified time out.
 * If an event occurs before the time out, return 0.
 * If the wait times out before an event, return -1. */
int wait_events(int timeout_ms)
{
    time_t start = 0;
    time_t end = 0;
    double time_diff = 0;
    /* convert to seconds */
    int timeout = timeout_ms / 100;
    /* Get the initial time */
    start = time(NULL);
    while(TRUE) {
        if(open_device_flag == TRUE) {
            device_evt.event_id = EVENT_DEV_OPEN;
            return TRUE;
        }
        /* Get the end time after each iteration */
        end = time(NULL);
        /* Get the difference between times */
        time_diff = difftime(end, start);
        if(time_diff > timeout) {
            /* timed out before getting an event */
            return FALSE;
        }
    }
}
The function that calls it will look like this:
int main(void)
{
#define TIMEOUT 500 /* 1/2 sec */
    while(TRUE) {
        if(wait_events(TIMEOUT) != 0) {
            /* Process incoming event */
            printf("Event fired\n");
        }
        else {
            printf("Event timed out\n");
        }
    }
    return 0;
}
=============== EDIT with updated results ==================
1) With no sleep -> 99.7% - 100% CPU
2) Setting usleep(10) -> 25% CPU
3) Setting usleep(100) -> 13% CPU
4) Setting usleep(1000) -> 2.6% CPU
5) Setting usleep(10000) -> 0.3 - 0.7% CPU
You're overcomplicating it - simplified:
time_t start = time(NULL);
for (;;) {
    // try something
    if (time(NULL) > start + 5) {
        printf("5s timeout!\n");
        break;
    }
}
time_t should in general just be an int or long int, depending on your platform, counting the number of seconds since January 1st, 1970.
Side note:
int timeout = timeout_ms / 1000;
One second consists of 1000 milliseconds.
Edit - another note:
You'll most likely have to ensure that the other thread(s) and/or event handling can happen, so include some kind of thread inactivity (using sleep(), nanosleep() or whatever).
Without calling a sleep function, this is a really bad design: your loop will use 100% of the CPU. Even if you are using threads, your other threads won't have much time to run, as this thread will use many CPU cycles.
You should design something like this:
while(true) {
    Sleep(100); // let's say you want a precision of 100 ms
    // Do the compare-time stuff here
}
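Combining the two notes above, a sketch of the polling loop with a coarse sleep might look like this (nanosleep() is POSIX; the 100 ms period and the timeout_seconds variable are arbitrary choices for illustration):
#include <time.h>

time_t start = time(NULL);
while (time(NULL) < start + timeout_seconds) {
    /* check for the event here; break when it fires */

    struct timespec ts = { 0, 100 * 1000 * 1000 }; /* sleep 100 ms */
    nanosleep(&ts, NULL);
}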
If you need precise timing and are using different threads/processes, use mutexes (semaphores with an increment/decrement of 1) or critical sections to make sure the time comparison in your function is not interrupted by another process/thread of your own.
I believe your Red Hat is a System V type system, so you can synchronise using IPC.

Assign delays for 1 ms or 2 ms in C?

I'm using code to configure a simple robot. I'm using WinAVR, and the code used there is similar to C, but without the stdio.h libraries and such, so code for simple stuff has to be written manually (for example, converting decimal numbers to hexadecimal is a multi-step procedure involving ASCII character manipulation).
An example of the code used is (just to show you what I'm talking about :) )
.
.
.
DDRA = 0x00;
A = adc(0); // Right-hand sensor
u = A>>4;
l = A&0x0F;
TransmitByte(h[u]);
TransmitByte(h[l]);
TransmitByte(' ');
.
.
.
For some circumstances, I must use WinAVR and cannot use external libraries (such as stdio.h). Anyway, I want to apply a signal with a pulse width of 1 ms or 2 ms via a servo motor. I know which port to set and such; all I need to do is apply a delay to keep that port set before clearing it.
Now, I know that to create delays we can use empty for loops, such as:
int value = **??**;
for(i = 0; i < value; i++)
    ;
What value am I supposed to put in "value" for a 1 ms loop?
Chances are you'll have to calculate a reasonable value, then look at the signal that's generated (e.g., with an oscilloscope) and adjust your value until you hit the right time range. Given that you apparently have a 2:1 margin, you might hit it reasonably close the first time, but I wouldn't bet much on it.
For your first approximation, generate an empty loop, count the instruction cycles for one iteration, and multiply that by the time for one clock cycle. That should give at least a reasonable approximation of the time taken by a single execution of the loop, so dividing the time you need by that should get you into the ballpark for the right number of iterations.
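As a worked example (illustrative numbers only): at an 8 MHz clock one cycle is 125 ns, so if the compiled loop body takes, say, 4 cycles per iteration (500 ns), 1 ms needs roughly 1 ms / 500 ns = 2000 iterations:
/* Rough busy-wait sketch; 2000 assumes 8 MHz and 4 cycles per iteration.
   volatile keeps the compiler from optimizing the empty loop away. */
volatile unsigned int i;
for (i = 0; i < 2000; i++)
    ;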
Edit: I should also note, however, that (at least most) AVRs have on-board timers, so you might be able to use them instead. This can 1) let you do other processing and/or 2) reduce power consumption for the duration.
If you do use delay loops, you might want to use AVR-libc's delay loop utilities to handle the details.
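Those utilities live in AVR-libc's <util/delay_basic.h>; for example, _delay_loop_2() burns exactly four CPU cycles per iteration, so at an assumed 8 MHz clock 1 ms comes out to 2000 iterations:
#include <util/delay_basic.h>

/* 1 ms at 8 MHz: 8,000,000 cycles/s * 0.001 s / 4 cycles per iteration */
_delay_loop_2(2000);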
If my program is simple enough that there is no need for explicit timer programming, but it should be portable, one of my choices for a defined delay would be AVR-libc's delay function:
#include <util/delay.h> /* requires F_CPU to be defined */

_delay_ms(2); /* sleeps 2 ms */
Is this going into a real robot? All you have is a CPU, with no other integrated circuits that can give a measure of time?
If both answers are 'yes', well... if you know the exact timing of the operations, you can use the loop to create precise delays. Output your code as assembly and look at the exact sequence of instructions used; then check the processor's manual, which will have the cycle counts.
If you need a more precise time value, you should employ an interrupt service routine based on an internal timer. Remember that a for loop is a blocking construct, so while it is iterating the rest of your program is blocked. You could set up a timer-based ISR with a global variable that counts up by 1 every time the ISR runs, and then use that variable in an if statement to set the pulse width. Also, that core probably supports PWM for use with RC-type servos, so that may be a better route.
This is a really neat little tasker that I use sometimes. It's for an AVR.
************************Header File***********************************
// Scheduler data structure for storing task data
typedef struct
{
    // Pointer to task
    void (* pTask)(void);
    // Initial delay in ticks
    unsigned int Delay;
    // Periodic interval in ticks
    unsigned int Period;
    // RunMe flag (indicating when the task is due to run)
    unsigned char RunMe;
} sTask;
// Function prototypes
//-------------------------------------------------------------------
void SCH_Init_T1(void);
void SCH_Start(void);
// Core scheduler functions
void SCH_Dispatch_Tasks(void);
unsigned char SCH_Add_Task(void (*)(void), const unsigned int, const unsigned int);
unsigned char SCH_Delete_Task(const unsigned char);
// Maximum number of tasks
// MUST BE ADJUSTED FOR EACH NEW PROJECT
#define SCH_MAX_TASKS (1)
************************Header File***********************************
************************C File***********************************
#include "SCH_AVR.h"
#include <avr/io.h>
#include <avr/interrupt.h>
// The array of tasks
sTask SCH_tasks_G[SCH_MAX_TASKS];
/*------------------------------------------------------------------*-
SCH_Dispatch_Tasks()
This is the 'dispatcher' function. When a task (function)
is due to run, SCH_Dispatch_Tasks() will run it.
This function must be called (repeatedly) from the main loop.
-*------------------------------------------------------------------*/
void SCH_Dispatch_Tasks(void)
{
    unsigned char Index;

    // Dispatches (runs) the next task (if one is ready)
    for(Index = 0; Index < SCH_MAX_TASKS; Index++)
    {
        if((SCH_tasks_G[Index].RunMe > 0) && (SCH_tasks_G[Index].pTask != 0))
        {
            (*SCH_tasks_G[Index].pTask)(); // Run the task
            SCH_tasks_G[Index].RunMe -= 1; // Reset / reduce RunMe flag

            // Periodic tasks will automatically run again
            // - if this is a 'one shot' task, remove it from the array
            if(SCH_tasks_G[Index].Period == 0)
            {
                SCH_Delete_Task(Index);
            }
        }
    }
}
/*------------------------------------------------------------------*-
SCH_Add_Task()
Causes a task (function) to be executed at regular intervals
or after a user-defined delay
pFunction - The name of the function which is to be scheduled.
NOTE: All scheduled functions must be 'void, void' -
that is, they must take no parameters, and have
a void return type.
DELAY - The interval (TICKS) before the task is first executed
PERIOD - If 'PERIOD' is 0, the function is only called once,
at the time determined by 'DELAY'. If PERIOD is non-zero,
then the function is called repeatedly at an interval
determined by the value of PERIOD (see below for examples
which should help clarify this).
RETURN VALUE:
Returns the position in the task array at which the task has been
added. If the return value is SCH_MAX_TASKS then the task could
not be added to the array (there was insufficient space). If the
return value is < SCH_MAX_TASKS, then the task was added
successfully.
Note: this return value may be required, if a task is
to be subsequently deleted - see SCH_Delete_Task().
EXAMPLES:
Task_ID = SCH_Add_Task(Do_X,1000,0);
Causes the function Do_X() to be executed once after 1000 sch ticks.
Task_ID = SCH_Add_Task(Do_X,0,1000);
Causes the function Do_X() to be executed regularly, every 1000 sch ticks.
Task_ID = SCH_Add_Task(Do_X,300,1000);
Causes the function Do_X() to be executed regularly, every 1000 ticks.
Task will be first executed at T = 300 ticks, then 1300, 2300, etc.
-*------------------------------------------------------------------*/
unsigned char SCH_Add_Task(void (*pFunction)(), const unsigned int DELAY, const unsigned int PERIOD)
{
    unsigned char Index = 0;

    // First find a gap in the array (if there is one)
    // Note: bounds check first, to avoid reading past the end of the array
    while((Index < SCH_MAX_TASKS) && (SCH_tasks_G[Index].pTask != 0))
    {
        Index++;
    }

    // Have we reached the end of the list?
    if(Index == SCH_MAX_TASKS)
    {
        // Task list is full, return an error code
        return SCH_MAX_TASKS;
    }

    // If we're here, there is a space in the task array
    SCH_tasks_G[Index].pTask = pFunction;
    SCH_tasks_G[Index].Delay = DELAY;
    SCH_tasks_G[Index].Period = PERIOD;
    SCH_tasks_G[Index].RunMe = 0;

    // return position of task (to allow later deletion)
    return Index;
}
/*------------------------------------------------------------------*-
SCH_Delete_Task()
Removes a task from the scheduler. Note that this does
*not* delete the associated function from memory:
it simply means that it is no longer called by the scheduler.
TASK_INDEX - The task index. Provided by SCH_Add_Task().
RETURN VALUE: RETURN_ERROR or RETURN_NORMAL
-*------------------------------------------------------------------*/
unsigned char SCH_Delete_Task(const unsigned char TASK_INDEX)
{
    // Return_code can be used for error reporting, NOT USED HERE THOUGH!
    unsigned char Return_code = 0;

    SCH_tasks_G[TASK_INDEX].pTask = 0;
    SCH_tasks_G[TASK_INDEX].Delay = 0;
    SCH_tasks_G[TASK_INDEX].Period = 0;
    SCH_tasks_G[TASK_INDEX].RunMe = 0;

    return Return_code;
}
/*------------------------------------------------------------------*-
SCH_Init_T1()
Scheduler initialisation function. Prepares scheduler
data structures and sets up timer interrupts at required rate.
You must call this function before using the scheduler.
-*------------------------------------------------------------------*/
void SCH_Init_T1(void)
{
    unsigned char i;

    for(i = 0; i < SCH_MAX_TASKS; i++)
    {
        SCH_Delete_Task(i);
    }

    // Set up Timer 1
    // Values for 1ms and 10ms ticks are provided for various crystals
    OCR1A = 15000;  // 10ms tick, Crystal 12 MHz
    //OCR1A = 20000; // 10ms tick, Crystal 16 MHz
    //OCR1A = 12500; // 10ms tick, Crystal 10 MHz
    //OCR1A = 10000; // 10ms tick, Crystal 8 MHz
    //OCR1A = 2000;  // 1ms tick, Crystal 16 MHz
    //OCR1A = 1500;  // 1ms tick, Crystal 12 MHz
    //OCR1A = 1250;  // 1ms tick, Crystal 10 MHz
    //OCR1A = 1000;  // 1ms tick, Crystal 8 MHz
    TCCR1B = (1 << CS11) | (1 << WGM12); // Timer clock = system clock/8, CTC mode
    TIMSK |= 1 << OCIE1A; // Timer 1 Output Compare A Match Interrupt Enable
}
/*------------------------------------------------------------------*-
SCH_Start()
Starts the scheduler, by enabling interrupts.
NOTE: Usually called after all regular tasks are added,
to keep the tasks synchronised.
NOTE: ONLY THE SCHEDULER INTERRUPT SHOULD BE ENABLED!!!
-*------------------------------------------------------------------*/
void SCH_Start(void)
{
    sei();
}
/*------------------------------------------------------------------*-
SCH_Update
This is the scheduler ISR. It is called at a rate
determined by the timer settings in SCH_Init_T1().
-*------------------------------------------------------------------*/
ISR(TIMER1_COMPA_vect)
{
    unsigned char Index;

    for(Index = 0; Index < SCH_MAX_TASKS; Index++)
    {
        // Check if there is a task at this location
        if(SCH_tasks_G[Index].pTask)
        {
            if(SCH_tasks_G[Index].Delay == 0)
            {
                // The task is due to run: increment the 'RunMe' flag
                SCH_tasks_G[Index].RunMe += 1;

                if(SCH_tasks_G[Index].Period)
                {
                    // Schedule periodic tasks to run again
                    SCH_tasks_G[Index].Delay = SCH_tasks_G[Index].Period;
                    SCH_tasks_G[Index].Delay -= 1;
                }
            }
            else
            {
                // Not yet ready to run: just decrement the delay
                SCH_tasks_G[Index].Delay -= 1;
            }
        }
    }
}
// ------------------------------------------------------------------
************************C File***********************************
Most ATmega AVR chips, which are commonly used to make simple robots, have a feature known as pulse-width modulation (PWM) that can be used to control servos. This blog post might serve as a quick introduction to controlling servos using PWM. If you were to look at the Arduino platform's servo control library, you would find that it also uses PWM.
This might be a better choice than relying on running a loop a constant number of times, as changes to compiler optimization flags or the chip's clock speed could potentially break such a simple delay function.
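As a sketch of that idea (assuming an ATmega328-class chip at 16 MHz with the servo signal on OC1A/PB1; adjust registers and numbers for your actual part):
#include <avr/io.h>

/* 50 Hz servo PWM on Timer1, fast PWM mode 14 (TOP = ICR1).
   16 MHz / 8 prescaler = 2 MHz timer clock, so 40000 counts = 20 ms. */
void servo_pwm_init(void)
{
    DDRB |= (1 << PB1);                                 /* OC1A as output */
    TCCR1A = (1 << COM1A1) | (1 << WGM11);              /* non-inverting PWM */
    TCCR1B = (1 << WGM13) | (1 << WGM12) | (1 << CS11); /* mode 14, clk/8 */
    ICR1 = 39999;                                       /* 20 ms period */
    OCR1A = 2000;                                       /* 1 ms pulse */
}

/* 1 ms pulse: OCR1A = 2000; 2 ms pulse: OCR1A = 4000 */
void servo_set_pulse_ms(unsigned char ms)
{
    OCR1A = ms * 2000U;
}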
You should almost certainly have an interrupt configured to run code at a predictable interval. If you look in the example programs supplied with your CPU, you'll probably find an example of such.
Typically, one will use a word/longword of memory to hold a timer, which is incremented on each interrupt. If your timer interrupt runs 10,000 times/second and increments interrupt_counter by one each time, a 'wait 1 ms' routine could look like:
extern volatile unsigned long interrupt_counter;
unsigned long temp_value = interrupt_counter;

/* Wait until 10 ticks (1 ms at 10,000 ticks/second) have elapsed */
do {} while((interrupt_counter - temp_value) < 10);
Note that as written the code will wait between 900 µs and 1000 µs. If one changed the comparison to less-than-or-equal, it would wait between 1000 µs and 1100 µs. If one needs to do something five times at 1 ms intervals, waiting some arbitrary time up to 1 ms the first time, one could write the code as:
extern volatile unsigned long interrupt_counter;
unsigned long temp_value = interrupt_counter;

for (int i = 0; i < 5; i++)
{
    do {} while(!((temp_value - interrupt_counter) & 0x80000000)); /* Wait for underflow; assumes 32-bit unsigned long */
    temp_value += 10;
    do_action_thing();
}
This should run the do_action_thing() calls at precise intervals even if they take several hundred microseconds to complete. If they sometimes take over 1 ms, the system will try to run each one at the "proper" time (so if one call takes 1.3 ms and the next one finishes instantly, the following one will happen 700 µs later).
