How to engineer a power-loss safe RTC time switch?

I'm using an ESP32 with a DS3231 real-time clock. The system should automatically switch an output on and off based on a user-programmable time (HH:MM) on a daily basis. The on and off hours/minutes are stored in flash, so they are non-volatile. The duration the output stays on is hardcoded.
I'm trying to develop a function, called once per second, that checks whether the output should be turned on or off based on the current time provided by the DS3231 RTC. This should be safe against misbehaviour if the power fails: if, for example, power is temporarily lost during an on-interval, the output should be set again for the remainder of the interval once power is reapplied.
How can I calculate, relative to the on-time, whether the current time falls within the on-interval?
const int8_t light_ontime_h = 2;  // Hours the output should stay on
const int8_t light_ontime_m = 42; // Minutes the output should stay on

struct tm currenttime; // Current time; refreshed elsewhere in the program from the RTC
struct tm ontime;      // Hours and minutes to turn on. Loaded from NVS on each reboot
                       // or on change, so the struct only holds valid HH:MM info;
                       // date etc. is invalid.

// This is called each second
void checkTime() {
    struct tm offtime = ontime; // copy so the remaining fields are at least defined
    offtime.tm_hour += light_ontime_h;
    offtime.tm_min  += light_ontime_m;
    // Normalize time
    mktime(&offtime);
    // Does not work if power is lost and the exact on/off minute was missed
    if ((currenttime.tm_hour == ontime.tm_hour) && (currenttime.tm_min == ontime.tm_min)) {
        // Turn output on
    }
    if ((currenttime.tm_hour == offtime.tm_hour) && (currenttime.tm_min == offtime.tm_min)) {
        // Turn output off
    }
}
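For reference, one robust pattern (a sketch under assumptions, not code from the post; the helper name `in_on_interval` is invented) is to reduce both the current time and the programmed on-time to minutes since midnight and test interval membership on every call, rather than matching the exact switch-on minute. A reboot inside the interval then simply re-asserts the output on the next check:

```c
#include <stdbool.h>

// Membership test on a daily schedule, in minutes since midnight.
// Correctly handles intervals that wrap past midnight
// (e.g. on at 23:30 with a 2h42m duration, off at 02:12).
static bool in_on_interval(int now_min, int on_min, int duration_min)
{
    int elapsed = (now_min - on_min + 24 * 60) % (24 * 60);
    return elapsed < duration_min;
}
```

In checkTime() this would be driven with now_min = currenttime.tm_hour * 60 + currenttime.tm_min, on_min derived from ontime the same way, and duration_min = light_ontime_h * 60 + light_ontime_m; the output is then set or cleared from the return value on every call.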

Related

time() function is always returning maximum value "TIME_MAX" (or) 0xFFFFFFFF

I am working on the Infineon TC23X microcontroller, and my time() function always returns the maximum value 0xFFFFFFFF.
I am using the compiler "Tasking VX Toolset for Tricore v6.3r1".
Can someone please help me resolve this issue, so that time() returns the time since the Epoch (00:00:00 UTC, 1 January 1970), measured in seconds?
I have tried time(NULL), time(0) and time(&Var).
Because the availability of a time/date source is specific to the hardware platform, time() is provided as a default implementation that simply returns "not available" (-1).
If your platform has a time/date source such as an RTC, GNSS or an NTP client, you can override the default implementation to utilise that source, simply by defining a replacement function - for example:
time_t time( std::time_t *timeptr )
{
    time_t epoch_time = 0 ;

    // Your code to get RTC data into the tm struct here
    struct tm time_struct = { .tm_mday = ...,
                              .tm_mon  = ..., // January == 0
                              .tm_year = ..., // Years since 1900
                              .tm_hour = ...,
                              .tm_min  = ...,
                              .tm_sec  = ... } ;

    // Convert tm struct to UNIX epoch time
    epoch_time = std::mktime( &time_struct ) ;

    if( timeptr != 0 )
    {
        *timeptr = epoch_time ;
    }
    return epoch_time ;
}
A more efficient method is to initialise a 1Hz periodic counter with the RTC source on first use, then thereafter return the value of the counter:
volatile static time_t epoch_time = TIME_MAX ;

void timer_ISR_1Hz( void )
{
    epoch_time++ ;
}

time_t time( std::time_t *timeptr )
{
    if( epoch_time == TIME_MAX )
    {
        // Your code to get RTC data into the tm struct here
        struct tm time_struct = { .tm_mday = ...,
                                  .tm_mon  = ..., // January == 0
                                  .tm_year = ..., // Years since 1900
                                  .tm_hour = ...,
                                  .tm_min  = ...,
                                  .tm_sec  = ... } ;

        // Convert tm struct to UNIX epoch time
        epoch_time = std::mktime( &time_struct ) ;

        // Start 1Hz timer here
        ...
    }

    time_t t = epoch_time ;
    if( timeptr != 0 )
    {
        *timeptr = t ;
    }
    return t ;
}
The above solution also works where you have no RTC source, if you initialise the epoch time from user-supplied time/date input after power-on.
When reading RTC hardware (or any other time source) you need to make sure the time is consistent. It is possible, for example, to read 59 seconds just as it rolls over to 00 seconds, and then read the minutes, ending up with say 20 minutes 59 seconds when it should have been 19 minutes 59 seconds or 20 minutes 00 seconds. The same applies to the roll-over of minutes, hours, days, months and years.
You might further wish to synchronise the second update with UTC seconds via a GNSS 1PPS or NTP for example. It depends on what level of precision you might require.
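That read-until-consistent idea can be sketched generically. This is illustrative and not tied to any particular RTC: rtc_read_raw() here simulates a driver that reads the time registers field by field and returns one torn snapshot (the minute rolls over mid-read) before settling:

```c
#include <string.h>

struct rtc_time { int sec, min, hour; };

static int raw_calls = 0;

// Simulated raw register read: the first call is torn (minute read as 19
// before the rollover, but it is really 20); later calls are consistent.
static void rtc_read_raw(struct rtc_time *t)
{
    t->sec  = 0;
    t->min  = (raw_calls++ == 0) ? 19 : 20;
    t->hour = 8;
}

// Read until two consecutive snapshots agree, so a rollover between
// register reads cannot yield e.g. 20:59 when it should be 19:59 or 20:00.
static struct rtc_time rtc_read_consistent(void)
{
    struct rtc_time a, b;
    do {
        rtc_read_raw(&a);
        rtc_read_raw(&b);
    } while (memcmp(&a, &b, sizeof a) != 0);
    return b;
}
```

In a real driver, rtc_read_raw() would be a burst read of the device's time registers; the double-read loop costs one extra read in the common case and only retries across an actual rollover.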
Even though you have that "autosar" tag set, the TC23x specs seem to only allow AUTOSAR Classic to be run on them. In that case, and in automotive generally, I wonder whether you are actually allowed to use the time() function at all.
First of all, the part of the C standard that a freestanding environment has to support certainly does not include time.h.
Second, only AUTOSAR Adaptive supports a (very small) subset of POSIX, the POSIX profile PSE51 defined by IEEE 1003.13.
But, as stated above, AUTOSAR Adaptive is not really something to run on this TC23x.
So, in general, I would suggest you forget about using the C standard library functions here. Look into the AUTOSAR components, their features and interfaces.
You should probably look at AUTOSAR features like the StbM (Synchronized Time-Base Manager), in order to first of all have support for a synchronized global time base, which is then distributed through the vehicle over several possible TimeGateways to your ECU.
The StbM, for example, has to be configured like the rest of your AUTOSAR stack from your SystemDescription (GlobalTimeDomains and TimeMaster/TimeSlave/TimeGateway), with a reference to a purely local timer (e.g. an MCAL GPT timer) as the actual local time base, even if not yet synchronized.
Then you can use for example this StbM interface:
Std_ReturnType StbM_GetCurrentTime(
    StbM_SynchronizedTimeBaseType timeBaseId,
    StbM_TimeStampType* timeStamp,
    StbM_UserDataType* userData);
/** Variables of this type are used for expressing time stamps including relative time and
* absolute calendar time.
* The absolute time starts from 1970-01-01. <----
* 0 to 281474976710655s == 3257812230d [0xFFFF FFFF FFFF]
* 0 to 999999999ns [0x3B9A C9FF]
* invalid value in nanoseconds: [0x3B9A CA00] to [0x3FFF FFFF]
* Bit 30 and 31 reserved, default: 0
*/
typedef struct {
    StbM_TimeBaseStatusType timeBaseStatus;
    uint32 nanoSeconds;
    uint32 seconds;   // lower 32 bits of the 48-bit seconds
    uint16 secondsHi; // upper 16 bits of the 48-bit seconds
} StbM_TimeStampType;
// There is also an "extended" version of the function and structure available
// using uint64 type instead the split uint32/uint16 for the seconds part
typedef uint8 StbM_TimeBaseStatusType;
// Bit 0 (LSB): 0x00: No Timeout on receiving Synchronisation Messages
// 0x01: Timeout on receiving Synchronisation Messages
#define TIMEOUT 0x01
// Bit 2 0x00: Local Time Base is synchronous to Global Time Master
// 0x04: Local Time Base updates are based on a Time Gateway below the Global Time Master
#define SYNC_TO_GATEWAY 0x04
// Bit 3 0x00: Local Time Base is based on Local Time Base reference clock only (never synchronized with Global Time Base)
// 0x08: Local Time Base was at least synchronized with Global Time Base one time
#define GLOBAL_TIME_BASE 0x08
// Bit 4 0x00: No leap into the future within the received time for Time Base
// 0x10: Leap into the future within the received time for Time Base exceeds a configured threshold
#define TIMELEAP_FUTURE 0x10
// Bit 5 0x00: No leap into the past within the received time for Time Base
// 0x20: Leap into the past within the received time for Time Base exceeds a configured threshold
#define TIMELEAP_PAST 0x20
like this:
const StbM_SynchronizedTimeBaseType timeBaseId = YOUR_TIME_DOMAIN_ID; // as configured in the AUTOSAR config tool
StbM_TimeStampType timeStamp;
StbM_UserDataType userData; // up to 3 bytes transmitted from the TimeMaster (system-dependent)
Std_ReturnType ret;

SchM_Enter_ExclusiveArea_0();
ret = StbM_GetCurrentTime(timeBaseId, &timeStamp, &userData);
SchM_Exit_ExclusiveArea_0();

if (ret == E_NOT_OK) {
    // Failure handling, and maybe report the error
    // Is the StbM and AUTOSAR stack ok and running?
    // Was this the correct timeBaseId?
} else {
    if (timeStamp.timeBaseStatus & TIMEOUT) {
        // TimeSync messages not received --> running on local time (time since the ECU was turned on)
    }
    if (timeStamp.timeBaseStatus & SYNC_TO_GATEWAY) {
        // Synced to a gateway, but not the global time master --> synchronized to the gateway's time
        // (possible drift from that gateway itself, or time since the gateway was turned on)
    } else {
        // If the bit is not set, we are synchronized to the global time master
    }
    // ...
    // work with timeStamp.seconds and timeStamp.nanoSeconds
}
If there is no StbM, or no synchronized time base, the ECUs usually just run on their local time since ECU power-on, whenever that was.

an efficient way to detect when the system's hour changed from xx:59 to xy:00

I have an application on Linux that needs to change some parameters each hour, e.g. at 11:00, 12:00, etc., and the system's date can be changed by the user at any time.
Is there any signal or POSIX function that would notify me when the hour changes from xx:59 to xx+1:00?
Normally I use localtime(3) to fetch the current time each second and then compare whether the minute part is equal to 0. However, it does not look like a good way to do it: to detect one change, I need to call the same function every second for an hour. I also run the code on an embedded board, so it would be good to use fewer resources.
Here is an example code how I do it:
static char *fetch_time() { // I use this fcn for some other purpose to fetch the time info
    char *p;
    time_t rawtime;
    struct tm *timeinfo;
    char buffer[13];

    time(&rawtime);
    timeinfo = localtime(&rawtime);
    strftime(buffer, 13, "%04Y%02m%02d%02k%02M", timeinfo);
    p = (char *)malloc(sizeof(buffer));
    strcpy(p, buffer);
    return p;
}

static int hour_change_check() {
    char *p = fetch_time();
    char current_minute[3] = {'\0'};
    current_minute[0] = p[10];
    current_minute[1] = p[11];
    int current_minute_as_int = atoi(current_minute);
    if (current_minute_as_int == 0) {
        printf("current_min: %d\n", current_minute_as_int);
        free(p);
        return 1;
    }
    free(p);
    return 0;
}

int main(void) {
    while (1) {
        int x = hour_change_check();
        printf("x:%d\n", x);
        sleep(1);
    }
    return 0;
}
There is no such signal, but traditionally the method of waiting until some target time is to compute how long it is between "now" and "then", and then call sleep():
now = time(NULL);
when = (some calculation);
if (when > now)
    sleep(when - now);
If you need to be very precise about the transition from, e.g., 3:59:59 to 4:00:00, you may want to sleep for a slightly shorter time in case of time adjustments due to leap seconds. (If you are running in a portable device in which time zones can change, you also need to worry about picking up the new location, and if it runs on a half-hour offset, redo all computations. There's even Solar Time in Saudi Arabia....)
Edit: per the suggestion from R.., if clock_nanosleep() is available, calculate a timespec value for the absolute wakeup time and call it with the TIMER_ABSTIME flag. See http://pubs.opengroup.org/onlinepubs/009695399/functions/clock_nanosleep.html for the definition for clock_nanosleep(). However, if time is allowed to step backwards (e.g., localtime with zone shifts), you may still have to do some maintenance checking.
Have you actually measured the overhead of your solution of polling the time once per second (or even twice, given some of your other comments)?
The number of instructions involved is minimal and there is no looping, so at worst the CPU spends maybe 100 microseconds (0.1 ms) per poll. This estimate is very dependent on the processor used in your embedded system and its clock speed, but the idea is that the polling logic uses perhaps 1/1000 of the total time available.
Also, you could optimize your hour_change_check code to do all of the time calculations inline, rather than calling another function that issues a malloc which has to be immediately freed. And if this is an embedded *nix system, you can run this polling logic in its own thread, so that when it issues sleep() it does not interfere with or delay other units of work.
Hence, measure the problem and see if it is significant. The polling's cost must be balanced against the requirement that when a user changes the time, the hour change MUST be detected. Polling every second will catch the hour rollover even if the user changes the time, but is the overhead worth it? Well, how much overhead is there, exactly?

Windows Driver Timestamp function

I am modifying an existing Windows kernel device driver, and in it I need to capture a timestamp. I intended to include time.h and call the clock() function, but under Visual Studio the link fails, so I took that as a sign that I need to work within the driver's own libraries.
I found the following function, KeInitializeTimer, and KeSetTimerEx but these are used if I plan to set up a timer and wake up on it. What I really need is something that will give me a timestamp.
Any ideas?
I am updating my question with an answer for others to benefit from my findings.
To get a timestamp, you can use KeQueryTickCount(). This routine gives you the count of interval interrupts that have occurred since the system was booted. However, to find out whether a given amount of time has passed since the last timestamp you captured, you also need to ask the system how long each interval clock interrupt represents:
ULONG KeQueryTimeIncrement() gives you the number of 100-nanosecond units per tick.
Example:
LARGE_INTEGER timeStamp;
KeQueryTickCount(&timeStamp);
Please note that LARGE_INTEGER (PLARGE_INTEGER is a pointer to it) is defined as follows:
#if defined(MIDL_PASS)
typedef struct _LARGE_INTEGER {
#else // MIDL_PASS
typedef union _LARGE_INTEGER {
struct {
ULONG LowPart;
LONG HighPart;
} DUMMYSTRUCTNAME;
struct {
ULONG LowPart;
LONG HighPart;
} u;
#endif //MIDL_PASS
LONGLONG QuadPart;
} LARGE_INTEGER;
So lets say, you want to see if 30 seconds passed since you last took a timestamp, you can do the following:
ULONG tickIncrement, ticks;
LARGE_INTEGER waitTillTimeStamp, currTimeStamp;

tickIncrement = KeQueryTimeIncrement();
// 1 s is 1,000,000,000 ns; since KeQueryTimeIncrement() reports
// 100 ns units, the per-second constant is 10,000,000
ticks = (30 * 10000000UL) / tickIncrement;
KeQueryTickCount(&waitTillTimeStamp);
waitTillTimeStamp.QuadPart += ticks;

<.....Some code and time passage....>

KeQueryTickCount(&currTimeStamp);
if (waitTillTimeStamp.QuadPart < currTimeStamp.QuadPart) {
    <...Do whatever...>
}
Another example to help you understand this, what if you want to translate the timestamp you got into a time value such as milliseconds.
LARGE_INTEGER mSec, currTimeStamp;
ULONG timeIncrement;

timeIncrement = KeQueryTimeIncrement();
KeQueryTickCount(&currTimeStamp);
// currTimeStamp.QuadPart * timeIncrement is in 100 ns units;
// 1 ms is 10,000 such units, hence the divide by 10,000.
mSec.QuadPart = (currTimeStamp.QuadPart * timeIncrement) / 10000;
Remember this example is for demonstration purposes, mSec is not the current time in milliseconds. Based on the APIs used above, it is merely the number of milliseconds that have elapsed since the system was started.
You can also use GetTickCount(), but this returns a DWORD and thus can only give you the number of milliseconds since the system was started, for up to 49.7 days.
I know this is a 10-year-old question, but... better late than never. I disagree with the OP's answer.
Proper solution:
// The KeQuerySystemTime routine obtains the current system time.
LARGE_INTEGER SystemTime;
KeQuerySystemTime(&SystemTime);
// The ExSystemTimeToLocalTime routine converts a GMT system time value to the local system time for the current time zone.
LARGE_INTEGER LocalTime;
ExSystemTimeToLocalTime(&SystemTime, &LocalTime);
// The RtlTimeToTimeFields routine converts system time into a TIME_FIELDS structure.
TIME_FIELDS TimeFields;
RtlTimeToTimeFields(&LocalTime, &TimeFields);

Assign delays for 1 ms or 2 ms in C?

I'm using code to configure a simple robot. I'm using WinAVR, and the code used there is similar to C, but without stdio.h libraries and such, so code for simple stuff should be entered manually (for example, converting decimal numbers to hexadecimal numbers is a multiple-step procedure involving ASCII character manipulation).
Example of code used is (just to show you what I'm talking about :) )
.
.
.
DDRA = 0x00;
A = adc(0); // Right-hand sensor
u = A>>4;
l = A&0x0F;
TransmitByte(h[u]);
TransmitByte(h[l]);
TransmitByte(' ');
.
.
.
For some circumstances I must use WinAVR and cannot use external libraries (such as stdio.h). Anyway, I want to apply a signal with a pulse width of 1 ms or 2 ms to a servo motor. I know which port to set and such; all I need to do is apply a delay to keep that port set before clearing it.
Now, I know how to set delays: we should create empty for loops such as:

int value = /* ?? */;
for (i = 0; i < value; i++)
    ;

What value am I supposed to put in "value" for a 1 ms loop?
Chances are you'll have to calculate a reasonable value, then look at the signal that's generated (e.g., with an oscilloscope) and adjust your value until you hit the right time range. Given that you apparently have a 2:1 margin, you might hit it reasonably close the first time, but I wouldn't bet much on it.
For your first approximation, generate an empty loop and count the instruction cycles for one loop, and multiply that by the time for one clock cycle. That should give at least a reasonable approximation of time taken by a single execution of the loop, so dividing the time you need by that should get you into the ballpark for the right number of iterations.
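That first approximation can be written down as: iterations ≈ (CPU frequency × delay) / cycles-per-iteration. The concrete numbers below (8 or 12 MHz clock, 3-4 cycles per empty loop pass) are assumptions for illustration; the real cycle count must come from the generated assembly:

```c
// First-order busy-loop estimate. cycles_per_iter must be read from the
// compiler's assembly output; 4 here is only an assumed figure for a
// typical compare-increment-branch loop body.
static unsigned long delay_iterations(unsigned long f_cpu_hz,
                                      unsigned long delay_us,
                                      unsigned long cycles_per_iter)
{
    return (f_cpu_hz / 1000000UL) * delay_us / cycles_per_iter;
}
// e.g. at 8 MHz, a 1000 us delay at 4 cycles/iteration -> 2000 iterations
```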
Edit: I should also note, however, that (at least most) AVRs have on-board timers, so you might be able to use them instead. This can 1) let you do other processing and/or 2) reduce power consumption for the duration.
If you do use delay loops, you might want to use AVR-libc's delay loop utilities to handle the details.
If my program is simple enough that there is no need for explicit timer programming, but it should stay portable, one of my choices for a defined delay would be AVR-libc's delay functions:

#include <util/delay.h>
_delay_ms(2); // Sleeps 2 ms (requires F_CPU to be defined)
Is this going to go to a real robot? All you have is a CPU, no other integrated circuits that can give a measure of time?
If both answers are 'yes', well... if you know the exact timing for the operations, you can use the loop to create precise delays. Output your code to assembly code, and see the exact sequence of instructions used. Then, check the manual of the processor, it'll have that information.
If you need a more precise time value, you should employ an interrupt service routine based on an internal timer. Remember, a for loop is a blocking construct: while it is iterating, the rest of your program is blocked. You could set up a timer-based ISR with a global variable that counts up by 1 every time the ISR runs, and then use that variable in an if statement to set the pulse width. Also, that core probably supports hardware PWM for use with RC-type servos, so that may be a better route.
This is a really neat little tasker that I use sometimes. It's for an AVR.
************************Header File***********************************
// Scheduler data structure for storing task data
typedef struct
{
// Pointer to task
void (* pTask)(void);
// Initial delay in ticks
unsigned int Delay;
// Periodic interval in ticks
unsigned int Period;
// Runme flag (indicating when the task is due to run)
unsigned char RunMe;
} sTask;
// Function prototypes
//-------------------------------------------------------------------
void SCH_Init_T1(void);
void SCH_Start(void);
// Core scheduler functions
void SCH_Dispatch_Tasks(void);
unsigned char SCH_Add_Task(void (*)(void), const unsigned int, const unsigned int);
unsigned char SCH_Delete_Task(const unsigned char);
// Maximum number of tasks
// MUST BE ADJUSTED FOR EACH NEW PROJECT
#define SCH_MAX_TASKS (1)
************************Header File***********************************
************************C File***********************************
#include "SCH_AVR.h"
#include <avr/io.h>
#include <avr/interrupt.h>
// The array of tasks
sTask SCH_tasks_G[SCH_MAX_TASKS];
/*------------------------------------------------------------------*-
SCH_Dispatch_Tasks()
This is the 'dispatcher' function. When a task (function)
is due to run, SCH_Dispatch_Tasks() will run it.
This function must be called (repeatedly) from the main loop.
-*------------------------------------------------------------------*/
void SCH_Dispatch_Tasks(void)
{
unsigned char Index;
// Dispatches (runs) the next task (if one is ready)
for(Index = 0; Index < SCH_MAX_TASKS; Index++)
{
if((SCH_tasks_G[Index].RunMe > 0) && (SCH_tasks_G[Index].pTask != 0))
{
(*SCH_tasks_G[Index].pTask)(); // Run the task
SCH_tasks_G[Index].RunMe -= 1; // Reset / reduce RunMe flag
// Periodic tasks will automatically run again
// - if this is a 'one shot' task, remove it from the array
if(SCH_tasks_G[Index].Period == 0)
{
SCH_Delete_Task(Index);
}
}
}
}
/*------------------------------------------------------------------*-
SCH_Add_Task()
Causes a task (function) to be executed at regular intervals
or after a user-defined delay
pFunction - The name of the function which is to be scheduled.
NOTE: All scheduled functions must be 'void, void' -
that is, they must take no parameters, and have
a void return type.
DELAY - The interval (TICKS) before the task is first executed
PERIOD - If 'PERIOD' is 0, the function is only called once,
at the time determined by 'DELAY'. If PERIOD is non-zero,
then the function is called repeatedly at an interval
determined by the value of PERIOD (see below for examples
which should help clarify this).
RETURN VALUE:
Returns the position in the task array at which the task has been
added. If the return value is SCH_MAX_TASKS then the task could
not be added to the array (there was insufficient space). If the
return value is < SCH_MAX_TASKS, then the task was added
successfully.
Note: this return value may be required, if a task is
to be subsequently deleted - see SCH_Delete_Task().
EXAMPLES:
Task_ID = SCH_Add_Task(Do_X,1000,0);
Causes the function Do_X() to be executed once after 1000 sch ticks.
Task_ID = SCH_Add_Task(Do_X,0,1000);
Causes the function Do_X() to be executed regularly, every 1000 sch ticks.
Task_ID = SCH_Add_Task(Do_X,300,1000);
Causes the function Do_X() to be executed regularly, every 1000 ticks.
Task will be first executed at T = 300 ticks, then 1300, 2300, etc.
-*------------------------------------------------------------------*/
unsigned char SCH_Add_Task(void (*pFunction)(), const unsigned int DELAY, const unsigned int PERIOD)
{
unsigned char Index = 0;
// First find a gap in the array (if there is one);
// check the index bound first to avoid reading past the end of the array
while((Index < SCH_MAX_TASKS) && (SCH_tasks_G[Index].pTask != 0))
{
Index++;
}
// Have we reached the end of the list?
if(Index == SCH_MAX_TASKS)
{
// Task list is full, return an error code
return SCH_MAX_TASKS;
}
// If we're here, there is a space in the task array
SCH_tasks_G[Index].pTask = pFunction;
SCH_tasks_G[Index].Delay = DELAY;
SCH_tasks_G[Index].Period = PERIOD;
SCH_tasks_G[Index].RunMe = 0;
// return position of task (to allow later deletion)
return Index;
}
/*------------------------------------------------------------------*-
SCH_Delete_Task()
Removes a task from the scheduler. Note that this does
*not* delete the associated function from memory:
it simply means that it is no longer called by the scheduler.
TASK_INDEX - The task index. Provided by SCH_Add_Task().
RETURN VALUE: RETURN_ERROR or RETURN_NORMAL
-*------------------------------------------------------------------*/
unsigned char SCH_Delete_Task(const unsigned char TASK_INDEX)
{
// Return_code can be used for error reporting, NOT USED HERE THOUGH!
unsigned char Return_code = 0;
SCH_tasks_G[TASK_INDEX].pTask = 0;
SCH_tasks_G[TASK_INDEX].Delay = 0;
SCH_tasks_G[TASK_INDEX].Period = 0;
SCH_tasks_G[TASK_INDEX].RunMe = 0;
return Return_code;
}
/*------------------------------------------------------------------*-
SCH_Init_T1()
Scheduler initialisation function. Prepares scheduler
data structures and sets up timer interrupts at required rate.
You must call this function before using the scheduler.
-*------------------------------------------------------------------*/
void SCH_Init_T1(void)
{
unsigned char i;
for(i = 0; i < SCH_MAX_TASKS; i++)
{
SCH_Delete_Task(i);
}
// Set up Timer 1
// Values for 1ms and 10ms ticks are provided for various crystals
OCR1A = 15000; // 10ms tick, Crystal 12 MHz
//OCR1A = 20000; // 10ms tick, Crystal 16 MHz
//OCR1A = 12500; // 10ms tick, Crystal 10 MHz
//OCR1A = 10000; // 10ms tick, Crystal 8 MHz
//OCR1A = 2000; // 1ms tick, Crystal 16 MHz
//OCR1A = 1500; // 1ms tick, Crystal 12 MHz
//OCR1A = 1250; // 1ms tick, Crystal 10 MHz
//OCR1A = 1000; // 1ms tick, Crystal 8 MHz
TCCR1B = (1 << CS11) | (1 << WGM12); // Timer clock = system clock/8
TIMSK |= 1 << OCIE1A; //Timer 1 Output Compare A Match Interrupt Enable
}
/*------------------------------------------------------------------*-
SCH_Start()
Starts the scheduler, by enabling interrupts.
NOTE: Usually called after all regular tasks are added,
to keep the tasks synchronised.
NOTE: ONLY THE SCHEDULER INTERRUPT SHOULD BE ENABLED!!!
-*------------------------------------------------------------------*/
void SCH_Start(void)
{
sei();
}
/*------------------------------------------------------------------*-
SCH_Update
This is the scheduler ISR. It is called at a rate
determined by the timer settings in SCH_Init_T1().
-*------------------------------------------------------------------*/
ISR(TIMER1_COMPA_vect)
{
unsigned char Index;
for(Index = 0; Index < SCH_MAX_TASKS; Index++)
{
// Check if there is a task at this location
if(SCH_tasks_G[Index].pTask)
{
if(SCH_tasks_G[Index].Delay == 0)
{
// The task is due to run, Inc. the 'RunMe' flag
SCH_tasks_G[Index].RunMe += 1;
if(SCH_tasks_G[Index].Period)
{
// Schedule periodic tasks to run again
SCH_tasks_G[Index].Delay = SCH_tasks_G[Index].Period;
SCH_tasks_G[Index].Delay -= 1;
}
}
else
{
// Not yet ready to run: just decrement the delay
SCH_tasks_G[Index].Delay -= 1;
}
}
}
}
// ------------------------------------------------------------------
************************C File***********************************
Most ATmega AVR chips, which are commonly used to make simple robots, have a feature known as pulse-width modulation (PWM) that can be used to control servos. This blog post might serve as a quick introduction to controlling servos using PWM. If you were to look at the Arduino platform's servo control library, you would find that it also uses PWM.
This might be a better choice than relying on running a loop a constant number of times as changes to compiler optimization flags and the chip's clock speed could potentially break such a simple delay function.
You should almost certainly have an interrupt configured to run code at a predictable interval. If you look in the example programs supplied with your CPU, you'll probably find an example of such.
Typically, one will use a word/longword of memory to hold a timer, which will be incremented each interrupt. If your timer interrupt runs 10,000 times/second and increments "interrupt_counter" by one each time, a 'wait 1 ms' routine could look like:
extern volatile unsigned long interrupt_counter;

unsigned long temp_value = interrupt_counter;
do {} while ((interrupt_counter - temp_value) < 10);

Note that, as written, the code will wait between 900 µs and 1000 µs. If one changed the comparison to less-than-or-equal, it would wait between 1000 µs and 1100 µs. If one needs to do something five times at 1 ms intervals, waiting some arbitrary time up to 1 ms before the first iteration, one could write the code as:
extern volatile unsigned long interrupt_counter;

unsigned long temp_value = interrupt_counter;
for (int i = 0; i < 5; i++)
{
    do {} while (!((temp_value - interrupt_counter) & 0x80000000)); /* Wait for underflow */
    temp_value += 10;
    do_action_thing();
}
This should run the do_action_thing() calls at precise intervals even if they take several hundred microseconds to complete. If one sometimes takes over 1 ms, the system will try to run each subsequent call at the "proper" time (so if one call takes 1.3 ms and the next one finishes instantly, the following one will happen 700 µs later).

UTC time stamp on Windows

I have a buffer with the UTC time stamp in C, I broadcast that buffer after every ten seconds. The problem is that the time difference between two packets is not consistent. After 5 to 10 iterations the time difference becomes 9, 11 and then again 10. Kindly help me to sort out this problem.
I am using <time.h> for UTC time.
If your time stamp has only 1 second resolution then there will always be +/- 1 uncertainty in the least significant digit (i.e. +/- 1 second in this case).
Clarification: if you only have a resolution of 1 second then your time values are quantized. The real time, t, represented by such a quantized value has a range of t..t+0.9999. If you take the difference of two such times, t0 and t1, then the maximum error in t1-t0 is -0.999..+0.999, which when quantized is +/-1 second. So in your case you would expect to see difference values in the range 9..11 seconds.
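To make the quantization argument concrete, here is a tiny model (illustrative only): truncate two real-valued send times to whole seconds, as a 1 s resolution stamp does, and difference them. Gaps that are really 10 s plus or minus a few milliseconds then quantize to 9, 10 or 11 depending on where they fall relative to the second boundaries:

```c
// Model a 1 s resolution timestamp by truncating to whole seconds
// (truncation equals floor for the non-negative values used here),
// then difference two stamps taken roughly 10 s apart.
static long quantized_diff(double t0, double t1)
{
    return (long)t1 - (long)t0;
}
// quantized_diff(0.01, 9.999) -> 9   (real gap 9.989 s)
// quantized_diff(0.50, 10.50) -> 10  (real gap exactly 10 s)
// quantized_diff(0.999, 11.0) -> 11  (real gap 10.001 s)
```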
A thread that sleeps for X milliseconds is not guaranteed to sleep for precisely that many milliseconds. I am assuming that you have a statement that goes something like:
while(1) {
...
sleep(10); // Sleep for 10 seconds.
// fetch timestamp and send
}
You will get a more accurate gauge of time if you sleep for shorter periods (say 20 milliseconds) in a loop checking until the time has expired. When you sleep for 10 seconds, your thread gets moved further out of the immediate scheduling priority of the underlying OS.
You might also take into account that the time taken to send the timestamps may vary, depending on network conditions etc. If you do a sleep(10) -> send -> sleep(10) type of loop, the time taken to send is effectively added onto the next sleep(10).
Try something like this (forgive me, my C is a little rusty):

bool expired = false;
double last, current;
double t1, t2;
double difference = 0;

while (1) {
    ...
    last = (double)clock();
    while (!expired) {
        usleep(20000); // sleep for 20 milliseconds
        current = (double)clock();
        if (((current - last) / (double)CLOCKS_PER_SEC) >= (10.0 - difference))
            expired = true;
    }
    t1 = (double)clock();
    // Set and send the timestamp.
    t2 = (double)clock();
    //
    // Calculate how long it took to send the stamps,
    // and take that away from the next sleep cycle.
    //
    difference = (t2 - t1) / (double)CLOCKS_PER_SEC;
    expired = false;
}
If you are not restricted to the standard C library, you could look at the high-resolution timer functionality of Windows, such as the QueryPerformanceFrequency/QueryPerformanceCounter functions.
LARGE_INTEGER freq;
LARGE_INTEGER t1, t2;
//
// Get the resolution of the timer.
//
QueryPerformanceFrequency(&freq);

// Start task.
QueryPerformanceCounter(&t1);
... Do something ....
QueryPerformanceCounter(&t2);

// Very accurate duration in seconds.
double duration = (double)(t2.QuadPart - t1.QuadPart) / (double)freq.QuadPart;
