Hi, I have two devices A and B, both using a 64-bit timer where one tick equals 1 ns. I would like to synchronise the drift between the two timers, and I already know the propagation delay between A and B. Normally I would send the current time of A to B frequently; B would add the propagation delay and use that value as a compare value to sync itself. In my case I can only send the lower 32 bits of the timer every 100 ms. I tried simply replacing the lower part of B with the part of A, but this only works as long as there is no overflow of the 32 bits. Is there a way to detect this overflow? Overflow example (shortened timer):
timer B: 0x00FF
timer A: 0x010A
=> 0x0A to B
replacing leads to
timer B 0x000A instead of timer B 0x010A.
So I need to detect that an overflow has already occurred in A which has not yet occurred in B.
Underflow example (shortened timer):
timer B: 0x0205
timer A: 0x01F6
=> 0xF6 => timer B: 0x02F6 instead of 0x01F6
In this case timer B is faster than timer A.
Here's what I'd use to find the closest possible value of timer A, given B's current time in timerb, the low 32 bits of timer A in message, and the expected propagation delay in delay:
#include <stdint.h>

int64_t expected_timera(const int64_t timerb,
                        const uint32_t message,
                        const uint32_t delay)
{
    /* Candidate assuming A and B are in the same 2^32-tick "epoch". */
    const int64_t timer1 = (timerb / INT64_C(4294967296)) * INT64_C(4294967296)
                         + (int64_t)message + (int64_t)delay;
    /* Candidates one epoch earlier and one epoch later. */
    const int64_t timer0 = timer1 - INT64_C(4294967296);
    const int64_t timer2 = timer1 + INT64_C(4294967296);
    /* Distances from timerb; the candidate nearest to timerb wins. */
    const uint64_t delta0 = timerb - timer0;
    const uint64_t delta1 = (timer1 > timerb) ? timer1 - timerb : timerb - timer1;
    const uint64_t delta2 = timer2 - timerb;

    if (delta0 < delta1)
        return (delta0 < delta2) ? timer0 : timer2;
    else
        return (delta1 <= delta2) ? timer1 : timer2;
}
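For example, on each 100 ms message, B might resynchronise like this (a hypothetical handler of my own, not from the question; adjust_scale() is sketched further below, after the linear-adjustment discussion):

void on_sync_message(uint32_t message, uint32_t delay)
{
    int64_t now = timerb();                       /* B's current estimate */
    int64_t err = expected_timera(now, message, delay) - now;
    adjust_scale(err);  /* feed the error into the slow adjustment below */
}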
Personally, I would also calculate timerb from the actual timer B using a linear adjustment:
static volatile int64_t real_timer_b; /* Actual timer B */
static int64_t timerb_real;           /* real_timer_b at last sync time */
static int64_t timerb_offset;         /* timera at last sync time */
static int32_t timerb_scale;          /* Q30 scale factor, in (0, 2) */

static inline int64_t timerb(void)
{
    /* (ticks elapsed since last sync) * scale / 2^30 + offset.
       The int32_t cast assumes syncs are frequent enough that the
       elapsed ticks always fit in 32 bits. */
    return ((int64_t)(int32_t)(real_timer_b - timerb_real) * (int64_t)timerb_scale)
           / INT64_C(1073741824) + timerb_offset;
}
With timerb_real = 0, timerb_offset = 0, and timerb_scale = 1073741824, timerb() == real_timer_b.
When timera > timerb(), you need to increase timerb_scale; when timera < timerb(), you need to decrease it. You do not want to calculate the exact value every 100 ms, because then jitter in the propagation delay would directly affect timerb(); you want to adjust it slowly, over several dozen seconds. As a bonus, you can keep timerb() monotonic, without sudden jumps forward or backward.
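A minimal sketch of such a slow adjustment, run once per sync message (my own code, not part of the original answer; SCALE_STEP is a hypothetical tuning constant, chosen so convergence takes tens of seconds, and a real implementation would also clamp timerb_scale):

#define SCALE_STEP 16  /* Q30 units per sync; hypothetical, tune to taste */

static void adjust_scale(int64_t err)  /* err = expected timera - timerb() */
{
    /* Re-anchor first so the output stays continuous, then nudge the rate. */
    timerb_offset = timerb();
    timerb_real   = real_timer_b;
    if (err > 0)
        timerb_scale += SCALE_STEP;  /* B lags A: run slightly faster */
    else if (err < 0)
        timerb_scale -= SCALE_STEP;  /* B leads A: run slightly slower */
}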
Network Time Protocol implementations do something very similar, and you might find further implementation help there.
My problem is related to timers on the ATmega32 in general. I am using timer0 on my ATmega32 as a delay timer, with an interrupt every unit of time specified by the caller. So, for example, if the application user asks for an interrupt every 1 second, I initialize timer0 and, based on some equations, delay for one second and then call the application user's ISR.
My problem is in the equations themselves: they require me to use floating-point variables, while the ATmega32 has no floating-point unit, so the compiler increases the code size.
By the way, I am using my timer in Normal mode; this is described in the datasheet of timer0 (page 69).
Here are the equations I use:
T(tick) = prescaler / freq(CPU) -> where T(tick) is the time needed for one timer tick and freq(CPU) is the frequency of the MCU.
T(max_delay) = (2^8) * T(tick) -> where T(max_delay) is the maximum delay the timer can provide until the first overflow, and 2^8 is the maximum number of ticks timer0 can make before overflowing.
Timer(init_value) = (T(max_delay) - T(delay)) / T(tick) -> where Timer(init_value) is the initial value to load into the TCNT0 register at the start and after every overflow, and T(delay) is the user-required delay.
N(of_overflows) = ceil(T(delay) / T(max_delay)) -> where N(of_overflows) is the number of overflows needed to achieve the application user's delay if it is greater than T(max_delay).
For example, with a 1 MHz clock and a prescaler of 1024: T(tick) = 1.024 ms, T(max_delay) = 262.144 ms, so a 1 s delay needs N(of_overflows) = ceil(1000 / 262.144) = 4.
And this is the code I wrote, just as a reference:
/*
* #fn -: -calculatInitValueForTimer0
*
* #params[0] -: -a number in milliseconds to delay for
*
* #brief -: -calculate the initial value needed for timer0 to be inserted into the timer
*
* #return -: -the initial value to be in the timer0
*/
static uint8_t calculatInitValueForTimer0(uint32_t args_u32TimeInMilliSeconds, uint16_t args_u8Prescalar)
{
    /*local variable for time in seconds*/
    double volatile local_f64TimerInSeconds = args_u32TimeInMilliSeconds / 1000.0;
    /*local variable that will contain the value for init timer*/
    uint8_t volatile local_u8TimerInit = 0;
    /*local variable that will contain the time for one tick*/
    double volatile local_f64Ttick;
    /*local variable that will contain the time for max delay*/
    double volatile local_f64Tmaxdelay;

    /*get the tick time*/
    local_f64Ttick = args_u8Prescalar / 1000000.0;
    /*get the max delay*/
    local_f64Tmaxdelay = 256 * local_f64Ttick;

    /*see which init time to be used*/
    if (local_f64TimerInSeconds == (uint32_t) local_f64Tmaxdelay)
    {
        /*only one overflow needed*/
        global_ValueToReachCount = 1;
        /*begin counting from the start*/
        local_u8TimerInit = 0;
    }
    else if (local_f64TimerInSeconds < (uint32_t) local_f64Tmaxdelay)
    {
        /*only one overflow needed*/
        global_ValueToReachCount = 1;
        /*start counting from the offset that gives the requested delay*/
        local_u8TimerInit = (uint8_t)((local_f64Tmaxdelay - local_f64TimerInSeconds) / local_f64Ttick);
    }
    else if (local_f64TimerInSeconds > (uint32_t) local_f64Tmaxdelay)
    {
        /*many overflows needed*/
        global_ValueToReachCount =
            ((local_f64TimerInSeconds / local_f64Tmaxdelay) ==
             ((uint32_t)local_f64TimerInSeconds / (uint32_t)local_f64Tmaxdelay))
                ? (uint32_t)(local_f64TimerInSeconds / local_f64Tmaxdelay)
                : (uint32_t)(local_f64TimerInSeconds / local_f64Tmaxdelay) + 1;
        /*begin counting from the start*/
        local_u8TimerInit = 256 - (uint8_t)((local_f64TimerInSeconds / local_f64Ttick) / global_ValueToReachCount);
    }

    /*return the calculated value*/
    return local_u8TimerInit;
}
Currently I am not handling the case where only one overflow is required, but that is not the issue here.
My problem is that calculating the timer initial value and the number of overflows needed for a long delay is all done with double or float variables, and since the ATmega32 has no FPU the compiler emits a lot of extra code for these equations. Is there any other way to calculate the timer initial value and the number of overflows needed without double or float variables?
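One way to avoid floats entirely (a minimal sketch of my own, not from the question) is to do all the arithmetic in timer ticks rather than seconds. The 1 MHz clock is an assumption matching the 1000000.0 in the code above, and the 64-bit intermediate is only there to survive large millisecond arguments:

#include <stdint.h>

#define F_CPU_HZ 1000000UL  /* assumption: 1 MHz, as implied by the original code */

static uint8_t timer0_init_value(uint32_t ms, uint16_t prescaler,
                                 uint32_t *overflows)
{
    /* Total timer ticks needed for the delay. */
    uint64_t ticks = (uint64_t)ms * F_CPU_HZ / prescaler / 1000u;
    /* Number of 256-tick overflows, rounded up (integer ceil). */
    uint32_t n = (uint32_t)((ticks + 255u) / 256u);
    if (n == 0u)
        n = 1u;
    *overflows = n;
    /* Initial TCNT0 value so that n overflows consume roughly `ticks` ticks;
       a result of 0 means full 256-count overflows. */
    return (uint8_t)(256u - (uint32_t)(ticks / n));
}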
I have to write C code so that the RGB LED on the board breathes. My code is blinking, not breathing. My teacher said that varying brightness is achieved by varying the duty cycle, so in this case I cannot use the PWM peripheral. Please help me understand this code.
#include <stdint.h>
#include <stdlib.h>

#define SYSCTL_RCGC2_R      (*((volatile unsigned long *)0x400FE108))
#define SYSCTL_RCGC2_GPIOF  0x00000020 //port F clock gating control
#define GPIO_PORTF_DATA_R   (*((volatile unsigned long *)0x400253FC))
#define GPIO_PORTF_DIR_R    (*((volatile unsigned long *)0x40025400))
#define GPIO_PORTF_DEN_R    (*((volatile unsigned long *)0x4002551C))

void delay(double sec);

int cond;

int main(void){
    SYSCTL_RCGC2_R = SYSCTL_RCGC2_GPIOF;
    GPIO_PORTF_DIR_R = 0x0E;
    GPIO_PORTF_DEN_R = 0x0E;
    cond = 0;
    while(1){
        GPIO_PORTF_DATA_R = 0x02;
        delay(12.5);
        GPIO_PORTF_DATA_R = 0x00;
        delay(0);
        GPIO_PORTF_DATA_R = 0x02;
        delay(2.5);
        GPIO_PORTF_DATA_R = 0x00;
        delay(10);
        GPIO_PORTF_DATA_R = 0x02;
        delay(5);
        GPIO_PORTF_DATA_R = 0x00;
        delay(7.5);
        GPIO_PORTF_DATA_R = 0x02;
        delay(7.5);
        GPIO_PORTF_DATA_R = 0x00;
        delay(5);
        GPIO_PORTF_DATA_R = 0x02;
        delay(12.5);
        GPIO_PORTF_DATA_R = 0x00;
        delay(0);
        GPIO_PORTF_DATA_R = 0x02;
        delay(7.5);
        GPIO_PORTF_DATA_R = 0x00;
        delay(5);
        GPIO_PORTF_DATA_R = 0x02;
        delay(5);
        GPIO_PORTF_DATA_R = 0x00;
        delay(7.5);
    }
    return 0;
}

void delay(double sec){
    int c=1, d=1;
    for(c=1; c<=sec; c++)
        for(d=1; d<=4000000; d++){}
}
There are two ways you can drive LEDs: either with constant current through a general-purpose I/O pin, or with a repeated duty cycle from PWM. PWM means pulse-width modulation; the pulses are too fast for the human eye to notice, anywhere from about 100 Hz up to 10 kHz or so.
The main advantage of PWM is that you can easily control the average current, which in the case of an RGB LED means the color intensity of the 3 individual LEDs. Most smaller LEDs are rated at 20 mA, so that is usually the maximum current you are aiming for, corresponding to a 100% duty cycle.
The correct way to achieve this is to use PWM.
But what your current code does is "bit bang" a simulated PWM by toggling GPIO pins. That's very crude and inefficient. Microcontrollers normally have a timer and/or PWM hardware peripheral built in, where you just provide a duty cycle and the hardware takes care of everything from there. In this case you would set up 3 PWM hardware channels, ideally clocked at the same time.
LEDs are diodes, with a forward voltage that depends on their chemistry, so you very likely have a different forward voltage for each of the 3 colors. You have to check the datasheet of the RGB LED and look for the luminous intensity, expressed in candela; for small LEDs, very likely millicandela, mcd. Let's assume your green LED has 300 mcd but the red and blue have 100 mcd. They are somewhat linear, or you can probably get away with assuming they are. So a crude correction in this case is to give the green LED one third of the current of the others, in order to get an even mix of colors. Once you have compensated for that, you can give your 3 PWM channels an RGB code and hopefully get the corresponding color.
As a side note, the delay function in your code is completely broken in several ways. The loop iterator of such a busy-delay must be volatile, or any half-decent compiler will simply remove the delay when optimizations are enabled. And there is no reason to use floating point either.
If you are doing it with your delay function, and your delay resolution is in seconds as the code suggests, of course it will "blink": the PWM frequency needs to be faster than human visual perception, say about 50 Hz. Then, to get a smooth variation, you might divide each cycle into say 20 levels, requiring a millisecond delay resolution.
In any case, your delay() function defeats itself by taking a floating-point number of seconds but comparing it with an integer loop counter; it will only ever work in whole seconds.
So given a function delayms( unsigned millisec ) (which I discuss later) then:
#define BREATHE_UPDATE_MS 100
#define BREATHE_MINIMUM 0
#define PWM_PERIOD_MS 20

unsigned tick = 0 ;
unsigned duty_cycle = 0 ;
unsigned cycle_start_tick = 0 ;
unsigned breath_update_tick = 0 ;
int breathe_dir = 1 ;

for(;;)
{
    // If in PWM "mark"...
    if( tick - cycle_start_tick < duty_cycle )
    {
        // LED on
        GPIO_PORTF_DATA_R |= 0x02 ;
    }
    // else PWM "space"
    else
    {
        // LED off
        GPIO_PORTF_DATA_R &= ~0x02 ;
    }

    // Update tick counter
    tick++ ;

    // If PWM cycle complete, restart
    if( tick - cycle_start_tick >= PWM_PERIOD_MS )
    {
        cycle_start_tick = tick ;
    }

    // If time to update duty-cycle...
    if( tick - breath_update_tick > BREATHE_UPDATE_MS )
    {
        breath_update_tick = tick ;
        duty_cycle += breathe_dir ;
        if( duty_cycle >= PWM_PERIOD_MS )
        {
            // Breathe in
            breathe_dir = -1 ;
        }
        else if( duty_cycle == BREATHE_MINIMUM )
        {
            // Breathe out
            breathe_dir = 1 ;
        }
    }

    delayms( 1 ) ;
}
Decrease BREATHE_UPDATE_MS to breathe faster; raise BREATHE_MINIMUM to "shallow breathe", i.e. not dim all the way to off.
If your delay function truly results in a delay resolution in seconds, then approximately and rather crudely:
void delayms( unsigned millisec )
{
    for( unsigned c = 0; c < millisec; c++ )
    {
        for( volatile int d = 0; d < 4000; d++ ) {}
    }
}
However, that suggests a rather low core clock rate, so you may need to adjust it. Note the use of volatile to prevent the optimiser removing the empty loop. The problem with this delay is that you will need to calibrate it to the clock speed of your target, and its timing is likely to differ depending on which compiler and compiler options you use. It is generally a poor solution.
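One crude way to make that calibration explicit (a sketch of my own; CORE_CLOCK_HZ and CYCLES_PER_ITER are assumptions you must measure for your own target, compiler and options) is to derive the inner loop bound instead of hard-coding 4000:

#define CORE_CLOCK_HZ   16000000UL  /* assumption: 16 MHz core */
#define CYCLES_PER_ITER 4UL         /* assumption: measure with a scope or cycle counter */
#define LOOPS_PER_MS    (CORE_CLOCK_HZ / 1000UL / CYCLES_PER_ITER)

and then use for( volatile unsigned long d = 0; d < LOOPS_PER_MS; d++ ) {} as the inner loop of delayms().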
In practice using a "busy-loop" delay for this is ill-advised and crude and it would be better to use the Cortex-M SYSTICK:
volatile uint32_t tick = 0 ;

void SysTick_Handler(void)
{
    tick++ ;
}
... removing the tick variable and the tick++ from the original code. Then you don't need a delay in the loop above, because all the timing is pegged to the value of tick. However, should you want a delay for other reasons:
void delayms( uint32_t millisec )
{
    uint32_t start = tick ;
    while( tick - start < millisec ) ;
}
Then you would initialise the SYSTICK at start-up thus:
int main (void)
{
    SysTick_Config( SystemCoreClock / 1000 ) ;
    ...
}
This assumes that you are using the CMSIS, but your code suggests that you are not (nor even using a vendor-supplied register header). In that case you will need to get down and dirty with the SYSTICK and NVIC registers, if you (or your tutor) insist on that. The source for SysTick_Config() is as follows:
__STATIC_INLINE uint32_t SysTick_Config(uint32_t ticks)
{
    if ((ticks - 1UL) > SysTick_LOAD_RELOAD_Msk)
    {
        return (1UL);  /* Reload value impossible */
    }

    SysTick->LOAD = (uint32_t)(ticks - 1UL);  /* set reload register */
    NVIC_SetPriority (SysTick_IRQn, (1UL << __NVIC_PRIO_BITS) - 1UL);  /* set Priority for Systick Interrupt */
    SysTick->VAL = 0UL;  /* Load the SysTick Counter Value */
    SysTick->CTRL = SysTick_CTRL_CLKSOURCE_Msk |
                    SysTick_CTRL_TICKINT_Msk |
                    SysTick_CTRL_ENABLE_Msk;  /* Enable SysTick IRQ and SysTick Timer */
    return (0UL);  /* Function successful */
}
I want to configure timer1 of a PIC24F16KA102 as a counter. The clock source must be the internal 8 MHz clock. I configured the T1CON register and set the TON bit high to start the timer. Timer1 is set to overflow every 100 us, and in a while loop I increment the variable count. I don't understand why timer1 doesn't work; I observed that count does not increase. Why?
#include <xc.h>
#include "config.h"

int count = 0;

void main(void) {
    TRISB = 0;
    T1CON = 0;     //TMR1 stopped, internal clock source, prescaler 1:1
    _TON = 1;
    TMR1 = 65135;  //overflow of TMR1 every 100 us (400 counts)
    while (1) {
        if (TMR1 == 65535) {
            count++;       // increase every 100 us
            TMR1 = 65135;
        }
    }
}
Try setting the Timer1 period register (PR1) and using an interrupt, rather than trying to catch and reload TMR1 on its final count. You're trying to catch TMR1 at EXACTLY 65535, and that will almost never work: once TMR1 hits 65535 it simply overflows and begins counting from 0 again.
EDIT: Of course, this assumes it counts at all. I don't know what the behavior of a timer is when you leave the period register at 0. It may simply count to its maximum of 65535 and then reset to 0, or it may never count at all and continuously load PRx into TMRx, since they match at 0.
PRx is meant to define the period you want for a given timer, in this case 100 us, so PR1 = 400. Once TMR1 = PR1, the timer resets itself automatically and raises an interrupt to alert you that the timer has elapsed.
volatile unsigned int count = 0;  //Vars that change in an ISR should be volatile

PR1 = 400;              //Set period for Timer1 (100 us = 400 counts)
T1CON = 0x8000;         //Enable Timer1
IEC0bits.T1IE = 1;      //Enable Timer1 interrupt
IPC0bits.T1IP = 0b011;  //Set Timer1 interrupt priority
Pair this with an ISR function to increment count whenever the timer elapses:
void __attribute__ ((interrupt,no_auto_psv)) _T1Interrupt (void)
{
    count++;
    IFS0bits.T1IF = 0;  //Make sure to clear the interrupt flag
}
You could also try something like this without any interrupts:
void main(void){
    unsigned int count = 0;
    TMR1 = 0;
    T1CON = 0x8000;  //TON = 1
    while(1){
        if (TMR1 >= 400){
            count++;
            TMR1 = 0;
        }
    }
}
However, I would recommend using the PR register and an ISR; that is what they are meant for.
EDIT: I would also recommend reading the timers section of the PIC24F Family Reference Manual.
I am using an STM32F2 controller and I am interfacing with an ST7036 LCD display via an 8-bit parallel interface.
The datasheet says there should be a 20 nanosecond delay between address hold and setup time.
How do I generate a 20 nanosecond delay in C?
Use stopwatch_delay(4) below to accomplish approximately 24ns of delay. It uses the STM32's DWT_CYCCNT register, which is specifically designed to count actual clock ticks, located at address 0xE0001004.
To verify the delay accuracy (see main), you can call STOPWATCH_START, run stopwatch_delay(ticks), then call STOPWATCH_STOP and verify with CalcNanosecondsFromStopwatch(m_nStart, m_nStop). Adjust ticks as needed.
#include <stdint.h>
#include <stdio.h>

uint32_t m_nStart;  //DEBUG Stopwatch start cycle counter value
uint32_t m_nStop;   //DEBUG Stopwatch stop cycle counter value

#define DEMCR_TRCENA    0x01000000

/* Core Debug registers */
#define DEMCR           (*((volatile uint32_t *)0xE000EDFC))
#define DWT_CTRL        (*(volatile uint32_t *)0xe0001000)
#define CYCCNTENA       (1<<0)
#define DWT_CYCCNT      ((volatile uint32_t *)0xE0001004)
#define CPU_CYCLES      *DWT_CYCCNT
#define CLK_SPEED       168000000  // EXAMPLE for CortexM4, EDIT as needed

#define STOPWATCH_START { m_nStart = *((volatile unsigned int *)0xE0001004); }
#define STOPWATCH_STOP  { m_nStop  = *((volatile unsigned int *)0xE0001004); }
static inline void stopwatch_reset(void)
{
    /* Enable DWT */
    DEMCR |= DEMCR_TRCENA;
    *DWT_CYCCNT = 0;
    /* Enable CPU cycle counter */
    DWT_CTRL |= CYCCNTENA;
}

static inline uint32_t stopwatch_getticks(void)
{
    return CPU_CYCLES;
}

static inline void stopwatch_delay(uint32_t ticks)
{
    uint32_t end_ticks = ticks + stopwatch_getticks();
    while(1)
    {
        if (stopwatch_getticks() >= end_ticks)
            break;
    }
}
uint32_t CalcNanosecondsFromStopwatch(uint32_t nStart, uint32_t nStop)
{
    uint32_t nDiffTicks;
    uint32_t nSystemCoreTicksPerMicrosec;

    // Convert (clk speed per sec) to (clk speed per microsec)
    nSystemCoreTicksPerMicrosec = CLK_SPEED / 1000000;

    // Elapsed ticks
    nDiffTicks = nStop - nStart;

    // Elapsed nanosec = 1000 * (ticks-elapsed / clock-ticks in a microsec)
    return 1000 * nDiffTicks / nSystemCoreTicksPerMicrosec;
}
void main(void)
{
    int timeDiff = 0;
    stopwatch_reset();

    // =============================================
    // Example: use a delay, and measure how long it took
    STOPWATCH_START;
    stopwatch_delay(168000); // 168k ticks is 1ms for 168MHz core
    STOPWATCH_STOP;
    timeDiff = CalcNanosecondsFromStopwatch(m_nStart, m_nStop);
    printf("My delay measured to be %d nanoseconds\n", timeDiff);

    // =============================================
    // Example: measure function duration in nanosec
    STOPWATCH_START;
    // run_my_function() => do something here
    STOPWATCH_STOP;
    timeDiff = CalcNanosecondsFromStopwatch(m_nStart, m_nStop);
    printf("My function took %d nanoseconds\n", timeDiff);
}
The first specification I found for the STM32F2 assumes a clock frequency of 120 MHz, i.e. about 8 ns per clock cycle. You would need about three single-cycle instructions between successive write or read/write operations. In C, a++; will probably do (if a is located on the stack).
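A slightly more explicit variant (my own sketch, assuming GCC/Clang inline-assembly syntax): NOPs cannot be optimised away, and three of them at 120 MHz come to roughly 25 ns.

/* Hypothetical helper: ~3 single-cycle NOPs at 120 MHz is about 25 ns. */
#define SHORT_DELAY_20NS()  __asm volatile ("nop\n\tnop\n\tnop")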
You should look into the FSMC peripheral available in your chip. While the configuration might be complicated, especially if you're not dropping in a memory part that it was designed for, you might find that your parallel interfaced device maps pretty well to one of the memory interface modes.
These sorts of external memory controllers must have a bunch of configurable timing options to support the range of different memory chips out there so you'll be able to guarantee the timings required by your datasheet.
The nice benefit of being able to do this is your LCD will then seem like any old memory mapped peripheral, abstracting away the lower level interfacing details.
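For illustration, once the FSMC is configured (a hypothetical setup of my own; 0x60000000 is the FSMC Bank 1 base on the STM32F2, but the command/data offset depends entirely on how an address line is wired to the ST7036's RS pin, so treat these addresses as placeholders):

#define LCD_CMD   (*(volatile uint8_t *)0x60000000u)  /* RS = 0: command */
#define LCD_DATA  (*(volatile uint8_t *)0x60000001u)  /* RS = 1: data (assumes RS on A0) */

static void lcd_write(uint8_t cmd, uint8_t data)
{
    LCD_CMD  = cmd;   /* the FSMC inserts the configured setup/hold times */
    LCD_DATA = data;
}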
I am using MSP430F5418 with IAR EW 5.10.
I want to use Timer B in up mode.
I want to use two interrupts: TimerB0 (1 ms) and TimerB1 (1 second).
My configuration is:
TBCTL = MC__UP + TBSSEL__ACLK + TBCLR;
TB0CCTL0 = CCIE;
TB0CCR0 = 32;
TB0CCTL1 = CCIE;
TB0CCR1 = 32768;
In the ISRs I just toggle two pins, but only the pin for TB0CCR0 is toggling.
My pin configurations are correct.
Can anyone tell me why?
I suppose your problem is the timer period.
MC__UP Up mode: the timer counts up to TBxCL0.
So your timer count TBxR resets to 0 when it reaches TBxCL0, which takes its value from TB0CCR0. It can therefore never reach the value 32768, so the TB0CCR1 compare never fires.
You could swap TB0CCR0 and TB0CCR1, so that your period is 1 second. Then, to get your 1 ms interrupt, you increment TB0CCR1 each time:
INTERRUPT ISR_1MS()
{
    TB0CCR1 = (TB0CCR1 + 32) & 0x7FFF;
}
But normally you don't need a second timer for a one-second interval; you can simply count your 1 ms interrupt 1000 times:
INTERRUPT ISR_1MS()
{
    ms_count++;
    if (ms_count >= 1000)
    {
        ms_count = 0;
        // Do your once-per-second work
    }
}
And if you need more, different intervals, you can switch to yet another model: a system time that everything is checked against.
volatile unsigned int absolute_time = 0;

INTERRUPT ISR_1MS()
{
    absolute_time++;
}

unsigned int systime_now(void)
{
    unsigned int result;
    di();  /* disable interrupts for an atomic read */
    result = absolute_time;
    ei();  /* re-enable interrupts */
    return result;
}

uint8_t systime_reached(unsigned int timeAt)
{
    uint8_t result;
    result = (systime_now() - timeAt) < 0x1000;
    return result;
}
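Hypothetical usage (my own sketch, not from the answer): poll systime_reached() from the main loop to run work at a fixed interval without blocking.

static unsigned int next_run;

void poll_tasks(void)
{
    if (systime_reached(next_run))
    {
        next_run += 250;  /* schedule the next 250 ms slot */
        /* ... do the periodic work ... */
    }
}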