I am using an STM32F2 microcontroller and I am interfacing with an ST7036 LCD display via an 8-bit parallel interface.
The datasheet says there should be a 20 nanosecond delay between address hold and setup time.
How do I generate a 20 nanosecond delay in C?
Use stopwatch_delay(4) below to accomplish approximately 24 ns of delay. It uses the STM32's DWT_CYCCNT register (at address 0xE0001004), which is specifically designed to count actual CPU clock cycles.
To verify the delay accuracy (see main), you can call STOPWATCH_START, run stopwatch_delay(ticks), then call STOPWATCH_STOP and verify with CalcNanosecondsFromStopwatch(m_nStart, m_nStop). Adjust ticks as needed.
uint32_t m_nStart;               //DEBUG Stopwatch start cycle counter value
uint32_t m_nStop;                //DEBUG Stopwatch stop cycle counter value

#define DEMCR_TRCENA    0x01000000

/* Core Debug registers */
#define DEMCR           (*((volatile uint32_t *)0xE000EDFC))
#define DWT_CTRL        (*(volatile uint32_t *)0xE0001000)
#define CYCCNTENA       (1 << 0)
#define DWT_CYCCNT      ((volatile uint32_t *)0xE0001004)
#define CPU_CYCLES      *DWT_CYCCNT
#define CLK_SPEED       168000000 // EXAMPLE for Cortex-M4, EDIT as needed

#define STOPWATCH_START { m_nStart = *((volatile unsigned int *)0xE0001004); }
#define STOPWATCH_STOP  { m_nStop  = *((volatile unsigned int *)0xE0001004); }

static inline void stopwatch_reset(void)
{
    /* Enable DWT */
    DEMCR |= DEMCR_TRCENA;
    *DWT_CYCCNT = 0;
    /* Enable CPU cycle counter */
    DWT_CTRL |= CYCCNTENA;
}

static inline uint32_t stopwatch_getticks(void)
{
    return CPU_CYCLES;
}

static inline void stopwatch_delay(uint32_t ticks)
{
    /* Subtracting the start value keeps the comparison correct even
       across CYCCNT wraparound (unsigned arithmetic). */
    uint32_t start = stopwatch_getticks();
    while ((stopwatch_getticks() - start) < ticks)
        ;
}

uint32_t CalcNanosecondsFromStopwatch(uint32_t nStart, uint32_t nStop)
{
    uint32_t nDiffTicks;
    uint32_t nSystemCoreTicksPerMicrosec;

    // Convert (clk speed per sec) to (clk speed per microsec)
    nSystemCoreTicksPerMicrosec = CLK_SPEED / 1000000;

    // Elapsed ticks
    nDiffTicks = nStop - nStart;

    // Elapsed nanosec = 1000 * (ticks-elapsed / clock-ticks in a microsec)
    return 1000 * nDiffTicks / nSystemCoreTicksPerMicrosec;
}
int main(void)
{
    int timeDiff = 0;
    stopwatch_reset();

    // =============================================
    // Example: use a delay, and measure how long it took
    STOPWATCH_START;
    stopwatch_delay(168000); // 168k ticks is 1ms for 168MHz core
    STOPWATCH_STOP;
    timeDiff = CalcNanosecondsFromStopwatch(m_nStart, m_nStop);
    printf("My delay measured to be %d nanoseconds\n", timeDiff);

    // =============================================
    // Example: measure function duration in nanosec
    STOPWATCH_START;
    // run_my_function() => do something here
    STOPWATCH_STOP;
    timeDiff = CalcNanosecondsFromStopwatch(m_nStart, m_nStop);
    printf("My function took %d nanoseconds\n", timeDiff);
}
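For the ST7036 timing in the original question, you would then drop a short delay between the control-line transitions, along these lines (the LCD_* pin macros below are placeholders for whatever GPIO code you already have):

/* Hypothetical use for the 20 ns address setup; LCD_RS_HIGH/LCD_E_HIGH
   stand in for your own GPIO operations. */
LCD_RS_HIGH();       /* change the address (RS) line */
stopwatch_delay(4);  /* 4 ticks ~ 24 ns at 168 MHz, covers the 20 ns spec */
LCD_E_HIGH();        /* then raise the enable strobe */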
The first STM32F2 specification I found assumes a clock frequency of 120 MHz. That's about 8 ns per clock cycle, so you would need about three single-cycle instructions between successive write or read/write operations. In C, a++; will probably do (if a is located on the stack).
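A more deterministic variant of the same idea is to insert the NOPs explicitly via the CMSIS __NOP() intrinsic (the function name below is made up here; three NOPs is an estimate for a 120 MHz clock, adjust for yours):

/* Roughly 3 cycles (~25 ns at 120 MHz). The real duration depends on the
   pipeline, flash wait states and compiler, so verify on hardware. */
static inline void delay_20ns(void)
{
    __NOP();
    __NOP();
    __NOP();
}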
You should look into the FSMC peripheral available in your chip. While the configuration might be complicated, especially if you're not dropping in a memory part it was designed for, you may find that your parallel-interfaced device maps well onto one of the memory interface modes.
External memory controllers of this sort have a range of configurable timing options to support the many different memory chips out there, so you'll be able to guarantee the timings required by your datasheet.
The nice benefit of doing this is that your LCD will then behave like any old memory-mapped peripheral, abstracting away the lower-level interfacing details.
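As a sketch of what that ends up looking like (assuming FSMC Bank 1, which starts at 0x60000000 on the STM32F2, with one address line wired to the LCD's RS pin; your wiring and timing configuration will differ):

#include <stdint.h>

/* Hypothetical memory-mapped ST7036 access via FSMC; A0-to-RS wiring is an
   assumption made for illustration. */
#define LCD_CMD  (*(volatile uint8_t *)0x60000000) /* RS = 0: command */
#define LCD_DATA (*(volatile uint8_t *)0x60000001) /* RS = 1: data    */

static void lcd_putc(uint8_t c)
{
    LCD_DATA = c; /* the FSMC generates the strobes with the configured timing */
}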
I have to write C code so that the RGB LED on the board breathes. My code is blinking, not breathing. My teacher said that varying brightness is achieved by varying the duty cycle, so in that case I can't use PWM. Please help me understand this code.
#include <stdint.h>
#include <stdlib.h>

#define SYSCTL_RCGC2_R     (*((volatile unsigned long *)0x400FE108))
#define SYSCTL_RCGC2_GPIOF 0x00000020 //port F clock gating control
#define GPIO_PORTF_DATA_R  (*((volatile unsigned long *)0x400253FC))
#define GPIO_PORTF_DIR_R   (*((volatile unsigned long *)0x40025400))
#define GPIO_PORTF_DEN_R   (*((volatile unsigned long *)0x4002551C))

void delay(double sec);
int cond;

int main(void){
    SYSCTL_RCGC2_R = SYSCTL_RCGC2_GPIOF;
    GPIO_PORTF_DIR_R = 0x0E;
    GPIO_PORTF_DEN_R = 0x0E;
    cond = 0;
    while(1){
        GPIO_PORTF_DATA_R = 0x02;
        delay(12.5);
        GPIO_PORTF_DATA_R = 0x00;
        delay(0);
        GPIO_PORTF_DATA_R = 0x02;
        delay(2.5);
        GPIO_PORTF_DATA_R = 0x00;
        delay(10);
        GPIO_PORTF_DATA_R = 0x02;
        delay(5);
        GPIO_PORTF_DATA_R = 0x00;
        delay(7.5);
        GPIO_PORTF_DATA_R = 0x02;
        delay(7.5);
        GPIO_PORTF_DATA_R = 0x00;
        delay(5);
        GPIO_PORTF_DATA_R = 0x02;
        delay(12.5);
        GPIO_PORTF_DATA_R = 0x00;
        delay(0);
        GPIO_PORTF_DATA_R = 0x02;
        delay(7.5);
        GPIO_PORTF_DATA_R = 0x00;
        delay(5);
        GPIO_PORTF_DATA_R = 0x02;
        delay(5);
        GPIO_PORTF_DATA_R = 0x00;
        delay(7.5);
    }
    return 0;
}

void delay(double sec){
    int c = 1, d = 1;
    for(c = 1; c <= sec; c++)
        for(d = 1; d <= 4000000; d++){}
}
There are two ways you can drive LEDs: either with constant current through some general-purpose I/O, or with a repeated duty cycle from PWM. PWM means pulse-width modulation: pulses too fast for the human eye to notice, anywhere from around 100 Hz up to 10 kHz or so.
The main advantage of PWM is that you can easily control current, which in the case of RGB means the color intensity of the 3 individual LEDs. Most smaller LEDs are rated at 20 mA, so that's usually the maximum current you are aiming for, corresponding to 100% duty cycle.
The correct way to achieve this is to use PWM.
But what your current code does is "bit bang" simulate PWM by toggling GPIO pins. That's very crude and inefficient. Normally microcontrollers have a timer and/or PWM hardware peripheral built in, where you just provide a duty cycle and the hardware takes care of everything from there. In this case you would set up 3 PWM hardware channels, which should ideally be clocked at the same time.
LEDs are diodes with different forward voltages depending on chemistry, so you very likely have a different forward voltage for each of the 3 colors. You have to check the datasheet of the RGB LED and look for luminous intensity, expressed in candela; in this case very likely millicandela, mcd. Let's assume that your green LED has 300 mcd but the red and blue have 100 mcd. They are somewhat linear, or you can probably get away with assuming they are. So a crude equation in this case is to give the green LED a third of the current of the others, in order to get an even mix of colors. Once you have compensated for that, you can give your 3 PWM channels an RGB code and hopefully get the corresponding color.
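As an illustration of that compensation (pwm_set_duty() and the CH_* channel names are hypothetical stand-ins for whatever your PWM driver provides; 0..255 maps to 0..100% duty):

#include <stdint.h>

/* Sketch only: scales the green channel down by the 3:1 luminous
   intensity ratio assumed above. */
typedef struct { uint8_t r, g, b; } rgb8_t;

static void set_rgb(rgb8_t c)
{
    pwm_set_duty(CH_RED,   c.r);
    pwm_set_duty(CH_GREEN, c.g / 3); /* green is ~3x as bright per mA here */
    pwm_set_duty(CH_BLUE,  c.b);
}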
As a side note, the delay function in your code is completely broken in many ways. The loop iterator for such busy-delays must be volatile or any half-decent compiler will simply remove the delay when optimizations are enabled. And there is no reason to use floating point either.
If you are doing it with your delay function, and your delay resolution is in seconds as the code suggests, of course it will "blink": the PWM frequency needs to be faster than human visual perception, say about 50 Hz. Then, to get a smooth variation, you might divide each period up into say 20 levels, requiring a millisecond delay.
In any case your delay() function defeats itself by taking a floating point number of seconds but comparing it with an integer loop counter - it will only ever work in whole seconds.
So given a function delayms( unsigned millisec ) (which I discuss later) then:
#define BREATHE_UPDATE_MS 100
#define BREATHE_MINIMUM   0
#define PWM_PERIOD_MS     20

unsigned tick = 0 ;
unsigned duty_cycle = 0 ;
unsigned cycle_start_tick = 0 ;
unsigned breath_update_tick = 0 ;
int breathe_dir = 1 ;

for(;;)
{
    // If in PWM "mark"...
    if( tick - cycle_start_tick < duty_cycle )
    {
        // LED on
        GPIO_PORTF_DATA_R |= 0x02 ;
    }
    // else PWM "space"
    else
    {
        // LED off
        GPIO_PORTF_DATA_R &= ~0x02 ;
    }

    // Update tick counter
    tick++ ;

    // If PWM cycle complete, restart
    if( tick - cycle_start_tick >= PWM_PERIOD_MS )
    {
        cycle_start_tick = tick ;
    }

    // If time to update duty-cycle...
    if( tick - breath_update_tick > BREATHE_UPDATE_MS )
    {
        breath_update_tick = tick ;
        duty_cycle += breathe_dir ;
        if( duty_cycle >= PWM_PERIOD_MS )
        {
            // Breathe in
            breathe_dir = -1 ;
        }
        else if( duty_cycle == BREATHE_MINIMUM )
        {
            // Breathe out
            breathe_dir = 1 ;
        }
    }

    delayms( 1 ) ;
}
Change BREATHE_UPDATE_MS to breathe faster, change BREATHE_MINIMUM to "shallow breathe" - i.e. not dim to off.
If your delay function truly results in a delay resolution in seconds then approximately and rather crudely:
void delayms( unsigned millisec )
{
    for( unsigned c = 0; c < millisec; c++ )
    {
        for( volatile int d = 0; d < 4000; d++ ) {}
    }
}
However, that suggests to me a rather low core clock rate, so you may need to adjust it. Note the use of volatile to prevent the optimiser removing the empty loop. The problem with this delay is that you need to calibrate it to the clock speed of your target, and its timing will in any case differ depending on which compiler and compiler options you use. It is generally a poor solution.
In practice using a "busy-loop" delay for this is ill-advised and crude and it would be better to use the Cortex-M SYSTICK:
volatile uint32_t tick = 0 ;

void SysTick_Handler(void)
{
    tick++ ;
}
... removing the tick and tick++ from the original code. Then you don't need a delay in the loop above, because all the timing is pegged to the value of tick. However, should you want a delay for other reasons:
void delayms( uint32_t millisec )
{
    uint32_t start = tick ;
    while( tick - start < millisec ) ;
}
Then you would initialise the SYSTICK at start-up thus:
int main (void)
{
    SysTick_Config(SystemCoreClock / 1000) ;
    ...
}
This assumes that you are using CMSIS, but your code suggests that you are not doing that (or even using a vendor-supplied register header). In that case you will need to get down and dirty with the SYSTICK and NVIC registers, if you (or your tutor) insist on that. The source for SysTick_Config() is as follows:
__STATIC_INLINE uint32_t SysTick_Config(uint32_t ticks)
{
    if ((ticks - 1UL) > SysTick_LOAD_RELOAD_Msk)
    {
        return (1UL);                        /* Reload value impossible */
    }

    SysTick->LOAD = (uint32_t)(ticks - 1UL); /* set reload register */
    NVIC_SetPriority (SysTick_IRQn, (1UL << __NVIC_PRIO_BITS) - 1UL); /* set Priority for Systick Interrupt */
    SysTick->VAL = 0UL;                      /* Load the SysTick Counter Value */
    SysTick->CTRL = SysTick_CTRL_CLKSOURCE_Msk |
                    SysTick_CTRL_TICKINT_Msk |
                    SysTick_CTRL_ENABLE_Msk; /* Enable SysTick IRQ and SysTick Timer */
    return (0UL);                            /* Function successful */
}
There is an ATtiny85, with an internal clock source at 8 MHz.
I am trying to implement a microsecond timer based on the hardware timer Timer0.
My logic:
Since the clock frequency is 8 MHz and the prescaler is off, one clock cycle takes about 0.1 us (1/8000000).
By default the timer overflows and interrupts after counting 0 ... 255, which takes much longer than 0.1 us and is inconvenient for deriving 1 us.
To solve this, I thought of changing the timer's initial value from 0 to 245. Then it takes about 10 clock cycles to reach the interrupt, which is about 1 us.
I load this code, but the ATtiny's LED visibly toggles only about every 5 seconds, although the code specifies 1 second (1000000 us).
Code:
#include <avr/io.h>

#undef F_CPU
#define F_CPU 8000000UL

#include <avr/interrupt.h>

// Timer0 init
void timer0_Init() {
    cli();
    //SREG &= ~(1 << 7);

    // Enable interrupt for timer0 overflow
    TIMSK |= (1 << 1);

    // Enable timer0, no prescaler - CS02..CS00 = 001
    TCCR0B = 0;
    TCCR0B |= (1 << 0);

    // Preload timer0 counter
    TCNT0 = 245;

    sei();
    //SREG |= (1 << 7);
}
// timer0 overflow interrupt
// 1us interval logic:
// MCU frequency = 8MHz (8000000Hz), no prescaler
// 1 tick = 1/8000000 = 100ns = 0.1us, counter increments after each tick (0.1us)
// 1us timer = 10 ticks => 245..255
static unsigned long microsecondsTimer;

ISR(TIMER0_OVF_vect) {
    microsecondsTimer++;
    TCNT0 = 245;
}
// Millis
/*unsigned long timerMillis() {
    return microsecondsTimer / 1000;
}*/

void ledBlink() {
    static unsigned long blinkTimer;
    static int ledState;

    // 10000us = 0.01s
    // 1000000us = 1s
    if(microsecondsTimer - blinkTimer >= 1000000) {
        if(!ledState) {
            PORTB |= (1 << 3);  // HIGH
        } else {
            PORTB &= ~(1 << 3); // LOW
        }
        ledState = !ledState;
        blinkTimer = microsecondsTimer;
    }
}
int main(void)
{
    // Set LED pin to OUTPUT mode
    DDRB |= (1 << 3);

    timer0_Init();

    while (1)
    {
        ledBlink();
    }
}
Attiny85 Datasheet
What could be the mistake? I have not yet learned how to work with fuses, so I initially set the fuses for 8 MHz through the Arduino IDE, and after that loaded the main code (without changing the fuses) through AVRDUDE and Atmel Studio.
And another question: should I check for the maximum value when updating my microsecond counter? I know that in Arduino the micros and millis counters wrap when they reach their maximum value. For example, if I do not clear the microsecondsTimer variable and it exceeds the size of an unsigned long, will it crash?
As pointed out by @ReAI, your ISR does not have enough time to run. Your ISR takes more than 1 microsecond to execute and return, so you are always missing interrupts.
There are other problems here too. For example, your microsecondsTimer variable is accessed in both the ISR and the foreground, and it is a long. long variables are 4 bytes wide and so are not updated atomically: the foreground could start reading the value of microsecondsTimer, the ISR could update some of the not-yet-read bytes in the middle of the read, and the foreground would end up with a mangled value. Also, you should avoid messing with the count register, since updating it can miss ticks unless you are very careful.
So how could you implement a working uSec timer? Firstly, you'd like the ISR to run as infrequently as possible, so pick the largest prescaler that still gives you the resolution you want, and only interrupt on overflow. In the case of the ATtiny85 Timer0, you can pick the /8 prescaler, which gets you one timer tick per microsecond with an 8 MHz system clock. Now your ISR only runs once every 256 microseconds, and when it runs it need only increment a "microseconds * 256" counter on each call.
Now to read the current microseconds in the foreground, you can get the number of microseconds mod 256 by directly reading the count register, then read the "microseconds * 256" counter, multiply it by 256, add the count register value, and you'll have the full count. Note that you will need to take special precautions to make sure your reads are atomic. You can do this either by carefully turning off interrupts, quickly reading the values, and turning the interrupts back on (saving all the maths for when interrupts are back on), or by looping on the read values until you get two full reads in a row that are the same (which means they did not update while you were reading them).
Note that you can check out the source code of the Arduino timer ISR for some insights, but note that theirs is more complicated because it handles a wide range of tick speeds, whereas you can keep things simple by specifically picking a 1 us period.
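A minimal sketch of that scheme for the ATtiny85 at 8 MHz (the pending-overflow check mirrors what the Arduino micros() source does; names like micros() are made up here, not a given API):

#include <avr/io.h>
#include <avr/interrupt.h>
#include <util/atomic.h>

static volatile uint32_t timer0_overflows; /* each overflow = 256 us */

ISR(TIMER0_OVF_vect)
{
    timer0_overflows++;
}

static void timer0_init(void)
{
    TCCR0A = 0;            /* normal mode, count 0..255, no reloading */
    TCCR0B = (1 << CS01);  /* /8 prescaler: 1 tick = 1 us at 8 MHz */
    TIMSK |= (1 << TOIE0); /* interrupt on overflow only */
    sei();
}

static uint32_t micros(void)
{
    uint32_t ovf;
    uint8_t cnt;
    ATOMIC_BLOCK(ATOMIC_RESTORESTATE)
    {
        ovf = timer0_overflows;
        cnt = TCNT0;
        /* an overflow may be pending but not yet serviced */
        if ((TIFR & (1 << TOV0)) && cnt < 255)
            ovf++;
    }
    return (ovf << 8) | cnt; /* overflows * 256 + current ticks */
}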
Why didn't you use a prescaler?
Your code needs a really big delay interval (1 second is a huge time relative to the CPU speed), so it is not a wise choice to interrupt the microcontroller every 1 us. It would be much better to slow down the timer clock and interrupt, for example, every 1 ms.
Calculation
The microcontroller clock speed is 8 MHz, so if we choose a prescaler of 64 the timer clock will be 8 MHz / 64 = 125 kHz, which means each tick (timer clock) takes 1/125 kHz = 8 us.
So if we would like an interrupt every 1 ms, we need 125 ticks.
Modified code
Try this code; it should be clearer to understand:
#undef F_CPU
#define F_CPU 8000000UL

#include <avr/io.h>
#include <avr/interrupt.h>

volatile int millSec;

void timer0_Init();
void toggleLed();

int main(void)
{
    // Set LED pin to OUTPUT mode
    DDRB |= (1 << 3);

    timer0_Init();
    millSec = 0; // init the millisecond counter
    sei();       // set Global Interrupt Enable

    while (1)
    {
        if(millSec >= 1000){
            // this block of code will run every 1 sec
            millSec = 0; // start counting the new second
            toggleLed(); // just toggle the LED state
        }
        // Do other background jobs
    }
}

//##### Helper functions ###########
void timer0_Init() {
    // Preload timer0 counter so it overflows after 125 ticks
    TCNT0 = 131; // 256 - 125 = 131

    // Enable interrupt for timer0 overflow
    TIMSK = (1 << 1);

    // set prescaler to 64 and start the timer
    TCCR0B = (1 << CS00) | (1 << CS01);
}

void toggleLed(){
    PORTB ^= (1 << 3); // toggle LED output
}

ISR(TIMER0_OVF_vect) {
    // this interrupt will happen every 1 ms
    millSec++;

    // Reload timer0 counter
    TCNT0 = 131;
}
Sorry, I am late, but I have some suggestions. If you run Timer0 with prescaler 1, the timer counts up every 125 ns, so it is not possible to reach 1 us without a small divergence. But if you use prescaler 8 you reach exactly 1 us. I don't actually have your hardware, but give this a try:
#ifndef F_CPU
#define F_CPU 8000000UL
#else
#error "F_CPU already defined"
#endif

#include <avr/io.h>
#include <avr/interrupt.h>

volatile unsigned int microsecondsTimer;

// Interrupt for Timer0 Compare Match A
ISR(TIMER0_COMPA_vect)
{
    microsecondsTimer++;
}

// Timer0 init
void timer0_Init()
{
    // Timer0:
    // - Mode: CTC
    // - Prescaler: /8
    TCCR0A = (1 << WGM01);
    TCCR0B = (1 << CS01);
    OCR0A = 1;
    TIMSK = (1 << OCIE0A);
    sei();
}
void ledBlink() {
    static unsigned int blinkTimer;

    if(microsecondsTimer >= 1000)
    {
        microsecondsTimer = 0;
        blinkTimer++;
    }

    if(blinkTimer >= 1000)
    {
        PORTB ^= (1 << PINB3);
        blinkTimer = 0;
    }
}

int main(void)
{
    // Set LED pin to OUTPUT mode
    DDRB |= (1 << PINB3);

    timer0_Init();

    while (1)
    {
        ledBlink();
    }
}
If you are using the internal clock of the ATtiny, it may be divided by 8. To disable the clock division you have to write the prescaler-change enable bit and the new prescaler value within 4 clock cycles (an atomic operation):

int main(void)
{
    // Reset clock prescaling
    CLKPR = (1 << CLKPCE); // enable prescaler change
    CLKPR = 0x00;          // divide by 1
    // ...

Please try this solution and give feedback on whether it works. Maybe you can verify it with an oscilloscope.
Notice that operations on an unsigned long need more than 1 clock cycle on an 8-bit microcontroller; it may be better to use unsigned int or unsigned char. The main loop also should not contain lots of instructions, otherwise error correction for the microsecond timer has to be implemented.
Simply, I want to implement a delay function using STM32 timers, like the "Normal mode" one in AVR microcontrollers. Can anybody help? I just can't find that in the STM32 datasheet! It only supports PWM, input capture, output compare and one-pulse mode output!
N.B: I forgot to mention that I'm using the STM32F401 microcontroller.
You have a special timer for exactly this purpose: SysTick. Set it to overflow every 1 ms, and in its handler:
static volatile uint32_t counter;

void SysTick_Handler(void)
{
    counter++;
}

inline uint32_t __attribute__((always_inline)) GetCounter(void)
{
    return counter;
}

void Delay(uint32_t ms)
{
    uint32_t tickstart = GetCounter();
    while((GetCounter() - tickstart) < ms);
}
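For completeness, the 1 ms tick could be set up at start-up with CMSIS like this (a sketch; it assumes SystemCoreClock reflects your actual clock configuration):

int main(void)
{
    SysTick_Config(SystemCoreClock / 1000); /* SysTick interrupt every 1 ms */

    Delay(100); /* wait 100 ms */
    /* ... */
}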
If you want to be able to delay by periods shorter than milliseconds, you can set up a timer to auto-reload and use the internal clock. The internal clock frequency fed to each timer is shown in the CubeMX clock tab. On my NUCLEO-F446RE most timers get 84 MHz or 42 MHz; Timer 3 gets 42 MHz by default.
So if I set the prescaler to divide by 42 (a register value of 41, since the counter clock is the timer clock divided by PSC + 1) and the count period to maximum (0xFFFF if 16-bit), I will have a cycling counter that increments every microsecond. I can then, using a property of two's complement maths, simply subtract the old time from the new to get the period.
void wait_us (uint16_t delay) {
    uint16_t t1 = htim3.Instance->CNT;
    /* the cast keeps the subtraction correct across counter wraparound */
    while ((uint16_t)( htim3.Instance->CNT - t1 ) < delay) {
        asm ("\t nop");
    }
}
This is an old trick from PIC18 C coding
I'm trying to write code in C (using Keil µVision 5, device: AT89C51AC3) that lets me enter 2 integer numbers, add them, and then print the result. The problem is that I'm limited to a code size of max. 2048 bytes.
My actual code needs 2099 bytes to run.
Any idea how I could do the same thing using less memory?
#include <stdio.h>
#include <REG52.H>

int main()
{
    int a, b;

    /*------------------------------------------------
    Setup the serial port for 1200 baud at 16MHz.
    ------------------------------------------------*/
    #ifndef MONITOR51
    SCON = 0x50;  /* SCON: mode 1, 8-bit UART, enable rcvr */
    TMOD |= 0x20; /* TMOD: timer 1, mode 2, 8-bit reload */
    TH1 = 221;    /* TH1: reload value for 1200 baud @ 16MHz */
    TR1 = 1;      /* TR1: timer 1 run */
    TI = 1;       /* TI: set TI to send first char of UART */
    #endif

    printf("Enter 2 numbers\n");
    scanf("%d%d", &a, &b);
    printf("%d\n", a + b);

    return 0;
}
You should hiccup when you see this simple code take up 2k+ of memory. That's a lot! The reason is that the stdio functions are terribly inefficient.
If you need to save memory and execution speed, you need to code these yourself. That is not so hard, since you probably just need to read integers, not everything else those functions can handle (floating-point numbers, strings etc.).
Also get rid of the int type; use the fixed-size types from stdint.h instead. (On an 8-bit MCU, you should also avoid 16-bit numbers unless they are necessary.)
In addition, you will have to code the I/O part as well. On a microcontroller this probably means writing your own UART driver.
You should be able to reduce the code size to a couple of hundred bytes, depending on how code-(in)efficient your microcontroller is.
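A minimal sketch of that approach (Keil C51 style, reusing the UART setup from the question; the uart_putc and uart_put_uint names are made up here):

#include <REG52.H>

/* Send one byte over the UART (TI is primed to 1 by the setup code). */
static void uart_putc(char c)
{
    while (!TI); /* wait for the previous byte to finish */
    TI = 0;
    SBUF = c;
}

/* Print an unsigned int in decimal, without stdio. */
static void uart_put_uint(unsigned int v)
{
    char buf[5];
    unsigned char i = 0;
    do {
        buf[i++] = '0' + (v % 10);
        v /= 10;
    } while (v);
    while (i)
        uart_putc(buf[--i]);
}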
If you just want to print the sum of int a and int b, you should be able to get rid of the

/*------------------------------------------------
Setup the serial port for 1200 baud at 16MHz.
------------------------------------------------*/
#ifndef MONITOR51
SCON = 0x50;  /* SCON: mode 1, 8-bit UART, enable rcvr */
TMOD |= 0x20; /* TMOD: timer 1, mode 2, 8-bit reload */
TH1 = 221;    /* TH1: reload value for 1200 baud @ 16MHz */
TR1 = 1;      /* TR1: timer 1 run */
TI = 1;       /* TI: set TI to send first char of UART */
#endif

code. Just keep the printf()... and scanf()... functions.
Hi, I have two devices A and B, both using a 64-bit timer where one tick equals 1 ns. I would like to synchronise the drift of the two timers, and I already know the delay between A and B.
Normally I would send the current time of A frequently to B; in B I would add the propagation delay and use this value as the compare value to sync B. In my case I can only send the lower 32 bits of the timer every 100 ms. I tried to simply replace the lower part of B with the part from A, but this works only as long as there is no overflow of the 32 bits. Is there a way to detect this overflow?
Overflow example (shortened timer):
timer B: 0x00FF
timer A: 0x010A
=> 0x0A sent to B
Replacing leads to timer B = 0x000A instead of 0x010A.
So I need to detect that an overflow has already occurred in A which hasn't yet occurred in B.
Underflow example (shortened timer):
timer B: 0x0205
timer A: 0x01F6
=> 0xF6 sent => timer B: 0x02F6 instead of 0x01F6
In this case timer B is faster than timer A.
Here's what I'd use to find the closest possible value of timera, given timerb, the 32 low bits of timera in message, and the expected propagation delay in delay:
#include <stdint.h>

int64_t expected_timera(const int64_t timerb,
                        const uint32_t message,
                        const uint32_t delay)
{
    const int64_t timer1 = (timerb / INT64_C(4294967296)) * INT64_C(4294967296)
                         + (int64_t)message + (int64_t)delay;
    const int64_t timer0 = timer1 - INT64_C(4294967296);
    const int64_t timer2 = timer1 + INT64_C(4294967296);

    const uint64_t delta0 = timerb - timer0;
    const uint64_t delta1 = (timer1 > timerb) ? timer1 - timerb : timerb - timer1;
    const uint64_t delta2 = timer2 - timerb;

    if (delta0 < delta1)
        return (delta0 < delta2) ? timer0 : timer2;
    else
        return (delta1 <= delta2) ? timer1 : timer2;
}
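For example (illustrative variable names only: timerb_now is B's current 64-bit time, received_low32 the 32 bits received from A, delay_ns the known propagation delay):

int64_t est = expected_timera(timerb_now, received_low32, delay_ns);
/* est is the full 64-bit estimate of timer A: the function picks whichever
   2^32 epoch (previous, same or next) puts it closest to timer B. */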
Personally, I would also calculate timerb from the actual timer B using a linear adjustment:
static volatile int64_t real_timer_b; /* Actual timer b */
static int64_t timerb_real;           /* real_timer_b at last sync time */
static int64_t timerb_offset;         /* timera at last sync time */
static int32_t timerb_scale;          /* Q30 scale factor (0,2) */

static inline int64_t timerb(void)
{
    return ((int64_t)(int32_t)(real_timer_b - timerb_real) * (int64_t)timerb_scale)
           / INT64_C(1073741824) + timerb_offset;
}
With timerb_real = 0, timerb_offset = 0, and timerb_scale = 1073741824, timerb() == real_timer_b.
When timera > timerb(), you need to increase timerb_scale. When timera < timerb(), you need to decrease timerb_scale. You do not want to calculate the exact value every 100 ms, because then jitter in the propagation delay would directly affect timerb(); you want to adjust it slowly, over several (dozens of) seconds, as in the sketch below. As a bonus, you can keep timerb() monotonic, without sudden jumps forwards or backwards.
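A minimal sketch of such a slow adjustment, run at each 100 ms sync point (the step size of one Q30 unit per update is an arbitrary assumption; tune it to your drift and jitter):

/* Nudge the Q30 scale factor one step towards agreement with timer A. */
static void sync_adjust(const int64_t timera_estimate)
{
    const int64_t local = timerb();
    if (timera_estimate > local && timerb_scale < INT32_MAX)
        timerb_scale++; /* B appears slow: speed it up very slightly */
    else if (timera_estimate < local && timerb_scale > 1)
        timerb_scale--; /* B appears fast: slow it down very slightly */
}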
Network Time Protocol implementations do something very similar, and you might find further implementation help there.