Why does Arduino use an interrupt every 1.024 ms in the millis() function? - timer

I am implementing a time counter on my ATmega328P. I looked at the implementation of the Arduino millis() function and I am a bit confused: why do they use the Timer Overflow Interrupt, which fires every 1.024 ms (freq = 16 MHz, prescaler 64), when they could use the Output Compare Match Interrupt, which can be set up to trigger exactly every 1 ms (OCR0A = 249)? Is there any advantage to using the Timer Overflow Interrupt and correcting the counted milliseconds, over an Output Compare Match Interrupt that fires exactly every 1 ms? Why do they use it?

The counter value TCNT0 is used to calculate microseconds in between interrupts. Using a compare match to define the TOP value (CTC mode) would generate an exact 1 ms interrupt, but it complicates the finer micros() calculation because TCNT0 is reset at TOP rather than running through the full 0-255 range. Using a compare match with a non-TOP value (as for PWM generation) does not generate a periodic 1 ms interrupt at all.
Personally, I use a second timer for sampling, with its TOP value defined by the OCRxA register.
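For illustration, here is a minimal sketch of the compare-match alternative the question describes, assuming an ATmega328P at 16 MHz built with avr-gcc/avr-libc (this is not the Arduino core's actual code). It fires exactly every 1 ms, at the cost of TCNT0 no longer free-running over the full 0-255 range:

#include <avr/io.h>
#include <avr/interrupt.h>

static volatile uint32_t ms_count;      /* whole milliseconds since start */

ISR(TIMER0_COMPA_vect)                  /* fires exactly every 1.000 ms */
{
    ms_count++;
}

void timer0_ctc_init(void)
{
    TCCR0A = _BV(WGM01);                /* CTC mode, TOP = OCR0A */
    TCCR0B = _BV(CS01) | _BV(CS00);     /* prescaler 64: 16 MHz / 64 = 250 kHz */
    OCR0A  = 249;                       /* 250 ticks per interrupt = 1.000 ms */
    TIMSK0 = _BV(OCIE0A);               /* enable compare match A interrupt */
    sei();
}

The trade-off described above is visible here: TCNT0 now wraps at 249 instead of 255, so the overflow-count-plus-TCNT0 arithmetic that micros() relies on would have to be reworked.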

Related

How to write a time difference function for the STM32F4

I am working on an STM32F4 and am pretty new at it. I know the basics of C, but after more than a day of research I still haven't found a solution to this.
I simply want to write a delay function myself. The processor runs at 168 MHz (HCLK), so my intuition says it produces 168×10^6 clock cycles each second. The method should be something like this:
1. Store the current clock count in a variable.
2. Time diff = (clock value at any time - stored starting clock value) / 168000000
This flow should give me the time difference in seconds, and then I can convert it to whatever unit I want.
But unfortunately, despite it seeming so easy, I just can't get any method working on the MCU.
I tried time.h but it did not work properly. For example, clock() gave the same result over and over, and time() (the one that returns seconds since 1970) gave 0xFFFFFFFF (-1, which I guess means an error).
Thanks.
Edit: While writing this I assumed that some function like clock() would return the total clock count since the start of the program, but now I realize that after 4 billion / 168 million seconds it would overflow a uint32_t. I am really confused.
The answer depends on the required precision and intervals.
For shorter intervals with sub-microsecond precision there is a cycle counter. Your suspicion is correct: it would overflow after 2^32 / (168×10^6) ≈ 25.5 seconds.
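As an illustration, here is a minimal sketch of that cycle counter (the Cortex-M4 DWT CYCCNT register), using the standard CMSIS core definitions; treat it as a sketch rather than production code:

#include <stdint.h>
#include "stm32f4xx.h"   /* CMSIS device header, pulls in core_cm4.h */

void cyccnt_init(void)
{
    CoreDebug->DEMCR |= CoreDebug_DEMCR_TRCENA_Msk;   /* enable the trace/DWT block */
    DWT->CYCCNT = 0;                                  /* clear the counter */
    DWT->CTRL  |= DWT_CTRL_CYCCNTENA_Msk;             /* start counting cycles */
}

uint32_t cycles_now(void)
{
    return DWT->CYCCNT;   /* wraps after 2^32 cycles, ~25.5 s at 168 MHz */
}

Taking uint32_t dt = cycles_now() - start; stays correct across a single wrap thanks to unsigned arithmetic.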
For longer intervals there are timers that can be prescaled to support any possible subdivision of the 168 MHz clock. The most commonly used setup is the SysTick timer configured to generate an interrupt at 1 kHz, which increments a software counter. Reading this counter gives the number of milliseconds elapsed since startup. As it is usually a 32-bit counter, it overflows after 49.7 days. The HAL library sets SysTick up this way; the counter can then be queried using the HAL_GetTick() function.
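A minimal sketch of that SysTick setup, assuming a bare CMSIS project (HAL does the equivalent internally; millis() below is an illustrative stand-in for HAL_GetTick()):

#include <stdint.h>
#include "stm32f4xx.h"

static volatile uint32_t ms_ticks;    /* 32-bit: wraps after ~49.7 days */

void SysTick_Handler(void)            /* standard CMSIS handler name */
{
    ms_ticks++;
}

void timebase_init(void)
{
    /* One interrupt per millisecond: SystemCoreClock / 1000 timer ticks */
    SysTick_Config(SystemCoreClock / 1000u);
}

uint32_t millis(void)
{
    return ms_ticks;
}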
For even longer or more specialized timing requirements you can use the RTC peripheral, which keeps calendar time, or the TIM peripherals (basic, general-purpose and advanced timers); these have their own prescalers and can be arranged in a master-slave setup to give almost arbitrary precision and intervals.

Handling interrupts in the auxiliary clock

I have the following code from the big code base of an embedded application. I am trying to understand the code and have the following questions.
old_rate = sysAuxClkRateGet();
sysAuxClkRateSet(50);
sysAuxClkConnect ((FUNCPTR) scanDispatcher, 0);
/* Enable dispatcher */
sysAuxClkEnable ();
My questions are:
Is scanDispatcher called on every tick, or after 50 ticks?
Does sysAuxClkRateSet(50) mean we have 50 ticks per second? Is my understanding right?
The auxiliary clock ISR will call scanDispatcher (with argument 0) every time it is invoked to handle the auxiliary clock interrupt, i.e. on every tick.
sysAuxClkRateSet(50) defines the frequency of the auxiliary clock interrupt, so your understanding is right: 50 ticks per second. Since the auxiliary clock driver ISR does nothing other than manage the timer device and call the scanDispatcher routine, you can change the frequency.
There are two kinds of limits on the frequency values you can use:
The auxiliary clock driver (part of the BSP you're using) defines the absolute minimum and maximum values the driver is able to manage.
The real maximum limit is set by the system load introduced by scanDispatcher and its execution time; remember, in any case, that scanDispatcher is executed at interrupt time, so its execution time should always be very short.
A last caveat: the auxiliary clock isn't a mandatory device in VxWorks. Most BSPs support an auxiliary clock device, but in principle you could find a BSP that doesn't.
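To make this concrete, here is a hedged sketch of a tick-counting scanDispatcher: with sysAuxClkRateSet(50) it runs 50 times per second, i.e. every 20 ms (scanSeconds is a hypothetical helper, not part of the VxWorks API):

#include <vxWorks.h>
#include <sysLib.h>

static volatile unsigned long scanTicks;

void scanDispatcher(int arg)        /* runs in interrupt context: keep it short */
{
    (void)arg;                      /* the 0 passed to sysAuxClkConnect() */
    ++scanTicks;                    /* increments 50 times per second */
}

unsigned long scanSeconds(void)     /* whole seconds since sysAuxClkEnable() */
{
    return scanTicks / sysAuxClkRateGet();
}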

Timer reading when the postscaler is turned ON

I have read that the postscaler of a timer specifies how many times the counter has to overflow in order to get an interrupt.
But I have a doubt here.
What I understand is: if I load 0x55 and start the timer with a postscaler of 2, the timer will count from 0x55 to 0xFF, then again from 0x55 to 0xFF, and only then generate an interrupt.
Consider a case where I start the timer in an external interrupt. My requirement may be to get the time gap between two interrupts: I start the timer in the first interrupt, then read the timer in the next interrupt.
But if I have a postscaler set, will I get the wrong time?
I just used this as an example to make my question clear.
Edit: So will there be any issue if a timer value is read while the postscaler is turned ON?
Usage context: to get the time difference between two interrupts.
No. Postscalers and prescalers divide the timer's input/output clock so you can sample at lower frequencies or longer intervals, for applications where you need more count range than is available. Say you have an 8 MHz crystal with a 1:8 prescaler (found on many PICs): you won't count at 8 MHz but at 1 MHz.
Adding a prescaler or postscaler will certainly change the time between your two interrupts. But it won't affect the reading of the counter value, assuming you increment a variable each time one of the two interrupts fires. You will simply count slower or faster, depending on which timer you are using (most of them only have a prescaler option).
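As a sketch of the measurement the question describes (two external interrupts timestamped against a free-running timer), here is roughly what it could look like on a PIC18 with the XC8 compiler; the register names assume Timer0 in 16-bit mode and the INT0 pin, so adjust for your part. Note that the timer register is read directly; a postscaler only gates interrupt generation:

#include <xc.h>
#include <stdint.h>

static volatile uint16_t t_start;       /* timestamp of the first edge */
static volatile uint16_t t_delta;       /* measured gap, in timer ticks */
static volatile uint8_t  first_edge = 1;

void __interrupt() isr(void)
{
    if (INTCONbits.INT0IF) {            /* external edge on INT0 */
        INTCONbits.INT0IF = 0;
        uint8_t  lo  = TMR0L;                        /* read low byte first: */
        uint16_t now = ((uint16_t)TMR0H << 8) | lo;  /* this latches TMR0H on PIC18 */
        if (first_edge) {
            t_start = now;
            first_edge = 0;
        } else {
            t_delta = now - t_start;    /* unsigned math survives one wrap */
            first_edge = 1;
        }
    }
}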

Long Delay using Delay Functions from C18 Libraries for PIC18

I'm using a PIC18 with Fosc = 10 MHz. So if I use Delay10KTCYx(250), I get 10,000 × 250 instruction cycles × 4 clocks per instruction cycle × (1/(10×10^6)) s = 1 second.
How do I use the delay functions in the C18 for very long delays, say 20 seconds? I was thinking of just using twenty lines of Delay10KTCYx(250). Is there another more efficient and elegant way?
Thanks in advance!
It is strongly recommended that you avoid the built-in delay functions such as Delay10KTCYx().
Why, you might ask?
These delay functions are very inaccurate, and they may cause your code to be compiled in unexpected ways. Here's one such example where using the Delay10KTCYx() function can cause problems.
Let's say that you have a PIC18 microprocessor that has only two hardware timer interrupts. (Usually they have more but let's just say there are only two).
Now let's say you manually set up the first hardware timer interrupt to fire exactly once per second, to drive a heartbeat monitor LED. And let's say you set up the second hardware timer interrupt to fire every 50 milliseconds, because you want to take some sort of digital or analog reading at exactly 50-millisecond intervals.
Now, lastly, let's say that in your main program you want to delay 100,000 clock cycles, so you put a call to Delay10KTCYx(10) in your main program. What happens, do you suppose? How does the PIC18 magically count off 100,000 clock cycles?
One of two things will happen. It may "hijack" one of your other hardware timer interrupts to get exactly 100,000 clock cycles. This would either cause your heartbeat LED to stop blinking at exactly 1 second, or cause your digital or analog readings to happen at some time other than every 50 milliseconds.
Or, the delay function will just call a bunch of Nop()s and claim that 1 Nop() = 1 clock cycle. What isn't accounted for is the overhead within the Delay10KTCYx(10) function itself: it has to increment a counter to keep track of things, and that surely takes more than 1 clock cycle. As Delay10KTCYx(10) loops around and around, it is simply not capable of giving you exactly 100,000 clock cycles. Depending on many factors, you may get far more, or far fewer, clock cycles than you expected.
Delay10KTCYx(10) should only be used when you need an "approximate" amount of time, and pre-canned delay functions shouldn't be used at all if you are already using the hardware timer interrupts for other purposes. The compiler may not even compile successfully when Delay10KTCYx() is used for very long delays.
I would highly recommend that you set up one of your timer interrupts to fire at a known interval, say every 50,000 instruction cycles. Then, each time the hardware interrupts, increment a counter inside that timer's ISR and reload the timer. When enough intervals have elapsed to equal 20 seconds (at Fosc = 10 MHz the instruction clock is 2.5 MHz, so that is 1000 interrupts at 50,000 cycles per interrupt), reset your counter; see the sketch below. Basically, my advice is to always handle time on a PIC yourself rather than relying on pre-canned delay functions: build your own delay routines that integrate with the chip's hardware timers. Yes, it's extra work ("but why can't I just use this easy and nifty built-in delay function, why would they even put it there if it's going to muck up my program?"), but it should become second nature, just like manually configuring every single register in your PIC18 at boot-up, whether you use it or not, to prevent unexpected behavior.
You'll get far more accurate timing and far more predictable behavior from your PIC18. Using pre-canned delay functions is a recipe for disaster. It may work, it may even work on several projects, but sooner or later your code will go all buggy on you, you'll be left wondering why, and I guarantee the culprit will be the pre-canned delay function.
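Here is a hedged sketch of the timer-based approach recommended above, for C18 on a generic PIC18 using Timer0 in 16-bit mode. At Fosc = 10 MHz the instruction clock is 2.5 MHz, so a reload of 65536 - 50000 gives one interrupt per 50,000 instruction cycles (20 ms); register names and the vector setup follow C18 conventions, so verify them against your device's datasheet:

#include <p18cxxx.h>                  /* C18 device header */

static volatile unsigned int ticks;   /* one tick per timer interrupt (20 ms) */

void high_isr(void);

#pragma code high_vector = 0x08
void high_vector(void)
{
    _asm GOTO high_isr _endasm
}
#pragma code

#pragma interrupt high_isr
void high_isr(void)
{
    if (INTCONbits.TMR0IF) {
        INTCONbits.TMR0IF = 0;
        TMR0H = 0x3C;                 /* reload 65536 - 50000 = 15536 = 0x3CB0 */
        TMR0L = 0xB0;                 /* writing TMR0L also latches TMR0H */
        ticks++;
    }
}

void timer0_init(void)
{
    T0CON = 0x08;                     /* 16-bit mode, internal clock, no prescaler */
    TMR0H = 0x3C;
    TMR0L = 0xB0;
    INTCONbits.TMR0IF = 0;
    INTCONbits.TMR0IE = 1;            /* enable Timer0 interrupt */
    INTCONbits.GIE = 1;               /* enable global interrupts */
    T0CONbits.TMR0ON = 1;             /* start the timer */
}

void delay_s(unsigned char seconds)
{
    unsigned int target = seconds * 50u;   /* 50 ticks of 20 ms = 1 second */
    ticks = 0;
    while (ticks < target)
        ;                             /* or do useful work instead of spinning */
}

delay_s(20) then gives the 20-second delay from the question without tying up the other timers.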
To create very long delays, use an internal timer. This helps you avoid blocking in your application, and you can check the elapsed time as it runs. Please refer to the PIC data sheet for how to set up a timer and its interrupt.
If you want a very high precision 1 s time base, I also suggest considering an external RTC device, or the internal RTC if the micro has one.

Generate/output a clock pulse (C code)

I'm using an Ethernut 2.1 B and I need a C program that outputs a clock signal on timer 1 output B, in other words on output OC1B. The frequency of the clock signal should be 1.0 kHz.
Does anyone know how this could be done?
You need to look at the COM bits for your timer. For instance, for Timer0 (8-bit), the COM bits are set in the TCCR0 register. The setting you'd probably be interested in is
TCCR0 |= (0 << COM01) | (1 << COM00); // Toggle OC0 on compare match
This will toggle the OC0 line (pin 14) when the timer reaches the specified value.
Which timer you use depends on the precision you need: obviously the 16-bit timers can give you more precise time resolution than the 8-bit timers.
The register settings for your specific frequency (1 kHz) depend on the clock speed of your chip and on which timer you are using: the timers run from a prescaled general clock signal (see table 56 of the datasheet for the possible values). The prescaler setting therefore depends on your clock speed and on how high you want to count. For the best precision you will want to count as high as possible, which means the lowest prescaler setting compatible with your timer's maximum value.
As far as where to start, generally, reading the datasheet is a good place, but googling "AVR timer" can also be very helpful.
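Putting this together for the question's target, here is a hedged sketch for the ATmega128 (the CPU on the Ethernut 2.1): Timer1 in CTC mode, hardware-toggling OC1B. It assumes F_CPU = 16 MHz for round numbers; Ethernut boards may use a different crystal, so recompute OCR1A for your actual clock:

#include <avr/io.h>

int main(void)
{
    DDRB |= _BV(PB6);                    /* OC1B is PB6 on the ATmega128 */

    TCCR1A = _BV(COM1B0);                /* toggle OC1B on compare match */
    TCCR1B = _BV(WGM12) | _BV(CS11);     /* CTC mode (TOP = OCR1A), prescaler 8 */

    OCR1A = 999;                         /* period: 1000 ticks at 2 MHz = 500 us */
    OCR1B = 999;                         /* toggle point; toggling every 500 us
                                            gives a 1.0 kHz square wave */
    for (;;)
        ;                                /* the output runs purely in hardware */
}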
It seems to be based on the Atmel ATmega128, so read that CPU's data sheet to figure out how to program the timer hardware.
I'm not sure whether this microcontroller supports directly driving an output pin from a timer; if it doesn't, you're going to have to do it in software from the interrupt service routine.
