How to write a time difference function for STM32F4 in C

I am working with an STM32F4 and am pretty new at it. I know the basics of C, but after more than a day of research I still have not found a solution to this.
I simply want to write a delay function myself. The processor runs at 168 MHz (HCLK), so my intuition says it produces 168x10^6 clock cycles each second. The method should therefore be something like this:
1. Store the current clock count in a variable.
2. Time difference = (clock value at any time - stored starting clock value) / 168000000
This flow should give me the time difference in seconds, which I can then convert to whatever units I want.
But unfortunately, despite it seeming so easy, I just can't get any method working on the MCU.
I tried time.h but it did not work properly. For example, clock() gave the same result over and over, and time() (the one that returns seconds since 1970) gave 0xFFFFFFFF (-1, which I guess means error).
Thanks.
Edit: While writing this I assumed that some function like clock() would return the total clock count since the start of the program, but now I think that after 4 billion / 168 million seconds it would overflow a uint32_t. I am really confused.

The answer depends on the required precision and intervals.
For shorter intervals with sub-microsecond precision there is a cycle counter. Your suspicion is correct: it would overflow after 2^32 / (168x10^6) ~ 25.5 seconds.
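For example, on the Cortex-M4 the DWT cycle counter can be read directly. A minimal sketch, assuming the CMSIS device header is available and HCLK is 168 MHz:

#include "stm32f4xx.h"

static void cycle_counter_init(void)
{
    CoreDebug->DEMCR |= CoreDebug_DEMCR_TRCENA_Msk;   /* enable the DWT block */
    DWT->CYCCNT = 0;                                  /* reset the counter    */
    DWT->CTRL  |= DWT_CTRL_CYCCNTENA_Msk;             /* start counting       */
}

static uint32_t elapsed_us(uint32_t start_cycles)
{
    /* Unsigned subtraction stays correct across one 32-bit wrap. */
    return (DWT->CYCCNT - start_cycles) / 168U;
}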
For longer intervals there are timers that can be prescaled to support any possible subdivision of the 168 MHz clock. The most commonly used setup is the SysTick timer set to generate an interrupt at 1 kHz frequency, which increments a software counter. Reading this counter would give the number of milliseconds elapsed since startup. As it is usually a 32 bit counter, it would overflow after 49.7 days. The HAL library sets SysTick up this way; the counter can then be queried using the HAL_GetTick() function.
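A minimal sketch of a blocking delay built on that tick (roughly what HAL_Delay() does internally), assuming the STM32Cube HAL has been initialized:

#include "stm32f4xx_hal.h"

/* Busy-wait for the given number of milliseconds. */
void delay_ms(uint32_t ms)
{
    uint32_t start = HAL_GetTick();
    while ((HAL_GetTick() - start) < ms)
    {
        /* unsigned subtraction stays correct across the 49.7-day wrap */
    }
}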
For even longer or more specialized timing requirements you can use the RTC peripheral, which keeps calendar time, or the TIM peripherals (basic, general and advanced timers). These have their own prescalers, and they can be arranged in a master-slave setup to give almost arbitrary precision and intervals.
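As an illustration of the TIM option, here is a sketch that turns the 32-bit TIM2 into a free-running microsecond counter. It assumes the stock 168 MHz clock tree where TIM2 is clocked at 84 MHz and the STM32Cube HAL; adjust the prescaler if your clock setup differs:

#include "stm32f4xx_hal.h"

TIM_HandleTypeDef htim2;

void us_timer_init(void)
{
    __HAL_RCC_TIM2_CLK_ENABLE();

    htim2.Instance           = TIM2;
    htim2.Init.Prescaler     = 84U - 1U;        /* 84 MHz / 84 = 1 MHz -> 1 us per tick   */
    htim2.Init.CounterMode   = TIM_COUNTERMODE_UP;
    htim2.Init.Period        = 0xFFFFFFFFU;     /* full 32-bit range, wraps after ~71.6 min */
    htim2.Init.ClockDivision = TIM_CLOCKDIVISION_DIV1;
    HAL_TIM_Base_Init(&htim2);
    HAL_TIM_Base_Start(&htim2);
}

uint32_t micros(void)
{
    return __HAL_TIM_GET_COUNTER(&htim2);       /* microseconds since us_timer_init() */
}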

Related

Hertz used in AVR

I'm very new to AVR and got confused about a question from one of our tutorials. It says:
"To toggle an output compare pin 8 times per second (4Hz period), what clock prescale and output output compare values do we need?"
My confusion is:
Why does it say "4Hz period"? Isn't hertz a measure of frequency? Why is it being used to describe a time period?
You are correct, Hz is the unit of frequency. Frequency is changes per second, so if you toggle a pin 4 times per second its frequency is 4 Hz. The author was probably just being lazy and did not want to write the period out as 1/4 s.
Hertz represents cycles per second in the SI system.
As mentioned above, it tells you how many times a process, event, or anything else takes place per second. Almost everything is measured in Hz because it is convenient and has a direct relation to time, so one can easily convert back and forth between units.
The AVR has a system clock which is prescaled down to lower rates for the different peripherals. The speed of the main event loop can also be expressed in hertz; on my current project the loop runs at 2.7 MHz, which is a very fast loop.
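To make the tutorial question concrete, here is a sketch of one possible register setup, assuming an ATmega328P-style 16-bit Timer1 and the 1 MHz clock used in the lecture example; other AVR parts use different register names:

#include <avr/io.h>

void timer1_toggle_4hz(void)
{
    /* 1 MHz / 8 (prescaler) = 125 kHz timer clock.
     * 125000 / 8 toggles per second = 15625 counts per toggle,
     * giving 8 toggles per second = a 4 Hz square wave on OC1A. */
    DDRB  |= (1 << DDB1);                 /* OC1A pin (PB1) as output     */
    OCR1A  = 15625U - 1U;                 /* compare value (counts from 0) */
    TCCR1A = (1 << COM1A0);               /* toggle OC1A on compare match  */
    TCCR1B = (1 << WGM12) | (1 << CS11);  /* CTC mode, clk/8 prescaler     */
}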

Better way to determine seconds since power on in microcontroller?

Short question: How to get seconds since reset in STM32L051T6 microcontroller?
My effort and detailed issue:
I am using an STM32L051T6 series microcontroller. I need to count seconds since power on, and I am also using low power mode. So I wrote code to use the wakeup timer interrupt functionality of the microcontroller's internal RTC, with a 1-second-interval wakeup timer clocked from an external 32768 Hz LSE. I observed the accumulated seconds since power on (SSPO) after 3 days and found that it had fallen behind by 115 seconds compared to the actual elapsed time. My guess is that this drift comes from interrupt latency in servicing the wakeup timer interrupt. How can I remove this 115-second drift? Or is there a better method than using the wakeup interrupt to count seconds since power on?
UPDATE:
I tried to use SysTick with the HAL_GetTick() function to count seconds since power on, but even SysTick falls behind over time.
If you want to measure time with accuracy over a longer period, an RTC is the way to go. As you mentioned that you have an RTC, you can use the method below.
At startup, load the RTC with zero.
Then you can read the seconds elapsed when required without the errors above.
Edit: As per comment, the RTC can be changed by user. In that case,
If you can modify the RTC write function called by the user, then when the user calls the RTC write function, update a global variable VarA with the time set by the user. The elapsed time will then be (time read from RTC) - VarA.
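A minimal sketch of this bookkeeping, assuming hypothetical rtc_read_seconds()/rtc_write_seconds() wrappers around whatever RTC driver is in use, and extended slightly so the count stays continuous across a user write:

#include <stdint.h>

extern uint32_t rtc_read_seconds(void);          /* hypothetical RTC driver calls */
extern void     rtc_write_seconds(uint32_t s);

static uint32_t rtc_offset;                      /* the "VarA" of the answer above */

void clock_init_at_boot(void)
{
    rtc_write_seconds(0);                        /* load the RTC with zero at startup */
    rtc_offset = 0;
}

uint32_t seconds_since_poweron(void)
{
    return rtc_read_seconds() - rtc_offset;
}

/* Route every user-initiated time change through this wrapper so the
 * elapsed count stays continuous. */
void user_set_time(uint32_t new_time)
{
    uint32_t elapsed = seconds_since_poweron();
    rtc_write_seconds(new_time);
    rtc_offset = new_time - elapsed;
}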
If the RTC is accurate, you should use the RTC by storing its value at boot time and later comparing to that saved value. But you said that the RTC can be reset by user so I can see two ways to cope with it:
if you have enough control over the system, replace the command or HMI the user can use to reset the clock with a wrapper that informs your module and lets it read the RTC before and after the reset
if you do not have enough control or cannot wrap the user's reset (because it uses a direct system call, etc.), use a timer to check the RTC value every second
Either way you should define a threshold on the delta of the RTC clock. If the delta is small, it is likely an adjustment, because unless your system uses an atomic clock even an RTC drifts over time. In that case I would not worry, because you can hardly know whether it drifted since the last reboot or not. If you want a cleverer algorithm, you can make the threshold depend on the time since the last reboot: the longer the system is up, the higher the probability it has drifted since then.
Conversely, a large delta is likely a correction because the RTC was blatantly wrong, the backup battery is dead, or something similar. In that case you should compute a new start RTC time that gives the same elapsed duration with the new RTC value.
As a rule of thumb, I would use a threshold of about 1 or 2 seconds per day of uptime without an RTC adjustment, which means also storing the time of the last RTC adjustment, initialized to the boot time.
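A sketch of that threshold logic, using the same hypothetical RTC wrapper and polled once per second from a timer as suggested above:

#include <stdint.h>
#include <stdlib.h>

extern uint32_t rtc_read_seconds(void);          /* hypothetical RTC driver call */

#define DRIFT_PER_DAY   2U                       /* allowed seconds of drift per uptime day */

static uint32_t boot_rtc;                        /* RTC value recorded at boot      */
static uint32_t last_checked;                    /* RTC value seen on the last poll */

/* Call this once per second from a timer. */
void rtc_watch_poll(void)
{
    uint32_t now   = rtc_read_seconds();
    int32_t  delta = (int32_t)(now - last_checked) - 1;   /* expected step is 1 s */

    if (delta != 0)
    {
        uint32_t uptime_days = (last_checked - boot_rtc) / 86400U + 1U;
        if ((uint32_t)abs(delta) <= DRIFT_PER_DAY * uptime_days)
        {
            /* small delta: treat as a drift adjustment, nothing to do */
        }
        else
        {
            /* large delta: treat as a user correction; rebase so the
             * reported uptime (now - boot_rtc) stays the same */
            boot_rtc += (uint32_t)delta;
        }
    }
    last_checked = now;
}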

Regarding timers in embedded C [closed]

So I have my exam tomorrow. I missed a lecture, but I have a recording of the professor's lecture. In it, the professor mentioned that we will need to know how timers work in embedded processors.
I have a basic understanding, but I am confused about the math part. The professor said that, for example, he will give us a 12-bit timer running at some rate X. We will have to set the timer's initial value and wait for it to overflow. If we want the timer to wait for 3 milliseconds, what do we set the timer to?
In addition, professor said that "it is a simple math, giving us clocks equal to 1,000,000 and the time will be 1000 so we can do divisions easily."
Can somebody please explain how timers actually work and what I would need to do to get the math part right?
Most embedded timers work as follows:
set timer to some initial value, T
start timer
timer decrements once per timer clock
when timer reaches 0 (or underflows beyond 0) an interrupt is generated
So the elapsed time will just be T / timer_clock_rate, where T is the initial timer value and timer_clock_rate will depend on how you've configured the timer.
So for example if you want a 3 ms delay and your timer clock rate is 1 MHz (i.e. the timer decrements once every 1 µs) then you need an initial timer value of 3000 (3000 x 1 µs = 3 ms).
EDIT: see also Rev1.0's answer - apparently AVR timers count up rather than down - note that some other microcontroller families use count-down timers. The same general principle applies to both, but the initial constant that you load will be different depending on whether you are counting up or down.
While Paul's example may be sufficient for you to get the idea of how it works, it addresses the problem somewhat differently from what your question suggested.
The mentioned overflow occurs when the timer reaches the maximum value. That would be 4096 for a 12 bit timer (2^12). Having a given clock of 1MHz (1us per tick) you have to count to 3000 to get 3ms like Paul already pointed out.
So you would set the initial value of the timer to 4096 - 3000 = 1096 to get an overflow after 3ms.
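A small sketch of both calculations, assuming a 1 MHz timer clock as in the examples above:

#include <stdint.h>

#define TIMER_CLOCK_HZ  1000000UL    /* 1 us per tick           */
#define TIMER_MAX       4096UL       /* 2^12 counts for 12 bits */

uint32_t ticks_needed(uint32_t delay_ms)
{
    return TIMER_CLOCK_HZ / 1000UL * delay_ms;      /* 3 ms -> 3000 ticks */
}

/* Down-counter: load the tick count and wait for it to reach zero. */
uint32_t downcount_reload(uint32_t delay_ms)
{
    return ticks_needed(delay_ms);                  /* 3000 */
}

/* Up-counter that interrupts on overflow: start part-way up the range. */
uint32_t upcount_preload(uint32_t delay_ms)
{
    return TIMER_MAX - ticks_needed(delay_ms);      /* 4096 - 3000 = 1096 */
}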
Just follow the dimensions. X is cycles per second; assume that one tick is one cycle (if there is a prescaler, there is another factor here, because the timer counts in ticks). And one second is 1000 milliseconds. So just arrange all of these dimensions so that they cancel out, leaving only the one you want:
(1 tick / 1 cycle) * (X cycles / 1 second) * (1 second / 1000 milliseconds) * (3 milliseconds)
Cancel out all the units divided by themselves. From grade school math, (almost) anything divided by itself is one; just apply that to the units, leaving:
(1 tick / 1) * (X / 1) * (1 / 1000) * 3
So whatever your timer clock source frequency is in cycles per second (Hz) is multiplied by 3 and divided by 1000. If that clock frequency is 1000000 then (1000000*3)/1000 = 3000.
That is how you can easily figure out what gets multiplied and what gets divided; it works for every flavor of conversion, miles per hour to kilometers per second, whatever.
Then just follow Rev1.0's answer or Paul R.'s.
Sometimes there is an N-1 issue you have to be aware of, which is either documented or something you can test for. For example, if you have a down counter, the docs will/should say whether the timer rolls over or interrupts WHEN it reaches zero or AFTER. Usually it is after, so counting 3000 down to 0 inclusive is 3001 counts and you are off by a little; for a system like that you need to program the timer with 2999 to get 3000 ticks. For an up counter it is usually one of two ways. Either it counts from zero to the programmed value, the same deal: count from 0 to 2999 to get 3000 counts, so you would probably program 2999 into the register, not 3000. Or the value you program is the start count, as Rev1.0 showed, and the rollover happens after the all-ones value for the register size, in this case 12 bits, which is 0xFFF. Don't make the mistake of computing 0xFFF - 3000 and getting 1095; it is easier to do it the way Rev1.0 shows: (0xFFF + 1) - 3000 = 4096 - 3000 = 1096 is your start count.
The same goes for prescalers: you have to read very carefully what the docs are saying. Do you program a 2 to divide by 2, or a 1 to divide by 2? What happens if you program a 0: is that a divide by 1, a divide by the maximum value, or an invalid divisor setting? Where does that fit into the dimensional analysis? Ticks per cycle. A prescaler that divides by 8 means that for every 8 cycles you get one tick, so that is 1 tick / 8 cycles.
Sometimes you will have a system clock that is divided down to a peripheral clock, and the peripheral clock is what feeds the timer, and then there may be a prescaler on top of that: n system clocks / 1 peripheral clock, then 1 peripheral clock / 1 timer clock, then 1 tick is M timer clocks. Get everything lined up so all but one unit cancels out and there you go.
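The whole chain can be captured in one small helper; a sketch, with a 64-bit intermediate so large clock frequencies do not overflow:

#include <stdint.h>

uint32_t ms_to_ticks(uint32_t sysclk_hz,   /* X cycles per second                */
                     uint32_t bus_div,     /* system clocks per peripheral clock */
                     uint32_t prescaler,   /* peripheral clocks per timer tick   */
                     uint32_t ms)
{
    uint64_t timer_hz = (uint64_t)sysclk_hz / bus_div / prescaler;   /* ticks per second */
    return (uint32_t)(timer_hz * ms / 1000U);                        /* ticks for 'ms'   */
}
/* Example: ms_to_ticks(1000000, 1, 1, 3) == 3000. */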
You can, and should, do this in reverse as well. From the numbers we have right now, 3000 timer ticks is 3 ms or 0.003 seconds; 3000/0.003 gives ticks per second, i.e. 1000000 cycles per second.
But what if we had a different controller, or one whose clock we did not know (or we know the crystal but suspect there is a prescaler somewhere we cannot find in the docs)? Let the timer roll over its 4096 ticks, for example, and measure that with a stopwatch or oscilloscope or something. Neither is very accurate, but it might give a rough enough idea to figure out whether there is a prescaler, or which clock we are actually running on if there is a PLL multiplying the clock, etc. Say we measured 0.0005 seconds for every 4096 timer ticks: 4096/0.0005 = 8192000 Hz. Now, if the crystal/oscillator in the schematic, or printed on the part, says 16 MHz, that would make some sense: 4096/8000000 = 0.000512, and 8 MHz is half of 16 MHz, so your measurement is probably off by a smidge, and the clock has some tolerance too and may be off by some amount. So you check the docs for an internal oscillator; if there is none, then you are probably running off the 16 MHz clock but there is a divide by 2 that is documented and you have not found, or that is not documented (it happens sometimes), and your timer is running off system clock / 2. Now you can use that number as X and figure out whatever you need. Why not have the timer count to 1000 or some other number that is easier to compute? That is fine too, but it can take more work and more experiments; sometimes you have a free-running timer that you cannot reset at some max or min count, and instead you can just say:
while(1)
{
    while((read_timer() & 0x1000) == 0) continue;
    turn_gpio_on();
    while((read_timer() & 0x1000) != 0) continue;
    turn_gpio_off();
}
Measure the on time or the off time and that is 0x1000 ticks. Hex is not as pretty a number as 1000 decimal when doing decimal math on your calculator, but it is pretty when you can simply use an AND operation with one bit to toggle the GPIO/LED.
The last point being: you can read the docs and schematics and think you know how it is working, but you should test your results. If you are off by some whole multiple, especially a power of two, either your math was wrong by that factor or there is a clock divisor somewhere in the chip you do not know about or have not looked hard enough to find. It could also mean you thought you had boosted the crystal speed to some other speed using the PLL, but there is a mistake there and the chip is not running at the desired speed.
(If this answer is useful to you then upvote Paul and Rev1.0 before upvoting me; I am just expanding on their answers.)

Precise Linux Timing - What Determines the Resolution of clock_gettime()?

I need to do precision timing to the 1 us level to time a change in duty cycle of a pwm wave.
Background
I am using a Gumstix Overo Water COM (https://www.gumstix.com/store/app.php/products/265/) that has a single core ARM Cortex-A8 processor running at 499.92 BogoMIPS (the Gumstix page claims up to 1GHz with 800MHz recommended) according to /proc/cpuinfo. The OS is an Angstrom image of Linux based on kernel version 2.6.34, and it is stock on the Gumstix Overo Water COM.
The Problem
I have done a fair amount of reading about precise timing in Linux (and have tried most of it) and the consensus seems to be that using clock_gettime() and referencing CLOCK_MONOTONIC is the best way to do it. (I would have liked to use the RDTSC register for timing since I have one core with minimal power saving abilities, but this is not an Intel processor.) So here is the odd part: while clock_getres() returns 1, suggesting 1 ns resolution, actual timing tests suggest a minimum resolution of 30517 ns, or (it can't be a coincidence) exactly the period of a 32.768 kHz clock. Here's what I mean:
// Stackoverflow example
#include <stdio.h>
#include <time.h>

#define SEC2NANOSEC 1000000000

int main( int argc, const char* argv[] )
{
    // //////////////// Min resolution test //////////////////////
    struct timespec resStart, resEnd, ts;
    ts.tv_sec  = 0;   // s
    ts.tv_nsec = 1;   // ns
    int iters = 100;
    double resTime, sum = 0;
    int i;
    for (i = 0; i < iters; i++)
    {
        clock_gettime(CLOCK_MONOTONIC, &resStart);       // start timer
        // clock_nanosleep(CLOCK_MONOTONIC, 0, &ts, &ts);
        clock_gettime(CLOCK_MONOTONIC, &resEnd);         // end timer
        resTime = ((double)resEnd.tv_sec*SEC2NANOSEC + (double)resEnd.tv_nsec)
                - ((double)resStart.tv_sec*SEC2NANOSEC + (double)resStart.tv_nsec);
        sum = sum + resTime;
        printf("resTime = %f\n", resTime);
    }
    printf("Average = %f\n", sum/(double)iters);
    return 0;
}
(Don't fret over the double casting; tv_sec is a time_t and tv_nsec is a long.)
Compile with:
gcc soExample.c -o runSOExample -lrt
Run with:
./runSOExample
With the nanosleep commented out as shown, the result is either 0 ns or 30517 ns, with the majority being 0 ns. This leads me to believe that CLOCK_MONOTONIC is updated at 32.768 kHz; most of the time the clock has not been updated before the second clock_gettime() call is made, and in the cases where the result is 30517 ns the clock was updated between the calls.
When I do the same thing on my development computer (AMD FX(tm)-6100 Six-Core Processor running at 1.4 GHz) the minimum delay is a more constant 149-151ns with no zeros.
So, let's compare those results to the CPU speeds. For the Gumstix, that 30517ns (32.768kHz) equates to 15298 cycles of the 499.93MHz cpu. For my dev computer that 150ns equates to 210 cycles of the 1.4Ghz CPU.
With the clock_nanosleep() call uncommented the average results are these:
Gumstix: Avg value = 213623 and the result varies, up and down, by multiples of that min resolution of 30517ns
Dev computer: 57710-68065 ns with no clear trend. In the case of the dev computer I expect the resolution to actually be at the 1 ns level and the measured ~150ns truly is the time elapsed between the two clock_gettime() calls.
So, my question's are these:
What determines that minimum resolution?
Why is the resolution of the dev computer 30000X better than the Gumstix when the processor is only running ~2.6X faster?
Is there a way to change how often CLOCK_MONOTONIC is updated and where? In the kernel?
Thanks! If you need more info or clarification just ask.
As I understand it, the difference between the two environments (Gumstix and your dev computer) is probably the underlying timer hardware they are using.
Commented nanosleep() case:
You are using clock_gettime() twice. To give you a rough idea of what this clock_gettime() ultimately gets mapped to (in the kernel):
clock_gettime --> clock_get() --> posix_ktime_get_ts --> ktime_get_ts() --> timekeeping_get_ns() --> clock->read()
clock->read() basically reads the value of the counter provided by the underlying timer driver and corresponding hardware. Taking the difference between the current counter value and a stored counter value from the past, and then doing the nanosecond-conversion math, yields the nanoseconds elapsed and updates the timekeeping data structures in the kernel.
For example, if you have an HPET timer which gives you a 10 MHz clock, the hardware counter gets updated at 100 ns intervals.
Let's say that on the first clock->read() you get a counter value of X.
The Linux timekeeping data structures will read this value X, compute the difference 'D' compared to some old stored counter value, do the counter-difference 'D' to nanoseconds 'n' conversion math, and update the data structure by 'n'.
This new time value is then handed to user space.
When the second clock->read() is issued, it again reads the counter and updates the time.
Now, for an HPET timer, this counter is updated every 100 ns, and hence you will see that difference reported to user space.
Now let's replace this HPET timer with a slow 32.768 kHz clock. The counter behind clock->read() is updated only every 30517 ns, so if your second call to clock_gettime() lands before that period has passed you will get 0 (which is the majority of cases), and in some cases the second call will land after the counter has incremented by 1, i.e. 30517 ns have elapsed. Hence the occasional value of 30517 ns.
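A simplified sketch of the delta-to-nanoseconds arithmetic described above (the real kernel uses a precomputed mult/shift pair per clocksource rather than a divide):

#include <stdint.h>

static uint64_t ns_elapsed(uint64_t counter_now, uint64_t counter_then,
                           uint64_t freq_hz)
{
    uint64_t delta = counter_now - counter_then;      /* counter steps since last read */
    return delta * 1000000000ULL / freq_hz;           /* convert steps to nanoseconds  */
}
/* With freq_hz = 32768, one counter step is 1e9/32768 ~ 30517 ns,
 * which is exactly the granularity observed on the Gumstix. */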
Uncommented Nanosleep() case:
Let's trace the clock_nanosleep() for monotonic clocks:
clock_nanosleep() --> nsleep --> common_nsleep() --> hrtimer_nanosleep() --> do_nanosleep()
do_nanosleep() will simply put the current task into the INTERRUPTIBLE state, wait for the timer to expire (which is 1 ns here) and then set the current task back to the RUNNING state. You see, there are a lot of factors involved now, mainly when your kernel thread (and hence the user-space process) gets scheduled again. Depending on your OS, you will always face some latency from the context switch, and this is what we observe in the average values.
Now Your questions:
What determines that minimum resolution?
I think the resolution/precision of your system will depend on the underlying timer hardware being used (assuming your OS is able to provide that precision to the user-space process).
Why is the resolution of the dev computer 30000X better than the Gumstix when the processor is only running ~2.6X faster?
Sorry, I lost you here. How is it 30000x better? To me it looks more like 200x (30517 ns / 150 ns ~ 200X?). But anyway, as I understand it, CPU speed may or may not have anything to do with the timer resolution/precision. So this assumption may be right on some architectures (when you are using the TSC hardware), but fail on others (using HPET, PIT, etc.).
Is there a way to change how often CLOCK_MONOTONIC is updated and where? In the kernel?
You can always look into the kernel code for details (that's how I looked into it).
In the Linux kernel source, look at these files and documentation:
kernel/posix-timers.c
kernel/hrtimer.c
Documentation/timers/hrtimers.txt
I do not have a Gumstix on hand, but it looks like your clocksource is slow.
run:
$ dmesg | grep clocksource
If you get back
[ 0.560455] Switching to clocksource 32k_counter
This might explain why your clock is so slow.
In recent kernels there is a directory /sys/devices/system/clocksource/clocksource0 with two files: available_clocksource and current_clocksource. If you have this directory, try switching to a different source by echoing its name into the second file.

Long Delay using Delay Functions from C18 Libraries for PIC18

I'm using a PIC18 with Fosc = 10MHz. So if I use Delay10KTCYx(250), I get 10,000 x 250 x 4 x (1/10e6) = 1 second.
How do I use the delay functions in the C18 for very long delays, say 20 seconds? I was thinking of just using twenty lines of Delay10KTCYx(250). Is there another more efficient and elegant way?
Thanks in advance!
It is strongly recommended that you avoid using the built-in delay functions such as Delay10KTCYx()
Why you might ask?
These delay functions are very inaccurate, and they may cause your code to be compiled in unexpected ways. Here's one such example where using the Delay10KTCYx() function can cause problems.
Let's say that you have a PIC18 microprocessor that has only two hardware timer interrupts. (Usually they have more but let's just say there are only two).
Now let's say you manually set up the first hardware timer interrupt to blink once per second exactly, to drive a heartbeat monitor LED. And let's say you set up the second hardware timer interrupt to interrupt every 50 milliseconds because you want to take some sort of digital or analog reading at exactly 50 milliseconds.
Now, lastly, let's say that in your main program you want to delay 100,000 clock cycles. So you put a call to Delay10KTCYx(10) in your main program. What happens, do you suppose? How does the PIC18 magically count off 100,000 clock cycles?
One of two things will happen. It may "hijack" one of your other hardware timer interrupts to get exactly 100,000 clock cycles. This would either cause your heartbeat sensor to not clock at exactly 1 second, or, cause your digital or analog readings to happen at some time other than every 50 milliseconds.
Or, the delay function will just call a bunch of Nop() and claim that 1 Nop() = 1 clock cycle. What isn't accounted for is the overhead within the Delay10KTCYx(10) function itself. It has to increment a counter to keep track of things, and surely it takes more than 1 clock cycle to increment that counter. As the Delay10KTCYx(10) loops around and around it is just not capable of giving you exactly 100,000 clock cycles. Depending on a lot of factors you may get way more, or way less, clock cycles than you expected.
The Delay10KTCYx(10) should only be used if you need an "approximate" amount of time. And pre-canned delay functions shouldn't be used if you are already using the hardware timer interrupts for other purposes. The compiler may not even successfully compile when using Delay10KTCYx(10) for very long delays.
I would highly recommend that you set up one of your timer interrupts to interrupt the hardware at a known interval, say every 50,000 clock cycles. Then, each time the hardware interrupts, within your ISR for that timer, increment a counter and reset the timer back to 0. When enough 50,000-cycle periods have expired to equal 20 seconds (in your example, 200 timer interrupts at 50,000 cycles per interrupt), reset your counter. Basically my advice is that you should always handle time manually on a PIC and not rely on pre-canned delay functions; build your own delay functions that integrate with the chip's hardware timer instead. Yes, it's extra work ("but why can't I just use this easy and nifty built-in delay function, why would they even put it there if it's gonna muck up my program?"), but this should become second nature, just like manually configuring EVERY SINGLE REGISTER in your PIC18 at boot-up, whether you are using it or not, to prevent unexpected things from happening.
You'll get far more accurate timing and far more predictable behavior from your PIC18. Using pre-canned delay functions is a recipe for disaster... it may work... it may work on several projects... but sooner or later your code will go buggy on you, you'll be left wondering why, and I guarantee the culprit will be the pre-canned delay function.
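A sketch of that approach for the C18 compiler, assuming Timer0 has been configured elsewhere to interrupt every 50,000 instruction cycles (the T0CON setup and TMR0H:TMR0L reload values are omitted here, and in production the multi-byte reads of tick_count should be guarded against interrupts):

#include <p18cxxx.h>

static volatile unsigned long tick_count = 0;

#pragma interrupt high_isr
void high_isr(void)
{
    if (INTCONbits.TMR0IF)
    {
        INTCONbits.TMR0IF = 0;      /* clear the interrupt flag             */
        /* reload TMR0H:TMR0L here for the next 50,000-cycle period */
        tick_count++;               /* one tick = 50,000 instruction cycles */
    }
}

#pragma code high_vector = 0x08
void interrupt_at_high_vector(void)
{
    _asm GOTO high_isr _endasm
}
#pragma code

/* Block for 'seconds' seconds. At Fosc = 10 MHz there are 2,500,000
 * instruction cycles per second, i.e. 50 ticks of 50,000 cycles each. */
void delay_seconds(unsigned int seconds)
{
    unsigned long target = (unsigned long)seconds * 50UL;
    tick_count = 0;
    while (tick_count < target)
    {
        /* spin; or go do useful work and poll tick_count instead */
    }
}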
To create very long delays, use an internal timer. This helps you avoid blocking your application, and you can check the running time. Please refer to the PIC data sheet on how to set up a timer and its interrupt.
If you want a very high precision 1 s time base, I also suggest considering an external RTC device, or an internal RTC if the micro has one.
