Short question: How do I get the seconds since reset on an STM32L051T6 microcontroller?
My effort and detailed issue:
I am using an STM32L051T6 series microcontroller. I need to count seconds since power-on, and I am also using low-power mode, so I wrote code that uses the wakeup timer interrupt of the microcontroller's internal RTC. I used a 1-second wakeup timer interval with an external 32768 Hz LSE clock. I observed the accumulated seconds since power-on (SSPO) after 3 days and found that it had fallen behind actual elapsed time by 115 seconds. My guess is that this drift comes from the interrupt latency of the wakeup timer interrupt. How can I remove this 115-second drift? Or is there a better method than the wakeup interrupt for counting seconds since power-on?
UPDATE:
I tried to use SysTick with the HAL_GetTick() function as seconds since power-on, but even SysTick falls behind over time.
If you want to measure time accurately over a long period, an RTC is the way to go. Since you mentioned that you have an RTC, you can use the method below.
At startup, load the RTC with zero.
Then you can read the elapsed seconds whenever required, without the errors described above.
Edit: As noted in a comment, the RTC can be changed by the user. In that case:
If you can modify the RTC write function called by the user, then when the user calls it, update a global variable, VarA = time set by the user. The elapsed time will then be the time read from the RTC minus VarA.
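A minimal sketch of that idea, assuming an STM32 HAL-style RTC API (HAL_RTC_GetTime()/HAL_RTC_GetDate()) and a working mktime() in your libc; the handle hrtc and the function names here are illustrative, not from any library:

#include <time.h>
#include <stdint.h>
#include "stm32l0xx_hal.h"          /* adjust to your device family */

extern RTC_HandleTypeDef hrtc;      /* RTC handle from your init code */

static uint32_t ref_epoch;          /* "VarA": set at boot and whenever the
                                       user's RTC write wrapper is called */

static uint32_t rtc_to_epoch(void)
{
    RTC_TimeTypeDef t;
    RTC_DateTypeDef d;

    /* per the HAL docs, the date must be read after the time
       to unlock the RTC shadow registers */
    HAL_RTC_GetTime(&hrtc, &t, RTC_FORMAT_BIN);
    HAL_RTC_GetDate(&hrtc, &d, RTC_FORMAT_BIN);

    struct tm tm = {
        .tm_year = d.Year + 100,    /* HAL years count from 2000 */
        .tm_mon  = d.Month - 1,
        .tm_mday = d.Date,
        .tm_hour = t.Hours,
        .tm_min  = t.Minutes,
        .tm_sec  = t.Seconds,
    };
    return (uint32_t)mktime(&tm);
}

void on_boot(void)           { ref_epoch = rtc_to_epoch(); }
void on_user_rtc_write(void) { ref_epoch = rtc_to_epoch(); }  /* call after the write */

uint32_t seconds_elapsed(void)
{
    return rtc_to_epoch() - ref_epoch;
}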
If the RTC is accurate, you should use it: store its value at boot time and later compare against that saved value. But you said that the RTC can be reset by the user, so I can see two ways to cope with that:
if you have enough control over the system, replace the command or HMI the user can use to reset the clock with a wrapper that informs your module and lets it read the RTC before and after the reset
if you do not have enough control, or cannot wrap the user's reset (because it uses a direct system call, etc.), use a timer to check the RTC value every second
But you should define a threshold on the delta of the RTC clock. If it is small, it is likely to be an adjustment, because unless your system uses an atomic clock, even an RTC can drift over time. In that case I would not care, because you can hardly know whether it drifted since the last reboot or not. If you want a more clever algorithm, you can make the threshold depend on the time since the last reboot: the longer the system has been up, the higher the probability that it has drifted since then.
Conversely, a large delta is likely to be a correction because the RTC was blatantly wrong, the backup battery had died, or similar. In that case you should compute the new start RTC time that gives the same elapsed duration with the new RTC value.
As a rule of thumb, I would use a threshold of about 1 or 2 seconds per day of uptime without an RTC clock adjustment (ref), meaning I would also store the time of the last RTC adjustment, initialized to boot time. A sketch of this classification logic follows.
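For illustration, a sketch of that classification, assuming the 1-second RTC poll described above; the threshold constant and all names here are hypothetical:

#include <stdint.h>
#include <stdlib.h>

#define DRIFT_PER_DAY 2            /* tolerated RTC drift, seconds per uptime day */

static int64_t boot_rtc;           /* RTC value captured at boot */
static int64_t days_since_adjust;  /* uptime days since the last accepted adjustment */

/* called by the 1 s poll; prev_rtc is the value read one second earlier */
void check_rtc_step(int64_t prev_rtc, int64_t new_rtc)
{
    int64_t delta = new_rtc - (prev_rtc + 1);   /* expected step is exactly 1 s */

    if (llabs(delta) <= DRIFT_PER_DAY * (days_since_adjust + 1)) {
        return;                    /* small delta: plausible drift, ignore it */
    }

    /* large delta: treat it as a user correction and shift the boot reference
       so that (current RTC - boot_rtc) still reports the same elapsed duration */
    boot_rtc += delta;
    days_since_adjust = 0;
}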
I am working on an STM32F4 and am pretty new at it. I know the basics of C, but after more than a day of research I still have not found a solution to this.
I simply want to make a delay function myself. The processor runs at 168 MHz (HCLK), so my intuition says that it produces 168×10^6 clock cycles each second. So the method should be something like this:
1. Store the current clock count in a variable.
2. Time diff = (clock value at any time - stored starting clock value) / 168000000.
This flow should give me the time difference in seconds, and then I can convert it to whatever I want.
But unfortunately, despite it seeming so easy, I just can't get any method working on the MCU.
I tried time.h but it did not work properly. For example, clock() gave the same result over and over, and time() (the one that returns seconds since 1970) gave 0xFFFFFFFF (-1, which I guess means an error).
Thanks.
Edit: While writing this I assumed that some function like clock() would return the total clock count since the start of the program, but now I realize it would overflow a uint32_t after about 4 billion / 168 million ≈ 25.6 seconds. I am really confused.
The answer depends on the required precision and intervals.
For shorter intervals with sub-microsecond precision there is a cycle counter. Your suspicion is correct: it would overflow after 2^32 / (168×10^6) ≈ 25.5 seconds.
For longer intervals there are timers that can be prescaled to provide almost any subdivision of the 168 MHz clock. The most commonly used setup is the SysTick timer configured to generate an interrupt at 1 kHz, which increments a software counter. Reading this counter gives the number of milliseconds elapsed since startup. As it is usually a 32-bit counter, it overflows after 49.7 days. The HAL library sets SysTick up this way; the counter can then be queried with the HAL_GetTick() function.
For even longer or more specialized timing requirements you can use the RTC peripheral, which keeps calendar time, or the TIM peripherals (basic, general-purpose, and advanced timers). These have their own prescalers and can be arranged in a master-slave setup to give almost arbitrary precision and intervals.
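To make the first two options concrete, here is a minimal sketch, assuming an STM32F4 with the CMSIS core headers and the HAL available; dwt_init(), delay_us(), and delay_ms() are illustrative names, not library functions:

#include "stm32f4xx_hal.h"

static void dwt_init(void)
{
    CoreDebug->DEMCR |= CoreDebug_DEMCR_TRCENA_Msk;  /* enable the trace block */
    DWT->CYCCNT = 0;
    DWT->CTRL  |= DWT_CTRL_CYCCNTENA_Msk;            /* start the cycle counter */
}

static void delay_us(uint32_t us)
{
    uint32_t start = DWT->CYCCNT;
    uint32_t ticks = us * (SystemCoreClock / 1000000U);  /* 168 ticks/µs at 168 MHz */
    while ((DWT->CYCCNT - start) < ticks) { }  /* unsigned math survives one wrap */
}

static void delay_ms(uint32_t ms)
{
    uint32_t start = HAL_GetTick();            /* the 1 kHz SysTick counter */
    while ((HAL_GetTick() - start) < ms) { }
}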
I have a POSIX timer that should fire every time 3 AM rolls around.
The obvious implementation is to use timer_create with CLOCK_REALTIME, and set the timeout to 3 AM, either today or tomorrow depending on the current time.
That works well for all cases except when the system time is corrected backwards. If the user sets the clock one year in the future and then back to the correct date, the new timer value will be set to tomorrow in one year, so the timer will not elapse for an entire year.
As a workaround, I can probably determine for each repeating timer when the previous timeout should have occurred, and if I ever see a system time earlier than that, correct all existing timers; however, that still requires a separate wakeup reason.
Is there a way to be notified if the system clock is set backwards, either in general or before a given time, or is there a better way to handle this?
First of all, I think you're overthinking this - it's 2017, a system's clock never needs to be set (rather than just drift-corrected) except at boot, and the clock should never be wrong by more than a few milliseconds (ideally microseconds), all kept in proper shape by ntp, ptp, or similar. In my book, it's perfectly acceptable to simply document that your software will not behave as expected under discontinuous clock adjustments.
If that's not acceptable to you, your options are limited:
You could use CLOCK_MONOTONIC instead of CLOCK_REALTIME and set the timer without TIMER_ABSTIME, using the difference between 3 AM and the current time reported by CLOCK_REALTIME as the interval (see the sketch after this list). This has the advantage that, even with arbitrary clock resets, you will never miss the 3 AM event by more than 24 hours, but you can still miss it if the time within the day is adjusted.
Use a timer with a short expiry and poll on each timer event whether you've just crossed the 3 AM boundary. This will never miss the event by more than your timer interval, but it wastes considerable CPU resources (and prevents deep sleep) on polling.
There are other variants of these methods but in terms of pros and cons they're essentially equivalent.
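A sketch of the first option, assuming POSIX timers are available (link with -lrt on older glibc); error handling is omitted and arm_3am_timer() is an illustrative name:

#include <signal.h>
#include <time.h>

timer_t arm_3am_timer(void)
{
    struct timespec now;
    clock_gettime(CLOCK_REALTIME, &now);

    /* compute the next 3 AM in local time */
    struct tm tm;
    localtime_r(&now.tv_sec, &tm);
    tm.tm_hour = 3; tm.tm_min = 0; tm.tm_sec = 0;
    tm.tm_isdst = -1;                           /* let mktime decide DST */
    time_t target = mktime(&tm);
    if (target <= now.tv_sec)
        target += 24 * 60 * 60;                 /* 3 AM already passed today */

    timer_t tid;
    struct sigevent sev = { .sigev_notify = SIGEV_SIGNAL,
                            .sigev_signo  = SIGALRM };
    timer_create(CLOCK_MONOTONIC, &sev, &tid);

    struct itimerspec its = { 0 };
    its.it_value.tv_sec = target - now.tv_sec;  /* relative, so later CLOCK_REALTIME
                                                   jumps cannot move the expiry */
    timer_settime(tid, 0 /* no TIMER_ABSTIME */, &its, NULL);
    return tid;
}

On each expiry you would recompute and re-arm rather than using it_interval, so a clock adjustment can only affect the current day's event and errors never accumulate.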
My processor is an STM32F437ZGT6 and I wish to count two different pulse trains (RPM). The range is quite wide: I may have an engine that idles at 150 RPM with a pulse from the cam, so 0.5 pulses per revolution, or 1.25 pulses per second. At the other extreme I may need to count 460 flywheel teeth at 3000 RPM, i.e. 23000 pulses per second. I have a prescaler available so I can divide the external event by up to 8, but even so this becomes too intense at higher speeds, because every event, or every eighth event, causes an interrupt.
One alternative I am considering would be to have one timer use the external event as the clock and it would just count events within a time window. My difficulty comes from determining how to use another timer to control the window by setting and clearing CEN or some similar action.
In RM0090, section 18.3.15 "Timer synchronization", the example shows one timer controlling another (timer 1 controlling timer 2). I thought that might be usable, but although I did not read otherwise, I don't see that any two timers can be paired. The signal I am interested in actually feeds two timers: TIM1 CH1 and TIM9 CH1.
Any suggestions would be appreciated, as I don't want to cobble together some Rube Goldberg scheme where one timer fires off an ISR and then the ISR opens and closes the time window.
I should have noted that a lookup table is provided that gives the expected engine speed and the number of pulses per revolution.
Thanks,
jh
If you just want to count external events, you can select an external clock source for the timer (see the "Clock selection" section of the reference manual); the SPL should have an example. Then read the count from the timer's CNT register whenever you need it.
The problem here is reading the count often enough. The counter and its auto-reload register are usually 16 bits, so you have at most 2^16 counts before an overflow loses the counted value.
Timers 2 and 5 have 32-bit counters, so those give you up to 2^32 counts.
If you need more than 2^32 counts, you have at least two ways:
- Timer cascading, by setting one timer's update event as the clock for another.
You can find this in the reference manual as "Using one timer as prescaler for another timer".
Cascading gives you up to a 2^64 counter.
There is an SPL example in the "TIM_CascadeSynchro" folder.
- A less beautiful but easier way is to create a counter variable and increment it in the timer IRQ handler.
The total count is then cnt_variable * (TIMx->ARR + 1) + TIMx->CNT.
Several cascaded variables give you an effectively unlimited counter. A sketch of this approach follows.
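For illustration, a sketch of that last approach, assuming an STM32F4 where TIM1 counts the external pulses; the timer configuration (external clock mode, ARR = 0xFFFF, update interrupt enable, NVIC setup) is omitted, and total_pulses() is an illustrative name:

#include <stdint.h>
#include "stm32f4xx.h"

static volatile uint32_t overflows;       /* incremented on each 2^16 wrap */

void TIM1_UP_TIM10_IRQHandler(void)
{
    if (TIM1->SR & TIM_SR_UIF) {
        TIM1->SR = ~TIM_SR_UIF;           /* clear the update flag */
        overflows++;
    }
}

uint64_t total_pulses(void)
{
    uint32_t ovf, cnt;
    do {                                  /* re-read in case a wrap lands between the reads */
        ovf = overflows;
        cnt = TIM1->CNT;
    } while (ovf != overflows);
    return ((uint64_t)ovf << 16) | cnt;
}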
Thanks for the post. I will try to add a little detail. RPM 1 is fed into TIM3 CH2 and TIM4 CH1. RPM 2 is fed into TIM1 CH1 and TIM9 CH1. Both have a range of 1.25 pulses per second up to 30000 pulses per second. I am given the number of pulses per revolution, which can range from 0.5 to 460, and the expected engine RPM, 150-3000, so I can scale things a bit. The reason for feeding two different timers is to be able to use different counting techniques based on speed (pulses per second). For low speed I can capture events (pulses) and grab the timer count using an ISR. But when the pulse count gets high I want to use a different method so as not to incur more than 1000 interrupts per second per channel. So my idea is to have one timer control another: one timer would simply count events without generating interrupts, and the second would control the period of time during which the first timer is allowed to collect events.
Thanks,
jh
It seems like you need timer synchronization: enabling/disabling the slave timer according to the trigger output of the master timer.
Description can be found in the following sections of RM0090:
18.3.14 Timers and external trigger synchronization in paragraph Slave mode: Gated mode
18.3.15 Timer synchronization in paragraph Using one timer to enable another timer
A good explanation can also be found in the TIMx register descriptions: TIMx_SMCR (bits TS and SMS) and TIMx_CR2 (bits MMS).
"TIMx internal trigger connection" (tables 93, 97, and 100) lists the possible connections of the trigger output of one timer to the input of another, and shows which timers can be used as masters.
The TIM_ExtTriggerSynchro example from the SPL library can be used as a starting point.
I think the best way is:
Set the RPM pin as the external clock source for the slave timer.
Set enabling/disabling of the slave timer from the output compare of the master timer, so that by changing the TIMx_CCRx register value you can change the duration of the measurement.
Set the master timer interrupt on the update event (or on every few events, via the TIMx_RCR register).
Do all the calculations in the master timer's interrupt handler.
It also seems to me that you could simply use a 16-bit timer as the RPM counter. Even at 30000 pulses per second it would overflow only every 2^16 / 30000 ≈ 2.18 seconds, which is easy to keep up with at STM32F4 clock frequencies. Then use another timer with, for example, a 2-second period interrupt for the calculations, as sketched below.
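A sketch of that variant, assuming TIM9 is clocked from its CH1 input and a basic timer (TIM6 here) provides the 2-second window; the configuration code is omitted, and pulses_per_rev stands in for the OP's lookup table:

#include <stdint.h>
#include "stm32f4xx.h"

static volatile float rpm;                /* latest computed engine speed */
static float pulses_per_rev = 0.5f;       /* taken from the lookup table */

void TIM6_DAC_IRQHandler(void)            /* fires every 2 s */
{
    if (TIM6->SR & TIM_SR_UIF) {
        TIM6->SR = ~TIM_SR_UIF;
        uint16_t pulses = (uint16_t)TIM9->CNT;  /* TIM9 counted CH1 pulses */
        TIM9->CNT = 0;
        /* pulses / pulses_per_rev = revolutions in the 2 s window */
        rpm = (pulses / pulses_per_rev) * (60.0f / 2.0f);
    }
}

At the low end of the range (1.25 pulses per second) a 2-second window only sees two or three pulses, so you would switch to the input-capture method mentioned above for low speeds.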
Good luck!
Normally, when a Linux system boots up, it takes the reference time from the RTC and then runs a software clock of its own (generally known as the system clock or wall clock). When the system is about to shut down, it syncs its wall clock time with the RTC. I am looking for a method to implement a similar wall clock in C. Can anybody suggest an idea?
Thanks in advance,
Anandhakrishnan Ramasamy.
What OSes usually do is fetch the system startup time from the RTC, HPET, or another timer device. Then they load the PIC or APIC with a value so as to receive periodic interrupts (e.g., every 100 ms). On each of these interrupts, the value of the system clock or wall clock gets updated.
You can't do it in plain C without relying on functionality provided by the OS, because the OS schedules several applications through multiprogramming and your C application cannot know when it has been suspended by the scheduler.
Therefore, you have to use POSIX functions like gettimeofday(), time(), and so on, as in the example below.
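Minimal usage of the calls mentioned, for reference:

#include <stdio.h>
#include <sys/time.h>
#include <time.h>

int main(void)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);                     /* wall time, µs resolution */
    printf("epoch: %ld.%06ld\n", (long)tv.tv_sec, (long)tv.tv_usec);
    printf("time(): %ld\n", (long)time(NULL));   /* seconds since 1970 */
    return 0;
}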
It's hard to do this 100% correctly. You will have to detect times when the CPU goes to sleep or the system is suspended, and also whenever someone changes the timezone or daylight saving time starts or ends. You would have to handle all of these things yourself.
All CPUs today have a high-resolution timer. It's just a register that increments every CPU clock cycle. If you know the frequency of the CPU, and you read that register regularly (often enough to catch any overflow), you can measure time.
On Linux there is a family of functions that reads this register for you, figures out the CPU frequency, and returns the time in nanoseconds:
#include <stdint.h>
#include <time.h>   /* clock_gettime(), CLOCK_MONOTONIC_RAW */

struct timespec ts;
clock_gettime( CLOCK_MONOTONIC_RAW, &ts );
uint64_t timeInNanoSeconds = ts.tv_nsec + ( ts.tv_sec * 1000000000LL );
If you read the raw cycle counter register yourself, it can wrap around quickly, so you have to read it often enough to detect the wraparound: any time you read it, if the value is smaller than the one from the previous read, you had an overflow and have to account for it. When you go through clock_gettime(), that bookkeeping is done for you, and the 64-bit nanosecond value above will not wrap in practice.
Once you can accurately measure the passage of a second, you can build your wall clock from there, as sketched below.
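A minimal sketch of that idea, assuming Linux and CLOCK_MONOTONIC_RAW; on bare metal you would replace time(NULL) with an RTC read, and the function names are illustrative:

#include <stdint.h>
#include <time.h>

static time_t   boot_wall;     /* wall time captured once at startup */
static uint64_t boot_mono_ns;  /* monotonic time at that same instant */

static uint64_t mono_ns(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC_RAW, &ts);
    return (uint64_t)ts.tv_sec * 1000000000ULL + (uint64_t)ts.tv_nsec;
}

void wall_clock_init(void)
{
    boot_wall    = time(NULL);   /* reference time, e.g. from the RTC */
    boot_mono_ns = mono_ns();
}

time_t wall_clock_now(void)
{
    /* wall time = reference + monotonic seconds elapsed since the reference */
    return boot_wall + (time_t)((mono_ns() - boot_mono_ns) / 1000000000ULL);
}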
I'm using a PIC18 with Fosc = 10MHz. So if I use Delay10KTCYx(250), I get 10,000 x 250 x 4 x (1/10e6) = 1 second.
How do I use the delay functions in the C18 for very long delays, say 20 seconds? I was thinking of just using twenty lines of Delay10KTCYx(250). Is there another more efficient and elegant way?
Thanks in advance!
It is strongly recommended that you avoid the built-in delay functions such as Delay10KTCYx().
Why, you might ask?
These delay functions are very inaccurate, and they may cause your code to be compiled in unexpected ways. Here's one example of how the Delay10KTCYx() function can cause problems.
Let's say that you have a PIC18 microprocessor that has only two hardware timer interrupts. (Usually they have more, but let's just say there are only two.)
Now let's say you manually set up the first hardware timer interrupt to fire exactly once per second, to drive a heartbeat monitor LED, and you set up the second hardware timer interrupt to fire every 50 milliseconds because you want to take some sort of digital or analog reading at exactly 50-millisecond intervals.
Now, lastly, let's say that in your main program you want to delay 100,000 clock cycles, so you put a call to Delay10KTCYx(10) there. What happens, do you suppose? How does the PIC18 magically count off 100,000 clock cycles?
One of two things will happen. It may "hijack" one of your hardware timer interrupts to get exactly 100,000 clock cycles, which would either cause your heartbeat LED to miss its exact 1-second period, or cause your digital or analog readings to happen at some time other than every 50 milliseconds.
Or the delay function will just execute a bunch of Nop()s and assume that 1 Nop() = 1 clock cycle. What isn't accounted for is the overhead within the Delay10KTCYx(10) function itself: it has to increment a counter to keep track of things, and that surely takes more than 1 clock cycle per iteration. As Delay10KTCYx(10) loops around and around, it is simply not capable of giving you exactly 100,000 clock cycles. Depending on many factors, you may get far more or far fewer cycles than you expected.
Delay10KTCYx(10) should only be used if you need an approximate amount of time, and pre-canned delay functions shouldn't be used at all if you are already using the hardware timer interrupts for other purposes. The compiler may not even compile successfully when Delay10KTCYx(10) is used for very long delays.
I would highly recommend that you set up one of your timer interrupts to interrupt your hardware at a known interval, say 50,000 clock cycles. Then, each time the hardware interrupts, within your ISR code for that timer, increment a counter and reload the timer. When enough 50,000-cycle interrupts have occurred to add up to 20 seconds (at Fosc = 10 MHz one instruction cycle is 400 ns, so 50,000 cycles is 20 ms and 20 seconds is 1000 interrupts), reset your counter.

Basically, my advice is that you should always handle time in a PIC manually rather than relying on pre-canned delay functions; build your own delay functions that integrate with the chip's hardware timer. Yes, it's extra work ("but why can't I just use this easy and nifty built-in delay function, why would they even put it there if it's gonna muck up my program?"), but this should become second nature, just like manually configuring every single register in your PIC18 at boot-up, whether you are using it or not, to prevent unexpected behavior.
You'll get far more accurate timing and far more predictable behavior from your PIC18. Using pre-canned delay functions is a recipe for disaster: it may work, it may even work on several projects, but sooner or later your code will go all buggy on you and you'll be left wondering why, and I guarantee the culprit will be the pre-canned delay function. A sketch of the timer-driven approach follows.
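A sketch of that tick-counter scheme in plain C; the timer setup and ISR plumbing are device-specific and omitted, and every name here is illustrative rather than a C18 library call:

#include <stdint.h>

#define TICKS_PER_SECOND 50u   /* Fosc = 10 MHz -> Tcy = 2.5 MHz; 50,000 Tcy per tick */

static volatile uint32_t ticks;

/* call this from your timer ISR, after reloading the timer
   for the next 50,000-cycle interval */
void timer_tick(void)
{
    ticks++;
}

void delay_seconds(uint16_t seconds)
{
    /* on an 8-bit PIC, guard multi-byte reads of 'ticks' by briefly
       disabling the timer interrupt around the read */
    uint32_t target = ticks + (uint32_t)seconds * TICKS_PER_SECOND;
    while ((int32_t)(ticks - target) < 0) {
        /* idle; other interrupts keep their exact timing */
    }
}

With this in place, a 20-second delay is just delay_seconds(20), and the same tick counter doubles as a free-running timebase for non-blocking timeouts.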
To create a very long delay, use an internal timer. This helps you avoid blocking your application, and you can check the elapsed time while it runs. Please refer to the PIC data sheet for how to set up a timer and its interrupt.
If you want a high-precision 1 s time base, I also suggest considering an external RTC device, or the internal RTC if the micro has one.