I've got an issue with STM32F0 timers. I'm doing everything at the register level: I disable a timer and zero the count, then re-enable external interrupts and restart the timer on receipt of an edge.
It seems, though, that if the interrupt happens very soon after the count has been written, the zeroing does not take place.
I'm wondering if there is a delay when writing to the counter. Would it take at least one prescaled clock cycle for the write to take effect?
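For reference, a register-level sketch of the sequence being described, using standard CMSIS names for an STM32F0; TIM3 and EXTI line 0 are illustrative choices, not details from the question:

#include "stm32f0xx.h"

void stop_and_zero_timer(void)
{
    TIM3->CR1 &= ~TIM_CR1_CEN;   /* stop the counter                 */
    TIM3->CNT  = 0;              /* zero the count                   */
    EXTI->IMR |= EXTI_IMR_MR0;   /* re-enable the external interrupt */
}

void EXTI0_1_IRQHandler(void)
{
    EXTI->PR   = EXTI_PR_PR0;    /* clear the pending edge           */
    TIM3->CR1 |= TIM_CR1_CEN;    /* restart the timer on the edge    */
}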
I am working with a Cortex-M3 ARM processor. So, I have a main loop like this:
while (true) {
    foo();
    System_Watchdog_Refresh();
    __ASM("wfe");   // system wait for event...
}
So, the manufacturer said this to me:
If you don't want your program to be reset by the WDT (watchdog timer), you should set up an empty timer ISR for every 1 ms.
There is a problem for me here, because I have used the "System_Watchdog_Refresh();" function, and yes, the processor runs this function every loop. How can the watchdog timer reset the processor in this state?
Note that:
System_Watchdog_Refresh(): resets the WDT
The WDT can't be disabled
The foo() function doesn't matter for this state
When I remove "__ASM("wfe");", the processor doesn't get reset by the WDT
Thank you...
WFE sets the processor to standby until the next interrupt (or event). So even though you refresh the watchdog, the processor goes to sleep immediately after that and in the absence of any other events, stays in that state until the watchdog expires and resets the processor. To prevent that, you will need something that periodically triggers an interrupt (like an empty timer that the manufacturer suggests) to ensure the processor wakes up and resumes execution, thereby also refreshing the watchdog.
The timer interval should be something reasonably close to, but much less than, the watchdog timeout to ensure you get the ideal mix of power-saving and reliability.
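A minimal sketch of the suggested fix, assuming a standard Cortex-M3 SysTick and an assumed 48 MHz core clock (adjust the reload for your part); the empty 1 ms SysTick ISR exists purely to wake the core from WFE so the loop keeps refreshing the watchdog:

#include <stdint.h>

#define SYST_CSR (*(volatile uint32_t *)0xE000E010)  /* control/status */
#define SYST_RVR (*(volatile uint32_t *)0xE000E014)  /* reload value   */
#define SYST_CVR (*(volatile uint32_t *)0xE000E018)  /* current value  */

extern void foo(void);
extern void System_Watchdog_Refresh(void);

void SysTick_Handler(void)
{
    /* intentionally empty: its only job is to wake the core from WFE */
}

void run(void)
{
    SYST_RVR = 48000u - 1u;           /* 1 ms at the assumed 48 MHz   */
    SYST_CVR = 0u;
    SYST_CSR = 0x7u;                  /* enable | tick interrupt | core clock */

    while (1) {
        foo();
        System_Watchdog_Refresh();
        __asm volatile ("wfe");       /* now sleeps at most ~1 ms     */
    }
}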
(Moved my comments to an answer, since the OP says it works for him.)
I am working with an STM8 timer (not my code, but I'm maintaining it). Apparently the clock is set at 16 MHz, so one tick is 0.0625 µs. The timer settings are ARRH=0x03, ARRL=0x20, so (0x0320 = 800) it resets at 800 counts (ergo 50 µs).
PSCR is set to 0, so the timer runs at the same frequency as the micro.
Anyway, when checking this with an oscilloscope, it does not give good readings.
The timer interrupt is called at:
56 µs, 54 µs, 54 µs, 52 µs, 52 µs, 52 µs, 38 µs (!!!), 42 µs (?), 50 µs, 50 µs...
Curiously, summed up it gives 500 µs, so it does count as 10 times 50 µs.
During the first 8 timer interrupts some A/D conversion is happening, so there is the possibility that an A/D interrupt is being called in between too.
1) Do you think this is affecting the frequency of the timer?
2) Why does it "correct" itself by firing an interrupt at 38 µs?
I would appreciate any comment based on your embedded or STM8 experience, since I know precise answers would need to examine the code...
I'm not sure if you still need an answer. I once had the same problem and searched for a long time... there was a simple solution in my case:
I had an ADC ISR with high jitter. That came from my main loop. In some sub-sub-sub-routine the ADC interrupt was temporarily deactivated for a critical section (data transfer between interrupt and main-loop context). The effect is exactly what you describe:
Sometimes the time between two interrupts is longer, because the interrupt is pending and waiting for execution until the interrupt is enabled again. The timer keeps running all the while. Timing example (see the sketch after the list):
interrupt is disabled in the main loop (or a subroutine)
interrupt flag is set by the timer -> interrupt pending
interrupt is enabled again -> ISR is executed too late
interrupt is disabled in the main loop
interrupt flag is set by the timer -> pending
interrupt is enabled again -> ISR is executed much too late
main loop does NOT disable the interrupt on some pass (maybe due to control flow, maybe a timing issue)
The next interrupt is executed at the right time, which is 50 µs after the last interrupt was raised, NOT 50 µs after the last ISR was called. --> The time between ISR calls is shortened.
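A stripped-down sketch of that pattern (names are illustrative, not from the original code): a critical section in the main loop briefly masks interrupts, and a timer flag raised inside it is serviced late, while the timer itself keeps its fixed 50 µs period.

#include <stdint.h>

extern void disable_interrupts(void);   /* assumed platform helpers */
extern void enable_interrupts(void);
extern uint16_t read_adc(void);         /* illustrative */
extern void process(uint16_t value);    /* illustrative */

static volatile uint16_t shared_sample;

void timer_isr(void)                    /* raised every 50 us */
{
    shared_sample = read_adc();
}

void main_loop_step(void)
{
    uint16_t local;

    disable_interrupts();               /* critical section begins...   */
    local = shared_sample;              /* ...a timer flag set in here  */
    enable_interrupts();                /* is serviced late, right here */

    process(local);
}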
I hope I could help.
I'm working on an embedded project that's running on an ARM Cortex-M3 based microcontroller. Some code provided by our vendor uses a delay function that sets up a built-in hardware timer and then spins until the timer expires. Typically this is used to wait between 1 and a couple hundred microseconds. These delays are almost always there because the code is waiting on some register, chip, or bus to complete an action and needs to wait at least the given number of microseconds. The hardware timer also appears to cost at least 6 microseconds of overhead to set up.
In a multithreaded environment this is a problem, because there are N threads but only 1 hardware timer. I could disable interrupts while the timer is in use to prevent context switches and thus race conditions, but it seems a bit ugly. I am thinking of replacing the function that uses the hardware timer with a function that uses the ARM CPU cycle counter (CCNT). Are there any pitfalls I am missing, or other alternatives? Obviously the cycle-counter function requires that it be tuned to the proper CPU frequency, which will never change for our system, but I suppose that could be detected at boot programmatically using the hardware timer.
Set up the timer once at startup and let the counter run continuously. When you want to start a delay, read the counter value and remember this start value. Then, in the delay loop, read the counter value again and loop until the counter value minus the start value is greater than or equal to the requested delay ticks. (If you do the subtraction correctly, in unsigned arithmetic, then rollovers wash out and you don't need special handling to check for them.)
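A minimal sketch of that approach, assuming a 32-bit up-counting timer read through a hypothetical timer_read() function (a placeholder, not a specific vendor API):

#include <stdint.h>

extern uint32_t timer_read(void);   /* hypothetical: current counter value */

void delay_ticks(uint32_t ticks)
{
    uint32_t start = timer_read();

    /* (now - start) is correct modulo 2^32, so a rollover between the
       two reads washes out without any special handling. */
    while ((uint32_t)(timer_read() - start) < ticks) {
        /* spin */
    }
}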
You could multiplex your timer such that you have a table of when each thread wants to fire and a function pointer / vector for execution. When the timer interrupt occurs, fire off that thread's interrupt and then set the timer to the next deadline in the list, minus the elapsed time. This is what I see many *nix operating systems do in their kernel code, so there should be code to pull from as an example.
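A rough sketch of that multiplexing scheme, with hypothetical hw_timer_now() / hw_timer_fire_at() helpers standing in for the real hardware interface:

#include <stdint.h>
#include <stddef.h>

#define MAX_TIMERS 8

typedef void (*timer_cb)(void);

struct soft_timer {
    uint32_t deadline;   /* absolute tick at which to fire */
    timer_cb callback;
    int      active;
};

static struct soft_timer timers[MAX_TIMERS];

extern uint32_t hw_timer_now(void);         /* hypothetical */
extern void hw_timer_fire_at(uint32_t t);   /* hypothetical */

/* Program the hardware timer for the nearest active deadline. */
static void reschedule(void)
{
    uint32_t now = hw_timer_now();
    uint32_t best = UINT32_MAX;
    size_t i;

    for (i = 0; i < MAX_TIMERS; i++) {
        if (timers[i].active && timers[i].deadline - now < best)
            best = timers[i].deadline - now;
    }
    if (best != UINT32_MAX)
        hw_timer_fire_at(now + best);
}

/* Hardware timer ISR: run every expired soft timer, then rearm. */
void hw_timer_isr(void)
{
    uint32_t now = hw_timer_now();
    size_t i;

    for (i = 0; i < MAX_TIMERS; i++) {
        if (timers[i].active && (int32_t)(now - timers[i].deadline) >= 0) {
            timers[i].active = 0;
            timers[i].callback();   /* "fire off that thread's interrupt" */
        }
    }
    reschedule();
}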
A bigger concern is the fact that you are spin-locking the thread while waiting for the timer. Besides the CPU usage, and depending on what OS you have (or whether you have an OS at all), you could easily introduce priority-inversion issues or even full-on lockups. It might be better to use threading primitives instead, so that the OS can actually sleep your threads and wake them when needed.
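For instance, a sketch of the blocking alternative, using generic RTOS-style semaphore calls (placeholders, not a specific RTOS API):

#include <stdint.h>

extern void sem_wait(void *sem);             /* hypothetical RTOS call      */
extern void sem_post_from_isr(void *sem);    /* hypothetical RTOS call      */
extern void hw_timer_start_us(uint32_t us);  /* hypothetical one-shot timer */

static void *delay_sem;                      /* created during init elsewhere */

void delay_timer_isr(void)                   /* one-shot timer expired */
{
    sem_post_from_isr(delay_sem);            /* wake the waiting thread */
}

void delay_us_blocking(uint32_t us)
{
    hw_timer_start_us(us);                   /* arm the one-shot timer    */
    sem_wait(delay_sem);                     /* sleep instead of spinning */
}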
I am writing a preemptive kernel in C and assembly. I've been looking at and setting up timer interrupts through the PIT and the PIC, but there is one thing I am utterly unable to find an answer on.
We have initialized the 8254 chip to be counting on counter 0 in mode 2. We set it to fire an interrupt on IR0 of the PIC every 10 ms. After that we enable IR0 on the PIC and things work as intended.
However, let's say that under certain conditions we want to alter the time the PIT fires at by feeding it a new value, or just restart the counter mid-count.
The Intel manual for the chip has some detail on the gate, and on using it to restart the counter via a rising edge on the gate.
The manual also says that if we give the counter a new value, it doesn't reset the count until the current counting sequence is finished, unless a trigger (rising edge on the gate) happens before the counting is over.
The manual also says that sending a new control word (CW) to the chip would reset the counter; however, I don't believe this is the optimal way of restarting or altering the counter.
So the question is, how would this be done in either C or assembly? (We have full write access whenever we want.)
So as not to leave a question unanswered, and since I have something of an answer, I'll answer it myself.
As far as I've understood, the chip has 3 counters, but only counter 2 (we start counting at 0) has its gate pin connected (and there it is wired to the speaker). As a result counter 0, which is the real timer counter, has no connection on its gate, which means we can't cause a trigger on it after sending it a new value.
This means that sending a value to it and then restarting on that value before the timer is up is impossible without sending it a new control word.
Suppose that when we leave an interrupt raised through the 8259 (which the 8254 is connected to), we want to reset the timer at the end of the handler, so that the time isn't running during the actual interrupt. In that case we are best off changing to mode 0, which doesn't restart the timer on terminal count, and then manually restarting the counter with the value we want each time we are about to end an interrupt.
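A hypothetical sketch of that mode-0 approach for counter 0, with outb() being the usual port-output helper around the x86 OUT instruction:

#include <stdint.h>

#define PIT_CH0 0x40   /* counter 0 data port   */
#define PIT_CMD 0x43   /* mode/command register */

static inline void outb(uint16_t port, uint8_t val)
{
    __asm__ volatile ("outb %0, %1" : : "a"(val), "Nd"(port));
}

/* Load a new count into counter 0 in mode 0 (interrupt on terminal
   count). Writing the control word stops the current count; writing
   the two count bytes starts a fresh one. */
void pit_restart(uint16_t count)
{
    outb(PIT_CMD, 0x30);             /* counter 0, LSB/MSB, mode 0, binary */
    outb(PIT_CH0, count & 0xFF);     /* low byte  */
    outb(PIT_CH0, count >> 8);       /* high byte */
}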
I'm using a PIC18 with Fosc = 10 MHz. So if I use Delay10KTCYx(250), I get 10,000 x 250 x 4 x (1/10e6) = 1 second.
How do I use the delay functions in the C18 for very long delays, say 20 seconds? I was thinking of just using twenty lines of Delay10KTCYx(250). Is there another more efficient and elegant way?
Thanks in advance!
It is strongly recommended that you avoid using the built-in delay functions such as Delay10KTCYx().
Why, you might ask?
These delay functions are very inaccurate, and they may cause your code to be compiled in unexpected ways. Here's one such example where using the Delay10KTCYx() function can cause problems.
Let's say that you have a PIC18 microcontroller that has only two hardware timer interrupts. (Usually they have more, but let's just say there are only two.)
Now let's say you manually set up the first hardware timer interrupt to blink an LED once per second exactly, to drive a heartbeat monitor. And let's say you set up the second hardware timer interrupt to interrupt every 50 milliseconds, because you want to take some sort of digital or analog reading exactly every 50 milliseconds.
Now, lastly, let's say that in your main program you want to delay 100,000 clock cycles. So you put a call to Delay10KTCYx(10) in your main program. What happens, do you suppose? How does the PIC18 magically count off 100,000 clock cycles?
One of two things will happen. It may "hijack" one of your other hardware timer interrupts to get exactly 100,000 clock cycles. This would either cause your heartbeat LED to not blink at exactly 1 second, or cause your digital or analog readings to happen at some time other than every 50 milliseconds.
Or, the delay function will just call a bunch of Nop() and claim that 1 Nop() = 1 clock cycle. What isn't accounted for is the overhead within the Delay10KTCYx(10) function itself. It has to increment a counter to keep track of things, and surely it takes more than 1 clock cycle to increment that counter. As Delay10KTCYx(10) loops around and around, it is just not capable of giving you exactly 100,000 clock cycles. Depending on a lot of factors, you may get way more, or way fewer, clock cycles than you expected.
Delay10KTCYx() should only be used if you need an "approximate" amount of time, and pre-canned delay functions shouldn't be used at all if you are already using the hardware timer interrupts for other purposes. Your code may not even compile successfully when using Delay10KTCYx() for very long delays.
I would highly recommend that you set up one of your timer interrupts to interrupt your hardware at a known interval, say 50,000 instruction cycles. Then, each time the hardware interrupts, within your ISR code for that timer, increment a counter and reset the timer to 0 again. When enough 50,000-cycle intervals have expired to equal 20 seconds (in your example, 1,000 timer interrupts at 50,000 instruction cycles per interrupt, since one second at Fosc = 10 MHz is 2,500,000 instruction cycles), reset your counter.
Basically, my advice is that you should always manually handle time in a PIC and not rely on pre-canned delay functions; rather, build your own delay functions that integrate with the chip's hardware timers. Yes, it's going to be extra work - "but why can't I just use this easy and nifty built-in delay function, why would they even put it there if it's gonna muck up my program?" - but this should become second nature. Just like you should be manually configuring EVERY SINGLE REGISTER in your PIC18 upon boot-up, whether you are using it or not, to prevent unexpected things from happening.
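As a rough C18-flavoured sketch of that scheme (register and bit names follow the usual PIC18 headers; the Timer0 reload of 0x3CB0 gives 50,000 instruction cycles per tick and is illustrative, ignoring reload latency):

#include <p18cxxx.h>

volatile unsigned int tick_count = 0;

/* High-priority ISR: fires every ~50,000 instruction cycles. */
#pragma interrupt timer_isr
void timer_isr(void)
{
    if (INTCONbits.TMR0IF) {
        INTCONbits.TMR0IF = 0;   /* clear the interrupt flag       */
        TMR0H = 0x3C;            /* reload: 65536 - 50000 = 0x3CB0 */
        TMR0L = 0xB0;            /* writing TMR0L latches TMR0H    */
        tick_count++;
    }
}

#pragma code high_vector = 0x08
void high_vector(void)
{
    _asm GOTO timer_isr _endasm
}
#pragma code

void timer0_init(void)
{
    TMR0H = 0x3C;
    TMR0L = 0xB0;
    INTCONbits.TMR0IF = 0;
    INTCONbits.TMR0IE = 1;       /* enable Timer0 overflow interrupt    */
    INTCONbits.GIE = 1;          /* enable global interrupts            */
    T0CON = 0x88;                /* on, 16-bit mode, prescaler bypassed */
}

/* 20 s at Fosc = 10 MHz: 20 * 2,500,000 instruction cycles
   = 1,000 ticks of 50,000 cycles each. */
void delay_20s(void)
{
    tick_count = 0;
    while (tick_count < 1000)
        ;                        /* or do useful work while waiting */
}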
You'll get way more accurate timing - and way more predictable behavior from your PIC18. Using pre-canned Delay functions is a recipe for disaster... it may work... it may work on several projects... but sooner or later your code will go all buggy on you and you'll be left wondering why and I guarantee the culprit will be the pre-canned delay function.
To create very long delays, use an internal timer. This helps you avoid blocking your application, and you can check the elapsed time while it runs. Please refer to the PIC data sheet for how to set up a timer and its interrupt.
If you want a very high-precision 1 s period, I also suggest considering an external RTC device, or the internal RTC if the micro has one.