AVR Timer Programming: CTC mode vs. Normal mode

When comparing the advantages and disadvantages of CTC mode and Normal mode in AVR timer programming, which one do you think is better, and why? Can you explain this in more detail for me?
Thank you for your help.

In Normal Mode, the timer triggers interrupt handlers. These can do practically any function you want, but they run on the CPU, which prevents anything else from running at the same time.
In CTC mode, you can also trigger interrupts, but it is also possible to not use interrupts and still toggle an output pin. Used this way, the functionality occurs in parallel with the CPU and doesn't interrupt anything.
PWM runs in the background like CTC, but the timing of the output on the pin is different. It is more suited to devices like servos that take pulse-width modulation as input.
If all you want to do is toggle an output pin, use CTC or PWM. If you want to do more, use normal mode (or CTC or PWM, depending on the timing requirements).
From the manual:
Using the Output Compare to generate waveforms in Normal mode is not recommended, since this will occupy too much of the CPU time.
For generating a waveform output in CTC mode, the OC1A output can be set to toggle its logical level on each compare match by setting the Compare Output mode bits to toggle mode (COM1A1:0 = 1).
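For example, a minimal sketch of that hardware toggle, assuming an ATmega328P-class part (where OC1A is pin PB1) and a 16 MHz system clock; the pin and the numbers are for illustration only:

#include <avr/io.h>

/* CTC mode, hardware toggle of OC1A: no interrupt and no CPU time used
 * once configured. Pin and clock values assume an ATmega328P at 16 MHz. */
void ctc_toggle_init(void)
{
    DDRB  |= (1 << PB1);                  /* OC1A (PB1) as output                */
    TCCR1A = (1 << COM1A0);               /* toggle OC1A on compare match        */
    TCCR1B = (1 << WGM12) | (1 << CS11);  /* CTC (TOP = OCR1A), clock/8          */
    OCR1A  = 1999;                        /* 16 MHz / 8 / 2000 = 1 kHz matches   */
}                                         /* -> 500 Hz square wave on the pin    */

Once this runs, the pin keeps toggling regardless of what the main loop is doing.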

There is no "better" between the two. Sometimes you need the timer to run through its full count, and sometimes you don't. You use the one that fits your needs, not the one that is "better".

Related

Embedded system interrupts

I have been reading about interrupts in embedded systems and I came across this.
In Normal Mode, the timer triggers interrupt handlers. These can do practically any function you want, but they run on the CPU, which prevents anything else from running at the same time. In CTC mode, you can also trigger interrupts, but it is also possible to not use interrupts and still toggle an output pin. Used this way, the functionality occurs in parallel with the CPU and doesn't interrupt anything.
So I have the following doubts:
What does toggling the output pin in CTC mode mean? Does it mean the processes are running in parallel? Would that imply that both the main loop and the interrupt function run at the same time? I am not sure about this.
Is it safe to assume that a timer counts more in CTC mode, since it resets the timer register each time it matches the compare register?
The hardware circuitry that constitutes the timer peripheral within the microcontroller is able to perform a comparison and toggle an output in CTC mode. This logic is performed in hardware, without relying on the CPU to execute software instructions. Therefore, the CTC mode compare and toggle occurs in parallel with whatever the CPU happens to be executing.
I don't understand what you mean by the timer "counts more". More as in more often or faster rate? More as in greater total counts? Regardless, I think the answer is no. The timer counts at the rate of the input clock that is driving it. In CTC mode the timer counts up to the comparison value that you have configured it for.
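To put numbers on that (chosen purely for illustration): with a 1 MHz timer clock, a 16-bit timer in Normal mode wraps every 65,536 ticks (65.536 ms), while in CTC mode with the compare register set to 1999 it wraps every 2,000 ticks (2 ms). The counting rate is identical in both cases, one tick per microsecond; only the point at which the counter restarts changes.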

Which MCU (Cortex-M) for a time-critical GPIO application?

We have an application which runs on a PIC24H, and we would like to port it to another MCU, preferably an ARM Cortex. The application is extremely time-critical, meaning that we need extremely deterministic code behaviour. In short, pulses arrive via special hardware on GPIO pins and the data is analyzed right away. Processing the data is not complex (we don't need a beefy CPU/MCU to do it). After the data is analyzed, the GPIO output pins are set to their values.
App in 3 short lines:
process input pins
determine pattern within processing of input pins
based on the received pattern write output pins
The PIC24H is working at 40 MHz and we can toggle a pin in 25 ns; we would be grateful for at least 2x that speed for future upgrades. So an MCU which can run deterministic code and toggle pins at 80 MHz (12.5 ns) or faster would be just fine. We don't need to toggle the pins at a constant fast rate; we need an MCU which can toggle them in less than 25 ns. We can't waste cycles while toggling: if one cycle is off, we lose synchronization. Everything must be done with one-cycle precision (or two, but a constant two cycles), so the code should be 100% deterministic.
Please let me know if I'm missing something, or if what we need can be done using some other methods on a Cortex-M. Just keep in mind that if one cycle is lost (due to cache or similar) we lose signal sync and the app will not do its work correctly, or at all.
Thanks!
Br
According to this blog post, the interrupt latency for Cortex-M ranges from 12 to 16 cycles (assuming you are not using FPU registers) with best-case memories. M0 and M0+ are slower than M3/M4/M7. On top of this, you need to add the GPIO access times (and watch out for different clock frequencies between the core and the peripherals). A Cortex-M7 will support higher clock speeds than an M3/M4.
It still isn't clear how many cycles are consumed in recognising a pattern, and how an interrupt is useful in doing this - generally a low latency interface function like this would be an obvious target for dedicated hardware, but since you have an existing software solution it seems the problem is mis-specified.
Providing you avoid accessing any 'slow' peripherals which might stall the bus, the interrupt latency should be deterministic - any specific device should have documentation which covers this.
NXP have an application note which describes some of the detail of how to measure what is going on.
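If the pattern logic really is simple, one way to sidestep interrupt entry latency entirely is to poll the input and write the output with single register accesses. The sketch below is only an illustration, assuming an STM32-style CMSIS device header; GPIOA, PA0 and PA1 are placeholder pins, and clock and pin-mode setup are omitted:

#include "stm32f4xx.h"   /* hypothetical choice of part; use your own device header */

/* Wait for PA0 to go high, then set PA1 with one register write.
 * Polling avoids the 12+ cycle interrupt entry, at the cost of
 * dedicating the CPU to this loop. */
static inline void mirror_rising_edge(void)
{
    while (!(GPIOA->IDR & (1u << 0)))   /* spin until PA0 reads high */
        ;
    GPIOA->BSRR = (1u << 1);            /* set PA1 in a single write */
}

Whether even this meets a 12.5 ns budget still depends on the core clock, the GPIO bus clock and the flash wait states, which is why the device-specific documentation (and measurement, as in the NXP note) matters.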

Does a single-shot timer stop automatically?

I'm implementing Timer 1 (which is basically a compare & capture timer) in compare mode with single-shot operation. There is also an option for starting the timer in continuous mode.
My question is: when I start the timer in single-shot mode and it reaches the configured count and the compare fires, it will set an interrupt flag, but does that also mean the timer is stopped?
Or do I need to stop it explicitly in single-shot mode too? I think that only makes sense in continuous mode.
I'm currently only checking the generated interrupt flag, assuming the timer is stopped, clearing the interrupt flag for further operation, and then coming out of my function.
However, there is a control bit in the timer's control register which can be toggled to make it run or stop. Should I just check this bit after the interrupt flag has been generated, or do I need to reset this control bit too? That would mean I should have an explicit function to stop the timer as well.
Additional information:
I'm using an NXP (Philips) controller.
Thank you in advance,
Prateek
I just read in the NXP datasheet that yes, a timer started in single-shot (one-shot) mode will stop automatically.
Btw, if any one of you has a fuller explanation, kindly put it below.
Thank you.
To understand microcontroller timers, you first have to realize that there is generally just one main timer running. When enabled, this timer counts up until it overflows and then starts over.
When you start a "hardware timer", you really just set up a register with the value main_timer + delay. The hardware compares this register with the main timer on every tick, and when they match, it triggers an interrupt, sets a port, or whatever you have configured it to do. Typically, you have to set up your timer register anew after that.
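A rough illustration of that scheme in C, using made-up names (MAIN_TIMER, MATCH_REG, DELAY_TICKS and the handler name are placeholders, not tied to any particular NXP part):

#include <stdint.h>

/* Stand-ins for the memory-mapped free-running counter and its match
 * (compare) register; on real hardware these would be register macros. */
volatile uint32_t MAIN_TIMER;
volatile uint32_t MATCH_REG;
#define DELAY_TICKS 1000u

void start_delay(void)
{
    MATCH_REG = MAIN_TIMER + DELAY_TICKS;   /* fire DELAY_TICKS from now */
    /* ...enable the match interrupt here...                             */
}

void TIMER_MATCH_IRQHandler(void)
{
    /* ...clear the match flag here...                                   */
    /* A single-shot configuration stops or disarms the match by itself; */
    /* to repeat: MATCH_REG = MAIN_TIMER + DELAY_TICKS;                   */
}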
For more specific answers you have to specify the MCU family and part number used. NXP has made everything from the ancient 8051 to modern ARM Cortex, and the timer peripheral hardware is different for every MCU family.

Generate/output clock pulse (C code)

I'm using an Ethernut 2.1 B and I need a C program that outputs a clock signal on the Timer 1 output B, in other words on output OC1B. The frequency of the clock signal should be 1.0 kHz.
Anyone know how this could be done?
You need to look at the COM bits for your timer. For instance, for Timer0 (8-bit), the COM bits are set in the TCCR0 register. Probably the setting you'd be interested in is
TCCR0 |= (0 << COM01) | (1 << COM00); // Toggle OC0 on compare match
This will toggle the OC0 (pin 14) line when the timer reaches the specified value.
Which timer you use depends on the precision you need: obviously the 16-bit timers can give you a more precise time resolution than the 8-bit timers.
The setting of the registers for your specific frequency (1 kHz) depends on the clock speed of your chip and which timer you are using: the timers use a prescaled general clock signal (see table 56 of the datasheet for possible values). This means that the prescaler setting will depend on your clock speed and on how high you want to count. For most precision you will want to count as high as possible, which means the lowest possible prescaler setting compatible with your timer's maximum value.
As far as where to start, generally, reading the datasheet is a good place, but googling "AVR timer" can also be very helpful.
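Putting the pieces together, here is a minimal sketch for Timer1 on the ATmega128 (the CPU the Ethernut board is based on, as noted below) that toggles OC1B in hardware. It assumes a 16 MHz CPU clock purely for the arithmetic; recompute OCR1A for your actual crystal from f_out = F_CPU / (2 * prescaler * (OCR1A + 1)):

#include <avr/io.h>

int main(void)
{
    DDRB  |= (1 << PB6);                  /* OC1B is pin PB6 on the ATmega128 */

    /* CTC mode (WGM13:0 = 4, TOP = OCR1A), toggle OC1B on compare match B. */
    TCCR1A = (1 << COM1B0);
    TCCR1B = (1 << WGM12) | (1 << CS11);  /* prescaler = 8                    */

    /* 16 MHz / 8 = 2 MHz timer clock; a match every 2000 ticks toggles the
     * pin at 2 kHz, i.e. a 1.0 kHz square wave on OC1B. */
    OCR1A = 1999;                         /* period (TOP)                     */
    OCR1B = 1999;                         /* toggle point, once per period    */

    for (;;) {
        /* The waveform is generated in hardware; nothing to do here. */
    }
}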
It seems to be based on the Atmel ATmega128, so read that CPU's data sheet to figure out how to program the timer hardware.
I'm not sure if this microcontroller supports directly driving an output from a timer; if it doesn't, you're going to have to do it in software from the interrupt service routine.

Implementing a non-standard SPI variation on ARM Cortex M3

I need to create a driver for a flash memory chip connected to an STM32 Cortex-M3 MCU. The chip is controlled via an SPI bus. I intended to use the integrated SPI peripheral of the MCU, but unfortunately it only supports 8- or 16-bit data packets while the flash chip commands are 14 bits long. Thus, I have to implement the protocol from scratch using GPIOs. My question is: what is the right way to ensure correct timing of the signals? I currently think of inserting delays between asserting and deasserting GPIO lines with interrupts disabled, but that seems fairly unreliable to me. Are there any better methods?
Jeb's answer is the preferred method and you should use the hardware SPI if possible, and if DMA is an option that is nice as well.
If you for some reason find out that you cannot use the hardware SPI, but must implement it by "bit-banging" GPIOs, you should check what options are available in the timer/PWM hardware on the MCU. You cannot and should not use blunt "hobbyist burn-away delays" as in the link you posted; the real-time performance will be crap and you will occupy the CPU 100%.
Most MCU timers come with a pin output feature which allows a pin to change state when the timer elapses. The pseudo code would then be:
Determine if the next bit to send is 1 or 0.
Set the MCU polarity register accordingly, so that it will switch the pin to a high or low level.
When the timer elapses, you need to set the polarity once again, likely through an interrupt. How to do this is very hardware-dependent.
At the same time as you bit-bang the data (MOSI), you also need to generate the clock and chip select. The clock can be generated in the same way as the data, or possibly through a PWM signal if that option is available. Chip select is the easiest part as you only need to pull a pin low during the data transmission.
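A very rough sketch of that idea in C, assuming a timer interrupt that fires once per half SPI clock period. For simplicity this version does the pin writes inside the interrupt itself rather than through the timer's own pin-output/polarity feature, which is easier to follow but less precise. MOSI_HIGH()/MOSI_LOW(), SCK_HIGH()/SCK_LOW(), CS_LOW()/CS_HIGH() and the handler name are hypothetical macros wrapping your GPIO register writes (e.g. GPIOx->BSRR on the STM32):

#include <stdint.h>

static volatile uint16_t tx_frame;   /* 14-bit command, left-aligned       */
static volatile uint8_t  bits_left;  /* bits still to shift out            */
static volatile uint8_t  sck_phase;  /* 0 = present data, 1 = clock edge   */

void spi_bitbang_start(uint16_t cmd14)
{
    tx_frame  = (uint16_t)(cmd14 << 2);  /* left-align 14 bits in 16       */
    bits_left = 14;
    sck_phase = 0;
    CS_LOW();
    /* ...configure and enable the timer interrupt here...                 */
}

void TIMER_IRQHandler(void)          /* fires once per half bit period     */
{
    /* ...clear the timer's interrupt flag here...                         */
    if (bits_left == 0) {            /* finished on a previous tick:       */
        SCK_LOW();                   /* return the clock to idle,          */
        CS_HIGH();                   /* release the chip and stop          */
        /* ...disable the timer interrupt here...                          */
        return;
    }
    if (sck_phase == 0) {            /* first half: present the data bit   */
        if (tx_frame & 0x8000u) MOSI_HIGH(); else MOSI_LOW();
        SCK_LOW();
        sck_phase = 1;
    } else {                         /* second half: rising clock edge     */
        SCK_HIGH();                  /* slave samples MOSI here            */
        tx_frame <<= 1;
        bits_left--;
        sck_phase = 0;
    }
}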
Finally, there is most likely some application note or official example over how to write a software SPI for your particular MCU.
I would recommend using the built-in SPI and DMA if possible!
You could repack your data into an array of bytes whose length works out to a whole number of 14-bit frames.
Four 14-bit frames fit exactly into 56 bits, i.e. 7 bytes, so you send groups of 7 bytes at a time.
Then you can use the standard SPI with 8-bit frame size.
This should be much faster with SPI/DMA than bit-banging the GPIOs.
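A sketch of that repacking, MSB-first (the function pack14 and its layout are my own illustration, not from any vendor library; check the bit order your flash chip expects):

#include <stdint.h>
#include <stddef.h>

/* Pack 14-bit words (right-aligned in uint16_t) into a byte stream,
 * MSB first, so they can be pushed through a standard 8-bit SPI/DMA
 * transfer. Four 14-bit words fill exactly 7 bytes. 'out' must hold
 * at least (count * 14 + 7) / 8 bytes. Returns the bytes written. */
size_t pack14(const uint16_t *words, size_t count, uint8_t *out)
{
    uint32_t acc = 0;     /* bit accumulator             */
    unsigned bits = 0;    /* number of valid bits in acc */
    size_t n = 0;

    for (size_t i = 0; i < count; i++) {
        acc = (acc << 14) | (words[i] & 0x3FFFu);
        bits += 14;
        while (bits >= 8) {          /* emit full bytes, MSB first      */
            bits -= 8;
            out[n++] = (uint8_t)(acc >> bits);
        }
    }
    if (bits) {                      /* pad the last partial byte       */
        out[n++] = (uint8_t)(acc << (8 - bits));
    }
    return n;
}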
Some devices that use obscure data lengths are designed so that at the start of a transaction they will either ignore all "0" bits that are clocked in before the first "1", or all "1" bits that are clocked in before the first "0". If your device happens to be designed in such a fashion, you may be able to use 8- or 16-bit SPI mode by clocking out two "junk" bits along with the bits of interest.
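If your chip really behaves that way, the trick reduces to padding. Here spi_send16 is a hypothetical wrapper around a 16-bit hardware SPI transfer, and whether the two padding bits are actually ignored is an assumption you must verify against the datasheet:

/* Pad a 14-bit command with two leading "junk" bits so it fits one
 * 16-bit SPI frame. Only valid if the device ignores the leading bits. */
static inline void send_cmd14(uint16_t cmd14)
{
    spi_send16(cmd14 & 0x3FFFu);     /* bits 15:14 = 0 (junk), 13:0 = command */
}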
