Interrupt every x seconds EFM32G (C)

I am working with an EFM32 microcontroller (Silicon Labs).
I need to make a beep every x seconds while the device is in EM3 mode.
I have tried many ways without success.
Please help me with a code example (I'm a HW man, not a SW man, haha).
Thanks,
Gal.

Refer to page 8 of the datasheet (thanks to #user694733).
EM3 mode description:
still full CPU and RAM retention, as well as Power-on Reset, Pin reset and Brown-out Detection, with a consumption of only 0.6 μA. The low-power ACMP, asynchronous external interrupt, PCNT, and I2C can wake-up the device.
So these are your options. One of these things can wake up the microcontroller. All of them are external inputs. So the microcontroller cannot wake itself up in this mode. This makes sense because all clocks are stopped.
If you had an outside clock connected to the PCNT you could use that to wake it up.
If you want the microcontroller to wake itself up, then you need EM2 or a lighter sleep mode:
In EM2 the high frequency oscillator is turned off, but with the 32.768 kHz
oscillator running, selected low energy peripherals (LCD, RTC, LETIMER,
PCNT, LEUART, I2C, WDOG and ACMP) are still available
In EM2 mode the microcontroller may wake itself up using the RTC (real-time clock), LETIMER (low-energy timer), WDOG (watchdog timer) or PCNT (pulse counter, which can be set to count pulses of the 32.768kHz clock).
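For example, here is a minimal EM2 sketch using Silicon Labs' emlib (em_cmu.h, em_rtc.h, em_emu.h). The 5-second interval is a placeholder for your "x seconds", the beep itself is left as a stub, and you should check the names against your emlib version:

#include "em_chip.h"
#include "em_cmu.h"
#include "em_rtc.h"
#include "em_emu.h"

#define BEEP_INTERVAL_SEC 5 /* placeholder for "x seconds" */

void RTC_IRQHandler(void)
{
    RTC_IntClear(RTC_IFC_COMP0); /* acknowledge the compare match */
    /* beep here, e.g. pulse the GPIO that drives the buzzer */
}

int main(void)
{
    CHIP_Init();

    /* Clock the RTC from the 32.768 kHz LFXO, divided down to 1 Hz ticks */
    CMU_ClockSelectSet(cmuClock_LFA, cmuSelect_LFXO);
    CMU_ClockEnable(cmuClock_CORELE, true);
    CMU_ClockDivSet(cmuClock_RTC, cmuClkDiv_32768);
    CMU_ClockEnable(cmuClock_RTC, true);

    RTC_Init_TypeDef rtcInit = RTC_INIT_DEFAULT;
    rtcInit.comp0Top = true; /* wrap the counter at COMP0 */
    RTC_CompareSet(0, BEEP_INTERVAL_SEC - 1); /* compare match every x seconds */
    RTC_IntEnable(RTC_IEN_COMP0);
    NVIC_EnableIRQ(RTC_IRQn);
    RTC_Init(&rtcInit);

    while (1)
        EMU_EnterEM2(true); /* sleep; the RTC compare interrupt wakes us */
}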
The datasheet recommends using the Real-Time Clock or Low Energy Timer (RTC or LETIMER) modules.
... however, if we pay attention, we see the datasheet mentions something called the ULFRCO, Ultra-Low-Frequency RC Oscillator, which runs at approximately 1000 Hz. By searching for the keyword ULFRCO, we see that it does still run in EM3 mode, and it can be used as input for the WDOG. On page 89 we see this listed as a feature of EM3 mode.
So, you may configure the WDOG to reset the system after a few seconds. When the microcontroller resets due to a watchdog timeout, it wakes up. You should not be afraid of using a system reset. The RMU_RSTCAUSE register lets you see that the system was reset because of the watchdog timer (not because it was first turned on or the reset pin was used). Memory contents are probably still there, but all peripherals are reset. As long as you can deal with peripherals being reset, you can probably make this work. You might even be able to use a little bit of assembly programming to jump back to exactly the point where the program left off.
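A minimal sketch of that watchdog-reset approach using emlib (em_wdog.h, em_rmu.h); the 4-second period and the beep hook are assumptions, so verify the constants against your emlib version:

#include <stdint.h>
#include "em_chip.h"
#include "em_rmu.h"
#include "em_wdog.h"
#include "em_emu.h"

int main(void)
{
    CHIP_Init();

    /* Did the watchdog cause this boot (as opposed to power-on or pin reset)? */
    uint32_t cause = RMU_ResetCauseGet();
    RMU_ResetCauseClear();
    if (cause & RMU_RSTCAUSE_WDOGRST) {
        /* beep here, then fall through and go back to sleep */
    }

    /* Clock the WDOG from the ~1 kHz ULFRCO so it keeps counting in EM3 */
    WDOG_Init_TypeDef wdogInit = WDOG_INIT_DEFAULT;
    wdogInit.clkSel = wdogClkSelULFRCO;
    wdogInit.perSel = wdogPeriod_4k; /* 4097 cycles, roughly 4 s at 1 kHz */
    wdogInit.em3Run = true;          /* keep running in EM3 */
    WDOG_Init(&wdogInit);

    EMU_EnterEM3(true); /* sleep until the WDOG resets the system */
    while (1)
        ;
}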

Related

Embedded system interrupts

I have been reading about interrupts in embedded systems and I came across this.
In Normal Mode, the timer triggers interrupt handlers. These can do practically any function you want, but they run on the CPU, which prevents anything else from running at the same time. In CTC mode, you can also trigger interrupts, but it is also possible to not use interrupts and still toggle an output pin. Using it this way, the functionality occurs parallel to the CPU and doesn't interrupt anything.
So I have the following doubts:
What does it mean by toggling the output pin in CTC mode? Does it mean that the processes are running in parallel? That would imply that both the main loop and interrupt function are running in parallel? I am not sure about this.
Is it safe to assume that a timer counts more in CTC mode as it is resetting the timer register each time it matches with the compare register?
The hardware circuitry that constitutes the timer peripheral within the microcontroller is able to perform a comparison and toggle an output in CTC mode. This logic is performed in hardware, without relying on the CPU to execute software instructions. Therefore, the CTC mode compare and toggle occurs in parallel with whatever the CPU happens to be executing.
I don't understand what you mean by the timer "counts more". More as in more often or faster rate? More as in greater total counts? Regardless, I think the answer is no. The timer counts at the rate of the input clock that is driving it. In CTC mode the timer counts up to the comparison value that you have configured it for.
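To make the hardware-toggle point concrete, here is a minimal sketch for an AVR in CTC mode (assuming an ATmega328P, where Timer0's OC0A output is pin PD6):

#include <avr/io.h>

int main(void)
{
    DDRD |= (1 << DDD6);                   /* OC0A pin as output */
    TCCR0A = (1 << COM0A0) | (1 << WGM01); /* toggle OC0A on compare match, CTC mode */
    OCR0A = 124;                           /* count 0..124, then reset */
    TCCR0B = (1 << CS01) | (1 << CS00);    /* clk/64 prescaler, timer starts */

    for (;;) {
        /* The main loop is free to do anything (or sleep): at 16 MHz the
         * pin toggles 2000 times per second, a 1 kHz square wave, entirely
         * in hardware with no interrupt and no CPU involvement. */
    }
}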

How reliable is DMA to GPIO on STM32 MCUs?

ST has some application notes that talk about emulating a parallel bus using DMA to GPIO. I appreciate that, but it doesn't answer some important questions. I am looking through the reference manual, and I can't seem to find clarification on the things that I am concerned about.
I am most concerned about jitter. The reference manual repeatedly states that, when DMA is triggered (e.g., by a timer), the DMA controller will read the memory and transfer the value to the peripheral. That might be fine with peripherals that have their own FIFO: there, when space is available in the FIFO, DMA is triggered and refills it, probably before the FIFO runs empty.
But with GPIO, if the DMA channel doesn't have a FIFO of its own, the data will not be ready when the timer triggers and will have to be fetched from SRAM. So between the timer triggering and the value actually arriving in the GPIO output register, some time may pass. This might be measurable when comparing the clock output by the timer with the GPIO pins. The DMA controller has to compete with the running program for access to the SRAM, so certain activities by the program may increase the jitter.
Maybe this is a colossal oversight on my part, but ST's reference manual doesn't seem to mention a FIFO as part of the DMA. If that is the case, the result would be jitter which may impact performance at higher frequencies.
I need to toggle 3 to 4 pins synchronously to a clock from 100 kHz to 1 MHz. I am considering DMA to GPIO, and also abusing a QuadSPI controller. I am currently testing on an STM32L4, but I'm also considering an STM32F4 or even an F1.
DMA to/from GPIO is just a memory-to-memory transfer. Many STM32 uCs have built-in DMA FIFOs, but they are of no use here.
The core always has priority over the DMA, so if that could ever be an issue (very unlikely), place the data the core accesses while the DMA is active in a separate memory area, for example CCM (if your uC has one).
Answering the question: memory to/from GPIO is very reliable. I personally have not had any problems with it.
If your clock can be anything between 100 kHz and 1 MHz, I guess you're not worried about jitter in the clock itself, only jitter in the data relative to the clock. If your clock need not be continuous, a novel idea is to preprocess the data so that the clock signal is included as part of the GPIO data. Then you can trigger the DMA at regular intervals using a timer, and you'll get the data on the bus at half that rate, with perfect alignment between clock and data.
So if you want to send the four-bit data 5 6 B D with data valid on the positive clock edge, prepare the DMA buffer like so: 05 15 06 16 0B 1B 0D 1D, and connect GPIO pin 4 as the clock. Leave a final byte in the buffer to return the clock/bus to the idle state, if you need to.
You can of course extend the idea and incorporate control signals such as chip selects and tri-state signals for external buffers, if needed.
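A sketch of that buffer preparation; the helper name is made up, and the timer/DMA setup that streams the buffer to the GPIO output register (e.g. the ODR low byte) is not shown:

#include <stdint.h>
#include <stddef.h>

#define CLK_BIT (1u << 4) /* GPIO pin 4 carries the embedded clock */

/* Expand 4-bit data words into a DMA buffer that also drives the clock
 * pin: each nibble becomes two GPIO writes, first with the clock low,
 * then with the clock high (the rising edge latches the data). */
size_t build_bus_buffer(const uint8_t *nibbles, size_t n, uint8_t *out)
{
    size_t o = 0;
    for (size_t i = 0; i < n; i++) {
        uint8_t d = nibbles[i] & 0x0Fu;
        out[o++] = d;           /* data valid, clock low */
        out[o++] = d | CLK_BIT; /* data held, clock high */
    }
    out[o++] = 0; /* final word returns clock/bus to idle */
    return o;     /* number of bytes to hand to the DMA */
}

/* {0x5, 0x6, 0xB, 0xD} yields 05 15 06 16 0B 1B 0D 1D 00, matching the
 * sequence above. */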
Also take note that not all DMA blocks may have access to the AHB bus that holds the GPIO registers. For example, on the STM32F40x only DMA2 can be used (this is what got me, until I read this answer: https://stackoverflow.com/a/46619315/6552613).
I haven't fully explored this space yet, but disabling interrupts and polling for interrupt flags in my main loop has made the jitter on my GPIO DMA basically disappear! Granted, it might just be the particular set of interrupts I had enabled, but everything down to the SysTick timer was hurting me. Polling the interrupt flags in the main loop seems to have fixed my issue.
Note that this is on an STM32F042, and I never exceed 6 MHz for my period. When I try to go higher, i.e. 8 MHz sampling out, everything falls apart. YMMV.

Timer supply to CPU in QEMU

I am trying to emulate the clock control for an STM32 machine with a Cortex-M4 CPU. The STM32 reference manual states that the clock supplied to the core is HCLK.
The RCC feeds the external clock of the Cortex System Timer (SysTick) with the AHB clock (HCLK) divided by 8. The SysTick can work either with this clock or with the Cortex clock (HCLK), configurable in the SysTick control and status register.
Now, the Cortex-M4 is already emulated by QEMU and I am using that for the STM32 emulation. My confusion is: should the HCLK I have modelled for the STM32 supply clock pulses to the Cortex-M4, or does the Cortex-M4 manage its own clock at the 168 MHz HCLK frequency? Or is the clock frequency something else?
If I do have to pass this frequency to the Cortex-M4, how do I do that?
QEMU's emulation does not generally try to emulate actual clock lines which send pulses at megahertz rates (this would be incredibly inefficient). Instead when the guest programs a timer device the model of the timer device sets up an internal QEMU timer to fire after the appropriate duration (and the handler for that then raises the interrupt line or does whatever is necessary for emulating the hardware behaviour). The duration is calculated from the values the guest has written to the device registers together with a value for what the clock frequency should be.
QEMU doesn't have any infrastructure for handling things like programmable clock dividers or a "clock tree" that routes clock signals around the SoC (one could be added, but nobody has got around to it yet). Instead timer devices are usually either written with a hard-coded frequency, or may be written to have a QOM property that allows the frequency to be set by the board or SoC model code that creates them.
In particular for the SysTick device in the Cortex-M models the current implementation will program the QEMU timer it uses with durations corresponding to a frequency of:
1MHz, if the guest has set the CLKSOURCE bit to 1 (processor clock)
something which the board model has configured via the 'system_clock_scale' global variable (eg 25MHz for the mps2 boards), if the guest has set CLKSOURCE to 0 (external reference clock)
(The system_clock_scale global should be set to NANOSECONDS_PER_SECOND / clk_frq_in_hz.)
The 1MHz is just a silly hardcoded value that nobody has yet bothered to improve upon, because we haven't run into guest code that cares yet. The system_clock_scale global is clunky but works.
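For reference, this is roughly what a board model's init code does with it; a sketch only, with an assumed 168 MHz HCLK and a made-up function name:

#define HCLK_FRQ 168000000 /* assumed HCLK for this hypothetical board */

static void my_board_init(MachineState *machine)
{
    /* SysTick external reference clock rate, expressed as the number of
     * nanoseconds per clock tick (see the armv7m_systick device) */
    system_clock_scale = NANOSECONDS_PER_SECOND / HCLK_FRQ;
    /* ... create the armv7m CPU object and SoC devices here ... */
}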
None of this affects the speed of the emulated QEMU CPU (ie how many instructions it executes in a given time period). By default QEMU CPUs will run "as fast as possible". You can use the -icount option to specify that you want the CPU to run at a particular rate relative to real time, which sort of implicitly sets the 'cpu frequency', but this will only sort of roughly set an average -- some instructions will run much faster than others, in a not very predictable way. In general QEMU's philosophy is "run guest code as fast as we can", and we don't make any attempt at anything approaching cycle-accurate or otherwise tightly timed emulation.
Update as of 2020: QEMU now has some API and infrastructure for modelling clock trees, which is documented in docs/devel/clocks.rst in the source tree. This is basically a formalized version of the concepts described above, to make it easier for one device to tell another "my clock rate is 20MHz now" without hacks like the "system_clock_scale" global variable or ad-hoc QOM properties.
SysTick is supplied via a multiplexer, and you can choose either the AHB bus clock or the AHB clock divided by 8.
An old thread and an oft-asked question, so this should help some of you trying to emulate Cortex systems.
If you boot with a .dtb, you can add a line clock-frequency = <value>; to the 'timers' block in your .dts and recompile it. This will indeed increase the speed of Cortex processors. Clearly, value is some large number.
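For example (a hypothetical fragment; the node name, labels, and frequency all depend on your platform's devicetree bindings):

timers {
    /* value in Hz; pick the rate your platform expects */
    clock-frequency = <168000000>;
};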

Can I disable Interrupts on a BBB for a short duration (0.5ms)?

I am trying to write a small driver program on a Beaglebone Black that needs to send a signal with timings like this:
I need to send 360 bits of information. I'm wondering if I can turn off all interrupts on the board for a duration of 500 µs while I send the signal. I have no idea if I can just turn off all the interrupts like that. Searches have been unkind to me so far. Any ideas how I might achieve this? I do have some prototypes in assembly language for the signal, but I'm pretty sure it's being broken by interrupts.
So for example, I'm hoping I could have something like this:
disable_irq();
/* asm code to send my bytes */
reenable_irq();
What would the bodies of disable_irq() and reenable_irq() look like?
The calls you would want to use are local_irq_disable() and local_irq_enable() to disable & enable IRQs locally on the current CPU. This also has the effect of disabling all preemption on the CPU.
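In a kernel module that might look like the following sketch; local_irq_save()/local_irq_restore() are the state-preserving variants, which are safer if the function can ever be called with IRQs already disabled:

#include <linux/irqflags.h>

static void send_bits_atomically(void)
{
    unsigned long flags;

    local_irq_save(flags);    /* IRQs (and thus preemption) off on this CPU */
    /* ... tightly-timed GPIO writes / asm code to send the 360 bits ... */
    local_irq_restore(flags); /* IRQ state restored exactly as it was */
}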
Now let's talk about your general approach. If I understand you correctly, you'd like to bit-bang your protocol over a GPIO with timing accurate to < 1/3 µs.
This will be a challenge. Tests show that the BeagleBone Black GPIO toggle frequency maxes out at ~2.78 MHz when writing directly to the SoC IO registers in kernel mode (~0.18 µs minimum pulse width).
So, although this might be achievable by the thinnest of margins by writing atomic code in kernel space, I propose another concept:
Implement your custom serial protocol on the SPI bus.
Why?
The SPI bus can be clocked up to 48 MHz on the BeagleBone Black, it's buffered, and it can be used with the DMA engine. Therefore, you don't have to worry about disabling interrupts and monopolizing your CPU for this one interface. With a timing resolution of ~0.021 µs (@ 48 MHz), you should be able to achieve your timing needs with an acceptable margin of error.
With the bus configured for Single Channel Continuous Transfer Transmit-Only Master mode and a 30-bit word length (two 30-bit words for each bit of your protocol):
To write a '0' with your protocol, you'd write the 2-word sequence - 17 '1's followed by 43 '0's - on SPI (@ 48 MHz).
To write a '1' with your protocol, you'd write the 2-word sequence - 43 '1's followed by 17 '0's - on SPI (@ 48 MHz).
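A sketch of that encoding; the function names are made up, and the 17/43 figures are taken from above (60 SPI bits per protocol bit at 48 MHz):

#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Append 'count' copies of 'bit' to the buffer, packing MSB-first */
static void put_run(uint8_t *buf, size_t *pos, int bit, int count)
{
    for (int i = 0; i < count; i++, (*pos)++) {
        if (bit)
            buf[*pos / 8] |= (uint8_t)(0x80u >> (*pos % 8));
    }
}

/* Encode protocol bits into an SPI bitstream: a '1' is 43 ones then 17
 * zeros, a '0' is 17 ones then 43 zeros. Returns the byte count to queue
 * for the SPI transmit DMA. */
size_t encode_protocol(const uint8_t *bits, size_t nbits, uint8_t *buf)
{
    size_t pos = 0;
    memset(buf, 0, (nbits * 60 + 7) / 8);
    for (size_t i = 0; i < nbits; i++) {
        int ones = bits[i] ? 43 : 17;
        put_run(buf, &pos, 1, ones);      /* high portion of the pulse */
        put_run(buf, &pos, 0, 60 - ones); /* low remainder */
    }
    return (pos + 7) / 8;
}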
From your signal timings it's easy to see that SPI or another serial peripheral cannot meet your requirement: in your timings, the encoding is based on the width of the pulse. So let's get to the point:
Q1 Could you turn off all interrupts for a duration of 500µs?
A: 0.5 ms is quite a long time in an embedded system. Interrupts exist to enable multi-tasking concurrency and to improve real-time behaviour. Keep in mind that ISRs and context switches (on some chip architectures) are all affected by the global interrupt flag.
But if your top priority is to meet the timings, and the real-time windows of the other tasks can tolerate it, of course you can disable the global interrupt for that duration, or even longer. If not, don't perform an atomic operation over such a long period.
Q2 How?
A: For any given chip there are certainly asm instructions to enable/disable the global interrupt. Find those instructions, or the APIs provided by your OS, and do the three steps below (pseudocode):
state_t tState = get_interrupt_status( );
disable_interrupt( );
... /*your operation here*/
resume_interrupt( tState );

Possible to disable LSI oscillator on ARM Cortex M3 for low power stop mode?

I am trying to add the 'independent watchdog' functionality to a project. It works fine but I am putting the chip to sleep for extended periods to conserve the battery and the watchdog still wakes everything up and forces a reset. Is there any way to disable the low speed internal oscillator? I haven't been able to find any info on that.
Thanks
I am using the ST Micro STM32F103V8 Cortex-M3. It turns out that with this device it is impossible to disable the independent watchdog, or to disable the LSI oscillator once the watchdog has been enabled. Since the maximum watchdog period is about 37 seconds, the current solution seems to be to wake up every 25 seconds (to allow for the oscillator's speed varying with temperature) and reload the counter before going back to sleep. I am going to do a power analysis on this in the next couple of weeks and see if it makes sense.
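The reload itself is just the key write to IWDG_KR. A sketch of the wake-and-kick loop using the F1 Standard Peripheral Library (the RTC alarm that provides the ~25-second wakeup is assumed to be configured elsewhere):

#include "stm32f10x.h"
#include "stm32f10x_pwr.h" /* PWR_EnterSTOPMode() from the SPL */

void low_power_loop(void)
{
    for (;;) {
        IWDG->KR = 0xAAAA; /* key value that reloads the watchdog counter */
        PWR_EnterSTOPMode(PWR_Regulator_LowPower, PWR_STOPEntry_WFI);
        /* execution resumes here on the RTC alarm interrupt; note that
         * after Stop the system clock is HSI, so restart the PLL if needed */
    }
}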
You could use the system window watchdog (WWDG) instead of the independent watchdog (IWDG). The WWDG will be halted when you go into Sleep/Stop, since you stop the APB clock (PCLK).
