I need very precise timing, so I wrote some assembly code (for an ARM M0+).
However, the timing is not what I expected when measuring on an oscilloscope.
#define LOOP_INSTRS_CNT 4 // subs: 1, cmp: 1, bne: 2 (when branching)
#define FREQ_MHZ (BOARD_BOOTCLOCKRUN_CORE_CLOCK / 1000000)
#define DELAY_US_TO_CYCLES(t_us) (((t_us) * FREQ_MHZ + LOOP_INSTRS_CNT / 2) / LOOP_INSTRS_CNT)
static inline __attribute__((always_inline)) void timing_delayCycles(uint32_t loopCnt)
{
    // note: not all instructions take one cycle, so in total we have 4 cycles in the loop, except for the last iteration.
    __asm volatile(
        ".syntax unified \t\n" /* we need unified to use subs (not for sub, though) */
        "0: \t\n"
        "subs %[cyc], #1 \t\n" /* assume cycles > 0 */
        "cmp %[cyc], #0 \t\n"
        "bne.n 0b\t\n" /* this instruction costs 2 cycles when branching! */
        : [cyc]"+r" (loopCnt) /* actually an input, but we need to write the register, so we use a read-write output */
        : /* input specified in output */
        : "cc" /* subs and cmp modify the flags */
    );
}
// delay test
#define WAIT_TEST_US 100
gpio_clear(PIN1);
timing_delayCycles(DELAY_US_TO_CYCLES(WAIT_TEST_US));
gpio_set(PIN1);
So pretty basic stuff. However, the delay (measured by setting a GPIO pin low, looping, then setting it high again) is consistently 50% longer than expected. I tried low values (1 us gives 1.56 us) all the way up to 500 ms giving 750 ms.
I tried single-stepping, and the loop really does only the 3 steps: subs (1), cmp (1), branch (2). The numbers in parentheses are the expected clock cycles.
Can anybody shed light on what is going on here?
After some good suggestions I found the issue can be resolved in two ways:
Run the core clock and flash clock at the same frequency (if the code is running from flash)
Place the code in SRAM to avoid flash-access wait states.
Note: if anybody copies the above code, you can delete the cmp, since subs already sets the flags. If you do, remember to set the instruction count to 3 instead of 4; this gives you better time resolution (see the sketch below).
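For reference, here is a sketch combining both notes: the trimmed 3-cycle loop, placed in RAM. The .ramfunc section name is an assumption that depends on your linker script (many vendor scripts provide such a section, sometimes under a different name), and the function must be noinline so an out-of-line copy actually lives in RAM.

#define LOOP_INSTRS_CNT 3 // subs: 1, bne: 2 (when branching)

__attribute__((section(".ramfunc"), noinline))
static void timing_delayCyclesRam(uint32_t loopCnt)
{
    __asm volatile(
        ".syntax unified \t\n"
        "0: \t\n"
        "subs %[cyc], #1 \t\n" /* subs sets the flags, so no cmp is needed */
        "bne.n 0b\t\n"
        : [cyc]"+r" (loopCnt)
        :
        : "cc" /* the flags are clobbered */
    );
}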
You can't use these processors like you would a PIC; the timing doesn't work like that. I have demonstrated this here many times, you can look around; maybe I will do it again here, but not right now.
First off, these are pipelined processors, so your average performance is one thing, but once in a loop, and once things like caching and branch prediction have settled, you can get consistent performance for that implementation. Ignore any documentation about clocks per instruction for a pipelined processor, no matter how shallow the pipe; that is the first obstacle to understanding why the timing doesn't work as expected.
Alignment plays a role, and folks are tired of me beating this drum, but I have demonstrated it many times. Search for fetch in the cortex-m0 TRM and you should immediately see that fetching will affect performance based on alignment. If the chip vendor has compiled the core for 16-bit-only fetches, then it would be predictable, or more predictable (ignoring other factors). But if they have compiled in the other features, and if prefetching happens as described, then the placement of the loop in the address space can change the loop by plus or minus a fetch, affecting the total time to complete the loop, which is measurable with or without a scope.
Branch prediction. It doesn't show up in the arm docs as something arm does, but chip vendors are fully free to add it.
Caching. Even on a cortex-m0+, if this is an STM32 (or perhaps other brands as well) there is, or may be, a cache you can't turn off. It is not uncommon for the flash to run at half the speed of the processor, hence the flash wait-state settings; often zero wait states means zero additional ones, and it still takes two clocks to get one fetch done. At the very least it is measurable that execution from flash is half the speed of execution from ram with all other settings the same (system clock speed, etc.). ST has a pretty good prefetch/caching solution with some trademarked name, and perhaps a patent, who knows. You can rarely turn it off or defeat it, so the first time through, or the time entering the loop, can see a delay, and technically a prefetcher can slow the loop down (see alignment).
Flash. As mentioned, depending on the chip vendor and the age of the part, it is quite common for the flash to run at half the speed of the core. When the chip doc shows the flash wait states required relative to system clock speed, that is a key performance indicator, both for the flash technology and for whether you should really be raising the system clock that high: the flash doesn't get any faster, it has a speed limit. Sram, in my experience, can keep up, and so far I don't see it needing wait states. Flashes used to need two or three settings across the range of clock speeds a part supports; on newer parts the flash tends to cover the whole range for slower cores like the m0+, but the m7 and such keep getting higher clock rates, so you would still expect vendors to need wait states there.
Interrupts/exceptions. Are you running this on an rtos? Are there interrupts going on that can fire during the loop and stretch the delay?
Peripheral timing. Peripherals are not expected to respond to a load or store in a single clock; they can take as long as they want, and depending on the clocking system and the chip vendor's IP, in-house or purchased, the peripheral might not run at the processor clock rate but at a divided rate, making things slower. Your code no doubt calls this function for a delay, and then outside the timing loop you wiggle a gpio pin to see something on a scope, which leads to how you conducted your benchmark and the additional problems with that, based on the factors above and this one.
And other factors I am no doubt forgetting.
As with high-end processors like the x86, full-sized ARMs, etc., the processor core no longer determines performance; the chip and motherboard can, and do. You basically cannot feed the pipe constantly; there are stalls going on all over the place. Dram is slow, thus layers of caching try to deal with it, but caching helps sometimes and hurts other times, and branch predictors hurt as much as they help. How well you can feed the core is heavily driven by the system outside it, and then you get into the core's own properties with respect to the pipeline and its fetching strategy: ideally fetches use the width of the bus rather than the size of the instruction, and because of transaction overhead, multiple bus widths per transaction are even better than one, etc.
All of this causes tight loops like this on any such core to have a jerky motion and/or be inconsistent in timing when the same machine code runs at different alignments. Granted, for size/power/etc. the m0+ has a tiny pipe, but it should still show these effects. These are not pics or avrs or msp430s; there is no reason to expect a timing loop to be consistent. At best you can use a timing loop for things like spi and i2c bit banging, where you only need to be greater than or equal to some time value. If you need to be accurate or within a range, that is technically possible per implementation if you control many of the factors, but it is often not worth the effort, and you now have a maintenance, readability, and understandability problem in the code.
So, bottom line: there is no reason to expect consistent timing. If you happened to get consistent/linear timing, great. The first thing to check is that when you changed and re-built the code with a different loop value, it didn't affect the alignment of this loop.
You show a loop like this:
loop:
subs r0,#1
cmp r0,#0
bne loop
On a tangent: why the cmp? Why not just
loop:
subs r0,#1
bne loop
Second, you claim to be measuring this on a scope, which is good, because how you measure things plays into the quality of the benchmark; often the benchmark varies because of how it is measured, i.e. the yardstick is the problem, not the thing being measured, or you have problems with both and the measurement is even more inconsistent. Had you used systick or something else to measure this, then depending on how you did it, the measurement itself could cause the variation; and even using a gpio to toggle a pin can, and probably does, affect this as well. All other things held constant, simply changing the loop count can, depending on the immediate value used, push you between a thumb and a thumb2 encoding, changing the alignment of some loop.
What you have shown implies you have this timing loop, which can be affected by a number of system issues, wrapped in some other code that is affected by them too, plus possibly a call to a gpio library function whose performance is affected by the same factors. Using inline assembly, in the style in which you wrote this function, means you have exposed yourself to a wide range of performance differences when running what appears to be the same code, even when the code under test is literally the same machine code.
Unless this is a microchip PIC (not PIC32), or one of a very short list of other specific brands and families of chips, ignore the cycle counts per instruction, assume they are wrong, and don't aim for accurate timing unless you control the factors.
Use the hardware. If, for example, you are driving ws2812/neopixel leds and have a tight timing window, you are not going to be successful, or will have limited success, using instruction timing. In that specific case you can sometimes get away with using a spi controller or timers in the part to generate accurately timed signals, far more accurate than you can ever manage with software loops doing the bit banging. With a PIC I was able to generate tv infrared signals, carrier frequency plus the ons and offs, using timed loops and nops to make a highly accurate signal. I repeated that for one of these programmable led strips, for a short number of them, on a cortex-m, using a long linear list of instructions and relying on execution performance; it worked but was extremely limited, as it was compile-time and quick and dirty. SPI controllers are a pain compared to bit banging, but after another evening with the SPI controller I could send highly accurately timed signals of any length.
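To illustrate the spi trick for the ws2812 case: run the spi clock near 2.4 MHz and expand every led bit into three spi bits, so a '1' becomes 110 (~833 ns high) and a '0' becomes 100 (~417 ns high), and the spi hardware holds the timing. A minimal encoder sketch; the bit rate and the resulting margins are assumptions to check against the led's datasheet:

#include <stdint.h>
#include <stddef.h>

/* Expand one 8-bit color value into 24 SPI bits (3 per led bit), MSB first. */
static size_t ws2812_encode_byte(uint8_t value, uint8_t out[3])
{
    uint32_t bits = 0;
    for (int i = 7; i >= 0; i--)
        bits = (bits << 3) | (((value >> i) & 1) ? 6u : 4u); /* 110 : 100 */
    out[0] = (uint8_t)(bits >> 16);
    out[1] = (uint8_t)(bits >> 8);
    out[2] = (uint8_t)(bits >> 0);
    return 3;
}

Clock the encoded buffer out with DMA and the inter-bit timing is then set by the spi clock, not by instruction execution.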
You need to change your focus to using timers and/or on-chip peripherals like the uart, spi, or i2c in non-standard ways to generate whatever signal this is you are trying to generate. Leave timed loops, or even timer-based loops wrapped in other loops, for the greater-than-or-equal cases, not the within-a-range-of-time cases. If you can't do it with one chip, look around at others; very often when making a product you have to shop for the components, across vendors, etc. If push comes to shove, use a CPLD or a PAL or GAL or something like that to get highly accurate but custom timing. Depending on what you are doing and what your larger system looks like, the ftdi usb chips with MPSSE have a generic state machine that you can program to generate an array of signals; they do i2c, spi, jtag, swd, etc. with this generic programmable engine. But if you don't have a usb host, that won't work.
You didn't specify a chip, and while I have a lot of different chips/boards handy, it is only a small fraction of what is out there, so a demo might not be worth it: if mine has the core compiled one way, I might not be able to demonstrate a variation that the exact same arm core, compiled another way on another chip, would show easily. I suspect a lot of your variation is because you are making calls within a bigger loop, a call to the delay and a call to change the gpio state, and recompiling that between experiments. Or worse, as shown in your question, if you are doing a single pass rather than a loop around the calls, that can maximize the inconsistency.
We have an application which runs on a PIC24H, and we would like to port it to another MCU, preferably an ARM Cortex. The application is extremely time critical, meaning that we need extremely deterministic code behaviour. In short, pulses arrive via special hardware on GPIO pins, and the data is analyzed right away. Processing of the data is not complex (we don't need a beefy cpu/mcu to do it). After analyzing the data, GPIO output pins are written with their values.
App in 3 short lines:
process input pins
determine pattern within processing of input pins
based on the received pattern write output pins
The PIC24H runs at 40MHz and can toggle a pin in 25ns; we would be grateful for at least 2x that speed for future upgrades. So an MCU which can run deterministic code and toggle pins at 80MHz (12.5ns) would be just fine. We don't need to toggle the pins at a constant fast rate; we need an mcu which can toggle them in less than 25ns. We can't waste cycles while toggling: if one cycle is off, we lose synchronization. Everything must be done with one-cycle precision (or two, but a constant two cycles), so the code must be 100% deterministic.
Please let me know if I'm missing something, or if what we need can be done using some other method on a Cortex-M. Just keep in mind that if one cycle is lost (due to cache or similar), we lose signal sync and the app will not do its work correctly, or at all.
Thanks!
Br
According to this blog post, the interrupt latency for Cortex-M ranges from 12 to 16 cycles (assuming you are not using FPU registers) with best-case memories. The M0 and M0+ are slower than the M3/M4/M7. On top of this, you need to add the GPIO access times (and watch out for different clock frequencies between the core and the peripherals). The Cortex-M7 will support higher clock speeds than the M3/M4.
It still isn't clear how many cycles are consumed in recognising a pattern, or how an interrupt is useful in doing this; generally a low-latency interface function like this would be an obvious target for dedicated hardware, but since you have an existing software solution, it seems the problem is mis-specified.
Provided you avoid accessing any 'slow' peripherals which might stall the bus, the interrupt latency should be deterministic; any specific device should have documentation which covers this.
NXP have an application note which describes some of the detail of how to measure what is going on.
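On cores that include the DWT unit (M3/M4/M7; the M0/M0+ lack it), the cycle counter also gives a cheap software-side measurement. A minimal sketch, assuming CMSIS headers and, purely as an example, an STM32F4 EXTI line 0 interrupt:

#include "stm32f4xx.h" /* any CMSIS device header; DWT is a core peripheral */

static volatile uint32_t irq_entry_cycles;

static void cyccnt_start(void)
{
    CoreDebug->DEMCR |= CoreDebug_DEMCR_TRCENA_Msk; /* enable trace/DWT */
    DWT->CYCCNT = 0;
    DWT->CTRL |= DWT_CTRL_CYCCNTENA_Msk;            /* start the cycle counter */
}

void EXTI0_IRQHandler(void)
{
    /* capture as early as possible; subtracting the count taken when the
       trigger was raised gives entry latency plus measurement overhead */
    irq_entry_cycles = DWT->CYCCNT;
    EXTI->PR = EXTI_PR_PR0; /* acknowledge the pending line */
}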
We are currently evaluating the use of an external SRAM for C/C++ heap storage on our platform, which uses an STM32F439BI microcontroller.
The problem
Using the SRAM as heap storage results in random hard faults, raised from bus errors/imprecise bus errors.
Without placing the heap on the SRAM, memory tests run successfully on the whole SRAM (8-bit, 16-bit, and 32-bit accesses).
With a debugger connected, I can sometimes observe these errors before a hard fault occurs. Most often a word is read from the SRAM and the CPU register fills with addresses of the following format: 0x-1F3-1F3 (- is most often '0', sometimes 'A' or '6'). The pattern '1F3' persists. If the same address is read again a few lines further down, the correct value is read (some other address in the 0x60000000 space).
If I stop the program on a breakpoint early in the program and step a few lines, I get these errors more frequently.
Further details
The SRAM is connected using the FMC/FSMC peripheral on FMC bank 1 and SRAM bank 1 and is therefore memory-mapped to address 0x60000000.
All settings for GPIO pins and FMC configuration are set from the startup file before main() executes or static objects are created.
The SRAM is the following: CY7C1041GN30
We connect all 16 data pins, all 18 address pins, BHE, BLE, OE, WE and CE to our controller. All pins are configured as push-pull alternate function, pull-up, AF_12 (FMC), very high speed. We enable the clocks for all necessary pins and the clock for the FMC. Note: initially we started out without pull-up/down, with the same symptoms.
The controller runs with a clock speed of 168 MHz
As stated above, a memory test runs successfully
We use DMA for SPI, I2C and ADC data transfers
We frequently use interrupts, including external (pin) interrupts
We use the following timing settings:
AddressSetupTime: 2
AddressHoldTime: 4
DataSetupTime: 4
BusTurnAroundDuration: 1
CLKDivision: 2
DataLatency: 2
We configure the FMC as follows:
NSBank FMC_NORSRAM_BANK1,
DataAddressMux FMC_DATA_ADDRESS_MUX_DISABLE,
MemoryType FMC_MEMORY_TYPE_SRAM,
MemoryDataWidth FMC_NORSRAM_MEM_BUS_WIDTH_16,
BurstAccessMode FMC_BURST_ACCESS_MODE_DISABLE,
WaitSignalPolarity FMC_WAIT_SIGNAL_POLARITY_LOW,
WrapMode FMC_WRAP_MODE_DISABLE,
WaitSignalActive FMC_WAIT_TIMING_BEFORE_WS,
WriteOperation FMC_WRITE_OPERATION_ENABLE,
WaitSignal FMC_WAIT_SIGNAL_DISABLE,
ExtendedMode FMC_EXTENDED_MODE_DISABLE,
AsynchronousWait FMC_ASYNCHRONOUS_WAIT_DISABLE,
WriteBurst FMC_WRITE_BURST_DISABLE,
ContinuousClock FMC_CONTINUOUS_CLOCK_SYNC_ASYNC,
WriteFifo 0,
PageSize 0
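Rendered as code, the configuration above corresponds roughly to the following STM32F4 HAL initialization (a sketch: AccessMode is not listed above, so mode A is assumed, and the WriteFifo field is omitted since it only exists in HAL versions for parts whose FMC exposes the write-FIFO bit):

#include "stm32f4xx_hal.h"

static SRAM_HandleTypeDef hsram;

static void external_sram_init(void)
{
    FMC_NORSRAM_TimingTypeDef timing;

    hsram.Instance = FMC_NORSRAM_DEVICE;
    hsram.Extended = FMC_NORSRAM_EXTENDED_DEVICE;
    hsram.Init.NSBank = FMC_NORSRAM_BANK1;
    hsram.Init.DataAddressMux = FMC_DATA_ADDRESS_MUX_DISABLE;
    hsram.Init.MemoryType = FMC_MEMORY_TYPE_SRAM;
    hsram.Init.MemoryDataWidth = FMC_NORSRAM_MEM_BUS_WIDTH_16;
    hsram.Init.BurstAccessMode = FMC_BURST_ACCESS_MODE_DISABLE;
    hsram.Init.WaitSignalPolarity = FMC_WAIT_SIGNAL_POLARITY_LOW;
    hsram.Init.WrapMode = FMC_WRAP_MODE_DISABLE;
    hsram.Init.WaitSignalActive = FMC_WAIT_TIMING_BEFORE_WS;
    hsram.Init.WriteOperation = FMC_WRITE_OPERATION_ENABLE;
    hsram.Init.WaitSignal = FMC_WAIT_SIGNAL_DISABLE;
    hsram.Init.ExtendedMode = FMC_EXTENDED_MODE_DISABLE;
    hsram.Init.AsynchronousWait = FMC_ASYNCHRONOUS_WAIT_DISABLE;
    hsram.Init.WriteBurst = FMC_WRITE_BURST_DISABLE;
    hsram.Init.ContinuousClock = FMC_CONTINUOUS_CLOCK_SYNC_ASYNC;
    hsram.Init.PageSize = FMC_PAGE_SIZE_NONE;

    timing.AddressSetupTime = 2;
    timing.AddressHoldTime = 4;
    timing.DataSetupTime = 4;
    timing.BusTurnAroundDuration = 1;
    timing.CLKDivision = 2;
    timing.DataLatency = 2;
    timing.AccessMode = FMC_ACCESS_MODE_A; /* assumption: not stated above */

    HAL_SRAM_Init(&hsram, &timing, NULL); /* no extended-mode timing */
}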
We spent a lot of time experimenting with longer timings and compared all the settings to examples, including this one: Using STM32L476/486 FSMC peripheral to drive external memories (although it is for the STM32L4, I am fairly certain it applies to this controller as well).
Findings on similar problems
The problem sounds very similar to this errata sheet entry: "2.3.4 Corruption of data read from the FMC", but the sheet also says the error is fixed in our revision of the controller (rev 3).
I hope someone out there has seen this strange behaviour before and can help us. After over a week of debugging, we suspect some kind of error in the controller when interrupts/DMA accesses occur while the CPU accesses the SRAM (when used as heap, it is accessed very frequently). Hopefully you can shed some light on this topic.
Sorry for not getting back to you, internet.
Yes, we found out what the issue was (at least in our case). The problem was that the J-Link debugger we use causes problems if its ribbon cable hangs above the power electronics on our pcb (it is mounted vertically). If we guide the ribbon cable out at the top (over digital electronics only), the error disappears. So our guess is that noise from the power electronics was picked up by the cable and injected directly into the JTAG port, which caused failures inside the MCU.
I just got confirmation from ST that there is a bug in the STM32F469 FMC that can cause incorrect values if the write FIFO is disabled. The workaround is to keep the FIFO enabled. It is the same issue as in this F7 processor: https://www.st.com/resource/en/errata_sheet/dm00145382.pdf
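In HAL terms (the constant name assumes an HAL version that exposes the field), the workaround applied to the init sketch above would be:

hsram.Init.WriteFifo = FMC_WRITE_FIFO_ENABLE; /* errata workaround: keep the write FIFO on */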
I've used PICs before, and now I'm working with an STM32F415.
In a time-critical part of my code I need to insert a very exact delay to adjust the period of the DAC and DMA that are working together to create a periodic analog signal.
The delay I want to add ranges from 0 to 63 clock cycles (if I could only do 10-63 clock cycles, that would be OK as well). In PIC24F assembly there is the REPEAT instruction, which repeats the next instruction a given number of times. That would work great for me, as I'd be able to just do:
REPEAT #0xNUMBER
NOP
I'm trying to find something similar for the STM32F4, but I've had no luck searching the instruction set, the Reference Manual, and the Internet in general.
I've already tried for/while loops in C and a dedicated timer, but the extra instructions required consume too much time (40-50 cycles, depending on how I program it).
If someone has an idea or knows how to do it, it would be very useful to me.
Thanks a lot.
English is not my mother-tongue language so I'm sorry for any possible mistakes. Let me know and I'll try to improve it :)
EDIT 1 (23-jul-17)
Thanks to everyone answering, I've been very busy and couldn't answer every one of you individually.
I'll try #berendi's solution of gated clocks; it seems like the best fit for my application.
I'm learning a lot of things about the STM32 I didn't know, thank you everyone!
I stop the timer that makes the DMA-DAC work, do the delay, and then enable it again.
So, if I'm understanding it correctly, you have Timer A controlling your DAC, triggering a conversion at each counter overflow, and you'd like to delay it by a variable number of clock cycles.
Most (if not all) timers of the STM32F4 support gated-mode slave operation, where you can select another timer (Timer B) as master, and Timer A counts only as long as the trigger output of Timer B is low. In other words, Timer A stops counting on a rising edge from Timer B and resumes counting on a falling edge. Now configure Timer B to output a single pulse when enabled, with the pulse width being the delay you want, and Timer A will be delayed for the exact duration of the pulse.
See the chapters on One-pulse mode, Timers and external trigger synchronization, and the description of the CR1, CR2, and SMCR registers in the reference manual.
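A register-level sketch of this scheme, with assumptions to check: TIM5 plays Timer A (the existing DAC trigger) and TIM3 plays Timer B; the ITR number connecting TIM3's TRGO to TIM5's trigger input is device-specific (see the internal trigger connection tables in the reference manual). Note the reference manual defines gated mode as counting while the trigger input is high, so the gating pulse is generated active-low here:

#include "stm32f4xx.h"

/* Timer A = TIM5 (already running, drives the DAC), Timer B = TIM3 (the gate). */
static void delay_dac_timer(uint32_t cycles)
{
    /* TIM3 channel 1 in PWM mode 1: OC1REF is high while CNT < CCR1 and low
       from CCR1 up to the update event, i.e. high at rest with one low pulse
       of 'cycles' counts when the timer is started in one-pulse mode. */
    TIM3->PSC = 0;                                     /* full timer clock */
    TIM3->CCR1 = 1;
    TIM3->ARR = 1 + cycles;
    TIM3->CCMR1 = TIM_CCMR1_OC1M_2 | TIM_CCMR1_OC1M_1; /* PWM mode 1 (110) */
    TIM3->CR2 = TIM_CR2_MMS_2;                         /* TRGO = OC1REF */
    TIM3->CR1 = TIM_CR1_OPM;                           /* stop after one pulse */

    /* TIM5 in gated slave mode: it counts only while TRGI is high, so the
       low pulse from TIM3 suspends it for exactly 'cycles' counts.
       ITR1 = TIM3 is the assumed trigger mapping -- verify for your part. */
    TIM5->SMCR = (TIM5->SMCR & ~(TIM_SMCR_TS | TIM_SMCR_SMS))
               | TIM_SMCR_TS_0                         /* ITR1 */
               | TIM_SMCR_SMS_2 | TIM_SMCR_SMS_0;      /* gated mode (101) */

    TIM3->CR1 |= TIM_CR1_CEN;                          /* fire the gating pulse */
}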
NOP is not a very good solution for the delay. Use barrier instructions instead, as their execution time is exactly as stated in the ARM documentation (3, 4 or 5 cycles, depending on the instruction and the core version). You can place n consecutive barriers to achieve the delay you need.
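A minimal sketch using the CMSIS intrinsic; the per-barrier cost is the core-dependent figure from the ARM documentation, so calibrate the count on your part:

#include "cmsis_compiler.h" /* provides the __ISB() intrinsic */

static inline void delay_a_few_cycles(void)
{
    /* consecutive instruction synchronization barriers; each one costs a
       fixed, documented number of cycles on a given core */
    __ISB();
    __ISB();
    __ISB();
    __ISB();
}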
On a PIC you can do this, and it is a very common solution, because the execution time was deterministic. On older chips that were also deterministic (before the clones came out), it would be okay as well. But in general that is not how you do a delay: it is not deterministic. You can get an "at least this long" from a tuned loop, but you can't get "exactly this long" even when tuned, nor should you ever expect to. That is why mcu designs have timers, usually several, and that is what you use for measuring time. For the problem you are trying to solve, that is the solution here: one timer, or cascaded timers if you really need them.
Arm does not have an x86-like repeat instruction; your smallest loop is going to be two instructions, and I have countless times demonstrated that on the same chip this loop can vary in speed. So tune it, and know that adding a line of code elsewhere can change the delay properties of this loop:
here:
sub r0,#1
bne here
That is classic (gas) syntax; for unified syntax, use subs instead of sub.
You are also on an stm32, where there is a buried cache on the instruction side that you cannot turn off or control. It generally gives you zero-wait-state performance, certainly for things like this, but obviously the cache is not the size of the flash, so prefetch cycles have to happen somewhere, and you have to expect that sometimes you will feel that prefetch when you jump into this loop.
I am trying to fix a bug found in a mature program for the Fujitsu MB90F543. The program has worked for nearly 10 years, but it was discovered that under some special circumstances it fails to do two things at its very beginning. One of them is crucial.
After low- and high-level initialization (ports, pins, peripherals, IRQ handlers), configuration data is read over SPI from EEPROM, and the status LEDs are turned on for a moment (to light them, data is sent over SPI to a LED driver).
When those special circumstances occur, the first (and only the first) function invoking just a few EEPROM reads fails, and additionally a few of the LEDs that should turn on don't.
The program is written in C and compiled using Softune v30L32.
Surprisingly, it is sufficient to add a single __asm(" NOP ") in the low-level hardware init to make the program work as expected under the mentioned circumstances. It is also sufficient to turn off 'Control optimization of pointer aliasing' in the optimization settings. Adding just a few lines of code in various places helps too.
I have compared (DIFFed) the ASM listings of the compiled program for versions with and without __asm(" NOP "), and with both of the aforementioned optimizer settings, and they all look just fine.
The only warning Softune compiler has been printing for years during compilation is as follows:
*** W1372L: The section is placed outside the RAM area or the I/O area (IOXTND)
I do realize it's a rather general question, but maybe someone who has a bigger picture will be able to point out a possible cause.
Have you got an idea what may cause such weird behaviour? How can I locate the bug and fix it?
During the initialization, a few long (about 20ms) delay loops are used. They don't help, even after being increased from about 2ms, yet a single NOP on any line of the hardware initialization function, and even before or after the function, helps.
Both wait loops work; I have checked them with an oscilloscope (I added an LED turn-on before and turn-off after).
I have checked the timing hypothesis by slowing the SPI clock from 1MHz down to 500kHz. It does not change anything. Slowing down to 250kHz causes watchdog resets, as some parts of the code then execute too long (>25ms).
One more thing: I have observed that adding local variables in any source file sometimes makes the problem disappear or reappear. The same goes for initializing uninitialized local variables. Adding a few extra lines of code in any of the files helps, or reveals the problem.
void main(void)
{
    watchdog_init();
    // waiting for power supply to stabilize
    wait; // about 45ms
    hardware_init();
    clear_watchdog();
    application_init();
    clear_watchdog();
    wait; // about 20ms
    test_LED();
    {...}
}
void hardware_init (void)
{
    __asm("NOP"); // how come this helps? It can be on any line of the function
    io_init();    // ports initialization
    clk_init();
    timer_init();
    adc_init();
    spi_init();
    LED_init();
    spi_start();
    key_driver_init();
    can_init();
    irq_init();   // set IRQ priorities and global IRQ enable
}
It could be one of many things, but two spring to mind.
Timing.
Maybe the wait is not long enough for the power to stabilize, and not everything is synced to the clock; the NOP gets everything back in sync.
Alignment.
Perhaps the NOP gets your instructions aligned on a 32- or 64-bit boundary expected by the hardware. (We used to do this a lot in mainframe assemblers, as IO operations often expected things to be on doubleword boundaries.)
The problem was solved. It was caused by a trivial bug.
The EEPROM's nHOLD and nCS signals were not initialized immediately after the MCU's reset, but only just before the first use of the EEPROM. As a result, they were 0, i.e. active.
This means the EEPROM was selected, but waiting on hold. Meanwhile, another transfer using SPI started. After 6 of its 8 CLK pulses, the EEPROM's nHOLD pin was initialized and brought high; no longer on hold, the EEPROM clocked in the last two bits of data meant for another peripheral. Every subsequent operation on the EEPROM then found its CLK and MOSI out of sync.
When I added a NOP, or anything else, the moment of the nHOLD 0->1 edge shifted to after the last CLK pulse, so CLK and MOSI were back in sync.
All I had to do was initialize all of the EEPROM's SPI lines, in particular nHOLD and nCS, right after the MCU reset.
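In code, the fix amounts to forcing both lines inactive as the very first statements after reset, before any other SPI activity. The port register names and bit positions below are hypothetical; use the board's actual wiring:

/* hypothetical pin mapping -- replace with the real port and bits */
#define EEPROM_nCS_BIT   2
#define EEPROM_nHOLD_BIT 3

void eeprom_lines_to_safe_state(void)
{
    PDR1 |= (1 << EEPROM_nCS_BIT) | (1 << EEPROM_nHOLD_BIT); /* drive high (inactive) */
    DDR1 |= (1 << EEPROM_nCS_BIT) | (1 << EEPROM_nHOLD_BIT); /* make them outputs */
}

This is called as the very first thing in main(), before hardware_init().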