FreeRTOS osDelay is exactly three times too long

I'm using a Nucleo STM32H723 board to toggle an alive LED in StartDefaultTask.
osDelay(10000) gives exactly 30 seconds of delay. I'm using timer 7 as the systick timer.
Bonus info:
FreeRTOSConfig.h
#define configCPU_CLOCK_HZ ( SystemCoreClock )
"SystemCoreClock" is initially set to 64MHz (HSI) but during start-up initialized to 216MHz (SYSClk). If I edit "FreeRTOSConfig.h" and set "configCPU_CLOCK_HZ" to 64000000 osDelay is now correct, but I would like to be able to generate the clock and rtos config with CubeIDE.
Maybe someone can tell me what I am doing wrong...

I am not familiar with that board, but I think the OS believes SysTick ticks at 1/3 of its actual (hardware) speed. You have to keep that value in sync with the OS. Check your FreeRTOSConfig.h file for configSYSTICK_CLOCK_HZ and related parameters. As for the CubeIDE/FreeRTOS integration, check the IDE's available extensions/expansions; otherwise, you have to edit the files manually.
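For reference, a minimal sketch of the relevant FreeRTOSConfig.h lines for a Cortex-M port; the commented-out divide-by-8 line is only an illustrative assumption for the case where SysTick is not clocked at the CPU rate:
#define configCPU_CLOCK_HZ        ( SystemCoreClock )        /* core clock after start-up */
#define configTICK_RATE_HZ        ( ( TickType_t ) 1000 )    /* 1 ms tick */
/* Define this only if SysTick runs slower than the core clock, e.g. HCLK/8: */
/* #define configSYSTICK_CLOCK_HZ ( SystemCoreClock / 8 ) */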

Related

Timer supply to CPU in QEMU

I am trying to emulate the clock control for an STM32 machine with a Cortex-M4 CPU. The STM32 reference manual states that the clock supplied to the core is HCLK:
The RCC feeds the external clock of the Cortex System Timer (SysTick) with the AHB clock (HCLK) divided by 8. The SysTick can work either with this clock or with the Cortex clock (HCLK), configurable in the SysTick control and status register.
The Cortex-M4 is already emulated by QEMU and I am using that for the STM32 emulation. My confusion is: should the "HCLK" clock I have developed for the STM32 supply clock pulses to the Cortex-M4, or does the Cortex-M4 manage its own clock at the HCLK frequency of 168 MHz? Or is the clock frequency something different?
If I have to pass this frequency to the Cortex-M4, how do I do that?
QEMU's emulation does not generally try to emulate actual clock lines which send pulses at megahertz rates (this would be incredibly inefficient). Instead when the guest programs a timer device the model of the timer device sets up an internal QEMU timer to fire after the appropriate duration (and the handler for that then raises the interrupt line or does whatever is necessary for emulating the hardware behaviour). The duration is calculated from the values the guest has written to the device registers together with a value for what the clock frequency should be.
QEMU doesn't have any infrastructure for handling things like programmable clock dividers or a "clock tree" that routes clock signals around the SoC (one could be added, but nobody has got around to it yet). Instead timer devices are usually either written with a hard-coded frequency, or may be written to have a QOM property that allows the frequency to be set by the board or SoC model code that creates them.
In particular, for the SysTick device in the Cortex-M models the current implementation will program the QEMU timer it uses with durations corresponding to a frequency of:
- 1 MHz, if the guest has set the CLKSOURCE bit to 1 (processor clock)
- something which the board model has configured via the 'system_clock_scale' global variable (e.g. 25 MHz for the mps2 boards), if the guest has set CLKSOURCE to 0 (external reference clock)
(The system_clock_scale global should be set to NANOSECONDS_PER_SECOND / clk_frq_in_hz.)
The 1MHz is just a silly hardcoded value that nobody has yet bothered to improve upon, because we haven't run into guest code that cares yet. The system_clock_scale global is clunky but works.
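For illustration, a minimal sketch of how a board model could set it in those pre-clock-tree QEMU versions; my_board_init and REF_CLK_HZ are made-up names, and the header paths are an assumption based on the QEMU versions of that era:
#include "qemu/osdep.h"
#include "qemu/timer.h"                /* NANOSECONDS_PER_SECOND */
#include "hw/timer/armv7m_systick.h"   /* declares system_clock_scale */
#include "hw/boards.h"

#define REF_CLK_HZ 25000000            /* e.g. 25 MHz, as on the mps2 boards */

static void my_board_init(MachineState *machine)
{
    /* system_clock_scale is the period of the external reference clock, in ns */
    system_clock_scale = NANOSECONDS_PER_SECOND / REF_CLK_HZ;
    /* ... create the SoC, map memory, load the guest image ... */
}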
None of this affects the speed of the emulated QEMU CPU (ie how many instructions it executes in a given time period). By default QEMU CPUs will run "as fast as possible". You can use the -icount option to specify that you want the CPU to run at a particular rate relative to real time, which sort of implicitly sets the 'cpu frequency', but this will only sort of roughly set an average -- some instructions will run much faster than others, in a not very predictable way. In general QEMU's philosophy is "run guest code as fast as we can", and we don't make any attempt at anything approaching cycle-accurate or otherwise tightly timed emulation.
Update as of 2020: QEMU now has some API and infrastructure for modelling clock trees, which is documented in docs/devel/clocks.rst in the source tree. This is basically a formalized version of the concepts described above, to make it easier for one device to tell another "my clock rate is 20MHz now" without hacks like the "system_clock_scale" global variable or ad-hoc QOM properties.
SysTick is supplied via a multiplexer: you can choose either the AHB bus clock or the AHB clock divided by 8 as the system timer clock.
An old thread and an oft-asked question, so this should help some of you trying to emulate Cortex systems.
If you are booting with a .dtb, you can add a line clock-frequency = <value>; to the 'timers' block in your .dts and recompile it. This will indeed increase the speed of Cortex processors; value is some suitably large number.

How to Benchmark Runtime on Cortex-M4

I'm pretty new to ARM and am trying to get timing results for functions written in C for a Cortex-M4 processor. Would any of you be able to tell me what steps I need to take to get timing results?
I've been running my code in Keil uVision, but I'm unable to use the program's Performance Analyzer during a real-environment debug. From what I've read, it seems the Performance Analyzer only works outside of simulated debug sessions if one is using a proprietary connector from Keil.
Set a pin high at the start of the function you wish to time, set it low at the end, and measure the pulse width with an oscilloscope.
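For example, a minimal sketch of that pin-toggle method, assuming the STM32 HAL GPIO driver is available; the port and pin are placeholders for whichever spare pin your board exposes:
HAL_GPIO_WritePin(GPIOA, GPIO_PIN_5, GPIO_PIN_SET);    /* marker pin high           */
function_under_test();                                 /* code being timed (yours)  */
HAL_GPIO_WritePin(GPIOA, GPIO_PIN_5, GPIO_PIN_RESET);  /* marker pin low            */
The pulse width seen on the scope is the execution time, plus a few cycles for the GPIO writes themselves.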
Depending on which Cortex-M4 you're using there may be a cycle count register, DWT->CYCCNT, but whether it is included is vendor-defined. Details can be found in the Cortex-M4 Technical Reference Manual. Your processor's datasheet, reference manual and programming manual should provide more information if required.
Alternatively, if you have a fast timer, such as SysTick running from the processor clock, you could initialise the count to 0x00FFFFFF, start it down-counting at the beginning of your function and stop it at the end. You can then work out the time taken as (0x00FFFFFF - SysTick->CVR) * (1 / SysTick frequency).
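A minimal sketch of that SysTick method using CMSIS names (SysTick->VAL is the CMSIS name for SYST_CVR); my_function is a placeholder, and this assumes nothing else (an RTOS or HAL tick) owns SysTick and that the measured code finishes within one 24-bit count-down:
uint32_t time_my_function_in_cycles(void)
{
    SysTick->LOAD = 0x00FFFFFF;                 /* maximum 24-bit reload value    */
    SysTick->VAL  = 0;                          /* clear the current value        */
    SysTick->CTRL = SysTick_CTRL_CLKSOURCE_Msk  /* clock SysTick from the core    */
                  | SysTick_CTRL_ENABLE_Msk;    /* start down-counting            */
    my_function();                              /* code under test (placeholder)  */
    uint32_t remaining = SysTick->VAL;          /* read before stopping the timer */
    SysTick->CTRL = 0;                          /* stop SysTick                   */
    return 0x00FFFFFF - remaining;              /* elapsed core clock cycles      */
}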

FreeRTOS slow systick

I am experiencing a problem with FreeRTOS where the SysTick rate seems to be 1/2 the expected rate. All timer and task delay functions take about 2x the expected time. This was verified in versions 8.2.0 and 8.2.3 on an STM32F100 processor.
There is another posting that looks very similar: that developer is using an MSP430 and reports a tick rate of 400 Hz when expecting a 1000 Hz tick rate.
The RCC register configuration appears to be correct. If I create a non-FreeRTOS project where the systick is correct, it has the same RCC configuration as in the FreeRTOS version.
Suggestions?
When I read:
I created a very simple task that delays for 4 seconds and reports the actual number of elapsed ticks. The ticks are correct but the actual delay is around 8 seconds.
My thought was: if the delay is correct in the number of ticks but the actual time is different, then this is simply a case of the CPU clock running at a different frequency from the one you think it is. Perhaps configCPU_CLOCK_HZ is wrong. However, then you write:
#undef OS_USE_TRACE_SEMIHOSTING
#define OS_USE_TRACE_ITM
you mention semihosting. Are you using semihosting? If so, don't: it will mess up your timing, as it stops the CPU while outputting to the host, and that might be the problem you are seeing.
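If you want to check the real tick rate independently of any trace output, a minimal sketch is to toggle a spare pin from the tick hook and measure it with a scope; this assumes configUSE_TICK_HOOK is set to 1 in FreeRTOSConfig.h and that the STM32 HAL GPIO driver is available (the pin below is a placeholder):
void vApplicationTickHook(void)
{
    HAL_GPIO_TogglePin(GPIOB, GPIO_PIN_0);   /* square wave at half the tick rate */
}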

Does a single-shot timer stop automatically?

I'm implementing Timer 1 (which is basically a compare & capture timer) in comparator mode with single-shot operation. There's an option for starting the timer in continuous mode too.
My question is: when I start the timer in single-shot mode and it reaches the specified count and the compare matches, it generates an interrupt flag, but does that also mean the timer has stopped?
Or do I need to stop it explicitly in single-shot mode too? I think stopping it explicitly only makes sense in continuous mode.
I'm currently checking only the generated interrupt flag, assuming the timer has stopped, clearing the interrupt flag for further operation and then returning from my function.
However, there is a control bit in the timer's control register which can be toggled to make it run or stop. Should I just check that bit after the interrupt flag has been generated, or do I need to reset this control bit too? That would mean I should also have an explicit function to stop the timer.
Additional Information -
I'm using an NXP (Philips) controller.
Thank you in advance,
Prateek
I just read in the NXP datasheet that yes, a timer started in single-shot (one-shot) mode will stop automatically.
By the way, if any of you have an explanation, kindly put it below.
Thank you.
To understand microcontroller timers, you first have to realize that there is generally just one single main timer running. When enabled, this timer counts up until it overflows and then starts over.
When you start a "hardware timer", you only set up a register with a match value of main_timer + delay. The hardware compares this register with the main timer at every tick, and when they match it triggers an interrupt, sets a port pin or does whatever you have configured it to do. Typically, you then have to set up your match register anew.
For more specific answers you have to specify the MCU family and part number used. NXP has made everything from the ancient 8051 to modern ARM Cortex parts, and the timer peripheral hardware will be different for every MCU family.
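As an illustration of the compare-match scheme described above, here is a sketch with entirely hypothetical register names and addresses; whether the hardware also stops itself after the match is exactly the part-specific detail the datasheet has to answer:
#include <stdint.h>

/* Hypothetical register map for illustration only; real addresses,
 * offsets and bit positions come from your part's user manual. */
#define TIMER_BASE    0x40004000UL                                 /* placeholder base address */
#define TIMER_COUNT   (*(volatile uint32_t *)(TIMER_BASE + 0x00))  /* free-running counter     */
#define TIMER_MATCH   (*(volatile uint32_t *)(TIMER_BASE + 0x04))  /* compare/match register   */
#define TIMER_CTRL    (*(volatile uint32_t *)(TIMER_BASE + 0x08))  /* control register         */
#define MATCH_IRQ_EN  (1u << 0)

/* Arm a one-shot compare: request an interrupt delay_ticks from now. */
static void start_oneshot_delay(uint32_t delay_ticks)
{
    TIMER_MATCH  = TIMER_COUNT + delay_ticks;
    TIMER_CTRL  |= MATCH_IRQ_EN;
}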

Keil IDE stopwatch not working in Debug mode

I have been using the ST F4-Discovery board for some time, as have many other friends. We all have the same problem. We are using the Keil IDE (we have used different versions from 4.3 up to 4.7). Whenever we time anything using breakpoints and the stopwatch, it works perfectly in simulation mode. However, when we debug on-board and run the same code, the stopwatch never reports the correct timing; it is actually random. Does anyone know what the problem is?
Thanks
The stopwatch is based on the internal register SEC. There seems to be a bug where, if the register window is not showing, the stopwatch values are not updated. While debugging, select View|Register window and make sure you can see the SEC register value updating. The stopwatch in the status bar should now be updating too.
To solve the stopwatch problem, go to Target Options - Debug - Settings - Trace - Core Clock and set the frequency to 72 MHz, or to your processor's actual core clock.
I found the answer much later. It has to do with the internal debug circuitry. By default, the timer peripherals do not stop when we hit a breakpoint in debug mode but continue counting. This is why we keep getting random measurement intervals between timer interrupt instances when using the stopwatch. In order to get accurate timing, we need the debug circuitry to force the timer peripheral to stop counting once we reach a breakpoint and resume once we step over it. This can be done by writing this code:
SET_BIT(DBGMCU->APB1FZ, DBGMCU_APB1_FZ_DBG_TIM3_STOP);
This instructs timer 3 on the APB1 bus to stop counting at breakpoints.
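If the project uses the STM32 HAL, the same thing can be done with the HAL's freeze macros; the TIM3 macro below is, as far as I know, equivalent to the SET_BIT() line above, and you would pick the macros for whichever timers your code actually uses:
__HAL_DBGMCU_FREEZE_TIM3();   /* stop TIM3 while the core is halted at a breakpoint */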
