I am trying to understand how a timer running on the internal clock works.
I've attached my CubeMX config below.
For now, I have set the main clock to 480 MHz which is the maximum for this STM32H743ZI chip.
I am using TIM 2 so I am looking at APB1.
There, from the clock tree, I see that it is currently set to 240 MHz for Timer Clocks and 120 MHz for Peripheral Clocks.
My first question: why is it using 120 MHz when it clearly says "240 MHz" for "timer clocks on APB1"? I have verified the frequency with an oscilloscope.
It is a long shot, but according to what I read, the maximum timer clock for this chip is 200 MHz. Since 240 is greater than 200, perhaps the chip is automatically applying a /2 divider? I would think the clock configuration tree would say something instead of silently allowing the "240 MHz" to be applied.
My second question:
For the sake of argument, let's say the maximum timer clock really is 200 MHz. How do I go about setting 200 MHz as the clock frequency for APB1? I assume there is no way to achieve 200 MHz without compromising on the max MCU clock and bringing it down from 480 MHz to 400 MHz, perhaps?
Can I trust the automatic prescaler/multiplier adjustments that CubeMX is doing whenever I change something?
How did you check it with the scope? It is inside the chip, isn't it?
The timer clock is 240 MHz; the other peripherals connected to APB1 are clocked at 120 MHz.
according to what I read, the maximum timer clock for this chip is 200 MHz.
Where did you read that? The official documentation says something different.
I think this question does not make much sense.
You can see the values and check them manually if you wish, but in this case computers are usually better than humans. Personally, I have not had any problems with Cube. BTW, I use it only for the clocks (I am too lazy to calculate all the prescalers by hand); otherwise I program the bare-register way.
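For reference, a minimal sketch of that bare-register style, assuming the CMSIS device header for the H7 and the 240 MHz APB1 timer kernel clock discussed above; the 1 kHz update rate and function name are just an illustration, not from the question:

#include "stm32h7xx.h"

/* TIM2 at a 1 kHz update rate from a 240 MHz timer kernel clock. */
void tim2_init_1khz(void)
{
    RCC->APB1LENR |= RCC_APB1LENR_TIM2EN;  /* enable the TIM2 block  */

    TIM2->PSC = 240 - 1;                   /* 240 MHz / 240 = 1 MHz  */
    TIM2->ARR = 1000 - 1;                  /* 1 MHz / 1000 = 1 kHz   */
    TIM2->EGR = TIM_EGR_UG;                /* latch PSC/ARR now      */
    TIM2->CR1 |= TIM_CR1_CEN;              /* start the counter      */
}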
Related
I am working with an EFM microcontroller (Silicon Labs).
I need to make a beep every x seconds while the device is in EM3 mode.
I tried so many ways without success.
Please help me with a code example (I'm a HW man, not a SW guy, haha).
Thanks,
Gal.
Refer to page 8 of the datasheet (thanks to @user694733).
EM3 mode description:
still full CPU and RAM retention, as well as Power-on Reset, Pin reset and Brown-out Detection, with a consumption of only 0.6 μA. The low-power ACMP, asynchronous external interrupt, PCNT, and I2C can wake-up the device.
So these are your options. One of these things can wake up the microcontroller. All of them are external inputs. So the microcontroller cannot wake itself up in this mode. This makes sense because all clocks are stopped.
If you had an external clock connected to the PCNT, you could use that to wake it up.
If you want the microcontroller to wake itself up, then you need EM2 or a lighter sleep mode:
In EM2 the high frequency oscillator is turned off, but with the 32.768 kHz
oscillator running, selected low energy peripherals (LCD, RTC, LETIMER,
PCNT, LEUART, I2C, WDOG and ACMP) are still available
In EM2 mode the microcontroller may wake itself up using the RTC (real-time clock), LETIMER (low-energy timer), WDOG (watchdog timer) or PCNT (pulse counter, which can be set to count pulses of the 32.768kHz clock).
The datasheet recommends using the Real-Time Clock or Low Energy Timer (RTC or LETIMER) modules.
... however, if we pay attention, we see the datasheet mentions something called the ULFRCO, Ultra-Low-Frequency RC Oscillator, which runs at approximately 1000 Hz. By searching for the keyword ULFRCO, we see that it does still run in EM3 mode, and it can be used as input for the WDOG. On page 89 we see this listed as a feature of EM3 mode.
So, you may configure the WDOG to reset the system after a few seconds. When the microcontroller resets due to watchdog timeout, it wakes up. You should not be afraid of using a system reset. The RMU_RSTCAUSE allows you to see that the system was reset because of the watchdog timer (not because it was first turned on or the reset pin was used). Memory contents are probably still there, but all peripherals are reset. As long as you can deal with peripherals being reset, you can probably make this work. You might even be able to use a little bit of assembly programming to jump back to exactly the point where the program left off.
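A minimal sketch of that watchdog approach, assuming Silicon Labs' emlib; the period selection and the beep() stub are placeholders you would adapt:

#include "em_chip.h"
#include "em_emu.h"
#include "em_rmu.h"
#include "em_wdog.h"

static void beep(void) { /* drive your buzzer here */ }

int main(void)
{
    CHIP_Init();

    /* If the last reset came from the watchdog, we just "woke up". */
    if (RMU_ResetCauseGet() & RMU_RSTCAUSE_WDOGRST) {
        beep();
    }
    RMU_ResetCauseClear();

    /* Clock the WDOG from the ~1 kHz ULFRCO and reset after roughly
       2 s (wdogPeriod_2k = 2049 cycles). Pick perSel for your interval. */
    WDOG_Init_TypeDef init = WDOG_INIT_DEFAULT;
    init.clkSel = wdogClkSelULFRCO;
    init.perSel = wdogPeriod_2k;
    init.em3Run = true;              /* keep counting in EM3 */
    WDOG_Init(&init);

    EMU_EnterEM3(true);              /* sleep; the WDOG reset ends EM3 */

    while (1) { }                    /* never reached */
}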
I am using TIM1 on an H743ZI with 3 PWM channels.
I am trying to maximize the PWM resolution so I need to maximize the clock speed on TIM1.
The datasheet (screenshot below) gives 120 MHz and 240 MHz values for Max interface clock and Max timer clock.
What is the difference between the two? I have the clocks set up as shown below, with 120 MHz on APB2's peripheral clocks and 240 MHz on APB2's Timer clocks.
I need a 24 kHz frequency on the PWM channels, so I set the ARR to 4999, which confirms the H743 is using the 120 MHz value and not the 240 MHz one: 120 MHz / (4999 + 1) = 24 kHz.
Is it because I am using the timer in a hardware-related manner, hence the "peripheral clock"?
Of course, my follow-up question would be whether or not I could use the HRTIM instead.
Every timer consists of a counter, which is fed by the timer clock, and a control unit responsible for interfacing with the bus (the core and other peripherals), which is fed by the interface clock.
More generally, all peripherals have a digital control part. This part is fed by the bus clock (of the bus the particular peripheral is connected to). Many peripherals have more than one clock: the ADC, for example, where the digital controller is fed from the bus clock and the analogue part from another clock source.
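Applied to the question above, the relevant relation is f_PWM = f_counter / ((PSC + 1) * (ARR + 1)). A small sketch (the helper name is mine, not from the posts):

#include <stdint.h>

/* ARR value for a target PWM frequency, given the counter clock. */
static uint32_t arr_for_pwm(uint32_t counter_clk_hz, uint32_t psc,
                            uint32_t pwm_hz)
{
    return counter_clk_hz / ((psc + 1u) * pwm_hz) - 1u;
}

/* arr_for_pwm(240000000, 0, 24000) == 9999, so measuring 24 kHz with
   ARR = 4999 means the counter is actually running at 120 MHz. */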
I'm a bit lost with STM32L486 clock management.
I want to change the clock frequency at run-time. Typically I want to be in Low-Power Run/Sleep mode most of the time, and at full frequency the rest of the time.
I know how to set up SYSCLK either at 80 MHz using the PLL or at 1 MHz using the MSI, for example.
However, the problem is that changing SYSCLK messes up most of the peripherals' setup. For example, the USART stops working if I change the clock.
Is it common practice to do that (changing the frequency at runtime)?
The peripherals I need to use are: LPTIM (no problem, since they can be clocked independently from SYSCLK), ADC, AES accelerator, USART, TIM, SPI.
On the STM32L4xx it is not so hard. If you look at the "Clock tree" figure in the datasheet, many clock-dependent peripherals (USART, LPTIM, I2C, ...) can be driven from clock sources other than the bus clock; it is also possible to use the LSE or the internal HSI.
Although the internal HSI is not crystal-controlled, in my experience it is accurate enough for the UART, even over a wide temperature range. You can also tune this oscillator's frequency at runtime by comparing it against a more accurate external clock, or use auto-baudrate detection.
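A minimal sketch of that with the STM32L4 HAL, routing USART1's kernel clock to the 16 MHz HSI so the baud rate survives SYSCLK changes (USART1 is just an example; this assumes the HSI was already enabled in the RCC oscillator setup):

#include "stm32l4xx_hal.h"

void usart1_clock_from_hsi(void)
{
    RCC_PeriphCLKInitTypeDef cfg = {0};
    cfg.PeriphClockSelection = RCC_PERIPHCLK_USART1;
    cfg.Usart1ClockSelection = RCC_USART1CLKSOURCE_HSI;
    HAL_RCCEx_PeriphCLKConfig(&cfg);
    /* The USART baud divider now sees a fixed 16 MHz, whether SYSCLK
       is 80 MHz from the PLL or 1 MHz from the MSI. */
}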
I'm new to ARM MCUs (STM32F411), and I have been trying to find my way around the peripherals using STM's HAL library and STM32Cube.
I've already configured my board in order to use some peripherals:
Timer 2 for running an interrupt with a certain frequency
Timer 3 for running PWMs on 3 channels of it.
ADC with 4 channels, in DMA mode, for reading some analog inputs.
Let us suppose, now, that the PWM's whole period is 100 ms and its duty cycle is 50% (50 ms PWM on and 50 ms PWM off).
I would like to trigger an interrupt after a certain fraction of the PWM on level, let us say 50% of it.
Hence, I would like to run an interrupt at 25 ms in order to use the ADC to sample its analog inputs.
Do you have any suggestion on how could I implement such a kind of interrupt?
Thank you in advance for your help!
Since the ADC of the STM32F411 is used in Regular mode (not Injected mode) and only three of Timer 3's four channels are used to generate PWM, the fourth channel can be used to trigger the ADC.
Hence Timer 3 is configured as follows:
CH1 used in Output Compare mode 0 (TIM3->CCMR1.OC1M = 0, the "frozen" mode: no output waveform, but the compare match event still fires)
CH2, CH3, CH4 used for PWM outputs
Therefore TIM3->CCR1 is loaded with a value corresponding to 25% of the period; it will then generate TIM3_CH1 compare events that can be used to trigger the ADC start-of-conversion at 25% of your TIM3 timebase, as sketched below.
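A rough sketch of the ADC side with the STM32F4 HAL; the handle names (hadc1, htim3) and the buffer are CubeMX-style assumptions, not code from the question:

#include "stm32f4xx_hal.h"

extern ADC_HandleTypeDef hadc1;   /* CubeMX-generated handles assumed */
extern TIM_HandleTypeDef htim3;

static uint32_t adc_buf[4];       /* one slot per regular channel */

void adc_trigger_on_tim3_cc1(void)
{
    /* Fire the CC1 compare event at 25% of the timer period. */
    __HAL_TIM_SET_COMPARE(&htim3, TIM_CHANNEL_1,
                          (__HAL_TIM_GET_AUTORELOAD(&htim3) + 1) / 4);

    /* Use TIM3_CC1 as the regular-group external trigger. */
    hadc1.Init.ExternalTrigConv     = ADC_EXTERNALTRIGCONV_T3_CC1;
    hadc1.Init.ExternalTrigConvEdge = ADC_EXTERNALTRIGCONVEDGE_RISING;
    HAL_ADC_Init(&hadc1);

    /* Each trigger converts the 4-channel sequence into adc_buf. */
    HAL_ADC_Start_DMA(&hadc1, adc_buf, 4);
}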
I am experiencing a problem with FreeRTOS where the SysTick rate seems to be half the expected rate. All timer and task delay functions take about twice as long as they should. This was verified in FreeRTOS versions 8.2.0 and 8.2.3 on an STM32F100 processor.
There is another posting that looks very similar. That developer is using an MSP430 and claims the tick rate is 400 Hz when expecting a 1000 Hz tick rate.
The RCC register configuration appears to be correct. If I create a non-FreeRTOS project where the systick is correct, it has the same RCC configuration as in the FreeRTOS version.
Suggestions?
When I read:
I created a very simple task that delays for 4 seconds and reports the
actual number of elapsed ticks. The ticks are correct but the actually
delay is around 8 seconds.
When reading this, my thought was: if the delay is correct in the number of ticks but the elapsed time is different, then this is simply a case of the CPU clock running at a different frequency from the one you think it is. Perhaps configCPU_CLOCK_HZ is wrong. However, then you write:
#undef OS_USE_TRACE_SEMIHOSTING
#define OS_USE_TRACE_ITM
You mention semihosting. Are you using semihosting? If so, don't: it will mess up your timing because it stops the CPU while outputting to the host, and that might be the problem you are seeing.
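On the configCPU_CLOCK_HZ point: a common way to keep it honest is to derive it from the CMSIS SystemCoreClock variable instead of hard-coding a number. A sketch, assuming a CMSIS-based project (FreeRTOSConfig.h fragment):

extern uint32_t SystemCoreClock;          /* maintained by SystemInit()/
                                             SystemCoreClockUpdate()    */
#define configCPU_CLOCK_HZ    ( SystemCoreClock )
#define configTICK_RATE_HZ    ( ( TickType_t ) 1000 )

The Cortex-M port derives the SysTick reload value from configCPU_CLOCK_HZ / configTICK_RATE_HZ, so a configCPU_CLOCK_HZ set to double the real core clock would produce exactly the halved tick rate described above.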