I'm trying to get the STM32F446 running at full speed, following this tutorial: https://www.youtube.com/watch?v=GJ_LFAlOlSk&t=826s. I did everything he does, but the clock speed of my timers is deadly slow: when blinking an LED with a prescaler of 9 and an ARR of 20, the blinking is easily visible by eye. What am I doing wrong?
void setup_clock(void)
{
    // Enable HSE and wait until it is ready
    *RCC_CR |= (1 << RCC_CR_HSEON);
    while (!(*RCC_CR & (1 << RCC_CR_HSERDY)));
    // Set the power enable clock and voltage regulator
    *RCC_APB1ENR |= (1 << RCC_APB1ENR_PWREN);
    *PWR_CR |= PWR_CR_VOS(PWR_CR_VOS_SCALEM1);
    // Configure flash
    *FLASH_ACR = (1 << FLASH_ACR_DCEN) | (1 << FLASH_ACR_ICEN) | (1 << FLASH_ACR_PRFTEN);
    *FLASH_ACR |= FLASH_ACR_LATENCY(5);
    // Configure HCLK, PCLK1, PCLK2
    *RCC_CFGR &= ~RCC_CFGR_HPRE_MASK;
    *RCC_CFGR |= RCC_CFGR_HPRE(RCC_CFGR_HPRE_NODIV);  // HCLK 180MHz
    *RCC_CFGR &= ~RCC_CFGR_PPRE1_MASK;
    *RCC_CFGR |= RCC_CFGR_PPRE1(RCC_CFGR_PPRE1_DIV4); // PCLK1 45MHz
    *RCC_CFGR &= ~RCC_CFGR_PPRE2_MASK;
    *RCC_CFGR |= RCC_CFGR_PPRE2(RCC_CFGR_PPRE2_DIV2); // PCLK2 90MHz
    // Configure the main PLL
    *RCC_PLLCFGR = RCC_PLLCFGR_PLLN(180) |
                   RCC_PLLCFGR_PLLP(RCC_PLLCFGR_PLLP_2) |
                   RCC_PLLCFGR_PLLR(2) |
                   RCC_PLLCFGR_PLLM(4) |
                   (1 << RCC_PLLCFGR_PLLSRC);
    // Enable PLL
    *RCC_CR |= (1 << RCC_CR_PLLON);
    while (!(*RCC_CR & (1 << RCC_CR_PLLRDY)));
    // Use PLL as clock source
    *RCC_CFGR &= ~RCC_CFGR_SW_MASK;
    *RCC_CFGR |= RCC_CFGR_SW(RCC_CFGR_SW_PLL_P);
    while ((*RCC_CFGR & RCC_CFGR_SWS_MASK) != RCC_CFGR_SWS(RCC_CFGR_SWS_PLL));
    // Set the CLOCK Ready status LED
    *GPIO_ODR(STATUS_BASE) |= (1 << STATUS_CLKREADY);
}
Below is a complete working project for the NUCLEO-F446RE using GNU tools, everything you need to build and run.
Differences:
I am starting off in the default power mode (it looks like you are too, yes?), so I conservatively set the flash latency to 8 wait states (9 clocks). (You can tighten this after; I would personally set 8 first, get it working, then work back to 5.)
I am not using the I-cache nor the D-cache.
I set the system clock to HSE first, then set the PLL to use it as well. You skip that, and that is probably fine, as the HSE is up and ready (to be used by the PLL).
This line
*RCC_CFGR &= ~RCC_CFGR_SW_MASK;
switches the clock to HSI, and then
*RCC_CFGR |= RCC_CFGR_SW(RCC_CFGR_SW_PLL_P);
switches the clock to PLL. You need to make up your mind; do not use/abuse the registers in this way, as Lundin commented. You should do clean read-modify-writes: read, zero the bits that need to be zeroed (or all of them in the field), set the bits to be set, then write to the register. Use temporary variables for this. Or some flavor of
reg = (reg&this) | that;
but certainly not
reg &= this;
reg |= that;
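For the SW field, that clean pattern would look something like this (a sketch reusing your own macro names, which I am assuming expand the way they read):

uint32_t cfgr = *RCC_CFGR;              // read once
cfgr &= ~RCC_CFGR_SW_MASK;              // clear the SW field
cfgr |= RCC_CFGR_SW(RCC_CFGR_SW_PLL_P); // select the PLL
*RCC_CFGR = cfgr;                       // one store: no detour through HSI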
In general, that is. I doubt that is your problem though... just a comment from a couple/few of us.
You have PLLQ in an invalid state: your single write to RCC_PLLCFGR leaves the PLLQ field at 0, and valid values are 2 to 15. It might be a problem; just try it (adding something like RCC_PLLCFGR_PLLQ(4), assuming a macro in the same style as your others, would make it valid).
I am building for cortex-m0 out of habit/portability of code, can change that easily.
Before PJ brings this up:
*RCC_APB1ENR |= (1 << RCC_APB1ENR_PWREN);
*PWR_CR |= PWR_CR_VOS(PWR_CR_VOS_SCALEM1);
is risky. You need to examine the compiled output, and it can vary based on compiler, version, phase of the moon. If the str to RCC_APB1ENR is immediately followed by the ldr of PWR_CR, that may not work. What I did see, doing experiments based on PJ's comment on another ticket, was that for GPIO (which was the case there) you can for some reason read the MODER register with the peripheral off, so an str of the enable then an ldr of MODER works, and the instructions to do the modify and write then leave more than enough time before the write. But if you jam the MODER register, then specifically depending on your compiler and settings, it can optimize those into two back-to-back stores; I was able to cause this with one compiler and not another (change settings, though, and they fix and fail, etc.). The GET32/PUT32 thing I do ensures there is no problem with touching the peripheral before the enable has had time to settle. YMMV.
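To illustrate that defensive pattern with the GET32/PUT32 style used below (a sketch; I am taking PWREN as bit 28 of RCC_APB1ENR at offset 0x40, and VOS as bits 15:14 of PWR_CR, from my reading of RM0390, so verify):

#define RCC_APB1ENR (RCCBASE+0x40)
#define PWR_CR 0x40007000

PUT32(RCC_APB1ENR,GET32(RCC_APB1ENR)|(1<<28)); //PWREN
GET32(RCC_APB1ENR); //dummy read back: the enable settles before we touch PWR
PUT32(PWR_CR,GET32(PWR_CR)|(3<<14)); //VOS scale 1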
flash.s
.cpu cortex-m0
.thumb
.thumb_func
.global _start
_start:
.word 0x20001000 @ initial stack pointer
.word reset      @ reset vector
.thumb_func
reset:
bl notmain
b hang
.thumb_func
hang: b .
.thumb_func
.globl PUT32
PUT32:  @ PUT32(addr,val): a single 32-bit store, nothing else
str r1,[r0]
bx lr
.thumb_func
.globl GET32
GET32:  @ GET32(addr): a single 32-bit load
ldr r0,[r0]
bx lr
flash.ld
MEMORY
{
rom : ORIGIN = 0x08000000, LENGTH = 0x1000
}
SECTIONS
{
.text : { *(.text*) } > rom
}
notmain.c
void PUT32 ( unsigned int, unsigned int );
unsigned int GET32 ( unsigned int );
void dummy ( unsigned int );
#define RCCBASE 0x40023800
#define RCC_AHB1ENR (RCCBASE+0x30)
#define RCC_CR (RCCBASE+0x00)
#define RCC_PLLCFGR (RCCBASE+0x04)
#define RCC_CFGR (RCCBASE+0x08)
#define FLASH_ACR 0x40023C00
#define GPIOABASE 0x40020000
#define GPIOA_MODER (GPIOABASE+0x00)
#define GPIOA_BSRR (GPIOABASE+0x18)
//PA5
#define STK_CSR 0xE000E010
#define STK_RVR 0xE000E014
#define STK_CVR 0xE000E018
static void clock_init ( void )
{
unsigned int ra;
//switch to external clock.
ra=GET32(RCC_CR);
ra|=1<<16;
PUT32(RCC_CR,ra);
while(1) if(GET32(RCC_CR)&(1<<17)) break;
if(1)
{
ra=GET32(RCC_CFGR);
ra&=~3;
ra|=1;
PUT32(RCC_CFGR,ra);
while(1) if(((GET32(RCC_CFGR)>>2)&3)==1) break;
}
//HSE ready
}
static void pll_init ( void )
{
unsigned int ra;
//clock_init();
ra=GET32(FLASH_ACR);
ra&=(~(0xF<<0));
ra|=( 8<<0);
PUT32(FLASH_ACR,ra);
//poll this?
ra=GET32(RCC_CFGR);
ra&=(~(0x7<<13)); //PPRE2 is a 3-bit field
ra|=( 4<<13); //180/90 = 2
ra&=(~(0x7<<10)); //PPRE1 is a 3-bit field
ra|=( 5<<10); //180/45 = 4
PUT32(RCC_CFGR,ra);
//HSE 8Mhz
//PLLM: it is recommended to select a PLL input frequency of 2 MHz to limit
// PLL jitter.
//PLLN: VCO input is 2, VCO output wants 100..432, so PLLN between 50 and 216
//PLLM 4, PLLN 180, VCO 360, PLLP 2
//PLLM 8/4 = 2
//PLLN 2 * 180 = 360
//PLLP 360 / 2 = 180
//PLLR 2?
//PLLQ divides the VCO: 360/48 = 7.5, not an integer, so no 48MHz USB from
// this setup; just park it at a valid value.
ra=0;
ra|=2<<28; //PLLR
ra|=4<<24; //PLLQ dont care
ra|=1<<22; //PLLSRC HSE
ra|=2<<16; //PLLP
ra|=180<<6; //PLLN
ra|=4<<0; //PLLM
PUT32(RCC_PLLCFGR,ra);
ra=GET32(RCC_CR);
ra|=1<<24;
PUT32(RCC_CR,ra);
while(1) if(GET32(RCC_CR)&(1<<25)) break;
ra=GET32(RCC_CFGR);
ra&=~3;
ra|=2;
PUT32(RCC_CFGR,ra);
while(1) if(((GET32(RCC_CFGR)>>2)&3)==2) break;
}
static void led_init ( void )
{
unsigned int ra;
ra=GET32(RCC_AHB1ENR);
ra|=1<<0; //enable GPIOA
PUT32(RCC_AHB1ENR,ra);
ra=GET32(GPIOA_MODER);
ra&=~(3<<(5<<1)); //PA5
ra|= (1<<(5<<1)); //PA5
PUT32(GPIOA_MODER,ra);
}
static void led_on ( void )
{
PUT32(GPIOA_BSRR,((1<<5)<< 0));
}
static void led_off ( void )
{
PUT32(GPIOA_BSRR,((1<<5)<<16));
}
void do_delay ( unsigned int sec )
{
unsigned int ra,rb,rc,rd;
rb=GET32(STK_CVR);
for(rd=0;rd<sec;)
{
ra=GET32(STK_CVR);
rc=(rb-ra)&0x00FFFFFF; //SysTick counts down, 24 bits; this is elapsed ticks
if(rc>=16000000) //one second's worth of ticks at 16MHz
{
rb=ra;
rd++;
}
}
}
int notmain ( void )
{
unsigned int rx;
led_init();
PUT32(STK_CSR,0x00000004); //SysTick off, clock source = processor clock
PUT32(STK_RVR,0xFFFFFFFF); //max reload (register is 24 bits)
PUT32(STK_CSR,0x00000005); //enable, processor clock
for(rx=0;rx<5;rx++)
{
led_on();
while(1) if((GET32(STK_CVR)&0x200000)!=0) break; //watch bit 21 of the down-counter; its toggle rate tracks the core clock
led_off();
while(1) if((GET32(STK_CVR)&0x200000)==0) break;
}
clock_init();
for(rx=0;rx<5;rx++)
{
led_on();
while(1) if((GET32(STK_CVR)&0x200000)!=0) break;
led_off();
while(1) if((GET32(STK_CVR)&0x200000)==0) break;
}
pll_init();
while(1)
{
led_on();
while(1) if((GET32(STK_CVR)&0x200000)!=0) break;
led_off();
while(1) if((GET32(STK_CVR)&0x200000)==0) break;
}
return(0);
}
build
arm-linux-gnueabi-as --warn --fatal-warnings -mcpu=cortex-m0 flash.s -o flash.o
arm-linux-gnueabi-gcc -Wall -O2 -ffreestanding -mcpu=cortex-m0 -mthumb -c notmain.c -o notmain.o
arm-linux-gnueabi-ld -nostdlib -nostartfiles -T flash.ld flash.o notmain.o -o notmain.elf
arm-linux-gnueabi-objdump -D notmain.elf > notmain.list
arm-linux-gnueabi-objcopy -O binary notmain.elf notmain.bin
(You can naturally change the cortex-m0s to cortex-m4s.)
Copy notmain.bin to the NUCLEO card and watch the user LED change speeds: the 16MHz HSI rate first, then half that on the 8MHz HSE, then much, much faster on the PLL.
Hmm...
when VOS[1:0] = '11', the maximum value of f_HCLK is 168 MHz. It can be extended to 180 MHz by activating the over-drive mode. The over-drive mode is not available when VDD ranges from 1.8 to 2.1 V (refer to Section 5.1.3: Voltage regulator for details on how to activate the over-drive mode).
and
11: Scale 1 mode (reset value)
(so no need to mess with that)
and
Entering Over-drive mode
It is recommended to enter Over-drive mode when the application is not running critical
tasks and when the system clock source is either HSI or HSE. To optimize the configuration
time, enable the Over-drive mode during the PLL lock phase.
To enter Over-drive mode, follow the sequence below:
1. Select HSI or HSE as system clock.
2. Configure RCC_PLLCFGR register and set PLLON bit of RCC_CR register.
3. Set ODEN bit of PWR_CR register to enable the Over-drive mode and wait for the
ODRDY flag to be set in the PWR_CSR register.
4. Set the ODSW bit in the PWR_CR register to switch the voltage regulator from Normal
mode to Over-drive mode. The System will be stalled during the switch but the PLL
clock system will be still running during locking phase.
5. Wait for the ODSWRDY flag in the PWR_CSR to be set.
6. Select the required Flash latency as well as AHB and APB prescalers.
7. Wait for PLL lock.
8. Switch the system clock to the PLL.
9. Enable the peripherals that are not generated by the System PLL (I2S clock, SAI1 and
SAI2 clocks, USB_48MHz clock....).
So: I am running at room temperature, the chip is nowhere near max temp, which is likely why it works fine being overclocked as I have done here. (Technically it is not complete; it needs to either be 168MHz or set up for over-drive.)
If you want 180 vs 168 you should do these steps as documented; the core of it looks like the sketch below.
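Roughly, steps 3 to 5 in the same GET32/PUT32 style (a sketch; I am taking ODEN as bit 16 and ODSW as bit 17 of PWR_CR, and ODRDY/ODSWRDY as bits 16/17 of PWR_CSR, from RM0390, so double-check):

#define PWRBASE 0x40007000
#define PWR_CR (PWRBASE+0x00)
#define PWR_CSR (PWRBASE+0x04)

PUT32(PWR_CR,GET32(PWR_CR)|(1<<16)); //ODEN: enable over-drive
while(1) if(GET32(PWR_CSR)&(1<<16)) break; //wait for ODRDY
PUT32(PWR_CR,GET32(PWR_CR)|(1<<17)); //ODSW: switch the regulator
while(1) if(GET32(PWR_CSR)&(1<<17)) break; //wait for ODSWRDY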
I suspect you are not running your part near max temp either, so you should be able to get away with 180 as well. Try removing your PWR register stuff and see if that helps, make your flash latency longer, etc. Change to 168MHz (same divisors, PLLN=168: 2*168/2 = 168MHz), etc.
Did you try for 180 out of the gate, or did you try some more reasonable speeds first that do not push any edges: something under 45MHz, then something between 45 and 90, then 90 plus, then work up to 180?
EDIT
The Flash memory interface accelerates code execution with a system of instruction prefetch and cache lines.
Main features
• Flash memory read operations
• Flash memory program/erase operations
• Read / write protections
• Prefetch on I-Code
• 64 cache lines of 128 bits on I-Code
• 8 cache lines of 128 bits on D-Code
CubeMX has a very handy clock configuration tool. I do not use HAL, but this tool saves a lot of time.
As I see it, you are trying to reinvent the wheel with your own register definitions. Use the standard CMSIS ones, as creating your own does not make any sense.
It is not possible to have a 180MHz clock and use the USB at the same time, as you cannot get the 48MHz required by the USB peripheral.
Here you have some possible settings (CubeMX clock-tree screenshots in the original answer) for a 25MHz external oscillator, an 8MHz external oscillator, and the 16MHz internal oscillator.
I implemented an interrupt function for TIMER1 on a PIC16F877A MCU on a PIC-DIP40 development board. I configured the timer prescaler to 1 and the preload value to 55536 so that the interrupt period is 0.01 s, and use a counter of 100 to count a 1 s interval. Fosc is 4 MHz, so my calculation is:
interrupt time = (4 / Fosc) * (65536 - 55536) = (4 / 4000000) * (65536 - 55536) = 0.01 s
and with a counter of 100 that generates a 1 s interval.
Currently I have no oscilloscope to test the actual 1 s interval, so I am blinking an LED (LED2) in the timer interrupt and another LED (LED1) at the same 1 s interval using the __delay_ms(1000); function.
As expected, the two LEDs blink synchronously (turn ON and OFF at the same time) for the first iterations. But after some iterations there is a clear difference between their blink times (turn ON and OFF times), and after several minutes the difference is almost 1 s. So the timer interrupt is not working as expected.
Is my calculation for the interrupt time wrong, or am I missing something in the Timer1 configuration?
The overall goal is to generate a 1 s time interval and test its validity without using an oscilloscope.
Here is my code:
// CONFIG
#pragma config FOSC = HS // Oscillator Selection bits (HS oscillator)
#pragma config WDTE = OFF // Watchdog Timer Enable bit (WDT disabled)
#pragma config PWRTE = OFF // Power-up Timer Enable bit (PWRT disabled)
#pragma config BOREN = OFF // Brown-out Reset Enable bit (BOR disabled)
#pragma config LVP = OFF // Low-Voltage (Single-Supply) In-Circuit Serial Programming Enable bit (RB3 is digital I/O, HV on MCLR must be used for programming)
#pragma config CPD = OFF // Data EEPROM Memory Code Protection bit (Data EEPROM code protection off)
#pragma config WRT = OFF // Flash Program Memory Write Enable bits (Write protection off; all program memory may be written to by EECON control)
#pragma config CP = OFF // Flash Program Memory Code Protection bit (Code protection off)
#include <xc.h>
#include <stdint.h> // for uint16_t
#include <pic16f877a.h>
#define _XTAL_FREQ 4000000
#define LED1_ON PORTDbits.RD7 = 0
#define LED1_OFF PORTDbits.RD7 = 1
#define LED2_ON PORTDbits.RD6 = 0
#define LED2_OFF PORTDbits.RD6 = 1
#define LED2_TOGGLE PORTDbits.RD6 = ~PORTDbits.RD6
uint16_t preloadValue = 55536 ;
uint16_t counter = 0 ;
uint16_t secCounter1 = 100 ;
void io_config() {
TRISD &= ~((1 << _PORTD_RD7_POSITION) | (1 << _PORTD_RD6_POSITION)) ; //RD7 and RD6 are output LEDs
}
void timer1_init(){
TMR1 = preloadValue ; //loading the preload value
T1CON &= ~((1 << _T1CON_T1CKPS1_POSN) | (1 << _T1CON_T1CKPS0_POSN) | (1 << _T1CON_TMR1CS_POSN)) ; //prescaler 1:1, internal clock (Fosc/4)
T1CONbits.TMR1ON = 1 ; //timer 1 is ON
LED2_ON ;
}
void interrupt_en_configure(){
INTCON |= (1 << _INTCON_GIE_POSITION) | (1 << _INTCON_PEIE_POSITION) ; //global and peripheral interrupt on
PIE1 |= _PIE1_TMR1IE_MASK ; //timer 1 interrupt enable
TMR1IF = 0 ; //clearing interrupt flag
}
void __interrupt() ISR(){
if(TMR1IF){
counter ++ ;
if (counter == secCounter1){
counter = 0 ;
LED2_TOGGLE ;
}
TMR1 = preloadValue ;
TMR1IF = 0 ;
}
}
void main(void) {
io_config();
interrupt_en_configure() ;
timer1_init() ;
while (1) {
LED1_ON ;
__delay_ms(1000);
LED1_OFF ;
__delay_ms(1000);
}
}
You should not expect them to operate synchronously, for the following reasons:
First, you do not know how __delay_ms() is implemented or what "promises" of precision it may make. It is certainly not using TIMER1, because you are controlling that. In fact, the documentation gives some implementation details, and you really cannot expect precision.
Secondly, even if __delay_ms() were both accurate and synchronous, you are invoking it in a loop with the software overhead of the loop, the function call, and whatever you are doing to toggle the LED. That is a few cycles on every iteration that do not affect the interrupt interval, which is locked to the hardware and independent of the software timing.
The issue of the precision of __delay_ms() is in fact addressed in this Microchip support article, where it starts:
If an accurate delay is required, or if there are other tasks that can be performed during the delay, then using a timer to generate an interrupt is the best way to proceed.
In this case you should trust your code over the library-provided delay, which is intentionally crude (because it does not use up a valuable H/W timer resource).
__delay_ms() delays by running an empty loop, but it commonly cannot be exact. You would need to look into the actual machine code that is run to calculate the real delay. BTW, this is not rocket science and a great learning task. (Been there, done that.)
Now the rest of your loop (LED switching, looping) adds to this. Therefore, your pure software-driven blinker is not exact.
However, your interrupt-driven blinker is not exact either. You reset the timer at the end of the ISR, after several clock cycles have passed. You need to take this into account, and don't forget the interrupt latency. Even worse, depending on the conditional statement, the reset happens at different times after the timer overflow.
Producing exact timing is difficult, especially with such a simple device.
The solution is to avoid software entirely for the reset of the timer. Please read chapter 8 of the data sheet and use the capture/compare/PWM module to reset the timer at the appropriate value; a sketch follows below.
The worst thing that could still happen is some jitter, just because the ISR might have different latencies. But the timer runs as exactly as your system's crystal. On average your LED will blink correctly.
Anyway, if your timing requirements are not that hard, consider living with some inaccuracy. Then use the most simple solution you like best.
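A minimal sketch of that CCP approach (assumptions: XC8, Timer1 clocked internally at Fosc/4 = 1 MHz, and CCP1 in "compare with special event trigger" mode, which resets Timer1 in hardware on a match; check the data sheet for the exact off-by-one behavior of the period):

#include <xc.h>
#include <stdint.h>

void timer1_ccp_init(void)
{
    CCPR1H = (10000 - 1) >> 8;   // match every ~10000 ticks = 10 ms at 1 MHz
    CCPR1L = (10000 - 1) & 0xFF;
    CCP1CON = 0x0B;              // compare mode, special event trigger resets TMR1
    T1CON = 0x01;                // prescaler 1:1, internal clock, Timer1 on
    PIR1bits.CCP1IF = 0;
    PIE1bits.CCP1IE = 1;         // interrupt on every compare match
    INTCONbits.PEIE = 1;
    INTCONbits.GIE = 1;
}

void __interrupt() ISR(void)
{
    static uint8_t ticks = 0;
    if (PIR1bits.CCP1IF) {
        PIR1bits.CCP1IF = 0;     // note: no TMR1 reload here, hardware already did it
        if (++ticks == 100) {    // 100 x 10 ms = 1 s
            ticks = 0;
            PORTDbits.RD6 = !PORTDbits.RD6; // toggle LED2
        }
    }
}

Since the ISR never touches TMR1, its latency can no longer accumulate into the period.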
I'm learning embedded systems by following this tutorial. In their attached code for LED blinking on the TM4C123, they created the variable ulLoop, which confused me, since they just assigned the clock-enable register to ulLoop but never used it afterwards. However, I tried deleting the line ulLoop = SYSCTL_RCGCGPIO_R; and the LED stopped blinking. As they said in the tutorial, "The ulLoop variable and the statement containing ulLoop is present there only to halt 3 clock cycles before moving to peripherals, which is a must while working with TIVA."
I cannot understand what they mean by "halt 3 clock cycles" and "moving to peripherals", and why it needs to halt 3 clock cycles, not 4, or 5 cycles, or none at all. In addition, if I knew nothing about the magic variable mentioned in the tutorial and just found my program not working, how would I be supposed to know where the problem is, since the build produces 0 errors and 0 warnings? Please bear with me if the question is not asked in the right way or sounds silly.
#define SYSCTL_RCGCGPIO_R (*(( volatile unsigned long *)0x400FE608 ) )
#define GPIO_PORTF_DATA_R (*(( volatile unsigned long *)0x40025038 ) )
#define GPIO_PORTF_DIR_R (*(( volatile unsigned long *)0x40025400 ) )
#define GPIO_PORTF_DEN_R (*(( volatile unsigned long *)0x4002551C ) )
#define GPIO_PORTF_CLK_EN 0x20
#define GPIO_PORTF_PIN1_EN 0x02
#define LED_ON1 0x02
#define GPIO_PORTF_PIN2_EN 0x04
#define LED_ON2 0x04
#define GPIO_PORTF_PIN3_EN 0x08
#define LED_ON3 0x08
#define DELAY_VALUE 1000000
volatile unsigned long j=0;
static void Delay(void){
for (j=0; j<DELAY_VALUE ; j++);
}
int main ( void )
{
volatile unsigned long ulLoop ; // I don't understand why creating this variable
SYSCTL_RCGCGPIO_R |= GPIO_PORTF_CLK_EN ;
ulLoop = SYSCTL_RCGCGPIO_R; // But if not adding this line the LED won't blink
GPIO_PORTF_DIR_R |= GPIO_PORTF_PIN1_EN ;
GPIO_PORTF_DEN_R |= GPIO_PORTF_PIN1_EN ;
GPIO_PORTF_DIR_R |= GPIO_PORTF_PIN2_EN ;
GPIO_PORTF_DEN_R |= GPIO_PORTF_PIN2_EN ;
GPIO_PORTF_DIR_R |= GPIO_PORTF_PIN3_EN ;
GPIO_PORTF_DEN_R |= GPIO_PORTF_PIN3_EN ;
// Loop forever .
while (1)
{
GPIO_PORTF_DATA_R &= LED_ON3;
GPIO_PORTF_DATA_R &= LED_ON2;
GPIO_PORTF_DATA_R |= LED_ON1;
Delay();
GPIO_PORTF_DATA_R &= LED_ON1;
GPIO_PORTF_DATA_R &= LED_ON2;
GPIO_PORTF_DATA_R |= LED_ON3;
Delay();
GPIO_PORTF_DATA_R &= LED_ON3;
GPIO_PORTF_DATA_R &= LED_ON1;
GPIO_PORTF_DATA_R |= LED_ON2;
Delay();
}
}
Since in the previous line
SYSCTL_RCGCGPIO_R |= GPIO_PORTF_CLK_EN ;
the program is enabling the clock, this line
ulLoop = SYSCTL_RCGCGPIO_R;
is just dummy code that gives the newly enabled peripheral clock a little time to settle before the port registers are touched.
You will find this on pretty much any microcontroller you work with: after enabling a clock, you must allow some time before using what it drives.
Now, why 3 clock cycles? That comes from the microcontroller datasheet; for the TM4C123 it specifies that there must be a delay of 3 system clocks after a peripheral's module clock is enabled before any of that module's registers are accessed.
Why not 5 or more? Of course, you don't want to waste more clock cycles on this than necessary, so the rest of the program can execute as soon as possible.
How does this dummy line correspond to 3 clock cycles?
ulLoop = SYSCTL_RCGCGPIO_R;
Well, this really differs from one controller to another, or more precisely from one compiler to another. The compiler translates this C line into assembly, where these simple load/store instructions take roughly a clock cycle each. So it seems whoever wrote this code looked at the assembly generated by the compiler and confirmed that this line buys at least 3 cycles of delay.
how am I supposed to know where the problem is without further information
In the embedded world, this is achieved by debugging. Some issues are really hard to debug, especially when the problem is somewhere in the controller initialization sequence.
You should be very careful when initializing the controller (clock, peripherals), following the datasheet instructions/recommendations; a more self-documenting way to do this particular wait is sketched below.
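On the TM4C123 there is also a documented, self-describing way to do this wait: poll the GPIO peripheral-ready register until the port reports itself usable. A sketch (I am taking the PRGPIO address, System Control base + 0xA08, from the TM4C123 datasheet; double-check it for your exact part):

#define SYSCTL_PRGPIO_R (*(( volatile unsigned long *)0x400FEA08 ) )

SYSCTL_RCGCGPIO_R |= GPIO_PORTF_CLK_EN ; // enable the Port F clock
while ((SYSCTL_PRGPIO_R & GPIO_PORTF_CLK_EN) == 0) ; // wait until Port F is ready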
OK, so I have been attempting to write some code on an MSP430FR5994 TI LaunchPad that uses Timer0 and 3 separate compare registers to trigger 3 separate ISRs. I successfully got one to work; however, as soon as I add another compare register, the CCIFG flag sets and the second ISR never completes execution. I have watched the code in the debugger in both CCStudio and IAR, and the same thing happens in both: the setup registers are correct and the TA0R register is counting and will trigger the first ISR based on TA0CCR0, but all the other compare registers (1, 2, 3, etc.) will not trigger and execute successfully. The code is below; ideas on what I am doing wrong would be much appreciated.
#include "msp430.h"
#include <stdbool.h>
#define COUNT_1 12000
#define COUNT_2 800
int main( void )
{
// Stop watchdog timer to prevent time out reset
WDTCTL = WDTPW + WDTHOLD;
PM5CTL0 &= ~LOCKLPM5;
P1DIR |= BIT0 + BIT1;
P1OUT = BIT0 + BIT1;
//set up and enable TA0 and TA1 in continuous mode
TA0CCR0 = COUNT_1;
TA1CCR1 = COUNT_2;
TA0CTL = TASSEL__ACLK + MC_2; //continuous mode: counts the full 16-bit range
TA1CTL = TASSEL__ACLK + MC_2;
TA0CCTL0 = CCIE; //enable compare reg 0 interrupt
TA1CCTL1 = CCIE; //enable compare reg 1 interrupt
//TA0CTL |= TAIE;
_BIS_SR( GIE); //ENABLE GLOBAL INTERRUPTS
while(true){}
}
#pragma vector= TIMER0_A0_VECTOR //compare interrupt 0 flashes red led
__interrupt void TIMER0_A0(void) {
P1OUT ^= BIT1 ;
}
#pragma vector = TIMER1_A1_VECTOR //compare interrupt 1 flashes green led
__interrupt void TIMER1_A1(void) {
P1OUT ^= BIT0;
}
The User's Guide says in section 25.2.6.1:
The TAxCCR0 CCIFG flag is automatically reset when the TAxCCR0 interrupt
request is serviced.
However, this does not happen for the other CCRx interrupts, because multiple ones use the same interrupt vector.
Section 25.2.5.2 says:
The highest-priority enabled interrupt generates a number in the TAxIV register (see register description). […]
Any access, read or write, of the TAxIV register automatically resets the highest-pending interrupt flag.
So you always have to read the TAxIV register (and with three or more CCRs, you need it to find out which CCR triggered the interrupt):
#pragma vector = TIMER1_A1_VECTOR
__interrupt void TIMER1_A1(void) {
switch (TA1IV) {
case TAIV__TACCR1:
P1OUT ^= BIT0;
break;
case TAIV__TACCR2:
...
break;
}
}
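As a usage note, TI's own examples usually wrap the read in the __even_in_range() intrinsic, which promises the compiler an even, bounded value so it can emit a compact jump table. The same ISR in that style (a sketch; the intrinsic and the upper bound of 14 follow TI's example code, so verify against your header):

#pragma vector = TIMER1_A1_VECTOR
__interrupt void TIMER1_A1(void) {
    switch (__even_in_range(TA1IV, 14)) {
    case TAIV__TACCR1: // reading TA1IV above already cleared this flag
        P1OUT ^= BIT0;
        break;
    case TAIV__TACCR2:
        break;
    default:
        break;
    }
}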
I tried to implement a classic blink example on an STM32L476RG Nucleo board.
According to the STM32L4x datasheet, LD2 is connected to GPIOA port 5 (PA5).
PA5 hangs off the AHB2 bus.
Note: I used Keil uVision 5; I created a new uVision project with the STM32L476RGTx target.
In the "Manage Run-Time Environment" dialog box I selected:
CMSIS >> Core (flag)
Device >> Startup (flag)
Here the code:
#include "stm32l4xx.h" // Device header
//#include <stdint.h>
//#define MASK(x) ((uint32_t) (1<<(x))) // bitmasking
void delayMs(int delay);
int main(void){
// RCC->AHB2RSTR |=1;
// RCC->AHB2RSTR &=~1;
// RCC->AHB2ENR |= MASK(0); //bitwise OR. Enable GPIOA clock
RCC->AHB2ENR |= 1;
//GPIOA->MODER |= MASK(10);
GPIOA->MODER |= 0x400;
while(1){
//GPIOA->ODR |= MASK(4);
GPIOA->ODR |= 0x20;
delayMs(500);
//GPIOA->ODR &= ~MASK(4);
GPIOA->ODR &= ~0x20;
delayMs(500);
}
}
void delayMs(int delay){
int i;
for(;delay>0; delay --){
for (i=0; i<3195;i++);
}
}
The Build output returns:
Build started: Project: blinknew
*** Using Compiler 'V5.06 update 6 (build 750)', folder: 'C:\Keil_v5\ARM\ARMCC\Bin'
Build target 'Target 1'
compiling main.c...
linking...
Program Size: Code=520 RO-data=408 RW-data=0 ZI-data=1632
".\Objects\blinknew.axf" - 0 Error(s), 0 Warning(s).
Build Time Elapsed: 00:00:09
and when I download it, Keil uV 5 returns:
Load "C:\\Users\\gmezz\\OneDrive\\Documenti\\Bare_Metal\\Blinknew\\Objects\\blinknew.axf"
Erase Done.
Programming Done.
Verify OK.
Flash Load finished at 22:37:52
The LED should blink with a period of 1 s, but in reality nothing happens.
Honestly, I don't understand what is going wrong.
Can someone help me?
GM
I may be wrong, but according to the reference manual (RM0351), section 6.2.19, you should wait 2 clock cycles after enabling the peripheral clock before accessing its registers. Try introducing a short delay after the RCC->AHB2ENR |= 1; line, as sketched below. In your case, I think the MODER register is not getting the correct value.
I also suggest checking the actual values of the registers with a debugger.
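A minimal sketch of such a delay, in the same CMSIS style as the question (the dummy read-back of a volatile register is a common idiom to burn the required cycles before the next access):

RCC->AHB2ENR |= 1;     // enable the GPIOA clock
(void)RCC->AHB2ENR;    // dummy read-back: gives the enable time to take effect
GPIOA->MODER |= 0x400; // now the MODER write should land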
I'm trying to write my own driver for USART_TX on an STM32L476RG Nucleo Board.
Here are the datasheet and the reference manual.
I'm using Keil uVision 5 and, in the Manage Run-Time Environment dialog, I selected:
CMSIS > Core
Device > Startup
Xtal = 16MHz
I want to create a single-character transmitter. According to the manual instructions in Sec. 40, p. 1332, I wrote this code:
// APB1 connects USART2
// The USART2 EN bit on APB1ENR1 is the 17th
// See alternate functions pins and label for USART2_TX! PA2 is the pin and AF7 (AFRL register) is the function to be set
#include "stm32l4xx.h" // Device header
#define MASK(x) ((uint32_t) (1<<(x))) // no trailing semicolon, so it is safe inside expressions
void USART2_Init(void);
void USART2_Wr(int ch);
void delayMs(int delay);
int main(void){
USART2_Init();
while(1){
USART2_Wr('A');
delayMs(100);
}
}
void USART2_Init(void){
RCC->APB1ENR1 |= MASK(17); // Enable USART2 on APB1
// we know that the pin that permits the USART2_TX is the PA2, so...
RCC->AHB2ENR |= MASK(0); // enable GPIOA
// Now, in GPIOA 2 put the AF7, which can be set by placing AF7=0111 in AFSEL2 (pin2 selected)
// AFR[0] refers to GPIOA_AFRL register
// Remember: each pin asks for 4 bits to define the alternate functions. see pg. 87
// of the datasheet
GPIOA->AFR[0] |= 0x700;
GPIOA->MODER &= ~MASK(4);// now ... we set the PA2 directly with moder as alternate function "10"
// USART Features -----------
//USART2->CR1 |=MASK(15); //OVER8=1
USART2->BRR = 0x683; //USARTDIV=16Mhz/9600?
//USART2->BRR = 0x1A1; //This one works!!!
USART2->CR1 |=MASK(0); //UE
USART2->CR1 |=MASK(3); //TE
}
void USART2_Wr(int ch){
//wait until the TX buffer is empty
while(!(USART2->ISR & 0x80)) {} //TXE (bit 7) of the USART ISR status register is set
//when the transmit data register is empty, so we spin here until we can write
USART2->TDR =(ch & 0xFF);
}
void delayMs(int delay){
int i;
for (; delay>0; delay--){
for (i=0; i<3195; i++);
}
}
Now, the problem:
The system works, but not properly. I mean: if I use RealTerm at 9600 baud, as configured by 0x683 in the USART_BRR register, it shows me the wrong characters, but if I set RealTerm to 2400 baud it works!
To derive the 0x683 for the USART_BRR register I referred to Sec. 40.5.4, USART baud rate generation, which says that with OVER8=0, USARTDIV=BRR. In my case, USARTDIV = 16MHz/9600 = 1667d = 683h.
I think the problem lies in this code row:
USART2->BRR = 0x683; //USARTDIV=16Mhz/9600?
because if I replace it with
USART2->BRR = 0x1A1; //This one works!!!
the system works at 9600 baud.
What's wrong in my code, or in my understanding of the USARTDIV computation?
Thank you in advance for your support.
Sincerely,
GM
The default clock source for the USART is PCLK1 (figure 15). PCLK1 is SYSCLK / AHB_PRESC / APB1_PRESC. If 0x1A1 (417) results in a baud rate of 9600, that suggests PCLK1 = 417 * 9600 ≈ 4MHz.
4MHz happens to be the default frequency of your processor (and PCLK1) at start-up, when running from the internal MSI RC oscillator. So the most likely explanation is that you have not configured the clock tree, and are not running from the 16MHz HSE as you believe.
Either configure your clock tree to use the 16MHz source, or perform your calculations with the MSI frequency. The MSI precision is just about good enough over the normal temperature range to maintain a sufficiently accurate baud rate, but it is not ideal.
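To make that dependency explicit rather than hard-coding a magic number, you could compute BRR from the clock you believe you have (a sketch; PCLK1_HZ is my assumption of the 4MHz MSI reset default, change it if you configure the clock tree):

#define PCLK1_HZ 4000000u // assumed: MSI default after reset
#define BAUD 9600u

USART2->BRR = (PCLK1_HZ + BAUD/2u) / BAUD; // OVER8=0: rounded USARTDIV = 417 = 0x1A1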