USB Interrupt Masks not loading STM32L151CC - c

I'm currently encountering a strange issue with the STM USB libraries. I am able to successfully load firmware onto the STM32L152D-EVAL board (which uses an STM32L152ZD), however, I am unable to modify the same code to work on my form-factor board, which uses the aforementioned STM32L151CC.
After stepping through the code with the debugger (a ULINK2, using the KEIL uVision4 IDE), I noticed that the code would crash while setting the interrupt mask in the function USB_SIL_Init()
uint32_t USB_SIL_Init(void)
{
  /* USB interrupts initialization */
  /* clear pending interrupts */
  _SetISTR(0);
  wInterrupt_Mask = IMR_MSK;
  /* set interrupts mask */
  _SetCNTR(wInterrupt_Mask);
  return 0;
}
More specifically, _SetCNTR(wInterrupt_Mask); is what gives me the error. I haven't changed the value of IMR_MSK between the two boards. Its value is given as
#define IMR_MSK (CNTR_CTRM | CNTR_WKUPM | CNTR_SUSPM | CNTR_ERRM | CNTR_SOFM \
| CNTR_ESOFM | CNTR_RESETM )
which is 0xBF00
_SetCNTR is defined as follows
#define _SetCNTR(wRegValue) (*CNTR = (uint16_t)wRegValue)
With CNTR being defined as
/* Control register */
#define CNTR ((__IO unsigned *)(RegBase + 0x40))
And RegBase is
#define RegBase (0x40005C00L) /* USB_IP Peripheral Registers base address */
I'm currently looking through STM's documentation on this, but I can't seem to find anything specifically relating to the default states of the two different chips. I'm guessing it has something to do with the base address; however, the datasheet shows that this is the correct address.
Can anyone help me out on this?
Thanks!

STM32 bare metal USB implementation

UPDATE
For anyone interested, here is a step-by-step guide explaining how to build a bare-metal USB stack, how to tackle such a project, and what you need to know for each step: STM32USB#GitHub
TLDR:
I have an STM32G441 and want to implement a USB driver without using any HAL libraries, just CMSIS, for the learning experience, to save space, and because what I want to do would require changing the HAL anyway.
But I can't get this thing to receive anything. I'm stuck trying to get the device address, which is never handed to the code. The HAL middleware works just fine, so it's not a hardware issue.
What I'm doing
I enable the USB clock (correctly, I assume, because the device sends ACK signals visible on my logic analyzer), power up the USB peripheral as defined in the datasheet, enable all the necessary interrupts, and handle the reset event by initializing the BTable and endpoint 0. Now I expect to receive a CTR interrupt, which never appears.
Reference Manual
Clock
The μC runs on a 25 MHz HSE clock. The USB peripheral runs on the PLL Q clock at ~48 MHz; the RCC settings were verified with the CubeMX clock configurator. AHB runs at half speed because I get a bus-error hard fault if I try to run it at full speed, but that's another question. The system clock is set to 143.75 MHz.
RCC->CR |= RCC_CR_HSEON | RCC_CR_HSION;
// Configure PLL (R=143.75, Q=47.92)
RCC->CR &= ~RCC_CR_PLLON;
while (RCC->CR & RCC_CR_PLLRDY) {
}
RCC->PLLCFGR |= RCC_PLLCFGR_PLLSRC_HSE | RCC_PLLCFGR_PLLM_0 | (23 << RCC_PLLCFGR_PLLN_Pos) | RCC_PLLCFGR_PLLQ_1;
RCC->PLLCFGR |= RCC_PLLCFGR_PLLREN | RCC_PLLCFGR_PLLQEN;
RCC->CR |= RCC_CR_PLLON;
// Select PLL as main clock, AHB/2 > otherwise Bus Error Hard Fault
RCC->CFGR |= RCC_CFGR_HPRE_3 | RCC_CFGR_SW_PLL;
// Select & Enable IO Clocks (PLL > USB, ADC; HSI16 > UART)
RCC->CCIPR = RCC_CCIPR_CLK48SEL_0 | RCC_CCIPR_ADC12SEL_1 | RCC_CCIPR_USART1SEL_1 | RCC_CCIPR_USART2SEL_1 | RCC_CCIPR_USART3SEL_1 | RCC_CCIPR_UART4SEL_1;
RCC->AHB2ENR |= RCC_AHB2ENR_ADC12EN | RCC_AHB2ENR_GPIOAEN | RCC_AHB2ENR_GPIOBEN | RCC_AHB2ENR_GPIOCEN;
RCC->APB1ENR1 |= RCC_APB1ENR1_USBEN | RCC_APB1ENR1_UART4EN | RCC_APB1ENR1_USART3EN | RCC_APB1ENR1_USART2EN;
RCC->APB2ENR |= RCC_APB2ENR_USART1EN;
// Enable DMAMUX & DMA1 Clock
RCC->AHB1ENR |= RCC_AHB1ENR_DMAMUX1EN | RCC_AHB1ENR_DMA1EN;
USB Memory
As far as I know, the USB BTable and endpoint buffers need to be placed in the USB-SRAM, not in regular SRAM. I've added some linker directives to create a section for that, which seems to work just fine according to the memory analyzer. __MEM2USB just converts an absolute address into an offset relative to the USB-SRAM base.
#define __USB_MEM __attribute__((section(".usbbuf")))
#define __USBBUF_BEGIN 0x40006000
#define __MEM2USB(X) (((int)X - __USBBUF_BEGIN))
First question: access to this memory is only allowed to be 16 bits wide. But, contrary to e.g. the STM32F103, there seems to be no need for padding. The memory tool has some problems displaying this region because the USB-SRAM only handles WORD (16-bit) access while the tool uses DWORD access, but copying the memory allocated by the HAL word by word also shows no padding. Is that correct? So I should be able to use all 1024 bytes, rather than seeing 1024 but effectively having only 512. This is also the reason why __MEM2USB does not divide the address by 2.
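For illustration, here is a minimal sketch (not from the original post; the helper name and signature are made up) of a copy routine that respects the 16-bit access width when writing into the packet memory:
#include <stdint.h>

// Hypothetical helper: copy a buffer into USB-SRAM using only half-word
// (16-bit) accesses, since the packet memory is 16 bits wide.
static void usb_copy_to_pma(volatile uint16_t *dst, const uint8_t *src, uint16_t len)
{
    uint16_t i;
    for (i = 0; i < len / 2u; i++) {
        dst[i] = (uint16_t)src[2 * i] | ((uint16_t)src[2 * i + 1] << 8);
    }
    if (len & 1u) {
        dst[i] = src[2 * i];   // trailing odd byte; upper half stays 0
    }
}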
Then I create some structures for the BTable and the zero endpoint. The BTable ends up at 0x40006000 by default. Endpoint 0 has an RX and a TX buffer of at most 64 bytes each, as per the USB spec. The alignments are taken from the reference manual. The memory is not automatically zeroed out.
typedef struct {
    unsigned short ADDR_TX;
    unsigned short COUNT_TX;
    unsigned short ADDR_RX;
    unsigned short COUNT_RX;
} USB_BTABLE_ENTRY;
__ALIGNED(8)
__USB_MEM
static USB_BTABLE_ENTRY BTable[8] = {0};
__ALIGNED(2)
__USB_MEM
static char EP0_Buf[2][64] = {0};
Initialization
Enable the NVIC, then power up, wait 1 μs until the clock is stable as per the datasheet, then clear the reset state, clear pending interrupts, enable interrupts, and finally enable the internal pull-up to start enumeration.
NVIC_SetPriority(USB_HP_IRQn, 0);
NVIC_SetPriority(USB_LP_IRQn, 0);
NVIC_SetPriority(USBWakeUp_IRQn, 0);
NVIC_EnableIRQ(USB_HP_IRQn);
NVIC_EnableIRQ(USB_LP_IRQn);
NVIC_EnableIRQ(USBWakeUp_IRQn);
USB->CNTR &= ~USB_CNTR_PDWN;
// Wait 1μs until clock is stable
SysTick->LOAD = 100;
SysTick->VAL = 0;
SysTick->CTRL = 1;
while ((SysTick->CTRL & SysTick_CTRL_COUNTFLAG_Msk) == 0) {
}
SysTick->CTRL = 0;
USB->CNTR &= ~USB_CNTR_FRES;
USB->ISTR = 0;
USB->CNTR |= USB_CNTR_RESETM | USB_CNTR_CTRM | USB_CNTR_WKUPM | USB_CNTR_SUSPM | USB_CNTR_ESOFM;
USB->BCDR |= USB_BCDR_DPPU;
USB Reset
Now the host sends a reset signal, which is detected correctly. During the reset signal, I initialize the BTable and EP0. I set EP0 to ACK on RX and NAK on TX requests, as do other bare-metal USB examples and the HAL (the STAT bits are toggle, not write, but the register is in a known state of 0x00 because the hardware resets it on a reset). Lastly I put the USB peripheral in enable mode and reset the device address to 0.
if ((USB->ISTR & USB_ISTR_RESET) != 0) {
    USB->ISTR = ~USB_ISTR_RESET;
    // Enable EP0
    USB->BTABLE = __MEM2USB(BTable);
    BTable[0].ADDR_TX = __MEM2USB(EP0_Buf[0]);
    BTable[0].COUNT_TX = 0;
    BTable[0].ADDR_RX = __MEM2USB(EP0_Buf[1]);
    BTable[0].COUNT_RX = (1 << 15) | (1 << 10);        // BL_SIZE=1, NUM_BLOCK=1 -> 64-byte RX buffer
    USB->EP0R = USB_EP_CONTROL | (2 << 4) | (3 << 12); // STAT_TX = NAK, STAT_RX = VALID (register is 0 after reset)
    USB->CNTR = USB_CNTR_CTRM | USB_CNTR_RESETM;
    USB->DADDR = USB_DADDR_EF;
}
Debugging shows that the BTable is indeed at 0x40006000 and the Buffer address is written (I assume) correctly. The EP0 register was compared to a working HAL implementation and they are the same at that point.
Here I'm stuck
I expect the host to send the device address next (it doesn't; it sends a suspend and a wakeup and then another reset first), which should trigger the CTR interrupt (whose mask bit is set). Point is, it never does, and I don't know why. The host sends the request just fine and the device ACKs that request just fine (per the logic analyzer), but the CTR interrupt is never triggered. Any ideas what else I can try or where to look?
Update
I've now compared the messages from my implementation with the HAL ones. The interrupt now handles the exact same messages in the exact same order, and the USB registers also contain exactly the same values for every request. I've changed the BTable and USB-SRAM layout to contain the exact same values as the HAL after the reset interrupt.
I had to implement SUSP and WKUP handling for this to work, which was probably one of the things that was missing. Now they both behave exactly the same. It turns out the problem is that I never receive a proper SOF packet. The HAL gets its first SOF directly after the second reset (HW reset > 2x ESOF > SUSP > WKUP > RESET > (optionally 1 ESOF) > SOF), while mine gets an ERR instead of the SOF.
Looks like the error is not related to the USB registers or USB-SRAM. Next step will be to compare all registers I can think of as relevant between the two implementations. Maybe I forgot a clock?
Spent almost a week, just to figure out that I had misconfigured my 48 MHz clock source...
RCC->CCIPR = RCC_CCIPR_CLK48SEL_0 | ...
This sets the CLK48SEL to Reserved (01), not the PLLQ-Clock (10)...
RCC->CCIPR = RCC_CCIPR_CLK48SEL_1 | ...
Now I get the SOF packets and the CTR interrupt just fine. May this question serve as a bare-metal USB reference in the future.

ADC giving strange output for PIC16F15325

I was trying to read the analog voltage on pin RC3 of a PIC16F15325. I have 3.23 V across a potentiometer, and its output is nearly 1.65 V, which goes to pin RC3 of the PIC microcontroller. For configuration and libraries I used MPLAB Code Configurator. The code is as follows:
#include "mcc_generated_files/mcc.h"
void main(void)
{
// initialize the device
SYSTEM_Initialize();
EUSART1_Initialize();
ADC_Initialize();
adc_result_t val1 = 0;
// When using interrupts, you need to set the Global and Peripheral Interrupt Enable bits
// Use the following macros to:
// Enable the Global Interrupts
//INTERRUPT_GlobalInterruptEnable();
// Enable the Peripheral Interrupts
//INTERRUPT_PeripheralInterruptEnable();
// Disable the Global Interrupts
//INTERRUPT_GlobalInterruptDisable();
// Disable the Peripheral Interrupts
//INTERRUPT_PeripheralInterruptDisable();
while (1)
{
// Add your application code
val1 = ADC_GetConversion(19); // selected channel RC3
printf("Value - %hu \n",val1);
DELAY_milliseconds(1000);
}
}
For the voltage I mentioned above, I expected a value near 511. Anything beyond 1023 [i.e. (2^10) - 1] is strange, as the PIC has a 10-bit ADC. However, the output I get is:
Kindly help me solve this issue.
Actually, the output was correct but looked strange only because of its format. I just looked into the datasheet, which describes two ways of formatting the ADC result:
PIC16(L)F15325/45 datasheet, page 228
Inside ADC_Initialize(), ADCON1 = 0x10, which means ADFM selects a left-justified result. I just did val1 = val1 >> 6; after ADC_GetConversion(19); and it worked as expected.
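For clarity, here is the fix in the context of the main loop above (just the shift spelled out; channel 19 / RC3 as in the original code):
// With ADFM = 0 (left-justified result), the 10 bits sit in the upper part of
// the returned value, so shift right by 6 to get the expected 0..1023 range.
val1 = ADC_GetConversion(19) >> 6;   // selected channel RC3
printf("Value - %hu \n", val1);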

Can't access timer 4 pic32 with TMR4, T4CON etc

I've looked in the pic32ms.h file and it seems there are no definitions for timer 4. For timer 2 it has the following:
/*
* Timer2 registers
*/
#define T2CON PIC32_R (0x0800)
#define T2CONSET PIC32_R (0x0808)
#define TMR2 PIC32_R (0x0810)
#define PR2 PIC32_R (0x0820)
I've tried adding lines for timer 4 with the correct addresses, but it does not solve the problem. So what I want to do instead (if there's no better solution) is to access the registers without using predefined values. Timer 4 occupies the virtual addresses 0x0C00 to 0x0C20. How do I access these addresses and set up the timer?
The solution was to set the address as a volatile unsigned int pointer which could then be used to access timer 4:
volatile unsigned int *T4CON = (volatile unsigned int *)0x****0C00;
However I ended up only using timer 2 by changing the way I handled overflow flags so that it could be detected by different components in the code.
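For reference, a minimal sketch of what the missing Timer4 definitions and a basic setup could look like, mirroring the Timer2 block shown above; the offsets and bit values assume a PIC32MX-style Type B timer and should be checked against the device's datasheet:
// Hypothetical Timer4 definitions in the style of the existing Timer2 ones
#define T4CON    PIC32_R (0x0C00)
#define T4CONSET PIC32_R (0x0C08)
#define TMR4     PIC32_R (0x0C10)
#define PR4      PIC32_R (0x0C20)

// Basic setup: stop the timer, clear it, set the period, select a prescaler, enable
void timer4_init(unsigned int period)
{
    T4CON = 0;            // timer off, default configuration
    TMR4 = 0;             // clear the counter
    PR4 = period;         // period register
    T4CON = 0x0070;       // TCKPS = 0b111 -> 1:256 prescaler
    T4CONSET = 0x8000;    // set the ON bit to start the timer
}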

Where can I find the linker command file for the MSP430G2553?

I'm using the MSPGCC to compile and link my programs. I'd like to see how the hardware addresses are assigned in the linker command file. Inside the header file for my device I found these lines:
/* External references resolved by a device-specific linker command file */
#define SFR_8BIT(address) extern volatile unsigned char address
#define SFR_16BIT(address) extern volatile unsigned int address
Further on in the file I found lines like this under the GPIO section:
SFR_8BIT(P1IN); /* Port 1 Input */
SFR_8BIT(P1OUT); /* Port 1 Output */
SFR_8BIT(P1DIR); /* Port 1 Direction */
SFR_8BIT(P1IFG); /* Port 1 Interrupt Flag */
What I'd like to see is how P1IN is defined. I'm trying to get a better understanding of what it is so I can use it.
I realize it can be used like this:
P1OUT &= 0xF7; // clear bit 3
I'd like to find the linker file so I can better understand how the address is being assigned. I know I can just look at the data sheet to see what it is, but I'd like to know how the linker is finding it.
They are defined in the file msp430g2553.cmd.
/************************************************************
* DIGITAL I/O Port1/2 Pull up / Pull down Resistors
************************************************************/
P1IN = 0x0020;
P1OUT = 0x0021;
...
PS: I'm using CCS. The file is located at path\to\ccs\ccs_base\msp430\include along with the header file msp430g2553.h.
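To make the mechanism concrete, here is a minimal sketch (my own illustration, not taken from the CCS files) of how the header declaration, the linker command file, and the application code fit together:
// msp430g2553.h (simplified): the header only declares the symbol;
// SFR_8BIT(P1OUT) expands to:
extern volatile unsigned char P1OUT;

// msp430g2553.cmd: the linker command file pins the symbol to an address:
//   P1OUT = 0x0021;

// Application code: the linker resolves P1OUT to location 0x0021, so this
// statement reads and writes the Port 1 output register directly.
void led_off(void)
{
    P1OUT &= 0xF7;   // clear bit 3, as in the example above
}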

Writing Device Drivers for Microcontrollers, where to define IO Port pins?

I always seem to encounter this dilemma when writing low-level code for MCUs.
I never know where to declare pin definitions so as to make the code as reusable as possible.
In this case I'm writing a driver to interface an 8051 to an MCP4922 12-bit serial DAC.
I'm unsure how/where I should declare the pin definitions for the CS (chip select) and LDAC (data latch) pins of the DAC. At the moment they're declared in the header file for the driver.
I've done a lot of research trying to figure out the best approach but haven't really found anything.
I basically want to know what the best practices are... if there are books worth reading, online information, examples, etc., any recommendations would be welcome.
Here is a snippet of the driver so you get the idea:
/**
 * @brief This function is used to write a 16-bit data word to DAC B - 12 data bits plus 4 configuration bits
 * @param dac_data A 12-bit word
 * @param ip_buf_unbuf_select Input buffered/unbuffered select bit. Buffered = 1; Unbuffered = 0
 * @param gain_select Output gain selection bit. 1 = 1x (VOUT = VREF * D/4096). 0 = 2x (VOUT = 2 * VREF * D/4096)
 */
void MCP4922_DAC_B_TX_word(unsigned short int dac_data, bit ip_buf_unbuf_select, bit gain_select)
{
    unsigned char low_byte = 0, high_byte = 0;
    CS = 0;                                    /** Select the chip */
    high_byte |= ((0x01 << 7) | (0x01 << 4));  /** Select DAC B and set SHDN high for active operation */
    if (ip_buf_unbuf_select) high_byte |= (0x01 << 6);
    if (gain_select) high_byte |= (0x01 << 5);
    high_byte |= ((dac_data >> 8) & 0x0F);
    low_byte |= dac_data;
    SPI_master_byte(high_byte);
    SPI_master_byte(low_byte);
    CS = 1;
    LDAC = 0;                                  /** Latch the data */
    LDAC = 1;
}
This is what I did in a similar case; the example below is for an I²C driver:
// Structure holding information about an I²C bus
struct IIC_BUS
{
int pin_index_sclk;
int pin_index_sdat;
};
// Initialize I²C bus structure with pin indices
void iic_init_bus( struct IIC_BUS* iic, int idx_sclk, int idx_sdat );
// Write data to an I²C bus, toggling the bits
void iic_write( struct IIC_BUS* iic, uint8_t iicAddress, uint8_t* data, uint8_t length );
All pin indices are declared in an application-dependent header file to allow quick overview, e.g.:
// ...
#define MY_IIC_BUS_SCLK_PIN 12
#define MY_IIC_BUS_SDAT_PIN 13
#define OTHER_PIN 14
// ...
In this example, the I²C bus implementation is completely portable. It only depends on an API that can write to the chip's pins by index.
Edit:
This driver is used like this:
// main.c
#include "iic.h"
#include "pin-declarations.h"
main()
{
struct IIC_BUS mybus;
iic_init_bus( &mybus, MY_IIC_BUS_SCLK_PIN, MY_IIC_BUS_SDAT_PIN );
// ...
iic_write( &mybus, 0x42, some_data_buffer, buffer_length );
}
In one shop I worked at, the pin definitions were put into a processor-specific header file. At another shop, I broke the header files into themes associated with modules in the processor, such as DAC, DMA and USB. A master include file for the processor included all of these themed header files. We could model different varieties of the same processor by including different module header files in the processor file.
You could also create an implementation header file. This file would define I/O pins in terms of the processor header file. This gives you one layer of abstraction between your application and the hardware. The idea is to decouple the application from the hardware as much as possible.
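As a rough sketch of that idea for the 8051/MCP4922 case in the question (file and pin names are invented, Keil C51 syntax assumed), such an implementation header might look like this:
/* board_io.h - maps the driver's signals onto the processor header's
   definitions, so the DAC driver never names a physical port bit directly. */
#ifndef BOARD_IO_H
#define BOARD_IO_H

#include <reg51.h>   /* processor-specific header */

sbit CS   = P1^3;    /* MCP4922 chip select, hypothetical pin P1.3 */
sbit LDAC = P1^4;    /* MCP4922 latch input, hypothetical pin P1.4 */

#endif /* BOARD_IO_H */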
If only the driver needs to know about the CS pin, then the declaration should not appear in the header, but within the driver module itself. Code re-use is best served by hiding data at the most restrictive scope possible.
In the event that an external module needs to control CS, add an access function to the device driver module so that you have single point control. This is useful if during debugging you need to know where and when an I/O pin is being asserted; you only have one point to apply instrumentation or breakpoints.
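A short sketch of that suggestion (the function name is invented): keep CS private to the driver source file and expose one access function:
/* mcp4922.c - CS is known only inside the driver module */
void MCP4922_set_CS(unsigned char level)   /* single point of control for CS */
{
    CS = (level != 0);   /* one place to set a breakpoint or add tracing */
}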
The answer with the run-time configuration will work for a decent CPU like an ARM or PowerPC... but the author is running an 8051 here. #define is probably the best way to go. Here's how I would break it down:
blah.h:
#define CSN_LOW() CS = 0
#define CSN_HI() CS = 1
#define LATCH_STROBE() \
do { LDAC = 0; LDAC = 1; } while (0)
blah.c:
#include "blah.h"

void blah_update( U8 high, U8 low )
{
    CSN_LOW();
    SPI_master_byte(high);
    SPI_master_byte(low);
    CSN_HI();
    LATCH_STROBE();
}
If you need to change the pin definitions or move to a different CPU, it should be obvious where you need to make updates. It also helps when you have to adjust the timing on the bus (i.e. insert a delay here and there), as you don't need to make changes all over the place. Hope it helps.
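For example, if the bus timing ever needs tweaking, only the macros change (delay_us() is a hypothetical helper):
#define CSN_LOW()  do { CS = 0; delay_us(1); } while (0)  /* settle after select  */
#define CSN_HI()   do { delay_us(1); CS = 1; } while (0)  /* hold before deselect */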
