UPDATE
For anyone interested, here are step-by-step instructions and an explanation of how to build a bare metal USB stack, how to tackle such a project, and what you need to know for each step: STM32USB#GitHub
TLDR:
I have an STM32G441 and want to implement a USB driver without using any HAL libraries, just CMSIS - for the learning experience, to save space, and because what I want to do would require changing the HAL anyway.
But I can't get this thing to receive anything. I'm stuck trying to get the device address, which is never handed to my code. The HAL middleware works just fine, so it's not a hardware issue.
What I'm doing
I enable the USB clock (correctly, I assume, because I can see the device ACK packets on my logic analyzer), power up the USB peripheral as described in the datasheet, enable all the necessary interrupts, and handle the reset event by initializing the BTable and endpoint 0. At that point I expect a CTR interrupt, which never arrives.
Reference Manual
Clock
The μC runs on a 25 MHz HSE clock. The USB peripheral runs on the PLL Q clock at ~48 MHz; the RCC settings were verified with the CubeMX clock configurator. AHB runs at half speed because I get a bus-error hard fault if I try to run it at full speed, but that's another question. The system clock is set to 143.75 MHz.
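For reference, the PLL arithmetic behind those numbers (assuming the usual G4 encoding: the M divider is the PLLM field + 1, and the R/Q dividers are 2 × (field + 1)) works out as:
f_VCO  = 25 MHz / M × N = 25 MHz / 2 × 23 = 287.5 MHz   (PLLM field = 1 -> /2, PLLN = 23)
f_PLLR = 287.5 MHz / 2  = 143.75 MHz                    (PLLR field = 0 -> /2, system clock)
f_PLLQ = 287.5 MHz / 6  ≈ 47.92 MHz                     (PLLQ field = 2 -> /6, USB clock)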
RCC->CR |= RCC_CR_HSEON | RCC_CR_HSION;
// Configure PLL (R=143.75, Q=47.92)
RCC->CR &= ~RCC_CR_PLLON;
while (RCC->CR & RCC_CR_PLLRDY) {
}
RCC->PLLCFGR |= RCC_PLLCFGR_PLLSRC_HSE | RCC_PLLCFGR_PLLM_0 | (23 << RCC_PLLCFGR_PLLN_Pos) | RCC_PLLCFGR_PLLQ_1;
RCC->PLLCFGR |= RCC_PLLCFGR_PLLREN | RCC_PLLCFGR_PLLQEN;
RCC->CR |= RCC_CR_PLLON;
// Select PLL as main clock, AHB/2 > otherwise Bus Error Hard Fault
RCC->CFGR |= RCC_CFGR_HPRE_3 | RCC_CFGR_SW_PLL;
// Select & Enable IO Clocks (PLL > USB, ADC; HSI16 > UART)
RCC->CCIPR = RCC_CCIPR_CLK48SEL_0 | RCC_CCIPR_ADC12SEL_1 | RCC_CCIPR_USART1SEL_1 | RCC_CCIPR_USART2SEL_1 | RCC_CCIPR_USART3SEL_1 | RCC_CCIPR_UART4SEL_1;
RCC->AHB2ENR |= RCC_AHB2ENR_ADC12EN | RCC_AHB2ENR_GPIOAEN | RCC_AHB2ENR_GPIOBEN | RCC_AHB2ENR_GPIOCEN;
RCC->APB1ENR1 |= RCC_APB1ENR1_USBEN | RCC_APB1ENR1_UART4EN | RCC_APB1ENR1_USART3EN | RCC_APB1ENR1_USART2EN;
RCC->APB2ENR |= RCC_APB2ENR_USART1EN;
// Enable DMAMUX & DMA1 Clock
RCC->AHB1ENR |= RCC_AHB1ENR_DMAMUX1EN | RCC_AHB1ENR_DMA1EN;
USB Memory
As far as I know, the USB BTable and endpoint buffers need to be placed in the USB SRAM, not in regular SRAM. I've added some linker directives to create a section for that, which seems to work just fine according to the memory analyzer. __MEM2USB just converts an absolute address into an offset relative to the start of the USB SRAM.
#define __USB_MEM __attribute__((section(".usbbuf")))
#define __USBBUF_BEGIN 0x40006000
#define __MEM2USB(X) (((int)X - __USBBUF_BEGIN))
First question: access to this memory is only 16 bits wide. But, contrary to e.g. the STM32F103, there seems to be no need for padding. The memory tool has some problems displaying this region because it only handles WORD access while the tool uses DWORD access, but copying the memory allocated by the HAL word by word also shows no padding. Is that correct? So I should be able to use the full 1024 bytes, not just see 1024 but only have 512. This is also the reason why __MEM2USB does not divide the address by 2.
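Assuming that is correct (plain 16-bit access, no padding), a minimal sketch of copying a payload into the packet memory half-word by half-word (the helper name and the stdint/stddef includes are mine):
#include <stdint.h>
#include <stddef.h>

// Copy len bytes into USB-SRAM using 16-bit writes only.
static void USB_CopyToPMA(volatile uint16_t *dst, const uint8_t *src, size_t len) {
    size_t i;
    for (i = 0; i + 1 < len; i += 2) {
        dst[i / 2] = (uint16_t)src[i] | ((uint16_t)src[i + 1] << 8);
    }
    if (i < len) {                 // odd trailing byte
        dst[i / 2] = src[i];
    }
}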
Then I create some structures for the BTable and the zero endpoint. The BTable ends up at 0x40006000 by default. Endpoint 0 has an RX and a TX buffer with a maximum of 64 bytes each, as per the USB spec. The alignments are taken from the reference manual. The memory is not automatically zeroed out.
typedef struct {
    unsigned short ADDR_TX;
    unsigned short COUNT_TX;
    unsigned short ADDR_RX;
    unsigned short COUNT_RX;
} USB_BTABLE_ENTRY;

__ALIGNED(8)
__USB_MEM
static USB_BTABLE_ENTRY BTable[8] = {0};

__ALIGNED(2)
__USB_MEM
static char EP0_Buf[2][64] = {0};
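Because the memory is not zeroed automatically (the startup code only clears regular SRAM), it can be cleared by hand before use - a minimal sketch, again assuming plain 16-bit access over the whole 1024 bytes:
static void USB_ClearSRAM(void) {
    volatile uint16_t *p = (volatile uint16_t *)__USBBUF_BEGIN;
    for (int i = 0; i < 512; i++) {   // 1024 bytes = 512 half-words
        p[i] = 0;
    }
}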
Initialization
Enable the NVIC interrupts, then power up the peripheral, wait 1 μs until the clock is stable as per the datasheet, then clear the reset state, clear pending interrupts, enable the interrupt sources, and finally enable the internal pull-up to start enumeration.
NVIC_SetPriority(USB_HP_IRQn, 0);
NVIC_SetPriority(USB_LP_IRQn, 0);
NVIC_SetPriority(USBWakeUp_IRQn, 0);
NVIC_EnableIRQ(USB_HP_IRQn);
NVIC_EnableIRQ(USB_LP_IRQn);
NVIC_EnableIRQ(USBWakeUp_IRQn);
USB->CNTR &= ~USB_CNTR_PDWN;
// Wait 1μs until clock is stable
SysTick->LOAD = 100;
SysTick->VAL = 0;
SysTick->CTRL = 1;
while ((SysTick->CTRL & SysTick_CTRL_COUNTFLAG_Msk) == 0) {
}
SysTick->CTRL = 0;
USB->CNTR &= ~USB_CNTR_FRES;
USB->ISTR = 0;
USB->CNTR |= USB_CNTR_RESETM | USB_CNTR_CTRM | USB_CNTR_WKUPM | USB_CNTR_SUSPM | USB_CNTR_ESOFM;
USB->BCDR |= USB_BCDR_DPPU;
USB Reset
Now the host sends a reset signal, which is detected correctly. During the reset handling I initialize the BTable and EP0. I set EP0 to ACK on RX and NAK on TX requests, as other bare metal USB examples and the HAL do (the STAT bits are toggle bits, not plain writes, but the register is in a known state of 0x00 because the hardware resets them on a USB reset). Lastly I put the USB peripheral into enable mode and reset the device address to 0.
if ((USB->ISTR & USB_ISTR_RESET) != 0) {
    USB->ISTR = ~USB_ISTR_RESET;
    // Enable EP0
    USB->BTABLE = __MEM2USB(BTable);
    BTable[0].ADDR_TX = __MEM2USB(EP0_Buf[0]);
    BTable[0].COUNT_TX = 0;
    BTable[0].ADDR_RX = __MEM2USB(EP0_Buf[1]);
    BTable[0].COUNT_RX = (1 << 15) | (1 << 10);          // BL_SIZE = 1, NUM_BLOCK = 1 -> 64 byte RX buffer
    USB->EP0R = USB_EP_CONTROL | (2 << 4) | (3 << 12);   // STAT_TX = NAK, STAT_RX = VALID
    USB->CNTR = USB_CNTR_CTRM | USB_CNTR_RESETM;
    USB->DADDR = USB_DADDR_EF;                           // enable function, address 0
}
Debugging shows that the BTable is indeed at 0x40006000 and the Buffer address is written (I assume) correctly. The EP0 register was compared to a working HAL implementation and they are the same at that point.
Here I'm stuck
I expect the host to send the device address next (it doesn't - it sends a suspend and a wakeup and then another reset first), which should trigger the CTR interrupt (whose CTRM mask bit is set). Point is, it never does, and I don't know why. The host sends the request just fine and the device ACKs that request just fine (logic analyzer), but CTR is never triggered. Any ideas what else I can try or where to look?
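For context, this is roughly the CTR handling waiting on the other side (a sketch only - the write-0-to-clear pattern for EP0R and the USB_EPREG_MASK macro are the usual CMSIS device-header ones, and the actual SETUP parsing is omitted):
void USB_LP_IRQHandler(void) {
    if (USB->ISTR & USB_ISTR_CTR) {
        if ((USB->ISTR & USB_ISTR_EP_ID) == 0) {              // endpoint 0
            if (USB->EP0R & USB_EP_CTR_RX) {                  // SETUP or OUT completed
                // Clear CTR_RX: write 0 to it, keep CTR_TX, write the toggle bits as 0
                USB->EP0R = USB->EP0R & USB_EPREG_MASK & ~USB_EP_CTR_RX;
                // Parse the 8-byte SETUP packet from EP0_Buf[1] here.
                // For SET_ADDRESS: remember the address, but write USB->DADDR
                // only after the zero-length status IN stage has completed.
            }
            if (USB->EP0R & USB_EP_CTR_TX) {                  // IN completed
                USB->EP0R = USB->EP0R & USB_EPREG_MASK & ~USB_EP_CTR_TX;
            }
        }
    }
}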
Update
I've now compared the messages from my implementation with the HAL ones. The interrupt now handles the exact same messages in the exact same order, and the USB registers also contain exactly the same values for every request. I've changed the BTable and USB-SRAM layout to contain the exact same values as the HAL's after the reset interrupt.
I had to implement SUSP and WKUP handling for this to work, which was probably one of the things that was missing. Now both behave exactly the same. It turns out the problem is that I never receive a proper SOF packet. The HAL gets its first SOF directly after the second reset (HW reset > 2x ESOF > SUSP > WKUP > RESET > (optional 1 ESOF) > SOF), while mine gets an ERR instead of the SOF.
Looks like the error is not related to the USB registers or the USB-SRAM. The next step will be to compare all registers I can think of as relevant between the two implementations. Maybe I forgot a clock?
Spent almost a week, just to figure out I had misconfigured my 48 MHz clock source...
RCC->CCIPR = RCC_CCIPR_CLK48SEL_0 | ...
This sets CLK48SEL to a reserved value (01), not to the PLL Q clock (10)...
RCC->CCIPR = RCC_CCIPR_CLK48SEL_1 | ...
Now I get the SOF packets and the CTR interrupt just fine. May this question serve as a bare metal USB reference in the future.
Related
I am trying to read an SDP610 Sensirion differential pressure sensor with a Texas Instruments MSP430.
The issue is that the sensor never acknowledges the command and thus never communicates the pressure value itself. Note that I have confirmed the sensor works by hooking it up to an Arduino via an open-source library; I can see the data that way. My IDE is Code Composer and my chip is the MSP430FR2311 (a LaunchPad breakout board).
My hardware setup is 4 wires: Vcc (3.3 V), ground (0 V), SDA and SCL. The SDA and SCL lines are pulled up to Vcc with 4.7 kΩ resistors as per the specification.
I have the following code for my MSP430 (see below).
However, I do not see the response of the sensor on a logic analyser. Here is my capture (you will have to click the link). Note the top line is the clock and the bottom is the data.
MSP430 output.
The logic flow for reading the sensor, from the datasheet and from the Arduino code, is as follows (a sketch mapping this flow onto the eUSCI registers follows the list):
Write the address of the device to the I2C bus (8 bit, 0x81)
Wait for slave acknowledge
Write the command for reading (8 bit, 0xF1)
Wait for slave acknowledge
Slave holds the master (clock stretching)
Slave outputs 3 bytes (2 data bytes, MSB then LSB, then a checksum)
Acknowledge
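For illustration, here is how I understand this flow maps onto the FR2311's eUSCI_B0 registers - a polling-only sketch with no NACK or CRC handling, following the datasheet's triggered-measurement sequence; register and bit names (UCB0CTLW0, UCB0IFG, UCTXIFG0, UCRXIFG0) are what I believe the msp430fr2311.h header provides:
unsigned int SDP610_ReadRaw(void)
{
    unsigned char msb, lsb, crc;

    UCB0I2CSA = 0x40;                          // 7-bit address 0x40
    UCB0CTLW0 |= UCTR | UCTXSTT;               // transmitter + START
    while (!(UCB0IFG & UCTXIFG0));             // ready for the command byte
    UCB0TXBUF = 0xF1;                          // "trigger measurement" command
    while (UCB0CTLW0 & UCTXSTT);               // address phase done (ideally check UCNACKIFG here)
    while (!(UCB0IFG & UCTXIFG0));             // command byte handed to the shifter

    UCB0CTLW0 &= ~UCTR;                        // switch to receiver
    UCB0CTLW0 |= UCTXSTT;                      // repeated START (sensor may stretch SCL while measuring)
    while (UCB0CTLW0 & UCTXSTT);

    while (!(UCB0IFG & UCRXIFG0)); msb = UCB0RXBUF;   // data MSB
    while (!(UCB0IFG & UCRXIFG0)); lsb = UCB0RXBUF;   // data LSB
    UCB0CTLW0 |= UCTXSTP;                      // NACK + STOP after the next (last) byte
    while (!(UCB0IFG & UCRXIFG0)); crc = UCB0RXBUF;   // checksum (not verified in this sketch)
    while (UCB0CTLW0 & UCTXSTP);

    (void)crc;
    return ((unsigned int)msb << 8) | lsb;
}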
This is the datasheet for the sensor.
Any tips as to why the sensor is not responding?
CODE
void Read_Diff_pressure(void)
{
    int rx_byte;
    UCB0CTL1 |= UCTXSTT + UCTR;          // Generating START + I2C transmit (write)
    UCB0I2CSA = SDP610Address;           // SDP610 7 bit address 0x40
    UCB0TXBUF = SDP610Read;              // sending the read command 0x78
    while (!(UCB0IFG & UCTXIFG));        // wait until reg address got sent
    while (UCB0CTL1 & UCTXSTT);          // wait till START condition is cleared
    UCB0CTL1 |= UCTXSTT;                 // generate RE-START
    UCB0I2CSA = SDP610Address;           // SDP610 7 bit address 0x40
    UCB0CTL1 &= ~UCTR;                   // receive mode
    while (UCB0CTL1 & UCTXSTT);          // wait till START condition is cleared
    rx_byte = UCB0RXBUF;                 // read byte
    //while(!(UCB0IFG & UCRXIFG));       // wait while the Byte is being read
    UCB0CTL1 |= UCTXNACK;                // generate a NACK
    UCB0CTL1 |= UCTXSTP;                 // generate stop condition
    while (UCB0CTL1 & UCTXSTP);          // wait till stop condition got sent
    Pressure_result = rx_byte;
}

void InitI2C_diff(void)
{
    PAOUT |= I2C_SCL_PIN | I2C_SDA_PIN;     // P1.2(SDA) - P1.3(SCL) as per silk screen, defined in a header
    PADIR |= I2C_SCL_PIN | I2C_SDA_PIN;
    PASEL0 |= (I2C_SCL_PIN | I2C_SDA_PIN);  // configure I2C pins (device specific)
    UCB0CTLW0 |= UCSWRST;                   // put eUSCI_B in reset state
    UCB0CTLW0 |= UCMODE_3 | UCSYNC | UCMST; // I2C master mode, SMCLK
    UCB0CTL1 = UCSSEL_2 + UCSWRST;          // use SMCLK + still reset
    UCB0BR0 = 10;                           // default SMCLK 1M/10 = 100KHz
    UCB0BR1 = 0;                            //
    UCB0I2CSA = SDP610Address;              // The address of the device
    UCB0CTLW0 &= ~UCSWRST;                  // eUSCI_B in operational state
    //UCB0BRW = 64;                         // baudrate = SMCLK / 64
}

int main(void)
{
    InitI2C_diff();             // Init the I2C
    while (1)                   // Mainloop
    {
        Read_Diff_pressure();
        delay(1000);            // 1 second delay before re-looping
    }
}
A few parts seem to be missing compared to an old project implementation of mine (VCNL3020 + MSP430).
For example: setting 7-bit addressing mode, single-master environment, I2C master, synchronous mode, ... Maybe I have overlooked it.
Does the sensor itself need an init?
The init part of the I2C only looked like this:
void I2CInit( void )
{
P1SEL |= BIT6 + BIT7; // Assign I2C pins to USCI_B0
P1SEL2|= BIT6 + BIT7;
UCB0CTL1 |= UCSWRST; // Enable SW reset
UCB0CTL0 = UCMST + UCMODE_3 + UCSYNC; // 7-bit addressing, single-master environment, I2C Master, synchronous mode
UCB0CTL1 = UCSSEL_2 + UCSWRST; // Use SMCLK, keep SW reset
UCB0BR0 = 16; // fSCL = SMCLK/UCB0BR1
UCB0BR1 = 0;
UCB0I2CIE |= UCNACKIE; // Enable not-acknowledge interrupt
UCB0I2CSA=slave_adress;
UCB0CTL1 &= ~UCSWRST; // Clear SW reset, resume operation
IE2 |= UCB0TXIE + UCB0RXIE; // Enable TX&RX interrupts
}
To not make it unnecessarily complicated, you could check my implementation on GitHub and see if it helps: Github Link I2C MSP430 Main
I hope this helps a bit- have fun!
I'm not sure what your hardware looks like, but your I2C pull-ups sound too large. I know a lot of app notes talk about 4.7K, but I'd look at the rise time of the lines with an oscilloscope. If you don't have access to a scope, I'd use 1K or 2K and see what happens.
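To put a rough number on it (assuming something like 250 pF of bus capacitance, which long wires can easily reach): t_rise ≈ 0.85 · R · C, so 0.85 · 4.7 kΩ · 250 pF ≈ 1.0 µs, right at the 1000 ns maximum the I2C spec allows for 100 kHz standard mode. With 1-2 kΩ you would be comfortably inside the limit.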
I have a TIVA TM4C123G and have been trying to get it communicating with my ADXL345 sensor using the I2C protocol. I have succeeded in writing to and reading from the accelerometer: I can read back the device address and the register values I just wrote, which suggests everything is running fine. However, that only works when I step through the code in the Keil debugger; if I just run the program, it gives zeroes all the way and I have no idea why. Should I add delays between the register writes and reads, or what is going wrong in my code?
Here is my code, attached below.
I am using a clock of 80 MHz for the system and I think this might be the problem, as the code may move on to the next send too quickly and there should be some delay. I am not sure, I'm only guessing. Please help, thanks!
Also, my connections for the ADXL345 are:
Vcc -> 3.3 volts
GND -> ground
CS -> 3.3 volts
SDO -> ground
SDA -> PB3
SCL -> PB2
#include "tm4c123gh6pm.h"
#include "stdint.h"
void EnableI2CModule0(void);
uint8_t ReadRegister(uint8_t RegisterAddress);
void PLL_Init(void);
void WriteRegister(uint8_t RegisterAddress,uint8_t Data);
volatile uint8_t X_Axis1,X_Axis2,Y_Axis1,Y_Axis2,Z_Axis1,Z_Axis2=0;
int main()
{
volatile long temp;
PLL_Init();
EnableI2CModule0();
temp=ReadRegister(0x00);
WriteRegister(0x2D,0x08);
temp=ReadRegister(0x2D);
WriteRegister(0x31,0x0B);
temp=ReadRegister(0x31);
while(1)
{
X_Axis1=ReadRegister(0x32);
X_Axis2=ReadRegister(0x33);
Y_Axis1=ReadRegister(0x34);
Y_Axis2=ReadRegister(0x35);
Z_Axis1=ReadRegister(0x36);
Z_Axis2=ReadRegister(0x37);
}
}
void PLL_Init(void){
// 0) Use RCC2
SYSCTL_RCC2_R |= 0x80000000; // USERCC2
// 1) bypass PLL while initializing
SYSCTL_RCC2_R |= 0x00000800; // BYPASS2, PLL bypass
// 2) select the crystal value and oscillator source
SYSCTL_RCC_R = (SYSCTL_RCC_R &~0x000007C0) // clear XTAL field, bits 10-6
+ 0x00000540; // 10101, configure for 16 MHz crystal
SYSCTL_RCC2_R &= ~0x00000070; // configure for main oscillator source
// 3) activate PLL by clearing PWRDN
SYSCTL_RCC2_R &= ~0x00002000;
// 4) set the desired system divider
SYSCTL_RCC2_R |= 0x40000000; // use 400 MHz PLL
SYSCTL_RCC2_R = (SYSCTL_RCC2_R&~ 0x1FC00000) // clear system clock divider
+ (4<<22); // configure for 80 MHz clock
// 5) wait for the PLL to lock by polling PLLLRIS
while((SYSCTL_RIS_R&0x00000040)==0){}; // wait for PLLRIS bit
// 6) enable use of PLL by clearing BYPASS
SYSCTL_RCC2_R &= ~0x00000800;
}
void EnableI2CModule0(void)
{
volatile int Delay=0;
SYSCTL_RCGCI2C_R|=0x00000001; //set i2c module 0 clock active
Delay=SYSCTL_RCGCI2C_R; //delay allow clock to stabilize
SYSCTL_RCGCGPIO_R |=0x00000002; //i2c module 0 is portB so activate clock for port B
Delay = SYSCTL_RCGCGPIO_R; //delay allow clock to stabilize
GPIO_PORTB_AFSEL_R|= 0x0000000C; //enable alternate functions for PB2 and PB3
GPIO_PORTB_ODR_R |= 0x00000008; //set PB3 (I2C SDA) for open drain
GPIO_PORTB_DEN_R |= 0xFF; //Enable digital on Port B
GPIO_PORTB_PCTL_R |=0x03;
I2C0_PP_R |= 0x01;
I2C0_MTPR_R |= 0x00000027; //set SCL clock
I2C0_MCR_R |= 0x00000010; //intialize mcr rigester with that value given in datasheet
}
uint8_t ReadRegister(uint8_t RegisterAddress)
{
volatile uint8_t result=0;
I2C0_MSA_R = 0x000000A6; //write operation
I2C0_MDR_R = RegisterAddress; //place data to send mdr register
I2C0_MCS_R = 0x00000007; //stop start run
while((I2C0_MCS_R &= 0x00000040)==1); //poll busy bit
I2C0_MSA_R = 0x000000A7; // read operation
I2C0_MCS_R = 0x00000007; // stop start run
while((I2C0_MCS_R &= 0x00000040)==1); //poll busy bit
result = I2C0_MDR_R;
return result;
}
void WriteRegister(uint8_t RegisterAddress,uint8_t Data)
{
I2C0_MSA_R = 0x000000A6; //write operation
I2C0_MDR_R = RegisterAddress; //place register address to set in mdr register
I2C0_MCS_R = 0x00000003; //burst send ( multiple bytes send )
while((I2C0_MCS_R &= 0x00000040)==1); //poll busy bit
I2C0_MDR_R = Data; //place data to be sent in mdr register
I2C0_MCS_R = 0x00000005; // transmit followed by stop state
while((I2C0_MCS_R &= 0x00000040)==1); //poll busy bit
}
Your WriteRegister and ReadRegister functions do not follow the flowcharts defined in the TM4C123G data sheet. Apart from not checking or handling the MCS ERROR flag, Figure 16-10 "Master TRANSMIT of Multiple Data Bytes" shows that when writing the MCS register you should assert specific bits only, while you are writing to all bits. You should instead perform a read-modify-write:
I2C0_MCS_R = 0x00000003; //burst send ( multiple bytes send )
should be:
// I2CMCS = ---0-011
uint32_t mcs = I2C0_MCS_R ;
mcs &= ~0x00000014; // ---0-0--
mcs |= 0x00000003; // ------11
I2C0_MCS_R = mcs ;
And similarly:
I2C0_MCS_R = 0x00000005; // transmit followed by stop state
should be
// I2CMCS = ---0-101
mcs = I2C0_MCS_R ;
mcs &= ~0x00000012; // ---0--0-
mcs |= 0x00000005; // -----1-1
I2C0_MCS_R = mcs ;
ReadRegister() has a similar problem (although it is unlikely to matter in this case):
I2C0_MCS_R = 0x00000007; //stop start run
should strictly be:
// I2CMCS = ---00111
uint32_t mcs = I2C0_MCS_R ;
mcs &= ~0x00000018; // ---00---
mcs |= 0x00000007; // -----111
I2C0_MCS_R = mcs ;
The datasheet recommends for bits 31:5:
Software should not rely on the value of a reserved bit. To provide
compatibility with future products, the value of a reserved bit should
be preserved across a read-modify-write operation.
The above code does that; in practice it should not be necessary on this specific product, but it is good practice in any case.
In any event, you should add the recommended error handling code. It may be that no error flag is being set, but we don't know that unless you check for it, and doing so will at least assist debugging - rather than stepping the code, you can simply set a breakpoint on the error handling and then run at full speed. This will narrow down the number of possibilities.
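For example, after each busy-wait, something along these lines (a sketch only; the masks come from the MCS bit layout - BUSY = 0x01, ERROR = 0x02, ARBLST = 0x10 - and strictly the same read-modify-write treatment applies to the STOP write too):
while (I2C0_MCS_R & 0x00000001) {}        // wait for the controller to go idle (BUSY)

if (I2C0_MCS_R & 0x00000002)              // ERROR set?
{
    if ((I2C0_MCS_R & 0x00000010) == 0)   // arbitration not lost: we still own the bus
    {
        I2C0_MCS_R = 0x00000004;          // generate STOP, per the error path in the flowchart
    }
    // set a breakpoint here / return an error code to the caller
}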
As #Clifford explained, I should follow the flowcharts, and although his answer is completely correct it didn't give me any results (previously the code gave values when stepping into the function, and zeroes afterwards). But I noticed something in the flowcharts that I hadn't noticed before, which contradicts the initialization and configuration section in the data sheet.
Step 11 there says you should poll the bus busy bit in the MCS register, but this is wrong and contradicts the flowcharts. The flowcharts are correct: you should check whether the bus is busy before sending anything, and then check the master busy bit before reading from the MDR register or moving on to any further steps.
Basically, the correct steps in the initialization and configuration section should be (see the sketch after this list):
Before step 10, poll the bus busy bit in case any other master is sending; this can be omitted in a single-master setup.
After step 10, poll the busy bit before reading or going to any further step, to determine whether the send has completed and the master is idle again.
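In code, a read following that order might look like this (a sketch; ADXL345 address 0x53, MCS bit masks BUSBSY = 0x40, BUSY = 0x01, ERROR = 0x02, and the MCS writes kept simple here rather than read-modify-write):
uint8_t ReadRegisterChecked(uint8_t RegisterAddress)
{
    while (I2C0_MCS_R & 0x40) {}          // before step 10: wait while the BUS is busy (BUSBSY)

    I2C0_MSA_R = 0x000000A6;              // slave address 0x53, write
    I2C0_MDR_R = RegisterAddress;
    I2C0_MCS_R = 0x00000003;              // START + RUN
    while (I2C0_MCS_R & 0x01) {}          // after step 10: wait while the MASTER is busy (BUSY)
    if (I2C0_MCS_R & 0x02) { return 0; }  // ERROR

    I2C0_MSA_R = 0x000000A7;              // slave address 0x53, read
    I2C0_MCS_R = 0x00000007;              // STOP + START + RUN (single byte receive)
    while (I2C0_MCS_R & 0x01) {}          // wait until the byte has been clocked in
    if (I2C0_MCS_R & 0x02) { return 0; }

    return (uint8_t)I2C0_MDR_R;
}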
I'm sorry, I feel like a complete idiot now for not reading the flowcharts carefully, but I followed another part (the initialization and configuration section), accepting a fact that wasn't there: that both describe the same thing.
I also found that the TivaWare API works correctly by following the flowcharts and not that other section of the datasheet. However, I didn't want to use the TivaWare API, as I am looking for exactly these kinds of problems; they lead to a better understanding of how things work.
Thanks again for your help, #Clifford. Cheers!
I have a Tiva C microcontroller, the TM4C123GXL, and I have been trying for a while now to use the I2C module on the board with a digital accelerometer, with no result. I have been trying to set the MDR register with a certain value to send, but it stays at 0.
Here is the code I am using, from initialization up to the part where I set the MDR register. I am using step-by-step debugging, and I initially run the code to the assignment I2C3_MDR_R = 0x2D;
void PortDInit(void)
{
    volatile unsigned long delay=0;
    SYSCTL_RCGCI2C_R |= 0x8;          //1-set clock of I2C of module 3
    delay = SYSCTL_RCGC2_R;           //2-delay to allow clock to stabilize
    SYSCTL_RCGC2_R |= 0x00000008;     //3-port D clock
    delay = SYSCTL_RCGC2_R;           //4-delay to allow clock to stabilize
    GPIO_PORTD_AFSEL_R |= 0x03;       //5-alternate function set for I2C mode
    GPIO_PORTD_DEN_R |= 0x03;         //6-enable digital functionality for PA6 and PA7
    GPIO_PORTD_ODR_R |= 0x02;         //7-enable open drain mode for I2CSDA register of port A
    GPIO_PORTD_PCTL_R = 0x00000033;   //8-set PCTL to I2C mode
    I2C3_MCR_R = 0x00000010;          //9-intialize the i2c master
    I2C3_MTPR_R = 0x00000007;         //10-number of system clock cycles in 1 scl period
    I2C3_MSA_R = 0x3A;                // set slave address and read write bit
    I2C3_MDR_R = 0x2D;                // data to be sent. BREAKPOINT HERE: single-stepping here yields MDR with same value = 0
    I2C3_MCS_R = 0x00000003;          // follow transmit condition
    while (I2C3_MCS_R &= 0x40 == 1);  // wait bus is busy sending data
    if (I2C3_MCS_R &= 0x04 == 1)
    {
        //handle error in communication
    }
    else
    {
        //success in transmission
    }
}
What I have done to reach this code:
carefully understood the I2C protocol, how it works, etc.
checked the data sheet and followed the initialization steps mentioned there one by one, which got me to this code
I know I should use the TivaWare library, which would be easier, but using the registers helps me understand more of how everything works; I'm still a student
at first I didn't have the digital enable line, as it wasn't mentioned as needing to be activated for I2C, but it's only logical it should be there since we are using digital values; I tried with and without it, and both yielded the same output, MDR = 0
I am using Keil 4 as my IDE and I'm viewing the values of the I2C module 3 registers to know whether data is placed in MDR or not
Hope anyone can help, thanks.
This is a long shot, but here goes:
in your comments, step 6 says
//6-enable digital functionality for PA6 and PA7
but it appears you are working on GPIO_PORTD...
maybe it's a comment typo (you meant PD6 and PD7),
but just double check you are looking at the right pins...
Good luck!
So I'm having a problem getting a TI microcontroller to communicate with the Raspberry Pi B+. The exact microcontroller I'm using is the TI CC430F5137. The issue is that I just can't seem to get the Raspberry Pi to correctly receive the data I'm sending from the MSP430. For those who don't know, the 430 has 2 buffers for this purpose, an RX and a TX, which allows use of the UART module while code is still executing. I've enabled an interrupt for when I receive a byte, and I simply set a flag and send the same byte right back. It works up until I attempt to transmit.
The code sits and waits in an infinite loop until it receives its first byte. At that point it simply saves the byte and flashes the LED if it's a 'T' (for testing). Upon returning to the loop, it detects that the saved byte has changed and puts it in the buffer to send it back. Until this point, everything works perfectly. It receives the correct byte every time, letting me know my clocks are right, my interrupt is working, and my UART initialization is correct.
Where it goes wrong is after sending the byte: it seems like there is some kind of internal loopback (this is an option, but I made sure it is not enabled) that causes the interrupt to re-trigger, resulting in an infinite loop of transmitting and again receiving the same byte. But when I invoke this from the Pi, I don't get back a loop of the same character, but instead bytes of random garbage with no consistency or logic behind them. I analyzed the bits to see if the timing is just off, and that doesn't seem to be the case.
For reference, my baud rate is a measly 1200, the voltage of both devices is definitely 3.3 V, and I'm sure the Pi is working because when I short its RX and TX I get back the byte without an issue. I switched to UART because SPI was giving me similar problems, and I can't think of any other protocol besides I2C that would help here. I am using an external 32768 Hz crystal. Also, I've tried this on two different microcontrollers, so it's definitely the code that's the issue.
#include <msp430.h>
char temp;
char in;
int main(void) {
WDTCTL = WDTPW | WDTHOLD; // Stop watchdog timer
P1OUT = 0x00; // Make sure pins are tturned off
P1DIR = 0x01; // Led out
P1SEL |= BIT5 + BIT6; // UART as pin mode
UCSCTL6 &= ~BIT0; // Turn on XT1
P5SEL |= BIT0 + BIT1; // Select XT1 as pin function
UCA0CTL1 |= BIT0; // Set UART to reset mode
UCA0CTL1 |= BIT6; // Choose ACLK as source
UCA0BR0 = 27; // Set speed to 1200 Baud
UCA0MCTL = 0x02 << 1; // Set speed to 1200 Baud
UCA0CTL1 &= ~BIT0; // Turn UART on
UCA0IE = BIT0; // Enable RX interrupt
__enable_interrupt();
while(1)
{
if(in != 0)
{
UCA0TXBUF = in;
temp = in;
in = 0;
}
}
}
#pragma vector=USCI_A0_VECTOR
__interrupt void UCSIA0(void)
{
in = UCA0RXBUF;
if(in == 0x54)
P1OUT ^= BIT0;
}
Output from running minicom at 1200 baud on the Pi, sending 'T' one at a time:
UÔÿÿïÕuU_þýÿÿÿÿÿÿÕԯÿÿôÕüÿÝUõï\þþÿÿÕ¿ÿÿýýTÿýUÿÿÿïÿÿÿõÿýýÿõûÿ
Assuming the Pi is working correctly...
1. Verify the MSP430 TX is working: send a known value every 1 s and see if the Pi receives it correctly.
2. Verify the MSP430 RX is working: send a known value from the Pi every 1 s.
3. Interrupt section: your code doesn't verify that it was the RX interrupt that fired; you should filter for interrupts generated only by RX. Also, your code doesn't handle overrun/framing errors. And sharing the "in" variable for both TX and RX (and in both the interrupt and the main loop) is not a good idea. See the sketch below.
4. Your output example suggests a baud rate mismatch: if you send the character 'T' and should get back 'T', I'd expect to see 'TTTTTT...'.
BTW, this garbage may also suggest that you forgot to connect the GND line between the two MCUs...
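To illustrate point 3, a sketch (USCI_A0 register and vector names for the 5xx-family USCI as I remember them; adjust as needed):
volatile char rx_byte;
volatile char rx_ready = 0;

#pragma vector=USCI_A0_VECTOR
__interrupt void USCI_A0_ISR(void)
{
    switch (UCA0IV)                        // dispatch: only handle genuine RX events
    {
    case 2:                                // UCRXIFG
        if (UCA0STAT & (UCFE | UCOE))      // framing or overrun error?
        {
            volatile char dump = UCA0RXBUF;   // reading RXBUF clears the error flags
            (void)dump;
            break;
        }
        rx_byte = UCA0RXBUF;
        rx_ready = 1;                      // separate flag instead of reusing the data variable
        break;
    default:
        break;
    }
}

// In the main loop:
// if (rx_ready) { rx_ready = 0; while (!(UCA0IFG & UCTXIFG)); UCA0TXBUF = rx_byte; }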
I am trying to set up two timer interrupt routines on a Teensy 2.0 microcontroller (which is based on the ATMEGA32U4, an 8-bit AVR running at 16 MHz) for independent control of two servo motors.
After much trial and error I was able to set one up on pin 7 of port C, but:
How do I set up the second ISR to be initialized and called independently of the first?
Do I need to set up a second timer and, if so, what would such code look like?
Here is the setup code:
int main(void)
{
DDRE = 0xFF;
TCCR1A |= 1 << WGM12; // Configure timer 1 for CTC mode
TCCR1B = (1<<WGM12) | (1<<CS11) ;
OCR1A = 1000; // initial
TIMSK1 |= 1 << OCIE1A; // Output Compare A Match Interrupt Enable
sei(); // enable interrupts
// ...code that sets pulseWidth based on app logic variable.
// Not showing as its not important
}
ISR(TIMER1_COMPA_vect)
{
if (0 == pulseWidth)
{
return;
}
static uint8_t state = 0;
int dutyTotal = 20*1000;
if (0 == state)
{
PORTC |= 0b10000000;
OCR1A = pulseWidth;
state = 1;
}
else if (1 == state)
{
PORTC &= 0b01111111;
OCR1A = dutyTotal - pulseWidth;
state = 0;
}
}
While it's difficult to give a definitive answer without knowing more about your application (e.g. what kind of servo/motor - I'm guessing model RC type with a 1-2 ms pulse?), there are two approaches to solving the problem.
Firstly, in your code you seem to be manually generating a PWM signal by toggling PC7. You could add another output by increasing your number of states - you need one more state than the number of servos, to give the gap which sets the pulse repetition frequency. This is a common technique when you need to drive a lot of servos: most RC servos don't care about pulse phasing or frequency (within limits), only the pulse width, so you can generate a bunch of different pulses one after the other on different outputs while only using one timer, like this (in a sort of pseudo-code state diagram; a C sketch follows below):
State 0:
  Turn on output 1
  Set timer TOP to pulse duration 1.
  Go to state 1.
State 1:
  Turn off output 1
  Turn on output 2
  Set timer TOP to pulse duration 2.
  Go to state 2.
State 2:
  Turn off output 2
  Set timer TOP to pulse duration 3.
  Go to state 0.
"Pulse duration 3" sets the PRF (pulse repetition frequency). If you want to get fancy,
you can set it to 1/f - pd1 - pd2, to give a constant frequency.
[ "TOP" is AVR-speak for the thing that sets the wrap (overflow) rate of the timer. See data sheet. ]
Secondly, there is a much easier way if you're only using two servos: use the hardware PWM functionality of the timer. The AVR timers have a built-in PWM function to do the pin toggling for you. Timer1 on the ATmega32U4 has two PWM output pins (OC1A and OC1B) which could work great for your two servos, and you then don't (necessarily) need an interrupt handler at all.
This is also the right solution if you are PWM driving motors directly (e.g. through an H-Bridge.)
To do this, you need to put the timer into PWM mode and enable the OC1A and OC1B output pins, e.g.
/*
* Set fast PWM mode on OC1A and OC1B with ICR1 as TOP
* (Mode 14)
*/
TCCR1A = (1 << WGM11) | (1 << COM1B1) | (1 << COM1A1);
TCCR1B = (3 << WGM12);
/*
* Clock source internal, pre-scale by 8
* (i.e. count rate = 2MHz for 16MHz crystal)
*/
TCCR1B |= (1 << CS11);
/*
* Set counter TOP value to set pulse repetition frequency.
* E.g. 50Hz (good for RC servos):
* 2e6/50 = 40000. N.B. This must be less than 65535.
* We count from t down to 0 so subtract 1 for true freq.
*/
ICR1 = 40000-1;
/* Enable OC1A and OC1B PWM output */
DDRB |= (1 << PB5) | (1 << PB6);
/* Uncomment to enable TIMER1_OVF_vect interrupts at 50Hz */
/* TIMSK1 = (1 << TOV1); */
/*
* Set both servos to centre (1.5ms pulse).
* Value for OCR1x is 2000 per ms then subtract one.
*/
OCR1A = 3000-1;
OCR1B = 3000-1;
Disclaimer - this code fragment compiles, but I have not checked it on an actual device, so you may need to double-check the register values. See the full datasheet at http://www.atmel.com/Images/doc7766.pdf
Also, you perhaps have some typos in your code: bit WGM12 doesn't exist in TCCR1A (you've actually set bit 3, which is FOC1A - "force output compare", see datasheet). Also, you are writing DDRE to enable outputs on port E but toggling a pin on port C.
Halzephron, thank you so much for your answer. I don't have high enough reputation to mark your answer as accepted; hopefully someone else will.
Details below:
You are absolutely right about being able to use a single ISR to control a number of servos - embarrassing that I did not think of that, but given how well your simpler solution worked, I'll just use that.
... Also, you are writing DDRE to enable outputs on port E but toggling a pin on port C.
Thanks, I commented out that line and it works the same (so I don't need to enable the output at all?).
Bit WGM12 doesn't exist in TCC1A (you've actually set bit 3, which is FOC1A - "force compare", see datasheet.)
I removed that too, but leaving the rest of my code unchanged, it results in the servos moving slower, with way less torque, and jittering as they do, even after arriving at the desired position. The servo makes a weird "shaky" noise (at a frequency of maybe 10-20 Hz, I'd say) and the arm trembles, so for reasons I don't understand, setting this bit seems necessary.
I suspected that a timer per motor is highly inelegant, so I really like your second approach (the built-in timer-generated PWM).
... This is also the right solution if you are PWM driving motors directly (e.g. through an H-Bridge.)
Very curious - why? I.e. what's the difference between using this method and, say, ISR-generated PWM?
In your code, I had to change the line below to get 50 Hz; otherwise I was getting 25 Hz (verified with a scope):
ICR1 = 20000-1; // was 40000 - 1;
One other thing I noticed with the scope was that the ISR-generated PWM code I have produces a less "rectangular" shape than the PWM code snippet you attached. It takes about 0.5 ms for the signal to drop to 0 with my code, and it is absolutely instantaneous with yours (which is great). This solved another problem I had been bashing my head against: I could get analog servos to work fine with my code (ISR-generated PWM), but any digital servo I tried just did not move, as if it were broken. I guess the shape of the signal is critical for digital servos; I never read this anywhere. Or maybe it's something else I don't know.
As a side note, another weirdness I spent a bunch of time on: I uncommented the TIMSK1 = (1 << TOV1); line, thinking I always needed it, but then my main function would freeze, blocked forever on the first call to delay_ms(...). Not sure what that is, but keeping it commented out unblocks my main loop (where I read the servo position values from the USB HID using Teensy's sample code).
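(One guess I can't confirm: with avr-libc, enabling an interrupt source without defining the matching ISR routes that vector to the default "bad interrupt" handler, which jumps back to the reset vector - which can look like a hang. If that is the cause, an empty handler would be enough:)
ISR(TIMER1_OVF_vect)
{
    /* nothing to do - just prevents the default bad-interrupt jump to reset */
}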
Again, many thanks for the help.