I'm writing a UART communication protocol for my STM32L152C-Discovery. Briefly, the master sends a communication start packet to the slave when the user button is pressed; if the slave receives it correctly, it replies with an ack start packet and passes to the connected state. In the same way, if the master receives the ack start packet correctly, it goes into the connected state. In this state, master and slave keep the link alive with keep-alive packets that must be sent every TPOLL, which can range from 100 to 5000 milliseconds. If the TSILENT timeout, which is TPOLL + 100 milliseconds, expires, the board disconnects. The event counter is managed by a timer that increments every millisecond.
I'm running into a reliability problem: after a few packets are exchanged, one of the two boards disconnects. I have tried various TPOLL values. If I increase TSILENT from TPOLL + 100 to TPOLL + 1000, the program works correctly. The strange part is that I have no delays in the operations performed in this state, and all the functions together take about 12 milliseconds (including the 10 ms UART receive timeout). Timer 3 is initialized before entering the connected state. I've posted the parts of the code that might be of interest.
void TIM3_IRQHandler(void)
{
  /* USER CODE BEGIN TIM3_IRQn 0 */
  /* USER CODE END TIM3_IRQn 0 */
  HAL_TIM_IRQHandler(&htim3);
  /* USER CODE BEGIN TIM3_IRQn 1 */
  GlobalTick++;                              /* 1 ms tick */
  if (GlobalTick == 1)
  {
    /* first keep-alive right after entering the connected state */
    TX_BufferReset();
    SendPacket(KEEP_ALIVE);
    HAL_UART_Transmit_DMA(&huart3, TX_Buffer.buffer, TX_Buffer.count);
  }
  if (GlobalTick - t_event_poll >= TPOLL)
  {
    /* periodic keep-alive every TPOLL ms */
    TX_BufferReset();
    SendPacket(KEEP_ALIVE);
    HAL_UART_Transmit_DMA(&huart3, TX_Buffer.buffer, TX_Buffer.count);
    t_event_poll = GlobalTick;
  }
  if (GlobalTick - t_event_silence > TSILENT)
  {
    /* nothing valid received for TSILENT ms: drop the link */
    state = STATE_DISCONNECTED;
    ResetTime();
  }
  /* USER CODE END TIM3_IRQn 1 */
}
case STATE_CONNECTED:
  RX_BufferReset();
  HAL_UART_Receive(&huart3, RX_Buffer.buffer, MAX_PKT_BYTES, 10);
  if (WaitFullPacket() == TRUE)
    if (ReceivePacket() == CORRECT && RX_Buffer.buffer[3] == KEEP_ALIVE)
      t_event_silence = GlobalTick;
  break;
Please help! I am using FSMC to connect an STM32F407 MCU to an AD7606 to sample voltages. The MCU sends the sampled values to a PC over the USB HS port after 1024 conversions. But when I inspect the values on the PC, I find that readings from channel 0 occasionally contain data from other channels. For example, if I connect channel 0 to 5 V, channel 8 to 3.3 V, and the other channels to ground, the printed values for channel 0 contain 5 V, 0 V, and 3.3 V. The basic setup is as follows:
1) A 200 kHz PWM signal generated by TIM10 acts as the CONVST signal for the AD7606.
2) The AD7606 then asserts its BUSY signal, which I use as an external interrupt source.
3) In the interrupt handler, a DMA request is issued to read 8 16-bit words from the FSMC address space into memory. The TIM10 PWM is stopped once 1024 conversions have been done.
4) In the DMA XFER_CPLT callback, if 1024 conversions have been done, the converted data is sent out over the USB HS port and the TIM10 PWM is enabled again.
Some code blocks:
void HAL_GPIO_EXTI_Callback(uint16_t GPIO_Pin)
{
  if (GPIO_Pin == GPIO_PIN_7)
  {
    // DMA data from FSMC to memory
    HAL_DMA_Start_IT(&hdma_memtomem_dma2_stream0, 0x6C000000, (uint32_t)(adc_data + adc_data_idx), 8);
    adc_data_idx += 8;
    if (adc_data_idx >= ADC_DATA_SIZE)
      HAL_TIM_PWM_Stop(&htim10, TIM_CHANNEL_1);
  }
}
void dma_done(DMA_HandleTypeDef *_hdma)
{
  int i;
  int ret;
  // adc_data[adc_data_idx] would always contain data from
  // channel 1; led1 wouldn't light if everything were fine.
  if (adc_data[adc_data_idx] < 0x7f00)
    HAL_GPIO_WritePin(led1_GPIO_Port, led1_Pin, GPIO_PIN_SET);
  if (adc_data_idx >= ADC_DATA_SIZE)
  {
    if (hUsbDeviceHS.dev_state == USBD_STATE_CONFIGURED)
    {
      // if I don't call CDC_Transmit_HS, everything is fine.
      ret = CDC_Transmit_HS((uint8_t *)(adc_data), ADC_DATA_SIZE * 2);
      if (ret != USBD_OK)
      {
        HAL_GPIO_WritePin(led1_GPIO_Port, led2_Pin, GPIO_PIN_SET);
      }
    }
    adc_data_idx = 0;
    HAL_TIM_PWM_Start(&htim10, TIM_CHANNEL_1);
  }
}
It seems that a single USB transaction takes longer than 5 µs (one conversion time), so I stop the PWM signal to stop conversions while transmitting...
If I only send the second half of the data buffer, there is no data mixture. It's very strange.
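One way to rule out a race between the acquisition DMA and the USB transfer is ping-pong buffering: let the DMA fill one buffer while USB transmits the other, so CDC_Transmit_HS() never reads memory that is still being written. Below is a minimal sketch of the callback under that scheme; adc_buf and fill_idx are illustrative names that are not in the original code, and the EXTI callback would have to target adc_buf[fill_idx] + adc_data_idx instead of adc_data.

static uint16_t adc_buf[2][ADC_DATA_SIZE];   // hypothetical ping-pong pair
static volatile int fill_idx = 0;            // buffer the DMA is filling

void dma_done(DMA_HandleTypeDef *_hdma)
{
  if (adc_data_idx >= ADC_DATA_SIZE)
  {
    int done_idx = fill_idx;    // this buffer is complete
    fill_idx ^= 1;              // DMA switches to the other buffer
    adc_data_idx = 0;
    HAL_TIM_PWM_Start(&htim10, TIM_CHANNEL_1);   // conversions resume at once
    if (hUsbDeviceHS.dev_state == USBD_STATE_CONFIGURED)
      CDC_Transmit_HS((uint8_t *)adc_buf[done_idx], ADC_DATA_SIZE * 2);
  }
}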
According to your description, I think the processing is correct and the problem is in CDC_Transmit_HS(). I have met this problem with CDC_Transmit_FS(), which can't transmit more than 64 bytes of data with the original code; some of the code needs to be modified, otherwise errors occur. Did you check that the number of received bytes is correct?
Reference:
I can't receive more than 64 bytes on custom USB CDC class based STM32 device
I'm not sure what your ADC_DATA_SIZE is; if the transfer is larger than 64 bytes, maybe you can reduce it to less than 64 bytes, try again, and check whether the data is correct. I am not sure this is the cause of your problem, but I think you can give it a try.
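If the 64-byte limit does turn out to matter, one workaround is to split the transfer into pieces of at most 64 bytes. A rough sketch; send_in_chunks() and CHUNK are made-up names for illustration, and the busy-wait is only acceptable outside interrupt context:

#define CHUNK 64

// Send a large buffer in <= 64-byte pieces, waiting for the CDC class
// to finish each piece before starting the next.
static uint8_t send_in_chunks(uint8_t *buf, uint32_t len)
{
  while (len)
  {
    uint16_t n = (len > CHUNK) ? CHUNK : (uint16_t)len;
    while (CDC_Transmit_HS(buf, n) == USBD_BUSY)
      ;                       // previous chunk still in flight
    buf += n;
    len -= n;
  }
  return USBD_OK;
}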
On the other hand, it may also be necessary to ground the unused ADC input pins of the AD7606 to avoid interference between channels.
Or you can try another interface (I2C, SPI, UART, etc.) to send the data.
If there is no problem with the other communication methods, there is a high chance that the problem is in CDC_Transmit_HS(). If the other methods show the same problem, you may have to check whether the ADC conversion timing conflicts with the transmission timing.
Hello, I'm trying to read the remaining-capacity information of a battery that publishes data using the J1939 protocol. I am using a PIC18F26K83 for this. As I understand it, before I can ask the battery for specific information I need to claim an address. However, when I try to claim an address with PGN 60928, the transmission never occurs. Everything looks correct, but the TXREQ bit never gets cleared. The related part of the code can be seen below.
PS: Hardware-wise everything is fine: I have 2 nodes, which is enough for a CAN bus, and I put a 120 ohm termination resistor across my CAN transceiver's bus pins, etc.
So my question is: to implement the J1939 protocol, is it necessary to claim an address even if it will not be used in a real vehicle's system, and are there any required steps before claiming an address? Is just setting up the CAN bus, putting it into normal mode, and transmitting the NAME data to PGN 60928 enough?
//Address Claim
TXB0CON = 0b00000011;   //Priority 3 (highest), TXREQ cleared
//Sender ID in extended form
//PGN 60928 = 0b1110111000000000 (Address Claim)
TXB0SIDH = 0b11000111;  //ID<28:21>, J1939 priority = 6
TXB0SIDL = 0b01101010;  //Extended identifier enabled
TXB0EIDH = 0b00000000;  //
TXB0EIDL = 0x80;        //Source address = 128
TXB0DLC  = 0b00001000;  //No RTR, data length = 8
//Data to be sent (J1939 NAME)
TXB0D0 = 0x80;
TXB0D1 = 0xFE;
TXB0D2 = 0xFF;
TXB0D3 = 0xFF;
TXB0D4 = 0xFF;
TXB0D5 = 0xFF;
TXB0D6 = 0xFF;
TXB0D7 = 0xFF;
//Normal mode
CANCON = 0x00;
while (!(CANCON.B7 == 0 && CANCON.B6 == 0 && CANCON.B5 == 0)) {
    LATA2_BIT = 1;
}   //Wait for the CAN module to enter normal mode
LATA2_BIT = 0;
TXB0CON = 0b00001011;   //Transmission request, priority 3
wait:
if (TXB0CON.B3 == 1) {
    LATA2_BIT = 1;
    goto wait;
}   //Wait until transmission is successful
The filters might be the problem; set the acceptance masks to 0x00 and try again.
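A rough sketch of what that looks like on a PIC18 ECAN module, assuming the register names from the datasheet; note that masks can only be written in configuration mode, so check your part's datasheet before relying on this:

CANCON = 0x80;                      // request configuration mode
while ((CANSTAT & 0xE0) != 0x80);   // wait until it is granted

RXM0SIDH = 0x00;                    // mask 0 = 0: accept every identifier
RXM0SIDL = 0x00;
RXM0EIDH = 0x00;
RXM0EIDL = 0x00;

CANCON = 0x00;                      // back to normal mode
while ((CANSTAT & 0xE0) != 0x00);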
I'm more of a high-level software guy but have been working on some embedded projects lately, so I'm sure there's something obvious I'm missing here, though I have spent over a week trying to debug this and every 'MSP'-related link in Google is purple at this point...
I currently have an MSP430F5529 set up as an I2C slave device whose only responsibility, currently, is to receive packets from a master device. The master uses industry-grade I2C and has been heavily tested and ruled out as the source of my problem here. I'm using Code Composer as my IDE with the TI v15.12.3.LTS compiler.
What currently happens is that the master queries how many packets (of size 62 bytes) the slave can hold, then sends over a few packets, which the MSP just discards for now. This happens every 100 ms on the master side, and for the minimal example below the MSP will always just reply 63 when asked how many packets it can hold. I have tested the master with a Total Phase Aardvark and everything works fine with that, so I'm sure it's a problem on the MSP side. The problem is as follows:
The program will work for 15-20 minutes, sending tens of thousands of packets. At some point the slave starts to hold the clock line low, and when paused in debug mode it is shown to be stuck in the start interrupt. The same sequence of events happens every single time:
1) Master queries how many packets the MSP can hold.
2) A packet is sent successfully
3) Another packet is attempted, but fewer than 62 bytes are received by the MSP (counted by logging how many Rx interrupts I get). No stop condition is sent, so the master times out.
4) Another packet is attempted. A single byte is sent before the stop condition is sent.
5) Another packet is attempted. A start interrupt fires, then a Tx interrupt, and the device hangs.
Ignoring the fact that I'm not handling the timeout errors on the master side, something very strange is happening to cause that sequence of events, and it happens every single time.
Below is the minimal working example that reproduces the problem. My particular concern is with the SetUpRx and SetUpTx functions. The examples in the Code Composer Resource Explorer only show Rx or Tx on its own, and I'm not sure I'm combining them the right way. I also tried removing SetUpRx completely, putting the device into transmit mode and replacing all calls to SetUpTx/Rx with mode = TX_MODE/RX_MODE; that did work, but the slave still eventually holds the clock line low. Ultimately I'm not 100% sure how to set this up to handle both Rx and Tx requests.
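For what it's worth, here is a sketch of the single-setup alternative: my understanding is that in USCI slave mode the transmit/receive direction follows the R/W bit of the address byte sent by the master, so one option is to enable both data interrupts once and never switch modes in software. This is an assumption to verify against the MSP430x5xx family user's guide, not tested code:

// One-time setup: listen for both directions simultaneously.
USCI_B_I2C_initSlave(USCI_B0_BASE, SLAVE_ADDRESS);
USCI_B_I2C_enable(USCI_B0_BASE);
USCI_B_I2C_enableInterrupt(USCI_B0_BASE,
        USCI_B_I2C_START_INTERRUPT +
        USCI_B_I2C_RECEIVE_INTERRUPT +
        USCI_B_I2C_TRANSMIT_INTERRUPT +
        USCI_B_I2C_STOP_INTERRUPT);
// The ISR then dispatches on UCB0IV exactly as in the code below: UCRXIFG
// fires for master writes, UCTXIFG for master reads, with no mode switching.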
#include "driverlib.h"
#define SLAVE_ADDRESS (0x48)
// During main loop, set mode to either RX_MODE or TX_MODE
// When I2C is finished, OR mode with I2C_DONE, hence upon exit mode will be one of I2C_RX_DONE or I2C_TX_DONE
#define RX_MODE (0x01)
#define TX_MODE (0x02)
#define I2C_DONE (0x04)
#define I2C_RX_DONE (RX_MODE | I2C_DONE)
#define I2C_TX_DONE (TX_MODE | I2C_DONE)
/**
* I2C message ids
*/
#define MESSAGE_ADD_PACKET (3)
#define MESSAGE_GET_NUM_SLOTS (5)
static volatile uint8_t mode = RX_MODE; // current mode, TX or RX
static volatile uint8_t rx_buff[64] = {0}; // where to write rx data
static volatile uint8_t* rx_data = rx_buff; // used in rx interrupt
static volatile uint8_t tx_len = 0; // number of bytes to reply with
static inline void SetUpRx(void) {
    // Specify receive mode
    USCI_B_I2C_setMode(USCI_B0_BASE, USCI_B_I2C_RECEIVE_MODE);
    // Enable I2C Module to start operations
    USCI_B_I2C_enable(USCI_B0_BASE);
    // Enable interrupts
    USCI_B_I2C_clearInterrupt(USCI_B0_BASE, USCI_B_I2C_TRANSMIT_INTERRUPT);
    USCI_B_I2C_enableInterrupt(USCI_B0_BASE, USCI_B_I2C_START_INTERRUPT + USCI_B_I2C_RECEIVE_INTERRUPT + USCI_B_I2C_STOP_INTERRUPT);
    mode = RX_MODE;
}
static inline void SetUpTx(void) {
    //Set in transmit mode
    USCI_B_I2C_setMode(USCI_B0_BASE, USCI_B_I2C_TRANSMIT_MODE);
    //Enable I2C Module to start operations
    USCI_B_I2C_enable(USCI_B0_BASE);
    //Enable transmit interrupt
    USCI_B_I2C_clearInterrupt(USCI_B0_BASE, USCI_B_I2C_RECEIVE_INTERRUPT);
    USCI_B_I2C_enableInterrupt(USCI_B0_BASE, USCI_B_I2C_START_INTERRUPT + USCI_B_I2C_TRANSMIT_INTERRUPT + USCI_B_I2C_STOP_INTERRUPT);
    mode = TX_MODE;
}
/**
* Parse the incoming message and set up the tx_data pointer and tx_len for I2C reply
*
* In most cases tx_buff is filled with data, as the replies that require it either aren't used frequently or use few bytes.
* Straight pointer assignment is likely better, but that would require everything to be volatile, which seems overkill here.
*/
static void DecodeRx(void) {
    static uint8_t message_id = 0;
    message_id = (*rx_buff);
    rx_data = rx_buff;
    switch (message_id) {
        case MESSAGE_ADD_PACKET: // Add some data...
            // do nothing for now
            tx_len = 0;
            break;
        case MESSAGE_GET_NUM_SLOTS: // How many packets can we send to device
            tx_len = 1;
            break;
        default:
            tx_len = 0;
            break;
    }
}
void main(void) {
    //Stop WDT
    WDT_A_hold(WDT_A_BASE);
    //Assign I2C pins to USCI_B0
    GPIO_setAsPeripheralModuleFunctionInputPin(GPIO_PORT_P3, GPIO_PIN0 + GPIO_PIN1);
    //Initialize I2C as a slave device
    USCI_B_I2C_initSlave(USCI_B0_BASE, SLAVE_ADDRESS);
    // go into listening mode
    SetUpRx();
    while (1) {
        __bis_SR_register(LPM4_bits + GIE);
        // Message received over I2C, check if we have anything to transmit
        switch (mode) {
            case I2C_RX_DONE:
                DecodeRx();
                if (tx_len > 0) {
                    // start a reply
                    SetUpTx();
                } else {
                    // nothing to do, back to listening
                    mode = RX_MODE;
                }
                break;
            case I2C_TX_DONE:
                // go back to listening
                SetUpRx();
                break;
            default:
                break;
        }
    }
}
/**
* I2C interrupt routine
*/
#pragma vector=USCI_B0_VECTOR
__interrupt void USCI_B0_ISR(void) {
    switch (__even_in_range(UCB0IV, 12)) {
        case USCI_I2C_UCSTTIFG:
            break;
        case USCI_I2C_UCRXIFG:
            *rx_data = USCI_B_I2C_slaveGetData(USCI_B0_BASE);
            ++rx_data;
            break;
        case USCI_I2C_UCTXIFG:
            if (tx_len > 0) {
                USCI_B_I2C_slavePutData(USCI_B0_BASE, 63);
                --tx_len;
            }
            break;
        case USCI_I2C_UCSTPIFG:
            // OR'ing mode will let it be flagged in the main loop
            mode |= I2C_DONE;
            __bic_SR_register_on_exit(LPM4_bits);
            break;
    }
}
Any help on this would be much appreciated!
Thank you!
I am working on a C project on the Raspberry Pi 2 in which the Pi polls a microcontroller via SPI whenever the microcontroller asserts a particular pin.
There are two functions that are intended to execute in this fashion. One of them, LNK_pollNetwork(), checks whether the request pin is high, and if it is, downloads data until the pin goes low again, then parses it. The other, LNK_generateNetStat(), sends a byte requesting the network status and then downloads data until the pin goes low again.
It seems that if I place a printf statement just after the poll byte in LNK_generateNetStat(), everything works fine. If I remove the printf, the program goes haywire, and it is clear to me that the other function is executing almost in parallel with this one.
Both functions are on the same thread... or so I believe.
void LNK_generateNetStat(){
    uint8_t statusSerialString[(NETSTAT_HEADER_BYTES
                                + (MAX_NUM_OF_NODES
                                   * NETSTAT_FIELD_WIDTH_BYTES))];
    uint8_t i = 0;
    SPI_transfer(0x86);
    /* if this printf is executed, all works normally */
    if(verbose)
        printf("Network status polled");
    while(bcm2835_gpio_lev(REQ_PIN) == HIGH){
        statusSerialString[i] = SPI_transfer(0xfe);
        i++;
    }
    /* parsing code below this line */
    /* ... */
}
The other function simply polls the request pin and, if it is high, then starts pulling data until the pin is low:
void LNK_pollNetwork(){
    if(bcm2835_gpio_lev(REQ_PIN) == HIGH){
        int i = 0;
        uint8_t payload[PAYLOAD_MAX_LENGTH];
        SPI_transfer(0xff); // dummy read - allows slave to load the buffer
        while(bcm2835_gpio_lev(REQ_PIN) == HIGH){
            payload[i] = SPI_transfer(0xff);
            i++;
        }
        /* payload parsing below this line */
        /* ... */
    }
}
Both of these should execute sequentially. There is a higher-level task manager that executes LNK_pollNetwork() every 1 ms and LNK_generateNetStat() every 1250 ms.
I know that LNK_generateNetStat() is being pre-empted because I used different dummy bytes in each function's SPI polling routine to identify what is going on. The 0x86 executes normally and should be followed by polling bytes of 0xfe, but in many cases I see 0xff bytes, and sometimes the two are intermixed. I'm observing this with a logic analyzer.
Thoughts?
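One cheap way to test the pre-emption theory (assuming the task manager can invoke the 1 ms task while the 1250 ms task is still running, e.g. from a signal or timer callback) is a busy flag that makes the two transfer sequences mutually exclusive. spi_busy is a name invented for this sketch, not part of the original code:

static volatile int spi_busy = 0;   /* hypothetical guard */

void LNK_pollNetwork(){
    if(spi_busy)
        return;                     /* another SPI sequence is in flight */
    spi_busy = 1;
    /* ... original body of LNK_pollNetwork() ... */
    spi_busy = 0;
}

/* LNK_generateNetStat() gets the same guard at entry and exit. If the
   corruption stops, something really is re-entering the link layer. */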
I am transmitting data from my PIC24H microcontroller over 460 kbaud UART to a Bluetooth radio module. Under most conditions this flow works just fine, and the Bluetooth module uses the CTS and RTS lines for flow control when its internal data buffers are full. However, there is a bug of some kind in the Bluetooth module that resets it when data is sent to it continuously without any breaks, which happens if my data gets backed up at another bottleneck.
It would be nice if the module worked properly, but that's out of my control. So it seems my only option is to do some data throttling on my end to make sure I don't exceed the data throughput limit (which I know roughly by experimentation).
My question is how to implement data-rate throttling.
My current UART implementation is a 1024-byte circular FIFO buffer in RAM that the main loop writes data to. The PIC triggers a peripheral interrupt when the last byte has been sent out by the UART hardware, and my ISR reads the next byte from the buffer and writes it to the UART hardware.
Here's an idea of the source code:
uart_isr.c
//*************** Interrupt Service routine for UART2 Transmission
void __attribute__ ((interrupt,no_auto_psv)) _U2TXInterrupt(void)
{
    //the UART2 Tx Buffer is empty (!UART_TX_BUF_FULL()), fill it
    //Only if data exists in data buffer (!isTxBufEmpty())
    while(!isTxBufEmpty() && !UART_TX_BUF_FULL()) {
        if(BT_CONNECTED)
        {   //Transmit next byte of data
            U2TXREG = 0xFF & (unsigned int)txbuf[txReadPtr];
            txReadPtr = (txReadPtr + 1) % TX_BUFFER_SIZE;
        }else{
            break;
        }
    }
    IFS1bits.U2TXIF = 0;
}
uart_methods.c
//return false if buffer overrun
BOOL writeStrUART(WORD length, BYTE* writePtr)
{
    BOOL overrun = TRUE;
    while(length)
    {
        txbuf[txWritePtr] = *(writePtr);
        //increment writePtr
        txWritePtr = (txWritePtr + 1) % TX_BUFFER_SIZE;
        if(txWritePtr == txReadPtr)
        {
            //write pointer has caught up to read, increment read ptr
            txReadPtr = (txReadPtr + 1) % TX_BUFFER_SIZE;
            //Set overrun flag to FALSE
            overrun = FALSE;
        }
        writePtr++;
        length--;
    }
    //Make sure that Data is being transmitted
    ensureTxCycleStarted();
    return overrun;
}
void ensureTxCycleStarted()
{
    WORD oldPtr = 0;
    if(IS_UART_TX_IDLE() && !isTxBufEmpty())
    {
        //write one byte to start UART transmit cycle
        oldPtr = txReadPtr;
        txReadPtr = (txReadPtr + 1) % TX_BUFFER_SIZE; //Preincrement pointer
        //Note: if pointer is incremented after U2TXREG write,
        //      the interrupt will trigger before the increment
        //      and the first piece of data will be retransmitted.
        U2TXREG = 0xFF & (unsigned int)txbuf[oldPtr];
    }
}
Edit
There are two ways that throttling could be implemented as I see it:
Enforce a time delay in between UART byte to be written that puts an upper limit on data throughput.
Keep a running tally of bytes transmitted over a certain time frame and if the maximum number of bytes is exceeded for that timespan create a slightly longer delay before continuing transmission.
Either option would theoretically work; it's the implementation I'm wondering about.
Maybe a quota approach is what you want.
Using a periodic interrupt of a relevant timescale, add a quota of "bytes allowed to be transmitted" to a global variable, capped at a level chosen to stay under the module's limit.
Then just check whether there is quota left before sending a byte. On a new transmission there will be an initial burst, but after that the quota will limit the transmission rate.
~~some periodic interrupt
if(bytes_to_send < MAX_LEVEL){
    bytes_to_send = bytes_to_send + BYTES_PER_PERIOD;
}
~~in uart_send_byte
if(bytes_to_send){
    bytes_to_send = bytes_to_send - 1;
    //then send the byte
}
If you have a free timer, or if you can piggy-back on an existing one, you could do a kind of "debounce" of the bytes sent.
Imagine you have a global variable, byte_interval, and a timer overflowing (and triggering its ISR) every microsecond. Then it could look something like this:
timer_usec_isr() {
    // other stuff
    if (byte_interval)
        byte_interval--;
}
And then in the "putchar" function, you could have something like:
uart_send_byte(unsigned char b) {
    if (!byte_interval) { // this could be a while too,
                          // depends on how you want to structure the code
        //code to send the byte
        byte_interval = AMOUNT_OF_USECS;
    }
}
I'm sorry I didn't look closely at your code, or I could be more specific.
This is just an idea; I don't know if it fits your case.
First, there are two types of serial flow control in common use:
CTS/RTS handshaking ('hardware flow control')
XON/XOFF ('software flow control')
You say CTS/RTS is in use, but you might want to see if XON/XOFF can be enabled in some way.
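If the module does support it, the receiving side of XON/XOFF on the PIC24 is small. A sketch, under the assumption that the module emits the conventional control bytes (0x11/0x13), reusing the buffer names from the question; txPaused is a flag invented here:

#define XON  0x11
#define XOFF 0x13

volatile BOOL txPaused = FALSE;     /* hypothetical flag, checked in the TX ISR */

void __attribute__((interrupt, no_auto_psv)) _U2RXInterrupt(void)
{
    while (U2STAbits.URXDA) {       /* drain the receive FIFO */
        unsigned char c = (unsigned char)U2RXREG;
        if (c == XOFF)
            txPaused = TRUE;        /* module is full: stop feeding U2TXREG */
        else if (c == XON) {
            txPaused = FALSE;
            ensureTxCycleStarted(); /* resume the transmit cycle */
        }
    }
    IFS1bits.U2RXIF = 0;
}

/* _U2TXInterrupt() then adds '&& !txPaused' to its while condition. */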
Another approach, if you can configure it, is simply to use a lower baud rate. This obviously depends on what you can configure at the other end of the link, but it's usually the easiest way to fix problems when a device can't cope with higher-speed transfers.
A timer approach which adds delay to Tx at specific times:
Configure a free-running timer at an appropriate periodic rate.
In the timer ISR, toggle a bit in a global state variable (delayBit).
In the UART Tx ISR, if delayBit is high and delayPostedBit is low, set delayPostedBit and exit the ISR without clearing the Tx interrupt flag; because the flag is still set, the ISR is entered again, so this inserts a delay equal to one interrupt-schedule latency. If delayBit is low, clear delayPostedBit. This is not a busy-wait delay, so it won't affect the timing of the rest of the system.
Adjust the period of the timer to add latency at appropriate intervals.
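A minimal sketch of that scheme against the _U2TXInterrupt() from the question, assuming Timer 1 is free for pacing (register names per the PIC24 family; delayBit and delayPostedBit are the names from the description above):

volatile unsigned char delayBit = 0;        /* toggled by the pacing timer */
volatile unsigned char delayPostedBit = 0;  /* set once per delay window */

void __attribute__((interrupt, no_auto_psv)) _T1Interrupt(void)
{
    delayBit ^= 1;              /* toggle at the pacing rate */
    IFS0bits.T1IF = 0;
}

void __attribute__((interrupt, no_auto_psv)) _U2TXInterrupt(void)
{
    if (delayBit && !delayPostedBit) {
        delayPostedBit = 1;
        return;                 /* U2TXIF stays set, so the ISR re-enters:
                                   one interrupt latency of delay inserted */
    }
    if (!delayBit)
        delayPostedBit = 0;

    while (!isTxBufEmpty() && !UART_TX_BUF_FULL()) {
        U2TXREG = 0xFF & (unsigned int)txbuf[txReadPtr];
        txReadPtr = (txReadPtr + 1) % TX_BUFFER_SIZE;
    }
    IFS1bits.U2TXIF = 0;
}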