I am trying to update the payload inside BLE advertisement beacons. My dev board is the NUCLEO-WB15CC, based on the dual-core STM32WB15CC.
Whenever I call aci_gap_update_adv_data(sizeof(manuf_data), manuf_data), the program hangs between the // LED lines I have added:
static void SendCmd(uint16_t opcode, uint8_t plen, void *param)
{
// vorac
HAL_GPIO_WritePin((GPIO_TypeDef *)0x48000400, (uint16_t)0x0020, GPIO_PIN_RESET); // LED1
pCmdBuffer->cmdserial.cmd.cmdcode = opcode;
HAL_GPIO_WritePin((GPIO_TypeDef *)0x48000400, (uint16_t)0x0001, GPIO_PIN_RESET); // LED2
pCmdBuffer->cmdserial.cmd.plen = plen;
memcpy( pCmdBuffer->cmdserial.cmd.payload, param, plen );
hciContext.io.Send(0,0);
return;
}
pCmdBuffer is not NULL; it actually points to PLACE_IN_SECTION("MB_MEM1") ALIGN(4) static TL_CmdPacket_t BleCmdBuffer;.
If that is changed to an ordinary global variable, the problem shifts down to the hciContext.io.Send(0,0); line.
I suspect some problem with the inter-processor communication but can't fathom where to look. Suggestions?
Sounds to me like the value in pCmdBuffer is not aligned and the processor requires strict alignment.
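If you want to confirm that before digging into the IPC layer, a quick run-time check is enough. A minimal sketch (the helper below is only an illustration, not part of the ST code):
#include <stdint.h>

/* Returns 1 if the buffer address is 4-byte aligned, as the ALIGN(4) attribute is meant
   to guarantee; call it on pCmdBuffer just before SendCmd() dereferences it. */
static int cmd_buffer_is_aligned(const void *p)
{
    return ((uintptr_t)p & 0x3u) == 0u;
}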
I'm trying to understand how the STM32F091VB manages sending data over the serial port with the function HAL_UART_Transmit_IT().
At the moment I have a function, called from main(), that builds the packet and sends it over the serial port; it looks something like this:
tx1[0] = STX;
tx1[1] = 0xFF;
tx1[2] = 0x80;
tx1[3] = 0x80;
DE_TAST_HIGH;
HAL_UART_Transmit_IT(&huart3, tx1, 8);
Now, the data I'm sending is quite small, so the code runs pretty fast, and I'm trying to understand what happens if I send a much larger packet over the serial port.
For instance, if my tx1[] is 100 bytes, does the HAL_UART_Transmit_IT() function block the CPU while the full packet is sent to the serial port, or does it work more like a separate process, where I tell the micro to send that packet and, while it is being sent, the rest of my code/main function keeps running?
I've searched the micro's datasheet to see if there was anything about this, but I had no luck. I've read stm32f0xx_hal_uart.c and it confirms that the data is sent via interrupt in a non-blocking mode, but I would like some more in-depth documentation about it.
First of all, you need to understand how HAL_UART_Transmit_IT() is meant to be used. We can get some help from the ST FAQ.
The function is "non-blocking" because when you call it, it only configures the interrupts and then returns. The buffer is not transmitted during the call to your function; the heavy lifting is deferred to a later stage.
We can also have a look at the source code to back this up (note: I kept only the juicy parts).
Blocking
HAL_StatusTypeDef HAL_UART_Transmit(UART_HandleTypeDef *huart, uint8_t *pData, uint16_t Size, uint32_t Timeout)
{
uint16_t* tmp;
uint32_t tickstart = 0U;
// [ ... ]
huart->TxXferSize = Size;
huart->TxXferCount = Size;
while(huart->TxXferCount > 0U)
{
// [ ... ]
// This is where the actual HW regs are accessed, starting the transfer
huart->Instance->DR = (*pData++ & (uint8_t)0xFF);
}
// [ ... ]
return HAL_OK;
}
Non-blocking
HAL_StatusTypeDef HAL_UART_Transmit_IT(UART_HandleTypeDef *huart, uint8_t *pData, uint16_t Size)
{
huart->pTxBuffPtr = pData;
huart->TxXferSize = Size;
huart->TxXferCount = Size;
/* Enable the UART Transmit data register empty Interrupt */
// This is the only part where HW regs are accessed. What is happening here??
SET_BIT(huart->Instance->CR1, USART_CR1_TXEIE);
return HAL_OK;
}
The _IT function only enables one interrupt, the TXE (transmit data register empty) interrupt described in the reference manual.
This means we will receive an interrupt whenever the TX data register is free. Who is actually sending the data, then?
With the help of the FAQs and reading the source code, we find that void HAL_UART_IRQHandler(UART_HandleTypeDef *huart) does something like this:
/* UART in mode Transmitter ------------------------------------------------*/
if(((isrflags & USART_SR_TXE) != RESET) && ((cr1its & USART_CR1_TXEIE) != RESET))
{
UART_Transmit_IT(huart);
return;
}
This in turn calls UART_Transmit_IT():
static HAL_StatusTypeDef UART_Transmit_IT(UART_HandleTypeDef *huart)
{
uint16_t* tmp;
/* Check that a Tx process is ongoing */
if(huart->gState == HAL_UART_STATE_BUSY_TX)
{
huart->Instance->DR = (uint8_t)(*huart->pTxBuffPtr++ & (uint8_t)0x00FF);
if(--huart->TxXferCount == 0U)
{
/* Disable the UART Transmit data register empty Interrupt */
CLEAR_BIT(huart->Instance->CR1, USART_CR1_TXEIE);
/* Enable the UART Transmit Complete Interrupt */
SET_BIT(huart->Instance->CR1, USART_CR1_TCIE);
}
return HAL_OK;
}
else
{
return HAL_BUSY;
}
}
This function transmits only one byte! It then decrements the transfer counter (remember, all the bookkeeping was stored in the UART handle), and when it reaches 0 it disables the TXE interrupt and enables the transmit complete (TC) interrupt instead.
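To tie this back to the original question: the call itself returns immediately and main() keeps running; if you need to know when the whole buffer has gone out (for example before reusing it), you can override the HAL's weak completion callback. A minimal sketch, assuming the huart3 handle from the question:
extern UART_HandleTypeDef huart3;
volatile uint8_t tx_done = 0;

/* Called by HAL_UART_IRQHandler() once the last byte has been fully transmitted (TC) */
void HAL_UART_TxCpltCallback(UART_HandleTypeDef *huart)
{
    if (huart->Instance == USART3)
        tx_done = 1;
}

void send_packet(uint8_t *buf, uint16_t len)
{
    tx_done = 0;
    if (HAL_UART_Transmit_IT(&huart3, buf, len) == HAL_OK)
    {
        /* free to do other work here; poll tx_done before reusing buf
           or starting the next transmission */
    }
}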
Interrupts
Note that STM32Cube does the peripheral initialization and interrupt wiring for you, but if you program from scratch you need to remember to write and register the UART IRQ handler yourself and have it call HAL_UART_IRQHandler().
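For reference, a minimal wiring sketch (assuming USART3 on the STM32F091 from the question, where USARTs 3 to 8 share one interrupt vector); the vector-table handler only forwards to the HAL, which then feeds one byte per TXE interrupt through UART_Transmit_IT():
extern UART_HandleTypeDef huart3;

void USART3_8_IRQHandler(void)
{
    HAL_UART_IRQHandler(&huart3);
}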
You can use the code navigator here to review my snippets and investigate further.
I am writing a C program in Eclipse to communicate over I2C from my ARM Cortex-M4F microcontroller with its master, another MCU.
In my I2C library, I use a static global variable to store all the parameters of the communication (address, lengths, data buffers). The issue is that part of this variable (an array containing the data to be transmitted, which are 8-bit integers) gets modified when the interrupt (a start condition followed by the slave's address on the I2C bus) happens, even before the code I put in the handler executes. It gets set to 8, whatever the initial value was.
I tried putting breakpoints basically everywhere, and a watchpoint on the variable; the change arises seemingly from nowhere, not in the while loop, and before the call to my_I2C_Handler(), so the interrupt is apparently the cause.
I also tried setting the variable as volatile, but that changed nothing.
I noticed one interesting thing: putting a printf of the array's data during my_I2C_Init() or my_I2C_SlaveAsync(), like so:
printf("%d\n", req.tx_data[0]);
corrects this problem, but why? I want to remove all prints after debugging.
#include <stdint.h>
#include <stdio.h>
#include "my_i2c.h"
void I2C1_IRQHandler(void)
{
printf("\nI2C Interrupt\n");
my_I2C_Handler(MXC_I2C1); // MXC_I2C1 is a macro for the I2C peripheral instance (register block) used
}
int main(void)
{
int error = 0;
printf("\nStarting I2C debugging\n");
// Setup the I2C
my_I2C_Shutdown(MXC_I2C1);
my_I2C_Init(MXC_I2C1);
NVIC_EnableIRQ(I2C1_IRQn); // Enable interrupt
my_I2C_SlaveAsync(MXC_I2C1); // Prepare to receive communication
while (1)
{
LED_On(0);
LED_Off(0);
}
printf("\nDone testing\n");
return 0;
}
The structure of the request containing the parameters of the communication is like this:
typedef struct i2c_req i2c_req_t;
struct i2c_req {
uint8_t addr; // I2C 7-bit Address
unsigned tx_len; // Length of tx data
unsigned rx_len; // Length of rx
unsigned tx_num; // Number of tx bytes sent
unsigned rx_num; // Number of rx bytes sent
uint8_t *tx_data; // Data for master write/slave read
uint8_t *rx_data; // Data for master read/slave write
};
It is declared like this at the beginning of the file:
static i2c_req_t req;
and assigned this way in my_I2C_Init():
uint8_t rx[1] = {0};
uint8_t tx[1] = {12};
req.addr = 0xAA;
req.tx_data = tx;
req.tx_len = 1;
req.rx_data = rx;
req.rx_len = 1;
req.tx_num = 0;
req.rx_num = 0;
Many thanks for your help
I am receiving/reading data from a GPS module sent via USART3 to the STM32F091.
The data gets there just fine, which I confirm by sending it to my PC's COM3 port and feeding it to 'u-center' (GPS evaluation software).
My problem is that I want to evaluate the data myself in my C program, and for that purpose I feed it into a ring buffer; however, every character of the GPS stream is written to the buffer multiple times instead of just once.
For example
GGGGGGGPPPPPPPPSSSSSSSS instead of GPS
I am unsure what I'm doing wrong; maybe it's something really obvious that I'm overlooking after staring at this code for so long.
Here's the relevant code.
stm32f0xx_it.c
#include <main.h>
void USART3_8_IRQHandler(void)
{
if (USART_FLAG_RXNE != RESET)
{
uint16_t byte = 0;
/* Data reception */
/* Clear Overrun Error Flag, necessary when RXNE is used */
USART_GetITStatus(USART3, USART_IT_ORE);
/* Read from Receive Data Register and put into byte */
byte = USART_ReceiveData(USART3);
(*pRXD3).wr = ((*pRXD3).wr + 1) % (*pRXD3).max;
(*pRXD3).Buffer[(*pRXD3).wr] = byte;
/* Send Data to PC, and reset Transmission Complete Flag */
USART_GetITStatus(USART1, USART_IT_TC);
USART_SendData(USART1, byte);
return;
}
return;
}
uartGPS.h
....
struct GPSuart
{
BYTE Buffer[255];
WORD max;
WORD re;
WORD wr;
};
....
main.h
....
extern volatile BYTE B_ser_txd_3[255];
extern volatile BYTE B_ser_rxd_3[255];
extern volatile struct GPSuart TXD_uart_3;
extern volatile struct GPSuart RXD_uart_3;
extern volatile struct GPSuart *pRXD3;
extern volatile struct GPSuart *pTXD3;
....
Let me know if I should provide additional information.
This:
if (USART_FLAG_RXNE != RESET)
does not test the flag; it compares the flag constant itself against RESET, which is not what you meant.
You need more code to actually read the UART's status register and check the flag:
if (USART_GetFlagStatus(USARTx, USART_FLAG_RXNE) != RESET)
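For completeness, a sketch of how the handler from the question might look with that check in place (assuming the SPL-style API and the pRXD3 ring buffer already used there); the flag test is the only substantive change:
void USART3_8_IRQHandler(void)
{
    if (USART_GetFlagStatus(USART3, USART_FLAG_RXNE) != RESET)
    {
        /* Reading the data register returns the byte and clears RXNE */
        uint16_t byte = USART_ReceiveData(USART3);

        pRXD3->wr = (pRXD3->wr + 1) % pRXD3->max;
        pRXD3->Buffer[pRXD3->wr] = (BYTE)byte;

        /* Echo the byte to the PC, as in the original code */
        USART_SendData(USART1, byte);
    }
}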
I'm currently developing a system which involves sending a request string to a sensor device connected via UART to an Atmel SAML21 Xplained Pro board. I'm testing with an Arduino board as the "sensor device", but eventually, it'll be for a Rotronic HC-2 sensor.
The process goes something like this:
MCU sends string { 99RDD} over UART to sensor
-> delay of up to 500ms
-> Response string of 99 bytes sent back via UART
-> Response transmitted to virtual com port on embedded debugger
My issue is that, for some reason, I'm either not getting anything sent back, or it sends back the contents of the variable request_msg.
I know that the response from the sensor should be 99 bytes of ASCII, and I've tested both the actual sensor, and the Arduino test board over serial connectors to ensure that the readings are coming back correctly.
The software is using Atmel ASF v4.0, which is great when it works, but the documentation is fairly flaky, so I was hoping someone with more experience could point me as to where I'm going wrong in the code.
I have the following code for my main application:
#include "atmel_start.h"
#include "atmel_start_pins.h"
#include <string.h>
static uint8_t example_hello_world[14] = "Hello World!\n";
static uint8_t example_error_msg[13] = "UART Error!\n";
static uint8_t request_msg[24] = "Sending Sensor Request\n";
static uint8_t rotronic_ascii[8] = "{ 99RDD}";
volatile static uint32_t data_arrived = 0;
volatile static uint32_t reading_received = 0;
static void tx_cb_EDBG_COM(const struct usart_async_descriptor *const io_descr)
{
/* Transfer completed */
gpio_toggle_pin_level(LED0);
}
static void rx_cb_EDBG_COM(const struct usart_async_descriptor *const io_descr)
{
/* Receive completed */
data_arrived = 1;
}
static void err_cb_EDBG_COM(const struct usart_async_descriptor *const io_descr)
{
/* error handle */
io_write(&EDBG_COM.io, example_error_msg, 13);
}
static void tx_cb_COM1(const struct usart_async_descriptor *const io_descr)
{
/* Transfer completed */
gpio_toggle_pin_level(LED0);
}
static void rx_cb_COM1(const struct usart_async_descriptor *const io_descr)
{
/* Receive completed */
reading_received = 1;
}
static void err_cb_COM1(const struct usart_async_descriptor *const io_descr)
{
/* error handle */
io_write(&COM1.io, example_error_msg, 13);
}
int main(void)
{
volatile uint8_t recv_char[99];
atmel_start_init();
// Setup the EDBG Serial Port
usart_async_register_callback(&EDBG_COM, USART_ASYNC_TXC_CB, tx_cb_EDBG_COM);
usart_async_register_callback(&EDBG_COM, USART_ASYNC_RXC_CB, rx_cb_EDBG_COM);
usart_async_register_callback(&EDBG_COM, USART_ASYNC_ERROR_CB, err_cb_EDBG_COM);
usart_async_enable(&EDBG_COM);
// Send a test string to ensure EDBG Serial is working
io_write(&EDBG_COM.io, example_hello_world, 14);
// Setup the Rotronic [Arduino] Serial Port
usart_async_register_callback(&COM1, USART_ASYNC_TXC_CB, tx_cb_COM1);
usart_async_register_callback(&COM1, USART_ASYNC_RXC_CB, rx_cb_COM1);
usart_async_register_callback(&COM1, USART_ASYNC_ERROR_CB, err_cb_COM1);
usart_async_enable(&COM1);
while (1) {
if (reading_received == 0)
{
// Delay for a Bit
delay_ms(5000);
// Notify the EDBG COM Port
io_write(&EDBG_COM.io, request_msg, 24);
// Send the Rotronic ASCII
io_write(&COM1.io, rotronic_ascii, 8);
}
// Check if Reading has been Received
if (reading_received == 1)
{
while (io_read(&COM1.io, &recv_char, 99) == 99)
{
// Write what's on the buffer from the receiver
io_write(&EDBG_COM.io, recv_char, 99);
}
// Reset the flag
reading_received = 0;
}
}
}
You seem to be coding for ASF v3. In v4, your receive callback is triggered for any incoming bytes, not only once when your buffer is full (i.e. once all 99 characters have been received).
That means that io_read will most probably never return 99 (because it only ever sees a partial read of your message) and you will most probably never send anything back.
Note the docs say (Scroll down to "different read function behavior..."):
In ASFv4 a data reception type callback in a driver with a ring buffer is triggered for every received data.
The UART apparently is a driver with a ring buffer.
You need to repeatedly call io_read and sum up the number of received bytes until you have got 99. Only then proceed. The ASF docs have an example for that. Make sure you copy code from there that fits your version.
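A sketch of that accumulation idea (assuming the COM1/EDBG_COM descriptors and the fixed 99-byte response from the question; io_read returns the number of bytes it actually copied out of the driver's ring buffer):
static uint8_t response[99];

static void read_full_response(void)
{
    uint16_t total = 0;

    while (total < sizeof(response)) {
        int32_t n = io_read(&COM1.io, &response[total], sizeof(response) - total);
        if (n > 0) {
            total += (uint16_t)n;
        }
        /* a real implementation should also time out, so a short or garbled reply
           cannot hang here forever */
    }

    /* Forward the complete reading to the EDBG virtual COM port */
    io_write(&EDBG_COM.io, response, sizeof(response));
}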
I am using the Nucleo F401RE board, based on the STM32F401RET6 microcontroller. I connected a micro SD slot to the board and am interested in writing data to the SD card and reading data from it. I used STM32CubeMX to generate code, in particular the SD library with its built-in functions. I tried to write a simple program that writes an array to a specific address and then tries to read the same data back. The code is as follows:
int main(void)
{
/* Reset of all peripherals, Initializes the Flash interface and the Systick. */
HAL_Init();
/* Configure the system clock */
SystemClock_Config();
/* Initialize all configured peripherals */
MX_GPIO_Init();
MX_USART2_UART_Init();
MX_SDIO_SD_Init();
char buffer[14] = "Hello, world\n";
uint32_t to_send[512] ; // Te
uint32_t to_receive[512];
uint64_t address = 150;
HAL_SD_WriteBlocks(&hsd, to_send, address, 512, 1);
HAL_SD_ReadBlocks(&hsd, to_receive, address, 512, 1);
while (1)
{
HAL_UART_Transmit(&huart2, (uint8_t *)buffer, 14, 1000);
HAL_UART_Transmit(&huart2, (uint8_t *)to_receive, 512, 1000);
}
}
The code stops in the middle of the function HAL_Init() and I get the following message:
The stack pointer for stack 'CSTACK' (currently 0x1FFFFD30) is outside the stack range (0x20000008 to 0x20000408)
This message doesn't appear when I don't use the functions HAL_SD_WriteBlocks() or HAL_SD_ReadBlocks(). If someone has already had this problem and knows how to fix it, some help would be appreciated. If needed, I can add the rest of the code.
You're using too much stack space. You can adjust the allocated stack space in the linker script and increase it if needed.
However, you can probably avoid that by writing your code differently. In your example above, you're allocating large buffers (4 kB in total) on the stack. Don't do that unless absolutely necessary. I'm referring to this:
int main(void) {
// ...
uint32_t to_send[512];
uint32_t to_receive[512];
// ...
}
Instead, allocate your buffers like this:
uint32_t to_send[512];
uint32_t to_receive[512];
int main(void) {
// ...
}
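Alternatively, if you prefer to keep the names scoped to main(), marking the buffers static has the same effect: they end up in .bss instead of on CSTACK. A sketch:
int main(void) {
    // ...
    static uint32_t to_send[512];    /* static locals live in .bss, not on the stack */
    static uint32_t to_receive[512];
    // ...
}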