Bluetooth Low Energy GATT data sending too much information - C

I have an issue where I pass 51 bytes to a BLE characteristic and then call a function to send them to my phone over a pre-established GATT connection; however, it sends far more data than 51 bytes.
I have a function called ble_cus_send_csv that takes several parameters: my custom structure, my data, the data length, and the connection handle.
I pass it 51 chars, which I believe is 51 bytes; sizeof() on the buffer gives 51.
I then run my ble_cus_send_csv function and it outputs my 51 bytes of data followed by many more bytes afterwards.
I have attached my send function below along with my output. It should output a fixed 51 bytes.
I am using a Nordic nRF52840-DK board with SEGGER Embedded Studio. The code is written in C.
Thanks,
My ble_cus_send_csv function:
uint32_t ble_cus_send_csv(ble_cus_t * p_cus,
                          uint8_t   * p_data,
                          uint16_t  * p_length,
                          uint16_t    conn_handle)
{
    ble_gatts_hvx_params_t hvx_params;

    NRF_LOG_INFO("Sending CSV.\r\n");

    if (p_cus == NULL)
    {
        return NRF_ERROR_NULL;
    }

    uint32_t err_code = NRF_SUCCESS;

    // Send value if connected and notifying.
    if ((p_cus->conn_handle != BLE_CONN_HANDLE_INVALID)) // Setup the parameters to pass into the characteristic value
    {
        memset(&hvx_params, 0, sizeof(hvx_params));

        hvx_params.handle = p_cus->custom_value_handles.value_handle;
        hvx_params.p_data = p_data;
        hvx_params.p_len  = p_length;
        hvx_params.type   = BLE_GATT_HVX_NOTIFICATION;

        err_code = sd_ble_gatts_hvx(conn_handle, &hvx_params); // Set the characteristic
        NRF_LOG_INFO("sd_ble_gatts_hvx result: %x. \r\n", err_code);
    }
    else
    {
        err_code = NRF_ERROR_INVALID_STATE;
        NRF_LOG_INFO("sd_ble_gatts_hvx result: NRF_ERROR_INVALID_STATE. \r\n");
    }

    return err_code;
}
(Screenshot: p_length in the debugger)
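For context, the call looks roughly like this; the buffer and handle names below are illustrative placeholders rather than my exact code:

// Sketch of the call site (placeholder names, not the exact application code)
static uint8_t csv_buf[51];                // 51 bytes of CSV data
uint16_t       csv_len = sizeof(csv_buf);  // sizeof() gives 51 here

// Per the SoftDevice API, the p_len field inside ble_gatts_hvx_params_t is
// in/out: in = number of bytes to notify, out = number of bytes actually sent.
ble_cus_send_csv(&m_cus, csv_buf, &csv_len, m_conn_handle);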


Correcting sensor values read through UART on ESP32

I am using the ESP-IDF SDK to develop a small project that reads sensor data over UART. I am following the data sheet provided by the manufacturer to parse the bytes and calculate the values of the different parameters, but the output on serial is not correct, and I get a different (wrong) output every time.
Code:
#include "freertos/FreeRTOS.h"
#include "freertos/task.h"
#include "esp_system.h"
#include "esp_log.h"
#include "driver/uart.h"
#include "string.h"
#include "driver/gpio.h"
static const int RX_BUF_SIZE = 1024;
#define TXD_PIN (GPIO_NUM_4)
#define RXD_PIN (GPIO_NUM_5)
#define DELAY_IN_MS(t) (((portTickType)t*configTICK_RATE_HZ)/(portTickType)1000)
void init(void) {
const uart_config_t uart_config = {
.baud_rate = 4800,
.data_bits = UART_DATA_8_BITS,
.parity = UART_PARITY_DISABLE,
.stop_bits = UART_STOP_BITS_1,
.flow_ctrl = UART_HW_FLOWCTRL_DISABLE,
.source_clk = UART_SCLK_APB,
};
uart_driver_install(UART_NUM_1, RX_BUF_SIZE * 2, 129, 0, NULL, 0);
uart_param_config(UART_NUM_1, &uart_config);
uart_set_pin(UART_NUM_1, TXD_PIN, RXD_PIN, UART_PIN_NO_CHANGE, UART_PIN_NO_CHANGE);
}
static void rx_task(void *arg)
{
static const char *RX_TASK_TAG = "RX_TASK";
esp_log_level_set(RX_TASK_TAG, ESP_LOG_INFO);
uint8_t data[7] = {0};
uint8_t PR=0,spo2=0,temprature=0;
while (1) {
const int rxBytes = uart_read_bytes(UART_NUM_1, data, 7, 1);
if (rxBytes == 7) {
printf("The rxbytes %d and %s\n",rxBytes,data);
PR = (data[3] & 0x7F) + ((data[2] & 0x40)<<1);
spo2 = (data[4] & 0x7F);
temprature = (data[5] & 0x7F);
printf("PR is %d , Spo2 is %d , temperature is %d \n",PR,spo2,temprature);
ESP_LOGI(RX_TASK_TAG, "Read %d bytes: '%s'", rxBytes, data);
ESP_LOG_BUFFER_HEXDUMP(RX_TASK_TAG, data, rxBytes, ESP_LOG_INFO);
memset(data,0,7);
}
}
}
void app_main(void)
{
init();
xTaskCreate(rx_task, "uart_rx_task", 1024*2, NULL, configMAX_PRIORITIES, NULL);
}
The output of the program on the serial monitor is:
The data sheet provided by the manufacturer is:
https://drive.google.com/file/d/1lPATxeXXreVZkg9Ufg9BnyCrl4EsbJAj/view?usp=sharing
Please correct me if I have misread the data format used to calculate the values.
Assembling messages from an unreliable channel such as a serial line means you can't rely on them always arriving in neat, complete chunks, so you have to take precautions against junk.
The code assumes that it will always receive these 7-byte messages in 7-byte chunks, and it doesn't always work that way. Line noise or timeouts can cause a proper message to arrive in multiple chunks (say, 4 bytes then 3 bytes), or cause bytes to be lost entirely.
To see if this is part of the problem, add logging on every read, not just on the ones that you expect:
static void rx_task(void *arg)
{
    ...
    while (1) {
        const int rxBytes = uart_read_bytes(UART_NUM_1, data, 7, 1);

        // Log ALL reads, not just the ones you expect
        ESP_LOGI(RX_TASK_TAG, "Read %d bytes: '%s'", rxBytes, data);
        ESP_LOG_BUFFER_HEXDUMP(RX_TASK_TAG, data, rxBytes, ESP_LOG_INFO);

        if (rxBytes == 7) {
            ...
        }
    }
}
This will probably confirm my hunch.
In any case, you can't ever rely on fixed-size reads, because if the stream gets out of sync once, it will never recover on its own. This means you have to build in your own protections.
Reading the data sheet for the sensor, it says that the first byte of every 7-byte message has the high bit set, so this is perfect for resynchronization: you ignore everything until you get that start byte, then read 6 more bytes, then you have a full message.
So you end up needing two buffers: one for the message you're assembling, and one for doing raw I/O from the sensor, copying to the real message buffer as you verify the sync.
A quick-and-dirty method would look like this:
static void rx_task(void *arg)
{
    static const char *RX_TASK_TAG = "RX_TASK";
    esp_log_level_set(RX_TASK_TAG, ESP_LOG_INFO);

    // sensor message we're trying to build
    uint8_t message[7] = {0};
    uint8_t *msgnext = message;

    while (1) {
        uint8_t inbuf[7];
        const int rxBytes = uart_read_bytes(UART_NUM_1, inbuf, sizeof inbuf, 1);

        ESP_LOGI(RX_TASK_TAG, "Read %d bytes: '%s'", rxBytes, inbuf);
        ESP_LOG_BUFFER_HEXDUMP(RX_TASK_TAG, inbuf, rxBytes, ESP_LOG_INFO);

        // error/timeout? do something?
        if (rxBytes <= 0) continue;

        for (int i = 0; i < rxBytes; i++)
        {
            const uint8_t b = inbuf[i];

            if (b & 0x80)
            {
                // First byte of a message, reset the buffer
                msgnext = message;
                *msgnext++ = b;
            }
            else if (msgnext == message)
            {
                // not synced yet, ignore this byte
                continue;
            }
            else
            {
                *msgnext++ = b;

                if ((msgnext - message) == sizeof message)
                {
                    // WE FOUND A FULL MESSAGE
                    uint8_t PR          = (message[3] & 0x7F) + ((message[2] & 0x40) << 1);
                    uint8_t spo2        = (message[4] & 0x7F);
                    uint8_t temperature = (message[5] & 0x7F);

                    printf("PR is %d, Spo2 is %d, temperature is %d\n",
                           PR, spo2, temperature);

                    msgnext = message; // reset to empty the buffer
                }
            }
        }
    }
}
The idea is that the raw I/O is done into inbuf, which is scanned for the sync byte (the one with the high bit set); that tells you to start copying data into the real sensor buffer message. Once you have 7 bytes, it prints the result and resets the buffer.
And even if you already have a few bytes of valid message data, if another SYNC byte comes in, it assumes the previous message was corrupted, throws it away, and starts a fresh buffer.
You can add more here, such as support for timeouts, or detecting/logging when a partial message is discarded, but in no case can you avoid this data-framing layer.
Also, the I/O buffer inbuf doesn't have to be the same size as the message, and it might make sense to read from the UART in one-byte chunks (see the sketch below); on a multi-tasking operating system I probably wouldn't do this, but in the ESP environment it might make sense. That would simplify the looping some.
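For what it's worth, a one-byte-at-a-time version of the read loop might look something like this (untested sketch, same parsing as above):

// Sketch only: read one byte at a time and feed it to the same framing logic
uint8_t message[7];
uint8_t *msgnext = message;
uint8_t b;

while (1) {
    // read exactly one byte; a short tick timeout keeps the task from spinning
    if (uart_read_bytes(UART_NUM_1, &b, 1, 1) != 1)
        continue;                          // timeout or error, try again

    if (b & 0x80)                          // sync byte: start a new message
        msgnext = message;

    if ((b & 0x80) || msgnext != message)
        *msgnext++ = b;                    // store sync byte or payload byte

    if ((msgnext - message) == sizeof message) {
        // full 7-byte message assembled; parse it exactly as before
        msgnext = message;
    }
}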
EDIT: Looking at your actual data dumps, it's clear that your messages are not framed properly: even though you have 7 bytes, the SYNC byte (with the high bit set) shows up somewhere in the middle, and not in the same place each time. Clearly this is a framing issue.

How to properly utilize masks to send index information to perf event output?

According to the documentation for bpf_perf_event_output found here: http://man7.org/linux/man-pages/man7/bpf-helpers.7.html
"The flags are used to indicate the index in map for which the value must be put, masked with BPF_F_INDEX_MASK."
In the following code:
SEC("xdp_sniffer")
int xdp_sniffer_prog(struct xdp_md *ctx)
{
void *data_end = (void *)(long)ctx->data_end;
void *data = (void *)(long)ctx->data;
if (data < data_end) {
/* If we have reached here, that means this
* is a useful packet for us. Pass on-the-wire
* size and our cookie via metadata.
*/
/* If we have reached here, that means this
* is a useful packet for us. Pass on-the-wire
* size and our cookie via metadata.
*/
__u64 flags = BPF_F_INDEX_MASK;
__u16 sample_size;
int ret;
struct S metadata;
metadata.cookie = 0xdead;
metadata.pkt_len = (__u16)(data_end - data);
/* To minimize writes to disk, only
* pass necessary information to userspace;
* that is just the header info.
*/
sample_size = min(metadata.pkt_len, SAMPLE_SIZE);
flags |= (__u64)sample_size << 32;
ret = bpf_perf_event_output(ctx, &my_map, flags,
&metadata, sizeof(metadata));
if (ret)
bpf_printk("perf_event_output failed: %d\n", ret);
}
return XDP_PASS;
}
It works as you would expect and stores the information for the given CPU number.
However, suppose I want all packets to be sent to index 1.
I swap
__u64 flags = BPF_F_INDEX_MASK;
for
__u64 flags = 0x1ULL;
The code compiles correctly and throws no errors; however, no packets get saved at all anymore. What am I doing wrong if I want all of the packets to be sent to index 1?
Partial answer: I see no reason why the packets would not be sent to the perf buffer, but I suspect the error is in the user space code (not provided). It could be that you do not “open” the perf event for all CPUs when trying to read from the buffer. Have a look at the man page for perf_event_open(2): check that the combination of values for pid and cpu allows you to read data written for CPU 1.
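If it helps, here is a rough sketch of what opening the event for one specific CPU can look like on the user space side (this is generic perf_event_open usage, not your code; updating the BPF map with the returned fd and mmap-ing the ring buffer are omitted):

/* Sketch: open one perf event per CPU and store its fd in the
 * BPF_MAP_TYPE_PERF_EVENT_ARRAY at the matching index. */
#include <linux/perf_event.h>
#include <sys/syscall.h>
#include <unistd.h>

static int open_perf_event_for_cpu(int cpu)
{
    struct perf_event_attr attr = {
        .size          = sizeof(struct perf_event_attr),
        .type          = PERF_TYPE_SOFTWARE,
        .config        = PERF_COUNT_SW_BPF_OUTPUT,
        .sample_type   = PERF_SAMPLE_RAW,
        .wakeup_events = 1,
    };
    /* pid = -1: any process; cpu = the index you write to from the XDP program */
    return syscall(__NR_perf_event_open, &attr, -1, cpu, -1, 0);
}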
As a side note, this:
__u64 flags = BPF_F_INDEX_MASK;
is misleading. The mask should be used to mask the index, not to set its value. BPF_F_CURRENT_CPU should be used instead; the former only happens to work because the two enum values are the same.
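To make the distinction concrete, the two variants could be written like this (a sketch, with the sample_size handling kept as in your program):

/* Current-CPU index (what the original code effectively does): */
__u64 flags = BPF_F_CURRENT_CPU;
flags |= (__u64)sample_size << 32;

/* Explicit index 1, masked as the helper documentation describes: */
__u64 flags_idx1 = 1ULL & BPF_F_INDEX_MASK;
flags_idx1 |= (__u64)sample_size << 32;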

Getting raw data using libusb

I'm reverse engineering an ultrasound probe on the Linux side. I want to capture raw data from the probe. I'm programming in C and using the libusb API.
There are two BULK IN endpoints in the device (2 and 6). The device sends 2048 bytes of data, but it sends it as four blocks of 512 bytes.
This picture shows the data flow on the Windows side, which I want to reproduce on the Linux side. You can see four data blocks on endpoint 02 and, after that, four data blocks on endpoint 06.
But there is a problem with timing. The first data block of endpoint 02 and the first data block of endpoint 06 are close to each other in time, but in the data flow they are not shown in sequence.
I can see that the computer reads the first data blocks of endpoints 02 and 06, and after that it reads the other three data blocks of endpoint 02 and endpoint 06. But the USB analyzer displays the data flow grouped by endpoint number, so the sequence it shows differs from the actual timing.
On the Linux side, I write code like this:
int index = 0;

imageBuffer2 = (unsigned char *) malloc(2048);
imageBuffer6 = (unsigned char *) malloc(2048);

while (1) {
    libusb_bulk_transfer(devh, BULK_EP_2, imageBuffer2, 2048, &actual2, 0);
    libusb_bulk_transfer(devh, BULK_EP_6, imageBuffer6, 2048, &actual6, 0);

    // Delay
    for (index = 0; index <= 10000000; index++)
    {
    }
}
So the result is as shown in the picture below.
In other words, in my code all of the data is read in sequence, both in time and by endpoint number. My result is different from the data flow on the Windows side.
In brief, I have two BULK IN endpoints, and they start reading data at nearly the same time. How is that possible?
It's not clear to me whether you're using a different method for getting the data on Windows or not; I'm going to assume that you are.
I'm not an expert on libusb by any means, but my guess would be that you are overwriting your data with each call, since you're using the same buffer each time. Try giving your buffer a fixed value before calling the transfer method, and then evaluate the result.
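One way to check that (a sketch, not tested against your device) is to fill the buffer with a known pattern before the transfer and see which bytes actually change:

/* Sketch: pre-fill the buffer with a marker value so that untouched
 * bytes are easy to spot after the transfer. */
memset(imageBuffer2, 0xAA, 2048);
libusb_bulk_transfer(devh, BULK_EP_2, imageBuffer2, 2048, &actual2, 0);
printf("endpoint 2: %d bytes actually transferred\n", actual2);
/* any byte still equal to 0xAA past 'actual2' was never written */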
If that is the case, I believe something along the lines of the following would also work in C:
imageBuffer2 = (unsigned char *) malloc(2048);
unsigned char *imageBuffer2P = imageBuffer2;
imageBuffer6 = (unsigned char *) malloc(2048);
unsigned char *imageBuffer6P = imageBuffer6;

int dataRead2 = 0;
int dataRead6 = 0;

while (dataRead2 < 2048 || dataRead6 < 2048)
{
    int actual2 = 0;
    int actual6 = 0;

    libusb_bulk_transfer(devh, BULK_EP_2, imageBuffer2P, 2048 - dataRead2, &actual2, 200);
    libusb_bulk_transfer(devh, BULK_EP_6, imageBuffer6P, 2048 - dataRead6, &actual6, 200);

    // advance the write positions by however much each endpoint delivered
    dataRead2 += actual2;
    dataRead6 += actual6;
    imageBuffer2P += actual2;
    imageBuffer6P += actual6;

    usleep(1);
}

Send UDP on specific connection in µIP

I'm using uIP on a Tiva C LaunchPad board and want to send UDP packets. But it seems that uip_buf is not filled when I call the uip_udp_periodic function.
The code looks like this:
uint8_t my_udp_buf[] = {0x00, 0xAA, 0xBB, 0xCC};
uint32_t my_udp_buf_len = 4;

void main(void){
    [...]
    uip_ipaddr_t addr;
    struct uip_udp_conn *c;

    uip_ipaddr(&addr, 172,16,23,1);
    c = uip_udp_new(&addr, HTONS(12345)); // setting up a new UDP connection to 172.16.23.1:12345 here
    [...]
    while(42==42){
        uip_udp_conn = c;                      // set the current connection to our udp connection
        uip_appdata = my_udp_buf;              // assign the uip_appdata pointer to our data pointer
        uip_send(uip_appdata, my_udp_buf_len); // sending the data
        [...]
        // call the periodic function for all UDP connections
        for(ui32Temp = 0; ui32Temp < UIP_UDP_CONNS; ui32Temp++)
        {
            uip_udp_periodic(ui32Temp);
            // --> The uip_len is always 0! why?
            //
            // If the above function invocation resulted in data that
            // should be sent out on the network, the global variable
            // uip_len is set to a value > 0.
            //
            if(uip_len > 0)
            {
                uip_arp_out();
                PacketTransmit(EMAC0_BASE, uip_buf, uip_len);
                uip_len = 0;
            }
        }
    }
}
The question is: do I set the connection correctly? In the header file I cannot find any macro or function to control which connection the data is sent on, so I assume I need to set the connection pointer myself. Also, do I need to preserve the data? The uip_appdata pointer is probably overwritten somewhere else afterwards.
It seems like UDP is not well supported in the bare uIP version; you need to do a lot of manual work:
uip_udp_conn = c;                                    // set your connection
uip_slen = len;                                      // set the length of data to send
memcpy(&uip_buf[UIP_LLH_LEN + UIP_IPUDPH_LEN],
       data, len > UIP_BUFSIZE ? UIP_BUFSIZE : len); // copy the payload into the packet buffer
uip_process(UIP_UDP_SEND_CONN);                      // tell uIP to construct the packet
uip_arp_out();                                       // attach the Ethernet header
PacketTransmit(EMAC0_BASE, uip_buf, uip_len);        // send the packet with the Tiva C function
uip_len = 0;                                         // reset length to 0
The Contiki version of uIP has a lot more convenient UDP functionality.
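If you wrap those steps in a helper, it might look roughly like this (a sketch based on the snippet above; my_udp_send is just an illustrative name, PacketTransmit() is the Tiva C driver function, and I haven't run this exact code):

// Sketch: send `len` bytes on UDP connection `c` with bare uIP.
// Assumes the same globals and the Tiva PacketTransmit() used above.
static void my_udp_send(struct uip_udp_conn *c, const uint8_t *data, uint16_t len)
{
    if (len > UIP_BUFSIZE - UIP_LLH_LEN - UIP_IPUDPH_LEN)
        len = UIP_BUFSIZE - UIP_LLH_LEN - UIP_IPUDPH_LEN; // clamp to what fits in uip_buf

    uip_udp_conn = c;                                     // select the connection
    uip_slen = len;                                       // payload length for uip_process()
    memcpy(&uip_buf[UIP_LLH_LEN + UIP_IPUDPH_LEN], data, len);

    uip_process(UIP_UDP_SEND_CONN);                       // build the IP/UDP headers around the payload
    uip_arp_out();                                        // prepend the Ethernet header
    PacketTransmit(EMAC0_BASE, uip_buf, uip_len);         // hand the frame to the EMAC
    uip_len = 0;
}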
When working with UDP and the Tiva, I have found that having a separate function to handle the UDP instances works much better. If you run it all out of your main function, you end up with multiple instances, and that causes instability.

Issue with SPI (Serial Port Comm), stuck on ioctl()

I'm trying to access an SPI sensor using the SPIDEV driver, but my code gets stuck on ioctl().
I'm running embedded Linux on the SAM9X5EK (mounting AT91SAM9G25). The device is connected to SPI0. I enabled CONFIG_SPI_SPIDEV and CONFIG_SPI_ATMEL in menuconfig and added the proper code to the BSP file:
static struct spi_board_info spidev_board_info[] = {
    {
        .modalias     = "spidev",
        .max_speed_hz = 1000000,
        .bus_num      = 0,
        .chip_select  = 0,
        .mode         = SPI_MODE_3,
    },
    ...
};

spi_register_board_info(spidev_board_info, ARRAY_SIZE(spidev_board_info));
1 MHz is the maximum accepted by the sensor; I tried 500 kHz but I get an error during Linux boot (too slow, apparently). .bus_num and .chip_select should be correct (I also tried all the other combinations), and SPI_MODE_3 matches what I checked in the datasheet.
I get no error while booting, and the devices appear correctly as /dev/spidevX.X. I manage to open the file and obtain a valid file descriptor. I'm now trying to access the device with the following code (inspired by examples I found online).
#define MY_SPIDEV_DELAY_USECS 100
// #define MY_SPIDEV_SPEED_HZ 1000000
#define MY_SPIDEV_BITS_PER_WORD 8

int spidevReadRegister(int fd,
                       unsigned int num_out_bytes,
                       unsigned char *out_buffer,
                       unsigned int num_in_bytes,
                       unsigned char *in_buffer)
{
    struct spi_ioc_transfer mesg[2] = { {0}, };
    uint8_t num_tr = 0;
    int ret;

    // Write data
    mesg[0].tx_buf = (unsigned long)out_buffer;
    mesg[0].rx_buf = (unsigned long)NULL;
    mesg[0].len = num_out_bytes;
    // mesg[0].delay_usecs = MY_SPIDEV_DELAY_USECS,
    // mesg[0].speed_hz = MY_SPIDEV_SPEED_HZ;
    mesg[0].bits_per_word = MY_SPIDEV_BITS_PER_WORD;
    mesg[0].cs_change = 0;
    num_tr++;

    // Read data
    mesg[1].tx_buf = (unsigned long)NULL;
    mesg[1].rx_buf = (unsigned long)in_buffer;
    mesg[1].len = num_in_bytes;
    // mesg[1].delay_usecs = MY_SPIDEV_DELAY_USECS,
    // mesg[1].speed_hz = MY_SPIDEV_SPEED_HZ;
    mesg[1].bits_per_word = MY_SPIDEV_BITS_PER_WORD;
    mesg[1].cs_change = 1;
    num_tr++;

    // Do the actual transmission
    if(num_tr > 0)
    {
        ret = ioctl(fd, SPI_IOC_MESSAGE(num_tr), mesg);
        if(ret == -1)
        {
            printf("Error: %d\n", errno);
            return -1;
        }
    }
    return 0;
}
Then I'm using this function:
#define OPTICAL_SENSOR_ADDR "/dev/spidev0.0"
...
int fd;
fd = open(OPTICAL_SENSOR_ADDR, O_RDWR);
if (fd <= 0) {
    printf("Device not found\n");
    exit(1);
}

uint8_t buffer1[1] = {0x3a};
uint8_t buffer2[1] = {0};

spidevReadRegister(fd, 1, buffer1, 1, buffer2);
When I run it, the code gets stuck on the ioctl() call!
I did it this way because, in order to read a register on the sensor, I need to send a byte containing its address and then get the answer back without changing CS (however, when I tried using the write() and read() functions while learning, I got the same result: stuck on them).
I'm aware that specifying .speed_hz causes an ENOPROTOOPT error on Atmel (I checked spidev.c), so I commented that part out.
Why does it get stuck? My thought is that the device node is created but doesn't actually "feel" any hardware. As I wasn't sure whether hardware SPI0 corresponds to bus_num 0 or 1, I tried both, but still no success (by the way, which one is it?).
UPDATE: I managed to get the SPI working! Half of it... MOSI is transmitting the right data, but CLK doesn't start. Any idea?
When I'm working with SPI I always use an oscilloscope to look at the I/O lines. If you have a 4-channel scope you can easily debug the issue and find out whether you're accessing the right pins, using the right speed, and so on. I usually compare the signal I get to the timing diagram in the datasheet.
I think there are several issues here. First of all, SPI is bidirectional, so if you want to send something over the bus you also receive something. Therefore you always have to provide a valid buffer to rx_buf and tx_buf.
Second, all members of the struct spi_ioc_transfer have to be initialized with a valid value. Otherwise they just contain whatever happens to be in memory, and the underlying driver ends up working with arbitrary data, leading to unknown behavior.
Third, why do you use a for loop with ioctl? You already tell ioctl you have an array of spi_ioc_transfer structs, so all defined transfers will be performed with one ioctl call.
Fourth, ioctl needs a pointer to your struct array, so the call should look like this:
ret = ioctl(fd, SPI_IOC_MESSAGE(num_tr), &mesg);
You see there is room for improvement in your code.
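Applying those points, a corrected version of your register read could look roughly like this (a sketch only; I haven't run it on your board, and the dummy scratch buffer is just there so both rx_buf and tx_buf are always valid for small transfers):

/* Sketch: read a register with one ioctl call, all transfer fields initialized. */
int spidevReadRegister(int fd,
                       unsigned int num_out_bytes, unsigned char *out_buffer,
                       unsigned int num_in_bytes,  unsigned char *in_buffer)
{
    struct spi_ioc_transfer mesg[2];
    memset(mesg, 0, sizeof(mesg));           // every member starts at a known value

    unsigned char dummy[16] = {0};           // scratch buffer, assumes transfers of at most 16 bytes

    mesg[0].tx_buf        = (unsigned long)out_buffer;
    mesg[0].rx_buf        = (unsigned long)dummy;      // bytes clocked in while writing
    mesg[0].len           = num_out_bytes;
    mesg[0].bits_per_word = 8;
    mesg[0].cs_change     = 0;               // keep CS asserted between the two halves

    mesg[1].tx_buf        = (unsigned long)dummy;      // dummy bytes clocked out while reading
    mesg[1].rx_buf        = (unsigned long)in_buffer;
    mesg[1].len           = num_in_bytes;
    mesg[1].bits_per_word = 8;
    mesg[1].cs_change     = 1;               // release CS after the last transfer

    if (ioctl(fd, SPI_IOC_MESSAGE(2), mesg) < 0) {
        printf("Error: %d\n", errno);
        return -1;
    }
    return 0;
}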
This is how I do it in a C++ library for the Raspberry Pi. The whole library will soon be on GitHub. I'll update my answer when it is done.
void SPIBus::spiReadWrite(std::vector<std::vector<uint8_t> > &data, uint32_t speed,
                          uint16_t delay, uint8_t bitsPerWord, uint8_t cs_change)
{
    struct spi_ioc_transfer transfer[data.size()];
    memset(transfer, 0, sizeof(transfer));   // make sure every member starts at zero

    int i = 0;
    for (std::vector<uint8_t> &d : data)
    {
        // see <linux/spi/spidev.h> for details!
        transfer[i].tx_buf = reinterpret_cast<__u64>(d.data());
        transfer[i].rx_buf = reinterpret_cast<__u64>(d.data());
        transfer[i].len = d.size(); // number of bytes in vector
        transfer[i].speed_hz = speed;
        transfer[i].delay_usecs = delay;
        transfer[i].bits_per_word = bitsPerWord;
        transfer[i].cs_change = cs_change;
        i++;
    }

    int status = ioctl(this->fileDescriptor, SPI_IOC_MESSAGE(data.size()), &transfer);
    if (status < 0)
    {
        std::string errMessage(strerror(errno));
        throw std::runtime_error("Failed to do full duplex read/write operation "
                                 "on SPI Bus " + this->deviceNode + ". Error message: " +
                                 errMessage);
    }
}
