Managing UART buffer Thingy91 - c

I have a Nordic Thingy:91 device, and I am integrating it with a sensor that works at a 1200 baud rate. It basically sends strings that I need to store in a buffer and parse for usage. However, I am facing a peculiar issue: I get the correct string the first time I receive data, but after that I receive garbage values.
Below is the part of the code:
uint8_t message_buf[100];

void uart_cb(struct device *x)
{
    uart_irq_update(x);
    if (uart_irq_rx_ready(x)) {
        uart_fifo_read(x, message_buf, sizeof(message_buf));
        printk("%s", message_buf);
    }
}

void main()
{
    struct device *uart = device_get_binding(UART_PORT);
    ......
    uart_irq_callback_set(uart, uart_cb);
    ......
}
We think that it might be a problem with how message_buf is managed when the new string arrives, and we wanted to know the correct procedure for managing the buffer. Also, what could be the root cause that I get correct data on the first call but garbage values later on?
Regards,
Adeel.

Looking at the API document:
static int uart_fifo_read(struct device *dev, u8_t *rx_data, const int size)
Read data from FIFO.
This function is expected to be called from UART interrupt handler (ISR), if uart_irq_rx_ready() returns true. Result of calling this function not from an ISR is undefined (hardware-dependent). It’s unspecified whether “RX ready” condition as returned by uart_irq_rx_ready() is level- or edge-triggered. That means that once uart_irq_rx_ready() is detected, uart_fifo_read() must be called until it reads all available data in the FIFO (i.e. until it returns less data than was requested).
Returns: Number of bytes read.
Parameters:
dev: UART device structure.
rx_data: Data container.
size: Container size.
It clearly specifies that the API must be called until the return value is less than the requested number of bytes (size).
Your code:
if (uart_irq_rx_ready(x)) {
    uart_fifo_read(x, message_buf, sizeof(message_buf));
    printk("%s", message_buf);
}
It calls uart_fifo_read() once and then exits the callback uart_cb, which deviates from what is recommended.
You should read from the FIFO until it is empty:

int readBytes = 0;
int ret;

if (uart_irq_rx_ready(x))
{
    do {
        ret = uart_fifo_read(x, &message_buf[readBytes],
                             sizeof(message_buf) - readBytes);
        readBytes += ret;
    } while (ret > 0 && readBytes < (int)sizeof(message_buf));
    printk("Received %d bytes: ", readBytes);
    // printk("%s", message_buf); // Certainly this will read past the buffer if the data is not a NUL-terminated string that fits in message_buf!
}
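As a side note (my addition, not part of the original answer): a likely reason the first message prints correctly and later ones look like garbage is that message_buf is never NUL-terminated, so once a shorter message arrives, printk("%s") keeps printing stale bytes left over from the previous one. A minimal sketch, assuming the sensor sends plain ASCII:

if (readBytes < (int)sizeof(message_buf)) {
    message_buf[readBytes] = '\0';               /* terminate the fresh data */
} else {
    message_buf[sizeof(message_buf) - 1] = '\0'; /* truncate if the buffer filled up */
}
printk("%s\n", message_buf);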

Related

How to properly utilize masks to send index information to perf event output?

According to the documentation for bpf_perf_event_output found here: http://man7.org/linux/man-pages/man7/bpf-helpers.7.html
"The flags are used to indicate the index in map for which the value must be put, masked with BPF_F_INDEX_MASK."
In the following code:
SEC("xdp_sniffer")
int xdp_sniffer_prog(struct xdp_md *ctx)
{
void *data_end = (void *)(long)ctx->data_end;
void *data = (void *)(long)ctx->data;
if (data < data_end) {
/* If we have reached here, that means this
* is a useful packet for us. Pass on-the-wire
* size and our cookie via metadata.
*/
/* If we have reached here, that means this
* is a useful packet for us. Pass on-the-wire
* size and our cookie via metadata.
*/
__u64 flags = BPF_F_INDEX_MASK;
__u16 sample_size;
int ret;
struct S metadata;
metadata.cookie = 0xdead;
metadata.pkt_len = (__u16)(data_end - data);
/* To minimize writes to disk, only
* pass necessary information to userspace;
* that is just the header info.
*/
sample_size = min(metadata.pkt_len, SAMPLE_SIZE);
flags |= (__u64)sample_size << 32;
ret = bpf_perf_event_output(ctx, &my_map, flags,
&metadata, sizeof(metadata));
if (ret)
bpf_printk("perf_event_output failed: %d\n", ret);
}
return XDP_PASS;
}
It works as you would expect and stores the information for the given CPU number.
However, suppose I want all packets to be sent to index 1.
I swap
__u64 flags = BPF_F_INDEX_MASK;
for
__u64 flags = 0x1ULL;
The code compiles correctly and throws no errors, however no packets get saved at all anymore. What am I doing wrong if I want all of the packets to be sent to index 1?
Partial answer: I see no reason why the packets would not be sent to the perf buffer, but I suspect the error is on the user space code (not provided). It could be that you do not “open” the perf event for all CPUs when trying to read from the buffer. Have a look at the man page for perf_event_open(2): check that the combination of values for pid and cpu allows you to read data written for CPU 1.
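For example (a sketch of my own, not code from the question): the user-space loader normally opens one perf event per CPU and stores each fd at the corresponding key of the BPF_MAP_TYPE_PERF_EVENT_ARRAY; whichever index the BPF side encodes in flags, there must be an opened (and polled) event at that key. Something along these lines, assuming map_fd is the fd of my_map (e.g. obtained via libbpf):

#include <linux/perf_event.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <bpf/bpf.h>

/* Open a PERF_COUNT_SW_BPF_OUTPUT event for one CPU and install its fd
 * at the map slot with the same index. */
static int open_perf_event_for_cpu(int map_fd, int cpu)
{
    struct perf_event_attr attr = {
        .size          = sizeof(struct perf_event_attr),
        .type          = PERF_TYPE_SOFTWARE,
        .config        = PERF_COUNT_SW_BPF_OUTPUT,
        .sample_type   = PERF_SAMPLE_RAW,
        .wakeup_events = 1,
    };
    int key = cpu;
    int pfd = syscall(__NR_perf_event_open, &attr, -1 /* pid: any */, cpu, -1, 0);

    if (pfd < 0)
        return -1;
    return bpf_map_update_elem(map_fd, &key, &pfd, BPF_ANY);
}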
As a side note, this:
__u64 flags = BPF_F_INDEX_MASK;
is misleading. The mask should be used to mask the index, not to set its value. BPF_F_CURRENT_CPU should be used instead; the former only happens to work because the two enum members have the same value.
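In code (my sketch, using only the flag names from the kernel UAPI header linux/bpf.h):

__u64 flags;

/* write to the slot of the CPU the program is currently running on */
flags = BPF_F_CURRENT_CPU;

/* or: write to a fixed slot (here index 1), with the mask used as a mask */
flags = 1ULL & BPF_F_INDEX_MASK;

/* the upper 32 bits still carry the number of packet bytes to append */
flags |= (__u64)sample_size << 32;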

Linkage problem with extern variable when compiling?

I'm using MikroC for PIC v7.2, to program a PIC18f67k40.
Within functii.h, I have the following variable declaration:
extern volatile unsigned char byte_count;
Within main.c, the following code:
#include <functii.h>
// ...
volatile unsigned char byte_count = 0;
// ...
void interrupt () {
    if (RC1IF_bit) {
        uart_rx = Uart1_read();
        uart_string[byte_count] = uart_rx;
        byte_count++;
    }
    // ...
}
Then, within command.c, I have the following code:
#include <functii.h>
void how_many_bytes () {
    // ...
    uart1_write(byte_count);
    // ...
}
In main.c, I process data coming through the UART, using an interrupt. Once the end of transmission character is received, I call how_many_bytes(), which sends back the length of the message that was received (plus the data bytes themselves, the code for which I didn't include here, but those are all OK!!).
The problem is that on the uart1_write() call, byte_count is always 0, instead of having been incremented in the interrupt sequence.
Probably you need some synchronization between the interrupt handler and the main processing.
If you do something like this
if (byte_count != 0) {
    uart1_write(byte_count);
    byte_count = 0;
}
the interrupt can occur anywhere, for example:
1. between if (byte_count != 0) and uart1_write(byte_count);,
2. during the processing of uart1_write(byte_count);, which uses a copy of the old value while the value gets changed, or
3. between uart1_write(byte_count); and byte_count = 0;.
With the code above case 1 is no problem but 2 and 3 are. You would lose all characters received after reading byte_count for the function call.
Maybe you can disable/enable interrupts at certain points.
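For example (my sketch; GIE_bit is the usual mikroC name for the PIC18 global interrupt enable bit, so check it against your toolchain), the main code could take a consistent snapshot like this:

unsigned char count_snapshot;

GIE_bit = 0;                  // mask interrupts around the shared access
count_snapshot = byte_count;
byte_count = 0;
GIE_bit = 1;

if (count_snapshot != 0)
    uart1_write(count_snapshot);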
A better solution might be to not reset byte_count outside of interrupt() but instead implement a ring buffer with separate read and write indices. The read index would be modified only by how_many_bytes() (or uart1_write()) and the write index only by interrupt().
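A minimal sketch of that idea (my own illustration; the buffer size, names and helper functions are assumptions, not the poster's code):

#define RX_BUF_SIZE 64

volatile unsigned char rx_buf[RX_BUF_SIZE];
volatile unsigned char wr_idx = 0;   // written only by interrupt()
volatile unsigned char rd_idx = 0;   // written only by the main loop

void interrupt () {
    if (RC1IF_bit) {
        rx_buf[wr_idx] = Uart1_read();
        wr_idx = (wr_idx + 1) % RX_BUF_SIZE;   // note: overruns are not handled here
    }
}

// main-loop side: how many bytes are waiting, and a non-blocking read
unsigned char rx_available() {
    return (unsigned char)((wr_idx + RX_BUF_SIZE - rd_idx) % RX_BUF_SIZE);
}

unsigned char rx_get() {
    unsigned char c = rx_buf[rd_idx];
    rd_idx = (rd_idx + 1) % RX_BUF_SIZE;
    return c;
}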

Reading serial port faster

I have a computer software that sends RGB color codes to Arduino using USB. It works fine when they are sent slowly but when tens of them are sent every second it freaks out. What I think happens is that the Arduino serial buffer fills out so quickly that the processor can't handle it the way I'm reading it.
#define INPUT_SIZE 11

void loop() {
    if (Serial.available()) {
        char input[INPUT_SIZE + 1];
        byte size = Serial.readBytes(input, INPUT_SIZE);
        input[size] = 0;

        int channelNumber = 0;
        char* channel = strtok(input, " ");
        while (channel != 0) {
            color[channelNumber] = atoi(channel);
            channel = strtok(0, " ");
            channelNumber++;
        }
        setColor(color);
    }
}
For example the computer might send 255 0 123 where the numbers are separated by space. This works fine when the sending interval is slow enough or the buffer is always filled with only one color code, for example 255 255 255 which is 11 bytes (INPUT_SIZE). However if a color code is not 11 bytes long and a second code is sent immediately, the code still reads 11 bytes from the serial buffer and starts combining the colors and messes them up. How do I avoid this but keep it as efficient as possible?
It is not a matter of reading the serial port faster, it is a matter of not reading a fixed block of 11 characters when the input data has variable length.
You are telling it to read until 11 characters are received or the timeout occurs, but if the first group is fewer than 11 characters, and a second group follows immediately there will be no timeout, and you will partially read the second group. You seem to understand that, so I am not sure how you conclude that "reading faster" will help.
Using your existing data encoding of ASCII decimal, space-delimited triplets, one solution would be to read the input one character at a time until the entire triplet has been read; however, you could more simply use the Arduino readBytesUntil() function:
#define INPUT_SIZE 3

void loop()
{
    if (Serial.available())
    {
        char rgb_str[3][INPUT_SIZE + 1] = {{0}, {0}, {0}};

        Serial.readBytesUntil(' ', rgb_str[0], INPUT_SIZE);
        Serial.readBytesUntil(' ', rgb_str[1], INPUT_SIZE);
        Serial.readBytesUntil(' ', rgb_str[2], INPUT_SIZE);

        for (int channelNumber = 0; channelNumber < 3; channelNumber++)
        {
            color[channelNumber] = atoi(rgb_str[channelNumber]);
        }
        setColor(color);
    }
}
Note that this solution does not require the somewhat heavyweight strtok() processing since the Stream class has done the delimiting work for you.
However there is a simpler and even more efficient solution. In your solution you are sending ASCII decimal strings and then requiring the Arduino to spend CPU cycles needlessly extracting the fields and converting them to integer values, when you could simply send the byte values directly, leaving the vastly more powerful PC to do any processing needed to pack the data that way. Then the code might be simply:
void loop()
{
    if (Serial.available())
    {
        for (int channelNumber = 0; channelNumber < 3; channelNumber++)
        {
            color[channelNumber] = Serial.read();
        }
        setColor(color);
    }
}
Note that I have not tested any of above code, and the Arduino documentation is lacking in some cases with respect to descriptions of return values for example. You may need to tweak the code somewhat.
Neither of the above solves the synchronisation problem - i.e. when the colour values are streaming, how do you know which byte is the start of an RGB triplet? You have to rely on getting the first field value and maintaining count and sync thereafter - which is fine until perhaps the Arduino is started after the data stream starts, or is reset, or the PC process is terminated and restarted asynchronously. However that was a problem with your original implementation too, so perhaps it is a problem to be dealt with elsewhere.
First of all, I agree with @Thomas Padron-McCarthy. Sending a character string instead of a byte array (11 bytes instead of 3 bytes, plus the parsing work) would simply be a waste of resources. On the other hand, the approach you should follow depends on your sender:
Is it periodic or not
Is it fixed size or not
If it's periodic, you can check for new data once per message period. If not, you need to check the messages before the buffer is full.
If you think printable encoding is somehow not suitable for you: in any case, I would add a checksum to the message. Let's say you have a fixed-size message structure:
struct MyMessage
{
    // unsigned char id;        // id of a message maybe?
    unsigned char colors[3];    // or unsigned char r, g, b; maybe
    unsigned char checksum;     // more than one byte would make a stronger checksum
};

unsigned char calcCheckSum(struct MyMessage msg)
{
    //...
}

unsigned int validateCheckSum(struct MyMessage msg)
{
    //...
    if (valid)
        return 1;
    else
        return 0;
}
Now, you should check every 4 bytes (the size of MyMessage) in a sliding-window fashion to see whether it is a valid message or not:
void findMessages()
{
    char input[INPUT_SIZE];
    struct MyMessage* msg;
    byte size = Serial.readBytes(input, INPUT_SIZE);
    byte msgSize = sizeof(struct MyMessage);

    for (int i = 0; i + msgSize <= size; i++)
    {
        msg = (struct MyMessage*) &input[i];
        if (validateCheckSum(*msg))
        {   // found a message
            processMessage(msg);
        }
        else
        {
            // discard this byte, it's part of a corrupted msg (you are too late to process that one maybe)
        }
    }
}
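The checksum itself could be as simple as the following (my sketch - the original answer leaves the algorithm open):

unsigned char calcCheckSum(struct MyMessage msg)
{
    // XOR of the payload bytes; cheap, and it catches any single corrupted byte
    return msg.colors[0] ^ msg.colors[1] ^ msg.colors[2];
}

unsigned int validateCheckSum(struct MyMessage msg)
{
    return (calcCheckSum(msg) == msg.checksum) ? 1 : 0;
}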
If it's not a fixed size, it gets complicated. But I'm guessing you don't need to hear that for this case.
EDIT (2)
I've struck out this edit following the comments.
One last thing: I would use a circular buffer. First add the received bytes into the buffer, then check the bytes in that buffer.
EDIT (3)
I gave it more thought after the comments. I see the point of printable encoded messages. I guess my bias comes from working in a military company; we don't have printable encoded "fire" arguments here :) There are a lot of messages coming and going all the time, and decoding/encoding printable messages would be a waste of time. Also, we use hardware that usually has very small messages with bitfields. I accept that a printable message can be easier to examine and understand.
Hope it helps,
Gokhan.
If faster is really what you want... this is a little far-fetched.
The fastest way I can think of to meet your needs and provide synchronization is to send one byte per color and change the parity bit in a defined way, assuming you can read the parity and the byte value of a character with wrong parity.
You will have to deal with the changing parity and most of the characters will not be human readable, but it's gotta be one of the fastest ways to send three bytes of data.

Receiving data function stalls when requesting a large chunk of data

I'm creating a mini web server in C.
The following function is supposed to read in data from the client computer.
The objective is to read the second piece of data after the first space. Each piece of incoming data is space separated.
For example, if the incoming data is:
GET /123/456
then I'd expect /123/456.
If the incoming data is:
GET /123/456 789
then I'd still expect /123/456.
This relevant fragment is from an external function that sets up a 10 KB buffer and calls the problematic function:
//nsock is a valid socket handle from an accept() call.
printf("CLIENT CONNECTION!\n");
char req[10000];
long reqsz = 10000;
getreq(req, &reqsz, nsock);
printf("Received %d bytes\n",reqsz);
printf("Data: %s\n",req);
"CLIENT CONNECTION!" appears on the screen, but "Received" does not appear if bufsize inside the function is a high value. If I set it to a low value like 16, or 100, then everything is displayed correctly. Why do large numbers like 5000 not work?
This is the problematic function:
//getreq params in: req=external buffer for data
// reqsz=size of external buffer. I set 10000
// nsock=valid socket pointer from accept()
//
//getreq params out: reqsz=actual size of data returned
// req=actual data
//
void getreq(char* req, unsigned long *reqsz, long nsock)
{
    //bufsize=how many bytes to read at once. High values like 5000 cause a stall.
    //buffer=buffer of data from recv call
    const unsigned long ibs = *reqsz, bufsize = 5000;
    char buffer[ibs], *rp = req;
    //spacect=# of spaces in data read
    //szct=iterator variable
    //mysz=total length of returned data
    //bufct=buffer counter to prevent segfault
    //recvsz=data size returned from recv or
    //       forced -2 if buffer hits capacity
    //       or 2nd space in returned data is found
    unsigned long spacect = 0, szct = 0, mysz = 0, bufct = 0;
    long recvsz = 1;
    char *p = buffer;
    //
    //Expected data: GET /whatever HTTP/x.x but we
    //               want /whatever
    //
    //loop until 2nd space is found or
    //ibs bytes of data have been processed
    while (recvsz > 0 && bufct < ibs) {
        recvsz = recv(nsock, p, bufsize, 0);
        if (recvsz < 1) { break; }
        for (szct = 1; szct <= recvsz; szct++) {
            if (*p == ' ') { spacect++; if (spacect > 2) { spacect = 2; recvsz = -2; break; } }
            if (spacect == 1 && *p != ' ') { mysz++; if (mysz <= *reqsz) { *rp++ = *p; } }
            p++; bufct++; if (bufct > ibs) { recvsz = -2; break; }
        }
    }
    // Process rest of data to try to avoid client errors
    while (recvsz == -2) {
        recvsz = recv(nsock, buffer, bufsize, 0);
    }
    *reqsz = mysz;
}

How to receive an int array from a client (written in C) on a server (written in Python)

I just want to send an array adc_array = [w, x, y, z] from a client to a server. Below is the client-side code; my server is in Python and accepts JSON only. I get no errors when I compile the code, but I do get 2 warnings:
1- warning: pointer targets in passing argument 2 of 'UDPWrite' differ in signedness.
2- warning: no newline at end of file.
But at the server side, I am not able to receive the whole array; instead I just get the first character of the array, i.e. [.
I am new to C programming. I would really appreciate any help.
// Main function
void FlyportTask()
{
    // Flyport connects to default network
    WFConnect(WF_DEFAULT);
    while (WFGetStat() != CONNECTED);
    vTaskDelay(25);
    UARTWrite(1, "Flyport Wi-fi connected...hello world!\r\n");

    BOOL UdpSocketOpenRequest = TRUE;
    BYTE UdpSocket = 0;

    // opening UDP socket
    if (UdpSocketOpenRequest) //open socket
    {
        UdpSocketOpenRequest = FALSE;
        if (UdpSocket != 0) //if this is not equal to zero
        {
            UDPClientClose(UdpSocket);
        }
        UARTWrite(1, "OpenSocket\r\n");
        UdpSocket = UDPClientOpen("10.0.0.106", "8000"); //Client socket opening
    }

    while (1)
    {
        //defining pointer
        int *array_pointer;
        int adc_array[4];
        int j;
        char buf[10]; //buffer to print

        // I have made a separate function to get adc values which returns the pointer to the array.
        array_pointer = get_adcval();
        UARTWrite(1, "ADC Array\r\n");
        for (j = 0; j < 4; j++)
        {
            adc_array[j] = *(array_pointer + j);
            sprintf(buf, "%d", adc_array[j]);
            UARTWrite(1, buf);
            UARTWrite(1, "\n");
        }

        //if UDP socket is open, send the data
        if (UdpSocket != 0)
        {
            // defining pointer of serial_out
            char *s_out;
            int size;

            // creating a JSON array from adc_array with 4 elements
            cJSON *int_array = cJSON_CreateIntArray(adc_array, 4);
            // Serializing the array
            s_out = cJSON_Print(int_array);

            //Writing to the serial output/monitor
            UARTWrite(1, "\r\narray to be sent\r\n");
            UARTWrite(1, s_out);
            UARTWrite(1, "\r\n");
            // Assume adc_array=[1021, 1022, 1023, 1024]
            // I get output [1021, 1022, 1023, 1024]

            //compose message
            size = strlen(s_out);
            UDPWrite(UdpSocket, s_out, size);
            // at the server side, i just receive only first character i.e. [

            /*to free the memory */
            free(s_out);
        }
        //
        // remember to add delay vTaskDelay(50) 50ms
        //remember to close the socket
    }
}
You didn't allocate memory for s_out. Even if it prints the correct result on the UART, it can still be overwritten by any of the UARTWrite functions or by the strlen() function in the next lines. If it is overwritten, then the size variable will get the number of bytes from the first byte up to the first null character in memory (this is how strlen() works), hence the size value can be totally random: it can be 0 or 1 or 1000. If the size is not correct, then you will receive only size bytes; in your case it is possible that size is one. Try printing size before UDPWrite. Fix this problem by adding a malloc call before serializing the array.
If that doesn't work either, then check your receiver side. Does your receiver work fine if you send some dummy data from a tested Python client (or any other tested and reliable client)? If not, then there is some problem with your receiver.
Print out what strlen(s_out) returns, and also print out the return value of UDPWrite (I assume that, like any write function, it returns the size of the data that was actually written to the socket).
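A minimal sketch of that kind of instrumentation (my addition; it assumes UDPWrite returns the number of bytes it accepted, which should be checked against the Flyport documentation) might look like:

char dbg[32];   // separate buffer, since buf[10] is too small for these strings
int len = strlen(s_out);
int written;

sprintf(dbg, "json len=%d\r\n", len);
UARTWrite(1, dbg);

written = UDPWrite(UdpSocket, (BYTE *)s_out, len);   // the cast also silences the signedness warning
sprintf(dbg, "udp sent=%d\r\n", written);
UARTWrite(1, dbg);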
By reading the function names, I presume you are using UDP, which is an unreliable transport.

Resources