Arduino: sending an integer array over UDP - c

I'm new to Arduino and fairly new to programming. I'm trying to send an array of integers over UDP to MaxMSP. Using the .print method in the WiFiUDP library works for sending one integer per packet:
void loop() {
  Udp.beginPacket(hostIP, HOST_PORT);
  Udp.print("start");
  for (int i = 0; i < NUMBER_OF_SENSORS; i++) {
    int adcValue = analogRead(i);
    Udp.print(adcValue);
  }
  Udp.endPacket();
  Udp.flush();
}
The problem is that this is quite slow. I'm getting a refresh rate of about 10 ms per sensor on the Max end, and I'm assuming that by writing all of the integers (only 4 at the moment) into a single buffer and sending that as one packet, I'd be able to roughly quadruple the speed. I've tried this:
void loop() {
  byte sensorBuffer[NUMBER_OF_SENSORS * 2];
  for (int i = 0; i < NUMBER_OF_SENSORS; i++) {
    int adcValue = analogRead(i);
    sensorBuffer[i * 2]     = highByte(adcValue);
    sensorBuffer[i * 2 + 1] = lowByte(adcValue);
  }
  Udp.beginPacket(hostIP, HOST_PORT);
  Udp.write(sensorBuffer, NUMBER_OF_SENSORS * 2);
  Udp.endPacket();
  Udp.flush();
}
This produces garbage on the Max end. I have a vague idea why this is the case - the array is formatted as 7-bit ASCII values? - but I haven't been able to figure out how to get it to work. Any input is much appreciated.
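Worth noting: Udp.print() transmits each value as ASCII text, while Udp.write() transmits raw bytes, so the Max patch has to unpack whichever format actually arrives; the two-bytes-per-value packet above will only look right if the receiver recombines the high and low bytes itself. If the receiving side expects text, a hedged alternative is to build the whole payload into one buffer and write it with a single call (the space delimiter and buffer size here are assumptions, not anything from the original):
void loop() {
  char packet[64];                                   // assumed large enough for 4 readings
  int len = snprintf(packet, sizeof(packet), "start");
  for (int i = 0; i < NUMBER_OF_SENSORS; i++) {
    len += snprintf(packet + len, sizeof(packet) - len, " %d", analogRead(i));
  }
  Udp.beginPacket(hostIP, HOST_PORT);
  Udp.write((const uint8_t *)packet, len);           // one packet, ASCII payload
  Udp.endPacket();
}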

Related

failure in client-server communication in c

I am trying to write a C program for communication between a client and a server. I want my server to randomly generate an array, send it to the client, and get back the sorted array from the client. But when I print the array sent by the server, the client shows only zeros. I guess there is a problem with either the send function in the server or the receive/read function in the client. Here is my code:
Server Side:
1- to create a random array
//int ip[255];
void GenIpArray() // generate random values and store in ip[]
{
    for (int i = 0; i < 255; i++)
    {
        ip[i] = rand() % 100 + 100;
    }
}
2- to send array to the client
void write(LPVOID sock_fd) // int sock_fd = socket(AF_INET, SOCK_STREAM, 0);
{
    while (1)
    {
        send((int)sock_fd, (char *)&ip, 255, 0);
        // let int ip[255] = {123, 109, 240, 150};
    }
}
Client Side:
1- to receive from server
void read(LPVOID sock_fd)
{
    while (1)
    {
        if (recv((unsigned int)sock_fd, (char *)&arr, 255, 0) > 0)
        {
            printf("received: ");
            strcpy((char *)x, (char *)arr);
            printf("%c", (char *)&x); // this statement prints #
            printArray();             // function to print the array
            break;
        }
    }
}
2- function to print the array
void printArray()
{
    printf("\n\n Printing the array:\n");
    for (int k = 0; k < 255; k++)
        printf("\n %d", x[k]);
}
Firstly, you need to send the correct size:
send((int)sock_fd, (char *)ip, 255 * sizeof(int), 0);
Then, you need to recv the correct size, reading it straight into x:
size_t bytes_received = recv((int)sock_fd, (char *)x, 255 * sizeof(int), 0);
Now x contains bytes_received / sizeof(int) numbers:
size_t ints_received = bytes_received / sizeof(int);
So you can use this number to loop and print them:
for (unsigned int k = 0; k < ints_received; k++) {
    printf("%d\n", x[k]);
}
For portability, you should really be converting your ints to network byte order with htonl before sending them, and back to host byte order with ntohl after receiving them (htonl/ntohl handle 32-bit values; htons/ntohs are the 16-bit variants).
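A minimal sketch of that conversion step (the helper names are mine, not from the original; it assumes the values fit in 32 bits, and on Windows the prototypes come from winsock2.h rather than arpa/inet.h):
#include <stdint.h>
#include <winsock2.h>   /* htonl / ntohl; on POSIX use <arpa/inet.h> */

/* Convert an array of 32-bit values to network byte order in place before
   send(), and back to host order after recv(). */
static void to_network_order(uint32_t *v, int n)
{
    for (int i = 0; i < n; i++)
        v[i] = htonl(v[i]);
}

static void to_host_order(uint32_t *v, int n)
{
    for (int i = 0; i < n; i++)
        v[i] = ntohl(v[i]);
}

/* e.g. to_network_order((uint32_t *)ip, 255); before the send() call,
   and to_host_order((uint32_t *)x, 255); after the recv() call */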
The error is simply the use of strcpy! It stops copying at the first null byte, and you are sending ints between 100 and 200, so every value contains zero bytes. At best only the first byte will end up in x, and all the other bytes will keep whatever was there at the beginning (0 for arrays with static storage duration).
Never, ever use strcpy for binary data; use memcpy instead.
And that's not all:
You should always check the return values of send and recv.
printf("%c", (char *)&x); makes no sense: at best it prints the low byte of the address of x, not the array's contents.
If ip and arr are real arrays (not pointers), use sizeof(ip) and sizeof(arr) directly:
send((int)sock_fd, (char *)ip, sizeof(ip), 0);
recv((unsigned int)sock_fd, (char *)arr, sizeof(arr), 0);
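Putting those points together, a rough sketch of the client-side receive might look like this (names follow the question; the loop is there because TCP may deliver the data in pieces):
int total = 0;
int want = (int)sizeof(arr);                              /* 255 * sizeof(int) */
while (total < want)
{
    int r = recv((unsigned int)sock_fd, (char *)arr + total, want - total, 0);
    if (r <= 0)                                           /* always check the result */
        break;
    total += r;
}
memcpy(x, arr, total);                                    /* binary copy, not strcpy */
printArray();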

Can't extract an integer from a thermometer byte reading

Afternoon all,
Apologies if this question is in the wrong format or in the wrong place, if this is the case, please flag and I'll change it or take it elsewhere.
I am using a development board to send a temperature reading to an LCD panel, and I am really struggling to understand why the temperature at the moment the program runs isn't being printed on my LCD. A lot of the code is from a framework given to me and is correct as far as I can tell.
My question stems from these functions:
uch get_temp()
{
    int i;
    DQ_HIGH();
    reset();                         // reset, wait for the 18B20 response
    write_byte(0XCC);                // skip ROM matching
    write_byte(0X44);                // send temperature-convert command
    for (i = 20; i > 0; i--)
    {
        //display();                 // call some display function to allow conversion time
    }
    reset();                         // reset again, wait for the 18B20 response
    write_byte(0XCC);                // skip ROM matching
    write_byte(0XBE);                // send read-temperature command
    TLV = read_byte();               // read temperature low byte
    THV = read_byte();               // read temperature high byte
    DQ_HIGH();                       // release the bus line
    TZ = (TLV >> 4) | (THV << 4) & 0X3f;  // temperature integer part
    TX = TLV << 4;                   // temperature fractional part
    if (TZ > 100)
    {
        TZ/100;
    }                                // do not display the hundreds digit
    ge = TZ % 10;                    // units digit of the integer part
    shi = TZ / 10;                   // tens digit of the integer part
    wd = 0;
    if (TX & 0x80)
        wd = wd + 5000;
    if (TX & 0x40)
        wd = wd + 2500;
    if (TX & 0x20)
        wd = wd + 1250;
    if (TX & 0x10)
        wd = wd + 625;               // the four tests above turn the fraction into BCD
    shifen = wd / 1000;              // tenths digit
    baifen = (wd % 1000) / 100;      // hundredths digit
    qianfen = (wd % 100) / 10;       // thousandths digit
    wanfen = wd % 10;                // ten-thousandths digit
    NOP();
    return TZ;
}
I have modified this function so that it should return the temperature integer (unsigned char TZ)
This function is then called here:
void Init_lcd(void)
{
    ADCON1 = 0x07;           // required setting of analog to digital
    uch Temp;
    TRISD = 0x00;
    TRISA1 = 0;
    TRISA2 = 0;
    TRISA3 = 0;
    writeCommand(0x0f);
    writeCommand(0x38);      // set to two-line mode
    clearDisplay();
    writeString("MAIN MENU");
    Temp = get_temp();
    writeString(Temp);
    writeCommand(0xC0);      // change cursor line
}
It isn't printing anything after "MAIN MENU", which obviously means I'm doing something wrong. I can provide further clarification/code on request.
I should probably mention that I am NOT simply looking for a "paste this in and it'll work" answer. Any feedback that helps me understand my mistake and how to fix it is greatly appreciated.
Thanks in advance!
EDIT:
A few people are asking about my writing functions so for further clarification I'll paste them here:
void writeChar(unsigned char ch)
{
    lcd = ch;
    RS = 1;
    RW = 0;
    E = 1;
    lcdDelay();
    E = 0;
}

void writeString(char *stringToLcd)
{
    while (*stringToLcd > 0)
    {
        writeChar(*stringToLcd++);
    }
}
Temp is an unsigned char
uch Temp;
//...
Temp = get_temp();
writeString(Temp);
So, using writeString() will produce undefined results.
You should use write() instead (depending on the library you're using).
But you probably want to convert the return value of get_temp() to an ASCII string first, and display that using writeString().
Update:
void writeString(char *stringToLcd)
This function needs a char*, so you can't provide a single uch.
You need to convert Temp to a string first, using itoa() for example.
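Note that itoa() is not standard C; where it's unavailable, sprintf() can do the same job. A rough sketch (the buffer name is illustrative, and it assumes the toolchain provides sprintf):
char tempStr[4];                          /* up to three digits plus '\0' */
sprintf(tempStr, "%u", (unsigned)Temp);   /* Temp comes from get_temp() */
writeString(tempStr);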
Alternatively, I would suggest implementing a small function of your own:
void writeUCH(uch value)
{
    unsigned char test = (value >= 100) ? 100 : (value >= 10) ? 10 : 1;
    while (test > 0)
    {
        writeChar((value / test) + '0');
        value = value % test;
        test /= 10;
    }
}
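With that helper, the display call in Init_lcd() would become something along these lines:
Temp = get_temp();
writeUCH(Temp);    /* instead of writeString(Temp) */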
This line:
TZ/100;
will result in no change to TZ. What you really want is this:
TZ = TZ % 100;
Also, the value returned from get_temp() is an integer, not an ASCII string. I would expect the LCD to need ASCII characters, not the binary value of the bytes of an int variable.

Getting raw data using libusb

I'm doing some reverse engineering of an ultrasound probe on the Linux side. I want to capture raw data from the probe. I'm programming in C and using the libusb API.
There are two BULK IN endpoints on the device (2 and 6). The device sends 2048 bytes of data, but it sends them as four blocks of 512 bytes.
This picture shows the data flow on the Windows side, which I want to reproduce on the Linux side. You can see four data blocks on endpoint 02 and after that four data blocks on endpoint 06.
But there is a problem with the timing. The first data block of endpoint 02 and the first data block of endpoint 06 are close to each other in time, yet in the captured data flow they are not shown in sequence.
I can see that the computer reads the first data block of endpoint 02 and of endpoint 06, and after that reads the remaining three data blocks of each. But the USB analyzer displays the data flow grouped by endpoint number, so the order shown differs from the actual timing.
On the Linux side, I write code like this:
int index = 0;
imageBuffer2 = (unsigned char *) malloc(2048);
imageBuffer6 = (unsigned char *) malloc(2048);

while (1) {
    libusb_bulk_transfer(devh, BULK_EP_2, imageBuffer2, 2048, &actual2, 0);
    libusb_bulk_transfer(devh, BULK_EP_6, imageBuffer6, 2048, &actual6, 0);

    // Delay
    for (index = 0; index <= 10000000; index++)
    {
    }
}
The result is shown in the picture below.
In other words, in my code all of the data is read in sequence, in order of both time and endpoint number. My result is different from the data flow on the Windows side.
In brief, I have two BULK IN endpoints, and they start reading data at almost the same time. How is that possible?
It's not clear to me whether you're using a different method for getting the data on Windows or not; I'm going to assume that you are.
I'm not an expert on libusb by any means, but my guess would be that you are overwriting your data with each call, since you're using the same buffer each time. Try giving your buffer a fixed value before using the transfer method, and then evaluate the result.
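For instance, a quick check along those lines might look like this (0xAA is an arbitrary sentinel; the variable names follow the question):
/* Fill both buffers with a known sentinel before the transfers, so any
   bytes the device did not write are easy to spot afterwards. */
memset(imageBuffer2, 0xAA, 2048);
memset(imageBuffer6, 0xAA, 2048);
libusb_bulk_transfer(devh, BULK_EP_2, imageBuffer2, 2048, &actual2, 0);
libusb_bulk_transfer(devh, BULK_EP_6, imageBuffer6, 2048, &actual6, 0);
/* bytes still equal to 0xAA after the calls were not filled by the device */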
If it is the case, I believe something along the lines of the following would also work in C:
imageBuffer2 = (unsigned char *) malloc(2048);
unsigned char *imageBuffer2P = imageBuffer2;   // keep unsigned char * to match libusb_bulk_transfer
imageBuffer6 = (unsigned char *) malloc(2048);
unsigned char *imageBuffer6P = imageBuffer6;
int dataRead2 = 0;
int dataRead6 = 0;

while (dataRead2 < 2048 || dataRead6 < 2048)
{
    int actual2 = 0;
    int actual6 = 0;
    libusb_bulk_transfer(devh, BULK_EP_2, imageBuffer2P, 2048 - dataRead2, &actual2, 200);
    libusb_bulk_transfer(devh, BULK_EP_6, imageBuffer6P, 2048 - dataRead6, &actual6, 200);
    dataRead2 += actual2;          // advance by however much each endpoint delivered
    dataRead6 += actual6;
    imageBuffer2P += actual2;
    imageBuffer6P += actual6;
    usleep(1);
}

Calculating the delay between write and read on I2C in Linux

I am currently working with I2C on Arch Linux ARM and I'm not quite sure how to calculate the absolute minimum delay required between a write and a read. Without this delay the read naturally does not come through. I have simply put usleep(1000) between the two calls, which works, but that value was found empirically and has to be optimized to the real value (somehow). But how?
Here is a code sample of the write_and_read function I am using:
int write_and_read(int handler, char *buffer, const int bytesToWrite, const int bytesToRead) {
    write(handler, buffer, bytesToWrite);
    usleep(1000);
    int r = read(handler, buffer, bytesToRead);
    if (r != bytesToRead) {
        return -1;
    }
    return 0;
}
Normally there's no need to wait. If your writing and reading functions are somehow threaded in the background (why would you do that???) then synchronizing them is mandatory.
I2C is a very simple linear communication, and all the devices I have used were able to produce their output data within microseconds.
Are you using 100 kHz, 400 kHz or 1 MHz I2C?
Edited:
After some discussion, I suggest you try this:
void dataRequest() {
    Wire.write(0x76);
    x = 0;
}

void dataReceive(int numBytes)
{
    x = numBytes;
    for (int i = 0; i < numBytes; i++) {
        Wire.read();
    }
}
Here x is a global variable declared in the header and set to 0 in setup(). You can add a simple check to the main loop, e.g. if x > 0, send something with Serial.print() as a debug message, then reset x to 0.
This way you are not blocking the I2C operation with the serial traffic.
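A minimal sketch of that loop-side check, under the assumptions above (x would normally be declared volatile, since it is written from the Wire callbacks):
volatile int x = 0;   // shared with the Wire handlers above

void loop() {
    if (x > 0) {
        Serial.print("received bytes: ");   // debug output outside the I2C handlers
        Serial.println(x);
        x = 0;                              // reset the flag for the next transfer
    }
}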

Displaying 2D arrays of pixel values in a console application

I'm continuously sending 2D arrays of pixel values (uint32) from LabVIEW to a C program over TCP/IP, at a resolution of 160x120. The purpose of the C program is to display the received pixel values as 2D arrays in the console application. I'm sending the pixels as a stream of bytes and using the recv function in Ws2_32.lib to receive them in the C program. Then I convert the bytes to uint32 values and display them in the console using a 2D array, so every 2D array represents an image.
I have an issue with the frame rate, though. I can send 30 frames per second from LabVIEW, but when I open the TCP/IP connection with the C program, the frame rate drops to 1 frame per second. It must be an issue with the C program, since I managed to send the desired frame rate from the same LabVIEW program to a corresponding C# program.
The C-code:
#define DEFAULT_BUFLEN 256
#define IMAGEX 120
#define IMAGEY 160

WSADATA wsa;
SOCKET s, new_socket;
struct sockaddr_in server, client;
int c;
int iResult;
char recvbuf[DEFAULT_BUFLEN];
int recvbuflen = DEFAULT_BUFLEN;
typedef unsigned int uint32_t;
unsigned int x = 0, y = 0, i, n;
uint32_t image[IMAGEX][IMAGEY];
size_t len;
uint32_t *p;

p = (uint32_t *)recvbuf;

do
{
    iResult = recv(new_socket, recvbuf, recvbuflen, 0);
    len = iResult / sizeof(uint32_t);
    for (i = 0; i < len; i++)
    {
        image[x][y] = p[i];
        x++;
        if (x >= IMAGEX)
        {
            x = 0;
            y++;
        }
        if (y >= IMAGEY)
        {
            y = 0;
            x = 0;
            // print image
            for (n = 0; n < IMAGEX * IMAGEY; n++)
            {
                printf("%d", image[n % IMAGEX][n / IMAGEY]);
                if (n % IMAGEX)
                {
                    printf(" ");
                }
                else
                {
                    printf("\n");
                }
            }
        }
    }
} while (iResult > 0);
Try reducing the prints. Since you are reading and printing in the same thread, data will pile up in the TCP connection, which will then back-pressure the other end (LabVIEW), and LabVIEW will stop sending data until it gets the green light from the other end (your C program).
To start with, you can debug by replacing this
for (n = 0; n < IMAGEX * IMAGEY; n++)
{
    printf("%d", image[n % IMAGEX][n / IMAGEY]);
    if (n % IMAGEX)
    {
        printf(" ");
    }
    else
    {
        printf("\n");
    }
}
with
printf("One frame recv\n");
and see if it makes any difference. I am assuming your TCP connection has ample bandwidth.
Very hard to diagnose without further information. I can give a few suggestions, however.
First of all, your recv call uses a small buffer, so you are spending a lot of time calling it. Why not read a whole frame at a time? Also, you read the data into recvbuf and then copy it into the image array. Wouldn't it be simpler to just use the image array directly? Combining those two suggestions would have recv reading a full frame directly into the image array, saving a lot of time.
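A sketch of that combined suggestion, reusing the names from the question (error handling kept minimal):
size_t frameBytes = IMAGEX * IMAGEY * sizeof(uint32_t);
char *dst = (char *)image;
size_t got = 0;
while (got < frameBytes)
{
    int r = recv(new_socket, dst + got, (int)(frameBytes - got), 0);
    if (r <= 0)                 /* connection closed or error */
        break;
    got += r;                   /* TCP may deliver the frame in pieces */
}
/* image[][] now holds one full frame, with no intermediate copy */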
Another source of the problem could be the console. With the sample code you provided, you are attempting to write 30*120*160 = 576,000 integer values per second to the terminal. If the average value, with delimiter, takes up 8 characters, that's over 4.5 million characters per second. It's entirely possible that the display just can't go that fast, in which case things would back up and slow down all the way to the server writing to the socket.
There are several ways to handle this, but it's too much to go into here.
