Need to provide a timeout for error handling using C

I am developing code to communicate between two controller boards. One board sends a 9-byte message to the other. I need to define error handling on the receiver side so that it waits for the 9-byte value until a timeout occurs. If the timeout is reached, control should return to the first line of the function.
Currently I have defined one line like
while (/*wait_loop_cnt++<= MAX_WAIT_LOOP &&*/ counter < length);
in my code, but it will stay in that loop forever if it doesn't receive 9 bytes.
Please help, thank you.

Try this:
const int length = 9;
int counter = 0;
int wait_loop_cnt = 0;

while (wait_loop_cnt++ <= MAX_WAIT_LOOP &&
       counter < length)          /* NO semicolon here! */
{
    if (read_byte_successfully(...))
    {
        ++counter;
    }
}

if (counter < length)
{
    /* Handle the case of too few bytes received here. */
}
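If you also want control to restart from the top of the function after a timeout, as the question asks, one way is to wrap the receive logic in a retry loop. A minimal sketch, assuming a hypothetical read_byte_successfully() helper that stores one byte, and arbitrary retry/wait limits:

#define MAX_WAIT_LOOP 100000L
#define MSG_LENGTH    9
#define MAX_RETRIES   3

/* Sketch only: read_byte_successfully() is an assumed helper;
 * adapt it to your actual receive routine. */
int receive_message(unsigned char msg[MSG_LENGTH])
{
    int retry;
    for (retry = 0; retry < MAX_RETRIES; retry++)
    {
        int counter = 0;
        long wait_loop_cnt = 0;

        while (wait_loop_cnt++ <= MAX_WAIT_LOOP && counter < MSG_LENGTH)
        {
            if (read_byte_successfully(&msg[counter]))
            {
                ++counter;
            }
        }

        if (counter == MSG_LENGTH)
        {
            return 0;   /* complete message received */
        }
        /* timeout: fall through and start over from the top */
    }
    return -1;          /* gave up after MAX_RETRIES timeouts */
}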

Related

Morse code sound effect using Beep() function

I would like to record the speaker output of my Morse code. The code is written in C and uses the Beep() function to play the sound. I am using Audacity to record the speaker output. Here is the code:
/* Send character in Morse code */
void MCSendChar(unsigned char ch)
{
    unsigned char a = asciiToMC(ch);
    unsigned char n = a & 0x07, j;

    for (j = 0; j < n; j++) {
        if ((0x80 & a) != 0)
            Beep(500, 40);
        else
            Beep(500, 120);
        a = a << 1;
        // Inter-symbol spacing
        Beep(0, 40);
    }
    // Inter-character space
    Beep(0, 120);
}
I am confused about setting the frequency and the WPM. Also, when I tried to decode the recorded file using an online web decoder, the results were not correct at all. Any ideas?
Assuming Beep() is the Win32 function, from MSDN on the dwFreq parameter:
The frequency of the sound, in hertz. This parameter must be in the range 37 through 32,767 (0x25 through 0x7FFF).
Since 0 is outside that range, Beep(0, ...) will not produce a silent pause. You must implement the pauses by some other means than Beep(), for example using Sleep().
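A minimal sketch of the timing loop with Sleep() used for the gaps. This follows standard Morse ratios (dash = 3 dots, inter-symbol gap = 1 dot, inter-character gap = 3 dots); with the PARIS convention, dot length in ms is roughly 1200 / WPM, so 60 ms is about 20 WPM. It assumes the question's asciiToMC() encoding, and it assumes a set bit means a dash; adjust to match your encoding.

#include <windows.h>

#define DOT_MS 60   /* roughly 20 WPM: dot ms = 1200 / WPM */

void MCSendChar(unsigned char ch)
{
    unsigned char a = asciiToMC(ch);   /* assumed from the question */
    unsigned char n = a & 0x07, j;

    for (j = 0; j < n; j++) {
        if ((0x80 & a) != 0)
            Beep(700, 3 * DOT_MS);     /* dash (assumes set bit = dash) */
        else
            Beep(700, DOT_MS);         /* dot */
        a = a << 1;
        Sleep(DOT_MS);                 /* inter-symbol gap: silence, not Beep(0,...) */
    }
    Sleep(3 * DOT_MS);                 /* inter-character gap */
}

Getting the ratios and gaps right matters: online decoders infer the WPM from the dot length, so inconsistent gap timing is a likely cause of the garbled decoding.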

Array values reset in loop

So I am using poll() to read a couple of GPIO pins. I then compare the previously read values to the newly read ones to see if they have changed. The reading of the values works just fine. My problem seems to be in the loop: as the output shows, buffers looks as if it is reset at the beginning of each iteration. Why is this happening?
Note: In case anyone is wondering why I don't just use poll() as an interrupt with a delay of -1, it is because of a hardware issue that makes it unsupported.
Code
static const int num_buttons = 2;

void *routine(){
    struct pollfd pfd[num_buttons];
    int fds[num_buttons];
    const char gpioValLocations[num_buttons][256];
    int i;

    for (i = 0; i < num_buttons; i++){
        sprintf(gpioValLocations[i], "/sys/class/gpio/gpio%d/value", gpios[i]);
    }

    char buffers[num_buttons][2];
    char prev_buffers[num_buttons][2];

    for (i = 0; i < num_buttons; i++){
        if ((fds[i] = open(gpioValLocations[i], O_RDONLY)) < 0) {
            LOGD("failed on 1st open");
            exit(1);
        }
        pfd[i].fd = fds[i];
        pfd[i].events = POLLIN;
        lseek(fds[i], 0, SEEK_SET);
        read(fds[i], buffers[i], sizeof buffers[i]);
    }

    for (;;) {
        LOGD("at top: prev:%d%d buff:%d%d",
             atoi(prev_buffers[0]), atoi(prev_buffers[1]),
             atoi(buffers[0]), atoi(buffers[1]));
        poll(pfd, num_buttons, 1);
        for (i = 0; i < num_buttons; i++) {
            if ((pfd[i].revents & POLLIN)) {
                /* copy current values, to compare against the next read */
                strcpy(prev_buffers[i], buffers[i]);
                LOGD("in loop: prev:%d%d buff:%d%d",
                     atoi(prev_buffers[0]), atoi(prev_buffers[1]),
                     atoi(buffers[0]), atoi(buffers[1]));
                /* read new values */
                lseek(fds[i], 0, SEEK_SET);
                read(fds[i], buffers[i], sizeof buffers[i]);
                /* compare new to previous */
                if (atoi(prev_buffers[i]) != atoi(buffers[i])) {
                    // LOGD("change detected");
                }
            }
        }
    }
}
Output
at top: prev:00 buff:01
in loop: prev:01 buff:00
in loop: prev:00 buff:00
at top: prev:00 buff:01
in loop: prev:01 buff:00
in loop: prev:00 buff:00
This line seems risky:
strcpy(prev_buffers[i], buffers[i]);
prev_buffers[i] is only 2 bytes long. If buffers[i] holds more than one character plus the terminator, you have a buffer overflow and invoke undefined behavior.
Furthermore, you must initialize buffers: it currently starts with random junk, not single-character strings, so using strcpy() to save the previous values invokes undefined behavior. Using memcpy() to save the previous contents is less risky: it copies exactly 2 bytes instead of scanning for a null terminator, and that scan can read past the end of buffers[i] and write past the end of prev_buffers[i].
Are you sure that read(fds[i], buffers[i], sizeof buffers[i]); reads an ASCII digit and a null terminator? If not, the code is incorrect.
Post a complete function that compiles; there may be more problems.
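A minimal sketch of the two fixes suggested above, as fragments to drop into the question's function: zero-initialize both buffers, copy a fixed size with memcpy(), and force termination after read() so atoi() always sees a valid string. It assumes, as in the question, that each value file yields one ASCII digit.

char buffers[num_buttons][2]      = { { 0 } };  /* start defined, not junk */
char prev_buffers[num_buttons][2] = { { 0 } };

/* inside the loop, instead of strcpy(): */
memcpy(prev_buffers[i], buffers[i], sizeof prev_buffers[i]);  /* exactly 2 bytes */

/* and after lseek(), read one byte and terminate the string yourself */
ssize_t n = read(fds[i], buffers[i], sizeof buffers[i] - 1);
buffers[i][n > 0 ? n : 0] = '\0';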

Calculating the delay between write and read on I2C in Linux

I am currently working with I2C on Arch Linux ARM and am not quite sure how to calculate the absolute minimum delay required between a write and a read. If I don't have this delay, the read naturally does not come through. I have just applied usleep(1000) between the two commands, which works, but it was chosen empirically and has to be optimized to the real value (somehow). But how?
Here is the code sample for the write_and_read function I am using:
int write_and_read(int handler, char *buffer, const int bytesToWrite, const int bytesToRead) {
    write(handler, buffer, bytesToWrite);
    usleep(1000);
    int r = read(handler, buffer, bytesToRead);
    if (r != bytesToRead) {
        return -1;
    }
    return 0;
}
Normally there's no need to wait. If your writing and reading functions are somehow threaded in the background (why would you do that???), then synchronizing them is mandatory.
I2C is a very simple linear communication, and all the devices I have used were able to produce the output data within microseconds.
Are you using 100 kHz, 400 kHz or 1 MHz I2C?
Edit:
After some discussion, I suggest you try this:
void dataRequest() {
    Wire.write(0x76);
    x = 0;
}

void dataReceive(int numBytes)
{
    x = numBytes;
    for (int i = 0; i < numBytes; i++) {
        Wire.read();
    }
}
Here x is a global variable defined in the header and set to 0 in setup(). You may try adding a simple condition to the main loop, e.g. if x > 0, send something with Serial.print() as a debug message, then reset x to 0.
This way you are not blocking the I2C operation with the serial traffic.
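On the Linux side, if you'd rather not hard-code the delay at all, an alternative (my sketch, not part of the answer above) is to retry the read with a short back-off until the device answers, adapted from the question's function. MAX_TRIES and the 100 microsecond step are arbitrary choices:

#include <unistd.h>

#define MAX_TRIES 20

int write_and_read(int handler, char *buffer, const int bytesToWrite, const int bytesToRead) {
    int tries;

    if (write(handler, buffer, bytesToWrite) != bytesToWrite) {
        return -1;
    }
    for (tries = 0; tries < MAX_TRIES; tries++) {
        if (read(handler, buffer, bytesToRead) == bytesToRead) {
            return 0;       /* device answered */
        }
        usleep(100);        /* device not ready yet; wait a little and retry */
    }
    return -1;              /* gave up */
}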

Efficiently find a sequence within a buffer

So I have a buffer that I am filling with a frame that is variable-sized, with a maximum of 1200 bytes. I know the frame is complete when I get a tail sequence that is always the same and doesn't occur otherwise. So I am trying to find how to most efficiently detect that tail sequence. This is embedded, so ideally the fewer function calls and data structures I use, the better.
Here is what I have thus far:
// I am reading off of a circular buffer, so this checks that I still have unread bytes
while (cbuf_last_written_index != cbuf_last_read_index) {
    buffer[frame_size] = circular_buffer[cbuf_last_read_index];
    // this function does exactly what it says and just maintains circular buffer correctness
    increment_cbuf_read_index_count();
    frame_size++;

    // TODO need to make this more efficient
    int i;
    uint8_t sync_test_array[TAIL_SYNC_LENGTH] = {0};
    // this just makes sure I have enough in the frame to even bother checking the tail seq
    if (frame_size > TAIL_SYNC_LENGTH) {
        for (i = 0; i < TAIL_SYNC_LENGTH; i++) {
            // sets the test array equal to the last TAIL_SYNC_LENGTH elements of the buffer
            sync_test_array[i] = buffer[(frame_size - TAIL_SYNC_LENGTH) + i];
        }
        // arrays can't be compared with ==, so memcmp() is used here
        if (memcmp(sync_test_array, tail_sequence_array, TAIL_SYNC_LENGTH) == 0) {
            // I will toggle a pin here to notify that the frame is complete
            // get out of the while loop because the following bytes are part of the next frame
            break;
        }
    }
    // end efficiency-needed area
}
So basically for each new byte that is added to the frame I am checking the last x bytes (will probably actually be ~8) to see if they are the tail sequence. Can you think of a better way to do this?
Implement it as a state machine.
If your tail sequence is 1, 2, 5, the pseudocode would be:
switch (current_state) {
    IDLE:     next_state = (new_byte == 1) ? ONE_SEEN  : IDLE
    ONE_SEEN: next_state = (new_byte == 2) ? TWO_SEEN  : IDLE
    TWO_SEEN: next_state = (new_byte == 5) ? TERMINATE : IDLE
}

Displaying 2D arrays of pixel values in a console application

I'm continuously sending 2D arrays of pixel values (uint32) from LabVIEW to a C program through TCP/IP at a resolution of 160x120. The purpose of the C program is to display the received pixel values as 2D arrays in the console application. I'm sending the pixels as a stream of bytes and using the recv function in Ws2_32.lib to receive the bytes in the C program. Then I'm converting the bytes to uint32 values and displaying them in the console application using 2D arrays, so every 2D array represents an image.
I have an issue with the frame rate, though. I'm able to send 30 frames per second in LabVIEW, but when I open the TCP/IP connection with the C program, the frame rate goes down to 1 frame per second. It must be an issue with the C program, since I managed to send the desired frames per second with the same LabVIEW program to a corresponding C# program.
The C-code:
#define DEFAULT_BUFLEN 256
#define IMAGEX 120
#define IMAGEY 160

WSADATA wsa;
SOCKET s, new_socket;
struct sockaddr_in server, client;
int c;
int iResult;
char recvbuf[DEFAULT_BUFLEN];
int recvbuflen = DEFAULT_BUFLEN;

typedef unsigned int uint32_t;

unsigned int x = 0, y = 0, i, n;
uint32_t image[IMAGEX][IMAGEY];
size_t len;
uint32_t *p;

p = (uint32_t *)recvbuf;

do
{
    iResult = recv(new_socket, recvbuf, recvbuflen, 0);
    len = iResult / sizeof(uint32_t);
    for (i = 0; i < len; i++)
    {
        image[x][y] = p[i];
        x++;
        if (x >= IMAGEX)
        {
            x = 0;
            y++;
        }
        if (y >= IMAGEY)
        {
            y = 0;
            x = 0;
            // print image
            for (n = 0; n < IMAGEX * IMAGEY; n++)
            {
                printf("%d", image[n % IMAGEX][n / IMAGEY]);
                if (n % IMAGEX)
                {
                    printf(" ");
                }
                else
                {
                    printf("\n");
                }
            }
        }
    }
} while (iResult > 0);
Try reducing the prints. Since you are reading and printing in the same thread, the data in the TCP connection will fill up, which back-pressures the other end (LabVIEW), and LabVIEW will stop sending data until it gets the green signal from the other end (your C program).
To start with, you can debug by replacing this
for (n = 0; n < IMAGEX * IMAGEY; n++)
{
    printf("%d", image[n % IMAGEX][n / IMAGEY]);
    if (n % IMAGEX)
    {
        printf(" ");
    }
    else
    {
        printf("\n");
    }
}
with
printf("One frame recv\n");
and see if it makes any difference. I am assuming your TCP connection has ample bandwidth.
Very hard to diagnose without further information. I can give a few suggestions, however.
First of all, your recv call uses a small buffer, so you are spending a lot of time calling it. Why not read a whole frame at a time? Also, you read in the data and then copy it to the image array. Wouldn't it be simpler to just use the image array itself? Combining those two suggestions would have recv reading a full frame directly into the image array, saving a lot of time.
Another source of the problem could be the console. With the sample code you provided, you are attempting to write 30*120*160 = 576,000 integer values per second to the terminal. If the average value, with delimiter, takes up 8 characters, that's about 4.6 million characters per second. It's entirely possible that the display just can't go that fast, in which case things would back up and slow down all the way to the server writing to the socket.
There are several ways to handle this, but it's too much to go into here.
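A sketch of the first suggestion: read a whole frame directly into the image array, looping because recv() may return fewer bytes than requested. It assumes the question's Winsock setup and the same IMAGEX/IMAGEY definitions:

/* Sketch only: receive one complete frame straight into 'image',
 * avoiding the small intermediate buffer and the extra copy. */
static int recv_frame(SOCKET sock, uint32_t image[IMAGEX][IMAGEY])
{
    char *dst = (char *)image;
    int  remaining = IMAGEX * IMAGEY * (int)sizeof(uint32_t);

    while (remaining > 0)
    {
        int got = recv(sock, dst, remaining, 0);
        if (got <= 0)
        {
            return -1;   /* connection closed or error */
        }
        dst += got;
        remaining -= got;
    }
    return 0;            /* full frame received */
}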
