I have computer software that sends RGB color codes to an Arduino over USB. It works fine when they are sent slowly, but when tens of them are sent every second it freaks out. What I think happens is that the Arduino serial buffer fills up so quickly that the processor can't keep up with the way I'm reading it.
#define INPUT_SIZE 11

void loop() {
    if (Serial.available()) {
        char input[INPUT_SIZE + 1];
        byte size = Serial.readBytes(input, INPUT_SIZE);
        input[size] = 0;

        int channelNumber = 0;
        char* channel = strtok(input, " ");
        while (channel != 0) {
            color[channelNumber] = atoi(channel);
            channel = strtok(0, " ");
            channelNumber++;
        }
        setColor(color);
    }
}
For example the computer might send 255 0 123 where the numbers are separated by space. This works fine when the sending interval is slow enough or the buffer is always filled with only one color code, for example 255 255 255 which is 11 bytes (INPUT_SIZE). However if a color code is not 11 bytes long and a second code is sent immediately, the code still reads 11 bytes from the serial buffer and starts combining the colors and messes them up. How do I avoid this but keep it as efficient as possible?
It is not a matter of reading the serial port faster; it is a matter of not reading a fixed block of 11 characters when the input data has variable length.
You are telling it to read until 11 characters are received or the timeout occurs, but if the first group is fewer than 11 characters and a second group follows immediately, there will be no timeout and you will partially read the second group. You seem to understand that, so I am not sure how you conclude that "reading faster" will help.
Using your existing data encoding of ASCII decimal, space-delimited triplets, one solution would be to read the input one character at a time until the entire triplet has been read; however, you could more simply use the Arduino readBytesUntil() function:
#define INPUT_SIZE 3

void loop()
{
    if (Serial.available())
    {
        char rgb_str[3][INPUT_SIZE + 1] = {{0}, {0}, {0}};

        // The delimiter is a single character, ' ', not the string " "
        Serial.readBytesUntil(' ', rgb_str[0], INPUT_SIZE);
        Serial.readBytesUntil(' ', rgb_str[1], INPUT_SIZE);
        Serial.readBytesUntil(' ', rgb_str[2], INPUT_SIZE);

        for (int channelNumber = 0; channelNumber < 3; channelNumber++)
        {
            color[channelNumber] = atoi(rgb_str[channelNumber]);
        }
        setColor(color);
    }
}
Note that this solution does not require the somewhat heavyweight strtok() processing since the Stream class has done the delimiting work for you.
However, there is a simpler and even more efficient solution. In your scheme you send ASCII decimal strings and then require the Arduino to spend CPU cycles needlessly extracting the fields and converting them to integer values, when you could simply send the three byte values directly, leaving the vastly more powerful PC to do whatever processing is necessary to pack the data that way. Then the code might simply be:
void loop()
{
    if (Serial.available() >= 3)   // wait until a whole triplet has arrived
    {
        for (int channelNumber = 0; channelNumber < 3; channelNumber++)
        {
            color[channelNumber] = Serial.read();
        }
        setColor(color);
    }
}
Note that I have not tested any of above code, and the Arduino documentation is lacking in some cases with respect to descriptions of return values for example. You may need to tweak the code somewhat.
Neither of the above solve the synchronisation problem - i.e. when the colour values are streaming, how do you know which is the start of an RGB triplet? You have to rely on getting the first field value and maintaining count and sync thereafter - which is fine until perhaps the Arduino is started after data stream starts, or is reset, or the PC process is terminated and restarted asynchronously. However that was a problem too with your original implementation, so perhaps a problem to be dealt with elsewhere.
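For what it's worth, one common way to regain sync (a sketch only, not part of either implementation above) is to reserve one byte value as a frame marker: have the PC send 0xFF before each triplet and clamp channel values to 0..254, then discard bytes until a marker is seen. Reusing the question's color[] and setColor():

// Sketch: assumes the PC sends 0xFF, R, G, B with channel values
// limited to 0..254 so 0xFF can never occur inside a triplet.
void loop()
{
    if (Serial.available() >= 4)
    {
        if (Serial.read() != 0xFF)   // not a start marker:
            return;                  // drop one byte and resync on a later pass
        for (int channelNumber = 0; channelNumber < 3; channelNumber++)
        {
            color[channelNumber] = Serial.read();
        }
        setColor(color);
    }
}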
First of all, I agree with @Thomas Padron-McCarthy: sending a character string instead of a byte array (11 bytes instead of 3, plus the parsing work) would simply be a waste of resources. On the other hand, the approach you should follow depends on your sender:
Is it periodic or not?
Is it fixed size or not?
If it's periodic, you can check the buffer once per message period. If not, you need to check for messages before the buffer fills up.
If you decide printable encoding is somehow not suitable for you, then in any case I would add a checksum to the message. Let's say you have a fixed-size message structure:
struct MyMessage
{
    // unsigned char id;      // id of a message, maybe?
    unsigned char colors[3];  // or unsigned char r, g, b;
    unsigned char checksum;   // more than one byte would make a stronger checksum
};

unsigned char calcCheckSum(const struct MyMessage* msg)
{
    //...
}

unsigned int validateCheckSum(const struct MyMessage* msg)
{
    //...
    if (valid)
        return 1;
    else
        return 0;
}
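The checksum bodies are left open above; just as an illustration (my assumption, not part of the original answer), a minimal XOR checksum over the three color bytes could look like this. Anything stronger, such as a CRC-8, would work the same way:

unsigned char calcCheckSum(const struct MyMessage* msg)
{
    // XOR of the payload bytes; purely a placeholder checksum
    return msg->colors[0] ^ msg->colors[1] ^ msg->colors[2];
}

unsigned int validateCheckSum(const struct MyMessage* msg)
{
    return (calcCheckSum(msg) == msg->checksum) ? 1 : 0;
}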
Now, you should check every 4 bytes (the size of MyMessage) in a sliding-window fashion to see whether they form a valid message:
void findMessages()
{
    struct MyMessage* msg;
    // 'input' and INPUT_SIZE are the receive buffer and its size, as in the question
    byte size = Serial.readBytes(input, INPUT_SIZE);
    byte msgSize = sizeof(struct MyMessage);

    for (int i = 0; i + msgSize <= size; i++)
    {
        msg = (struct MyMessage*) &input[i];
        if (validateCheckSum(msg))
        {   // found a message
            processMessage(msg);
        }
        else
        {
            // discard this byte, it's part of a corrupted msg
            // (or one you are too late to process)
        }
    }
}
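A minimal, hypothetical driver for findMessages() might look like the following; the buffer name comes from the question, while the buffer size, baud rate and timeout are assumptions:

// Hypothetical globals assumed by findMessages() above (names from the question):
#define INPUT_SIZE 32
char input[INPUT_SIZE + 1];

void setup()
{
    Serial.begin(115200);    // assumed baud rate
    Serial.setTimeout(20);   // keep readBytes() from blocking for the default 1 s
}

void loop()
{
    // Check well before the 64-byte hardware buffer can overflow, and only
    // once at least one whole message could be waiting.
    if (Serial.available() >= (int) sizeof(struct MyMessage))
        findMessages();
}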
If it's not fixed size, it gets complicated, but I'm guessing you don't need to hear about that for this case.
EDIT (2)
I've struck this edit out based on the comments.
One last thing: I would use a circular buffer. First add the received bytes to the buffer, then check the bytes in that buffer.
EDIT (3)
I've given the comments some thought, and I see the point of printable encoded messages. I guess my bias comes from working at a military company; we don't have printable encoded "fire" arguments here :) A lot of messages come and go all the time, and encoding/decoding printable messages would be a waste of time. Also, we use hardware that usually exchanges very small messages built from bitfields. I accept that a printable message can be easier to examine and understand.
Hope it helps,
Gokhan.
If faster is really what you want... this is a little far-fetched.
The fastest way I can think of to meet your needs and provide synchronization is to send one byte per color and change the parity bit in a defined way, assuming you can read both the parity and the byte value of a character received with the "wrong" parity.
You will have to deal with the changing parity, and most of the characters will not be human-readable, but it has to be one of the fastest ways to send three bytes of data.
I have a CAN bus message that is composed of 3 parts.
What is the best way to decode it?
My thinking is to use 3 FIFOs: when the first part is decoded I store it in its FIFO, and the same for the other 2 parts.
Then I combine those 3 FIFOs together into one message.
The total message length is a 64-byte PDU.
I'm using the following function to get the CAN bus data:
HAL_CAN_GetRxMessage
Using the answer to your previous question from here, you can use that bitfield in combination with a plain uint8_t [64]. For example
typedef struct
{
    uint8_t data[64];
    can_received_t received;
} msg_t;
Fill it with data as you receive it, writing to the corresponding data bytes, then set the bit indicating that that part of the message has been received. The struct isn't regarded as complete until you have received all parts.
A queue/FIFO serves only one purpose: delaying execution of something until later, when there's more time. There's no reason to do that here. Your CAN message decoding could look something like this:
msg_t msg;

switch (received_can_id)
{
    ...
    case CANID_FOO:
        memcpy(&msg.data[FOO_INDEX], rec_data, FOO_SIZE);
        msg.received |= RECEIVED_FOO;
        break;
    case CANID_BAR:
        memcpy(&msg.data[BAR_INDEX], rec_data, BAR_SIZE);
        msg.received |= RECEIVED_BAR;
        break;
    ...
}

if (msg.received == RECEIVED_ALL)
{
    use(&msg);                    // do something
    memset(&msg, 0, sizeof msg);  // reset everything
}
This is fairly quick code, no need to queue anything.
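For reference, the FOO/BAR identifiers above are placeholders from the answer; a purely illustrative set of supporting definitions (my assumptions, not from the original) might be:

#include <stdint.h>
#include <string.h>

/* Hypothetical CAN IDs, offsets and sizes for a message split across frames */
#define CANID_FOO     0x100
#define CANID_BAR     0x101
#define FOO_INDEX     0            /* first 8 data bytes */
#define FOO_SIZE      8
#define BAR_INDEX     8            /* next 8 data bytes  */
#define BAR_SIZE      8

#define RECEIVED_FOO  (1u << 0)
#define RECEIVED_BAR  (1u << 1)
#define RECEIVED_ALL  (RECEIVED_FOO | RECEIVED_BAR)

/* Stand-in for the bitfield type from the earlier question */
typedef uint8_t can_received_t;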
Considering that a standard CAN message is 8 bytes long, you can declare the message as a uint64_t and combine the respective signals into it using |.
Example:
uint64_t message = 0;
uint8_t incomingBytes[8] = {0};

for (int i = 0; i < 8; i++)
{
    message = message << 8;
    message |= incomingBytes[i];
}
If you want to interpret the VIN data as a string, then:
char vindata [9];
memcpy(vindata, incomingBytes, 8);
vindata[8] = '\0';
I made a decoder for LZW-compressed TIFF images, and all the parts work: it can decode large images at various bit depths, with or without horizontal prediction, except in one case. While it decodes files written by most programs (like Photoshop and Krita with various encoding options) fine, there's something very strange about files created by ImageMagick's convert: it produces LZW codes that aren't yet in the dictionary, and I don't know how to handle them.
Most of the time the 9- to 12-bit code in the LZW stream that isn't yet in the dictionary is the very next one my decoding algorithm would add to the dictionary (which I'm not sure should be a problem, although my algorithm fails on an image containing such cases), but at times it can even be hundreds of codes into the future. In one case the first code after the clear code (256) is 364, which seems quite impossible given that the clear code clears my dictionary of all codes 258 and above; in another case the code is 501 when my dictionary only goes up to 317!
I have no idea how to deal with this, but it seems that I'm the only one with the problem: the decoders in other programs load such images fine. So how do they do it?
Here's the core of my decoding algorithm. Obviously, given how much code is involved, I can't provide complete compilable code in a compact manner, but since this is a matter of algorithmic logic, this should be enough. It closely follows the algorithm described in the official TIFF specification (page 61); in fact most of the spec's pseudocode is in the comments.
void tiff_lzw_decode(uint8_t *coded, buffer_t *dec)
{
    buffer_t word={0}, outstring={0};
    size_t coded_pos;            // position in bits
    int i, new_index, code, maxcode, bpc;
    buffer_t *dict={0};
    size_t dict_as=0;

    bpc = 9;                     // starts with 9 bits per code, increases later
    tiff_lzw_calc_maxcode(bpc, &maxcode);
    new_index = 258;             // index at which new dict entries begin
    coded_pos = 0;               // bit position
    lzw_dict_init(&dict, &dict_as);

    while ((code = get_bits_in_stream(coded, coded_pos, bpc)) != 257)   // while ((Code = GetNextCode()) != EoiCode)
    {
        coded_pos += bpc;

        if (code >= new_index)
            printf("Out of range code %d (new_index %d)\n", code, new_index);

        if (code == 256)                                     // if (Code == ClearCode)
        {
            lzw_dict_init(&dict, &dict_as);                  // InitializeTable();
            bpc = 9;
            tiff_lzw_calc_maxcode(bpc, &maxcode);
            new_index = 258;

            code = get_bits_in_stream(coded, coded_pos, bpc);   // Code = GetNextCode();
            coded_pos += bpc;

            if (code == 257)                                 // if (Code == EoiCode)
                break;

            append_buf(dec, &dict[code]);                    // WriteString(StringFromCode(Code));

            clear_buf(&word);
            append_buf(&word, &dict[code]);                  // OldCode = Code;
        }
        else if (code < 4096)
        {
            if (dict[code].len)                              // if (IsInTable(Code))
            {
                append_buf(dec, &dict[code]);                // WriteString(StringFromCode(Code));

                lzw_add_to_dict(&dict, &dict_as, new_index, 0, word.buf, word.len, &bpc);
                lzw_add_to_dict(&dict, &dict_as, new_index, 1, dict[code].buf, 1, &bpc);   // AddStringToTable
                new_index++;
                tiff_lzw_calc_bpc(new_index, &bpc, &maxcode);

                clear_buf(&word);
                append_buf(&word, &dict[code]);              // OldCode = Code;
            }
            else
            {
                clear_buf(&outstring);
                append_buf(&outstring, &word);
                bufwrite(&outstring, word.buf, 1);           // OutString = StringFromCode(OldCode) + FirstChar(StringFromCode(OldCode));

                append_buf(dec, &outstring);                 // WriteString(OutString);

                lzw_add_to_dict(&dict, &dict_as, new_index, 0, outstring.buf, outstring.len, &bpc);   // AddStringToTable
                new_index++;
                tiff_lzw_calc_bpc(new_index, &bpc, &maxcode);

                clear_buf(&word);
                append_buf(&word, &dict[code]);              // OldCode = Code;
            }
        }
    }

    free_buf(&word);
    free_buf(&outstring);

    for (i=0; i < dict_as; i++)
        free_buf(&dict[i]);
    free(dict);
}
As for the results my code produces in such situations, it's quite clear from the output that only those few codes are badly decoded; everything before and after is decoded properly. But obviously, in most cases, the rest of the image after one of these mystery future codes is ruined, because the remaining decoded bytes are shifted by a few places. That also means my reading of the 9- to 12-bit code stream is correct, so I really am seeing a 364 code right after a 256 dictionary-clearing code.
Edit: Here's an example file that contains such weird codes. I've also found a small TIFF LZW loading library that suffers from the same problem; it crashes where my loader finds the first weird code in this image (code 3073 when the dictionary only goes up to 2051). The good thing is that, since it's a small library, you can test it with the following code:
#include "loadtiff.h"
#include "loadtiff.c"
void loadtiff_test(char *path)
{
int width, height, format;
floadtiff(fopen(path, "rb"), &width, &height, &format);
}
And if anyone insists on diving into my code (which should be unnecessary, and it's a big library) here's where to start.
The bogus codes come from trying to decode more than we're supposed to. The problem is that an LZW strip sometimes does not end with an End-of-Information (257) code, so the decoding loop also has to stop once a certain number of decoded bytes has been output. That number of bytes per strip is determined by the TIFF tags as ROWSPERSTRIP * IMAGEWIDTH * BITSPERSAMPLE / 8, and, if PLANARCONFIG is 1 (which means interleaved channels as opposed to planar), multiplied by SAMPLESPERPIXEL. So on top of stopping the decoding loop when a 257 code is encountered, the loop must also stop after that count of decoded bytes has been reached.
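As a sketch of that extra stop condition (the tag variables here are hypothetical locals, not a real TIFF API, and dec->len assumes the question's buffer_t tracks how many bytes it holds, as its use of .len suggests):

/* Expected decoded byte count for one strip, from the TIFF tags */
size_t expected = (size_t) rows_per_strip * image_width * bits_per_sample / 8;
if (planar_config == 1)              /* chunky, i.e. interleaved channels */
    expected *= samples_per_pixel;

/* In the decoder, stop on EOI *or* when the strip's byte budget is reached */
while (dec->len < expected &&
       (code = get_bits_in_stream(coded, coded_pos, bpc)) != 257)
{
    /* ... decode as before ... */
}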
I am building a POS application for a Verifone terminal (in C) which has to communicate with the M2M switch in Morocco, but I'm stuck sending the initialization message, which should contain a backslash like this (08\00); when I send it, I get 08\5c00 instead.
The backslash is converted to its hex value (5c). The tool I'm using to simulate the server is Socket Workbench.
How can I send a backslash without it being converted into \5c?
It needs to be done in C Language.
EDIT
This is the data I want to send to the server, with the header, but when I try to print \00 I get \5C00:
sprintf(data,"%s%s%s%s%s%s%s%s%s%s%s%s%s","\x30\x60\x60\x20\x15\x35\x35","\x08","\\00","\x0x00","\x01\x30\x30\x30\x30\xC0\x30\x30\x30\x30","\x97","\\00","\x30\x30","\x00\x00\x01\x00","\x02",idTerminal,idCommercant,"\x20\x20\x20\xA4\xBC");
If I'm understanding correctly, the first part of your example:
sprintf(data,"%s%s",
"\x30\x60\x60\x20\x15\x35\x35",
"\x08");
is doing exactly what you want. The problem is that on the next %s you are using "\\00", and you want the server to receive ASCII \00 (which would be 0x5C, 0x30, 0x30), but instead the server reports that it is receiving ASCII \5C00 (which would be 0x5C, 0x35, 0x43, 0x30, 0x30).
I agree with Klas Lindbäck that it sounds like the VeriFone terminal is doing the correct thing and the server is displaying it wrong. There are 2 things I would consider doing to troubleshoot this and prove that it is correct (you can do just one or the other, or both together).
First: you can use LOG_PRINTF (or print to paper or the screen if you prefer) to print the value of each byte just before you send it off. Below is a quick-and-dirty function I wrote to do just that when I was troubleshooting a similar sort of problem once. Note that I only cared about the beginning of the string (as seems to be the case for you too), so I don't print the end if I run out of buffer space.
void LogDump(unsigned char* input, int expectedLength)
{
#ifdef LOGSYS_FLAG
    char buffer[100];
    int idx, bfdx;

    memset(buffer, 0, sizeof(buffer));
    bfdx = 0;
    for (idx = 0; idx < expectedLength && bfdx < sizeof(buffer); idx++)
    {
        //if it is a printable character, print as is
        if (input[idx] > 31 && input[idx] < 127)
        {
            buffer[bfdx++] = (char) input[idx];
            continue;
        }

        //if we are almost out of buffer space, show that we are truncating
        // the results with a ~ character and break. Note we are leaving 5 bytes
        // because we expand non-printable characters like "<121>"
        if (bfdx + 5 > sizeof(buffer))
        {
            buffer[bfdx++] = '~';
            break;
        }

        //if we make it here, then we have a non-printable character, so we'll show
        // the value inside of "<>" to visually denote it is a numeric representation
        sprintf(&buffer[bfdx], "<%d>", (int) input[idx]);

        //advance bfdx to the next 0 in buffer. It will be at least 3...
        bfdx += 3;
        //... but for 2 and 3 digit numbers, it will be more.
        while (buffer[bfdx] > 0)
            bfdx++;
    }

    //I like to surround my LOG_PRINTF statements with short waits because if there
    // is a crash in the program directly after this call, the LOG_PRINTF will not
    // finish writing to the serial port and that can make it look like this LOG_PRINTF
    // never executed which can make it look like the problem is elsewhere
    SVC_WAIT(5);
    LOG_PRINTF(("%s", buffer));
    SVC_WAIT(5);
#endif
}
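A hypothetical call just before transmitting might look like this (the length of 20 is arbitrary; data is the message buffer from the question):

LogDump((unsigned char*) data, 20);   /* dump the first 20 bytes about to be sent */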
Second: try assigning each position in your char array an explicit value. If you already used my LOG_PRINTF suggestion above and found that it was not sending what you thought it should, this is one way to fix it so that it DOES send EXACTLY what you want. This method is a bit more tedious, but since you are spelling out each value anyway, it shouldn't be too much more overhead:
data[0] = 0x30;
//actually, I'd probably use either the decimal value: data[0] = 48;
// or I'd use the ASCII value: data[0] = '0';
// depending on what this data actually represents, either of those is
// likely to be more clear to whomever has to read the code later.
// However, that's your call to make.
data[1] = 0x60;
data[2] = 0x60;
data[3] = 0x20;
data[4] = 0x15;
data[5] = 0x35;
data[6] = 0x35;
data[7] = 0x08;
data[8] = 0x5C; // This is the '\'
data[9] = 0x30; // The first '0'
data[10] = 0x30; // The second '0'
data[11] = 0;
//for starters, you may want to stop here and see what you get on the other side
After you have proven to yourself that it IS or IS NOT the VeriFone code causing the problem, you will know whether you need to focus on the terminal or on the server side.
This is the example code:
while (mpg123_read(mh, buffer, buffer_size, &done) == MPG123_OK)
{
    // -> I'm considering this line
    if ((ao_play(dev, (char*)buffer, done) == 0)) {
    }
}
In this code I want to edit the audio before it's played. Someone suggested I use an FFT to do this; personally, I tried this:
while (mpg123_read(mh, buffer, buffer_size, &done) == MPG123_OK)
{
    buffer=((int)buffer)*2
    if((ao_play(dev, (char*)buffer, done)==0))
}
as an experiment, but it doesn't do anything useful. So, what is a buffer? How can I change it in real time? And can I stop it and then resume it (what a music player calls "pause")?
Sorry for the noob questions, but I've only been programming for 6 months.
A buffer is a memory block used to hold an arbitrary, bounded amount of data. In C it's used as an array. If the buffer is dynamically allocated, then the variable buffer is a pointer holding the address where the actual buffer (memory block) begins. You have to look at the declaration of the variable buffer to know the type of the elements inside that array.
Also you have to look at the mpg123 documentation to know how to interpret the data that is returned by the mpg123_read() function.
Making an educated guess based on the nature of the data you have decoded, I would say that buffer is probably an array of interleaved short ints comprising data for the stereo channels L and R of an uncompressed 16-bit audio signal, with channel L at even indices and channel R at odd indices.
So, a possible editing would be like this:
// done is in bytes; buffer is assumed to hold 16-bit samples, so done / 2 of them
for (int i = 0; i < done / 2; i += 2)
{
    buffer[i] = (buffer[i+1] - buffer[i]) / 2;
}
This would subtract the left channel data from the right channel data, cancelling any audio that is identical on both channels. It's the basic technique for cancelling vocals in a song.
Your proposed edit has no meaning. You are changing the value of the pointer buffer by multiplying it by two. That makes the pointer hold a very different, quite possibly invalid memory address, so when that pointer is used in ao_play() you will get a segmentation fault.
I guess what you want to do in your example is make your audio twice as loud, don't you? In that case, you are looking for something like this:
// Again, done is in bytes, i.e. done / 2 16-bit samples; step through every
// sample so both channels are amplified, and clip to avoid overflow
for (int i = 0; i < done / 2; i++)
{
    if (buffer[i] > 16383)
        buffer[i] = 32767;
    else if (buffer[i] < -16384)
        buffer[i] = -32768;
    else
        buffer[i] = 2 * buffer[i];
}
To stop and resume, you have to find a way for your program to check the value of something you can change with an input device (a button in a window, a key press, etc.).
For example, let's say you have a function called kbhit() that returns non-zero if a key is being pressed (this function is present in DOS compilers and is sometimes available as a non-standard library to ease porting of older DOS programs: look at conio.h if you have it). Then you can do something like this:
int paused = 0; /* flip-flop variable to pause/resume playing */

while (mpg123_read(mh, buffer, buffer_size, &done) == MPG123_OK)
{
    if (!paused)
    {
        if ((ao_play(dev, (char*)buffer, done) == 0))
            break;
    }
    if (kbhit() && getchar() == ' ')
        paused = !paused;
}
This will play/pause your music using the SPACE bar.
That will not pause the music, though, only skip it: reading (mpg123_read) goes on, so you're just throwing away part of the audio.
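A sketch of a variant that actually pauses (my own assumption, not from the answers above): keep polling for the key, and simply neither read nor play while paused. A real player would sleep or block instead of spinning, but this shows the idea with the same variables as the example:

int paused = 0;

for (;;)
{
    // Toggle pause on SPACE, as in the example above
    if (kbhit() && getchar() == ' ')
        paused = !paused;

    if (paused)
        continue;        // neither read nor play while paused

    // Only read the next chunk when we intend to play it
    if (mpg123_read(mh, buffer, buffer_size, &done) != MPG123_OK)
        break;

    if (ao_play(dev, (char*)buffer, done) == 0)
        break;
}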
I'm trying to send data to an SD card from a PIC18f4580, but the PIC is not sending what it should be.
related global variables:
unsigned char TXBuffer[128]; //tx buffer
unsigned char TXCurrentPos = 0x00; //tracks the next byte to be sent
unsigned char TXEndPos = 0x00; //tracks where new data should be put into the array
I am adding data to a buffer using the following function:
void addToBuffer(char data){
    TXBuffer[TXEndPos] = data;
    TXEndPos++;
}
And I put the data from TXBuffer into TXREG in the following interrupt handler:
else if (PIR1bits.TXIF == 1){
    if((TXEndPos - TXCurrentPos) > 0){   // if there is data in the buffer
        TXREG = TXBuffer[TXCurrentPos];  // send next byte
        TXCurrentPos++;                  // update to the new position
    }
}
Using an oscilloscope I am able to see that the PIC is sending 0x98, regardless of what I put into the buffer. In fact I never put 0x98 into the buffer.
However, if I replace
TXREG = TXBuffer[TXCurrentPos];
with
TXREG = 0x55;
or
TXREG = TXCurrentPos;
then I get the expected results, that is the PIC will send 0x55 repeatedly, or count up from 0 respectively.
So why does the PIC have trouble sending data from the array, but is fine any other time? I'll emphasize that transmission is handled in an interrupt, because I feel like that's the root of my issue.
EDIT: It is a circular buffer in the sense that TXEndPos and TXCurrentPos return to 0 when they reach 127.
I also disable the transmit interrupt when TXEndPos - TXCurrentPos == 0, and re-enable it when adding data to the buffer. Really, my code works completely as expected in that if I add 13 characters to TXBuffer in main, my PIC will transmit 13 characters and then stop. The problem is that they are always the same (wrong) character - 0x98.
EDIT2: more complete functions are here: http://pastebin.com/MyYz1Qzq
Perhaps TXBuffer doesn't really contain the data you think it does? Maybe you're not calling addToBuffer or calling it at the wrong time or with the wrong parameter?
You can try something like this in your interrupt handler:
TXBuffer[TXCurrentPos] = TXCurrentPos;
TXREG = TXBuffer[TXCurrentPos];
TXCurrentPos++;
Just to prove to yourself you can read and write to TXBuffer and send that to the USART.
Also try:
TXREG = TXEndPos;
To see if this matches your expectation (= the length of your message).
I am assuming there's some other code we're not seeing here that takes care of starting the transmission. Also assuming this is done per message with the position being reset between messages - i.e. this is not supposed to be a circular buffer.
EDIT: Based on looking at the more recently posted code:
Don't you need to kick-start the transmitter by writing the first byte of your buffer to TXREG? What I would normally do is enable the interrupt and write the first byte into the transmit register, and a quick look at the datasheet seems to indicate that's what you need to do here. Another thing: I still don't see how you ensure a wraparound from 127 to 0.
Also your main() seems to just end abruptly, where does the execution continue once main ends?
There are a lot of things you are not taking care of. TXBuffer needs to be a circular buffer: once you increment TXEndPos past 127 you need to wrap it back to 0, and the same for TXCurrentPos. That also affects the test for whether there's something in the buffer; the > 0 test isn't good enough. Generic advice is available here.
Your code is incomplete, but it looks wrong as-is: what happens if there is nothing to send? You don't seem to load TXREG then, so why would anything be transmitted, be it 0x98 or anything else?
The way it is usually done with this kind of code architecture is to turn off TXIE when there is nothing to send (in an else branch of the IRQ routine), and to turn it on unconditionally at the end of the addToBuffer function (since at that point you know for sure there is at least one character to send).
Also, you should test TXEndPos and TXCurrentPos for equality directly, since that would let you use a circular buffer very easily by adding two modulo operations.
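Pulling that advice together, a rough sketch (my own, with XC8-style register names matching the question; the buffer size and other details are assumptions) might look like this. Note that no explicit kick-start is needed: TXIF is set whenever TXREG is empty, so enabling TXIE in addToBuffer immediately triggers the interrupt that loads the first byte.

#include <xc.h>   // assumed XC8-style register definitions (TXREG, PIE1bits, PIR1bits)

#define TX_BUF_SIZE 128

volatile unsigned char TXBuffer[TX_BUF_SIZE];
volatile unsigned char TXCurrentPos = 0;   // next byte to send
volatile unsigned char TXEndPos = 0;       // where new data is written

void addToBuffer(char data)
{
    TXBuffer[TXEndPos] = data;
    TXEndPos = (unsigned char)((TXEndPos + 1) % TX_BUF_SIZE);   // wrap 127 -> 0
    PIE1bits.TXIE = 1;   // there is data now, so enable the TX interrupt
}

// Called from the interrupt routine when PIR1bits.TXIF is set:
void txIsrHandler(void)
{
    if (TXCurrentPos != TXEndPos)   // compare for equality, not "> 0"
    {
        TXREG = TXBuffer[TXCurrentPos];
        TXCurrentPos = (unsigned char)((TXCurrentPos + 1) % TX_BUF_SIZE);
    }
    else
    {
        PIE1bits.TXIE = 0;   // buffer empty: stop interrupting until more data arrives
    }
}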