Parsing AT commands & responses in a circular buffer - c

I have a PIC32MM connected via UART to an RN4020 Bluetooth module and I'm currently struggling with its configuration. It uses an AT-command format, COMMAND+CR+LF (or the corresponding ANSWER+CR+LF).
First, the module is put into Command Mode via a dedicated signal line; after boot-up it sends "CMD\r\n" on the MCU's RX line. I am using the default functions generated by the MPLAB IDE, which are based on circular buffers filled byte by byte in the RX interrupt routine, and I think I've grasped the basic principle of how they work.
There is even a neat ReadBuffer(target string, number of bytes to read) function provided, which reads the given number of bytes from the RX circular buffer into a given char array to work with.
My question is: what is the best way to wait for and recognize a correct answer, and then, once it has arrived and been checked (meaning the module is ready for a command to be sent over the TX line), send that command? I need to detect the moment when the answer from the module has been fully received by the MCU - in other words, to know when to call ReadBuffer so that it returns the correct string and I can be sure that after ReadBuffer(my_target_string, 5), my_target_string contains "CMD\r\n". I don't want to move the head and tail pointers at the wrong moment and mess up the structure. Is some kind of timeout a good way to proceed, or is it better to create an indicator flag and set/clear it when CR and LF arrive?
Thanks for any ideas or hints in advance.
Once I know the answer has fully arrived, some sort of string compare should be enough to check whether it is what I expect, I think. (The '\0' character might be tricky, but I'll try to keep that in mind.)
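Roughly what I have in mind, as a minimal sketch (RXByteReceived() is just a placeholder name for whatever per-byte hook the generated driver exposes, and the ReadBuffer() prototype is only a guess from the description above - adjust both to the real generated names):
#include <stdbool.h>
#include <stdint.h>

extern void ReadBuffer(char *dst, uint16_t count);   // helper described above; real prototype may differ

static volatile bool     line_ready  = false;  // set when a full CR+LF answer is buffered
static volatile uint16_t pending_len = 0;      // length of that answer, LF included
static uint16_t          rx_count    = 0;      // bytes seen since the last LF (ISR side only)

// Called for every received byte, after it has been stored in the circular buffer.
void RXByteReceived(uint8_t byte)              // placeholder hook name
{
    rx_count++;
    if (byte == '\n') {                        // answer terminator reached
        pending_len = rx_count;
        rx_count    = 0;
        line_ready  = true;
    }
}

// Main-loop side: fills 'dst' only when a complete answer is available, so the
// head and tail pointers are never advanced over a half-received answer.
bool TryReadLine(char *dst, uint16_t dst_size)
{
    if (!line_ready) {
        return false;
    }
    uint16_t n = pending_len;
    line_ready = false;
    if (n >= dst_size) {
        n = dst_size - 1;                      // truncate rather than overflow
    }
    ReadBuffer(dst, n);                        // pull exactly one answer out of the circular buffer
    dst[n] = '\0';                             // terminate it so strcmp() works
    return true;
}
Waiting for the prompt would then be a matter of polling TryReadLine(answer, sizeof answer) in the main loop and checking strcmp(answer, "CMD\r\n") == 0 before sending the next command.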

Related

What is the difference between a HAL_DAC_Stop_DMA vs a HAL_DAC_Stop?

I am running a DMA transfer through the DAC of an STM32F303RE Nucleo and was wondering whether there is a difference between HAL_DAC_Stop_DMA and HAL_DAC_Stop. I ask because earlier in my code I just used HAL_DAC_Stop and it worked fine, but I now see that there is also a HAL_DAC_Stop_DMA and was wondering what the difference is.
If you started it with DMA you should stop it with the equivalent function. If you use the non-DMA stop function, the DAC will stop but the DMA is still running, waiting for the DAC to request more data. That request will never come, so the system is left in a funny state. Maybe the next start function can tidy up this funny state, or maybe it can't. Read the source of the functions if you want to find out the exact details.
Another possible problem with not using the DMA stop function is that the DMA may still be busy transferring the last data to the DAC, which would mean your data buffer is not yet free for you to reuse. Whether or not this causes a problem depends on how soon your code touches that buffer.
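To make the pairing concrete, a minimal sketch (assuming a CubeMX-style handle named hdac1, DAC channel 1 and a 12-bit right-aligned sample buffer - those names are not from the question):
#include "stm32f3xx_hal.h"   // assumes an STM32F3 HAL project

extern DAC_HandleTypeDef hdac1;        // handle name is an assumption (CubeMX default)
static uint16_t wave[64];              // example waveform buffer

void dac_demo(void)
{
    // Start: DAC channel 1 fed by DMA from 'wave'
    HAL_DAC_Start_DMA(&hdac1, DAC_CHANNEL_1, (uint32_t *)wave,
                      sizeof wave / sizeof wave[0], DAC_ALIGN_12B_R);

    // ... run for a while ...

    // Stop with the matching call so the DMA channel is disabled along with
    // the DAC; HAL_DAC_Stop() alone would leave the DMA armed.
    HAL_DAC_Stop_DMA(&hdac1, DAC_CHANNEL_1);
}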

C: Is there a way to lower the speed of printf-outputs

As said in the heading, is there a way to lower the speed of printf outputs in C? Something like watching each character get printed one at a time (it does not have to be that slow, just so you understand what I mean).
The reason I ask is:
I need to program a small microcontroller. Every 'printf' executed on it should be sent back to the COM1 port of the host. Everything works fine: I already buffered my printf so everything is stored in a char array of finite size, and this array is sent back to COM1 char by char. But because I don't know how many printfs there will be, and because of the limited memory of the μC, a size-limited array isn't the best solution. So my new attempt is to write directly to the send register of the μC, which can only hold one char at a time until it is sent. I do this via
setvbuf(stdout, LINFLEX_0.BDRL.B.DATA0, _IOFBF, 1);
where LINFLEX_0.BDRL.B.DATA0 represents the transmit register. What I think my problem is now: the printfs overwrite the register too fast, so it has no time to send the char stored in it before it gets changed again. When sending char by char from the array, I wait until a data-transmission flag is set:
//write character to transmit buffer
LINFLEX_0.BDRL.B.DATA0 = buffer[j];
// Wait for data transmission completed flag
while (1 != LINFLEX_0.UARTSR.B.DTF) {}
// Clear DTF Flag
LINFLEX_0.UARTSR.R = 0x0002;
So the idea is to slow down the speed at which printf processes each character, but feel free to comment if anyone has another idea.
The problem isn't with printf as such but with the underlying UART driver. That's what you'd have to tweak. If you are using Codewarrior for MPC56 you can actually view the source code for all of it: quite horrible code. Messing with it will only go bad - and apparently it doesn't seem to work well in the first place.
Using printf in this kind of embedded application is overall a very bad idea, since the function is unsuitable for pretty much any purpose, UART communication in particular. The presence of printf is actually an indicator that a project has gone terribly wrong; quite possibly it has been hijacked by PC programmers. That's not really a programming problem but a management one.
Technically, the only sane thing to do here is to toss out all the crap from your project. That means everything remotely resembling stdio.h. Instead, write your own UART driver, based on the available Freescale examples. Make it work on bytes. This also enables you to add custom features such as "echo", where the MCU has to wait for a reply from the receiver. Or you could implement it with DMA if you just want to write data to a buffer and then forget all about it.
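To give a feel for the scale involved, a byte-oriented transmit routine needs little more than the register accesses already shown in the question (register and bit names are copied from that snippet; verify them against the reference manual for your derivative):
// Blocking, byte-oriented transmit for LINFlex 0 in buffered UART mode.
void uart_putc(char c)
{
    LINFLEX_0.BDRL.B.DATA0 = c;                 // write character to transmit buffer
    while (1 != LINFLEX_0.UARTSR.B.DTF) { }     // wait for data transmission completed flag
    LINFLEX_0.UARTSR.R = 0x0002;                // clear DTF flag (write 1 to clear)
}

void uart_puts(const char *s)
{
    while (*s != '\0') {
        uart_putc(*s++);
    }
}
Anything that needs formatting can then be built on top of these two functions without dragging stdio into the driver.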

EhBASIC input on 6502 emulator

I have been working on this 6502 emulator for a while, and I'm trying to get a simple Enhanced BASIC ROM to work. Although it lacks precise clock timing, the emulator did pass AllSuiteA.asm.
With EhBASIC, I've managed to get some output by printing the value of $F001 when a read is done to that address.
if (lastwrite == 0xF001)
{
    printf("%c", CPUMEM[0xF001]);
}
However, I have no idea how to emulate the input process. This post states that whenever EhBASIC wants input, it will poll $F004. But my current code seems to have two problems:
while (1)
{
    decodeandexecute();
    if (lastread == 0xF004)
    {
        inputchar = getchar();
        CPUMEM[0xF004] = inputchar;
    }
    if (lastwrite == 0xF001)
    {
        printf("%c", CPUMEM[0xF001]);
    }
}
Input is only possible through individual letters (Expected)
After the program asks for memory size, giving any input will just cause a loop of reading from $F004 (LDA $F004) - i.e. I can't let EhBASIC know when to stop receiving inputs
I want to know an effective method of entering a string of characters and getting past "memory size?".
Additionally, if I want to let EhBASIC calculate the memory size automatically, what should I input to $F004?
I'm pretty much a newbie in this area....
I see you use getchar in the code, and if I remember correctly that is a blocking call (it will wait until someone presses a key).
In the manual of ehbasic it says:
How to.
The interpreter calls the system routines via RAM based vectors and,
as long as the requirements for each routine are met, these can be changed
on the fly if needs be.
All the routines exit via an RTS.
The routines are ...
Input
This is a non halting scan of the input device. If a character is ready it
should be placed in A and the carry flag set, if there is no character then A,
and the carry flag, should be cleared.
One way to deal with this is to use two threads. One thread that runs the emulation of the 6502 running ehbasic and another thread that polls the keyboard.
Then let the polling thread push any input keystrokes into a small buffer that the ehbasic input routine can read from.
Manual: http://www.sunrise-ev.com/photos/6502/EhBASIC-manual.pdf
UPDATE
Reading the question/answer you linked to, I see it is a modified ehbasic.
Your keyboard polling thread should place the keystrokes it reads in $F004 (and after a while clear $F004 again, if I understand the instructions correctly).
UPDATE 2
As a debugging tip: in your first version, simply have a string with fixed input such as 10 print "hello" 20 goto 10 and feed $F004 from there. That way you don't have to worry about any problems with using an actual keyboard.
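A sketch of that fixed-input idea, reusing the names from the question's loop (CPUMEM, lastread, lastwrite, decodeandexecute()); the leading \r relies on EhBASIC accepting a bare return at the "memory size?" prompt to size memory itself, and lastread/lastwrite are cleared so each access is handled only once:
/* Feed a canned script instead of the keyboard; the next character is kept
 * pre-loaded in $F004 so it is already there when EhBASIC executes LDA $F004. */
const char script[] = "\r10 PRINT \"HELLO\"\r20 GOTO 10\rRUN\r";
size_t pos = 0;

CPUMEM[0xF004] = (uint8_t)script[0];

while (1)
{
    decodeandexecute();

    if (lastread == 0xF004)                     /* EhBASIC just consumed a character */
    {
        pos++;
        CPUMEM[0xF004] = (pos < sizeof script - 1) ? (uint8_t)script[pos] : 0;
        lastread = 0;                           /* handle this read only once */
    }

    if (lastwrite == 0xF001)
    {
        printf("%c", CPUMEM[0xF001]);
        lastwrite = 0;
    }
}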

What's the 'safest' way to read from this buffer?

I'm trying to read and write a serial port in Linux (Ubuntu 12.04) where a microcontroller on the other end blasts 1 or 3 bytes whenever it finishes a certain task. I'm able to successfully read and write to the device, but the problem is my reads are a little 'dangerous' right now:
do
{
    nbytes = read(fd, buffer, sizeof(buffer));
    usleep(50000);
} while (nbytes == -1);
I.e. to simply monitor what the device is sending me, I poll the buffer every half second. If it's empty, it idles in this loop. If it receives something or errors, it kicks out. Some logic then processes the 1 or 3 packets and prints it to a terminal. A half second is usually a long enough window for something to fully appear in the buffer, but quick enough for a human who will eventually see it to not think it's slow.
'Usually' is the key word. If I read the buffer in the middle of a 3-byte blast, I'll get a bad read; the buffer will have either 1 or 2 bytes in it and it will get rejected in the packet processing (if I catch the first byte of a 3-byte packet, it won't be a purposefully sent one-byte value).
Solutions I've considered/tried:
I've thought of simply reading one byte at a time and feeding in additional bytes if it's part of a 3-byte transmission. However, this creates some ugly loops (since read() only reports the number of bytes from the most recent call) that I'd like to avoid if I can.
I've tried reading 0 bytes (e.g. nbytes = read(fd, buffer, 0);) just to see how many bytes are currently in the buffer before I try to load it into my own buffer, but as I suspected it just returns 0.
It seems like a lot of my problems would be easily solved if I could peek into the contents of the port buffer before I load it into a buffer of my own. But read() is destructive up to the number of bytes that you tell it to read.
How can I read from this buffer such that I don't do it in the middle of receiving a transmission, but do it fast enough to not appear slow to a user? My serial messenger is divided into a sender and receiver thread, so I don't have to worry about my program loop blocking somewhere and neglecting the other half.
Thanks for any help.
Fix your packet processing. I always end up using a state machine for instances like this, so that if I get a partial message, I remember (stateful) where I left off processing and can resume when the rest of the packet arrives.
Typically I have to verify a checksum at the end of the packet, before proceeding with other processing, so "where I left off processing" is always "waiting for checksum". But I store the partial packet, to be used when more data arrives.
Even though you can't peek into the driver buffer, you can load all those bytes into your own buffer (in C++ a deque is a good choice) and peek into that all you want.
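A minimal sketch of that assembly step in C, assuming (as the question implies) that the first byte is enough to tell a deliberate one-byte value from the start of a 3-byte packet; starts_three_byte_msg() and process_message() are hypothetical placeholders:
#include <stdbool.h>
#include <stdint.h>
#include <unistd.h>

bool starts_three_byte_msg(uint8_t first);          /* hypothetical: classify the first byte */
void process_message(const uint8_t *msg, int len);  /* your existing packet processing */

void rx_loop(int fd)
{
    uint8_t pending[3];
    int     have = 0;        /* bytes of the current message collected so far */
    int     need = 1;        /* bytes the current message is expected to have */

    for (;;) {
        uint8_t chunk[64];
        ssize_t n = read(fd, chunk, sizeof chunk);   /* may deliver a partial message */
        if (n <= 0)
            continue;                                 /* or handle error/EOF properly */

        for (ssize_t i = 0; i < n; i++) {
            pending[have++] = chunk[i];
            if (have == 1)
                need = starts_three_byte_msg(pending[0]) ? 3 : 1;
            if (have == need) {                       /* a complete message is assembled */
                process_message(pending, have);
                have = 0;
                need = 1;
            }
        }
    }
}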
You need to know how large the messages being sent are. There are a couple of ways to do that:
Prefix the message with the length of the message.
Have a message-terminator, a byte (or sequence of bytes) that can not be part of a message.
Use the "command" to calculate the length, i.e. when you read a command-byte you know how much data should follow, so read that amount.
The second method is best for cases where you can come out of sync, because you can then read until you get the message-terminator sequence and be sure that the next bytes will be a new message.
You can of course combine these methods.
To poll a device, you would do better to use a multiplexing syscall like poll(2), which succeeds when some data is available for reading from that device. Notice that poll is multiplexing: you can poll several file descriptors at once, and poll will succeed as soon as one (any) of the file descriptors is readable with POLLIN (or writable, if asked with POLLOUT, etc...).
Once poll has succeeded for some fd on which you requested POLLIN, you can read(2) from that fd.
Of course, you need to know the conventions used by the hardware device about its messages. Notice that a single read could get several messages, or only a part of one (or more). There is no way to prevent reading of partial messages (or "packets") - probably because your PC serial I/O is much faster than the serial I/O inside your microcontroller. You should bear with that, by knowing the conventions defining the messages (and if you can change the software inside the microcontroller, define an easy convention for that) and implementing the appropriate state machine and buffering, etc...
NB: There is also the older select(2) syscall for multiplexing, which has limitations related to the C10K problem. I recommend poll instead of select in new code.
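For example, a sketch with an arbitrary 500 ms timeout:
#include <poll.h>
#include <unistd.h>

/* Wait up to 500 ms for the serial fd to become readable, then read from it.
 * Returns the number of bytes read, 0 on timeout, -1 on error. */
ssize_t read_with_poll(int fd, void *buf, size_t len)
{
    struct pollfd pfd = { .fd = fd, .events = POLLIN };

    int ready = poll(&pfd, 1, 500);         /* one descriptor, 500 ms timeout */
    if (ready < 0)
        return -1;                          /* poll() itself failed */
    if (ready == 0)
        return 0;                           /* timeout, nothing arrived */
    if (pfd.revents & POLLIN)
        return read(fd, buf, len);          /* may still be only part of a message */
    return -1;                              /* POLLERR / POLLHUP etc. */
}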

Receive message of undefined size in UART in C

I'm writing my own drivers for LPC2148 and a question came to mind.
How do I receive a message of unspecified size in UART?
The only two things that come to mind are: 1 - configure a watchdog timer and end the reception when the time runs out; 2 - make it so that whenever a message is sent there must be an end-of-message character.
The first choice seems better in my opinion, but I'd like to know if anybody has a better answer, and I'm sure there must be one.
Thank you very much
Just give the caller whatever bytes you have received so far. The UART driver shouldn't try to implement the application protocol, the application should do that.
That looks like the wrong use for a watchdog. I ended up with three solutions for this problem:
Use fixed-size packets and DMA, so you receive one packet per transaction. Apparently, that is not possible in your case.
Receive the message char by char until the end-of-message character is received. This is somewhat error-prone, since the EOM char may appear in the data.
Use a fixed-size header before every packet. In the header, store payload size and/or message type ID.
The third approach is probably the best one. You may combine it with the first one, i.e. use DMA to receive header and then data (in the second transaction, after the data size is known from the header). It is also one of the most flexible approaches.
One more thing to worry about is keeping the byte stream in sync. There may be rubbish lying in the UART input buffers, which may get read as data, or you may get only part of a packet after your MCU is powered up (i.e. the beginning of the packet had already been sent by then). To guard against that, you can add magic bytes to your packet header, and probably a CRC.
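A sketch of that header idea with arbitrary choices (one magic byte, a length byte and a simple additive checksum), just to show the shape of the receive state machine:
#include <stdint.h>

#define FRAME_MAGIC 0xA5u                   /* arbitrary start-of-frame marker */

enum rx_state { WAIT_MAGIC, WAIT_LEN, WAIT_PAYLOAD, WAIT_SUM };

static enum rx_state state = WAIT_MAGIC;
static uint8_t payload[64];
static uint8_t expected_len, got, sum;

void handle_packet(const uint8_t *data, uint8_t len);   /* application callback */

/* Feed every byte received by the UART (e.g. from the RX interrupt) into this. */
void frame_rx_byte(uint8_t b)
{
    switch (state) {
    case WAIT_MAGIC:
        if (b == FRAME_MAGIC) { sum = 0; state = WAIT_LEN; }
        break;                              /* anything else is rubbish: discard */
    case WAIT_LEN:
        if (b == 0 || b > sizeof payload) { state = WAIT_MAGIC; break; }
        expected_len = b; got = 0; sum += b;
        state = WAIT_PAYLOAD;
        break;
    case WAIT_PAYLOAD:
        payload[got++] = b; sum += b;
        if (got == expected_len) state = WAIT_SUM;
        break;
    case WAIT_SUM:
        if (b == sum)                       /* checksum over length and payload */
            handle_packet(payload, expected_len);
        state = WAIT_MAGIC;                 /* resync either way */
        break;
    }
}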
EDIT
OK, one more option :) Just store everything you receive in a growing buffer for later use. That is basically what PC drivers do.
Real embedded UART drivers usually use a ring buffer. Bytes are stored in order, and the clients promise to read from the buffer before it's full.
A state machine can then process the message in multiple passes, with no need for a watchdog to tell it that reception is over.
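A typical shape for such a ring buffer, as a sketch (power-of-two size; the head index is written only by the RX interrupt and the tail only by the main loop):
#include <stdbool.h>
#include <stdint.h>

#define RB_SIZE 128u                       /* power of two, so the index mask works */

static volatile uint8_t  rb_data[RB_SIZE];
static volatile uint16_t rb_head;          /* written by the RX interrupt */
static volatile uint16_t rb_tail;          /* written by the main loop */

/* Called from the UART RX interrupt for each received byte. */
void rb_put(uint8_t b)
{
    uint16_t next = (rb_head + 1u) & (RB_SIZE - 1u);
    if (next != rb_tail) {                 /* drop the byte if the client broke its promise */
        rb_data[rb_head] = b;
        rb_head = next;
    }
}

/* Called from the main loop or state machine; returns false when empty. */
bool rb_get(uint8_t *out)
{
    if (rb_tail == rb_head)
        return false;
    *out = rb_data[rb_tail];
    rb_tail = (rb_tail + 1u) & (RB_SIZE - 1u);
    return true;
}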
Better to go for option 2: append an end-of-transmission character to the transmitted string.
But I suggest adding a start-of-transmission character as well, to validate that you are receiving an actual transmission.
A watchdog timer is used to reset the system when the device behaves unexpectedly. I think it is better to use a buffer that can store the amount of data your application requires.
