C: Is there a way to lower the speed of printf outputs?

As said in the heading: is there a way to lower the speed of printf outputs in C, as if you could watch each character being printed one at a time? (It does not have to be that slow, just so you understand what I mean.)
The reason I ask is this:
I need to program a small microcontroller, and every printf executed on it should be sent back to the COM1 port of the host. Everything works fine: I have already buffered my printf so that everything is stored in a char array of finite size, and this array is sent back to COM1 character by character. But because I don't know how many printfs there will be, and because of the limited memory of the μC, a size-limited array isn't the best solution. So my new attempt is to write directly to the transmit register of the μC, which can only hold one char at a time until it's sent. I do this via
setvbuf(stdout, LINFLEX_0.BDRL.B.DATA0, _IOFBF, 1);
where LINFLEX_0.BDRL.B.DATA0 represents the transmit register. What I think my problem is now: the printfs overwrite the register too fast, so it has no time to send the char stored in it before it gets changed again. When sending char by char from the array, I wait until a data-transmission flag is set:
// Write character to transmit buffer
LINFLEX_0.BDRL.B.DATA0 = buffer[j];
// Wait for data transmission completed flag
while (1 != LINFLEX_0.UARTSR.B.DTF) {}
// Clear DTF flag
LINFLEX_0.UARTSR.R = 0x0002;
So the idea is to slow down the speed at which printf processes each character, but feel free to comment if anyone has another idea.

The problem isn't with printf as such but with the underlying UART driver. That's what you'd have to tweak. If you are using CodeWarrior for MPC56, you can actually view the source code for all of it: quite horrible code. Messing with it will only end badly, and apparently it doesn't work well in the first place.
Using printf in this kind of embedded application is overall a very bad idea, since the function is unsuitable for pretty much any purpose, UART communication in particular. The presence of printf is actually an indicator that a project has gone terribly wrong, quite possibly because it has been hijacked by PC programmers. That's not really a programming problem, but a management one.
Technically, the only sane thing to do here is to toss all of that out of your project; that means everything remotely resembling stdio.h. Instead, write your own UART driver, based on the available Freescale examples, and make it work on bytes. This also enables you to add custom features such as an "echo" mode, where the MCU waits for a reply from the receiver. Or you could implement it with DMA, if you just want to write data to a buffer and then forget all about it.
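For illustration, a minimal polled transmit routine along those lines, reusing the LINFlex register names from the question's own code; a sketch to build on, not a complete driver (baud rate and mode initialisation are assumed to happen elsewhere):

void uart_putc(char c)
{
    LINFLEX_0.BDRL.B.DATA0 = c;          /* load the transmit register */
    while (LINFLEX_0.UARTSR.B.DTF != 1)  /* wait for data transmission completed */
        ;
    LINFLEX_0.UARTSR.R = 0x0002;         /* clear DTF (write 1 to clear) */
}

void uart_puts(const char *s)
{
    while (*s != '\0')
        uart_putc(*s++);                 /* byte by byte, no stdio involved */
}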

How to set the baud rate of non-serial ttys?

A bit of a strange question here, but I'm wondering if there's any standard way to set the baud rate of non-serial interfaces, e.g. an SSH session to a Linux machine?
There are lots of examples of setting the baud rate for serial ttys in Linux, such as this:
struct termios slow;
tcgetattr(STDOUT_FILENO, &slow);
cfsetispeed(&slow, B1200);
cfsetospeed(&slow, B1200);
tcsetattr(STDOUT_FILENO, TCSANOW, &slow);
However, I've never been able to get any of these to work when dealing with ttys like an SSH connection. I'm not sure if it's just not supposed to work in those contexts, or if something else or different needs to be done here.
The example compiles and runs just fine, but everything printed to the screen still appears at full speed. I've confirmed that none of the functions reports an error, tried different baud rates, tried STDIN and STDOUT, and set different flags from some of the examples floating around; none of that has any effect.
The stty command itself also seems to have no effect.
Is it possible to set the baud rate of an SSH session in the first place? Maybe it's not, and that's why it doesn't work.
If not, is there any way to achieve this, short of a brute-force solution like:
for each char in str:
    print char
    sleep 0.03
In case it wasn't obvious from the question: no, there's no super useful application of this. But sometimes it's useful to simulate different baud rates without actually using a physical serial connection, and it would be nice to handle the connection speed at the terminal layer or in the kernel somehow, rather than doing something dumb like printing everything out character by character. Can this be accomplished?
This must be accomplished entirely server-side (the connected SSH client should not have to do anything special), and the speed must be able to be set on demand just like with cfsetospeed.
One solution that does work is to use a pseudoterminal, relaying output from the PTY master to the actual file descriptor (e.g. a socket) one character at a time, with usleep in between each call to write.
However, this results in dozens to hundreds of calls to write() per second, which uses quite a bit of CPU. I doubt there's any way to have the kernel space out writes of a buffer gradually, so this kind of system call overhead might be unavoidable, but a more efficient way to do this would be great.
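For concreteness, the pacing loop looks roughly like this (a sketch; relay_paced is an illustrative name, and master_fd/client_fd are assumed to be set up already, e.g. via posix_openpt() and accept()):

#include <unistd.h>

/* Copy bytes from the PTY master to the client one at a time,
   sleeping between writes to approximate a baud rate; roughly
   10 bit times per byte (start + 8 data + stop). */
void relay_paced(int master_fd, int client_fd, unsigned baud)
{
    useconds_t per_byte_us = 10000000u / baud;
    char c;

    while (read(master_fd, &c, 1) == 1) {
        if (write(client_fd, &c, 1) != 1)
            break;
        usleep(per_byte_us);  /* one write() plus one sleep per byte */
    }
}

At 1200 baud this is 120 write() calls per second, which is exactly the overhead described above.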

Parsing AT commands & responses in a circular buffer

I have a PIC32MM connected via UART to the BT module RN4020 and am currently struggling with its configuration. It uses the AT command format COMMAND+CR+LF (or the corresponding ANSWER+CR+LF).
First, the module is put into Command Mode via a special signal line; after boot-up it sends "CMD\r\n" on the MCU's RX line. I am using the default nifty functions provided by the MPLAB IDE, which are based on circular buffers filled byte by byte during the RX interrupt routine, and I think I've already grasped the simple principle of how they work.
There is even a neat function ReadBuffer(target string, number of bytes to read) provided, which reads the given number of bytes from the RX circular buffer into a given char array (buffer, string) to work with.
My question is: what might be the best way to wait for and recognize a correct answer, and then, once it has arrived and been checked (meaning the module is ready for a command), send a command via the TX line? I need to signal the moment when the answer from the module has been fully read into the MCU; in other words, I need to know when to call the ReadBuffer function so that it gives me the correct string back, so I can be sure that when I call ReadBuffer(my_target_string, 5), my_target_string contains "CMD\r\n". I don't want to move the head and tail pointers wrongly or unnecessarily and mess up the structure. Is some kind of timeout a good way to proceed, or is it even better to create an indicator flag and set/clear it when CR and LF come in?
Thanks for any ideas or hints in advance.
After finding out that the answer has arrived, some sort of string compare might be enough to check whether it is what is expected, I think. (The tricky part might be the '\0' character, but I'll try to keep that one in mind.)
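For the record, the flag variant I had in mind would look something like this (a sketch; on_rx_byte is a hypothetical hook called from the existing RX interrupt for every received byte, not an MPLAB function):

#include <stdint.h>
#include <stdbool.h>

static volatile uint16_t line_len;    /* bytes received since the last complete answer */
static volatile bool     line_ready;  /* set by the ISR, cleared by the application */
static uint8_t           prev_byte;

void on_rx_byte(uint8_t c)
{
    line_len++;
    if (prev_byte == '\r' && c == '\n')
        line_ready = true;            /* CR+LF seen: answer is complete */
    prev_byte = c;
}

/* Main loop: when line_ready is set, the circular buffer holds exactly
   line_len bytes of the answer, so ReadBuffer(answer, line_len) is safe,
   and a string compare against "CMD\r\n" can follow (reset line_len and
   line_ready afterwards). */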

Glitchy audio output, no underruns

When using snd_pcm_writei() in non-blocking mode everything works perfectly for a while, but eventually the audio gets choppy. It sounds like the ring buffer pointers are getting out of sync (i.e. sometimes I can tell that the audio is playing out of order). How long it takes for the problem to start is hardware dependent: on a Gentoo box on real hardware it seldom happens, but on a buildroot system running on QEMU it happens after about 5 minutes. In both cases draining the PCM stream fixes the problem. I have verified that I'm writing the samples correctly by also writing them to a file and playing them with aplay.
Currently I'm setting avail_min to the period size (1024 frames) and calling snd_pcm_wait() before writing chunks of the period size, but I have tried a number of different variations (different chunk sizes, checking avail myself and using pthread_cond_timedwait() instead of snd_pcm_wait(), etc.). The only thing that works fine is using blocking mode, but I cannot do that.
You can see the current source code here: https://bitbucket.org/frodzdev/mediabox/src/5a6471316c7ae481b329e7e0d4af1bb68a32e71d/src/audio.c?at=staging&fileviewer=file-view-default (it needs a little cleanup since I'm trying all kinds of things). The code that does the actual IO starts at line 375.
Edit:
I think I have a solution, but I don't understand why it seems to work. It seems that it does not matter whether I'm using non-blocking mode; the problem is when I wait to make sure there's room in the buffer (whether through snd_pcm_wait(), pthread_cond_timedwait(), or usleep()).
The version that seems to work is here: https://bitbucket.org/frodzdev/mediabox/src/c3eb290087d9bbe0d5f37653a33a1ba88ef0628b/src/audio.c?fileviewer=file-view-default. I switched to blocking mode while still waiting before calling snd_pcm_writei() and it didn't make a difference. Then I added a call to snd_pcm_avail() before calling snd_pcm_status() in avbox_audiostream_gettime(). This function is called constantly by another thread to get the stream clock, and it only uses snd_pcm_status() to get the timestamps. Now it seems to work (at least the problem is a lot less likely to happen), but I don't understand exactly why. I understand that snd_pcm_avail() will synchronize the pointers with the kernel, but I don't really understand when it needs to be called, or the difference between snd_pcm_state() et al. and snd_pcm_status(). Does snd_pcm_status() also synchronize anything? It seems not, because sometimes snd_pcm_status_get_state() will return RUNNING when snd_pcm_avail() returns -EPIPE. The ALSA documentation is really vague. Perhaps understanding these things will help me understand my problem?
Now, when I say that it seems to be working, I mean that I cannot reproduce it on real hardware. It still happens on QEMU, though far less often. But considering that on the next commit I switched to blocking mode without waiting (which I've used in the past and never had a problem with on real hardware) and it still happens on QEMU, and also the fact that this is a common issue with QEMU, I'm starting to think that I may have fixed the issue on my end and now it's just a QEMU problem. Is there any way to determine whether the problem is a bug on my end that is easier to trigger on the emulator, or just an emulator problem?
Edit: I realize that I should fill the buffer before waiting, but at this point my concern is not to prevent underruns but to make sure that my code can handle them when they happen. Besides, the buffer fills up after a few iterations. I confirmed this by outputting avail, buffer_size, etc. before writing each packet; the numbers I get don't make perfect sense, showing an error of one or two periods about every 8th period. Also (and this is the main problem) I'm not detecting any underruns: the audio gets choppy but all writes succeed. In fact, if the problem starts happening and I trigger an underrun by overloading the CPU, it corrects itself when the PCM is reset.
In line 505 you're using time as an argument to malloc.
In line 568: weren't you playing audio? In that case you should wait only after you have written the frames. Let's think...
The audio device generates an interrupt when it finishes processing a period:
| period A | period B |
           ^          ^
          irq        irq
Before you start the PCM, the audio device doesn't generate any interrupts. Notice that you're waiting here even though you haven't started the PCM yet; you only start it when you call snd_pcm_writei().
When you wait for audio data you'll be woken only when the current period has been fully processed, and at your first wait the first period hasn't even been written. So in a comfortable situation you should write the whole buffer, wait for an interrupt, then write the just-processed period, and so on.
Initially, buffer is empty:
|            |            |
write():
|############|############|
wait():
..............
When we wake up:
|            |############|
write():
|############|############|
I think the problem is that you're writing audio just before it is due to be played, so sometimes it arrives in the buffer too late.
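In code, the write-the-whole-buffer-first order the diagram suggests might look like this sketch (assuming an already configured blocking-mode handle with 1024-frame periods and interleaved 16-bit stereo; playback_loop is an illustrative name and end-of-stream handling is omitted):

#include <alsa/asoundlib.h>
#include <stdint.h>

#define PERIOD_FRAMES 1024
#define PERIODS_PER_BUFFER 4

static void playback_loop(snd_pcm_t *pcm, const int16_t *samples)
{
    /* Prime the entire ring buffer before the first wait, so the
       device has work queued and the stream is running. */
    for (int i = 0; i < PERIODS_PER_BUFFER; i++) {
        snd_pcm_writei(pcm, samples, PERIOD_FRAMES);
        samples += PERIOD_FRAMES * 2;      /* 2 channels, interleaved */
    }

    /* From now on: sleep until a period has been consumed, then
       write exactly one period back. */
    for (;;) {
        if (snd_pcm_wait(pcm, 1000) < 0)
            break;                         /* timeout or error */
        snd_pcm_sframes_t n = snd_pcm_writei(pcm, samples, PERIOD_FRAMES);
        if (n == -EPIPE) {                 /* underrun: recover and continue */
            snd_pcm_prepare(pcm);
            continue;
        }
        if (n > 0)
            samples += n * 2;
    }
}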

Receive message of undefined size in UART in C

I'm writing my own drivers for the LPC2148, and a question came to mind.
How do I receive a message of unspecified size over UART?
The only two approaches that come to mind are: 1) configure a watchdog timer and end the reception when the time runs out; 2) require that every message sent to it ends with an end-of-message character.
The first choice seems better in my opinion, but I'd like to know if anybody has a better answer, and I know there must be one.
Thank you very much.
Just give the caller whatever bytes you have received so far. The UART driver shouldn't try to implement the application protocol; the application should do that.
That looks like the wrong use for a watchdog. I ended up with three solutions to this problem:
1. Use fixed-size packets and DMA, so you receive one packet per transaction. Apparently that is not possible in your case.
2. Receive the message char by char until the end-of-message character is received. Kind of error-prone, since the EOM character may appear in the data.
3. Use a fixed-size header before every packet. In the header, store the payload size and/or a message type ID.
The third approach is probably the best one, and also one of the most flexible. You may combine it with the first, i.e. use DMA to receive the header and then the data (in a second transaction, after the data size is known from the header).
One more thing to worry about is keeping the byte stream in sync. There may be rubbish lying in the UART input buffers which gets read as data, or you may receive only part of a packet after your MCU is powered up (i.e. the beginning of the packet had already been sent by that time). To avoid that, you can add magic bytes to your packet header, and probably a CRC.
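A hypothetical sketch of that framing, with a single magic byte standing in for a fuller header with CRC (frame_feed and on_frame are illustrative names):

#include <stdint.h>

#define FRAME_MAGIC 0xA5
#define MAX_PAYLOAD 64

enum state { WAIT_MAGIC, WAIT_LEN, WAIT_DATA };

static enum state st = WAIT_MAGIC;
static uint8_t payload[MAX_PAYLOAD];
static uint8_t len, got;

extern void on_frame(const uint8_t *data, uint8_t n);  /* user callback */

/* Feed every received byte into this, e.g. from the RX interrupt. */
void frame_feed(uint8_t c)
{
    switch (st) {
    case WAIT_MAGIC:
        if (c == FRAME_MAGIC)
            st = WAIT_LEN;                 /* found start of a frame */
        break;
    case WAIT_LEN:
        if (c == 0 || c > MAX_PAYLOAD) {
            st = WAIT_MAGIC;               /* implausible length: resync */
        } else {
            len = c;
            got = 0;
            st = WAIT_DATA;
        }
        break;
    case WAIT_DATA:
        payload[got++] = c;
        if (got == len) {
            on_frame(payload, len);        /* complete packet delivered */
            st = WAIT_MAGIC;
        }
        break;
    }
}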
EDIT
OK, one more option :) Just store everything you receive in a growing buffer for later use. That is basically what PC drivers do.
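A sketch of that growing-buffer option in standard C (realloc-doubling; fine on a PC host, rarely appropriate in an MCU's limited RAM):

#include <stdlib.h>
#include <stdint.h>

struct growbuf { uint8_t *data; size_t len, cap; };

/* Append one byte, doubling the capacity whenever it runs out. */
int growbuf_append(struct growbuf *b, uint8_t c)
{
    if (b->len == b->cap) {
        size_t ncap = b->cap ? b->cap * 2 : 128;
        uint8_t *p = realloc(b->data, ncap);
        if (p == NULL)
            return -1;   /* out of memory: caller decides what to do */
        b->data = p;
        b->cap = ncap;
    }
    b->data[b->len++] = c;
    return 0;
}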
Real embedded UART drivers usually use a ring buffer: bytes are stored in order, and the clients promise to read from the buffer before it's full.
A state machine can then process the message in multiple passes, with no need for a watchdog to tell it that reception is over.
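A minimal single-producer/single-consumer ring buffer of the kind meant here (a sketch; the power-of-two size keeps the index wrap a cheap mask):

#include <stdint.h>

#define RB_SIZE 64  /* must be a power of two */

static volatile uint8_t rb_data[RB_SIZE];
static volatile uint8_t rb_head;  /* written only by the RX interrupt */
static volatile uint8_t rb_tail;  /* written only by the main loop    */

/* Called from the UART RX interrupt; drops the byte if the buffer is full. */
void rb_put(uint8_t c)
{
    uint8_t next = (uint8_t)((rb_head + 1) & (RB_SIZE - 1));
    if (next != rb_tail) {
        rb_data[rb_head] = c;
        rb_head = next;
    }
}

/* Called from the main loop; returns 1 if a byte was read. */
int rb_get(uint8_t *c)
{
    if (rb_tail == rb_head)
        return 0;  /* empty */
    *c = rb_data[rb_tail];
    rb_tail = (uint8_t)((rb_tail + 1) & (RB_SIZE - 1));
    return 1;
}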
It is better to go for option 2: append an end-of-transmission character to the transmitted string.
I also suggest adding a start-of-transmission character, to validate that you are receiving an actual transmission.
A watchdog timer is used to reset the system when the device behaves unexpectedly. I think it is better to use a buffer that can store the amount of data your application requires.

How can I send a string serially from an 8051 only ONCE?

I am making an 8051 microcontroller communicate wirelessly with a computer. The microcontroller will send a string to its serial port (DB9) and the computer will receive this string and manipulate it.
My problem is that I do not know how to make the 8051 transmit the string just once. Since I need to manipulate the string at the PC end, it has to be received only one time. Currently, even though the C code sends the string once, my computer receives the same string continuously.
I assume this is because whatever is in SBUF is continuously transmitted. Is there any way I can send my string only once? Is there a way to empty SBUF?
I tried to use the RTS (Request to Send) pin (pin 7) on the DB9, because I read somewhere that if I negated the voltage on that pin it would stop the flow of data to the serial port. So I programmed my microcontroller to send the string and then set logic level 0 on an output pin connected to the DB9's RTS pin. However, that didn't work.
Does anyone have any suggestions? I'd really appreciate them.
EDIT
The software that I'm using on the PC is X-CTU for Xbee modules. This is the code on my microcontroller:
#include <reg51.h>

void SerTx(unsigned char);

void main(void)
{
    TMOD = 0x20;
    TH1 = 0xFD;
    SCON = 0x50;
    TR1 = 1;
    SerTx('O');
    SerTx('N');
    SerTx('L');
    SerTx('Y');
}

void SerTx(unsigned char x)
{
    SBUF = x;
    while (TI == 0);
    TI = 0;
}
Could someone please verify that it is in fact only sending the string once?
EDIT
Looks like Steve, brookesmoses and Neil hit the nail on the head when they said that it was what happens AFTER my main function that was causing the problem. I just tried the code Steve suggested (more specifically the for(;;); and defining SerTx outside of main) and it worked perfectly. The controller was probably being rebooted, hence the same code kept repeating itself.
Thanks so much for the help! :)
Can you confirm that the 8051 really is sending the data only once? One way to check would be to use a scope to see what is happening on the UART's TX pin.
What software are you using on the PC? I'd suggest using simple communications software like HyperTerminal or PuTTY. If they are showing the string being sent to the PC multiple times, then chances are the fault is in the software running on the 8051.
EDIT: To be honest, this sounds like the kind of debugging that engineers have to face on a regular basis, and so it's a good opportunity for you to practise good old-fashioned methodical problem-solving.
If I may be very blunt, I suggest you do the following:
Debug. Try things out, but don't guess. Experiment. Make small changes in your code and see what happens. Try everything you can think of. Search the web for more information.
If that doesn't produce a solution, then return here, and provide us with all the information we need. That includes relevant pieces of the code, full details of the hardware you're using, and information about what you tried in step 1.
EDIT: I don't have the rep to edit the question, so here's the code posted by the OP in the comment to her question:
#include<reg51.h>
void SerTx(unsigned char);
void main(void)
{
    TMOD = 0x20; TH1 = 0xFD; SCON = 0x50; TR1 = 1;
    SerTx('O'); SerTx('N'); SerTx('L'); SerTx('Y');
    void SerTx(unsigned char x)
    { SBUF = x; while(TI==0); TI = 0; }
}
As Neil and Brooksmoses mention in their answers, in an embedded system, the main function is never allowed to terminate. So you either need to put your code in an infinite loop (which may be what is inadvertently happening), or add an infinite loop at the end, so the program effectively halts.
Also, the function SerTx should be defined outside main. Declaring functions within other functions may be syntactically accepted by your compiler, but it keeps things simpler not to.
So try this (I've also added some comments in an attempt to make the code easier to understand):
#include<reg51.h>

void SerTx(unsigned char);

void main(void)
{
    /* Initialise (need to add more explanation as to what
       each line means, perhaps by replacing these "magic
       numbers" with some #defines) */
    TMOD = 0x20;
    TH1 = 0xFD;
    SCON = 0x50;
    TR1 = 1;

    /* Transmit data */
    SerTx('O'); SerTx('N'); SerTx('L'); SerTx('Y');

    /* Stay here forever */
    for (;;) {}
}

void SerTx(unsigned char x)
{
    /* Transmit byte */
    SBUF = x;

    /* Wait for byte to be transmitted */
    while (TI == 0) {}

    /* Clear transmit interrupt flag */
    TI = 0;
}
The code you posted has no loop in main(), so you need to determine what your compiler's C runtime does when main() returns after sending 'Y'. Given your problem, I imagine the compiler generates some code to do some cleanup and then restart the micro (maybe a hardware reset, maybe just restarting the C runtime). It looks like your program works exactly as you've written it, but you've ignored what happens before and after main() is called.
If you want your string sent once and only once, ever, then you need something like while(1) {} added after the last character is sent. But then your program is doing nothing; it will just execute an empty loop forever. A reset (such as a power cycle) is needed to start again and send the string.
Note that if your micro has a watchdog timer, it might intervene and force an unexpected reset. If this happens, your string will be sent once per watchdog reset (which could be something like once a second, depending on your hardware).
Also, having SerTx() defined nested inside main() is probably not what you want.
It's hard to say what the problem is without seeing any of the 8051 code. For example, a logic error on that side could lead to the data being sent multiple times, or the 8051 software might be waiting for an ACK which is never received, etc.
Normally on an 8051, code has to explicitly send each character, but I assume this is taken care of for you by the C run-time.
Use of the RTS/CTS (Request To Send/Clear To Send) lines is for flow control (i.e. to prevent buffer overruns; the buffers are usually pretty small on these microcontrollers), not for stopping transmission altogether.
Echoing Neil's answer (in a reply, since I don't yet have the rep to comment): in a typical microcontroller situation without an OS, it's not immediately clear what the exit() function that gets implicitly called at the end of main() should do. More accurately, it can't do the usual "end the program and return to the OS", because there's no OS to return to.
Moreover, in a real application, you almost never want the program to just stop, unless you turn off the system. So one thing that the exit() implementation should definitely not do is take up a lot of code space.
In some systems I've worked on, exit() is not actually implemented at all: if you're not going to use it, don't even waste a byte on it! The result is that when the execution path gets to the end of main(), the chip just wanders off into la-la land, executing whatever happens to be in the next bit of memory, and typically quickly ends up either stuck in a loop or faulting on an illegal opcode. And the usual result of an illegal-opcode fault is rebooting the chip.
That seems like a plausible theory for what's happening here.
