The break and error indicators light up in RealTerm while communicating over RS-232. Sometimes the CTS indicator lights up as well.
Because of this, the data prints as junk for a while; it corrects itself after a few resets of RealTerm.
[Screenshot showing the error indicators]
What does BREAK mean really? What happens when there's a break?
A break condition occurs when the transmitter is holding the data line at logical 0 for too long, i.e. longer than the time needed for transmitting a start bit and the (usually 8) data bits. Possible causes:
The transmitter can send a break deliberately, as an out-of-band signal, e.g. to signal the beginning of a data packet, as in the LIN protocol (a sketch follows this list).
It can occur when the transmitter is sending at a lower speed than the receiver expects, perhaps because its clock is not properly initialized.
Of course it can be caused by a noisy or otherwise bad connection.
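On POSIX systems, for example, a deliberate break can be generated with tcsendbreak(). A minimal sketch, assuming fd is an already-open serial port descriptor:

#include <termios.h>

/* Send a break: the line is held at logical 0 for longer than one
 * character time, which the receiver reports as a break condition. */
void send_break(int fd)
{
    tcsendbreak(fd, 0);   /* duration 0 is implementation-defined; at least 0.25 s on Linux */
}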
I'll skirt round the long and tedious story of how we got where we are, but the situation is this:
We are using half-duplex RS485 serial comms and (by necessity) driving the TX/RX flag "manually" via GPIO pin toggling. In order to make this work we're using tcdrain() to wait until the Tx buffer is empty before flipping back to Rx mode.
The problem is that tcdrain() seems to wait (block) for quite a while after the last character has been transmitted, which causes us a bit of a bottleneck.
I've seen suggestions that the default tcdrain() code just estimates the time needed to send a full (maximum-size) serial buffer at the current baud rate, sleep()s for that period, and then returns - and I could easily believe that.
So, can anyone suggest ways to either:
Speed up tcdrain(), perhaps by shortening the serial buffer
Modify tcdrain() (or related code/parameters) to actually wait for the last character to be sent by the hardware, or wait for a period more closely related to the buffer contents
I've grepped our (embedded) kernel (2.6.x) code and can't see any references other than a single header file (termios.h).
Edit to add: As per this post, if for example we could reduce the serial Tx buffer to 1 byte using an ioctl, I assume the write() call would/could block while the chars were written and then return, which would let us avoid relying on tcdrain() and just use a very short usleep() before toggling the Tx/Rx pin. I will experiment when I get a moment; in the meantime any suggestions/examples are welcome.
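For reference, the kind of thing I'm imagining (untested; some Linux serial drivers implement the TIOCSERGETLSR ioctl, which exposes the transmitter-empty bit of the line status register, but not all do):

#include <sys/ioctl.h>
#include <unistd.h>

/* Wait until the UART's FIFO *and* shift register are empty, i.e. the last
 * stop bit has actually left the wire. Much finer-grained than tcdrain().
 * Check the ioctl return value: not every driver supports TIOCSERGETLSR. */
static void wait_tx_empty(int fd)
{
    unsigned int lsr = 0;
    while (ioctl(fd, TIOCSERGETLSR, &lsr) == 0 && !(lsr & TIOCSER_TEMT))
        usleep(100);   /* poll every 100 microseconds */
}

/* usage: write(fd, buf, len); wait_tx_empty(fd); then flip the GPIO back to Rx */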
I'm trying to set up serial communication between the RPi and an FPGA. However, there is an issue when using the standard C library open() to initialize the serial interface. I'm using a scope to monitor what is sent and received on the RX and TX lines: a call to open() causes the TX line of the RPi to go low for the length of one bit. I do not see this behavior with other computers/Linux PCs. The problem is that the FPGA assumes a valid transmission, since it interprets the low pulse as a start bit, but it isn't one.
I checked with minicom installed on the RPi. Same thing: starting minicom causes the TX line to send one bit. Once minicom has started, the communication runs as expected and all bytes have the correct frame size. Is there any way to suppress the TX line going low upon the open() call that initializes the serial communication? Is this expected behavior?
This is a super far-fetched hunch, but this code seems a bit suspicious, from the pl011_startup() function in the PL011 serial port driver:
/*
* Provoke TX FIFO interrupt into asserting.
*/
It seems as if it's twiddling the TX line when starting up the port, which would explain the pulse you're seeing. More investigation would surely be needed before concluding this is what happens, of course.
So, I guess my "answer" boils down to: that sounds weird, perhaps it's something with the driver?
Of course, one way of working around this is to apply some care in the FPGA end, assuming you have more control over it. "Proper" framing would take care of this, and make it clear that the spurious send can be discarded.
UPDATE: I meant that if "proper" messages were always framed by some sequence of bytes, the FPGA might be able to discard invalid ("unframed") data anyway, and thus become immune to the random pulse. For instance, messages could be defined to always start with SOH (start of heading) or STX (start of text) symbols (bytes with the values 0x01 and 0x02, respectively).
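A sketch of what that filtering could look like on the receiving side (the names here are illustrative, not from any particular FPGA toolchain; the same logic works in a soft core or a microcontroller):

#include <stdint.h>

#define SOH 0x01   /* assumed frame-start marker */

/* Bytes are ignored until a frame-start symbol arrives, so a spurious
 * start-bit glitch that decodes as a byte is simply dropped. */
enum rx_state { RX_IDLE, RX_IN_FRAME };
static enum rx_state state = RX_IDLE;

void rx_byte(uint8_t b, void (*handle_payload)(uint8_t))
{
    if (state == RX_IDLE) {
        if (b == SOH)
            state = RX_IN_FRAME;   /* anything else is unframed noise */
    } else {
        handle_payload(b);         /* end-of-frame handling omitted */
    }
}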
I'm working with a PIC18F and am trying to send data via HyperTerminal. When I send data at a slow rate, by pressing one key every half second or so, it receives the data and echoes it correctly, but when I start pressing keys at a faster rate the MCU locks up. Not sure what causes this.
while (index < length) {
    while (PIR1bits.RCIF == 0);   // wait until a char is received
    sendData(str2, 9);            // confirm reception
    Delay1KTCYx(5);               // delay; without it, it messes up
    rxData[index] = RCREG;        // store into the char array
    index++;
}
The baud rate is 2400 on both the PIC and HyperTerminal.
This is our receive loop. sendData is just debug code that sends "received"; it's how we know when it has frozen.
It does not freeze after the same number of loop iterations every time; it depends solely on how fast we input data.
(I have worked on MCUs but haven't dealt with PICs, so I'll try to help with the common problems.)
You do not check any receiver error flags. The receiver may lock up in an overrun-error state and stop receiving until you clear the overrun flag. Add checks for the error conditions and resolve them according to the PIC documentation (see the sketch after this list).
Good practice is to read the received byte as early as possible once receive-complete is indicated, so try to move rxData[index] = RCREG; immediately after while(PIR1bits.RCIF==0);. This lowers the chance of an overrun.
You haven't shown the code for sendData. It may be missing checks for the TX-ready state and error conditions, so it may also lock up.
An unexplained delay indicates that something is already going wrong somewhere. Try to remove it and then debug the code.
You should test your receive and transmit paths separately. First, check the transmitter: try to output a long line of text through the UART without any receiving. (Say, write a "Hello world!" program. :))
Then check the receiver code alone: remove transmission from the program, connect an LED (voltmeter, oscilloscope, whatever you have) to a free GPIO pin, and make the code toggle its logic level every time a byte is received. Since that takes only a few clock ticks, it should not interfere with receiving or cause a lockup.
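Putting the first two points together, a sketch of the receive loop, reusing the names from your snippet (untested; register names follow the usual PIC18F USART conventions, so check your datasheet):

while (index < length) {
    while (PIR1bits.RCIF == 0);          // wait for a received byte

    if (RCSTAbits.OERR) {                // overrun: the receiver has stalled
        RCSTAbits.CREN = 0;              // toggling CREN clears the error
        RCSTAbits.CREN = 1;
        continue;                        // the overrun data is lost
    }

    rxData[index] = RCREG;               // read the byte immediately
    index++;
}
sendData(str2, 9);                       // acknowledge once, after the loop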
Maybe when you send two characters while it is busy sending "received", one of them is discarded and you never reach your length?
On most microcontrollers, a UART receiver overrun will cause the newly-received byte to be discarded and a flag to be set, but the receiver will continue to operate normally. On the PIC, a receiver overrun will cause the UART to die until the CREN bit is cleared and re-set.
I am sending data from a Linux application through a serial port to an embedded device.
In the current implementation a byte circular buffer is used in the firmware (nothing but an array with a read pointer and a write pointer).
As the bytes come in, they are written to the circular buffer.
Now the PC application appears to be sending the data too fast for the firmware to handle. Bytes are missed, resulting in the firmware returning WRONG_INPUT too many times.
I think the baud rate (115200) is not the issue. A more efficient data structure on the firmware side might help. Any suggestions on the choice of data structure?
A circular buffer is the best answer. It is the easiest way to model a hardware FIFO in pure software.
The real issue is likely to be either the way you are collecting bytes from the UART to put in the buffer, or overflow of that buffer.
At 115200 baud with the usual 1 start bit, 1 stop bit and 8 data bits, you can see as many as 11520 bytes per second arrive at that port. That gives you an average of just about 86.8 µs per byte to work with. In a PC, that will seem like a lot of time, but in a small microprocessor, it might not be all that many total instructions or in some cases very many I/O register accesses. If you overfill your buffer because bytes are arriving on average faster than you can consume them, then you will have errors.
Some general advice:
Don't do polled I/O.
Do use a Rx Ready interrupt.
Enable the receive FIFO, if available.
Empty the FIFO completely in the interrupt handler.
Make the ring buffer large enough.
Consider flow control.
Sizing your ring buffer large enough to hold a complete message is important. If your protocol has known limits on the message size, then you can use the higher levels of your protocol to do flow control and survive without the pains of getting XON/XOFF flow to work right in all of the edge cases, or RTS/CTS to work as expected at both ends of the wire, which can be nearly as hairy.
If you can't make the ring buffer that large, then you will need some kind of flow control.
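To make the Rx Ready interrupt and ring-buffer advice concrete, here is a minimal sketch; uart_read_byte() and the ISR hook stand in for whatever your part's actual UART API looks like:

#include <stdint.h>

#define RB_SIZE 256                              /* must be a power of two */

extern uint8_t uart_read_byte(void);             /* placeholder for the RX register read */

static volatile uint8_t  rb_data[RB_SIZE];
static volatile uint16_t rb_head, rb_tail;       /* ISR advances head, main loop advances tail */

void uart_rx_isr(void)                           /* hook this to the Rx Ready interrupt */
{
    uint8_t b = uart_read_byte();
    uint16_t next = (rb_head + 1) & (RB_SIZE - 1);
    if (next != rb_tail) {                       /* on overflow the byte is dropped */
        rb_data[rb_head] = b;
        rb_head = next;
    }
}

int rb_pop(uint8_t *out)                         /* returns 0 when the buffer is empty */
{
    if (rb_tail == rb_head)
        return 0;
    *out = rb_data[rb_tail];
    rb_tail = (rb_tail + 1) & (RB_SIZE - 1);
    return 1;
}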
There is nothing better than a circular buffer.
You could use a slower baud rate or speed up the application in the firmware so that it can handle data coming at full speed.
If the output of the PC is in bursts it may help to make the buffer big enough to handle one burst.
The last option is to implement some form of flow control.
What do you mean by embedded device? Most current DSPs and processors can easily handle this kind of load. The problem is not the circular buffer, but how you collect bytes from the serial port.
Does your UART have a hardware FIFO? If yes, you should enable it. If you take an interrupt per byte, you can quickly get into trouble, especially if you are working with an OS or with virtual memory, where the IRQ cost can be quite high.
If your receiving firmware is very simple (no multitasking) and you don't have a hardware FIFO, polled mode can be a better solution than interrupt-driven reception, because then your processor is doing nothing but UART data reception, and you have no interrupt overhead.
Another problem might be the transfer protocol. For example, if you have a long packet of data that you have to checksum, and you compute the whole checksum at the end of the packet, then all of the packet's processing time lands at its end, and that is why you may miss the beginning of the next packet (see the sketch after this list).
So the circular buffer is fine, and you have two ways to improve:
- the way you interact with the hardware
- the protocol (packet length, acknowledgment, etc.)
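On the checksum point: spreading the work across each byte's arrival instead of doing it all at the packet boundary might look like this (a simple additive checksum, purely illustrative):

#include <stdint.h>

static uint16_t running_sum;

/* Called for every payload byte as it arrives, so almost no work
 * remains to be done when the last byte of the packet comes in. */
void checksum_update(uint8_t b)
{
    running_sum += b;
}

int checksum_ok(uint16_t expected)   /* compare at the end of the packet */
{
    return running_sum == expected;
}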
Before trying to solve the problem, first you need to establish what the problem really is. Otherwise you might waste time trying to fix something that isn't actually broken.
Without knowing more about your set-up it's hard to give more specific advice. But you should investigate further to establish what exactly the hardware and software is currently doing when the bytes come in, and then what is the weak point where they're going missing.
A circular buffer with interrupt-driven I/O will work on the smallest and slowest of embedded targets.
First try it at the lowest baud rate and only then try at high speeds.
Using a circular buffer in conjunction with an IRQ is an excellent suggestion. If your processor generates an interrupt each time a byte is received, take that byte and store it in the buffer. How you decide to empty that buffer depends on whether you are processing a stream of data or data packets. If you are processing a stream, simply have your background process remove the bytes from the buffer and process them first-in, first-out. If you are processing packets, then just keep filling the buffer until you have a complete packet. I've used the packet method successfully many times in the past. I would implement some type of flow control as well, to signal to the PC if something went wrong like a full buffer, or, if packet-processing time is long, to indicate to the PC when it is ready for the next packet.
You could implement something like an IP datagram, which contains a data length, an ID, and a checksum.
Edit:
Then you could hard-code some fixed length for the packets, for example 1024 bytes or whatever makes sense for the device. The PC side would then check whether the device's queue is full every time it writes a packet. The firmware side would run the checksum to see whether all the data is valid, and read up to the data length.
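As a sketch, such a header might look like this (the field widths are assumptions, not a defined protocol):

#include <stdint.h>

typedef struct {
    uint8_t  id;         /* packet identifier / sequence number */
    uint16_t length;     /* payload length in bytes, e.g. up to 1024 */
    uint16_t checksum;   /* checksum or CRC over the payload */
    /* payload of `length` bytes follows */
} packet_header_t;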
In all honesty, I think the answer is "no;" however, I want to get a second opinion. Basically, I need one microcontroller device to send a steady signal to another one, but the communication between them uses RS-232. So I think I have to create/update the communication messages to get it to do what I want.
What do you think?
You should be able to set something like DTR (Data Terminal Ready), pin 20, or DSR (Data Set Ready), pin 6, high and keep it there as your steady-state signal. This is how modems/terminals detect that there is a device on the other end that is ready to communicate. It all depends on what level of access you have to the hardware through your driver.
[EDIT] This doesn't involve sending data, although you could still do that using TX/RX, pins 2 & 3.
RS-232 Reference on wikipedia
You mean a fixed voltage, not a square wave? (Continuously sending the letter 'U', 0x55, gives an alternating bit pattern.) What about a break command (if you want to call it a command)?
Certainly you can use one of the control lines if that helps...Or are you specifically looking for something out of the TX?
If the question is "can I alter the DC state of the TX line?", then the answer is that many UARTs (including the ones in PCs) can be asked to create a 'break' condition, which is the opposite of the normal idle condition of the line.
So you can turn 'break' on and off and toggle the line like that.
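On Linux, for instance, the break state can be held indefinitely with a pair of ioctls (a sketch; fd is an open serial port descriptor):

#include <sys/ioctl.h>

void tx_space(int fd) { ioctl(fd, TIOCSBRK, 0); }  /* break on: TX held at space */
void tx_mark(int fd)  { ioctl(fd, TIOCCBRK, 0); }  /* break off: TX idles at mark */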
It might be possible to do something like that, provided you don't mind a burst-like interface. One micro could transmit a byte and the other could do something to that byte and send it back as a response.
If you can control both ends of the line, you may be able to turn the RS-232 TX and RX lines into regular logic lines to convey that information.
In most situations, however, each end periodically sends a byte of status information that carries 8 digital flags, which gives much more status information.
A timer on the receiving end is reset every time a message is received; if the timer times out, the message has taken too long and you can act on the missing status message.
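A sketch of that watchdog pattern (now_ms() is a hypothetical millisecond tick source, and the timeout value is an assumption):

#include <stdint.h>

#define STATUS_TIMEOUT_MS 500u        /* assumed: a few status periods */

extern uint32_t now_ms(void);         /* hypothetical millisecond tick */

static uint32_t deadline;

void on_status_byte(uint8_t flags)    /* call whenever a status byte arrives */
{
    deadline = now_ms() + STATUS_TIMEOUT_MS;   /* reset the timer */
    (void)flags;                               /* interpret the 8 flags here */
}

void poll_link(void)                  /* call periodically from the main loop */
{
    if ((int32_t)(now_ms() - deadline) > 0) {
        /* status is overdue: act on the missing status message */
    }
}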
As others have pointed out, if you're using hardware flow control you have some status lines available as well, though in many cases those lines aren't implemented so that may not be an option.
-Adam
A steady signal could mean:
steady burst of characters: keep the send buffer full
line held high or low: send nothing, or send continual breaks
I think it depends to a large extent on the UART you are using and the level of access you have to it in software. If you check the data sheet, there are often ways of controlling most of the pins directly for testing purposes, but you will need to go at it from a pretty low level.
At a higher level, tvanfosson's answer is pretty much the way I'd do it.
While the first answer is correct, it may not be possible to use this technique (using DTR or DSR) on many microcontrollers, as they may not have those signals: many microcontrollers have just the basic RX/TX lines, and you would often have to use other I/O ports if you wanted extra control/status lines. However, all is not lost: many RS-232 controllers allow you to set the TX line to 'mark' or 'space' (i.e. set the TX line to logic high or low). This would give you your steady-state signal. The RX line on the receiver can be checked to see whether it is at the mark or space level.