How to switch between data stream and control messages on a (UART) bus in C

This question is about firmware for an IR transmitter with 8 outgoing channels. It is a microcontroller board with 8 IR LEDs. The goal is to have a transmitter capable of sending streams of data over one or multiple channels.
The data is delivered to the board over UART and then transmitted over one or multiple channels.
My transmitter circuit is faster than the UART, so no flow control is required.
Currently I have the channel fixed in the firmware, so each byte from the UART is transmitted directly. This means there is no way to select the desired channel over UART, which is what I want to add.
Of course, the easiest solution is to pair each data byte with a control byte in which each bit represents one channel. This has the advantage that each byte can be routed to one or more channels, but of course it doubles the overhead.
Because of the streaming nature of the transmission, I am trying to avoid a length field in my transmitter's protocol.
My research work is in the network stack on top of this.
My question is whether there are schemes or good practices to solve this. I expect similar problems arise in robotics, where sensor data streams cross control signals all the time, but I could not find a simple and elegant solution.

I generally use the SLIP transmission protocol in my projects. It is very fast, easy to implement, and works very well for framing ANY packet you want.
http://www.tcpipguide.com/free/t_SerialLineInternetProtocolSLIP.htm
https://www.google.com/webhp?sourceid=chrome-instant&ion=1&espv=2&ie=UTF-8#q=slip%20protocol
Basically, you feed each byte to be transmitted or received into a function that uses 0xC0 as both a header and a footer. Since 0xC0 is a valid byte in the data you might be sending, a few transformations are applied to data bytes of 0xC0 to GUARANTEE that 0xC0 only ever appears as a header or footer.
Then, using the reverse algorithm on the other side, you can frame the incoming data by looking for the two 0xC0 bytes in the right order. This signifies a full packet that can be buffered up and flagged for main CPU processing.
SLIP guarantees correct framing of the packet.
Then it is up to you to define your own packet format for the data field inside the SLIP frame, once SLIP has framed the packet correctly.
I often do the following...
<0xC0> <opcode> <data ...> <0xC0>
Use different opcodes for your different channels. You can easily add another layer with Acknowledgements if you want.
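For illustration, here is a minimal sketch of the byte-stuffing this describes, using the standard SLIP constants from RFC 1055. uart_send_byte() is a placeholder for your platform's TX routine, not a real API:

    /* SLIP framing per RFC 1055: 0xC0 delimits frames; data bytes equal to
     * 0xC0 or 0xDB are escaped so 0xC0 only ever appears as a delimiter. */
    #include <stdint.h>
    #include <stddef.h>

    #define SLIP_END     0xC0
    #define SLIP_ESC     0xDB
    #define SLIP_ESC_END 0xDC
    #define SLIP_ESC_ESC 0xDD

    extern void uart_send_byte(uint8_t b);   /* placeholder TX routine */

    void slip_send_packet(const uint8_t *data, size_t len)
    {
        uart_send_byte(SLIP_END);            /* flush any line noise */
        for (size_t i = 0; i < len; i++) {
            switch (data[i]) {
            case SLIP_END:                   /* 0xC0 -> 0xDB 0xDC */
                uart_send_byte(SLIP_ESC);
                uart_send_byte(SLIP_ESC_END);
                break;
            case SLIP_ESC:                   /* 0xDB -> 0xDB 0xDD */
                uart_send_byte(SLIP_ESC);
                uart_send_byte(SLIP_ESC_ESC);
                break;
            default:
                uart_send_byte(data[i]);
            }
        }
        uart_send_byte(SLIP_END);            /* closing delimiter */
    }

The receiver simply inverts the two escape sequences and treats every 0xC0 as a frame boundary.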

Seems like the only sensible solution is to create a carrier protocol for the UART data. You might want this anyway, since UART has poor immunity to EMI. You can make it more reliable by including a CRC in the protocol. (Please note that the built-in error handling of UART through start/stop/parity is very naive and has been outdated since the mid-70s or so.)
Typically these protocols go like <sync token> <header> <data> <checksum>, where the header may contain a data length and the data can then be of variable length.
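As a sketch of what such a frame could look like in C (the field names, widths, and the channel bitmask are assumptions tailored to the 8-channel question, not a standard):

    /* Illustrative layout of <sync token> <header> <data> <checksum>. */
    #include <stdint.h>

    #define FRAME_SYNC 0xA5u     /* arbitrary sync token, resync point */

    struct frame_header {
        uint8_t  sync;           /* always FRAME_SYNC */
        uint8_t  channels;       /* bitmask: one bit per IR channel */
        uint16_t length;         /* number of data bytes that follow */
    };
    /* wire format: header, then `length` data bytes, then a CRC-16
     * computed over the header (excluding sync) and the data */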
Probably not an option at this point, but SPI would have been a much more pleasant interface to work with for this. You could then have one shift register per 8 IR diodes and select the channel via the SPI slave select through some MUX/DEMUX circuit. Everything would work synchronously and no carrier protocol would be needed. It would also completely remove the need for an MCU between the data sender and the diodes.

Related

Linux UART on imx8: how to quickly detect frame end?

I have an imx8 module running Linux on my PCB and I would like some tips or pointers on how to modify the UART driver to allow me to detect the end of a frame very quickly (in less than 2ms) from my user-space C application. The UART frame does not have any specific ending character or frame length. The standard VTIME of 100ms is much too long.
I am reading from a SIM card; I have no control over the data, and no control over the size or content of the data. I just need to detect the end of the frame very quickly. The frame could be 3 bytes or 500. The SIM card reacts to data that it receives: typically I send it a couple of bytes, and then it responds a couple of ms later with an uninterrupted string of bytes of unknown length. I am using an iMX8MP.
I thought about using the IDLE interrupt to detect the frame end. Turn it on when any byte is received and off once the idle interrupt fires. How can I propagate this signal back to user space? Or is there an existing method to do this?
Waiting for an "idle" is a poor way to do this.
Use termios to set raw mode with VTIME of 0 and VMIN of 1. This will allow the userspace app to get control as soon as a single byte arrives. See:
How to read serial with interrupt serial?
How do I use termios.h to configure a serial port to pass raw bytes?
How to open a tty device in noncanonical mode on Linux using .NET Core
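A minimal sketch of that termios setup, assuming a Linux serial device (the device path and the 115200 baud rate here are placeholders, not from the question):

    /* Open a tty in raw mode with VMIN=1/VTIME=0 so read() returns as
     * soon as a single byte arrives. */
    #include <fcntl.h>
    #include <termios.h>
    #include <unistd.h>

    int open_uart_raw(const char *path)      /* e.g. "/dev/ttymxc0" */
    {
        int fd = open(path, O_RDWR | O_NOCTTY);
        if (fd < 0)
            return -1;

        struct termios tio;
        if (tcgetattr(fd, &tio) < 0) {
            close(fd);
            return -1;
        }
        cfmakeraw(&tio);                     /* no canonical mode, no echo */
        tio.c_cc[VMIN]  = 1;                 /* block until >= 1 byte */
        tio.c_cc[VTIME] = 0;                 /* no inter-byte timer */
        cfsetispeed(&tio, B115200);
        cfsetospeed(&tio, B115200);
        if (tcsetattr(fd, TCSANOW, &tio) < 0) {
            close(fd);
            return -1;
        }
        return fd;
    }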
But, you need a "protocol" of sorts, so you can know how much to read to get a complete packet. You prefix all data with a struct that has (e.g.) a type and a payload length. Then, you send "payload length" bytes. The receiver gets/reads that fixed-length struct and then reads the payload, which is "payload length" bytes long. This struct is always sent (in both directions).
See my answer: thread function doesn't terminate until Enter is pressed for a working example.
What you have/need is similar to doing socket programming using a stream socket except that the lower level is the UART rather than an actual socket.
My example code uses sockets, but if you change the low level to open your uart in raw mode (as above), it will be very similar.
UPDATE:
How quickly after the frame finishes would I have the data at the application level? When I try to read my random-length frames currently, reading in 512-byte chunks, it will sometimes read the whole frame in one go; other times it reads the frame broken up into chunks. – Engo
In my link, in the last code block, there is an xrecv function. It shows how to read partial data that comes in chunks.
That is what you'll need to do.
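The linked xrecv isn't reproduced here, but a minimal sketch of the same idea (loop until the expected byte count has arrived, since read() on a raw tty may return any partial chunk) looks like this:

    #include <sys/types.h>
    #include <unistd.h>

    /* Read exactly `want` bytes, accumulating partial chunks. */
    ssize_t read_full(int fd, void *buf, size_t want)
    {
        size_t got = 0;
        while (got < want) {
            ssize_t n = read(fd, (char *)buf + got, want - got);
            if (n <= 0)
                return n;          /* error or EOF: caller decides */
            got += (size_t)n;
        }
        return (ssize_t)got;
    }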
Things missing from your post:
You didn't post which imx8 board/configuration you have. And, which SIM card you have (the protocols are card specific).
And, you didn't post your other code [or any code] that drives the device and illustrates the problem.
How much time must pass without receiving a byte before the [uart] device is "idle"? That is, (e.g.) the device sends 100 bytes and is then finished. How many byte times does one wait before considering the device to be "idle"?
What speed is the UART running at?
A thorough description of the device, its capabilities, and how you intend to use it.
A UART device doesn't have an "idle" interrupt. From some imx8 docs, the DMA device may have an "idle" interrupt, and the UART can be driven by the DMA controller.
But I looked at some of the Linux kernel imx8 device drivers, and, AFAICT, the idle interrupt isn't supported.
I need to read everything in one go and get this data within a few hundred microseconds.
Based on the scheduling granularity, it may not be possible to guarantee that a process runs in a given amount of time.
It is possible to help this a bit. You can change the process to use the R/T scheduler (e.g. SCHED_FIFO). Also, you can use sched_setaffinity to lock the process to a given CPU core. There is a corresponding call to lock IRQ interrupts to a given CPU core.
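A sketch of those two calls; the priority value and core number below are arbitrary choices, and both calls need root or CAP_SYS_NICE:

    /* Switch the calling process to the SCHED_FIFO real-time scheduler
     * and pin it to one CPU core to reduce wakeup latency. */
    #define _GNU_SOURCE
    #include <sched.h>

    int make_realtime(void)
    {
        struct sched_param sp = { .sched_priority = 50 };
        if (sched_setscheduler(0, SCHED_FIFO, &sp) < 0)
            return -1;

        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(2, &set);          /* lock to core 2 (arbitrary) */
        return sched_setaffinity(0, sizeof(set), &set);
    }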
I assume that the SIM card acts like a [passive] device (like a disk). That is, you send it a command, and it sends back a response or does a transfer.
Based on what command you give it, you should know how many bytes it will send back. Or, it should tell you how many optional bytes it will send (similar to the struct in my link).
The method you've described (e.g.) wait for idle, then "race" to get/process the data [for which you don't know the length] is fraught with problems.
Even if you could get it to work, it will be unreliable. At some point, system activity will be just high enough to delay wakeup of your process and you'll miss the window.
If you're reading data, why must you process the data within a fixed period of time (e.g. 100 us)? What happens if you don't? Does the device catch fire?
Without more specific information, there are probably other ways to do this.
I've programmed such systems before that relied on data races. They were unreliable. Either missing data. Or, for some motor control applications, device lockup. The remedy was to redesign things so that there was some positive/definitive way to communicate that was tolerant of delays.
Otherwise, I think you've "fallen in love" with "idle interrupt" idea, making this an XY problem: https://meta.stackexchange.com/questions/66377/what-is-the-xy-problem

How to design a test case to validate throttling capacity of a packet decoder?

I am implementing a packet decoder on a microcontroller. The packets are 32 bytes each, received through a UART every 10 milliseconds. The UART ISR (Interrupt Service Routine) keeps the received bytes in a ring buffer, and a thread scheduled every 7.5ms decodes the packets from the ring buffer. Instrumentation routines report the number of times the ring buffer was full, the error count after decoding, and the dropped-bytes count. The microcontroller can send these packets back to the PC running my test case through a different UART.
How do I design a test case to check whether the system meets my performance requirements? These are the cases I should take care of:
The transmitter clock may run slightly faster (Sending a packet every 8ms, rather than the nominal 10ms).
The channel may introduce errors in the data bits. There are checksum fields in the packet to cope with that. How do I simulate the channel errors?
The test case should be maintainable and extendable.
I already have a simulator through which I tested the decoder (implemented in micro-controller) for functional correctness. This simulator sends packets at programmable intervals, and the value of data fields can be changed through a UI. How can this simulator be modified to do this?
Are there standard practices/test cases for such throttling tests? Are there edge cases I am missing? I need to make sure that the ring buffer has enough space to handle the higher packet rates sent by the transmitter.
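On the channel-error part: one common approach (a sketch, not from the question) is to let the simulator corrupt outgoing packets with a configurable bit error rate, so the decoder's checksum path is exercised at a known, reproducible rate:

    /* Flip each bit of the outgoing packet with probability `ber`.
     * Seed rand() with a fixed value for reproducible test runs. */
    #include <stdint.h>
    #include <stdlib.h>

    void inject_bit_errors(uint8_t *pkt, size_t len, double ber)
    {
        for (size_t i = 0; i < len; i++)
            for (int bit = 0; bit < 8; bit++)
                if ((double)rand() / RAND_MAX < ber)
                    pkt[i] ^= (uint8_t)(1u << bit);
    }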

Send data from one microcontroller to another

I'm making a project that requires sending data from one microcontroller to another wirelessly (I'm going to use 433MHz or 2.4GHz RF modules; I haven't decided yet). Specifically, I'm making a joystick which controls 4 DC motors. So my question is: when I write the code, should I put the command to accelerate motor 'x' into the receiver's microcontroller (the board that controls the motors) or into the transmitter's microcontroller (the joystick's board)? For example, if I push the joystick to the left to accelerate motors 3 and 4, where would I write this code?
I'm doing this project with Arduino (ATmega328 with the Arduino bootloader).
Well, this is most definitely a "primarily opinion based" question and may very well get closed... but...
This is your design; you need to design it. There isn't a wrong answer in this case. You can have the joystick board simply pass the joystick switch state or analog state, depending on what kind of joystick it is... OR... you can have the joystick board use timing and how long the joystick is held in a position, perhaps compute what you think the device's position should be, and just pass that ("you need to rotate m steps this way and n steps that way"). Or, depending on your overall system design, you could come up with other solutions that balance the load between the microcontrollers. I assume what you really want here is "simply" to use the wireless link and the extra MCU as a way to make the joystick unwired. If the joystick were wired and you did it with one MCU, and these are switch-based (not analog-resistance) joysticks, then that one MCU would simply be reading the switch state at some interval, perhaps debouncing. So when you add wireless, you would want the wireless solution to simply pass the same switch state; not in the same register space and not necessarily the same bit positions, but just the switch state. As an added benefit, the joystick MCU can do the debouncing. It could also send only state changes rather than bothering the motor MCU every N units of time.
I tried modules like these recently and was very disappointed, but am considering trying them again, or a different vendor/build of them. They should "feel" like a wire for, say, serial data at some slow rate: 4800 baud, 2400 baud, etc. On my first try I had to do things like constantly send a pattern, say the same byte 0xAA, and then every so often have a small payload show up in there; so you search for 0xAA, and when you see something else, count out so many bytes as the payload. It went downhill from there when I used separate computers/supplies to send the data, as most of my success, marginal as it was, was perhaps through ground noise rather than actual wireless. They sell tons of these and folks use them, so no doubt it was something I did, or I bought garbage modules, or maybe I shouldn't have bought the ones with antennas and soldered those on, etc.
I would recommend two things. First, get both MCUs up and running the application using a wired connection, just some jumper wires. You might have to come back to this as you change your protocol or baud rate, etc., if needed. Perhaps (most likely) you want to send the payload out often, even if the data has not changed, in case the wireless connection is not perfectly reliable, allowing the occasional lost packet to be handled by the next one. Second, learning to use the wireless connection is a separate experiment; the code on both sides should just be working that one problem, "how do I talk over this interface?", with no motor drivers and no joysticks, just code to talk from one side to the other.
Likely a good idea, especially if you don't need to be constantly transmitting, is to design your own packet. Ideally it has one or more of these features: a sync pattern and a length, or a sync pattern and a checksum, or a sync pattern, length and checksum (as well as a payload). So maybe a 0x7E byte as a sync pattern; perhaps you are always going to send 32 bits (4 bytes) of payload, so you don't need a length; and then maybe a checksum of the payload, or a checksum of the whole thing (sync pattern and payload). Your receiver then takes in bytes from the UART, into a circular buffer, or not even that: just use a very simple state machine. If searching for sync and the byte is not 0x7E, discard it; if it is 0x7E, go to the payload state with a count of zero and receive the next four bytes (the payload), summing them into a checksum as each arrives. After the payload, the next byte is the checksum; if it matches, the whole thing is good: pass the payload to the control system to update the switch state, and/or just shove it into the global variable that the control system is watching, as if it were a direct connection to the joystick switch state. (A sketch of this state machine follows.)
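A minimal sketch of that receive state machine, following the constants in the text (sync 0x7E, fixed 4-byte payload, additive checksum); handle_payload() is a hypothetical consumer:

    #include <stdint.h>

    #define SYNC        0x7E
    #define PAYLOAD_LEN 4

    extern void handle_payload(const uint8_t *p);   /* hypothetical */

    enum rx_state { HUNT, PAYLOAD, CHECK };

    /* Feed every received byte into this from the UART receive path. */
    void rx_byte(uint8_t b)
    {
        static enum rx_state state = HUNT;
        static uint8_t payload[PAYLOAD_LEN];
        static uint8_t count, sum;

        switch (state) {
        case HUNT:                       /* discard until sync byte */
            if (b == SYNC) {
                count = 0;
                sum = 0;
                state = PAYLOAD;
            }
            break;
        case PAYLOAD:                    /* collect and sum the payload */
            payload[count++] = b;
            sum += b;
            if (count == PAYLOAD_LEN)
                state = CHECK;
            break;
        case CHECK:                      /* compare received checksum */
            if (b == sum)
                handle_payload(payload);
            state = HUNT;                /* good or bad, hunt again */
            break;
        }
    }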
You can get as complicated or not as you want, but in this kind of design I would never assume that both sides are always in sync and that every N bytes received are the N bytes sent, in the right positions. Even if you never drop a byte, this is a one-direction communication path, so if the receiver comes up while the transmitter is mid-payload, the receiver starts in the middle of the payload/packet and is forever off in its interpretation of the data. You can do the SLIP/PPP thing, whichever it is, and ensure the sync pattern never appears in the data by adding more work. Or not; I doubt you need to be that complicated, as I assume you are likely only sending one or a few bytes of actual data per update.
Why did I try the continuous stream? Way back, the first time I tried products like these (long before they were a buck or so from China on eBay), we had to ensure "transition density": you could not string more than a couple of bit cells of either state in a row, couldn't have more than, say, two zeros in a row or two ones in a row. Basically we had to do a biphase-L (Manchester II) encoding in software before shoving the bytes into the UART, then undo it on the other side. Hopefully the hardware does that for you and you don't have to; but if push comes to shove, you might have to take every byte and make only every other bit in it payload, with the bit next to it the inverse: you might have to send the byte 0x01 (00000001) as (0101010101010110), i.e. 0x55 0x56. Hopefully you don't have to do that; it is not much fun.
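If you did have to do it in software, the expansion is small; a sketch matching the 0x01 -> 0x55 0x56 example above (each payload bit is paired with its inverse):

    #include <stdint.h>

    /* Expand one byte to two: each source bit becomes the pair
     * (bit, inverted bit), MSB first, so no more than two equal
     * bit cells ever appear in a row. 0x01 -> 0x5556. */
    uint16_t manchester_encode(uint8_t b)
    {
        uint16_t out = 0;
        for (int i = 7; i >= 0; i--) {
            uint16_t bit = (uint16_t)((b >> i) & 1u);
            out = (uint16_t)((out << 2) | (bit << 1) | (bit ^ 1u));
        }
        return out;    /* transmit high byte, then low byte */
    }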
Your microcontroller development toolbox should include some USB-to-serial solutions; you can get them for a buck or two on eBay from China, or from Adafruit or SparkFun for 10-15 bucks. The Chinese ones sometimes have a 5V/3.3V jumper or switch. One way to start with these modules is to hook each to one of these USB-to-serial breakouts, 5V or 3.3V as needed for that RX or TX module: for the TX module, put the UART out (TX) on the data pin; for the RX module, the UART RX in on the data pin. Then fire up two copies of minicom or whatever dumb terminal program you use, set both to some slow speed like 1200 baud, and type in one and see if what you type comes out the other. Hold down a character; U is a good one, as with 8N1 it produces a square wave. Does that come through clean? What if you slow down both sides? What if you speed up? From simple experiments like that you can start to develop your protocol. If you have a scope (having one is easier these days, but it can still be pricey, a few hundred bucks; still, access to one is pretty important), you can do this even more easily: generate the signal with the microcontroller or USB-to-serial or whatever, then look at what shows up on the other side with the scope. How ugly/clean is it? Can you make it look better by changing the data, or by pounding the interface with continuous data?

Receive message of undefined size in UART in C

I'm writing my own drivers for LPC2148 and a question came to mind.
How do I receive a message of unspecified size in UART?
The only 2 things that come to mind are: 1) configure a watchdog and end the reception when the time runs out; 2) require that every message sent to it ends with an end-of-message character.
The first choice seems better in my opinion, but I'd like to know if anybody has a better answer, and I know there must be one.
Thank you very much
Just give the caller whatever bytes you have received so far. The UART driver shouldn't try to implement the application protocol, the application should do that.
That looks like the wrong use for a watchdog. I ended up with three solutions for this problem:
Use fixed-size packets and DMA; so, you receive one packet per transaction. Apparently, it is not possible in your case.
Receive the message char by char until the end-of-message character is received. Kind of error-prone, since the EOM char may well appear in the data.
Use a fixed-size header before every packet. In the header, store payload size and/or message type ID.
The third approach is probably the best one. You may combine it with the first, i.e. use DMA to receive the header, and then the data (in a second transaction, after the data size is known from the header). It is also one of the most flexible approaches.
One more thing to worry about is keeping the byte stream in sync. There may be rubbish lying in the UART input buffers, which may get read as data, or you may get only part of a packet after your MCU is powered up (i.e. the beginning of the packet had already been sent by that time). To avoid that, you can add magic bytes to your packet header, and probably a CRC.
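A sketch of that header-first scheme with magic bytes (the marker values and one-byte fields are illustrative; receive_bytes() is a placeholder for your DMA- or interrupt-driven reception):

    #include <stdint.h>

    #define MAGIC0 0x55
    #define MAGIC1 0xAA

    extern void receive_bytes(uint8_t *dst, uint32_t n);   /* placeholder */

    /* Hunt byte-wise for the two-byte marker to regain sync, then read
     * the type and length fields, then exactly `length` payload bytes. */
    uint8_t receive_message(uint8_t *type, uint8_t *payload)
    {
        uint8_t prev = 0, b = 0, length;

        do {
            prev = b;
            receive_bytes(&b, 1);
        } while (!(prev == MAGIC0 && b == MAGIC1));

        receive_bytes(type, 1);          /* message type ID */
        receive_bytes(&length, 1);       /* payload size from header */
        receive_bytes(payload, length);  /* size-known second transfer */
        return length;
    }

A CRC over the header and payload, as suggested above, would additionally catch a corrupted length field.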
EDIT
OK, one more option :) Just store everything you receive in a growing buffer for later use. That is basically what PC drivers do.
Real embedded UART drivers usually use a ring buffer. Bytes are stored in order, and the clients promise to read from the buffer before it's full.
A state machine can then process the message in multiple passes, with no need for a watchdog to tell it reception is over.
Better to go for option 2: append an end-of-transmission character to the transmitted string.
But I suggest adding a start-of-transmission character as well, to validate that you are receiving an actual transmission.
A watchdog timer is used to reset the system when there is unexpected behavior of the device. I think it is better to use a buffer which can store the amount of data that your application requires.

Data structure for storing serial port data in firmware

I am sending data from a linux application through serial port to an embedded device.
In the current implementation a byte circular buffer is used in the firmware. (Nothing but an array with a read and write pointer)
As the bytes come in, they are written to the circular buffer.
Now the PC application appears to be sending the data too fast for the firmware to handle. Bytes are missed, resulting in the firmware returning WRONG_INPUT too many times.
I think the baud rate (115200) is not the issue. A more efficient data structure on the firmware side might help. Any suggestions on the choice of data structure?
A circular buffer is the best answer. It is the easiest way to model a hardware FIFO in pure software.
The real issue is likely to be either the way you are collecting bytes from the UART to put in the buffer, or overflow of that buffer.
At 115200 baud with the usual 1 start bit, 1 stop bit and 8 data bits, you can see as many as 11520 bytes per second arrive at that port. That gives you an average of just about 86.8 µs per byte to work with. In a PC, that will seem like a lot of time, but in a small microprocessor, it might not be all that many total instructions or in some cases very many I/O register accesses. If you overfill your buffer because bytes are arriving on average faster than you can consume them, then you will have errors.
Some general advice:
Don't do polled I/O.
Do use a Rx Ready interrupt.
Enable the receive FIFO, if available.
Empty the FIFO completely in the interrupt handler.
Make the ring buffer large enough.
Consider flow control.
Sizing your ring buffer large enough to hold a complete message is important. If your protocol has known limits on the message size, then you can use the higher levels of your protocol to do flow control and survive without the pains of getting XON/XOFF flow to work right in all of the edge cases, or RTS/CTS to work as expected in both ends of the wire which can be nearly as hairy.
If you can't make the ring buffer that large, then you will need some kind of flow control.
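For reference, a minimal single-producer/single-consumer ring buffer of the kind being discussed; the size is an assumption and must be tuned to your worst-case burst:

    #include <stdint.h>
    #include <stdbool.h>

    #define RB_SIZE 256u                 /* must be a power of two */

    static volatile uint8_t  rb_buf[RB_SIZE];
    static volatile uint16_t rb_head;    /* written only by the ISR */
    static volatile uint16_t rb_tail;    /* written only by the main loop */

    /* Call from the UART RX interrupt handler. */
    bool rb_put(uint8_t b)
    {
        uint16_t next = (uint16_t)((rb_head + 1u) & (RB_SIZE - 1u));
        if (next == rb_tail)
            return false;                /* full: count as an overrun */
        rb_buf[rb_head] = b;
        rb_head = next;
        return true;
    }

    /* Call from the main loop / consumer thread. */
    bool rb_get(uint8_t *b)
    {
        if (rb_tail == rb_head)
            return false;                /* empty */
        *b = rb_buf[rb_tail];
        rb_tail = (uint16_t)((rb_tail + 1u) & (RB_SIZE - 1u));
        return true;
    }

With one producer (the ISR) and one consumer, the separate head and tail indices make this safe without disabling interrupts, as long as the index loads and stores are atomic on the target.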
There is nothing better than a circular buffer.
You could use a slower baud rate or speed up the application in the firmware so that it can handle data coming at full speed.
If the output of the PC is in bursts it may help to make the buffer big enough to handle one burst.
The last option is to implement some form of flow control.
What do you mean by embedded device? I think most current DSPs and processors can easily handle this kind of load. The problem is not with the circular buffer, but with how you collect bytes from the serial port.
Does your UART have a hardware FIFO? If yes, then you should enable it. If you have an interrupt per byte, you can quickly get into trouble, especially if you are working with an OS or with virtual memory, where the IRQ cost can be quite high.
If your receiving firmware is very simple (no multitasking) and you don't have a hardware FIFO, polled mode can be a better solution than interrupt-driven, because then your processor is doing only UART data reception, and you have no interrupt overhead.
Another problem might be with the transfer protocol. For example, if you have a long packet of data that you have to checksum, and you do the whole checksum at the end of the packet, then all the processing time for the packet falls at its end, and that is why you may miss the beginning of the next packet.
So the circular buffer is fine, and you have two ways to improve:
- the way you interact with the hardware
- the protocol (packet length, acknowledgment, etc.)
Before trying to solve the problem, first you need to establish what the problem really is. Otherwise you might waste time trying to fix something that isn't actually broken.
Without knowing more about your set-up it's hard to give more specific advice. But you should investigate further to establish what exactly the hardware and software is currently doing when the bytes come in, and then what is the weak point where they're going missing.
A circular buffer with Interrupt driven IO will work on the smallest and slowest of embedded targets.
First try it at the lowest baud rate and only then try at high speeds.
Using a circular buffer in conjunction with an IRQ is an excellent suggestion. If your processor generates an interrupt each time a byte is received, take that byte and store it in the buffer. How you decide to empty that buffer depends on whether you are processing a stream of data or data packets. If you are processing a stream, simply have your background process remove the bytes from the buffer and process them first-in-first-out. If you are processing packets, then just keep filling the buffer until you have a complete packet. I've used the packet method successfully many times in the past. I would implement some type of flow control as well, to signal to the PC if something went wrong, like a full buffer, or, if packet-processing time is long, to indicate to the PC when it is ready for the next packet.
You could implement something like an IP datagram, which contains a data length, an id, and a checksum.
Edit:
Then you could hard-code some fixed length for the packets, for example 1024 bytes or whatever makes sense for the device. The PC side would then check whether the queue at the device is full every time it writes a packet. The firmware side would run the checksum to see if all the data is valid, and read up to the data length.
