I understand from the MSDN docs that the event DataReceived will not necessarily fire once per byte.
But does anyone know what exactly is the mechanism that causes the event to fire?
Does the receipt of each byte restart a timer that has to reach, say 10 ms between bytes, before the event fires?
I ask because I'm trying to write an app that reads XML data coming in from a serial port.
Because my laptop has no serial ports, I use a virtual serial port emulator. (I know, I know--I can't do anything about it ATM).
When I pass data through the emulated port to my app, the event fires once for each XML record (about 1500 bytes). Perfect. But when a colleague at another office tries it with two computers connected by an actual cable, the DataReceived event fires repeatedly, after every 10 or so bytes of XML, which totally throws off the app.
DataReceived can fire whenever one or more bytes are ready to read. Exactly when it fires depends on the OS and drivers, and there will also be a small delay between the data arriving and .NET raising the event.
You shouldn't rely on the timing of DataReceived events for control flow.
Instead, parse the underlying protocol, and if you haven't received a complete message, wait for more. If you receive more than one message, keep the leftover bytes from parsing the first message, because they will be the start of the next one.
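For example, here is a minimal sketch of that approach for the XML stream described in the question. The `</record>` terminator is an assumption; substitute whatever closes your actual records:

    using System;
    using System.IO.Ports;
    using System.Text;

    class XmlRecordReader
    {
        // Hypothetical terminator; substitute the closing tag of your records.
        const string Terminator = "</record>";

        readonly SerialPort _port;
        readonly StringBuilder _buffer = new StringBuilder();

        public event Action<string> RecordReceived;

        public XmlRecordReader(SerialPort port)
        {
            _port = port;
            _port.DataReceived += OnDataReceived;
        }

        void OnDataReceived(object sender, SerialDataReceivedEventArgs e)
        {
            // Take whatever happens to be available -- one byte or many.
            _buffer.Append(_port.ReadExisting());

            // Extract every complete record; keep the leftovers for next time.
            int end;
            while ((end = _buffer.ToString().IndexOf(Terminator, StringComparison.Ordinal)) >= 0)
            {
                int len = end + Terminator.Length;
                string record = _buffer.ToString(0, len);
                _buffer.Remove(0, len);
                RecordReceived?.Invoke(record);
            }
        }
    }

With this in place it doesn't matter whether the driver delivers 1500 bytes at once or 10 at a time; the app only ever sees complete records.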
As Mark Byers pointed out, this depends on the OS and drivers. At the lowest level, a standard RS232 UART (the 8250, the chip everyone copied to make the 'standard', and its FIFO-equipped descendant the 16550) fires an interrupt when it has data in its inbound register. The 'bottom end' of the driver has to fetch that data (which could be any amount up to the chip's buffer size), store it in the driver's buffer, and signal to the OS that it has data. It's at this point that the .NET framework can start finding out that data is available.

Depending on when the OS signals the application that opened the serial port (an OS-level operation that provides the 'real' link from the .NET framework to the OS/driver implementation), there could literally be any amount of data > 1 byte in the buffer, because the driver's bottom end could have loaded more data in the meantime. My bet is that on your system the driver provides a large buffer and only signals after a significant pause in the data stream, while your colleague's system signals far more frequently.

Again, Mark Byers' advice to parse the protocol is spot on. I've implemented a similar system over TCP sockets, and the only way to handle the situation is to buffer the data until you've got a complete protocol message, then hand the full message over to the application.
I am currently working with the BeagleBone Black using Ubuntu, and I am trying to find some direction. I have created a C program that listens for SIGIO and runs a read() to get the data on that line. From my research on the internet and in some books, it appears that this method is not very efficient: looping while listening for a signal interrupt is bad because of the large amount of context switching (note that this I/O line will be busy, so SIGIO will trigger at least 4 times a second, and it is asynchronous).

It was suggested to use hardware interrupts and have them trigger a response that takes the data from the line and places it into a register, accessible from user space, preferably via Direct Memory Access. So the question remains: where can I look for more info on how to do this? I find a lot of material on this topic, but most of it just talks about how the OS does interrupts or about using signals, which with a busy line is pretty taxing.
If you are that concerned about timing and latency, you should probably use a real-time system.
Fortunately, the BeagleBone Black has real-time processing cores on its SoC, called PRUs (Programmable Real-time Units).
If you are new to the concept of PRUs, you would probably like to start here, and then, once you have understood the need for and purpose of the PRUs, that same website has some tutorials to get you started.
With the latest software support, such as remoteproc, rpmsg, and the Beaglescope project, PRUs can be used quite easily once you have understood how they work.
This might be a stupid question.
I was debugging a USB storage device on an ARM Cortex-M4 platform (STM32F4 series) running embedded Linux. The ARM works as the USB host and tries to communicate with a thumb drive at USB full speed (12 Mb/s).
Now here is the problem. After successful enumeration and several SCSI commands via BULK transfers, the capacity and everything else can be read correctly. However, after about 15 seconds, when I try to send the same SCSI commands again (under the same conditions), the USB host controller just returns 'Transaction Error'. It looks like the device no longer responds to BULK transfers (not ACKing) and the host controller times out. The question is: is there any timeout mechanism in the USB mass-storage class or the SCSI layer such that after a timeout the device must be re-enumerated or re-probed, or else it won't respond anymore?
I understand this might be due to a stupid error in my program, or due to some limitation of the specific hardware. However, when I used the usbmon module on a Linux PC to capture the transfers to the very same thumb drive, I could see the operating system sending a sequence of probing commands (Get Max LUN followed by Test Unit Ready) every 5 seconds, which could be the reason the thumb drive doesn't fail on my PC.
Thanks! I'm looking forward to any replies.
I think you're on the right track with the Test Unit Ready commands. I am in the middle of writing a mass storage device driver for an embedded device, and when testing on OS X, after the initial SCSI queries, my device receives a Test Unit Ready command about once every second when no other activity is occurring. Since your post is quite old, I recommend you post your own solution if you've since solved the problem.
Otherwise, try adding periodic Test Unit Ready commands from the host side when there is no other activity. You could reset a timer whenever USB activity occurs; if the timer fires, send a Test Unit Ready command. Rinse and repeat.
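For illustration, a minimal sketch of that idle-timer pattern in C# (an embedded host version in C has the same shape). `SendTestUnitReady()` is a hypothetical stand-in for your host stack's SCSI command path, and the one-second threshold is just what OS X appears to use:

    using System;
    using System.Threading;

    class UsbKeepAlive
    {
        // Assumed idle threshold; OS X appears to probe roughly every second.
        static readonly TimeSpan IdlePeriod = TimeSpan.FromSeconds(1);

        readonly Timer _idleTimer;

        public UsbKeepAlive()
        {
            _idleTimer = new Timer(_ => SendTestUnitReady(), null, IdlePeriod, IdlePeriod);
        }

        // Call this from wherever you issue real SCSI/BULK traffic.
        public void NoteActivity()
        {
            // Push the timer back: only probe after a full quiet period.
            _idleTimer.Change(IdlePeriod, IdlePeriod);
        }

        void SendTestUnitReady()
        {
            // Hypothetical: wrap your host stack's BULK-only transport here
            // (CBW carrying the TEST UNIT READY opcode 0x00, then read the CSW).
        }
    }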
I am developing a proxy server using WinSock 2.0 on Windows. In the blocking model, select() would be the way to wait for the client or the remote server to have data to receive. Is there an equivalent approach when using I/O Completion Ports?
I have been using two contexts for the two directions of data with I/O Completion Ports, but a pending WSARecv never receives any data from the remote server! I couldn't find the problem.
Thanks in advance.
EDIT: Here's the worker thread code from my current I/O Completion Ports implementation. But what I am asking is how to implement the equivalent of select().
I/O Completion Ports provide an indication of when an I/O operation completes; they do not indicate when it is possible to initiate an operation. In many situations this doesn't actually matter. Most of the time the overlapped I/O model will work perfectly well if you assume it is always possible to initiate an operation. The underlying operating system will, in most cases, simply do the right thing and queue the data for you until it is possible to complete the operation.
However, there are some situations where this is less than ideal. For example, you can always send to a socket using overlapped I/O, even when the remote peer is not reading, the TCP stack has started to use flow control, and the TCP window has filled. This simply consumes resources on your local machine in an uncontrolled manner (not entirely uncontrolled, but controlled by the peer, which is not ideal). I write about this here, and in many situations you DO need to actively manage this kind of thing by tracking how many outstanding I/O write requests you have and using that as an indication of 'readiness to send'.
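A sketch of that tracking, using the .NET socket API for brevity (a native WSASend version has the same shape); the threshold of 16 is an arbitrary assumption:

    using System;
    using System.Net.Sockets;
    using System.Threading;

    class ThrottledSender
    {
        const int MaxPendingSends = 16;   // arbitrary; tune for your workload
        int _pending;                     // number of overlapped sends in flight

        public bool TrySend(Socket socket, byte[] data)
        {
            // Refuse (or queue elsewhere) instead of letting a slow peer
            // make us buffer unbounded amounts of data locally.
            if (Interlocked.Increment(ref _pending) > MaxPendingSends)
            {
                Interlocked.Decrement(ref _pending);
                return false;             // caller: back off, retry later
            }

            // Real code would pool and reuse the args objects.
            var args = new SocketAsyncEventArgs();
            args.SetBuffer(data, 0, data.Length);
            args.Completed += (s, e) => Interlocked.Decrement(ref _pending);

            // SendAsync returns false if it completed synchronously,
            // in which case Completed will not fire.
            if (!socket.SendAsync(args))
                Interlocked.Decrement(ref _pending);
            return true;
        }
    }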
Likewise if you want a 'readiness to recv' indication you could issue a 'zero byte' read on the socket. This is a read which is issued with a zero length buffer. The read returns when there is data to read but no data is returned. This would give you the indication that there is data to be read on the connection but is, IMHO, pointless unless you are suffering from the very unlikely situation of hitting the I/O page lock limit, as you may as well read the data when it becomes available rather than forcing multiple kernel to user mode transitions.
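For completeness, here is what a zero-byte read looks like with the .NET Socket API on recent .NET versions (the native equivalent is a WSARecv with a zero-length buffer); the read completes when data arrives, without consuming any of it:

    using System;
    using System.Net.Sockets;
    using System.Threading.Tasks;

    static class ZeroByteRead
    {
        public static async Task WaitForDataAsync(Socket socket)
        {
            // Zero-length buffer: completes when the socket becomes readable,
            // but transfers no data and pins no application buffer.
            await socket.ReceiveAsync(Memory<byte>.Empty, SocketFlags.None);

            // Now issue the real read; data is waiting.
            var buffer = new byte[4096];
            int n = await socket.ReceiveAsync(buffer.AsMemory(), SocketFlags.None);
            Console.WriteLine($"read {n} bytes");
        }
    }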
In summary, you don't really need an answer to your question. You need to look at how the API works and write your code to work with it, rather than trying to force the API to behave like other APIs you are familiar with.
I am developing a WPF application which will read data from a serial port, parse it, and display it on the UI.
I have to use these serial port settings: baud rate 115200, data bits 8, stop bits 1.
I am sending 10,000 bytes per second over the serial port, which my WPF application reads.
But here I am facing an issue with the UI: as soon as I start reading the COM port, the UI freezes and won't let anyone do anything. From my investigation, it is due to the high data rate.
I am reading the COM port on a different thread from the one the UI is running on.
Data is passed between the threads using a shared circular buffer.
I use BeginInvoke to update the UI fields so the call returns immediately.
I use a lock while accessing the circular buffer from both threads.
Is there any way to handle this situation? I have read that a lot of people face the same kind of issue. What is the solution that resolves it?
Thanks,
Vishal N
It sounds as though you have either set up your Thread object(s) incorrectly, or you are passing feedback back to the UI too often.
If you are not comfortable about working with Thread objects directly, perhaps the BackgroundWorker class might help you. Check out the BackgroundWorker Class page at MSDN.
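In particular, avoid marshalling to the UI once per packet: at 10,000 bytes per second, BeginInvoke can queue dispatcher work faster than the UI can drain it, which looks exactly like a freeze. A minimal sketch of a batched alternative, where the reader thread only fills a buffer and a DispatcherTimer drains it at a human timescale (the StringBuilder and TextBox stand in for your own circular buffer and UI fields):

    using System;
    using System.Text;
    using System.Windows;
    using System.Windows.Controls;
    using System.Windows.Threading;

    public class SerialView : Window
    {
        readonly object _sync = new object();
        readonly StringBuilder _incoming = new StringBuilder(); // stand-in for the circular buffer
        readonly TextBox _output = new TextBox();

        public SerialView()
        {
            Content = _output;

            // Drain at ~10 Hz: thousands of bytes per tick, but only ten
            // UI updates per second instead of one BeginInvoke per packet.
            var timer = new DispatcherTimer { Interval = TimeSpan.FromMilliseconds(100) };
            timer.Tick += (s, e) =>
            {
                string chunk;
                lock (_sync) { chunk = _incoming.ToString(); _incoming.Clear(); }
                if (chunk.Length > 0) _output.AppendText(chunk);
            };
            timer.Start();
        }

        // Called from the serial reader thread; touches no UI objects.
        public void OnSerialData(string data)
        {
            lock (_sync) _incoming.Append(data);
        }
    }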
I am writing a simple multi-drop RS485 protocol for serial communications within a distributed system. I am using an addressable model where slave devices are given a window of 20 ms to respond. The master uC polls the connected devices for updates and they respond accordingly. I've employed checksums and taken the necessary overrun precautions to ensure that connected devices will not respond to malformed messages. This method has proved effective in approximately 99% of situations, but I lose the packet if a new device is introduced during a communication session. Plugging in a new device "hot" will have negative effects on the signal being monitored by the slave devices, if only for an extremely short time.

I'm on the software side of engineering, but how can I mitigate this situation without trying to recreate TCP? We use a polling model because it is fast and does the job well for our application, with no need for RTOS functionality. I have an abundance of cycles on each CPU, so think in basic terms.
Sending packets over RS485 is not reliable communication; you will have to handle lost packets anyway. Of course, you won't have to reinvent TCP, but you will have to detect lost packets by means of timeout monitoring and sequence numbers. In simple applications this can be done at the application level, which keeps you well clear of the complexity of TCP. Since your polling model already discards all packets with an invalid checksum, this could be integrated with little effort.

If you want to check for collisions, which can be caused by hot plugs or misbehaving devices, there are some possible improvements. Some hardware allows you to read back your own transmission. If you find a difference between the sent data and the received data, you can assume a collision and repeat the packet. This will also require some kind of sequence numbering.
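A sketch of the read-back check, in C# with SerialPort for illustration (on a uC in C the logic is the same). It assumes a transceiver wired so the master hears its own transmission, and the retry count is arbitrary:

    using System;
    using System.IO.Ports;

    static class Rs485Collision
    {
        const int MaxRetries = 3;   // arbitrary assumption

        // Returns true if the packet went out without a detected collision.
        // Assumes port.ReadTimeout is set to a few milliseconds.
        public static bool SendWithReadback(SerialPort port, byte[] packet)
        {
            for (int attempt = 0; attempt < MaxRetries; attempt++)
            {
                port.DiscardInBuffer();
                port.Write(packet, 0, packet.Length);

                // Read our own bytes back off the bus and compare.
                var echo = new byte[packet.Length];
                int got = 0;
                try
                {
                    while (got < echo.Length)
                        got += port.Read(echo, got, echo.Length - got);
                }
                catch (TimeoutException)
                {
                    continue;   // echo incomplete: treat as collision, retry
                }

                bool clean = true;
                for (int i = 0; i < packet.Length; i++)
                    if (echo[i] != packet[i]) { clean = false; break; }

                if (clean) return true;   // bus carried exactly what we sent
            }
            return false;   // persistent collisions: surface to the application
        }
    }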
Perhaps I've missed something in your question, but can't you just write the master so that if a response isn't seen from a device within the allowed time, it re-polls that device?
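For illustration, a sketch of that master-side poll using the 20 ms window from the question; `SendPoll` and `TryReadResponse` are hypothetical stand-ins for the existing protocol code:

    using System;
    using System.IO.Ports;

    static class Rs485Master
    {
        // Poll one slave; if nothing valid comes back within the 20 ms
        // window, ask again before moving on to the next address.
        public static byte[] Poll(SerialPort port, byte address, int retries = 2)
        {
            port.ReadTimeout = 20;   // response window from the protocol

            for (int attempt = 0; attempt <= retries; attempt++)
            {
                SendPoll(port, address);           // hypothetical: frame + checksum
                if (TryReadResponse(port, out var response))
                    return response;               // checksum already verified
                // timeout or bad checksum: just re-poll
            }
            return null;   // slave genuinely absent or bus badly disturbed
        }

        static void SendPoll(SerialPort port, byte address) { /* existing code */ }
        static bool TryReadResponse(SerialPort port, out byte[] response)
        { response = null; return false; /* existing code */ }
    }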