UI freezes while reading data on serial port - WPF

I am developing a WPF application which reads data from a serial port, parses it, and displays it in the UI.
I have to use these serial port settings: baud rate 115200, data bits 8, stop bits 1.
I am sending 10,000 bytes per second on the serial port, which my WPF application reads.
But here I am facing an issue with the UI: as soon as I start reading the COM port, the UI freezes and does not respond to any input. As per my investigation, this is due to the high data rate.
I am reading the COM port on one thread and the UI is running on a different thread.
Data is passed between the threads using a shared circular buffer.
I use BeginInvoke to update the UI fields so the call returns immediately.
I use a lock while accessing the circular buffer in both threads.
Is there any way to handle this situation? I have read that a lot of people face the same kind of issue. What is the solution by which such an issue can be resolved?
Thanks,
Vishal N

It sounds as though you have either set up your Thread object(s) incorrectly, or you are passing feedback back to the UI too often.
If you are not comfortable working with Thread objects directly, perhaps the BackgroundWorker class might help you. Check out the BackgroundWorker Class page on MSDN.
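One common cause of this kind of freeze is not the reading itself but posting a BeginInvoke to the dispatcher for every chunk that arrives, which floods the UI message queue at 10,000 bytes/second. A minimal sketch of one way around that, draining the shared buffer on a timer so the UI sees only a handful of batched updates per second (the port settings and the _buffer and DataText names are illustrative, not taken from your code):

    using System;
    using System.Collections.Concurrent;
    using System.IO.Ports;
    using System.Text;
    using System.Threading;
    using System.Windows;
    using System.Windows.Threading;

    public partial class MainWindow : Window
    {
        private readonly SerialPort _port =
            new SerialPort("COM1", 115200, Parity.None, 8, StopBits.One);
        private readonly ConcurrentQueue<byte[]> _buffer = new ConcurrentQueue<byte[]>();
        private readonly DispatcherTimer _uiTimer =
            new DispatcherTimer { Interval = TimeSpan.FromMilliseconds(100) };

        public MainWindow()
        {
            InitializeComponent();

            // Reader thread: never touches the UI, just queues raw bytes.
            var reader = new Thread(() =>
            {
                var chunk = new byte[4096];
                while (_port.IsOpen)
                {
                    int n = _port.Read(chunk, 0, chunk.Length); // blocks until data arrives
                    if (n > 0)
                    {
                        var copy = new byte[n];
                        Array.Copy(chunk, copy, n);
                        _buffer.Enqueue(copy);
                    }
                }
            }) { IsBackground = true };

            // UI timer: drains everything queued since the last tick in ONE update,
            // instead of one BeginInvoke per read.
            _uiTimer.Tick += (s, e) =>
            {
                var sb = new StringBuilder();
                while (_buffer.TryDequeue(out var data))
                    sb.Append(Encoding.ASCII.GetString(data));
                if (sb.Length > 0)
                    DataText.AppendText(sb.ToString()); // DataText is assumed to be a TextBox
            };

            _port.Open();
            reader.Start();
            _uiTimer.Start();
        }
    }

The key point is that the serial reads and the UI updates are decoupled: the reader can run flat out while the dispatcher only has to service about ten small updates per second.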

Related

Execution Pattern of a Multi-Threaded Server on Linux

I would like to know what the execution pattern of a server's multiple threads should be to implement the TCP request-response cycle of a high-performance server (for example, reading dozens of packets with a single system call, or none, on Linux using Packet MMAP or some other mechanism).
Design 1) For simplicity, start two threads in main at server start-up. One thread just reads packets directly from the network interface(s), such as wlan0/eth0, in a while loop with poll() on Linux; once a number of packets have been read in one cycle, it wakes the other thread by signalling a condition variable. After waking up, the other thread (the sender) processes the packets and sends them as TCP responses.
Design 2) Start only the receiver thread at the start of the main program. The receiver thread reads packets from the interfaces using a while loop and poll(). When a number of packets have been received, it creates a sender thread and passes the batch of packets received in that cycle to the sender as a parameter. The sender thread processes the packets and sends the TCP responses.
(I think Design 2 will be easier to implement, but the question is whether there is any design or performance issue with this approach.) The buffer passed from the receiver thread to the sender thread needs to be allocated before the packets are received, so I know the size of the buffer to allocate. In this execution pattern I am also creating a new thread each time (which returns and ends execution after processing the packets and sending the TCP responses). I would like to know what the performance cost of this approach is, since I create a new thread every time I get a batch of packets from the interfaces.
In the first approach I never create more than two threads (or at least a limited number of threads that can be tracked easily for logging and debugging, since I know how many are created at start-up). In the second approach I don't know how many threads are hanging around and executing concurrently.
I would appreciate any advice on how real websites like YouTube or others may have handled this in their high-performance, front-facing servers, if they implemented them this way.
First, for a 'real' website the magic lies in having load balancers and a whole bunch of worker nodes to take the load; you easily exceed the boundaries of a single system. For example, take a look at the AWS reference architecture for serving web pages at scale in the AWS Cloud Architecture for serving web whitepaper.
That being said, taking this one level down, it is always interesting to look at how other well-known products have solved this issue. For example, NGINX has an excellent infographic and a matching blog post describing their architecture and threading.
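On the Design 1 versus Design 2 question itself: the fixed receiver/sender pair is essentially a bounded producer/consumer handoff, while a thread per batch adds thread start-up and scheduler churn on every cycle. The handoff pattern is language-agnostic; purely as an illustration (sketched in C# to match the other examples on this page, with Packet, ReadBatchFromInterface and SendResponse as placeholders for the Packet MMAP/poll() and TCP code), Design 1 boils down to something like:

    using System;
    using System.Collections.Concurrent;
    using System.Collections.Generic;
    using System.Threading;

    class Packet { public byte[] Data = new byte[0]; }

    class TwoThreadServer
    {
        // Bounded handoff between the fixed receiver and sender threads
        // (plays the role of the condition-variable-protected buffer in Design 1).
        private readonly BlockingCollection<List<Packet>> _batches =
            new BlockingCollection<List<Packet>>(boundedCapacity: 64);

        public void Run()
        {
            var receiver = new Thread(() =>
            {
                while (true)
                {
                    List<Packet> batch = ReadBatchFromInterface(); // stands in for the poll() read loop
                    _batches.Add(batch);                           // wakes the sender if it is waiting
                }
            }) { IsBackground = true };

            var sender = new Thread(() =>
            {
                foreach (var batch in _batches.GetConsumingEnumerable()) // blocks until a batch arrives
                    foreach (var packet in batch)
                        SendResponse(packet);                            // stands in for the TCP response
            }) { IsBackground = true };

            receiver.Start();
            sender.Start();
        }

        // Placeholders for the platform-specific I/O.
        private List<Packet> ReadBatchFromInterface() => new List<Packet>();
        private void SendResponse(Packet p) { }
    }

Only two threads ever exist, both are easy to track for logging and debugging, and the bounded capacity gives you back-pressure instead of an unbounded pile of batch threads.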

PRISM controlling external devices

I plan to use the PRISM libraries for a project running on a PC that controls one or more instruments, visualizes and stores the data of the device(s), and lets the user enter some control data. The devices have various digital and analog sensors and actuators. They can be of different types and levels of intelligence. Most often they have no 'real' intelligence and all the control logic sits in the PC.
This 'intelligence' needs to be constantly reading the data from a device. The communication can be of various kinds, such as a COM port, a TCP/IP socket, HTTP to a web interface, etc.
I am not sure what the best solution for that 'intelligent logic' is. Since it needs continuous communication with the device, it needs to be separated from all the UI tasks. It will need some kind of state machine in a background worker or thread to build the higher-level process logic.
Question: Should it be one instance per device, registered in PRISM as a service with a reference to that background worker? Or should that background worker be created and linked to the ViewModel I need for each configured instrument, to handle its data to show and edit? Or is there another, better solution?
I think this is a more general architecture question than a specific PRISM one...
I've done something similar with another MVVM framework, and my solution was based on a single listener (I had only TCP sockets to communicate with the instruments) registered as a service. In your application you can have either multiple queues or a single queue with multiple producers.
All messages from the devices were inserted into a concurrent queue, and each ViewModel (one for each device) read from that queue.
Communication from ViewModel to device happened directly, without going through an "output" queue.
The whole application was built on the async/await pattern to decouple the UI from the communication. I was able to send and receive multiple commands and notifications from several devices at the same time without any issue.
But again, this is really a broad question and mine is a broad answer; a lot depends on how you have to interact with the devices. My solution balances complexity with flexibility, but a lot of other architectures are available.
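A minimal sketch of that shape, using the multiple-queues variant (one queue per device). DeviceMessage, DeviceListenerService and DeviceViewModel are illustrative names, not PRISM types, and the transport call is stubbed out:

    using System;
    using System.Collections.Concurrent;
    using System.Threading;
    using System.Threading.Tasks;

    public class DeviceMessage
    {
        public string DeviceId { get; set; }
        public byte[] Payload { get; set; }
    }

    // Registered once (e.g. as a PRISM service); owns the communication channel(s).
    public class DeviceListenerService
    {
        private readonly ConcurrentDictionary<string, ConcurrentQueue<DeviceMessage>> _queues =
            new ConcurrentDictionary<string, ConcurrentQueue<DeviceMessage>>();

        public ConcurrentQueue<DeviceMessage> GetQueue(string deviceId) =>
            _queues.GetOrAdd(deviceId, _ => new ConcurrentQueue<DeviceMessage>());

        // Background receive loop (TCP, COM port, HTTP polling... whatever the transport is).
        public async Task ListenAsync(CancellationToken ct)
        {
            while (!ct.IsCancellationRequested)
            {
                DeviceMessage msg = await ReceiveFromTransportAsync(ct); // placeholder for the real I/O
                GetQueue(msg.DeviceId).Enqueue(msg);
            }
        }

        // Stub; the real implementation would await the actual transport.
        private Task<DeviceMessage> ReceiveFromTransportAsync(CancellationToken ct) =>
            Task.FromResult(new DeviceMessage { DeviceId = "dev1", Payload = new byte[0] });
    }

    // One ViewModel per configured device; drains its own queue and updates bound properties.
    public class DeviceViewModel
    {
        private readonly ConcurrentQueue<DeviceMessage> _inbox;

        public DeviceViewModel(DeviceListenerService listener, string deviceId)
        {
            _inbox = listener.GetQueue(deviceId);
        }

        public async Task PumpAsync(CancellationToken ct)
        {
            while (!ct.IsCancellationRequested)
            {
                while (_inbox.TryDequeue(out DeviceMessage msg))
                    Apply(msg);               // update observable properties here
                await Task.Delay(50, ct);     // simple polling; an async channel would also work
            }
        }

        private void Apply(DeviceMessage msg) { /* parse and expose to the view */ }
    }

The listener stays ignorant of the UI, the ViewModels stay ignorant of the transport, and commands in the other direction can still go straight from ViewModel to device as described above.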

Linux: using hardware interrupts on I/O to place data into a user-accessible area via direct memory access

I am currently working with the BeagleBone Black running Ubuntu and I am trying to find some direction. I have created a C program that listens for SIGIO and runs a read() to get the data on that line. From my research on the internet and in some books, it appears that this method is not very efficient: looping while waiting for a signal is bad because of the large amount of context switching (it should be noted that this I/O line will be busy, so SIGIO will trigger at least 4 times a second, and this is asynchronous). It was suggested that I use hardware interrupts and have them trigger a response that takes the data from the line and places it into a register, preferably accessible from user space via direct memory access. So the question is where I can look for more information on how to do this; I find a lot of material on this topic, but most of it just talks about how the OS does interrupts or about using signals, which with a busy line is pretty taxing.
If you are that concerned about timing and latency, you should probably use a real-time system.
Fortunately, the BeagleBone Black has real-time processing cores on its SoC, called PRUs (Programmable Real-time Units).
If you are new to the concept of PRUs, you will probably want to start here; then, once you have understood the need for and purpose of the PRUs, the same website has some tutorials to get you started.
With the latest software support, such as remoteproc, rpmsg and the Beaglescope project, the PRUs can be used quite easily once you have understood how they work.

select() equivalence in I/O Completion Ports

I am developing a proxy server using WinSock 2.0 on Windows. If I were developing it with the blocking model, select() would be the way to wait for data to arrive from the client or the remote server. Is there an equivalent way to do this using I/O Completion Ports?
I used to have two contexts for the two directions of data using I/O Completion Ports, but the pending WSARecv could not receive any data from the remote server, and I couldn't find the problem.
Thanks in advance.
EDIT: Here's the worker thread code of my current I/O Completion Ports implementation. But what I am asking about is how to implement a select() equivalent.
I/O Completion Ports provide an indication of when an I/O operation completes; they do not indicate when it is possible to initiate an operation. In many situations this doesn't actually matter. Most of the time the overlapped I/O model will work perfectly well if you assume it is always possible to initiate an operation. The underlying operating system will, in most cases, simply do the right thing and queue the data for you until it is possible to complete the operation.
However, there are some situations when this is less than ideal. For example you can always send to a socket using overlapped I/O. You can do this even when the remote peer is not reading and the TCP stack has started to use flow control and has filled the TCP window... This simply uses resources on your local machine in a completely uncontrolled manner (not entirely uncontrolled, but controlled by the peer, which is not ideal). I write about this here and in many situations you DO need to actively manage this kind of thing by tracking how many outstanding I/O write requests you have and using that as an indication of 'readiness to send'.
Likewise, if you want a 'readiness to recv' indication you could issue a 'zero byte' read on the socket. This is a read issued with a zero-length buffer; it returns when there is data to read, but no data is transferred. This would give you an indication that there is data to be read on the connection, but it is, IMHO, pointless unless you are suffering from the very unlikely situation of hitting the I/O page lock limit, as you may as well read the data when it becomes available rather than forcing multiple kernel-to-user-mode transitions.
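Purely as an illustration of that zero-byte read idea in managed code (the question is about native WinSock, so treat this as a sketch of the concept rather than a drop-in answer; on recent .NET a Socket receive issued with an empty buffer completes when data is available without transferring anything, and .NET sockets are backed by overlapped I/O and completion ports on Windows):

    using System;
    using System.Net.Sockets;
    using System.Threading.Tasks;

    static class ZeroByteReadSketch
    {
        // Waits for 'readiness to recv' on a connected socket by issuing a zero-byte
        // receive: the await completes when data is available, but nothing is copied.
        // Only then is a real receive issued, mirroring the zero-byte WSARecv trick above.
        public static async Task<int> ReadWhenReadyAsync(Socket socket, byte[] buffer)
        {
            await socket.ReceiveAsync(Memory<byte>.Empty, SocketFlags.None);        // readiness only
            return await socket.ReceiveAsync(buffer.AsMemory(), SocketFlags.None);  // actual data
        }
    }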
In summary, you don't really need an answer to your question. You need to look at how the API works and write your code to work with it, rather than trying to force the API to behave in the way that other APIs you are familiar with work.

What causes the .NET SerialPort class DataReceived event to fire?

I understand from the MSDN docs that the event DataReceived will not necessarily fire once per byte.
But does anyone know what exactly is the mechanism that causes the event to fire?
Does the receipt of each byte restart a timer that has to reach, say 10 ms between bytes, before the event fires?
I ask because I'm trying to write an app that reads XML data coming in from a serial port.
Because my laptop has no serial ports, I use a virtual serial port emulator. (I know, I know--I can't do anything about it ATM).
When I pass data through the emulated port to my app, the event fires once for each XML record (about 1500 bytes). Perfect. But when a colleague at another office tries it with two computers connected by an actual cable, the DataReceived event fires repeatedly, after every 10 or so bytes of XML, which totally throws off the app.
DataReceived can fire at any time one or more bytes are ready to read. Exactly when it is fired depends on the OS and drivers, and also there will be a small delay between the data being received and the event being fired in .NET.
You shouldn't rely on the timing of DataReceived events for control flow.
Instead, parse the underlying protocol and if you haven't received a complete message, wait for more. If you receive more than one message, make sure to keep the leftovers from parsing the first message, because they will be the start of the next message.
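A minimal sketch of that buffering approach for the XML case, assuming each record ends with a known closing tag (the </record> terminator, the use of the port's text encoding via ReadExisting, and the RecordReceived event are assumptions for illustration):

    using System;
    using System.IO.Ports;
    using System.Text;

    class XmlRecordReader
    {
        private readonly SerialPort _port;
        private readonly StringBuilder _pending = new StringBuilder();
        private const string RecordTerminator = "</record>";   // assumed closing tag of one record

        public event Action<string> RecordReceived;             // raised once per complete record

        public XmlRecordReader(SerialPort port)
        {
            _port = port;
            _port.DataReceived += OnDataReceived;
        }

        // Note: DataReceived is raised on a thread-pool thread, not the UI thread.
        private void OnDataReceived(object sender, SerialDataReceivedEventArgs e)
        {
            // Append whatever happens to be in the driver buffer; could be 10 bytes or 1500.
            _pending.Append(_port.ReadExisting());

            // Extract every complete record, keep the leftover bytes for the next event.
            string text = _pending.ToString();
            int end;
            while ((end = text.IndexOf(RecordTerminator, StringComparison.Ordinal)) >= 0)
            {
                int recordLength = end + RecordTerminator.Length;
                RecordReceived?.Invoke(text.Substring(0, recordLength));
                text = text.Substring(recordLength);
            }

            _pending.Clear();
            _pending.Append(text);
        }
    }

Whether one event delivers 10 bytes or a whole 1500-byte record, the parser only raises RecordReceived when a full record has accumulated, so the difference between the emulated port and the real cable disappears.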
As Mark Byers pointed out, this depends on the OS and drivers. At the lowest level, a standard RS232 chip (for the life of me, I can't remember the designation of the one that everyone copied to make the 'standard') will fire an interrupt when it has data in its inbound register. The 'bottom end' of the driver has to go get that data (which could be any amount up to the buffer size of the chip), store it in the driver's buffer, and signal to the OS that it has data. It's at this point that the .NET framework can start finding out that the data is available.
Depending on when the OS signals the application that opened the serial port (which is an OS-level operation, and provides the 'real' link from the .NET framework to the OS/driver-level implementation), there could literally be any amount of data > 1 byte in the buffer, because the driver's bottom end could've loaded up more data in the meantime. My bet is that on your system, the driver is providing a huge buffer and only signalling after a significant pause in the data stream. Your colleague's system, on the other hand, signals far more frequently.
Again, Mark Byers' advice to parse the protocol is spot on. I've implemented a similar system over TCP sockets, and the only way to handle the situation is to buffer the data until you've got a complete protocol message, then hand the full message over to the application.
