I plan to use the PRISM libraries for a project running on a PC that controls one or more instruments, visualizes and stores the data from the device(s), and lets the user enter some control data. The devices have various digital and analog sensors and actuators. They can be of different types and levels of intelligence. Most often they have no 'real' intelligence and all the control logic sits in the PC.
This 'intelligence' needs to be constantly reading the data from a device. The communication can be of various kinds, such as a COM port, a TCP/IP socket, HTTP to a web interface, etc.
I am not sure what the best solution for that 'intelligent logic' is. Since it needs continuous communication with the device, it has to be separated from all the UI tasks. It will need some kind of state machine in a background worker or thread to implement the higher-level process logic.
Question: Should it be an instance per device, registered in PRISM as a service with a reference to that background worker? Or should that background worker be created by and linked to the ViewModel I need for each configured instrument, to handle its data for display and editing? Or is there a better solution altogether?
I think this is more of a general architecture question than a specific PRISM one...
I've done something similar with another MVVM framework, and my solution was based on a single listener (I only had TCP sockets to communicate with the instruments) registered as a service. In your application you can have either multiple queues or a single queue with multiple producers.
All messages from the devices were inserted into a concurrent queue, and each ViewModel (one per device) read from that queue.
Communication from the ViewModel to the device happened directly, without going through an "output" queue.
The whole application was built on the async/await pattern to decouple the UI from the communication. I was able to send and receive multiple commands and notifications from several devices at the same time without any issue.
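For concreteness, here is a minimal sketch of that queue-based shape, written in C++ only to illustrate the pattern (the real application was .NET with async/await; the names, the message format and the single consumer are placeholders):

```cpp
// Minimal sketch of the single-listener / concurrent-queue pattern: one producer
// pushes device messages, a consumer pops them. In the real app there was one
// consumer per device (its ViewModel); names and types here are placeholders.
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <queue>
#include <string>
#include <thread>

struct DeviceMessage {
    int device_id;
    std::string payload;
};

class ConcurrentQueue {
public:
    void push(DeviceMessage msg) {
        { std::lock_guard<std::mutex> lock(mutex_); queue_.push(std::move(msg)); }
        cv_.notify_one();
    }
    DeviceMessage pop() {                       // blocks until a message is available
        std::unique_lock<std::mutex> lock(mutex_);
        cv_.wait(lock, [this] { return !queue_.empty(); });
        DeviceMessage msg = std::move(queue_.front());
        queue_.pop();
        return msg;
    }
private:
    std::mutex mutex_;
    std::condition_variable cv_;
    std::queue<DeviceMessage> queue_;
};

int main() {
    ConcurrentQueue queue;

    // The "listener service": in the real app this read from the TCP sockets.
    std::thread listener([&] {
        for (int i = 0; i < 5; ++i)
            queue.push({ i % 2, "reading " + std::to_string(i) });
    });

    // A "ViewModel" consumer: in the real app each device's ViewModel drained
    // its own messages and updated the UI through data binding.
    std::thread consumer([&] {
        for (int i = 0; i < 5; ++i) {
            DeviceMessage msg = queue.pop();
            std::cout << "device " << msg.device_id << ": " << msg.payload << "\n";
        }
    });

    listener.join();
    consumer.join();
}
```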
But again, this is really a broad question and mine is a broad answer; a lot depends on how you have to interact with the devices. My solution balances complexity with flexibility, but many other architectures are possible.
Suppose an embedded system running a set of different software components that need to communicate with each other asynchronously. I implemented the data communication mechanism using shared memory (it's fast and simple to use).
With IPC between multiple programs built on different technologies, I ran into the problem of how to notify them in a bus-like fashion. While there are good IPC/notification mechanisms such as Unix signals, eventfd, shared semaphores, Unix sockets and so on, as far as I know all of them work as point-to-point notification systems, and I can't find any native solution for a bus-like notification system.
In a bus notification system, one can notify multiple slaves with a single bus notification, rather than creating a notification object for each slave and calling notify on all of them.
I know there are already working systems such as D-BUS, but D-BUS is considered too complex for a small embedded system, and I am looking for native solutions.
Is there any simple, lightweight, native event notification system like D-BUS in Linux/Unix?
I found that inotify could be used in such a case, but is there any other method that was designed purely for notification purposes?
EDIT:
I think multicast IPC (which was suggested in an answer here) is slightly different from publish/subscribe or a bus.
To be clear, I found that inotify could be used in this situation: one file with multiple file watchers resembles the publish/subscribe IPC pattern. I want to know whether there is any other solution to this problem.
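For illustration, the point-to-point fan-out I am trying to avoid looks roughly like this (a minimal sketch with one eventfd per slave; the slave count and the structure are just placeholders):

```cpp
// Minimal sketch: one eventfd per slave, and the master "notifies all" by writing
// to each fd in turn. A bus-like mechanism would need only a single notify call.
#include <sys/eventfd.h>
#include <unistd.h>
#include <cstdint>
#include <cstdio>
#include <vector>

int main()
{
    const int num_slaves = 3;                 // placeholder count
    std::vector<int> slave_fds;

    for (int i = 0; i < num_slaves; ++i)
        slave_fds.push_back(eventfd(0, 0));   // each slave would read/poll its own fd

    // "Notify all": one write per slave instead of a single bus notification.
    uint64_t one = 1;
    for (int fd : slave_fds)
        if (write(fd, &one, sizeof(one)) < 0)
            perror("write");

    // A slave would block in read(fd, &counter, sizeof(counter)) to wait for events.
    for (int fd : slave_fds)
        close(fd);
    return 0;
}
```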
My goal is to design and implement a portable communication stack on top of CAN.
To keep it simple, let's assume that the protocol stack I want to implement is composed of the following layers:
1) Data Link Layer: CAN driver and so on
2) Communication Layer: handles the filtering of received frames and manages the sending of periodic / event-triggered frames
3) Transport Layer: manages the segmentation of messages (the standard CAN protocol only allows frames with a length of 8 bytes)
4) Application Layer: defined by the end user
My design choice is to build the communication stack around a non-preemptive scheduler and to treat each layer as a task of that scheduler. Communication between the different layers is done using mechanisms such as mutexes and queues.
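For illustration, here is a minimal sketch of what I have in mind (a cooperative round-robin scheduler, one task per layer, and a simple queue between two of the layers; all names and sizes are placeholders and there is no real CAN driver behind it):

```cpp
// Minimal sketch of the intended design: cooperative scheduler, one task per layer,
// and a small ring buffer carrying frames from the data link layer up to the
// communication layer. All names and sizes are placeholders.
#include <array>
#include <cstddef>
#include <cstdint>

struct CanFrame { uint32_t id; uint8_t dlc; uint8_t data[8]; };

// Very small single-producer/single-consumer ring buffer between two layers.
struct FrameQueue {
    std::array<CanFrame, 16> buf{};
    size_t head = 0, tail = 0;
    bool push(const CanFrame& f) {
        size_t next = (head + 1) % buf.size();
        if (next == tail) return false;       // full
        buf[head] = f; head = next; return true;
    }
    bool pop(CanFrame& f) {
        if (tail == head) return false;       // empty
        f = buf[tail]; tail = (tail + 1) % buf.size(); return true;
    }
};

static FrameQueue rx_queue;

static void data_link_task()     { /* read the CAN hardware, rx_queue.push(frame) */ }
static void communication_task() { CanFrame f; while (rx_queue.pop(f)) { /* filter, dispatch */ } }
static void transport_task()     { /* reassemble segmented messages */ }
static void application_task()   { /* user-defined processing */ }

int main() {
    // Non-preemptive "scheduler": each layer is a task that runs to completion.
    void (*tasks[])() = { data_link_task, communication_task, transport_task, application_task };
    for (;;)
        for (auto task : tasks)
            task();
}
```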
The questions are:
1) Is this a good design, or is there a much easier one?
2) How do communication stacks really work? I mean, what is the "engine" behind the application layer? Is it a scheduler, or is the management of the communication between layers defined by the end user?
3) Could anyone point me to a free and easy implementation (ideally in C) of a communication stack (not necessarily for CAN)?
Thank you in advance
You should consider using an existing protocol on top of CAN, such as CANopen. A free implementation is CAN Festival.
Transport Layer: manages the segmentation of messages
No, this is the application layer. It doesn't make sense to handle segmentation unless you have a high-level protocol specifying which CAN identifiers to use and the nature of the data.
The application layer in this case needs to be implemented by you, not the end user. Otherwise you are not making a protocol stack, but merely some glorified CAN driver. Which identifiers are used? What is the nature of the data? What are the priorities? How are messages scheduled on the bus over time? Is the system sending data repeatedly and synchronously, or is it event-driven? Are RTR frames used, and how? And so on.
How do communication stacks really work? I mean, what is the "engine" behind the application layer? Is it a scheduler, or is the management of the communication between layers defined by the end user?
This is quite a broad question, but generally such stacks are event-driven: there is a message pump directing incoming data to whoever needs it. The CAN stack needs to implement some sort of hardware timer for a given hardware port, to keep track of message timing, but possibly also to keep track of itself.
Some stacks can take a "time slice" as a parameter and then schedule themselves, in a way. Others are built on the concept of doing as little as possible each time they are called, and instead count on being called repeatedly from the main loop. Which makes most sense depends on the end application: the former concept might make most sense in applications with an RTOS or in low-power applications, while the latter makes most sense for high-integrity, fast-response systems, for example a car ECU.
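As a rough sketch of the two calling styles described above (the function names are hypothetical and not taken from any particular stack; the bodies are stubs standing in for the real stack-internal work):

```cpp
// Sketch of the two common ways of driving a stack from the application.
#include <cstdint>

static uint32_t fake_tick = 0;
static uint32_t millis() { return ++fake_tick; }   // stand-in for a hardware tick counter

// Style 1: the stack is handed the elapsed time ("time slice") and schedules its
// own timeouts, periodic transmissions and RX processing internally.
static void can_stack_run(uint32_t elapsed_ms)
{
    (void)elapsed_ms;  // real stack: advance timers, send due periodic frames, drain RX
}

// Style 2: do as little as possible per call and rely on being called
// continuously from the main loop.
static void can_stack_poll()
{
    // real stack: handle at most one pending frame or expired timer, then return
}

int main()
{
    uint32_t last = millis();
    for (int i = 0; i < 1000; ++i) {      // bounded only so the sketch terminates
        uint32_t now = millis();
        can_stack_run(now - last);        // style 1
        last = now;
        can_stack_poll();                 // style 2, shown alongside for comparison
    }
}
```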
3) Could anyone point me to a free and easy implementation (ideally in C) of a communication stack (not necessarily for CAN)?
(Please note that asking for external resources like libraries is off-topic on SO.)
http://www.canfestival.org is a free CANopen stack. I haven't used it myself, so I have no idea of the quality.
Suppose that from the main WPF window (WMain) I create a number of instances of other windows (WA, WB, WC, ...), all of the same type WModel and each on a separate thread.
Would the following be a good idea for exchanging information between WMain and WModel?
I am considering letting WMain host a WCF service that can be called from WModel,
and also letting WModel host another WCF service that can be called from WMain.
Performance will not be an issue as the communication is limited.
There is no need to use something like WCF if all of the windows are running in the same process.
WCF is for communication between separate processes.
If you want to communicate between threads in the same process, there are plenty of patterns, ranging from something as simple as a thread-safe singleton used as a global state container, to something like an event bus that pushes events from publishers to subscribers.
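To show the shape of such an in-process event bus, here is a minimal sketch in C++ (only to illustrate the pattern; in a WPF application you would write the equivalent in C#, and each window would still have to marshal handler work onto its own dispatcher thread):

```cpp
// Minimal in-process event bus sketch: subscribers register callbacks for a topic,
// publishers push events, and a mutex makes it safe to use from multiple threads.
// Topic and payload types are placeholders.
#include <functional>
#include <iostream>
#include <map>
#include <mutex>
#include <string>
#include <vector>

class EventBus {
public:
    using Handler = std::function<void(const std::string&)>;

    void subscribe(const std::string& topic, Handler handler) {
        std::lock_guard<std::mutex> lock(mutex_);
        handlers_[topic].push_back(std::move(handler));
    }

    void publish(const std::string& topic, const std::string& payload) {
        std::vector<Handler> to_call;
        {
            std::lock_guard<std::mutex> lock(mutex_);
            auto it = handlers_.find(topic);
            if (it != handlers_.end()) to_call = it->second;  // copy so handlers run unlocked
        }
        for (auto& h : to_call) h(payload);
    }

private:
    std::mutex mutex_;
    std::map<std::string, std::vector<Handler>> handlers_;
};

int main() {
    EventBus bus;
    // A "WModel" window subscribes to updates published by "WMain".
    bus.subscribe("status", [](const std::string& msg) {
        std::cout << "WModel received: " << msg << "\n";
    });
    bus.publish("status", "hello from WMain");
}
```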
I am planning to build a microcontroller-based embedded system (a switch will be attached to the embedded system that contains this microcontroller), and this embedded system will be connected through a wire to a mobile phone. My objective is to dial a particular number over the connected mobile phone's network when the user presses the switch on the embedded system (I am planning to use AT commands for dialing). After extensive searching, I have found that it is possible to do this. Some of the questions I have about this:
a) Do we have to install any drivers on the microcontroller to communicate with the mobile phone (for sending AT commands), i.e., is it sufficient to simply code the related AT commands in the microcontroller (in C++)?
b) Many people use the F-bus protocol for this objective. Is there any other, more general protocol similar to this that can help with communicating with all mobiles (Samsung, Nokia, Sony, ...)?
I have also read extensively on SO, but I have not found any question regarding the drivers. I would appreciate any kind of help.
Thanks
A driver is nothing more than software that allows your system to interact with other devices, and it is usually associated with operating systems (the driver might provide an abstraction layer for your communication). Do you plan to use an operating system at all?
In any case, it is quite obvious that if you want to communicate with another device you need software to do so. The question is whether you write it yourself or use an "off the shelf" solution.
In many cases, particularly when a device uses a proprietary communication protocol, you have no option but to get a driver to communicate with it, and that will most likely require you to have an operating system.
If cellular communication is all you need, there are MUCH easier solutions available (particularly if you intend to turn your project into a product). Search for "embedded modems" or M2M solutions. There are lots of modems available that you connect to using RS232 and to which you can send the AT commands directly. Telit and Multitech are two providers I've worked with, and they are really easy to interface with.
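To give an idea of how little code is needed once the modem (or phone) accepts AT commands over a serial line, here is a rough sketch; on a real microcontroller the UART write would go to the hardware registers, and the phone number and function names are placeholders:

```cpp
// Rough sketch of dialing via AT commands over a serial line. The command strings
// follow the standard Hayes/3GPP AT syntax; uart_write_byte() is a placeholder that
// just prints to stdout so the sketch can be tried on a PC.
#include <cstdint>
#include <cstdio>

static void uart_write_byte(uint8_t b)
{
    std::putchar(static_cast<char>(b));   // placeholder for the real UART write
}

static void uart_write_str(const char* s)
{
    while (*s) uart_write_byte(static_cast<uint8_t>(*s++));
}

static void dial_number(const char* number)
{
    uart_write_str("AT\r");       // basic "are you there" check (response handling omitted)
    uart_write_str("ATD");        // ATD<number>; initiates the call
    uart_write_str(number);
    uart_write_str(";\r");        // trailing ';' requests a voice call rather than a data call
}

int main()
{
    dial_number("+15551234567");  // placeholder number; would be called when the switch is pressed
}
```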
I am working on a server application for an embedded ARM platform. The ARM board is connected to various digital IOs, ADCs, etc. that the system will constantly poll. It is currently running a Linux kernel, with the hardware interfaces implemented as drivers. The idea is to have a client application that can connect to the embedded device, receive the sensor data as it is updated, and issue commands to the device (shut down sensor 1, restart sensor 2, etc.). Assume that access to the sensor devices is done through typical ioctl calls.
Now my question relates to the design/architecture of this server application running on the embedded device. At first I was thinking of using something like libevent or libev, lightweight C event-handling libraries. The application would prioritize the sensor polling event (and then send the information to the client after the polling is done) and process client commands as they are received (over a typical TCP socket). The server would typically have a single connection, but may have up to a dozen or so, not something like thousands of connections. Is this the best approach to designing something like this? Of the two event-handling libraries I listed, is one better for embedded applications, or are there other alternatives?
The other approach under consideration is a multi-threaded application in which the sensor polling is done in a prioritized/blocking thread that reads the sensor data, and each client connection is handled in a separate thread. The sensor data is written into some sort of buffer/data structure, and the connection threads handle sending the data out to the clients and processing client commands (I suppose you would still need an event loop of sorts in these threads to monitor for incoming commands). Are there any libraries or typical packages that facilitate designing an application like this, or is this something you have to build from scratch?
How would you design what I am trying to accomplish?
I would use a Unix domain socket and write the library myself; I can't see any advantage to using libevent since the application is tied to Linux, and libevent is also aimed at hundreds of connections. You can do all of what you are trying to do with a single thread in your daemon. KISS.
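A minimal sketch of that single-threaded daemon shape, using one Unix domain socket and a poll() loop (the socket path, buffer size and command handling are placeholders):

```cpp
// Single-threaded daemon sketch: one Unix domain socket, poll() over the listening
// socket plus all client connections, no extra threads.
#include <sys/socket.h>
#include <sys/un.h>
#include <poll.h>
#include <unistd.h>
#include <cstdio>
#include <cstring>
#include <vector>

int main()
{
    const char* path = "/tmp/sensor_daemon.sock";   // placeholder path
    unlink(path);

    int listen_fd = socket(AF_UNIX, SOCK_STREAM, 0);
    sockaddr_un addr{};
    addr.sun_family = AF_UNIX;
    std::strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);
    if (bind(listen_fd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) < 0 ||
        listen(listen_fd, 8) < 0) {
        perror("bind/listen");
        return 1;
    }

    std::vector<pollfd> fds{ { listen_fd, POLLIN, 0 } };

    for (;;) {
        // Wait up to 100 ms, then fall through so the sensors can be polled too.
        if (poll(fds.data(), fds.size(), 100) < 0) { perror("poll"); break; }

        // New client?
        if (fds[0].revents & POLLIN) {
            int client = accept(listen_fd, nullptr, nullptr);
            if (client >= 0) fds.push_back({ client, POLLIN, 0 });
        }

        // Incoming client commands.
        for (size_t i = 1; i < fds.size(); ) {
            if (fds[i].revents & POLLIN) {
                char buf[256];
                ssize_t n = read(fds[i].fd, buf, sizeof(buf));
                if (n <= 0) {                         // client went away
                    close(fds[i].fd);
                    fds.erase(fds.begin() + i);
                    continue;
                }
                // ... parse the command (e.g. "shutdown sensor 1") and act via ioctl ...
            }
            ++i;
        }

        // ... poll the sensors here and write updated readings to every client fd ...
    }
    return 0;
}
```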
You don't need a dedicated master thread for priority queues; you just need to write your threads so that they always process high-priority events before anything else.
In terms of libraries, you may benefit from Google's Protocol Buffers (for serialization and for representing your protocol); however, it only has first-class support for C++, and the over-the-wire (serialization) format does a bit of simple bit shifting on numeric data. I doubt it will add any serious overhead. An alternative is ASN.1 (asn1c).
My suggestion would be a modified form of your second proposal. I would create a server that has two threads: one thread polling the sensors, and another for ALL of your client connections. I have used the boost::asio library on embedded devices (MIPS) with great results.
A single thread that handles all socket connections asynchronously can usually handle the load easily (of course, it depends on how many clients you have). It would then serve the data it holds in a shared buffer. To reduce the number and complexity of mutexes, I would create two buffers, one 'active' and one 'inactive', plus a flag indicating which buffer is currently active. The polling thread would read data and put it in the inactive buffer. When it finished and had created a 'consistent' state, it would flip the flag, swapping the active and inactive buffers. This can be done atomically and should therefore not require anything more complex than that.
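A minimal sketch of that double-buffer idea (plain std::atomic and std::thread here; in the real application the reader would be the asio connection thread, the names are placeholders, and the sketch assumes the reader copies the snapshot out quickly relative to the polling period):

```cpp
// Double-buffer sketch: the polling thread fills the inactive buffer, then
// atomically publishes it by flipping the index; the connection thread always
// copies out whichever buffer is currently active.
#include <array>
#include <atomic>
#include <chrono>
#include <cstdio>
#include <thread>

struct SensorSnapshot {
    std::array<int, 4> values{};   // placeholder for the real sensor readings
    unsigned sequence = 0;
};

static SensorSnapshot buffers[2];
static std::atomic<int> active_index{0};
static std::atomic<bool> running{true};

void polling_thread()
{
    unsigned seq = 0;
    while (running.load()) {
        int inactive = 1 - active_index.load();
        SensorSnapshot& snap = buffers[inactive];
        for (int& v : snap.values) v = static_cast<int>(++seq);  // stand-in for ADC/GPIO reads
        snap.sequence = seq;
        active_index.store(inactive);                            // publish the consistent snapshot
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
    }
}

void connection_thread()
{
    for (int i = 0; i < 10; ++i) {
        const SensorSnapshot snap = buffers[active_index.load()]; // copy the active buffer
        std::printf("snapshot %u: first value %d\n", snap.sequence, snap.values[0]);
        std::this_thread::sleep_for(std::chrono::milliseconds(200));
    }
    running.store(false);
}

int main()
{
    std::thread poller(polling_thread);
    std::thread server(connection_thread);
    server.join();
    poller.join();
}
```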
This would all be very simple to set up, since you would pretty much have only two threads that know nothing about each other.