There are a lot of examples of how to write realtime code for RT-Linux by FSMLabs, but that distribution was abandoned many years ago. Currently the PREEMPT_RT patch for the vanilla kernel is actively developed, but there are only a few code examples on the official wiki. First let me introduce my issue.
I'm writing a project containing 2 programs:
A byte-code virtual machine - it must work as a realtime application - it has 64 KB of I/O memory and 64 KB for byte code.
A client program - it will read and write the I/O memory, start/pause the machine, load new programs, set parameters, etc. It doesn't have to be realtime.
How can these two processes communicate so that process (1) stays realtime and avoids page faults or other behavior that can interfere with a realtime app?
Approach 1. Use only threads
There are 2 threads:
virtual machine thread with highest priority
client thread with normal priority that communicates with user and machine
Both threads have access to all global variables by name. I can create an additional buffer for incoming/outgoing data after each machine cycle. However, if the client thread crashes the application, the machine thread will terminate too. It's also more difficult to implement remote access.
Approach 2. Shared memory
In old FSMLabs RT-Linux it was recommended to use shared global memory between processes. The modern PREEMPT_RT wiki page recommends mmap() for sharing data between processes, but the same article discourages mmap() because of page faults.
Approach 3. Named pipes
It's a more flexible way to communicate between processes. However, I'm new to programming on Linux. We want to share memory between the machine and the client, but it should also provide a way to load a new program (a file path or the program code), stop/start the machine, etc. Old FSMLabs RT-Linux implemented its own FIFO queues (named pipes); modern PREEMPT_RT doesn't. Can using named pipes break realtime behavior? How do I do it properly? Should I read data with the O_NONBLOCK flag, or create another thread for reading/writing data from/to the pipe?
Do you know other ways to communicate between processes where one process must be realtime? Maybe I need only threads. However, consider a scenario in which multiple clients are connected to the virtual machine process.
For exchanging data between processes running on the same host operating system, you can also use UNIX domain sockets.
Related
I am implementing a very basic C server that allows clients to chat. Right now I am using fork(), but I am having trouble having clients see each other's messages.
It also seems that all clients get the same file descriptor from accept(). Basically, I have a while loop where I test whether someone wants to connect with select(), accept() their connection, and fork(). After that I read input and try to pass it on to all users (whom I am keeping in a list). I can copy/paste my code if necessary.
So, is it possible to have the clients communicate with processes, or do I have to use pthreads?
Inter-process communication (IPC) in general doesn't care about client vs. server roles (except during the connect phase). A given process can have both a client and a server role (on different sockets), and would use poll(2), or the older select, on several sockets in some event loop.
Notice that processes each have their own virtual address space, while threads share the same virtual address space (that of their containing process). Read some pthread tutorial, and some book on POSIX programming (perhaps the old ALP). Be aware that a lot of information regarding processes can be queried on Linux through /proc/ (see proc(5) for more). In particular, the virtual address space of the process with pid 1234 can be obtained through /proc/1234/maps, and its open file descriptors through /proc/1234/fd/, /proc/1234/fdinfo/, etc.
However, it is simpler to code a common server keeping the shared state, and dispatching messages to clients.
You could design a protocol where the clients have some way to initiate that IPC. For example, if all the processes are on the same machine, you could have a protocol which transmits a file path used as a unix(7) socket address or as a fifo(7), and each "client" process later initiates (with some connect) a direct communication with another "client". It might be unwise to do so.
Look also into libraries like 0mq. They often are free software, so you can study their source code.
I want to use a shared library with multiple processes running concurrently. My library contains UART open/write/read/close; each process writes a specific UART command and expects a related response. The application calls APIs in the library; inside each API it opens the UART port, writes the command to the UART, reads the response from the UART, processes the response buffer, and sends it back to the user (an API call takes 2 to 3 seconds to execute).
I have 30 such APIs and 5 processes running concurrently using these APIs.
How can I provide synchronisation across all these processes, such that only one process uses the UART at a time and all the others block on it?
Regards & Thanks,
Anil.
You're asking a very general question about how to co-ordinate multiple processes. This is a vast and deep subject, and there are lots of routes you could take. Here are some ideas:
1) Create a lock file in /var/lock. This will interoperate with other programs that use the serial port. When one program is done, the others race to create the lock, and a random one wins.
2) Have your library create a shared memory segment and record in it who holds the 'lock'. As with lock files, write down the PID so the others can steal the lock if the owner dies. This has the least overhead.
3) Split your serial code into a "UART control daemon" and a client library that calls the daemon. The daemon listens on a unix socket (or TCP/UDP, or other IPC) and handles the serial port exclusively. (You can easily find 'chat server' code written in any language). This has several advantages:
The daemon can tell callers how many requests are "in the queue"
The daemon can try to maintain the FIFO order, or handle priority requests if it wants.
The daemon can cache responses when multiple clients are asking the same question at once.
If you don't want the daemon running all the time, you can have xinetd start it. (Make sure it's in single-server mode.)
Instead of each process having to be linked against a serial library, each one just uses plain standard Unix sockets (or TCP).
Your API calling programs become much easier to test (no need for hardware, you can simulate responses)
If an API calling program dies, the UART isn't left in a bad state
I would like to inject a shared library into a process (I'm using ptrace() to do that part) and then be able to get output from the shared library back into the debugger I'm writing using some form of IPC. My instinct is to use a pipe, but the only real requirements are:
I don't want to store anything on the filesystem to facilitate the communication as it will only last as long as the debugger is running.
I want a portable Unix solution (so Unix-standard syscalls would be ideal).
The problem I'm running into is that as far as I can see, if I call pipe() in the debugger, there is no way to pass the "sending" end of the pipe to the target process, and vice versa with the receiving end. I could set up shared memory, but I think that would require creating a file somewhere so I could reference the memory segment from both processes. How do other debuggers capture output when they attach to a process after it has already begun running?
I assume you need a debugging/logging system for your business-logic code (i.e. the application). From my experience, this kind of problem is tackled with the system design explained below. (My experience is in C++, but the same should hold for a C-based system.)
Have a logger system (a separate process). It contains the logger manager and the logging code, which take responsibility for dumping the log to disk.
Each application instance (a process running on Unix) communicates with this logger process over sockets. So you can define your own messaging protocol and talk to the logger system using socket-based communication.
Later, give each application a switch that can turn logging off/on, so that you have a tool that sends a signal to the process to toggle message logging.
At a high level, this is the most generic way to develop a logging system. If you need more information, do comment and I will try to answer.
Using better search terms showed me this question is a duplicate of these:
Can I share a file descriptor to another process on linux or are they local to the process?
Can I open a socket and pass it to another process in Linux
How to use sendmsg() to send a file-descriptor via sockets between 2 processes?
The top answers were what I was looking for. You can use a Unix-domain socket to hand a file descriptor off to a different process. This could work either from debugger to library or vice versa, but is probably easier to do from debugger to library because the debugger can write the socket's address into the target process while it injects the library.
However, once I pass the socket's address into the target process, I might as well just use the socket itself instead of using a pipe in addition.
(This is for a low latency system)
Assuming I have some code which transfers received UDP packets into a region of shared memory, how can I then notify the application (in user mode) that it is now time to read the shared memory? I do not want the application continuously polling and eating up CPU cycles.
Is it possible to insert some code in the network stack which can call my application code immediately after it has written to the shared memory?
EDIT I added a C tag, but the application would be in C++
One way to signal an event from one Unix process to another is with POSIX semaphores. You would use sem_open to initialize and open a named semaphore that you can use cross-process.
See How can I get multiple calls to sem_open working in C?.
The lowest latency method to signal an event between processes on the same host is to spin-wait looking for a (shared) memory location to change... this avoids a system call. You expressly said you do not want the application polling, however in a multi-threaded application running on a multi-core system it may not be a bad tradeoff if you really care about latency.
Unless you are planning to use a real-time OS, there is no "immediate" protocol. CPU time is handed out in quanta of a few milliseconds, and it usually takes some time before your user thread notices it can continue.
Considering all of the above, any form of IPC would do: local sockets, signals, pipes, event descriptors, etc. The practical difference in performance would be negligible.
Furthermore, using shared memory can lead to unnecessary complications in maintenance and debugging, but that's the designer's choice.
I am working on a server application for an embedded ARM platform. The ARM board is connected to various digital IOs, ADCs, etc. that the system will continuously poll. It is currently running a Linux kernel, with the hardware interfaces implemented as drivers. The idea is to have a client application which can connect to the embedded device, receive the sensor data as it is updated, and issue commands to the device (shut down sensor 1, restart sensor 2, etc.). Assume that access to the sensor devices is done through typical ioctl calls.
Now my question relates to the design/architecture of this server application running on the embedded device. At first I was thinking to use something like libevent or libev, lightweight C event handling libraries. The application would prioritize the sensor polling event (and then send the information to the client after the polling is done) and process client commands as they are received (over a typical TCP socket). The server would typically have a single connection but may have up to a dozen or so, but not something like thousands of connections. Is this the best approach to designing something like this? Of the two event handling libraries I listed, is one better for embedded applications or are there any other alternatives?
The other approach under consideration is a multi-threaded application in which the sensor polling is done in a prioritized/blocking thread that reads the sensor data, and each client connection is handled in a separate thread. The sensor data is written into some sort of buffer/data structure, and the connection threads handle sending the data out to the client and processing client commands (I suppose you would still need an event loop of sorts in these threads to monitor for incoming commands). Are there any libraries or typical packages that facilitate designing an application like this, or is this something you have to build from scratch?
How would you design what I am trying to accomplish?
I would use a Unix domain socket and write the library myself; I can't see any advantage to using libevent since the application is tied to Linux, and libevent is aimed at hundreds of connections. You can do all of what you are trying to do with a single thread in your daemon. KISS.
You don't need a dedicated master thread for priority queues; you just need to write your threads so that they always process high-priority events before anything else.
In terms of libraries, you may benefit from Google's protocol buffers (for serialization and for representing your protocol); however, it only has first-class support for C++, and the over-the-wire (serialization) format does a bit of simple bit shifting on numeric data. I doubt it will add any serious overhead. An alternative is ASN.1 (asn1c).
My suggestion would be a modified form of your second proposal. I would create a server with two threads: one thread polling the sensors, and another for ALL of your client connections. I have used the boost::asio library on embedded devices (MIPS) with great results.
A single thread handling all socket connections asynchronously can usually handle the load easily (of course, it depends on how many clients you have). It would then serve the data it has from a shared buffer. To reduce the number and complexity of mutexes, I would create two buffers, one 'active' and one 'inactive', plus a flag indicating which buffer is active. The polling thread reads data into the inactive buffer; when it has finished and produced a consistent state, it flips the flag, swapping the active and inactive buffers. The flip can be done atomically and should therefore not require anything more complex.
This would all be very simple to set up, since you would have only two threads that know almost nothing about each other.