I want to use a shared library from multiple processes running concurrently. The library contains UART open/write/read/close routines. Each process writes a specific UART command and expects the related response. The application calls APIs in the library; inside each API we open the UART port, write the command to the UART, read the response from the UART, process the response buffer and send it back to the user (an API call takes 2 to 3 seconds to execute).
I have 30 such APIs and 5 processes running concurrently using these APIs.
How can I provide synchronisation across all these processes, such that only one process uses the UART at a time and all the others block until the UART is free?
Regards & Thanks,
Anil.
You're asking a very general question about how to co-ordinate multiple processes. This is a vast and deep subject, and there are lots of routes you could take. Here are some ideas:
1) Create a lock file in /var/lock. This will interoperate with other programs that use the serial port. When one program is done, the others will race to create the lock, and a random one will win. (A sketch follows at the end of this list.)
2) Have your library create a shared memory segment and record in it who holds the 'lock'. As with lock files, you'll want to write down the PID, so the others can steal the lock if the owner dies. This has the least amount of overhead. (A sketch follows at the end of this list.)
3) Split your serial code into a "UART control daemon" and a client library that calls the daemon. The daemon listens on a unix socket (or TCP/UDP, or other IPC) and handles the serial port exclusively. (You can easily find 'chat server' code written in any language.) This has several advantages (a client-side sketch follows the list):
The daemon can tell callers how many requests are "in the queue"
The daemon can try to maintain the FIFO order, or handle priority requests if it wants.
The daemon can cache responses when multiple clients are asking the same question at once.
If you don't want the daemon running all the time, you can have xinetd start it. (Make sure it's in single-server mode.)
Instead of each process having to be linked to a serial library, it uses the simpler standard unix sockets (or TCP).
Your API calling programs become much easier to test (no need for hardware, you can simulate responses)
If an API calling program dies, the UART isn't left in a bad state
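To make option 1 concrete, here is a minimal sketch of the lock-file approach, assuming a classic O_CREAT|O_EXCL race on a file under /var/lock. The lock-file name and retry interval are illustrative, and a real UUCP-style implementation would also check whether the recorded PID is still alive before stealing a stale lock.

```c
/* Option 1 sketch: a /var/lock lock file taken with O_CREAT|O_EXCL. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

#define UART_LOCK "/var/lock/LCK..ttyS0"    /* adjust to your device name */

static void acquire_uart_lock(void)
{
    for (;;) {
        int fd = open(UART_LOCK, O_CREAT | O_EXCL | O_WRONLY, 0644);
        if (fd >= 0) {                      /* we won the race */
            char buf[16];
            int n = snprintf(buf, sizeof buf, "%10d\n", (int)getpid());
            write(fd, buf, (size_t)n);
            close(fd);
            return;
        }
        usleep(100 * 1000);                 /* someone else holds it: wait and retry */
    }
}

static void release_uart_lock(void)
{
    unlink(UART_LOCK);
}
```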
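For option 2, one way to get the "steal the lock if the owner dies" behaviour without manual PID bookkeeping is a process-shared robust mutex kept in POSIX shared memory; this is a sketch under that assumption. The segment name is illustrative, and error handling plus the question of who runs the one-time initialisation are left out.

```c
/* Option 2 sketch: a process-shared, robust mutex in POSIX shared memory.
 * EOWNERDEAD tells the next locker that the previous owner died holding it. */
#include <errno.h>
#include <fcntl.h>
#include <pthread.h>
#include <sys/mman.h>
#include <unistd.h>

struct uart_lock { pthread_mutex_t mtx; };

static struct uart_lock *map_uart_lock(void)
{
    int fd = shm_open("/uart_lock", O_CREAT | O_RDWR, 0666);
    ftruncate(fd, sizeof(struct uart_lock));
    struct uart_lock *l = mmap(NULL, sizeof *l, PROT_READ | PROT_WRITE,
                               MAP_SHARED, fd, 0);
    close(fd);
    return l;
}

static void init_lock_once(struct uart_lock *l)   /* run exactly once, e.g. at install */
{
    pthread_mutexattr_t a;
    pthread_mutexattr_init(&a);
    pthread_mutexattr_setpshared(&a, PTHREAD_PROCESS_SHARED);
    pthread_mutexattr_setrobust(&a, PTHREAD_MUTEX_ROBUST);
    pthread_mutex_init(&l->mtx, &a);
}

static void lock_uart(struct uart_lock *l)
{
    if (pthread_mutex_lock(&l->mtx) == EOWNERDEAD)
        pthread_mutex_consistent(&l->mtx);        /* previous owner died: take over */
}

static void unlock_uart(struct uart_lock *l)
{
    pthread_mutex_unlock(&l->mtx);
}
```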
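For option 3, the client side of the library could look roughly like this; the socket path and the one-command-per-connection wire format are assumptions for illustration, not a fixed protocol.

```c
/* Option 3 sketch, client side only: each library API call becomes
 * "connect to the daemon, send the command, read the reply". */
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

static ssize_t uart_request(const char *cmd, char *reply, size_t replysz)
{
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;

    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    strncpy(addr.sun_path, "/run/uartd.sock", sizeof addr.sun_path - 1);

    if (connect(fd, (struct sockaddr *)&addr, sizeof addr) < 0) {
        close(fd);
        return -1;
    }

    write(fd, cmd, strlen(cmd));            /* the daemon serialises UART access */
    shutdown(fd, SHUT_WR);                  /* tell the daemon we are done sending */

    ssize_t n = read(fd, reply, replysz);   /* blocks until our turn in the queue */
    close(fd);
    return n;
}
```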
I am implementing a very basic C server that allows clients to chat. Right now I am using fork(), but I am having trouble getting clients to see each other's messages.
It also seems that all clients get the same file descriptor from accept(). Basically, I have a while loop where I test if someone wants to connect with select(), accept() their connection, and fork(). After that I read input and try to pass them to all users (whom I am keeping in a list). I can copy/paste my code if necessary.
So, is it possible to have the clients communicate with processes, or do I have to use pthreads?
Inter-process communication (IPC) in general doesn't care about client vs server roles (except at the connect phase). A given process can have both a client and a server role (on different sockets), and would use poll(2) or the older select on several sockets in some event loop.
Notice that processes each have their own virtual address space, while threads share the same virtual address space (that of their containing process). Read a pthreads tutorial, and some book on POSIX programming (perhaps the old ALP). Be aware that a lot of information about processes can be queried on Linux through /proc/ (see proc(5) for more). In particular, the virtual address space of the process with pid 1234 can be obtained through /proc/1234/maps and its open file descriptors through /proc/1234/fd/ and /proc/1234/fdinfo/ etc.
However, it is simpler to code a common server keeping the shared state, and dispatching messages to clients.
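A minimal sketch of such a common server, assuming a single process with one poll(2) loop and no fork(): every message read from one client is written to all the others. Listening-socket setup and error handling are omitted for brevity.

```c
/* Chat dispatch loop: fds[0] is the listening socket, the rest are clients. */
#include <poll.h>
#include <sys/socket.h>
#include <unistd.h>

#define MAX_CLIENTS 64

void chat_loop(int listen_fd)
{
    struct pollfd fds[MAX_CLIENTS + 1];
    int nfds = 1;
    fds[0].fd = listen_fd;
    fds[0].events = POLLIN;

    for (;;) {
        poll(fds, nfds, -1);

        if ((fds[0].revents & POLLIN) && nfds < MAX_CLIENTS + 1) {
            int c = accept(listen_fd, NULL, NULL);   /* new client joins */
            fds[nfds].fd = c;
            fds[nfds].events = POLLIN;
            nfds++;
        }

        for (int i = 1; i < nfds; i++) {
            if (!(fds[i].revents & POLLIN))
                continue;
            char buf[512];
            ssize_t n = read(fds[i].fd, buf, sizeof buf);
            if (n <= 0) {                            /* client left: drop it */
                close(fds[i].fd);
                fds[i] = fds[--nfds];
                i--;
                continue;
            }
            for (int j = 1; j < nfds; j++)           /* broadcast to everyone else */
                if (j != i)
                    write(fds[j].fd, buf, (size_t)n);
        }
    }
}
```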
You could design a protocol where the clients have some way to initiate that IPC. For example, if all the processes are on the same machine, you could have a protocol which transmits a file path used as a unix(7) socket address, or as a fifo(7), and each "client" process later initiates (with some connect) direct communication with another "client". It might be unwise to do so.
Look also into libraries like 0mq. They are often free software, so you can study their source code.
I am using a Linux machine to communicate with a PLC. The PLC and the Linux machine are connected within a local network and use UDP/IP as the base protocol. The port number is fixed on both sides.
The communication needs to achieve the following:
Requirement 1: the Linux machine can send commands (one command at a time) to the PLC. After each command is received, the PLC will respond to the Linux machine with a success/failure message within 50 ms.
Requirement 2: vice versa, the PLC can send commands to the Linux machine, and the Linux machine has to respond with a message within 50 ms. The PLC sends asynchronously with respect to the Linux machine, so the Linux machine needs to monitor (listen on) the port continuously.
Simple C/C++ code has been used to test the communication for each requirement separately. It worked, but it used blocking calls.
Here comes the challenging part. I would like to use pthreads for this communication. My solution is simply to create one thread for each requirement. I sketched my idea in the attached picture https://www.dropbox.com/s/vriyrprl7j6tntx/multi-thread%20solution.png?dl=0, with 'thread 0' denoting the main thread, 'thread 1' denoting the Requirement 1 thread and 'thread 2' denoting the Requirement 2 thread. 'shared data' indicates data that can be shared across all the child threads. 'thread 1 data' is dedicated to thread 1, and other threads will not access it. Likewise, 'thread 2 data' is only used by thread 2.
My concern arises because two threads will be making system calls on the same port. Hence, I need reviews of my solution, and it would be awesome to get more working solutions. P.S. I am not too worried about thread synchronization and creation, and it is totally fine with me if thread sync and creation are necessary in your solution.
Thanks in advance.
There is no general problem with two threads executing system calls on the same socket. You may encounter some specific issues, though:
If you call recvfrom() in both threads (one waiting for the PLC to send a request, and the other waiting for the PLC to respond to a command from the server), you don't know which one will receive the response. To get around this, you can dedicate one thread to reading from the PLC, and have it pass reply messages from the PLC to the sending thread using a shared queue or similar structure (see the sketch below).
You have to be careful when you close a socket that could be in use by another thread - because of the way file descriptors are reused, it's easy to have a race condition that ends up with a thread acting on the wrong socket.
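Here is a sketch of the dedicated-reader idea from the first point: only one thread calls recvfrom(), and it hands command replies to the sending thread through a one-slot mailbox protected by a mutex and condition variable. How to tell a reply apart from an unsolicited PLC request is application-specific, so msg_is_reply() and handle_plc_request() are placeholders.

```c
/* One reader thread owns the socket; replies are posted to a one-slot mailbox. */
#include <pthread.h>
#include <stdbool.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>

static pthread_mutex_t lock        = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  reply_ready = PTHREAD_COND_INITIALIZER;
static char    reply_buf[512];
static ssize_t reply_len;
static bool    have_reply;

bool msg_is_reply(const char *buf, ssize_t len);        /* placeholder */
void handle_plc_request(const char *buf, ssize_t len);  /* placeholder */

void *reader_thread(void *arg)
{
    int sock = *(int *)arg;
    char buf[512];
    for (;;) {
        ssize_t n = recvfrom(sock, buf, sizeof buf, 0, NULL, NULL);
        if (n <= 0)
            continue;
        if (msg_is_reply(buf, n)) {             /* reply to a command we sent */
            pthread_mutex_lock(&lock);
            memcpy(reply_buf, buf, (size_t)n);
            reply_len = n;
            have_reply = true;
            pthread_cond_signal(&reply_ready);
            pthread_mutex_unlock(&lock);
        } else {
            handle_plc_request(buf, n);         /* unsolicited request: answer within 50 ms */
        }
    }
    return NULL;
}

/* Called by the sending thread after sendto(): wait for the reader to post the reply. */
ssize_t wait_for_reply(char *out, size_t outsz)
{
    pthread_mutex_lock(&lock);
    while (!have_reply)
        pthread_cond_wait(&reply_ready, &lock);
    ssize_t n = reply_len < (ssize_t)outsz ? reply_len : (ssize_t)outsz;
    memcpy(out, reply_buf, (size_t)n);
    have_reply = false;
    pthread_mutex_unlock(&lock);
    return n;
}
```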
There are a lot of examples of how to write realtime code for RT-Linux by FSMLabs, but that distribution was abandoned many years ago. Currently the PREEMPT_RT patch for the vanilla kernel is actively developed, but there are only a few code examples on the official wiki. First let me introduce my issue.
I'm writing a project containing 2 programs:
A byte-code virtual machine - it must work as a realtime application - with 64 KB of I/O memory and 64 KB for byte code.
A client program - it will read and write the I/O memory, start/pause the machine, load new programs, set parameters, etc. It doesn't have to be realtime.
How should these processes communicate so that process (1) stays realtime, avoiding page faults or other behaviour that can interfere with the realtime app?
Approach 1. Use only threads
There are 2 threads:
virtual machine thread with highest priority
client thread with normal priority that communicates with the user and the machine
Both threads have access to all global variables by name. I can create an additional buffer for incoming/outgoing data after each machine cycle. However, if the client thread crashes the application, the machine thread terminates too. It is also more difficult to implement remote access.
Approach 2. Shared memory
Old FSMLabs documentation recommends using shared global memory between processes. The modern PREEMPT_RT wiki page recommends mmap() for sharing data between processes, but in the same article it discourages mmap() because of page faults.
Approach 3. Named pipes
This is a more flexible way to communicate between processes. However, I'm new to programming on Linux. We want to share memory between the machine and the client, but it should also provide a way to load a new program (file path or program code), stop/start the machine, etc. Old FSMLabs RT-Linux implemented its own FIFO queues (named pipes); modern PREEMPT_RT doesn't. Can using named pipes break realtime behaviour? How do I do it properly? Should I read data with the O_NONBLOCK flag, or create another thread for reading/writing data from/to the pipe?
Do you know of other ways to communicate between processes where one process must be realtime? Maybe I need only threads. However, consider a scenario where several clients are connected to the virtual machine process.
For exchanging data between processes executing on the same host operating system you can also use UNIX domain sockets.
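A minimal sketch of that, with the virtual machine process owning a listening UNIX domain stream socket and the client connecting to it; the socket path is illustrative and error handling is omitted.

```c
/* UNIX domain stream socket between the VM process and a client. */
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

int vm_listen(void)                      /* realtime side: create the listening socket */
{
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    strncpy(addr.sun_path, "/run/vm-control.sock", sizeof addr.sun_path - 1);
    unlink(addr.sun_path);
    bind(fd, (struct sockaddr *)&addr, sizeof addr);
    listen(fd, 4);
    return fd;                           /* accept() clients from here */
}

int client_connect(void)                 /* client side: connect to it */
{
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    strncpy(addr.sun_path, "/run/vm-control.sock", sizeof addr.sun_path - 1);
    connect(fd, (struct sockaddr *)&addr, sizeof addr);
    return fd;
}
```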
I have an ARM device running a Linux 2.6 kernel, with a total of 64 MB of RAM.
The data source is a meter that the Linux box queries over RS485, using ModBus as the application protocol.
There is another task that consists of reading these values, building a JSON object and doing an HTTP POST to a specific server.
The network operation might be slower than the serial one, especially with poor GPRS coverage.
I need concurrency; the program is written in C.
How would you implement the concurrency: using select() or using pthreads?
When analyzing this particular application there's really only one question relevant to choosing pthreads:
Do the sensor reader and network writer need to share an address space?
In this instance I think the answer is clearly "no". Of course that isn't the only possible question, but the only germane one. There are reasons to prefer separate processes:
the two halves of the application have no common code; RS485 is wildly different from HTTP/JSON
segregation of responsibility: if the RS485 side is waiting on a UART, do you really want to block the HTTP side?
letting the OS do its job so you don't have to: if using pthreads, you have to handle a lot of the synchronization and preemption that the kernel does for you for free, and code that you don't have to write has no new bugs.
Further analysis would require more detail than you've given, but here is one additional way to think about the choice: threads were invented to mitigate some limitations of the process model. Unless you know that you are going to hit those limitations, use separate processes.
added in response to comments:
I half agree with psusi's suggested design. There need only be two processes: one (let's say the sensor reader; that's a fine choice) forks one and only one HTTP sender. The two processes can communicate using traditional IPC such as a pipe. The sensor process writes data down the pipe when it has some, and the child (HTTP) process packs it up as JSON and sends it on its way.
It takes only two long-lived processes, it probably uses about the same amount of memory as a pthread implementation would, and it is far, far easier to get right.
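A sketch of that two-process design; read_sensor() and post_json() are placeholders for the real ModBus poll and HTTP POST (the latter could simply shell out to curl).

```c
/* Parent reads the sensor, child POSTs; the pipe is the only coupling. */
#include <unistd.h>

static ssize_t read_sensor(char *buf, size_t sz) { (void)buf; (void)sz; sleep(1); return 0; } /* placeholder */
static void    post_json(const char *data, size_t len) { (void)data; (void)len; }             /* placeholder */

int main(void)
{
    int pipefd[2];
    pipe(pipefd);

    if (fork() == 0) {                        /* child: the HTTP sender */
        close(pipefd[1]);
        char buf[256];
        ssize_t n;
        while ((n = read(pipefd[0], buf, sizeof buf)) > 0)
            post_json(buf, (size_t)n);        /* a slow GPRS link only stalls the child */
        return 0;
    }

    close(pipefd[0]);                         /* parent: the sensor reader */
    for (;;) {
        char buf[256];
        ssize_t n = read_sensor(buf, sizeof buf);
        if (n > 0)
            write(pipefd[1], buf, (size_t)n); /* the pipe buffers data while the child works */
    }
}
```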
select() is more efficient, because it avoids the context switching that comes with multiple threads. And threads would be more efficient than separate processes, because you avoid having to copy the data (unless you set up shared memory, but at that point you might as well have gone with threads). However, non-blocking I/O with select() is harder to write and get right, and doesn't enjoy the multitasking that comes with multiple threads. Multiple processes are likely to be the easiest implementation, especially because you can use curl rather than writing the HTTP POST half yourself.
Why do you need concurrency? Does the meter have to be polled at a strict time interval?
If the answer is YES: just use two processes; one polls the meter and writes the data to a ring buffer in NAND storage, the other reads the data from the ring buffer and sends it over HTTP.
If the answer is NO: you don't need concurrency or non-blocking I/O at all. A big loop in main() is enough.
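For the NO case, the sketch really is just a loop; poll_meter() and send_http() are placeholders for the ModBus read and the HTTP POST.

```c
/* Everything in one sequential loop: read the meter, post the data, repeat. */
#include <unistd.h>

static int  poll_meter(char *buf, size_t sz) { (void)buf; (void)sz; return 0; }  /* placeholder */
static void send_http(const char *buf, int n) { (void)buf; (void)n; }            /* placeholder */

int main(void)
{
    char buf[256];
    for (;;) {
        int n = poll_meter(buf, sizeof buf);   /* blocking serial read */
        if (n > 0)
            send_http(buf, n);                 /* blocking network write; may be slow */
        sleep(1);                              /* loose polling interval */
    }
}
```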
When writing a non-blocking program (handling multiple sockets) which at some point needs to open files using open(2), stat(2) them, or open directories using opendir(2), how can I ensure that these system calls do not block?
To me it seems that there's no other alternative than using threads or fork(2).
As Mel Nicholson replied, for everything file-descriptor based you can use select/poll/epoll. For everything else you can have a proxy thread per item (or a thread pool) with a small stack that converts (by means of the kernel scheduler) any synchronous blocking wait into a select/poll/epoll-able asynchronous event, using an eventfd or a unix pipe (where portability is required).
The proxy thread blocks until the operation completes and then writes to the eventfd or to the pipe to wake up the select/poll/epoll loop.
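A sketch of that proxy-thread pattern for stat(2), assuming a poll/epoll-based main loop: a detached worker performs the blocking call and then writes to an eventfd that the event loop is watching. The struct and function names are illustrative.

```c
/* Worker does the blocking stat(); eventfd makes completion poll-able. */
#include <pthread.h>
#include <stdint.h>
#include <sys/eventfd.h>
#include <sys/stat.h>
#include <unistd.h>

struct stat_req {
    const char *path;
    struct stat st;
    int result;
    int efd;            /* becomes readable when the request is done */
};

static void *stat_worker(void *arg)
{
    struct stat_req *req = arg;
    req->result = stat(req->path, &req->st);   /* the blocking call lives here */
    uint64_t one = 1;
    write(req->efd, &one, sizeof one);         /* wake up the event loop */
    return NULL;
}

/* Returns an fd the caller can add to its poll/epoll set; when it becomes
 * readable, req->result and req->st are valid. */
int stat_async(struct stat_req *req, const char *path)
{
    req->path = path;
    req->efd = eventfd(0, EFD_CLOEXEC);
    pthread_t tid;
    pthread_create(&tid, NULL, stat_worker, req);
    pthread_detach(tid);
    return req->efd;
}
```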
Indeed there is no other method.
Actually there is another kind of blocking that can't be dealt with other than by threads, and that is page faults. Those may happen in program code, program data, memory allocation or data mapped from files. It's almost impossible to avoid them (you can lock some pages into memory, but it's a privileged operation and would probably backfire by making the kernel do a poor job of memory management elsewhere). So:
You can't really weed out every last chance of blocking for a particular client, so don't bother with the likes of open and stat. The network will probably add larger delays than these functions anyway.
For optimal performance you should have enough threads that some can be scheduled while others are blocked on a page fault or a similar difficult blocking point.
Also, if you need to read and process (or process and write) data while handling a network request, it's faster to access the file via memory mapping, but that is blocking and can't be made non-blocking. So modern network servers tend to stick with blocking calls for most things and simply have enough threads to keep the CPU busy while other threads are waiting for I/O.
The fact that most modern servers are multi-core is another reason why you need multiple threads anyway.
You can use poll() to check any number of sockets for data using a single thread.
See poll(2) for the Linux details, or man poll for the details on your system.
open() and stat() will block in the thread they are called from on all POSIX-compliant systems, unless called via an asynchronous tactic (like in a fork).