C/C++ code using pthreads to execute sync and async communications

I am using a Linux machine to communicate with a PLC. The PLC and the Linux machine are connected within a local network and use UDP/IP as the base protocol. The port number is fixed on both sides.
Such a communication needs to achieve:
Requirement 1: The Linux machine can send commands (one command at a time) to the PLC. After receiving each command, the PLC responds to the Linux machine with a success/failure message within 50 ms.
Requirement 2: Vice versa, the PLC can send commands to the Linux machine. The Linux machine has to respond with a message within 50 ms. The PLC sends asynchronously with respect to the Linux machine, so the Linux machine needs to monitor (or listen on) the port continuously.
Simple C/C++ code has been used to test the communication separately for each of the aforementioned requirements. It worked, but it used a blocking mechanism.
Here comes the challenging part. I would like to use pthreads for this communication. My solution is simply to create one thread for each requirement. I sketched my idea in the attached picture https://www.dropbox.com/s/vriyrprl7j6tntx/multi-thread%20solution.png?dl=0, with 'thread 0' denoting the main thread, 'thread 1' denoting the Requirement 1 thread, and 'thread 2' denoting the Requirement 2 thread. 'Shared data' indicates data that can be shared across all the child threads. 'Thread 1 data' is dedicated to thread 1, and other threads will not access it. Likewise, 'thread 2 data' is only used by thread 2.
My concern arises because two threads will be making system calls on the same port. Hence, I need a review of my solution, and it would be awesome to get more working solutions. P.S. I am not too worried about thread synchronization and creation, and it is totally fine with me if your solution requires them.
Thanks in advance.

There is no general problem with two threads executing system calls on the same socket. You may encounter some specific issues, though:
If you call recvfrom() in both threads (one waiting for the PLC to send a request, and the other waiting for the PLC to respond to a command from the Linux machine), you don't know which one will receive the response. To get around this, you can dedicate one thread to reading from the PLC and have it pass reply messages to the sending thread using a shared queue or similar structure (a sketch follows below).
You have to be careful when you close a socket that could be in use by another thread - because of the way file descriptors are reused, it's easy to have a race condition that ends up with a thread acting on the wrong socket.
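As a rough illustration of the first point, here is a minimal sketch in C of a dedicated receiver thread that demultiplexes the shared UDP socket. The is_reply() classifier, the fixed message size, and the one-slot hand-off are assumptions for illustration; a real implementation would also want pthread_cond_timedwait() to enforce the 50 ms reply deadline.

```c
/* Sketch: one dedicated receiver thread owns all recvfrom() calls on
 * the shared UDP socket.  Replies to commands we sent are handed to
 * the sending thread through a one-deep slot guarded by a mutex and
 * condition variable; everything else is a new request from the PLC.
 * is_reply() and MSG_MAX are protocol-specific assumptions. */
#include <pthread.h>
#include <sys/types.h>
#include <sys/socket.h>

#define MSG_MAX 512

struct msg { char buf[MSG_MAX]; ssize_t len; };

static struct msg reply_slot;          /* one-deep queue, for simplicity */
static int reply_ready = 0;
static pthread_mutex_t mtx = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cv  = PTHREAD_COND_INITIALIZER;

static int  is_reply(const char *buf, ssize_t len);        /* protocol-specific */
static void handle_plc_request(const char *buf, ssize_t len);

/* The only thread that ever calls recvfrom() on the socket. */
void *receiver_thread(void *arg)
{
    int sock = *(int *)arg;
    struct msg m;

    for (;;) {
        m.len = recvfrom(sock, m.buf, sizeof m.buf, 0, NULL, NULL);
        if (m.len < 0)
            continue;                          /* real code: check errno */
        if (is_reply(m.buf, m.len)) {          /* Requirement 1 reply */
            pthread_mutex_lock(&mtx);
            reply_slot = m;
            reply_ready = 1;
            pthread_cond_signal(&cv);
            pthread_mutex_unlock(&mtx);
        } else {
            handle_plc_request(m.buf, m.len);  /* Requirement 2 path */
        }
    }
    return NULL;
}

/* Called by the command-sending thread after sendto(): block until the
 * receiver thread hands over the PLC's success/failure reply. */
void wait_for_reply(struct msg *out)
{
    pthread_mutex_lock(&mtx);
    while (!reply_ready)
        pthread_cond_wait(&cv, &mtx);
    *out = reply_slot;
    reply_ready = 0;
    pthread_mutex_unlock(&mtx);
}
```

With this split, sendto() can still be called from the command thread concurrently; only the reads are funneled through one place, which removes the ambiguity about which thread receives which datagram.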

Related

Execution Pattern of Multi-Threaded Server on Linux

I'd like to know what the execution pattern of a server's multiple threads should be to implement the TCP request-response cycle of a high-performance server (e.g., handling dozens of packets with a single system call, or none, on Linux using packet MMAP or some other way).
Design 1) For simplicity, start two threads in main at the start of the server program: one thread just gets packets directly from the network interface(s) like wlan0/eth0, and once a number of packets have been read in one cycle (using a while loop with poll() on Linux), it wakes the other thread with a condition-variable signal. After waking up, the other (sender) thread processes the packets and sends them as TCP responses.
Design 2) Start the receiver thread at the start of the main program. The receiver thread reads packets from the interfaces using a while loop and poll(). When a number of packets have been received, it creates a sender thread and passes the number of packets received in that cycle as a parameter. The sender thread processes the packets and responds with the TCP responses.
(I think Design 2 will be easier to implement, but is there any design issue or possible performance issue with this approach? That is the question.) Since the buffer passed from the receiver thread to the sender thread needs to be allocated before packets are received, I know the size of the buffer to allocate. Also, in this execution pattern I am creating a new thread every time I get a batch of packets from the interfaces (each thread returns and ends execution after processing its packets and sending the TCP responses), so I'd like to know what the performance cost of that will be.
In the first approach I never create more than two threads (or a limited number of threads, which can be tracked easily for logging and debugging since I know how many were created initially). In the second approach I don't know how many threads are hanging around and executing concurrently.
I'd appreciate any advice on how real websites like YouTube or others may have handled this in their high-performance servers, if they followed this way of implementing their front-facing servers.
First, when going to a 'real' website, the magic lies in having load balancers and a whole bunch of worker nodes to take the load; you easily exceed the boundary of a single system. For example, take a look at the following AWS reference architecture for serving web pages at scale: AWS Cloud Architecture for serving web whitepaper.
That being said, taking this one level down, it is always interesting to look at how other well-known products have solved this issue. For example, NGINX has an excellent infographic and matching blog post describing their architecture and threading.

Multi threaded embedded linux application state machine design

Problem definition:
We are designing an application for an industrial embedded system running Linux.
The system is driven by events from the outside world. The inputs to the system could be any of the following:
A few inputs to the system in the form of digital I/O lines (connected to the GPIOs of the processor, like an e-stop).
The system runs a web server, which allows the system to be controlled via a web browser.
The system runs a TCP server. Any PC or HMI device could send commands over TCP/IP.
The system needs to drive or control RS485 slave devices over UART using Modbus. The system also needs to control a few I/O lines, like cooler ON/OFF. We believe that a state machine is essential to define this application. The core application shall be a multi-threaded application with the following threads:
Main thread
Thread to control the RS485 slaves.
Thread to handle events from the Web interface.
Thread to handle digital I/O events.
Thread to handle commands over TCP/IP (sockets)
For inter-thread communication, we are using pthread condition signal & wait. As per our initial design approach (one state machine in the main thread), any input event to the system (web, TCP/IP, or digital I/O) shall be relayed to the main thread, which shall communicate it to the appropriate thread for which the event is destined. A typical scenario would be getting the status of an RS485 slave through the web interface. In this case, the web interface thread relays the event to the main thread, which changes state and then communicates the event to the thread that controls the RS485 slaves and responds back. The main thread then sends the response back to the web interface thread.
Questions:
1) Should each thread have its own state machine, thereby reducing the complexity of the main thread? In that case, do we still need a state machine in the main thread?
2) Can a thread processing an input event communicate directly with the thread that handles that event, bypassing the main thread? E.g., could the web interface thread communicate directly with the thread controlling the RS485 slaves?
3) Is it fine to use pthread condition signals & wait for inter-thread communication, or is there a better approach?
4) How can we have one thread wait both for events from outside and for responses from other threads? E.g., the web interface thread usually waits for events on a POSIX message queue used for inter-process communication with the web server's CGI bins. The CGI bins send events to the web interface thread through this message queue. While processing such an event, the web interface thread would wait for responses from other threads; in that situation it couldn't process any new event from the web interface until it had finished processing the previous event and returned to waiting on the POSIX message queue.
Sorry for the overly long explanation; I hope I have put it forward in the best possible way for others to understand and help me.
I could give more inputs if needed.
What I always try to do with such requirements is to use one state machine, run by one 'SM' thread, which could be the main thread. This thread waits on an 'EventQueue' producer-consumer input queue with a timeout. The timeout is used to run an internal delta-queue that can deliver timeout events into the state machine when they are required.
All other threads communicate their events to the state engine by pushing messages onto the EventQueue, and the SM thread processes them in a serial manner.
If an action routine in the SM decides that it must do something, it must not synchronously wait for anything, so it must request the action by pushing a request message onto the input queue of whatever thread/subsystem can perform it.
My message class (OK, struct in your C case) typically contains a 'command' enum, a 'result' enum, a data buffer pointer (in case it needs to transport bulk data), an error-message pointer (null if no error), and as much other state as is necessary to allow the asynchronous queueing up of any kind of request and the returning of the complete result (whether success or failure).
This message-passing, one-SM design is the only one I have found that is capable of doing such tasks in a flexible, expandable manner without entering a nightmare world of deadlocks, uncontrolled communications, and unrepeatable, undebuggable interactions.
The first question that should be asked about any design is: 'OK, how can the system be debugged if there is some strange problem?'. In my design above, I can answer straightaway: 'we log all events dequeued in the SM thread - they all come in serially, so we always know exactly what actions are taken based on them'. If any other design is suggested, ask the above question and, if a good answer is not immediately forthcoming, the design will never be gotten working.
So:
If a thread, or threaded subsystem, can use a separate state machine for its own INTERNAL functionality, fine. These SMs should be invisible to the rest of the system.
NO!
Use pthread condition signals & wait to implement producer-consumer blocking queues (a minimal sketch follows this list).
One input queue per thread/subsystem. All inputs go to this queue in the form of messages. Commands/state in each message identify the message and what should be done with it.
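For reference, here is a minimal sketch of point 3: a producer-consumer blocking queue built from one mutex and two condition variables. The event struct fields mirror the message layout described above; the queue depth and the names are illustrative, not from the original answer.

```c
/* Sketch: bounded blocking queue using pthread mutex + condition
 * variables.  Initialize mtx/not_empty/not_full with
 * pthread_mutex_init()/pthread_cond_init() before use. */
#include <pthread.h>

#define QDEPTH 32

struct event { int command; int result; void *data; const char *err; };

struct event_queue {
    struct event items[QDEPTH];
    int head, tail, count;
    pthread_mutex_t mtx;
    pthread_cond_t not_empty, not_full;
};

/* Producer side: any thread pushes its event and returns. */
void eq_push(struct event_queue *q, struct event ev)
{
    pthread_mutex_lock(&q->mtx);
    while (q->count == QDEPTH)                 /* block while full */
        pthread_cond_wait(&q->not_full, &q->mtx);
    q->items[q->tail] = ev;
    q->tail = (q->tail + 1) % QDEPTH;
    q->count++;
    pthread_cond_signal(&q->not_empty);
    pthread_mutex_unlock(&q->mtx);
}

/* Consumer side: only the SM thread calls this. */
struct event eq_pop(struct event_queue *q)
{
    pthread_mutex_lock(&q->mtx);
    while (q->count == 0)                      /* block while empty */
        pthread_cond_wait(&q->not_empty, &q->mtx);
    struct event ev = q->items[q->head];
    q->head = (q->head + 1) % QDEPTH;
    q->count--;
    pthread_cond_signal(&q->not_full);
    pthread_mutex_unlock(&q->mtx);
    return ev;
}
```

Because only the SM thread calls eq_pop(), every event arrives serially and can be logged in order, which is exactly the debuggability property argued for above.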
BTW, I would 100% do this in C++ unless shotgun-at-head :)
I have implemented a legacy embedded library that was originally written for a clone (EC115/EC270) of the Siemens ES122C terminal controller. This library and OS included more or less what you describe. The original hardware was based on an 80186 CPU. The OS (RMOS for Siemens, FXMOS for us; don't google it, it was never published) had all the stuff needed for basic controller work.
It had preemptive multi-tasking, task-to-task communication, semaphores, timers and I/O events, but no memory protection.
I ported that stuff to the Raspberry Pi (i.e., Linux).
I used pthreads to simulate our legacy "tasks": because we had no memory protection anyway, threads are semantically the closest.
The rest of the implementation then turned around the epoll API. This means that everything generates an event: a timer expires, another thread sends data, a TCP socket is connected, an I/O pin changes state, etc.
This requires that all the event sources be transformed into file descriptors. Linux provides several syscalls that do exactly that:
for task-to-task communication I used classic Unix pipes.
for timer events I used the timerfd API.
for TCP communication I used normal sockets.
for serial I/O I simply opened the right device, /dev/???.
signals were not necessary in my case, but Linux provides signalfd if needed.
I then wrapped epoll_wait to simulate the original semantics.
It works like a charm.
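For illustration, here is a minimal sketch of such an epoll-centred loop, multiplexing a timerfd and the read end of a pipe through one epoll_wait() call. The one-second timer and the buffer sizes are arbitrary choices for the sketch.

```c
/* Sketch: one event loop, every event source is a file descriptor. */
#include <stdint.h>
#include <sys/epoll.h>
#include <sys/timerfd.h>
#include <unistd.h>

void event_loop(int pipe_rd)          /* pipe_rd: task-to-task messages */
{
    int ep  = epoll_create1(0);
    int tfd = timerfd_create(CLOCK_MONOTONIC, 0);
    struct itimerspec its = { .it_interval = {1, 0}, .it_value = {1, 0} };
    timerfd_settime(tfd, 0, &its, NULL);        /* fire every second */

    struct epoll_event ev = { .events = EPOLLIN };
    ev.data.fd = tfd;
    epoll_ctl(ep, EPOLL_CTL_ADD, tfd, &ev);
    ev.data.fd = pipe_rd;
    epoll_ctl(ep, EPOLL_CTL_ADD, pipe_rd, &ev);

    for (;;) {
        struct epoll_event evs[8];
        int n = epoll_wait(ep, evs, 8, -1);     /* block for any event */
        for (int i = 0; i < n; i++) {
            if (evs[i].data.fd == tfd) {
                uint64_t expirations;
                read(tfd, &expirations, sizeof expirations);
                /* handle timer event */
            } else if (evs[i].data.fd == pipe_rd) {
                char buf[256];
                read(pipe_rd, buf, sizeof buf);
                /* handle message from another thread */
            }
        }
    }
}
```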
TL;DR
Take a deep look at the epoll API; it does what you probably need.
EDIT: Yes, and the advice of Martin James is very good, especially point 4: each thread should only ever be in a loop waiting on an event via epoll_wait.

multiple unrelated process synchronisation UART

I want to use a shared library with multiple processes running concurrently. My library handles UART open/write/read/close. Each process writes a specific UART command and expects the related response. The application calls APIs in the lib; inside the API it opens the UART port, writes the command to the UART, reads the response from the UART, processes the response buffer, and sends it back to the user. (An API call takes 2 to 3 seconds to execute.)
I have 30 such APIs and 5 processes running concurrently using these APIs.
How can I provide synchronisation across all these processes, such that only one process uses the UART at a time and all the others block on it?
Regards & Thanks,
Anil.
You're asking a very general question about how to co-ordinate multiple processes. This is a vast and deep subject, and there are lots of routes you could take. Here are some ideas:
1) Create a lock file in /var/lock. This will interoperate with other programs that use the serial port. When one program is done, the others race to create the lock, and a random one wins. (A sketch follows this answer.)
2) Have your library create a shared memory segment and record in it who holds the 'lock'. As with lock files, you'll want to record the PID so the others can steal the lock if the owner dies. This has the least overhead.
3) Split your serial code into a "UART control daemon" and a client library that calls the daemon. The daemon listens on a Unix socket (or TCP/UDP, or other IPC) and handles the serial port exclusively. (You can easily find 'chat server' code written in any language.) This has several advantages:
The daemon can tell callers how many requests are "in the queue"
The daemon can try to maintain the FIFO order, or handle priority requests if it wants.
The daemon can cache responses when multiple clients are asking the same question at once.
If you don't want the daemon running all the time, you can have xinetd start it. (Make sure it's in single-server mode.)
Instead of each process having to be linked against a serial library, it uses simpler standard Unix sockets (or TCP).
Your API calling programs become much easier to test (no need for hardware, you can simulate responses)
If an API calling program dies, the UART isn't left in a bad state
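As a minimal sketch of option 1, advisory locking with flock() on a file under /var/lock can serialize the processes, and the kernel releases the lock automatically if the holder dies. Note this is one variant; the classic serial-port convention writes the owner's PID into the lock file instead, as described above. The lock path here is made up.

```c
/* Sketch: cross-process serialization of UART access via an advisory
 * file lock.  Call uart_lock() at the start of each API function and
 * uart_unlock() at the end; all other processes block in flock(). */
#include <fcntl.h>
#include <sys/file.h>
#include <unistd.h>

int uart_lock(void)
{
    /* hypothetical lock path; pick one matching your system's convention */
    int fd = open("/var/lock/mylib-uart.lock", O_CREAT | O_RDWR, 0666);
    if (fd < 0)
        return -1;
    if (flock(fd, LOCK_EX) < 0) {   /* blocks until the holder releases */
        close(fd);
        return -1;
    }
    return fd;                      /* hold this fd for the whole API call */
}

void uart_unlock(int fd)
{
    flock(fd, LOCK_UN);
    close(fd);                      /* closing would release it anyway */
}
```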

c linux multithreading networking

I have a network application on a gateway. It receives and sends packets. For most of them, my gateway acts as a router, but in some cases it is itself the destination of the packets.
Should I have:
only one main thread
a main thread + a dispatch thread in charge of giving it to the correct flow handler
as many threads as there are flows
something else?
Doing multithreading correctly is no simple matter; in many cases a select()-and-friends-based solution will be a whole lot easier to create.
Your case sounds a lot like a typical Unix service daemon. The popular solution to your problem is not to use threads, but forked processes.
The idea is that your program listens on the socket and waits for connections. As soon as a connection arrives, it forks. The child process then continues to process the connection, while the parent process just continues in the loop and waits for incoming connections.
Advantages over threading:
Very simple program design
No problems with concurrency
Established method for Unix/Linux systems
Disadvantages:
Things get complicated when several connections interact with each other (your use case doesn't sound like they would)
Performance penalty on Windows systems (not on Unix systems!)
You can find many code examples online.
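For reference, here is a minimal sketch of the accept-then-fork loop described above; error handling and the actual connection processing are omitted.

```c
/* Sketch: classic Unix accept-then-fork daemon loop. */
#include <signal.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

void serve(int listen_fd)
{
    signal(SIGCHLD, SIG_IGN);           /* let the kernel reap children */
    for (;;) {
        int conn = accept(listen_fd, NULL, NULL);
        if (conn < 0)
            continue;                   /* real code: check errno */
        pid_t pid = fork();
        if (pid == 0) {                 /* child: handle this connection */
            close(listen_fd);
            /* ... read/write on conn ... */
            close(conn);
            _exit(0);
        }
        close(conn);                    /* parent: back to accept() */
    }
}
```

Note that the parent closes its copy of the connected socket immediately and the child closes the listening socket, so each process holds only the descriptor it actually needs.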
I don't know much about networking applications, but I think it's like this:
If you have the ability to react asynchronously to the requests, you would probably use just one single thread (as in Node.js). If you can't react asynchronously, the main thread will always block the other actions.
If you are not able to react asynchronously to your requests, you have to use more than one thread. You could achieve that in many different ways: create a thread for every request, or create a limited number of threads and assign them to your requests.
My personal preference is to use one main thread and one worker thread per connection, with no cap whatsoever. I am assuming that your server will be stateless, like an HTTP server.
For stateful servers you will have to figure out some way to control the number of threads.

C multithreading deadlocks and thread events

I am trying to perform multithreading on a socket in C in order to develop a connector between two different software applications. I would like it to work in the following manner.
One piece of software will run as the server. It performs a variety of functions, including listening for a socket connection on a designated port. This software functions by itself and only uses data from the connected network socket when the connection is established and receiving reliable data. So for this piece, I would like to listen for a connection, fork a process when one is made, and, when data is received from this socket, set some variable that another update thread will use to notify it that this extra precision information is available for consideration.
On the other side of this equation, I want to create a program that, when it boots up, will attempt to connect to the port of the other application. Once connected, it will simply call a function that sends out the information in a non-blocking fashion.
My whole goal is to create a connector that allows the programmers of the other two pieces of code to feel as though they aren't dealing with a socket whatsoever.
I have been able to get multi-threaded socket communication going, but I am now trying to modify it so it is usable as described, and I am confused about how to avoid concurrent access to the variable that notifies the server side that data has arrived, as well as how to create the non-blocking interaction on the client side. Any help will be appreciated.
-TJ
The question is not so clear to me, but if you need to make different pieces of software talk to each other easily, you can consider using a messaging framework library like ZeroMQ (www.zeromq.org).
It seems like you have a double producer-consumer problem here:
Client side: producer -> sender thread
Server side: receiver thread -> consumer thread
In this case, the most useful data structure to use is a blocking queue on both sides, like Intel TBB's concurrent_bounded_queue.
This allows you to post tasks from one thread and have another thread pull the data when it's available in a thread-safe manner.
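Since the question is tagged C, here is a sketch of the client-side wiring under the assumption of a blocking-queue API (bq_push/bq_pop, analogous to concurrent_bounded_queue; both hypothetical here). The application-facing send_data() only enqueues, so it returns immediately, which gives the non-blocking behaviour asked for.

```c
/* Sketch: non-blocking send via a queue and a dedicated sender thread.
 * bq_push/bq_pop are assumed blocking-queue primitives, e.g. built on
 * pthread mutex + condition variables as sketched earlier on this page. */
#include <stdlib.h>
#include <sys/socket.h>

struct bq;                               /* opaque blocking queue type */
void  bq_push(struct bq *q, void *item); /* blocks only if the queue is full */
void *bq_pop(struct bq *q);              /* blocks until an item arrives */

struct out_msg { char buf[256]; size_t len; };

static struct bq *out_q;                 /* created during library init (not shown) */

/* API exposed to the application programmer: enqueue and return. */
void send_data(struct out_msg *m)
{
    bq_push(out_q, m);
}

/* Dedicated sender thread: the only code that blocks on the socket. */
void *sender_thread(void *arg)
{
    int sock = *(int *)arg;
    for (;;) {
        struct out_msg *m = bq_pop(out_q);
        send(sock, m->buf, m->len, 0);   /* real code: frame and check result */
        free(m);
    }
    return NULL;
}
```

The same queue pattern on the server side also answers the "avoid multiple access to that variable" concern: the receiver thread pushes, the consumer thread pops, and the queue's internal mutex is the only synchronization either side needs.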
