Adding gtk graphics to existing console application - shared memory? - c

I have a concurrent application (a simulation of an airport) built with the System V IPC library (semaphores, message queues) and multiple processes.
I'm not allowed to use threads, which is why I have an 'airport' process and multiple 'plane' processes.
I would like to add some graphics showing the traffic at the airport, using GTK (with Cairo).
How can I add the graphics? When I tried to add them to the airport process, gtk_main() blocked the whole application. I thought about creating another process and drawing the graphics into shared memory, but I've read that that's not going to work.
What is the easiest/the best option?
Thank you very much!

It sounds like you should make a separate GUI process that the other processes can send messages to. One way to do this would be for your GUI process to export a DBus interface that the other processes can connect to. This way, when your GUI process receives a message from another process, your GTK main loop will emit a signal, and you can schedule a signal handler to deal with it and update the GUI accordingly.
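For illustration, here is a minimal sketch of that shape using a plain FIFO instead of D-Bus: the GUI process watches the FIFO from inside the GTK main loop with g_io_add_watch(), so gtk_main() can block without ignoring the other processes. The FIFO path and the plane_update struct are invented for the example.

    /* Minimal sketch, not the only way: a separate GUI process that watches
     * a FIFO written by the 'plane' processes. */
    #include <gtk/gtk.h>
    #include <fcntl.h>
    #include <unistd.h>

    struct plane_update { int plane_id; double x, y; };   /* hypothetical */

    static gboolean on_fifo_readable(GIOChannel *src, GIOCondition cond,
                                     gpointer user_data)
    {
        struct plane_update upd;
        int fd = g_io_channel_unix_get_fd(src);

        if (read(fd, &upd, sizeof upd) == (ssize_t)sizeof upd)
            gtk_widget_queue_draw(GTK_WIDGET(user_data)); /* repaint via the Cairo "draw" handler */
        return TRUE;                                      /* keep the watch installed */
    }

    int main(int argc, char **argv)
    {
        gtk_init(&argc, &argv);
        GtkWidget *win = gtk_window_new(GTK_WINDOW_TOPLEVEL);
        /* ... create a drawing area, connect its "draw" signal, etc. ... */

        int fd = open("/tmp/airport_gui.fifo", O_RDONLY | O_NONBLOCK);
        GIOChannel *ch = g_io_channel_unix_new(fd);
        g_io_add_watch(ch, G_IO_IN, on_fifo_readable, win);

        gtk_widget_show_all(win);
        gtk_main();   /* blocks here, but the watch fires inside this loop */
        return 0;
    }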

Related

Multi-threaded embedded Linux application state machine design

Problem definition:
We are designing an application for an industrial embedded system running Linux.
The system is driven by events from the outside world. The inputs to the system could be any of the following:
A few digital I/O lines connected to the GPIOs of the processor (e.g. an e-stop).
A web server, which allows the system to be controlled via a web browser.
A TCP server: any PC or HMI device can send commands over TCP/IP.
The system needs to drive or control RS485 slave devices over UART using Modbus. It also needs to control a few I/O lines, such as Cooler ON/OFF.
We believe that a state machine is essential to define this application. The core application shall be a multi-threaded application with the following threads:
Main thread
Thread to control the RS485 slaves.
Thread to handle events from the Web interface.
Thread to handle digital I/O events.
Thread to handle commands over TCP/IP (sockets)
For inter-thread communication, we are using pthread condition signal & wait. As per our initial design approach (one state machine in the main thread), any input event to the system (web, TCP/IP, or digital I/O) shall be relayed to the main thread, which shall communicate it to the thread the event is destined for. A typical scenario would be getting the status of an RS485 slave through the web interface: the web interface thread shall relay the event to the main thread, which shall change state, communicate the event to the thread that controls the RS485 slaves, and collect the response. The main thread shall then send the response back to the web interface thread.
Questions:
1. Should each thread have its own state machine, thereby reducing the complexity of the main thread? In that case, do we still need a state machine in the main thread?
2. Can a thread processing an input event communicate directly with the thread that handles the event, bypassing the main thread? For example, could the web interface thread communicate directly with the thread controlling the RS485 slaves?
3. Is it fine to use pthread condition signal & wait for inter-thread communication, or is there a better approach?
4. How can we have one thread wait both for events from outside and for responses from other threads? For example, the web interface thread usually waits for events on a POSIX message queue (inter-process communication from the web server's CGI bins). The CGI bins send events to the web interface thread through this message queue. While processing such an event, the web interface thread waits for responses from other threads; during that time it cannot process any new event from the web interface until it has finished the previous event and returned to waiting on the POSIX message queue.
Sorry for the long explanation... I hope I have presented the problem clearly enough for others to understand and help me.
I can give more details if needed.
What I always try to do with such requirements is to use one state machine, run by one 'SM' thread, which could be the main thread. This thread waits on an 'EventQueue' input producer-consumer queue with a timeout. The timeout is used to run an internal delta-queue that can feed timeout events into the state machine when they are required.
All other threads communicate their events to the state engine by pushing messages onto the EventQueue, and the SM thread processes them in a serial manner.
If an action routine in the SM decides that it must do something, it must not synchronously wait for anything, so it must request the action by pushing a request message onto the input queue of whatever thread/subsystem can perform it.
My message class (OK, struct in your C case) typically contains a 'command' enum, a 'result' enum, a data buffer pointer (in case it needs to transport bulk data), an error-message pointer (NULL if no error), and as much other state as is necessary to allow the asynchronous queueing up of any kind of request and the return of the complete result (whether success or failure). Something like the sketch below.
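A rough C rendering of that message; the command and result values are illustrative, not from any particular codebase:

    #include <stddef.h>

    typedef enum { CMD_GET_STATUS, CMD_SET_OUTPUT, CMD_TIMEOUT } command_t;
    typedef enum { RES_NONE, RES_OK, RES_FAIL } result_t;

    typedef struct message {
        command_t       command;    /* what the sender wants done     */
        result_t        result;     /* filled in by the handler       */
        void           *data;       /* bulk data, if any              */
        size_t          data_len;
        char           *error_msg;  /* NULL if no error               */
        struct message *next;       /* intrusive link for the queue   */
    } message_t;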
This message-passing, single-SM design is the only one I have found that is capable of doing such tasks in a flexible, expandable manner without entering a nightmare world of deadlocks, uncontrolled communications and unrepeatable, undebuggable interactions.
The first question that should be asked about any design is: 'OK, how can the system be debugged if there is some strange problem?'. In my design above, I can answer straight away: 'We log all events dequeued by the SM thread. They all come in serially, so we always know exactly what actions are taken based on them.' If any other design is suggested, ask the above question and, if a good answer is not immediately forthcoming, the design will never be made to work.
So:
1. If a thread, or threaded subsystem, can use a separate state machine for its own internal functionality, fine. These SMs should be invisible to the rest of the system.
2. NO!
3. Use pthread condition signal & wait to implement producer-consumer blocking queues (a sketch follows below).
4. One input queue per thread/subsystem. All inputs go to this queue in the form of messages. The command/state in each message identifies the message and what should be done with it.
BTW, I would 100% do this in C++ unless shotgun-at-head :)
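A minimal sketch of the blocking EventQueue itself, built on a pthread mutex and condition variable and reusing message_t from the sketch above (for the delta-queue timeout, pthread_cond_timedwait() would replace pthread_cond_wait()):

    #include <pthread.h>

    typedef struct {
        message_t      *head, *tail;
        pthread_mutex_t lock;
        pthread_cond_t  not_empty;
    } event_queue_t;

    void queue_push(event_queue_t *q, message_t *m)   /* any thread */
    {
        m->next = NULL;
        pthread_mutex_lock(&q->lock);
        if (q->tail) q->tail->next = m; else q->head = m;
        q->tail = m;
        pthread_cond_signal(&q->not_empty);
        pthread_mutex_unlock(&q->lock);
    }

    message_t *queue_pop(event_queue_t *q)            /* SM thread only */
    {
        pthread_mutex_lock(&q->lock);
        while (q->head == NULL)
            pthread_cond_wait(&q->not_empty, &q->lock);
        message_t *m = q->head;
        q->head = m->next;
        if (q->head == NULL) q->tail = NULL;
        pthread_mutex_unlock(&q->lock);
        return m;
    }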
I have implemented a legacy embedded library that was originally written for a clone (EC115/EC270) of the Siemens ES122C terminal controller. This library and OS included more or less what you describe. The original hardware was based on an 80186 CPU. The OS, RMOS for Siemens, FXMOS for us (don't google it, it was never published), had all the stuff needed for basic controller work.
It had preemptive multi-tasking, task-to-task communication, semaphores, timers and I/O events, but no memory protection.
I ported that stuff to the Raspberry Pi (i.e. Linux).
I used pthreads to simulate our legacy "tasks": since we had no memory protection anyway, threads are semantically the closest match.
The rest of the implementation then revolves around the epoll API. This means that everything generates an event. An event is when something happens: a timer expires, another thread sends data, a TCP socket connects, an I/O pin changes state, etc.
This requires that all the event sources be turned into file descriptors. Linux provides several syscalls that do exactly that:
For task-to-task communication I used classic Unix pipes.
For timer events I used the timerfd API.
For TCP communication I used normal sockets.
For serial I/O I simply opened the right device, /dev/???.
Signals were not necessary in my case, but Linux provides signalfd if needed.
I then wrapped epoll_wait to simulate the original semantics.
It works like a charm.
TL;DR
Take a deep look at the epoll API; it does what you probably need.
EDIT: Yes, and the advice of Martin James is very good, especially point 4. Each thread should only ever be in a loop waiting on an event via epoll_wait.
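A minimal sketch of that loop shape, with a timerfd and a pipe (standing in for another task) feeding a single epoll_wait() call:

    #include <sys/epoll.h>
    #include <sys/timerfd.h>
    #include <unistd.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        int ep = epoll_create1(0);

        /* Timer event source: fires every second. */
        int tfd = timerfd_create(CLOCK_MONOTONIC, 0);
        struct itimerspec its = { .it_interval = {1, 0}, .it_value = {1, 0} };
        timerfd_settime(tfd, 0, &its, NULL);

        /* Task-to-task event source: another thread writes to pipefd[1]. */
        int pipefd[2];
        pipe(pipefd);

        struct epoll_event ev = { .events = EPOLLIN };
        ev.data.fd = tfd;       epoll_ctl(ep, EPOLL_CTL_ADD, tfd, &ev);
        ev.data.fd = pipefd[0]; epoll_ctl(ep, EPOLL_CTL_ADD, pipefd[0], &ev);

        for (;;) {
            struct epoll_event events[8];
            int n = epoll_wait(ep, events, 8, -1);
            for (int i = 0; i < n; i++) {
                if (events[i].data.fd == tfd) {
                    uint64_t expirations;
                    read(tfd, &expirations, sizeof expirations);
                    printf("timer event\n");
                } else if (events[i].data.fd == pipefd[0]) {
                    char buf[64];
                    ssize_t len = read(pipefd[0], buf, sizeof buf);
                    printf("task message, %zd bytes\n", len);
                }
            }
        }
    }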

c linux multithreading networking

I have a network application on a gateway. It receives and sends packets. For most of them, my gateway acts as a router; in some cases, packets can also be destined for the gateway itself.
Should I have:
only one main thread,
a main thread plus a dispatch thread in charge of handing each packet to the correct flow handler,
as many threads as there are flows,
or something else?
Doing multithreading correctly is no simple matter; in many cases a solution based on select() and friends will be a whole lot easier to build.
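For illustration, a bare-bones version of the select()-based shape that answer has in mind, multiplexing a listening socket and its clients in one thread (error handling omitted):

    #include <sys/select.h>
    #include <sys/socket.h>
    #include <unistd.h>

    void serve(int listen_fd)
    {
        fd_set master, readable;
        int max_fd = listen_fd;

        FD_ZERO(&master);
        FD_SET(listen_fd, &master);

        for (;;) {
            readable = master;
            select(max_fd + 1, &readable, NULL, NULL, NULL);

            for (int fd = 0; fd <= max_fd; fd++) {
                if (!FD_ISSET(fd, &readable))
                    continue;
                if (fd == listen_fd) {              /* new connection */
                    int c = accept(listen_fd, NULL, NULL);
                    FD_SET(c, &master);
                    if (c > max_fd)
                        max_fd = c;
                } else {                            /* data (or EOF) on a client */
                    char buf[512];
                    ssize_t n = read(fd, buf, sizeof buf);
                    if (n <= 0) {
                        close(fd);
                        FD_CLR(fd, &master);
                    }
                    /* else: route/handle the packet here */
                }
            }
        }
    }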
Your case sounds a lot like a typical Unix service daemon. The popular solution to your problem is not to use threads, but forks.
The idea is that your program listens on a socket and waits for connections. As soon as a connection arrives, it forks. The child process then continues to process the connection, while the parent process just continues the loop and waits for incoming connections.
Advantages over threading:
Very simple program design
No problems with concurrency
Established method for Unix/Linux systems
Disadvantages:
Things get complicated when several connections interact with each other (it doesn't sound like your use case needs that)
Performance penalty on Windows systems (not on Unix systems!)
You can find many code examples online.
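In sketch form, the accept-and-fork loop looks like this (SIGCHLD is ignored so finished children do not linger as zombies):

    #include <sys/socket.h>
    #include <signal.h>
    #include <unistd.h>

    void serve(int listen_fd)
    {
        signal(SIGCHLD, SIG_IGN);      /* let the kernel reap children */

        for (;;) {
            int conn = accept(listen_fd, NULL, NULL);
            if (conn < 0)
                continue;
            if (fork() == 0) {         /* child: handle one connection */
                close(listen_fd);
                /* ... read/write on conn ... */
                close(conn);
                _exit(0);
            }
            close(conn);               /* parent: back to accepting */
        }
    }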
I don't know much about networking applications, but I think it's like this:
If you have the ability to react asynchronously to the requests, you can probably use just one single thread (as in Node.js). If you are not able to react asynchronously, the main thread will always block the other actions.
If you cannot react asynchronously to your requests, you have to use more than one thread. You can achieve that in many different ways: you could create a thread for every request, or create a limited number of threads and assign requests to them.
My personal preference is to use one main thread and one worker thread per connection, with no cap whatsoever. I am assuming that your server will be stateless, like an HTTP server.
For stateful servers you will have to figure out some way to control the number of threads.
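A sketch of that shape; the connection descriptor is heap-allocated so each worker thread owns its own copy:

    #include <pthread.h>
    #include <sys/socket.h>
    #include <stdlib.h>
    #include <unistd.h>

    static void *worker(void *arg)
    {
        int conn = *(int *)arg;
        free(arg);
        /* ... read/write on conn, serve the request ... */
        close(conn);
        return NULL;
    }

    void serve(int listen_fd)
    {
        for (;;) {
            int *conn = malloc(sizeof *conn);
            *conn = accept(listen_fd, NULL, NULL);
            if (*conn < 0) {
                free(conn);
                continue;
            }
            pthread_t tid;
            pthread_create(&tid, NULL, worker, conn);
            pthread_detach(tid);    /* no join: the thread cleans up itself */
        }
    }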

Parallel processing in Linux

I'm not sure how to go about handling asynchronous tasks in a program I am writing and I'm hoping someone more experienced can at least point me in the right direction.
I'm running Angstrom Linux on an embedded ARM processor. My program controls several servos through exposed hardware PWM, and a camera over PTP. Additionally, it is a socket daemon which takes commands from an arbitrary client (Android in this instance). The camera PTP is slow, and I don't want to wait around for it to finish its task, because the rest of the program needs to stay responsive.
I've tried threads, but any problem in the camera thread seems to kill the whole process. Ideally I want to send the camera off on its own to do its thing, and when it is finished, let the main function know. Is this an appropriate use of forking, or have I implemented threading improperly?
Additionally, I would like to stay away from large secondary libraries, to avoid any more cross-compiling issues than I already have. Thanks in advance for any suggestions.
Your problem sounds like a classic case for multiple processes, communicating with inter-process communications (IPC) of some sort.
The camera should have its own process, and if that process dies, the main process should not have a problem. You could even have the init(8) process manage the camera process; that can automatically restart the process if it dies for any reason.
You could set up a named pipe permanently, and then the camera process could re-open it any time it restarts after failure.
Here is some documentation about named pipes:
http://www.tldp.org/LDP/lpg/node15.html
I found this from the Wikipedia page:
http://en.wikipedia.org/wiki/Named_pipe
I searched StackOverflow and found a discussion of named pipes vs. sockets:
IPC performance: Named Pipe vs Socket
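In sketch form (the FIFO path is invented): the main process creates and reads the FIFO, and the camera process reopens it for writing each time it starts:

    #include <sys/stat.h>
    #include <fcntl.h>

    #define FIFO_PATH "/tmp/camera_status"     /* hypothetical */

    /* main process, at startup */
    int setup_fifo(void)
    {
        mkfifo(FIFO_PATH, 0600);               /* harmless if it already exists */
        return open(FIFO_PATH, O_RDONLY | O_NONBLOCK);
    }

    /* camera process, each time it starts (including after a crash restart) */
    int camera_open_fifo(void)
    {
        return open(FIFO_PATH, O_WRONLY);      /* blocks until a reader exists */
    }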
Take the basic method of steveha's answer, but skip init(8) and the named pipes.
fork() a child containing your camera code and communicate through regular pipes or Unix domain sockets. Code a signal handler for SIGCHLD in the parent. If the child dies, interrogate the reason with the status returned by wait(). If it died on its own, clean up and restart it; if it ended normally, do whatever is appropriate in that case. Communicate with the child through whichever IPC you end up choosing. This gives you more control over the child than init(8) does, and domain sockets or pipes in particular will be easier to set up for parent-child communication than messing with the funky semantics of FIFOs. A sketch follows below.
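A minimal sketch of this approach under those assumptions; camera_main() here is just a stub standing in for the real camera/PTP code:

    #include <sys/wait.h>
    #include <signal.h>
    #include <string.h>
    #include <unistd.h>

    /* Stub for the real camera work: write results to the pipe, then exit. */
    static void camera_main(int write_fd) { write(write_fd, "done", 4); }

    static volatile sig_atomic_t child_died = 0;
    static void on_sigchld(int sig) { (void)sig; child_died = 1; }

    static pid_t start_camera(int *read_fd)
    {
        int fds[2];
        pipe(fds);
        pid_t pid = fork();
        if (pid == 0) {              /* child: run only the camera code */
            close(fds[0]);
            camera_main(fds[1]);
            _exit(0);
        }
        close(fds[1]);               /* parent keeps the read end */
        *read_fd = fds[0];
        return pid;
    }

    int main(void)
    {
        struct sigaction sa;
        memset(&sa, 0, sizeof sa);
        sa.sa_handler = on_sigchld;
        sigaction(SIGCHLD, &sa, NULL);

        int cam_fd;
        pid_t cam_pid = start_camera(&cam_fd);

        for (;;) {
            pause();   /* stand-in for the daemon's real select/poll loop,
                          which would also watch cam_fd and the sockets */
            if (child_died) {
                int status;
                child_died = 0;
                if (waitpid(cam_pid, &status, WNOHANG) == cam_pid) {
                    close(cam_fd);
                    if (WIFSIGNALED(status))      /* crashed: restart it */
                        cam_pid = start_camera(&cam_fd);
                    else                          /* finished normally */
                        break;
                }
            }
        }
        return 0;
    }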
Of course, if there are real problems with the camera code, all you have really done is make the failures somewhat more manageable by not taking down the whole program. Ideally you should get the camera code to work flawlessly, if that is within your power.
I've tried threads, but any problem in the camera thread seems to kill the whole process.
When you say it kills the whole process, what actually happens?
I put it to you that you are better off debugging the above problem than trying to wrap the bug away in a forked process. You would rather have a reliable system including a reliable camera than a reliable core system with an unreliable camera.

Can anyone think of a simple way to attribute a socket to the thread that created it?

I'm trying to attribute an already created socket to a particular thread on the Windows platform.
I'm trying to do this without debug APIs and stack traces. It may come down to diverting the ws2_32 library calls.
You can't; Windows does not keep track of this information. Sockets (and all kernel objects) are created by processes without regard for which thread created them. GDI objects have thread affinity, but kernel objects don't.

What have you used sysv/posix message queues for?

I've never seen any project or anything else utilizing POSIX or SysV message queues, and I'm curious: what problems or projects have you used them for?
I had a series of commands that needed to be executed in order, but the main program flow did not depend on their completion so I queued them up and passed them to another process via a System V message queue to be executed independently of the main program. Since message queues provide an asynchronous communications protocol, they were a good fit for this task.
To be honest, I used System V message queues because I had never used them before and I wanted to. I'm sure there are other IPC methods I could have used.
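In outline, that pattern might have looked something like this (the queue key and message layout are invented for the example):

    #include <sys/ipc.h>
    #include <sys/msg.h>
    #include <string.h>
    #include <stdlib.h>

    struct cmd_msg {
        long mtype;               /* required by System V, must be > 0 */
        char command[256];
    };

    /* producer (main program): fire and forget */
    void queue_command(const char *cmd)
    {
        int qid = msgget(ftok("/tmp", 'q'), IPC_CREAT | 0600);
        struct cmd_msg m = { .mtype = 1 };
        strncpy(m.command, cmd, sizeof m.command - 1);
        msgsnd(qid, &m, sizeof m.command, 0);
    }

    /* consumer (separate process): executes commands in arrival order */
    void run_commands(void)
    {
        int qid = msgget(ftok("/tmp", 'q'), IPC_CREAT | 0600);
        struct cmd_msg m;
        while (msgrcv(qid, &m, sizeof m.command, 1, 0) >= 0)
            system(m.command);    /* blocks here; the main program doesn't */
    }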
It's been a while since I've done any real VxWorks programming, but you can also find message queues used in VxWorks applications. According to the VxWorks Application Programmer's Guide (Google search), the primary intertask communication mechanism within a single CPU is message queues. VxWorks uses two message queue subroutine libraries (POSIX and VxWorks).
I once wrote a text-mode I/O generator utility that had one thread in charge of updating the UI and a number of worker threads to do the actual I/O work. When a worker thread completed an I/O, it sent an update message to the UI thread. I implemented this message system using a POSIX message queue.
Why implement it like this? It sounded like a good idea at the time, and I was curious about how they worked. I figured I could solve the problem and learn something at the same time. There were many different techniques I could have used, and I don't suppose there was any profound reason why I chose this technique. I didn't realize it until later, but I was glad I used a POSIX queue when I had to port the utility to another system (it was also POSIX compliant, so I didn't have to worry about porting external libraries to get my app to run).
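In outline, that worker-to-UI reporting looks something like this with the POSIX mq_* API (the queue name and message format are invented; link with -lrt on older Linux systems):

    #include <mqueue.h>
    #include <fcntl.h>

    #define QUEUE_NAME "/io_gen_ui"            /* hypothetical */

    struct ui_update { int worker_id; long bytes_done; };

    /* worker thread: report a completed I/O and carry on */
    void report_progress(mqd_t q, int id, long bytes)
    {
        struct ui_update u = { .worker_id = id, .bytes_done = bytes };
        mq_send(q, (const char *)&u, sizeof u, 0);
    }

    /* UI thread: block until some worker reports, then repaint */
    void ui_loop(void)
    {
        struct mq_attr attr = { .mq_maxmsg = 10,
                                .mq_msgsize = sizeof(struct ui_update) };
        mqd_t q = mq_open(QUEUE_NAME, O_CREAT | O_RDONLY, 0600, &attr);
        struct ui_update u;
        while (mq_receive(q, (char *)&u, sizeof u, NULL) >= 0) {
            /* ... update the text-mode UI from u.worker_id / u.bytes_done ... */
        }
    }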
You can certainly use them for IPC, since that is what they are. With this mechanism you can write multi-process event-processing applications in which all the processes use the queue, each waiting for a particular type of message (a particular event). When such a message arrives, the process takes it, processes it, and puts the result back onto the queue so that another process can use it.
I once wrote such an application using message queues. They are pretty easy to work with and do not need inter-process synchronization mechanisms such as semaphores. You can use them in place of shared memory or memory-mapped files as well: in situations where all you need is to send a structure or some kind of packed data to other processes, message queues are far easier to use than any other IPC mechanism.
This book contains all the information you need to know about message queues and other IPC mechanisms in Linux.
