How can a dbus user get notified when the dbus service it is using exits/crashes/restarts?
The tutorial suggests there is a way to do this, but in the specification I only found a signal intended for the name owner.
The dbus tutorial says:
Names have a second important use, other than routing messages. They are used to track lifecycle. When an application exits (or crashes), its connection to the message bus will be closed by the operating system kernel. The message bus then sends out notification messages telling remaining applications that the application's names have lost their owner. By tracking these notifications, your application can reliably monitor the lifetime of other applications.
The dbus specification has a section about the NameLost signal:
org.freedesktop.DBus.NameLost
This signal is sent to a specific application when it loses ownership of a name.
One way to find this out is to listen for the org.freedesktop.DBus.NameOwnerChanged signal, as specified in the D-Bus specification.
Your client needs some logic to analyse the arguments of the signal and figure out when a name has been claimed, when a service has been restarted, when it has gone away, etc. But the above signal is how you receive the relevant information.
In your handler function you can check whether the name argument matches the service name you want to know about. If the old_owner argument is empty, then the service has just claimed the name on the bus. If new_owner is empty, then the service has gone away from the bus (for whatever reason).
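For example, here is a minimal sketch using the low-level libdbus C API (any binding exposes the same signal); com.example.SomeService is just a placeholder for the name you want to watch:

/* Minimal sketch: watch NameOwnerChanged for one well-known name.
 * Build with: gcc watch.c $(pkg-config --cflags --libs dbus-1) */
#include <stdio.h>
#include <dbus/dbus.h>

int main(void)
{
    DBusError err;
    dbus_error_init(&err);

    DBusConnection *conn = dbus_bus_get(DBUS_BUS_SESSION, &err);
    if (!conn) { fprintf(stderr, "connect failed: %s\n", err.message); return 1; }

    /* Ask the bus daemon to send us NameOwnerChanged signals for that name only. */
    dbus_bus_add_match(conn,
        "type='signal',sender='org.freedesktop.DBus',"
        "interface='org.freedesktop.DBus',member='NameOwnerChanged',"
        "arg0='com.example.SomeService'", &err);
    dbus_connection_flush(conn);

    while (dbus_connection_read_write(conn, -1)) {
        DBusMessage *msg;
        while ((msg = dbus_connection_pop_message(conn)) != NULL) {
            if (dbus_message_is_signal(msg, "org.freedesktop.DBus", "NameOwnerChanged")) {
                const char *name, *old_owner, *new_owner;
                if (dbus_message_get_args(msg, &err,
                        DBUS_TYPE_STRING, &name,
                        DBUS_TYPE_STRING, &old_owner,
                        DBUS_TYPE_STRING, &new_owner,
                        DBUS_TYPE_INVALID)) {
                    if (*old_owner == '\0')
                        printf("%s appeared (owner %s)\n", name, new_owner);
                    else if (*new_owner == '\0')
                        printf("%s went away\n", name);
                    else
                        printf("%s restarted: %s -> %s\n", name, old_owner, new_owner);
                }
            }
            dbus_message_unref(msg);
        }
    }
    return 0;
}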
I'm trying to write a program which will interact with VLC over D-Bus.
When an instance of VLC is running I can execute things like this in the shell
qdbus org.mpris.MediaPlayer2.vlc /org/mpris/MediaPlayer2 org.mpris.MediaPlayer2.Player.Pause
qdbus org.mpris.MediaPlayer2.vlc /org/mpris/MediaPlayer2 org.mpris.MediaPlayer2.Player.Play
VLC pauses and resumes as expected. Great.
What if there is more than one instance of VLC running? How do I choose which instance the command is sent to? I know its PID. The D-Bus client doesn't have to be qdbus.
No.
Every D-Bus connection gets a unique name, and you can ask for another name later using the org.freedesktop.DBus.RequestName call, but that has to be unique as well. See the "message bus names" part of the spec. Note that one process can create multiple connections to the bus (and thus have multiple names associated with it).
When you make a D-Bus function call you use a service name, an object path, an interface name and a method name. The first one is used by the bus daemon to route your message, and the service itself decides how to interpret the path/interface/method/parameters part of the message.
You can go the other way from the PID: the bus daemon implements the org.freedesktop.DBus interface, which provides org.freedesktop.DBus.GetConnectionUnixProcessID. You could iterate over all connections (the ListNames method) and compare each connection's PID with the one you have. This does not guarantee a one-to-one mapping.
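A rough sketch of that iteration with libdbus; the target PID and the org.mpris. prefix filter are illustrative, not required:

#include <stdio.h>
#include <string.h>
#include <dbus/dbus.h>

/* Ask the bus daemon which PID owns a given name (0 on failure). */
static dbus_uint32_t pid_of_name(DBusConnection *conn, const char *name)
{
    DBusMessage *call = dbus_message_new_method_call(
        "org.freedesktop.DBus", "/org/freedesktop/DBus",
        "org.freedesktop.DBus", "GetConnectionUnixProcessID");
    dbus_message_append_args(call, DBUS_TYPE_STRING, &name, DBUS_TYPE_INVALID);

    DBusMessage *reply = dbus_connection_send_with_reply_and_block(conn, call, -1, NULL);
    dbus_message_unref(call);
    if (!reply) return 0;

    dbus_uint32_t pid = 0;
    dbus_message_get_args(reply, NULL, DBUS_TYPE_UINT32, &pid, DBUS_TYPE_INVALID);
    dbus_message_unref(reply);
    return pid;
}

int main(void)
{
    dbus_uint32_t target_pid = 12345;   /* the VLC PID you already know (placeholder) */
    DBusConnection *conn = dbus_bus_get(DBUS_BUS_SESSION, NULL);

    /* List every name on the bus... */
    DBusMessage *call = dbus_message_new_method_call(
        "org.freedesktop.DBus", "/org/freedesktop/DBus",
        "org.freedesktop.DBus", "ListNames");
    DBusMessage *reply = dbus_connection_send_with_reply_and_block(conn, call, -1, NULL);
    dbus_message_unref(call);

    char **names; int n;
    dbus_message_get_args(reply, NULL, DBUS_TYPE_ARRAY, DBUS_TYPE_STRING,
                          &names, &n, DBUS_TYPE_INVALID);

    /* ...and keep the MPRIS-looking ones whose owner has the PID we want. */
    for (int i = 0; i < n; i++)
        if (strncmp(names[i], "org.mpris.", 10) == 0 &&
            pid_of_name(conn, names[i]) == target_pid)
            printf("VLC with pid %u owns %s\n", (unsigned)target_pid, names[i]);

    dbus_free_string_array(names);
    dbus_message_unref(reply);
    return 0;
}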
Problem definition:
We are designing an application for an industrial embedded system running Linux.
The system is driven by events from the outside world. The inputs to the system could be any of the following:
A few inputs to the system in the form of digital I/O lines (connected to the GPIOs of the processor, like an e-stop).
The system runs a web server which allows the system to be controlled via a web browser.
The system runs a TCP server. Any PC or HMI device could send commands over TCP/IP.
The system needs to drive or control RS485 slave devices over UART using Modbus. The system also needs to control a few I/O lines, like Cooler ON/OFF, etc.
We believe that a state machine is essential to define this application. The core application shall be a multi-threaded application with the following threads:
Main thread
Thread to control the RS485 slaves.
Thread to handle events from the Web interface.
Thread to handle digital I/O events.
Thread to handle commands over TCP/IP(Sockets)
For inter-thread communication, we are using pthread condition signal & wait. As per our initial design approach (one state machine in the main thread), any input event to the system (web, TCP/IP or digital I/O) shall be relayed to the main thread, which shall communicate it to the appropriate thread for which the event is destined. A typical scenario would be getting the status of an RS485 slave through the web interface. In this case, the web interface thread shall relay the event to the main thread, which shall change state, communicate the event to the thread that controls the RS485 slaves and collect the response. The main thread shall then send the response back to the web interface thread.
Questions:
Should each thread have its own state machine, thereby reducing the complexity of the main thread? In that case, do we still need a state machine in the main thread?
Can a thread that processes an input event communicate directly with the thread that handles the event, bypassing the main thread? For example, could the web interface thread communicate directly with the thread controlling the RS485 slaves?
Is it fine to use pthread condition signal & wait for inter-thread communication, or is there a better approach?
How can we have one thread wait both for events from outside and for responses from other threads? For example, the web interface thread usually waits for events on a POSIX message queue used for inter-process communication with the web server CGI bins. The CGI bins send events to the web interface thread through this message queue. While processing such an event, the web interface thread would wait for responses from other threads. In that situation, it couldn't process any new event from the web interface until it had finished processing the previous event and got back to waiting on the POSIX message queue.
Sorry for the long explanation... I hope I have put it forward in the best possible way for others to understand and help me.
I could give more inputs if needed.
What I always try to do with such requirements is to use one state machine, run by one 'SM' thread, which could be the main thread. This thread waits on an 'EventQueue' input producer-consumer queue with a timeout. The timeout is used to run an internal delta-queue that can feed timeout events into the state machine when they are required.
All other threads communicate their events to the state engine by pushing messages onto the EventQueue, and the SM thread processes them in a serial manner.
If an action routine in the SM decides that it must do something, it must not synchronously wait for anything, so it must request the action by pushing a request message onto the input queue of whatever thread/subsystem can perform it.
My message class (OK, struct in your C case) typically contains a 'command' enum, a 'result' enum, a data buffer pointer (in case it needs to transport bulk data), an error-message pointer (null if no error), and as much other state as is necessary to allow any kind of request to be queued up asynchronously and the complete result returned (whether success or fail).
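A sketch of what such a message struct might look like in C; the names and fields here are illustrative, not prescriptive:

#include <stddef.h>

/* One event/request/response message, queued between threads. */
typedef enum { CMD_GET_SLAVE_STATUS, CMD_SET_OUTPUT, CMD_TIMEOUT /* ... */ } command_t;
typedef enum { RES_PENDING, RES_OK, RES_FAIL } result_t;

typedef struct event_msg {
    command_t   command;      /* what is being requested                 */
    result_t    result;       /* filled in by whoever handles it         */
    void       *data;         /* optional bulk payload                   */
    size_t      data_len;
    const char *error;        /* NULL if no error                        */
    int         reply_to;     /* id of the queue the reply should go to  */
    struct event_msg *next;   /* intrusive link for the queue            */
} event_msg_t;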
This message-passing, one SM design is the only one I have found that is capable of doing such tasks in a flexible, expandable manner without entering into a nightmare world of deadlocks, uncontrolled communications and unrepeatable, undebuggable interactions.
The first question that should be asked about any design is 'OK, how can the system be debugged if there is some strange problem?'. In my design above, I can answer straight away: 'we log all events dequeued in the SM thread - they all come in serially, so we always know exactly what actions are taken based on them'. If any other design is suggested, ask the above question and, if a good answer is not immediately forthcoming, it will never be made to work.
So:
If a thread, or threaded subsystem, can use a separate state machine to do its own INTERNAL functionality, OK, fine. These SMs should be invisible to the rest of the system.
NO!
Use the pthread condition signals & wait to implement producer-consumer blocking queues.
One input queue per thread/subsystem. All inputs go to this queue in the form of messages. Commands/state in each message identify the message and what should be done with it.
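For points 3 and 4, a minimal sketch of such a producer-consumer blocking queue in C, reusing the hypothetical event_msg_t sketched above:

#include <pthread.h>

/* Blocking queue built on a mutex and a condition variable. */
typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t  not_empty;
    event_msg_t    *head, *tail;
} event_queue_t;

void queue_init(event_queue_t *q)
{
    pthread_mutex_init(&q->lock, NULL);
    pthread_cond_init(&q->not_empty, NULL);
    q->head = q->tail = NULL;
}

/* Producers (web, TCP, digital I/O threads) push events; never blocks for long. */
void queue_push(event_queue_t *q, event_msg_t *m)
{
    m->next = NULL;
    pthread_mutex_lock(&q->lock);
    if (q->tail) q->tail->next = m; else q->head = m;
    q->tail = m;
    pthread_cond_signal(&q->not_empty);
    pthread_mutex_unlock(&q->lock);
}

/* The single state-machine thread pops events one at a time. */
event_msg_t *queue_pop(event_queue_t *q)
{
    pthread_mutex_lock(&q->lock);
    while (q->head == NULL)
        pthread_cond_wait(&q->not_empty, &q->lock);
    event_msg_t *m = q->head;
    q->head = m->next;
    if (q->head == NULL) q->tail = NULL;
    pthread_mutex_unlock(&q->lock);
    return m;
}

A timed variant of queue_pop using pthread_cond_timedwait gives you the timeout mentioned above for driving the delta-queue.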
BTW, I would 100% do this in C++ unless shotgun-at-head :)
I have implemented a legacy embedded library that was originally written for a clone (EC115/EC270) of the Siemens ES122C terminal controller. This library and OS included more or less what you describe. The original hardware was based on an 80186 CPU. The OS, RMOS for Siemens, FXMOS for us (don't google it, it was never published), had all the stuff needed for basic controller work.
It had preemptive multi-tasking, task-to-task communication, semaphores, timers and I/O events, but no memory protection.
I ported that stuff to the Raspberry Pi (i.e. Linux).
I used pthreads to simulate our legacy "tasks": since we had no memory protection, threads are semantically the closest match.
The rest of the implementation then turned around the epoll API. This means that everything generates an event. An event is when something happens: a timer expires, another thread sends data, a TCP socket is connected, an I/O pin changes state, etc.
This requires that all the event sources be turned into file descriptors. Linux provides several syscalls that do exactly that:
for task-to-task communication I used classic Unix pipes.
for timer events I used the timerfd API.
for TCP communication I used normal sockets.
for serial I/O I simply opened the right device /dev/???.
signals were not necessary in my case, but Linux provides signalfd if needed.
I then wrapped epoll_wait to simulate the original semantics.
It works like a charm.
TL;DR
take a deep look at the epoll API; it does what you probably need.
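As an illustration of that structure (not the author's actual code), here is a single loop multiplexing a pipe (task-to-task messages) and a timerfd:

#include <stdio.h>
#include <stdint.h>
#include <unistd.h>
#include <sys/epoll.h>
#include <sys/timerfd.h>

int main(void)
{
    int msg_pipe[2];
    if (pipe(msg_pipe) < 0) return 1;            /* another thread writes to msg_pipe[1] */

    int tfd = timerfd_create(CLOCK_MONOTONIC, 0);
    struct itimerspec its = { .it_interval = { 1, 0 }, .it_value = { 1, 0 } };
    timerfd_settime(tfd, 0, &its, NULL);         /* fire once a second */

    int ep = epoll_create1(0);
    struct epoll_event ev = { .events = EPOLLIN };
    ev.data.fd = msg_pipe[0];
    epoll_ctl(ep, EPOLL_CTL_ADD, msg_pipe[0], &ev);
    ev.data.fd = tfd;
    epoll_ctl(ep, EPOLL_CTL_ADD, tfd, &ev);

    for (;;) {
        struct epoll_event out[8];
        int n = epoll_wait(ep, out, 8, -1);      /* block until any event source fires */
        for (int i = 0; i < n; i++) {
            if (out[i].data.fd == tfd) {
                uint64_t expirations;
                if (read(tfd, &expirations, sizeof expirations) > 0)
                    printf("timer fired %llu time(s)\n", (unsigned long long)expirations);
            } else if (out[i].data.fd == msg_pipe[0]) {
                char buf[256];
                ssize_t len = read(msg_pipe[0], buf, sizeof buf);
                printf("got %zd bytes from another task\n", len);
            }
        }
    }
}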
EDIT: Yes, and the advice from Martin James is very good, especially point 4. Each thread should only ever be in a loop waiting on an event via epoll_wait.
We've got a situation where it would be advantageous to limit write access to a logging directory to a specific subset of user processes. These particular processes (say, for example, telnet and the like) have been modified by us to generate a logging record whenever a significant user action takes place (like a remote connection, etc). What we do not want is for the user to manually create these records by copying and editing existing logging records.
syslog comes close but still allows the user to generate spurious records; SELinux seems plausible but has a terrible reputation as an unmanageable beast.
Any insight is appreciated.
Run a local logging daemon as root. Have it listen on a Unix domain socket (typically /var/run/my-logger.socket or similar).
Write a simple logging library, where event messages are sent to the locally running daemon via the Unix domain socket. With each event, also send the process credentials via an ancillary message. See man 7 unix for details.
When the local logging daemon receives a message, it checks for the ancillary message, and if none, discards the message. The uid and gid of the credentials tell exactly who is running the process that has sent the logging request; these are verified by the kernel itself, so they cannot be spoofed (unless you have root privileges).
Here comes the clever bit: the daemon also checks the PID in the credentials and, based on its value, /proc/PID/exe. That is a symlink to the actual binary being executed by the process that sent the message, something the user cannot fake. To be able to fake a message, they would have to overwrite the actual binaries with their own, and that should require root privileges.
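A sketch of what the daemon's receive path might look like, assuming a SOCK_DGRAM socket already bound to /var/run/my-logger.socket with SO_PASSCRED enabled via setsockopt; the allow-listed path is a placeholder and error handling is trimmed:

#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/uio.h>

void handle_one_event(int sock)
{
    char text[1024];
    char cbuf[CMSG_SPACE(sizeof(struct ucred))];

    struct iovec iov = { .iov_base = text, .iov_len = sizeof text - 1 };
    struct msghdr msg = {
        .msg_iov = &iov, .msg_iovlen = 1,
        .msg_control = cbuf, .msg_controllen = sizeof cbuf,
    };

    ssize_t len = recvmsg(sock, &msg, 0);
    if (len <= 0)
        return;
    text[len] = '\0';

    /* Find the SCM_CREDENTIALS ancillary data; the kernel fills it in because
     * SO_PASSCRED is enabled, so the sender cannot spoof it. */
    struct ucred *cred = NULL;
    for (struct cmsghdr *c = CMSG_FIRSTHDR(&msg); c; c = CMSG_NXTHDR(&msg, c))
        if (c->cmsg_level == SOL_SOCKET && c->cmsg_type == SCM_CREDENTIALS)
            cred = (struct ucred *)CMSG_DATA(c);
    if (!cred)
        return;                                   /* no credentials: discard */

    /* Resolve /proc/PID/exe to see which binary sent the event. */
    char link[64], exe[4096];
    snprintf(link, sizeof link, "/proc/%d/exe", (int)cred->pid);
    ssize_t n = readlink(link, exe, sizeof exe - 1);
    if (n < 0)
        return;
    exe[n] = '\0';

    if (strcmp(exe, "/usr/sbin/in.telnetd") != 0)  /* placeholder allow-list */
        return;

    printf("uid=%u pid=%d exe=%s: %s\n", (unsigned)cred->uid, (int)cred->pid, exe, text);
}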
(There is a possible race condition: a user may craft a special program that does the same and then immediately exec()s a binary they know to be allowed. To avoid that race, you may need to have the daemon respond after checking the credentials, and the logging client send another message (again with credentials), so the daemon can verify that the credentials are still the same and that the /proc/PID/exe symlink has not changed. I would personally use this round-trip to check the veracity of the message: the logger asks for confirmation of the event with a random cookie, and the requester responds with both the event checksum and the cookie. Including the random cookie should make it impossible to stuff the confirmation into the socket queue before the exec().)
With the PID you can also do further checks. For example, you can trace the process parentage to see how the human user connected, by tracking parents until you detect a login via ssh or the console. It's a bit tedious, since you'll need to parse /proc/PID/stat or /proc/PID/status files, and it is non-portable. OS X and the BSDs have a sysctl call you can use to find out the parent process ID, so you can make it portable by writing a platform-specific parent_process_of(pid_t pid) function.
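A possible Linux implementation of that parent_process_of() helper, parsing /proc/PID/stat (a sketch, not the only way to do it):

#include <stdio.h>
#include <string.h>
#include <sys/types.h>

/* Return the parent PID of `pid`, or -1 on error. Field 4 of /proc/PID/stat
 * is the ppid; the comm field (2) is in parentheses and may itself contain
 * spaces or parentheses, hence the strrchr. */
pid_t parent_process_of(pid_t pid)
{
    char path[64], buf[512];
    snprintf(path, sizeof path, "/proc/%d/stat", (int)pid);

    FILE *f = fopen(path, "r");
    if (!f)
        return -1;
    size_t n = fread(buf, 1, sizeof buf - 1, f);
    fclose(f);
    buf[n] = '\0';

    char *p = strrchr(buf, ')');        /* skip past "pid (comm)" safely */
    if (!p)
        return -1;

    char state;
    int ppid;
    if (sscanf(p + 1, " %c %d", &state, &ppid) != 2)
        return -1;
    return (pid_t)ppid;
}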
This approach will make sure your logging daemon knows exactly 1) which executable the logging request came from, and 2) which user (and how they connected, if you do the process tracing) ran the command.
As the local logging daemon is running as root, it can log the events to file(s) in a root-only directory, and/or forward the messages to a remote machine.
Obviously, this is not exactly lightweight, but assuming you have fewer than a dozen events per second, the logging overhead should be completely negligible.
Generally there are two ways of doing this. One, run these processes as root and write-protect the directory (mentioned mainly for historical purposes); then no one but root can write there. The second, and more secure, is to run them as another user (not root) and give that user, but no one else, write access to the log directory.
The approach we went with was to use a setuid binary to allow write access to the logging directory. The binary was executable by all users, but would only allow a log record to be written if the parent process path, as given by /proc/$PPID/exe, matched the subset of modified binary paths we placed on the system.
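The core of that check might look something like the sketch below; the allow-listed paths are placeholders, not the actual ones used:

#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Returns 1 if the invoking (parent) process's binary is on the allow-list. */
int parent_is_allowed(void)
{
    static const char *allowed[] = {
        "/usr/sbin/in.telnetd",              /* placeholder paths */
        "/usr/local/bin/our-modified-sshd",
    };

    char link[64], exe[4096];
    snprintf(link, sizeof link, "/proc/%d/exe", (int)getppid());
    ssize_t n = readlink(link, exe, sizeof exe - 1);
    if (n < 0)
        return 0;
    exe[n] = '\0';

    for (size_t i = 0; i < sizeof allowed / sizeof allowed[0]; i++)
        if (strcmp(exe, allowed[i]) == 0)
            return 1;
    return 0;
}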
I've been working on a complex server-client system in C and I'm not sure how to implement the socket communication.
In a nutshell, the system is a server application which communicates with a database and uses a UNIX socket to communicate with one or more child processes created with fork(). The purpose of the children is to run game servers. The process of launching a game server is like this:
The server/"manager" identifies a game server in the database that is to be made. (Assume database communication is already sorted.)
The manager forks a child (the "game controller").
The game controller sets up two pipe pairs, then forks, replacing its child's stdin with a pipe, and its stdout and stderr with another pipe.
The game controller's child then runs execlp() to begin running the actual game server executable (steps 3 and 4 are sketched below).
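For reference, steps 3 and 4 typically boil down to something like this; ./game_server is a placeholder binary name:

#include <unistd.h>

/* Wire up two pipes and exec the game server; returns the child's PID,
 * and gives the controller its ends of the pipes. */
int spawn_game_server(int *to_child, int *from_child)
{
    int in_pipe[2], out_pipe[2];
    if (pipe(in_pipe) < 0 || pipe(out_pipe) < 0)
        return -1;

    pid_t pid = fork();
    if (pid < 0)
        return -1;
    if (pid == 0) {                       /* child: becomes the game server */
        dup2(in_pipe[0],  STDIN_FILENO);  /* read commands from the controller */
        dup2(out_pipe[1], STDOUT_FILENO); /* send output back */
        dup2(out_pipe[1], STDERR_FILENO); /* and errors too */
        close(in_pipe[0]);  close(in_pipe[1]);
        close(out_pipe[0]); close(out_pipe[1]);
        execlp("./game_server", "game_server", (char *)NULL);
        _exit(127);                       /* only reached if exec fails */
    }
    /* parent: the game controller keeps the other pipe ends */
    close(in_pipe[0]);  close(out_pipe[1]);
    *to_child   = in_pipe[1];
    *from_child = out_pipe[0];
    return pid;
}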
My experience with sockets is fairly minimal. I have used select() on a server application before to 'multiplex' numerous clients, as demonstrated by the simple example in the GNU C documentation here.
I now have a new challenge, as the system must be able to do more: the manager needs to be able to arbitrarily send commands to the game controller children (that it will find by periodically checking the database) and get replies, but also expect incoming arbitrary commands/errors from them and send replies back.
So, I need a sort-of "context" system, where sockets are meaningful only between themselves. In other words, when a command is sent from the manager to the game controller, each party needs to be aware of who is asking and know what the reply is (and, therefore, which command it is a reply to).
Because select() is only useful for knowing when we have incoming data, and a thread should block on it, would I need another thread that sends data and gets the replies? Will this require each game controller, although technically a 'client', to use a listening socket and use select() as well?
I hope I've explained the system and the problem concisely; I will add more detail if required. Thanks!
Ok, I am still not really sure I understand exactly where your trouble is, so I will just spout off some things about writing a client/server app. If I am off track, just let me know.
The way that the server will know which clients correspond to which socket is that the clients will tell the server. Essentially, you need to have a log-in protocol. When the game controller connects to the server, it will send a message that says "Hi, I am registering as controller foo1 on host xyz, port abc..." and whatever else the server needs to know about its clients. The server will keep a data structure that maps sockets to client metadata, state, etc. Whenever it gets a new message, it can easily map from the incoming host/port to its metadata. Or your protocol can require that on each incoming message, the client will send the name it registered with as a field.
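As a sketch, the server-side bookkeeping could be as simple as a table of per-socket records filled in by that registration message; the names and message format here are made up:

#include <stdio.h>
#include <string.h>

#define MAX_CLIENTS 64

/* One record per connected socket. */
struct client {
    int  fd;            /* the connected socket                */
    int  registered;    /* has it sent its registration yet?   */
    char name[32];      /* e.g. "controller-foo1"              */
    int  game_id;       /* whatever metadata the manager needs */
};

struct client clients[MAX_CLIENTS];

/* Called when a complete message arrives on c->fd. */
void on_message(struct client *c, const char *msg)
{
    if (!c->registered && strncmp(msg, "HELLO ", 6) == 0) {
        /* "HELLO controller-foo1" -- remember who is on this socket */
        snprintf(c->name, sizeof c->name, "%s", msg + 6);
        c->registered = 1;
        return;
    }
    /* ...dispatch normal commands, now that c->name identifies the sender... */
}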
Handling the request/response can be done several ways. First let's deal with the networking part of it on the server side. One way to manage this, as you mentioned, is by using select (or poll, or epoll) to multiplex the sockets. This is actually usually considered the more complicated way to do things. Another way is to spawn off a thread (or fork a process, which is less common these days) for each incoming client. Each spawned thread can read its own assigned socket, responding to messages one at a time without worrying about the fact that there are other clients besides the one it is dealing with. This simple one-thread-per-socket model breaks down if there are many clients, but if that is not the case, then it is worth consideration.
Part 2 really covers only the client sending the server a message and the server replying. What happens when the server wants to initiate communication? How does it do it, and how does the client handle it? Also, how do you model the communication at the application level, meaning, assuming we have the read/write part down, how do we know what to send? You will probably want to model things in terms of state machines. There is also a lot more to deal with, like what happens when a client crashes? What about when the server crashes? Also, what if you really have your heart set on using select, perhaps because you expect many clients? I will try to add more to this answer tomorrow.
While implementing an application server and its client-side libraries in C++, I am having trouble finding a clean and reliable way to stop client processes on server shutdown on Windows.
Assuming the server and its clients run under the same user, the requirements are:
the solution should work in the following cases:
clients may each feature either a console or a gui.
user may be unprivileged.
clients may be or become unresponsive (infinite loop, deadlock).
clients may or may not be children of the server (direct or indirect).
unless prevented by a client-side defect, clients shall be allowed the opportunity to exit cleanly (free their resources, sync some data to disk...) and some reasonable time to do so.
all client return codes shall be made available (if possible) to the server during the shutdown procedure.
server shall wait until all clients are gone.
As of this edit, the majority of the answers below advocate the use of shared memory (or another IPC mechanism) between the server and its clients to convey shutdown orders and client status. These solutions would work, but require that clients successfully initialize the library.
What I did not say is that the server is also used to start the clients and, in some cases, other programs/scripts which don't use the client library at all. A solution that does not rely on graceful communication between server and clients would be nicer (if possible).
Some time ago, I stumbled upon a C snippet (in the MSDN I believe) that did the following:
start a thread via CreateRemoteThread in the process to shut down.
have that thread directly call ExitProcess.
Unfortunately now that I'm looking for it, I'm unable to find it and the search results seem to imply that this trick does not work anymore on Vista. Any expert input on this ?
If you use a thread, a simple solution is to use a named system event: the thread sleeps on the event waiting for it to be signaled, and the control application signals the event when it wants the client applications to quit.
For the UI application, it (the thread) can post a message to the main window, WM_CLOSE or WM_QUIT (I forget which); in the console application it can issue a Ctrl-C, or, if the main console code loops, it can check some exit condition set by the thread.
Either way, rather than finding the client applications and telling them to quit, use the OS to signal that they should quit. The sleeping thread will use virtually no CPU, provided it uses WaitForSingleObject to sleep on the event.
You want some sort of IPC between clients and servers. If all clients were children, I think pipes would have been easiest; since they're not, I guess a server-operated shared-memory segment can be used to register clients, issue the shutdown command, and collect return codes posted there by clients successfully shutting down.
In this shared-memory area, clients put their process IDs, so that the server can forcefully kill any unresponsive clients (modulo server privileges), using TerminateProcess().
If you are willing to go the IPC route, make the normal communication between client and server bi-directional to let the server ask the clients to shut down. Or, failing that, have the clients poll. Or, as a last resort, the clients should be instructed to exit when they make a request to the server. You can let the library user register an exit callback, but the best way I know of is to simply call "exit" in the client library when the client is told to shut down. If the client gets stuck in shutdown code, the server needs to be able to work around it by ignoring that client's data structures and connection.
Use PostMessage or a named event.
Re: PostMessage -- applications other than GUIs, as well as threads other than the GUI thread, can have message loops and it's very useful for stuff like this. (In fact COM uses message loops under the hood.) I've done it before with ATL but am a little rusty with that.
If you want to be robust to malicious attacks from "bad" processes, include a private key shared by client/server as one of the parameters in the message.
The named event approach is probably simpler; use CreateEvent with a name that is a secret shared by the client/server, and have the appropriate app check the status of the event (e.g. WaitForSingleObject with a timeout of 0) within its main loop to determine whether to shut down.
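A bare-bones sketch of that named-event approach; the event name Local\MyServerShutdown is a placeholder that both sides must agree on (and keep secret if you worry about rogue processes):

#include <windows.h>

/* The event is manual-reset, so once signalled it stays signalled for
 * every client that checks it. */

/* --- server side: create once at startup, keep the handle open --- */
HANDLE shutdown_evt;

void server_init(void)           { shutdown_evt = CreateEventW(NULL, TRUE, FALSE, L"Local\\MyServerShutdown"); }
void server_begin_shutdown(void) { SetEvent(shutdown_evt); }

/* --- client side: check once per pass of its main loop --- */
int main(void)
{
    HANDLE evt = CreateEventW(NULL, TRUE, FALSE, L"Local\\MyServerShutdown");
    for (;;) {
        if (WaitForSingleObject(evt, 0) == WAIT_OBJECT_0)
            break;                    /* server asked us to quit */
        /* ... one unit of client work ... */
        Sleep(100);
    }
    /* free resources, sync data to disk, then exit cleanly */
    CloseHandle(evt);
    return 0;
}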
That's a very general question, and there are some inconsistencies.
While it is not a 100% rule, most console applications run to completion, whereas GUI applications run until the user terminates them (and services run until stopped via the SCM). Hence, it's easier to request a GUI to close: you send it the equivalent of Alt-F4. But for a console program, you have to send it the equivalent of Ctrl-C and hope it handles it. In both cases, you simply wait. If the process sticks around, you then shoot it down (TerminateProcess) and pray that the damage is limited. But your HDD can fill up with temporary files.
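For a GUI client whose PID you know, that "ask nicely, then shoot" sequence can look roughly like this sketch (the 5-second grace period is arbitrary):

#include <windows.h>

/* Post WM_CLOSE (the Alt-F4 equivalent) to every top-level window of the PID. */
static BOOL CALLBACK close_windows_of_pid(HWND hwnd, LPARAM lparam)
{
    DWORD pid = 0;
    GetWindowThreadProcessId(hwnd, &pid);
    if (pid == (DWORD)lparam)
        PostMessageW(hwnd, WM_CLOSE, 0, 0);
    return TRUE;                                 /* keep enumerating */
}

void stop_client(DWORD pid)
{
    HANDLE proc = OpenProcess(SYNCHRONIZE | PROCESS_TERMINATE, FALSE, pid);
    if (!proc)
        return;

    EnumWindows(close_windows_of_pid, (LPARAM)pid);

    if (WaitForSingleObject(proc, 5000) == WAIT_TIMEOUT)
        TerminateProcess(proc, 1);               /* last resort */

    CloseHandle(proc);
}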
GUI applications in general do not have exit codes - where would they go? And a console process that is forcefully terminated by definition does not exit, so it has no exit code. So, in a server shutdown scenario, don't expect exit codes.
If you've got a debugger attached, you generally can't shut down the process from another application. That would make it impossible for debuggers to debug exit code!