I need to send a message from one executable to another.
Executable #1 (my main program -- always running) needs to launch Executable #2 and send it a string variable (Executable #1 will wait until Executable #2 has sent a string back).
Executable #2 will use this string to complete a task.
Once the task is complete, Executable #2 will send Executable #1 the result (a string).
Executable #2 will end itself once completed.
I have searched the web for solutions and have had no luck.
There are several ways of doing IPC (inter-process communication), but the simplest might be a shared file: Executable #1 writes the string to the file when it is ready, and Executable #2 periodically polls the file to see whether anything has arrived.
This method is very simple and is, in fact, used very successfully for integration between trading systems in the financial industry.
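A minimal sketch of the shared-file approach in C (the file path, the one-second polling interval, and the rename-for-atomicity trick are my own choices, not anything the answer prescribes):

```c
#include <stdio.h>
#include <unistd.h>

#define MSG_FILE "/tmp/ipc_msg.txt"    /* hypothetical location */

/* Executable #1: write the request where the other side will find it. */
void send_request(const char *msg)
{
    FILE *f = fopen(MSG_FILE ".tmp", "w");
    if (!f)
        return;
    fputs(msg, f);
    fclose(f);
    /* rename() is atomic, so the reader never sees a half-written file */
    rename(MSG_FILE ".tmp", MSG_FILE);
}

/* Executable #2: poll until the file appears, then consume it. */
void poll_request(char *buf, size_t len)
{
    FILE *f;
    while ((f = fopen(MSG_FILE, "r")) == NULL)
        sleep(1);                      /* polling interval: 1 second */
    if (fgets(buf, (int)len, f) == NULL)
        buf[0] = '\0';
    fclose(f);
    remove(MSG_FILE);                  /* mark the request as consumed */
}
```

The reply going back the other way can use a second file with the same pattern.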
You can even have simple TCP/IP communication between the processes, but that would be more work. If you're on Linux, you can use named pipes as well.
Try using named pipes. For an example, see How to: Use Named Pipes to Communicate Between Processes over a Network; this also works for processes running on the same machine.
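On the Unix side, a named pipe (FIFO) is created with mkfifo(3). A minimal sketch of the writing end (the FIFO path is an arbitrary choice):

```c
#include <fcntl.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    const char *path = "/tmp/req_fifo";   /* arbitrary FIFO path */

    mkfifo(path, 0600);                   /* EEXIST is fine on reruns */

    /* Writer side: open() blocks until a reader has the FIFO open.
       The other process simply open()s with O_RDONLY and read()s. */
    int fd = open(path, O_WRONLY);
    if (fd < 0)
        return 1;
    const char *msg = "do-the-task\n";
    write(fd, msg, strlen(msg));
    close(fd);
    return 0;
}
```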
I have a TCP server application which occasionally needs to reconfigure the bound ports by closing them and then opening them at a later time.
The application also needs to execute an external binary with communication to it. This is currently done using a popen() call. The external binary run time can span the period when the network ports need to be reconfigured.
The problem is, when the main application closes a port it is taken on by the 'forked' process which popen has created to run the binary.
That makes sense (it's discussed at What happens when a tcp server binds and forks before doing an accept? Which process would handle the client requests?), but is undesirable, as the main application is then unable to reopen the port.
Is this where FD_CLOEXEC/O_CLOEXEC, as available in popen(3), can be used? The application needs the pipe that popen(3) offers as stdin to the binary being executed; is that file handle left open when CLOEXEC closes the others?
Is there a better way to run the binary, which won't result in a forked process which will hold on to a closed port?
There is another, possibly related question at How to fork process without inheriting handles?
No, you cannot start another program and get back from it without fork(2) followed by some execve(2) code (which is what popen, posix_spawn, and system are doing). You'd better avoid popen or system and code the pipe+fork+execve explicitly yourself, since you know which file descriptors should be kept and which to close(2), generally after the fork and before the execve in the child process; see this.
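A minimal sketch of that explicit pipe+fork+execve sequence (execvp(3) is used here for brevity; listen_fd stands for whatever descriptor, such as the server's bound listening socket, must not leak into the child):

```c
#include <sys/types.h>
#include <unistd.h>

/* Launch argv with a pipe to its stdin; returns the write end, or -1. */
int spawn_with_stdin(int listen_fd, char *const argv[])
{
    int pfd[2];
    if (pipe(pfd) == -1)
        return -1;

    pid_t pid = fork();
    if (pid == -1) {
        close(pfd[0]);
        close(pfd[1]);
        return -1;
    }

    if (pid == 0) {                     /* child */
        close(pfd[1]);                  /* child only reads the pipe */
        dup2(pfd[0], STDIN_FILENO);     /* pipe becomes stdin */
        close(pfd[0]);
        close(listen_fd);               /* the crucial step: drop the port */
        execvp(argv[0], argv);
        _exit(127);                     /* only reached if exec failed */
    }
    close(pfd[0]);                      /* parent only writes */
    return pfd[1];
}
```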
(every process & program, except /sbin/init and some hotplug things, is started with fork + execve; and your shell is constantly using fork + execve for most commands, except the builtin ones like cd)
The fork(2) call could be implemented by clone(2).
Read a good book like the Advanced Linux Programming book, freely available online here. See also syscalls(2). Use (at least for debugging and to understand things) strace(1) and your debugger (gdb). Study the source code of popen and of system in your free-software libc (GNU libc or musl-libc), and the source code of your shell.
You could nearly mimic execve(2) with a tricky sequence of mmap(2) (and related munmap) calls, but not quite (in particular w.r.t. close-on-exec etc.). You'd probably also need to call the obsolete setcontext(3) (or write the equivalent assembler code).
You might consider communicating with a specialized shell-like, server-like program that does the fork & execve (see my execicar.c for an example and inspiration). Maybe you'll find daemon(3) useful.
A better alternative might be to embed some interpreter (like lua or guile) in your application and/or to dlopen(3) some plugin. The disadvantage is that a bug (in the interpreted script or the plugin) affects your entire server.
You can definitely use the close-on-exec flag to avoid newly started processes inheriting open files or sockets (and there's no other way I'm aware of). If you open a file with open(2) you can set the flag at that moment by adding O_CLOEXEC to the file creation flags. If the file is already open you can set it using fcntl() (and if you opened the file using fopen(3) you can get the file descriptor required for this via fileno(3)). For a socket you can also set the flag when opening it with socket(2) by setting SOCK_CLOEXEC. No file with this flag set will be inherited by a process spawned by your program, be it directly via fork + exec or any other "disguise" of that combination, like system(3) or popen(3).
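For illustration, the three ways just described, in one place (the file names are placeholders):

```c
#include <fcntl.h>
#include <stdio.h>
#include <sys/socket.h>

int main(void)
{
    /* 1. Set the flag at open() time. */
    int fd = open("data.log", O_WRONLY | O_CREAT | O_CLOEXEC, 0644);

    /* 2. Set it afterwards with fcntl(), e.g. on a stdio stream. */
    FILE *fp = fopen("other.log", "w");
    if (fp)
        fcntl(fileno(fp), F_SETFD,
              fcntl(fileno(fp), F_GETFD) | FD_CLOEXEC);

    /* 3. Set it at socket() time (Linux-specific flag). */
    int s = socket(AF_INET, SOCK_STREAM | SOCK_CLOEXEC, 0);

    (void)fd; (void)s;                 /* descriptors used elsewhere */
    return 0;
}
```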
I need to update a log file according to the messages produced by two different modules which may be running simultaneously.
So is it possible to open and write a file simultaneously in two programs?
Sys Spec: SLES 11 x86_64.
You can do one of the following:
Use flock() (or a similar mechanism) to synchronize the writes on the open file descriptors (as already answered).
Use open() and close() (or similar) repeatedly on systems that support (or even enforce) exclusive open().
Depend on buffered output to send out log lines uninterrupted. This is often used with stderr logging, as a possible race condition isn't usually a problem here.
Use a logging service and only open() the file there. Other processes communicate with the logging service via IPC. You can use a custom logging service or a tool like syslog or journald. Both of them AFAIK support logging from non-root processes as well.
I would personally prefer the last option because its design is the cleanest and it doesn't depend so much on OS-specific behavior. If your application consists of multiple processes started by the main process, then the main process can act as the logging service itself and create pipes before spawning the child processes. If the processes are started separately, you can have a separate service that listens on a TCP/IP socket or (if your system supports it) a local-domain socket.
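A minimal sketch of that last arrangement, with the main process as the logging service reading from a pipe created before fork() (the log file path is an assumption):

```c
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    int pfd[2];
    if (pipe(pfd) == -1)
        return 1;

    if (fork() == 0) {                  /* child: logs by writing the pipe */
        close(pfd[0]);
        const char *line = "child: task started\n";
        write(pfd[1], line, strlen(line));
        close(pfd[1]);
        _exit(0);
    }

    close(pfd[1]);                      /* parent is the only file writer */
    FILE *log = fopen("/tmp/app.log", "a");
    if (!log)
        return 1;
    char buf[512];
    ssize_t n;
    while ((n = read(pfd[0], buf, sizeof buf)) > 0)
        fwrite(buf, 1, (size_t)n, log); /* one writer, no interleaving */
    fclose(log);
    return 0;
}
```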
Yes. A file can be opened by several processes/programs simultaneously. Multiple processes/programs can read from and write to a file at the same time, but the end result of writing to the same file at the same time may be undefined, so it is better to use locks.
On Linux you can use flock(2).
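A small sketch of such advisory locking (the log_line helper is my own construction):

```c
#include <stdio.h>
#include <sys/file.h>

/* Serialize appends to a shared log file from independent processes. */
void log_line(const char *path, const char *line)
{
    FILE *f = fopen(path, "a");
    if (!f)
        return;
    flock(fileno(f), LOCK_EX);          /* blocks until the lock is ours */
    fputs(line, f);
    fflush(f);                          /* flush before releasing */
    flock(fileno(f), LOCK_UN);
    fclose(f);
}
```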
I would like to inject a shared library into a process (I'm using ptrace() to do that part) and then be able to get output from the shared library back into the debugger I'm writing using some form of IPC. My instinct is to use a pipe, but the only real requirements are:
I don't want to store anything on the filesystem to facilitate the communication as it will only last as long as the debugger is running.
I want a portable Unix solution (so Unix-standard syscalls would be ideal).
The problem I'm running into is that as far as I can see, if I call pipe() in the debugger, there is no way to pass the "sending" end of the pipe to the target process, and vice versa with the receiving end. I could set up shared memory, but I think that would require creating a file somewhere so I could reference the memory segment from both processes. How do other debuggers capture output when they attach to a process after it has already begun running?
I assume that you need a debugging system for your business-logic code (that is, the application). In my experience, this kind of problem is tackled with the system design explained below. (My experience is in C++; I think the same should hold for a C-based system as well.)
Have a logger system (a separate process). This will contain the logger manager and the logging code, which take responsibility for dumping the log to the hard disk.
Each application instance (a process running on Unix) will communicate with this process over sockets, so you can have your own messaging protocol and talk to the logger system via socket-based communication.
Later, give each application a switch that can turn logging on and off, so that you can have a tool that sends a signal to the process to toggle message logging.
At a high level, this is the most generic way to develop a logging system. If you need more information, leave a comment and I will try to answer.
Using better search terms showed me this question is a dup of these guys:
Can I share a file descriptor to another process on linux or are they local to the process?
Can I open a socket and pass it to another process in Linux
How to use sendmsg() to send a file-descriptor via sockets between 2 processes?
The top answers were what I was looking for. You can use a Unix-domain socket to hand a file descriptor off to a different process. This could work either from debugger to library or vice versa, but is probably easier to do from debugger to library because the debugger can write the socket's address into the target process while it injects the library.
However, once I pass the socket's address into the target process, I might as well just use the socket itself instead of using a pipe in addition.
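For reference, descriptor passing looks roughly like this (a sketch based on the cmsg(3)/unix(7) man pages; send_fd is a hypothetical helper, and sock must be a connected Unix-domain socket):

```c
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

/* Hand fd to the process on the other end of the Unix-domain socket. */
int send_fd(int sock, int fd)
{
    char dummy = 'F';                   /* must transmit at least 1 byte */
    struct iovec iov = { .iov_base = &dummy, .iov_len = 1 };
    union {                             /* properly aligned ancillary buffer */
        char buf[CMSG_SPACE(sizeof(int))];
        struct cmsghdr align;
    } u;
    struct msghdr msg = {0};

    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = u.buf;
    msg.msg_controllen = sizeof u.buf;

    struct cmsghdr *cm = CMSG_FIRSTHDR(&msg);
    cm->cmsg_level = SOL_SOCKET;
    cm->cmsg_type = SCM_RIGHTS;         /* "this message carries an fd" */
    cm->cmsg_len = CMSG_LEN(sizeof(int));
    memcpy(CMSG_DATA(cm), &fd, sizeof(int));

    return sendmsg(sock, &msg, 0) == 1 ? 0 : -1;
}
```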
Night people,
I have what I believe to be a simple problem, but can't figure out how to solve it:
I want to create a multi-threaded, multi-user application that will be launched on the same computer through multiple terminals (a game, for example).
The application should be executed through the terminal like
./foo
And after 3, for example, terminals have called this then the game should begin:
Terminal 1:
./foo
Waiting for other users...
Terminal 2:
./foo
Waiting for other users...
Terminal 3:
./foo
Starting...
I just don't see a mechanism to do that, since each time I call ./foo from a terminal it creates another process. How can I make it "count" how many times it was called instead of creating another process? If there is another approach (and probably there is), which one?
There is not: every time you launch it, you create a new process. But you can make the first instance create a Unix socket; all subsequent instances then connect to that same socket and communicate with each other in whatever way you define.
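A rough sketch of that idea: whichever instance manages to bind() the socket becomes the coordinator, and everyone else connects to it. The socket path and player count are made up, and a real version would have to handle stale socket files and tell the waiting clients when the game starts:

```c
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

#define SOCK_PATH "/tmp/foo.sock"       /* arbitrary rendezvous point */
#define PLAYERS   3

int main(void)
{
    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    strncpy(addr.sun_path, SOCK_PATH, sizeof addr.sun_path - 1);

    int s = socket(AF_UNIX, SOCK_STREAM, 0);
    if (s < 0)
        return 1;

    if (bind(s, (struct sockaddr *)&addr, sizeof addr) == 0) {
        /* First instance: becomes the coordinator. */
        listen(s, PLAYERS);
        puts("Waiting for other users...");
        int joined = 1;                 /* count this instance too */
        while (joined < PLAYERS) {
            int c = accept(s, NULL, NULL);
            if (c >= 0)
                joined++;               /* keep c open to talk to the player */
        }
        puts("Starting...");
        unlink(SOCK_PATH);              /* let a future game bind again */
    } else {
        /* bind() failed: someone is already there, so join them. */
        if (connect(s, (struct sockaddr *)&addr, sizeof addr) == 0)
            puts("Waiting for other users...");
        /* ...then block on read() until the coordinator says "start"... */
    }
    return 0;
}
```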
Here is a guide to inter-process communication: http://beej.us/guide/bgipc/
There are multiple techniques to do this:
Pipes
Message Queues
Shared Memory
Sockets
Consult the Guide for examples to each technique.
I would like to capture process entry and exit and maintain a log for the entire system (probably via a daemon process).
One approach would be to read the /proc filesystem periodically and maintain the list, as I do not see any way to register inotify for /proc. Also, for desktop applications, I could get help from D-Bus and capture whenever a client registers with the desktop.
But for non-desktop applications, I don't know how to go ahead apart from reading /proc periodically.
Kindly provide suggestions.
You mentioned /proc, so I'm going to assume you've got a Linux system there.
Install the acct package. The lastcomm command shows all processes executed and their run duration, which is what you're asking for. Have your program "tail" /var/log/account/pacct (you'll find its structure described in acct(5)) and voila. It's just notification on termination, though. To detect start-ups, you'll need to dig through the system process table periodically, if that's what you really need.
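A minimal sketch of reading those records (this assumes the kernel writes v3 records, CONFIG_BSD_PROCESS_ACCT_V3, and that <sys/acct.h> declares struct acct_v3; check acct(5) on your system for the exact layout):

```c
#include <stdio.h>
#include <sys/acct.h>

int main(void)
{
    /* Path used by the acct package on many distributions. */
    FILE *f = fopen("/var/log/account/pacct", "r");
    struct acct_v3 rec;

    if (!f)
        return 1;
    /* One fixed-size record per terminated process. */
    while (fread(&rec, sizeof rec, 1, f) == 1)
        printf("exited: %.*s (pid %u, exit code %u)\n",
               (int)sizeof rec.ac_comm, rec.ac_comm,
               (unsigned)rec.ac_pid, (unsigned)rec.ac_exitcode);
    fclose(f);
    return 0;
}
```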
Maybe a safer way to go is to create a SuperProcess that acts as a parent and forks the children. Every time a child process stops, the parent can detect it. That is just a thought, in case that architecture fits your needs.
Of course, if a parent process is not doable, then you must go to the kernel.
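A sketch of that parent-process idea using fork() and waitpid() (the child program name is hypothetical):

```c
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();
    if (pid < 0)
        return 1;
    if (pid == 0) {                     /* child: run the monitored program */
        execlp("./child_app", "./child_app", (char *)NULL);
        _exit(127);                     /* exec failed */
    }

    int status;
    while (waitpid(pid, &status, 0) == -1)
        ;                               /* retry on EINTR */
    if (WIFEXITED(status))
        printf("pid %d exited with status %d\n",
               (int)pid, WEXITSTATUS(status));
    else if (WIFSIGNALED(status))
        printf("pid %d killed by signal %d\n",
               (int)pid, WTERMSIG(status));
    return 0;
}
```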
If you really want to log all process entries and exits, you'll need to hook into the kernel, which means modifying the kernel or at least writing a kernel module. The Linux Security Modules framework will certainly allow hooking into entry, but I am not sure whether it's possible to hook into exit.
If you can live with the occasional exit slipping past (if the binary is linked statically or somehow avoids your environment settings), there is a simple option: preloading a library.
The Linux dynamic linker has a feature whereby, if the environment variable LD_PRELOAD (see this question) names a shared library, it will force-load that library into the starting process. So you can create a library that, in its static initialization, tells the daemon that a process has started, and arranges things so that the daemon will find out when the process exits.
Static initialization is most easily done by creating a global object with a constructor in C++. The dynamic linker will ensure the static constructor runs when the library is loaded.
It will also try to run the corresponding destructor when the process exits, so you could simply log the process in the constructor and destructor. But that won't work if the process dies from signal 9 (KILL), and I am not sure what other signals will do.
So instead you should have a daemon, and in the constructor tell the daemon about the process start in a way that lets it notice when the process exits on its own. One option that comes to mind is opening a Unix-domain socket to the daemon and leaving it open. The kernel will close it when the process dies, and the daemon will notice. You should take some precautions and use a high descriptor number for the socket, since some processes assume the low descriptor numbers (3, 4, 5) are free and dup2 to them. And don't forget to allow more file descriptors for the daemon and for the system in general.
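A sketch of such a preload library in C, using GCC's constructor attribute instead of a C++ global object (the daemon's socket path and the descriptor threshold of 500 are arbitrary choices):

```c
/* preload.c - build: gcc -shared -fPIC preload.c -o preload.so
   run:   LD_PRELOAD=./preload.so some_program                  */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

__attribute__((constructor))
static void announce_start(void)
{
    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    strncpy(addr.sun_path, "/run/procmon.sock",   /* hypothetical daemon */
            sizeof addr.sun_path - 1);

    int s = socket(AF_UNIX, SOCK_STREAM, 0);
    if (s < 0)
        return;
    if (connect(s, (struct sockaddr *)&addr, sizeof addr) < 0) {
        close(s);
        return;
    }

    /* Park the socket on a high descriptor so programs that assume
       fds 3, 4, 5 are free cannot clobber it with dup2(). */
    int high = fcntl(s, F_DUPFD, 500);
    if (high >= 0) {
        close(s);
        s = high;
    }
    dprintf(s, "start %d\n", (int)getpid());
    /* Deliberately left open: the kernel closes it when the process
       dies, however it dies, and the daemon sees EOF. */
}
```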
Note that by just polling the /proc filesystem, you would probably miss the great number of processes that live for only a split second; there really are many of them on Unix.
Here is an outline of the solution that we came up with.
We created a program that read a configuration file listing all the applications the system is able to monitor. Through a command-line interface you were able to start or stop programs. The program itself stored a table in shared memory that marked applications as running or not. An interface that anybody could access could get the status of these programs. The program also had an alarm system that could email, page, or set off an alarm.
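A sketch of how such a shared-memory status table might look with POSIX shm_open(3) (the segment name, table layout, and sizes are all invented for illustration; link with -lrt on older systems):

```c
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

#define MAX_APPS 64

struct status_table {
    struct { char name[32]; int running; } app[MAX_APPS];
};

/* Both the monitor and any status-query tool call this to map the
   same table; the first caller creates and sizes the segment. */
struct status_table *open_table(void)
{
    int fd = shm_open("/app_status", O_CREAT | O_RDWR, 0644);
    if (fd < 0)
        return NULL;
    if (ftruncate(fd, sizeof(struct status_table)) < 0) {
        close(fd);
        return NULL;
    }
    void *p = mmap(NULL, sizeof(struct status_table),
                   PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    close(fd);                          /* mapping stays valid after close */
    return p == MAP_FAILED ? NULL : (struct status_table *)p;
}
```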
This solution does not require any changes to the kernel and is therefore a less painful solution.
Hope this helps.