I have an executable which performs data acquisition from an interfaced FPGA and stores it in a specific format. Once in a while, at random, the acquisition stops, citing a receive error. However, re-running the executable works.
Hence one temporary workaround is to run the executable from a shell script. The corresponding process needs to be monitored: if the acquisition stops (and the process ends), the script should re-run the executable.
Any hints on how to go about it?
By your description, it sounds like you simply want an endless loop which calls the collector again and again.
while true; do
collector
done >output
Redirecting the output outside of the loop is more efficient (you only open the file for writing once) as well as simpler (you don't have to figure out within the loop whether to overwrite or append). If your collector doesn't produce data on standard output, then of course, this detail is moot.
Alternatively, grep for the executable's process from a shell-script wrapper: if the PID is not found, restart the executable. Schedule the shell wrapper as a cron job.
I'm trying to get some values displayed on an eInk display (via SPI). I already wrote the software to initialize the display and show the values passed as command-line arguments. The problem is that, because of the eInk technology, it takes a few seconds for the display to fully refresh, so the display program also keeps running for that time.
The other ("master") program collects the values and does other stuff. It has a main loop, which has to cycle at least 10 times per second.
So I want to start the display program from within the main loop and immediately continue with the loop.
When using system() or execl(), the master program either waits until the display program is finished or replaces itself with the new process.
Is there a way to just start one program from another without any further connection between them? It should run on Linux.
Might fork() be a solution?
Quick and dirty way: use system() with a background suffix (&):
char cmd[200];
snprintf(cmd, sizeof cmd, "%s &", "your_command");
system(cmd);
Note that this isn't portable, because it depends on the underlying shell. On Windows you would do:
snprintf(cmd, sizeof cmd, "start %s", "your_command");
The main drawback of the quick & dirty solution is that it's "fire & forget". If the program fails to execute properly, you'll still have a 0 return code as long as the shell could launch the process.
A portable method (which also lets you check the process's return code) is slightly more complex, involving a fork plus exec, or running the system() call from a separate thread. The quick & dirty solution itself does a fork + exec of a shell command behind the scenes.
If the fork-and-exec pattern is used just to run a program without freezing the current one, what's the advantage, for example, over using this single line:
system("program &"); // run in background, don't freeze
The system function creates a new shell instance for running the program, which is why you can run it in the background. The main difference from fork/exec is that using system like this actually creates two processes, the shell and the program, and that you can't communicate directly with the new program via anonymous pipes.
fork+exec is much more lightweight than system(). The latter creates a process for the shell; the shell parses the given command line and invokes the required executables. This means more memory, more execution time, etc. Obviously, if the program runs in the background, these extra resources are consumed only temporarily, but depending on how frequently you use it, the difference can be quite noticeable.
The man page for system clearly says that system executes the command by "calling /bin/sh -c command", which means system creates at least two processes: /bin/sh and then the program (the shell startup files may spawn much more than one process)
This can cause a few problems:
portability (what if a system doesn't have access to /bin/sh, or does not use & to run a process in the background?)
error handling (you can't know if the process exited with an error)
talking with the process (you can't send anything to the process, or get anything out of it)
performance, etc
The proper way to do this is fork+exec, which creates exactly one process. It gives you better control over the performance and resource consumption, and it's much easier to modify to do simple, important things (like error handling).
I have written a DTrace script which measures the time spent inside a function in my C program. The program itself runs, outputs some data and then exits.
The problem is that it finishes way too fast for me to get the process ID and start DTrace.
At the moment I have a sleep() statement inside my code which gives me enough time to start DTrace. Having to modify your code in order to get information about it kind of defeats the purpose of DTrace, right?
Basically what I'm after is to make DTrace wait for a process id to show up and then run my script against it.
Presumably you're using the pid provider, in which case there's no way to enable those probes before the process has been created. The usual solution to this is to invoke the program from dtrace itself with its "-c" option.
If for some reason you can't do that (e.g. your process has to be started in an environment set up by some other process), you can try a more complex approach: use proc:::start or proc:::exec-success to trace when your program is actually started, use the stop() action to stop the program at that point, then use system() to run another DTrace invocation that uses the pid provider, and finally "prun" the program again.
I have a C program with multiple worker threads. A main thread periodically (every 0.2 s) does some basic checks (has a thread finished, has a signal been received, etc.). At each check, I would like to write any data the threads have in their log buffers to a single log file.
My initial idea was simply to open the log file, write the data from all the threads, and then close it again. I am worried that this might be too much overhead, seeing as these checks occur every 0.2 s.
So my question is - is this scenario inefficient?
If so, can anyone suggest a better solution?
I thought of leaving the file descriptor open and just writing new data on every check, but there is a problem: if the physical file somehow gets deleted, the program would never know (without re-checking, in which case we might as well just reopen the file), and logging data would be lost.
(This program is designed to run for very long periods of time, so the fact that log file will be deleted at some point is basically guaranteed due to log rotation.)
The standard solution on UNIX is to add a signal handler for SIGHUP which closes and re-opens the log file. Many UNIX daemons do this for precisely this purpose, to support log rotation. Call kill -HUP <pid> in your log rotation script and you're good to go.
(Some programs will also treat SIGHUP as a cue to re-read their configuration files, so you can make configuration changes on the fly without having to restart processes.)
There isn't much of a good solution currently. I would suggest writing a timer that runs separately from your main 0.2 s check, inspects the log-file buffers, and writes them to disk.
I am working on something network-based that could solve this (I have had the same problem) with excellent performance; send me a message on GitHub for details.
I am creating a C program that is called from a shell script when a specific event occurs. The C program gets an argument from the shell script like this:
./c-program.bin HELLO
Now the C program runs until it receives a specific character as an argument. The problem is that if a second event occurs, the C program is called like this:
./c-program.bin WORLD
Then it is a new instance of the program that is started, which knows nothing about the string from the first event. What I would like to achieve is something like this:
[EVENT0] ./c-program.bin HELLO
[EVENT1] ./c-program.bin WORLD
[EVENT2] ./c-program.bin *
c-program output:
HELLO WORLD
Any ideas on how to have only one instance of the program? The platform is Linux. The project is in its planning phase, so I do not have any specific code so far; I am trying to sort out the different problems first.
In outline: you have a magical "output everything" argument and want to accumulate all other arguments across multiple calls until that magic argument arrives? Easy enough, but it requires some shared state. A first hack would put that state in the filesystem (a database would be better, but is more complex).
I would use a pid file stored somewhere.
When the program starts, it should ensure that another instance is not already running (using the PID file plus verification). If none is, it creates a named pipe from which it will read data, then puts itself in the background, for instance via fork().
When another instance is then started, it finds the PID file and detects that an instance is already running. This second instance sends its argv[i] to the named pipe, and the first instance prints the data.
Another idea is to use a Unix socket at a known path, like mysql.sock for instance.
Your program should check whether another instance is running, and if so, pass its argument to that instance and exit. The first instance to come alive becomes the "server", i.e. it stays alive, ready to receive data. You can use shared memory or pipes to pass data between processes.
What you want isn't very clear. I think you are after a long-running process that accumulates "messages" of some sort, and then allows retrieval of those messages.
Here's one possible solution:
Try to create a fifo at a known location. If this succeeds, fork a daemon that listens on that fifo, accumulating messages until it receives a *, at which point it echoes all messages back to the fifo, including the *.
The original process (whether it spawned a daemon or not) opens the fifo for writing, and pushes the message.
If the argument was *, it then reads the fifo and outputs everything it gets until it receives the echoed *.
You either need an intermediate app, or more likely a separate shell script.
The intermediate script would cache the parameters to disk until you pass a '*', at which point it passes the entire cached string to the final C program.
So your first script would call the second script, passing:
secondscript HELLO
secondscript WORLD
secondscript *
When the second script receives a parameter of *, it passes all the previous parameters to your C program:
./c-program.bin HELLO WORLD
Compared to the later answers, this still seems by far the easiest option. You could script it in 10 minutes. A sketch of the intermediate script:
# invoke as: secondscript '*' (quoted, so the shell doesn't expand the glob)
passed="$1"
if [ "$passed" = "*" ]; then
    cache=$(cat cachefile)
    ./c-program.bin $cache   # unquoted on purpose: split back into words
    : > cachefile            # clear the cache
else
    printf '%s\n' "$passed" >> cachefile
fi
If, however, you want the challenge of developing a fancy dancin' app that can check for another version of itself in memory and then pass stuff around in sockets, go for it :)