I'm trying to get some values displayed on an eInk display (via SPI). I have already written the software that initializes the display and shows the values passed as command-line arguments. The problem is that, because of the eInk technology, it takes a few seconds for the display to fully update, so the display program keeps running for that whole time.
The other ("master") program collects the values and does other work. It has a main loop that has to cycle at least 10 times per second.
So I want to start the display program from within the main loop and immediately continue with the loop.
When using system() or execl(), the master program either waits until the display program is finished or is replaced entirely by the new process.
Is there a way to just start one program from another without any further connection between them? It should run on Linux.
Might fork() be a solution?
A quick and dirty way: use system() with a background suffix (&).
char cmd[200];
sprintf(cmd, "%.190s &", "your_command");
system(cmd);
Note that this is not portable, because it depends on the underlying shell. On Windows you would do:
sprintf(cmd, "start %.190s", "your_command");
The main drawback of the quick & dirty solution is that it's "fire & forget". If the program fails to execute properly, you'll still have a 0 return code as long as the shell could launch the process.
A portable method (which also lets you check the return code of the process) is slightly more complex, involving running the system call from a thread or forking and exec'ing the program yourself. Behind the scenes, the quick & dirty solution does a fork + exec of a shell command anyway.
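A minimal sketch of the fork + exec route, assuming the display program is an executable called ./display that takes the value as its only argument (both names are placeholders, error handling trimmed):

#include <signal.h>
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

static void start_display(const char *value)
{
    pid_t pid = fork();
    if (pid < 0) {
        perror("fork");                  /* could not create the child */
    } else if (pid == 0) {
        execl("./display", "display", value, (char *)NULL);
        _exit(127);                      /* only reached if exec failed */
    }
    /* parent: returns immediately and keeps cycling its main loop */
}

int main(void)
{
    signal(SIGCHLD, SIG_IGN);            /* let the kernel reap finished children, no zombies */
    start_display("42");
    /* ... main loop keeps running at 10+ iterations per second ... */
    return 0;
}

With SIGCHLD set to SIG_IGN the parent never has to wait() for the display program; if you do care about its exit status, keep the default disposition and poll with waitpid(..., WNOHANG) instead.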
Related
I noticed that the Unix bc program does not print its usual prompt (the three characters ">>> ") when started as a background process (e.g. when you run it as "bc &"). This is confusing to me because, from my limited knowledge of Unix, starting a program as a background job keeps it running until it tries to read from stdin, at which point it receives a signal that stops it.
But running bc as a background job ("bc &") does not even print the ">>> " prompt before stopping, which tells me the program handles this somehow. I am curious how it does this. When I wrote a naive program that simply emulates the input/output interaction, it still prints ">>> " before being suspended, which doesn't look clean at all, and the behavior gets even more bizarre on certain shells.
I tried looking through the Unix bc source code and was able to trace the parts where it prints the ">>> " prompt, but how it avoids printing the prompt when started as a background process was beyond me. I know you would normally never start an interactive program in the background, as that goes against its intended use and common sense, but I am more interested in the concepts behind it: is this implemented with signal handling, with some more advanced input/output stream buffering, or with some other Unix concept I am not familiar with?
The first thing your version of bc does is call the tcsetattr function. This function, when called from a background process, causes the SIGTTOU signal to be sent to the process, which by default causes the process to stop.
Any program that manipulates terminal attributes (vim, bash, anything that uses readline or curses, ...) will probably behave exactly the same way.
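A tiny demo of that behavior (this is not bc's code, just an illustration): run it from an interactive shell in the foreground and it prints the prompt; run it with & and it is stopped by SIGTTOU at the tcsetattr() call, before anything is printed.

#include <stdio.h>
#include <termios.h>
#include <unistd.h>

int main(void)
{
    struct termios t;

    if (tcgetattr(STDIN_FILENO, &t) == 0)       /* reading attributes is allowed in the background */
        tcsetattr(STDIN_FILENO, TCSANOW, &t);   /* changing them from a background job raises SIGTTOU */

    printf(">>> ");                             /* only reached once the job is in the foreground */
    fflush(stdout);
    return 0;
}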
I ran into some problems with GSubprocess, figured out that they are related to using threads, and wonder: is there a way to make it immune to concurrency problems?
I have this program that performs some operations on a file, each of which is represented by a GtkListBoxRow. When the GSubprocess finishes and I attempt to remove the list box row, the program segfaults. Each file has its own process, so if a user loads 10 files there will be 10 threads (managed by GThreadPool). Interestingly, if I comment out the code that launches the process and the code that blocks the thread function until the process finishes, the program does not segfault. So I deduced that GSubprocess is having problems with concurrency. The error produced varies a lot, so this must be a timing-related problem.
I wanted to use GSubprocess because it is relatively easy to get the output of the command, which I need. Will I need to move my invocations of GSubprocess outside of the thread function?
I found out that it is not safe, due to its internal implementation in the GTK+ source code. You should not really be using threads in such an application anyway, as stated here. Here is my workaround: create the process in the main loop and wait for it to terminate using the async version of the call. That way you avoid threads altogether.
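A sketch of that workaround, assuming GLib/GIO and a made-up tool name ("mytool"); the process is spawned from the main loop and its output is collected asynchronously, so no worker threads are involved:

#include <gio/gio.h>

static void on_done(GObject *source, GAsyncResult *res, gpointer user_data)
{
    GSubprocess *proc = G_SUBPROCESS(source);
    g_autofree gchar *out = NULL;
    g_autoptr(GError) error = NULL;

    if (g_subprocess_communicate_utf8_finish(proc, res, &out, NULL, &error))
        g_print("output: %s\n", out);
    else
        g_printerr("failed: %s\n", error->message);
    /* this callback runs in the main loop, so it is safe to touch widgets
       here, e.g. remove the corresponding GtkListBoxRow */
}

static void launch(const char *path)
{
    g_autoptr(GError) error = NULL;
    GSubprocess *proc = g_subprocess_new(G_SUBPROCESS_FLAGS_STDOUT_PIPE,
                                         &error, "mytool", path, NULL);
    if (!proc) {
        g_printerr("spawn failed: %s\n", error->message);
        return;
    }
    g_subprocess_communicate_utf8_async(proc, NULL, NULL, on_done, NULL);
    g_object_unref(proc);   /* the pending async operation holds its own reference */
}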
If the fork-and-exec pattern is used just to run a program without freezing the current program, what's the advantage over, for example, this single line:
system("program &"); // run in background, don't freeze
The system function creates a new shell instance for running the program, which is why you can run it in the background. The main difference from fork/exec is that using system like this actually creates two processes, the shell and the program, and that you can't communicate directly with the new program via anonymous pipes.
fork+exec is much more lightweight than system(). The latter creates a process for the shell; the shell then parses the given command line and invokes the required executables. This means more memory, more execution time, and so on. Admittedly, if the program runs in the background these extra resources are only consumed temporarily, but depending on how frequently you use it, the difference can be quite noticeable.
The man page for system clearly says that system executes the command by "calling /bin/sh -c command", which means system creates at least two processes: /bin/sh and then the program (and the shell startup files may spawn many more than that).
This can cause a few problems:
portability (what if a system doesn't have access to /bin/sh, or does not use & to run a process in the background?)
error handling (you can't know if the process exited with an error)
talking with the process (you can't send anything to the process, or get anything out of it)
performance, etc
The proper way to do this is fork+exec, which creates exactly one process. It gives you better control over the performance and resource consumption, and it's much easier to modify to do simple, important things (like error handling).
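To make the error-handling point concrete, here is a rough sketch ("program" is a placeholder name): the parent launches the child with fork + exec and later polls it with waitpid(WNOHANG), so it never blocks but still learns how the child exited, which the "program &" shortcut cannot tell you.

#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();
    if (pid < 0) {
        perror("fork");
        return 1;
    }
    if (pid == 0) {
        execlp("program", "program", (char *)NULL);
        _exit(127);                              /* exec failed */
    }

    for (;;) {
        int status;
        pid_t r = waitpid(pid, &status, WNOHANG);
        if (r == pid) {                          /* child finished: inspect the result */
            if (WIFEXITED(status))
                printf("child exited with code %d\n", WEXITSTATUS(status));
            break;
        }
        /* r == 0: child still running, keep doing other work */
        usleep(100000);
    }
    return 0;
}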
I have written a DTrace script which measures the time spent inside a function in my C program. The program itself runs, outputs some data and then exits.
The problem is that it finishes way too fast for me to get the process ID and start DTrace.
At the moment I have a sleep() call inside my code that gives me enough time to start DTrace. Having to modify your code in order to get information about it kind of defeats the purpose of DTrace, right?
Basically what I'm after is to make DTrace wait for a process id to show up and then run my script against it.
Presumably you're using the pid provider, in which case there's no way to enable those probes before the process has been created. The usual solution to this is to invoke the program from dtrace itself with its "-c" option.
If for some reason you can't do that (i.e. your process has to be started in some environment set up by some other process), you can try a more complex approach: use proc:::start or proc:::exec-success to trace when your program is actually started, use the stop() action to stop the program at that point, and then use system() to run another DTrace invocation that uses the pid provider, and then "prun" the program again.
I'm writing a Linux daemon in C which gets values from an ADC over an SPI interface (ioctl). The SPI layer (spidev, userland) seems to be a bit unstable and freezes the daemon at random times.
I need better control over the calls to the functions that get the values, so I was thinking of running them in a thread that I can wait on for its return value; if it times out, I assume the call froze and kill the thread without it taking down the daemon itself. Then I could apply measures such as resetting the ADC before restarting it. Is this possible?
Pseudo example of what I want to achieve:
(function: int get_adc_value(int adc_channel, float *value))
pid = thread(get_adc_value(1, &value));  // create a thread that calls the function
wait_until_finish(pid, timeout);         // wait until the function finishes or times out
if(timeout) kill pid, start over         // if the thread did not return in the given time, kill it (it is frozen)
else if return value sane, continue      // on success, handle the returned value and continue
Thanks for any input on the matter, examples highly appreciated!
I would try looking at the pthreads library. I have used it for some of my C projects with good success, and it gives you pretty good control over what is running and when.
A pretty good tutorial can be found here:
http://www.yolinux.com/TUTORIALS/LinuxTutorialPosixThreads.html
GLib also offers a way to check on threads, using GCond (look it up in the GLib documentation).
In short, you periodically signal a GCond from the child thread and wait on it in the main thread with g_cond_timed_wait(). The pattern is the same whether you use GLib or pthreads.
Here is an example with the pthread:
http://koders.com/c/fidA03D565734AE2AD9F5B42AFC740B9C17D75A33E3.aspx?s=%22pthread_cond_timedwait%22#L46
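For reference, a condensed sketch of the same pattern with pthread_cond_timedwait(); get_adc_value() is a stand-in for the real ADC read and the 2-second timeout is arbitrary:

#include <errno.h>
#include <pthread.h>
#include <stdio.h>
#include <time.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  done = PTHREAD_COND_INITIALIZER;
static int finished = 0;
static float adc_value;

static int get_adc_value(int channel, float *value) { *value = 1.23f; return 0; }  /* stand-in */

static void *worker(void *arg)
{
    float v;
    get_adc_value(1, &v);                /* the call that may hang inside ioctl() */
    pthread_mutex_lock(&lock);
    adc_value = v;
    finished = 1;
    pthread_cond_signal(&done);
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void)
{
    pthread_t tid;
    struct timespec deadline;
    int rc = 0;

    pthread_create(&tid, NULL, worker, NULL);
    clock_gettime(CLOCK_REALTIME, &deadline);
    deadline.tv_sec += 2;                /* give the read two seconds */

    pthread_mutex_lock(&lock);
    while (!finished && rc == 0)
        rc = pthread_cond_timedwait(&done, &lock, &deadline);
    pthread_mutex_unlock(&lock);

    if (rc == ETIMEDOUT) {
        pthread_cancel(tid);             /* best effort: a thread stuck in the driver may not actually die */
        printf("ADC read timed out, reset the ADC here\n");
    } else {
        pthread_join(tid, NULL);
        printf("value = %f\n", adc_value);
    }
    return 0;
}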
I'd recommend a different approach.
Write a program that takes samples and writes them to standard output. It simply needs to call alarm(TIMEOUT); before every sample collection; should it hang, the program will exit automatically.
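A rough sketch of such a sampler; read_adc() and the 5-second TIMEOUT are placeholders, and it is the default action of the pending SIGALRM that terminates the process if a read hangs:

#include <stdio.h>
#include <unistd.h>

#define TIMEOUT 5                       /* seconds allowed per sample */

static float read_adc(void) { return 1.23f; }   /* stand-in for the real ioctl()-based read */

int main(void)
{
    for (;;) {
        alarm(TIMEOUT);                 /* if read_adc() hangs, SIGALRM kills the process */
        float v = read_adc();
        alarm(0);                       /* sample arrived in time: disarm the alarm */
        printf("%f\n", v);
        fflush(stdout);                 /* make the sample visible to the reader immediately */
        sleep(1);
    }
}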
Write another program that runs that first program. If it exits, it runs it again. It looks something like this:
int main(void){for(;;){system("sampler");sleep(1);}}
Then in your other program, use FILE *fp = popen("supervise_sampler", "r"); and read the samples from fp. Better still: have the program simply read the samples from stdin and insist users start your program like this:
(while true;do sampler;sleep 1; done)|program
Splitting up the task like this makes it easier to develop and easier to test. For example, you can collect samples, save them to a file, and then run your program on that file:
sampler > data
program < data
Then, as you make changes to program, you can simply run it again on the same data over and over again.
It's also trivial to enable data logging, so should you find a serious issue you can run all your data through your program again to track down the bug.
Something very interesting happens to a thread when it executes an ioctl(): it goes into a special kind of sleep, known as uninterruptible (disk) sleep, where it cannot be interrupted or killed until the call returns. This is by design and prevents the kernel from rotting from the inside out.
If your daemon is getting stuck in ioctl(), it's conceivable that it may stay that way forever (at least until the ADC is reset).
I'd advise dropping a marker, such as a file with a timestamp, prior to calling ioctl() on a known buggy interface. If your thread does not unlink that file within xx seconds, something else needs to restart the ADC.
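A rough sketch of that idea; the path, the wrapper name and the watchdog policy are made up:

#include <fcntl.h>
#include <sys/ioctl.h>
#include <unistd.h>

#define SENTINEL "/tmp/adc_in_ioctl"

static int guarded_ioctl(int fd, unsigned long req, void *arg)
{
    int rc;
    int mark = open(SENTINEL, O_CREAT | O_WRONLY | O_TRUNC, 0644);
    if (mark >= 0)
        close(mark);                    /* the file's mtime records when we entered the call */

    rc = ioctl(fd, req, arg);           /* the call that may never return */

    unlink(SENTINEL);                   /* we made it back: clear the marker */
    return rc;
}

An external watchdog (a cron job or a second process) that finds the sentinel file older than a few seconds can then reset the ADC and restart the daemon.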
I also agree with using pthreads; if you need example code, just update your question.