Catching signal from Python child process using GLib - c

I'm trying to control the way my cursor looks during certain points of my program's execution. To be specific, I want it to be a "spinner" while a Python script is executing, and then a standard pointer when it's done. Right now, I have a leave-notify-event callback set up in Glade that changes the cursor when it leaves a certain area, but this is non-ideal since the user might not know to move the cursor, and the cursor doesn't accurately represent the state of the program.
I have my Python program signalling SIGUSR1 at the end of execution. I am spawning the Python script from a C file using GLib's g_spawn_async_with_pipes. Is there any way to catch a signal from the child process that this creates? Thanks!

Pass the G_SPAWN_DO_NOT_REAP_CHILD flag to g_spawn_async_with_pipes() and then call g_child_watch_add() to get a notification when your Python subprocess exits. You don’t need to bother with SIGUSR1 if the process exits when it’s done.
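For example, here is a minimal sketch of that approach, assuming a GTK top-level widget and a hypothetical script.py (error handling trimmed):

#include <gtk/gtk.h>

static void
child_exited (GPid pid, gint status, gpointer user_data)
{
    GtkWidget *window = GTK_WIDGET (user_data);

    /* The script is done: restore the default cursor and clean up. */
    gdk_window_set_cursor (gtk_widget_get_window (window), NULL);
    g_spawn_close_pid (pid);
}

static void
run_script (GtkWidget *window)
{
    gchar  *argv[] = { "python3", "script.py", NULL };  /* hypothetical script */
    GPid    pid;
    GError *error = NULL;

    if (g_spawn_async_with_pipes (NULL, argv, NULL,
                                  G_SPAWN_SEARCH_PATH | G_SPAWN_DO_NOT_REAP_CHILD,
                                  NULL, NULL, &pid,
                                  NULL, NULL, NULL, &error)) {
        /* Show the busy cursor while the child runs. */
        GdkCursor *watch = gdk_cursor_new_for_display (
            gtk_widget_get_display (window), GDK_WATCH);
        gdk_window_set_cursor (gtk_widget_get_window (window), watch);
        g_object_unref (watch);

        /* Fires once the child exits (requires G_SPAWN_DO_NOT_REAP_CHILD). */
        g_child_watch_add (pid, child_exited, window);
    } else {
        g_warning ("spawn failed: %s", error->message);
        g_error_free (error);
    }
}
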
It’s a bit hard to provide a more specific answer unless you post a minimal reproducible example of your code.

Related

Implementing shell-like job control in C

I am trying to implement a simple shell in C and I am having a hard time implementing job control. Everything online seems complicated enough, and I think some simplicity is always good. So let me ask this: after fork() is called, can I handle the Ctrl-Z signal with just two functions and just the pid?
I want to call a function, e.g. put_background(pid_t pid), when I hit Ctrl-Z to make the process with pid = pid run in the background, and finally call another function, e.g. put_foreground(pid_t pid), when I type fg, to bring the process with pid = pid to the foreground again.
So, is this possible? Any help is appreciated; code even more so.
> I am trying to implement a simple shell in C and I am having a hard time implementing job control. Everything online seems complicated enough, and I think some simplicity is always good. So let me ask this: after fork() is called, can I handle the Ctrl-Z signal with just two functions and just the pid?
Note that Ctrl-Z is meaningful primarily to the terminal driver. It causes a SIGTSTP to be sent to the foreground process group of the terminal in which that character was typed -- that is, the process group that has that terminal as its controlling one, and has permission to read from it. By default, this causes the processes in that group to stop, but that's it. You don't need to do anything to achieve that.*
> I want to call a function, e.g. put_background(pid_t pid), when I hit Ctrl-Z to make the process with pid = pid run in the background, and finally call another function, e.g. put_foreground(pid_t pid), when I type fg, to bring the process with pid = pid to the foreground again.
By definition and design, at most one process group has control of a given terminal at any particular time. Thus, to move a foreground job to the background, all you need to do is move a different one to the foreground. That can be the shell itself or some other job under its control. The tcsetpgrp() library function accomplishes this. Unless it's the shell itself, you would also want to send a SIGCONT to that process group in case it was stopped.
You additionally need a mechanism to resume a stopped background job, but that's easy: just send that process group a SIGCONT.
> So, is this possible? Any help is appreciated; code even more so.
Well sure, you could write one function for moving a job to the foreground and resuming it, and one for resuming a background job. The only information these functions need about the jobs they operate on is their process group IDs (which are the same as the process IDs of their initial processes).
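A rough sketch of those two functions (assuming the shell stored its own process group ID in shell_pgid at startup, e.g. via getpgrp(), that STDIN_FILENO is the controlling terminal, and with error handling omitted):

#include <signal.h>
#include <sys/wait.h>
#include <unistd.h>

static pid_t shell_pgid;                 /* set at startup: shell_pgid = getpgrp(); */

/* Resume a stopped job in the background. */
void put_background(pid_t pgid)
{
    kill(-pgid, SIGCONT);                /* negative pid means the whole process group */
}

/* Give the job the terminal, resume it, and take the terminal back
 * once the job stops or terminates. */
void put_foreground(pid_t pgid)
{
    int status;

    tcsetpgrp(STDIN_FILENO, pgid);       /* the job becomes the foreground group */
    kill(-pgid, SIGCONT);

    waitpid(-pgid, &status, WUNTRACED);  /* wait until it stops or exits */

    tcsetpgrp(STDIN_FILENO, shell_pgid); /* the shell takes the terminal back */
}
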
But you also need to maintain some bookkeeping of the current active jobs, and you need to take some care about starting new jobs, and you need to monitor current jobs -- especially the foreground job -- so as to be able to orchestrate all of the transitions appropriately.
The GLIBC manual has an entire chapter on job control, including a substantial section specifically on implementing a job-control shell. This would probably be useful to you even if you are not writing for a GLIBC-based system. The actual code needed is not all that complicated, but getting it right requires a good understanding of a fairly wide range of concepts.
*But you do need to ensure that your shell puts commands it launches into process groups different from its own, else a Ctrl-Z will stop it, too.
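And, to illustrate that footnote, a rough sketch of launching a command in its own process group (error handling omitted; argv is a NULL-terminated argument vector, and whether the job starts in the foreground is left to the caller):

#include <signal.h>
#include <unistd.h>

pid_t launch_job(char **argv, int foreground)
{
    pid_t pid = fork();

    if (pid == 0) {                      /* child */
        setpgid(0, 0);                   /* new process group, pgid == pid */
        if (foreground)
            tcsetpgrp(STDIN_FILENO, getpid());
        signal(SIGTSTP, SIG_DFL);        /* restore default job-control handling */
        signal(SIGINT, SIG_DFL);
        execvp(argv[0], argv);
        _exit(127);
    }
    if (pid > 0)
        setpgid(pid, pid);               /* parent does the same to avoid a race */
    return pid;
}
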

Debugging signal handling in multithreaded application

I have this multithreaded application using pthreads. My threads wait for signals using sigwait(). I want to debug the application and see which thread receives which signal at what time. Is there any way I can do this? If I run the program directly, signals are generated rapidly and handled by my handler threads; I want to see which handler wakes up from the sigwait() call and processes each signal.
The handy strace utility can print out a huge amount of useful information regarding system calls and signals. It is also useful for logging timing information or collecting statistics about your program's signal usage.
If instead you are interested in getting a breakpoint inside of an event triggered by a specific signal, you could consider stashing enough relevant information to identify the event in a variable and setting a conditional breakpoint.
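For instance, with made-up variable names, the signal-handling thread could record the last signal it fetched and its own thread ID right after sigwait() returns; both can then be tested in a gdb breakpoint condition:

#include <pthread.h>
#include <signal.h>

volatile sig_atomic_t last_sig;           /* last signal fetched by sigwait() */
volatile pthread_t    last_sig_thread;    /* thread that fetched it */

void *signal_thread(void *arg)
{
    sigset_t *set = arg;
    int sig;

    for (;;) {
        if (sigwait(set, &sig) == 0) {
            last_sig = sig;               /* good spot for a conditional breakpoint */
            last_sig_thread = pthread_self();
            /* ... handle the signal ... */
        }
    }
    return NULL;
}
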
One of the things you may try with gdb is to set breakpoints by thread (e.g. just after the return from sigwait()), so you know which thread wakes up:
break file.c:line thread thread_nr
Don't forget to tell gdb to pass signals to your program e.g.:
handle SIGINT pass
You may want to put all of this into your .gdbinit file to save yourself a lot of typing.
Steven Schlansker is definitely right: if debugging significantly changes the timing patterns of your program (so that it behaves completely differently under the debugger than in the wild), then strace and logging are your last hope.
I hope that helps.

dtracing a short lived application

I have written a DTrace script which measures the time spent inside a function in my C program. The program itself runs, outputs some data and then exits.
The problem is that it finishes way too fast for me to get the process ID and start DTrace.
At the moment I have a sleep() statement inside my code which gives me enough time to start DTrace. Having to modify your code in order to get information about it kind of defeats the purpose of DTrace... right?
Basically what I'm after is to make DTrace wait for a process id to show up and then run my script against it.
Presumably you're using the pid provider, in which case there's no way to enable those probes before the process has been created. The usual solution to this is to invoke the program from dtrace itself with its "-c" option.
If for some reason you can't do that (i.e. your process has to be started in some environment set up by some other process), you can try a more complex approach: use proc:::start or proc:::exec-success to trace when your program is actually started, use the stop() action to stop the program at that point, and then use system() to run another DTrace invocation that uses the pid provider, and then "prun" the program again.

C functions invoked as threads - Linux userland program

I'm writing a Linux daemon in C which gets values from an ADC over the SPI interface (ioctl). The SPI layer (spidev, userland) seems to be a bit unstable and freezes the daemon at random times.
I need better control over the calls to the functions that get the values. I was thinking of running them in a thread that I could wait on for a return value and, if it times out, assume the call froze and kill the thread without it taking down the daemon itself. Then I could apply measures like resetting the ADC before restarting. Is this possible?
Pseudo example of what I want to achieve, where the function is int get_adc_value(int adc_channel, float *value):
pid = thread(get_adc_value(1, &value));  // makes a thread calling the function
wait_until_finish(pid, timeout);         // waits until the function finishes or times out
if (timeout) kill pid, start over        // if the thread does not return in the given time, kill it (it is frozen)
else if return value sane, continue      // if successful, handle the return value and continue
Thanks for any input on the matter, examples highly appreciated!
I would try looking at the pthreads library. I have used it for some of my C projects with good success, and it gives you pretty good control over what is running and when.
A pretty good tutorial can be found here:
http://www.yolinux.com/TUTORIALS/LinuxTutorialPosixThreads.html
GLib also provides a way to monitor threads, using GCond (look it up in the GLib documentation).
In short, you should periodically signal a GCond in the child thread and check it in the main thread with g_cond_timed_wait(). The approach is the same whether you use GLib or pthreads.
Here is an example with the pthread:
http://koders.com/c/fidA03D565734AE2AD9F5B42AFC740B9C17D75A33E3.aspx?s=%22pthread_cond_timedwait%22#L46
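Here is a rough pthreads sketch of that idea. The 5-second timeout is arbitrary, get_adc_value() is the function from the question (assumed to return 0 on success), and note that a thread stuck in an uninterruptible ioctl() may not actually die when cancelled, as another answer below explains:

#include <errno.h>
#include <pthread.h>
#include <stdio.h>
#include <time.h>

int get_adc_value(int adc_channel, float *value);    /* provided elsewhere */

static pthread_mutex_t lock  = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  alive = PTHREAD_COND_INITIALIZER;
static unsigned long   beats;                        /* heartbeat counter */

static void *adc_worker(void *arg)
{
    (void) arg;
    for (;;) {
        float v;
        if (get_adc_value(1, &v) == 0) {             /* may hang in ioctl() */
            pthread_mutex_lock(&lock);
            beats++;                                 /* heartbeat */
            pthread_cond_signal(&alive);
            pthread_mutex_unlock(&lock);
        }
    }
    return NULL;
}

int main(void)
{
    pthread_t tid;
    pthread_create(&tid, NULL, adc_worker, NULL);

    for (;;) {
        struct timespec deadline;
        clock_gettime(CLOCK_REALTIME, &deadline);
        deadline.tv_sec += 5;                        /* watchdog timeout */

        pthread_mutex_lock(&lock);
        unsigned long seen = beats;
        int rc = 0;
        while (beats == seen && rc != ETIMEDOUT)     /* handles spurious wakeups */
            rc = pthread_cond_timedwait(&alive, &lock, &deadline);
        pthread_mutex_unlock(&lock);

        if (rc == ETIMEDOUT) {
            fprintf(stderr, "ADC worker timed out, restarting\n");
            pthread_cancel(tid);                     /* may not help if stuck in ioctl() */
            /* reset the ADC here, then start a new worker */
            pthread_create(&tid, NULL, adc_worker, NULL);
        }
    }
}
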
I'd recommend a different approach.
Write a program that takes samples and writes them to standard output. It simply needs to call alarm(TIMEOUT); before every sample collection; should it hang, SIGALRM's default action will terminate the program automatically.
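A rough sketch of such a sampler (get_adc_value() is the function from the question, assumed to return 0 on success; the timeout is arbitrary):

#include <stdio.h>
#include <unistd.h>

#define TIMEOUT 5                                 /* seconds, adjust to taste */

int get_adc_value(int adc_channel, float *value); /* provided elsewhere */

int main(void)
{
    for (;;) {
        float value;
        alarm(TIMEOUT);                           /* watchdog: SIGALRM kills us if we hang */
        if (get_adc_value(1, &value) == 0) {
            alarm(0);                             /* disarm the watchdog */
            printf("%f\n", value);
            fflush(stdout);
        }
        sleep(1);
    }
}
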
Write another program that runs that first program. If it exits, it runs it again. It looks something like this:
/* needs <stdlib.h> and <unistd.h> */
int main(void){for(;;){system("sampler");sleep(1);}}
Then in your other program, use FILE*fp=popen("supervise_sampler","r"); and read the samples from fp. Better still: Have the program simply read the samples from stdin and insist users start your program like this:
(while true;do sampler;sleep 1; done)|program
Splitting up the task like this makes it easier to develop and easier to test. For example, you can collect samples and save them to a file, and then run your program on that file:
sampler > data
program < data
Then, as you make changes to program, you can simply run it again on the same data over and over again.
It's also trivial to enable data logging, so should you find a serious issue you can run all your data through your program again to find the bugs.
Something very interesting happens to a thread when it executes an ioctl(): it goes into a very special kind of sleep, known as disk sleep (the uninterruptible D state), where it cannot be interrupted or killed until the call returns. This is by design and prevents the kernel from rotting from the inside out.
If your daemon is getting stuck in ioctl(), it's conceivable that it may stay that way forever (at least until the ADC is reset).
I'd advise dropping a marker, like a file with a timestamp, prior to calling ioctl() on a known-buggy interface. If your thread does not unlink that file within xx seconds, something else needs to restart the ADC.
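A rough sketch of that marker-file idea (the path and wrapper name are made up; a separate watchdog process would restart the ADC if the file persists too long):

#include <fcntl.h>
#include <sys/ioctl.h>
#include <unistd.h>

int guarded_ioctl(int fd, unsigned long request, void *arg)
{
    int ret;
    int marker = open("/tmp/adc_ioctl_in_progress", O_CREAT | O_WRONLY, 0644);
    if (marker >= 0)
        close(marker);

    ret = ioctl(fd, request, arg);        /* may hang in uninterruptible sleep */

    unlink("/tmp/adc_ioctl_in_progress"); /* only reached if the call returned */
    return ret;
}
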
I also agree with the use of pthreads, if you need example code, just update your question.

Suspending the execution of a remote process (C, Windows)

I can suspend a thread of another process by using SuspendThread(). Is there any way to also suspend the execution of that process altogether?
If yes, please post code.
Thanks.
PS:
Since you will ask "Why do you want to do this" I'll post it here.
I am dealing with legacy software that is not maintained anymore. I don't have access to the source code. Right now I need it to pause until a file is filled with data and then resume the execution.
The only way is to suspend all threads of that process.
If you want to see actual code, check the sample here.
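In case that link goes stale, the approach looks roughly like this (an untested sketch using the Toolhelp snapshot API; note the race with newly created threads that the next answer points out, and a matching resume function would do the same with ResumeThread()):

#include <windows.h>
#include <tlhelp32.h>

void suspend_process(DWORD pid)
{
    HANDLE snap = CreateToolhelp32Snapshot(TH32CS_SNAPTHREAD, 0);
    THREADENTRY32 te;

    if (snap == INVALID_HANDLE_VALUE)
        return;

    te.dwSize = sizeof(te);
    if (Thread32First(snap, &te)) {
        do {
            if (te.th32OwnerProcessID == pid) {
                HANDLE h = OpenThread(THREAD_SUSPEND_RESUME, FALSE,
                                      te.th32ThreadID);
                if (h != NULL) {
                    SuspendThread(h);     /* suspend this thread of the target */
                    CloseHandle(h);
                }
            }
        } while (Thread32Next(snap, &te));
    }
    CloseHandle(snap);
}
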
> The only way is to suspend all threads of that process.
No.
Use undocumented kernel APIs (exported since NT 3.1) to suspend the PID.
If the process has or spawns many threads rapidly or asynchronously, you're subject to a race condition with SuspendThread().
A way to accomplish the same thing (process-wide, that is) is to attach to the target process as a debugger with DebugActiveProcess() and then simply call DebugBreakProcess(). When a process is at a breakpoint, no new threads will be created and all execution, process-wide, will stop.
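Roughly, and with error handling omitted (these are all real Win32 calls; the process handle is assumed to already have sufficient access rights):

#include <windows.h>

void pause_until_ready(DWORD pid, HANDLE process)
{
    DebugActiveProcess(pid);            /* attach to the target as a debugger */
    DebugSetProcessKillOnExit(FALSE);   /* don't kill the target when we detach */
    DebugBreakProcess(process);         /* break in: process-wide stop */

    /* ... wait here until the file is filled with data ... */

    DebugActiveProcessStop(pid);        /* detach; the target resumes */
}
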
