Given a single processor virtual machine running lubuntu, I was wondering if it is possible to tie up the processor so that no other program can run any instructions.
For example, if program A and program B were to be run at nearly the same time, is it possible to set the priority of program A (in its source using the setpriority() function) to run before program B and then tie up the processor so that program B cannot execute?
You could call kill with SIGSTOP and a pid value of -1 to stop every process you have permission to stop, other than init and the calling process. If you are root, that means every process other than init and the process that calls kill.
You'd want to use a scripting language rather than the kill binary: a script can send the signal, run your app, and resume everything in a single process, whereas the shell you ran the kill binary from would itself have been stopped, preventing you from launching your app.
E.g., in Ruby, you could do:
#Broadcast the STOP signal
Process.kill(:STOP, -1)
#Run your process with the playground having been cleared
system('the_high_priority_app')
#Resume the stopped processes
Process.kill(:CONT, -1)
The above is a bit of a hack, though, and not very safe if you have many processes that do some IPC by sending the SIGSTOP and SIGCONT signals among themselves -- you could be sending SIGCONT to processes that had been stopped by other processes. You could get a list of processes that were stopped at the time of broadcasting the SIGSTOP signal and skip those when you broadcast the SIGCONT signal, but the set of stopped processes could theoretically change between your scanning for them and your broadcasting of the SIGSTOP signal.
With the right privileges it is possible to call sched_setscheduler to give a process real-time priority. Such a process will not be interrupted by ordinary processes or other real-time processes with lower priority. Such real-time processes will only lose the CPU when they give it up by doing some call like sleep or waiting for IO. They will also be given the CPU back as soon as they are able to work again and the CPU is not needed by any real-time process with higher priority.
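For instance, a minimal sketch in C, assuming you run it as root (or with CAP_SYS_NICE); the priority value 50 is an arbitrary choice:

/* Give the calling process real-time priority. */
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    struct sched_param sp;
    sp.sched_priority = 50;  /* 1 (lowest) .. 99 (highest) for SCHED_FIFO */

    /* SCHED_FIFO: runs until it blocks, yields, or a higher-priority
       real-time process becomes runnable. */
    if (sched_setscheduler(0, SCHED_FIFO, &sp) == -1) {  /* pid 0 = self */
        perror("sched_setscheduler");
        exit(EXIT_FAILURE);
    }

    /* CPU-bound work here will not be preempted by ordinary
       (SCHED_OTHER) processes. */
    return 0;
}

Be careful on a single-processor machine: a SCHED_FIFO process that never blocks can starve every other process, including your shell.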
Related
I would like to read the PID of the previous executed process in C.
For example, I have process 1 which sleeps for 1s and, when it wakes up, reads the PID of the process executing before the context switch. Process 1 runs with higher priority than the other processes, so that when the sleep time finishes it is immediately scheduled to run.
How can I get the PID of the process that ran before its execution was interrupted so that process 1 could execute?
I would like to read the PID of the previous executed process in C.
How can one determine which process was executed before or after, even more so on a multiprocessor architecture in which several threads (even in the same process) can be executing fork() at the same time?
For example, I have process 1 which sleeps for 1s and, when it wakes up, reads the PID of the process executing before the context switch.
How can it get the PID of the process that was executing before that context switch? During a context switch, the scheduler deciding which process runs next doesn't tell that process which one ran last. A context switch can even happen via a hardware interrupt, when the process is not making a sleep()/wait()/pause() system call at all. System calls return information on how the call went, but there's no place to store which process led to a context switch and report its process id (indeed, it should be the thread id, as a process can have several threads, not just one).
Process 1 runs with higher priority than the other processes, so that when the sleep time finishes it is immediately scheduled to run.
Why do you assume process 1 runs at higher priority if the scheduler assigns priorities dynamically when considering which process to run next?
How can I get the PID of the process that ran before its execution was interrupted so that process 1 could execute?
You cannot. It's impossible, as the context switch can happen because of a hardware interrupt, at which point your process has no way to store anything. Even signal handlers run in user mode, so there's no way for you to access the identity of the last executed process. All the more so because, on a multiprocessor computer, the kernel can be handling several context switches at the same time.
Indeed, telling a user process the PID of the last process to run could represent a security breach, as not all processes belong to the same user. Remember that today's systems are not only multithreaded/multiprocess/multiprocessor, but also multiuser.
When I call kill() on a process, it returns immediately, because it just sends a signal. I have code where I check some (foreign, not written nor modifiable by me) processes in an infinite loop, and if they exceed some limits (too much RAM eaten, etc.) it kills them (and writes to syslog, etc.).
The problem is that when processes are heavily swapped, it takes many seconds to kill them, and because of that, my process runs the same check against the same processes multiple times, attempts to send the signal many times to the same process, and writes this to syslog each time. (This is not done on purpose; it's just a side effect which I am trying to fix.)
I don't care how many times it sends a signal to a process, but I do care how many times it writes to syslog. I could keep a list of PIDs that were already sent the kill signal, but in theory, even if the probability is low, another process could be spawned with the same PID as a previously killed one, and it might also be supposed to be killed; in this case, the log entry would be missing.
I don't know if there is a unique identifier for any process, but I doubt it. How could I either kill a process synchronously, or keep track of processes that got the signal and don't need to be logged again?
Even if you could do a "synchronous kill", you still have the race condition where you could kill the wrong process. It can happen whenever the process you want to kill exits of its own volition, or by third-party action, after you see it but before you kill it. During this interval, the PID could be assigned to a new process. There is basically no solution to this problem. PIDs are inherently a local resource that belongs to the parent of the identified process; use of the PID by any other process is a race condition.
If you have more control over the system (for example, controlling the parent of the processes you want to kill) then there may be special-case solutions. There might also be (Linux-specific) solutions based on using some mechanisms in /proc to avoid the race, though I'm not aware of any.
One other workaround may be to use ptrace on the target process as if you're going to debug it. This allows you to partially "steal" the parent role, avoiding invalidation of the PID while you're still using it and allowing you to get notification when the process terminates. You'd do something like the following (a C sketch follows the list):
Check the process info (e.g. from /proc) to determine that you want to kill it.
ptrace it, temporarily stopping it.
Re-check the process info to make sure you got the process you wanted to kill.
Resume the traced process.
kill it.
Wait (via waitpid) for notification that the process exited.
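A sketch of those steps in C; is_still_target is a hypothetical helper you would implement yourself (e.g. by re-reading /proc/<pid>/stat and /proc/<pid>/cmdline):

#include <signal.h>
#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/wait.h>

static int is_still_target(pid_t pid);  /* re-check /proc/<pid> here */

int kill_verified(pid_t pid)
{
    int status;

    /* Attach; the target is stopped and we become its tracer. */
    if (ptrace(PTRACE_ATTACH, pid, NULL, NULL) == -1)
        return -1;
    if (waitpid(pid, &status, 0) == -1)  /* wait for the stop to be reported */
        return -1;

    /* While it is frozen, make sure it is still the process we saw. */
    if (!is_still_target(pid)) {
        ptrace(PTRACE_DETACH, pid, NULL, NULL);  /* wrong process: let it go */
        return 1;
    }

    /* Resume it, kill it, and reap it; when waitpid returns, it is gone. */
    ptrace(PTRACE_CONT, pid, NULL, NULL);
    kill(pid, SIGKILL);
    if (waitpid(pid, &status, 0) == -1)
        return -1;
    return 0;
}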
This will make the script wait for process termination.
kill $PID
while kill -0 $PID 2>/dev/null
do
sleep 1
done
kill -0 [pid] tests for the existence of a process without actually sending a signal.
The following solution works for most processes that aren't debuggers or processes being debugged in a debugger.
Use ptrace with argument PTRACE_ATTACH to attach to the process. This stops the process you want to kill. At this point, you should probably verify that you've attached to the right process.
Kill the target with SIGKILL. It's now gone.
I can't remember whether the process is now a zombie that you need to reap or whether you need to PTRACE_CONT it first. In either case, you'll eventually have to call waitpid to reap it, at which point you know it's dead.
If you are writing this in C, you are sending the signal with the kill system call. Rather than repeatedly sending the terminating signal, just send it once and then loop (or somehow periodically check) with kill(pid, 0). A signal value of zero just tests whether the process is still alive, and you can act appropriately. When it dies, kill will fail with errno set to ESRCH.
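A minimal sketch of that approach, assuming pid is not one of your own unreaped children (a zombie child would still answer kill(pid, 0) until reaped), and ignoring the PID-reuse race discussed above; kill_and_wait is a hypothetical helper name:

#include <errno.h>
#include <signal.h>
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

void kill_and_wait(pid_t pid)
{
    if (kill(pid, SIGTERM) == -1) {
        perror("kill");
        return;
    }
    /* write the single syslog entry here */

    /* Signal 0 delivers nothing; it only checks that pid still exists. */
    while (kill(pid, 0) == 0)
        sleep(1);

    if (errno != ESRCH)
        perror("kill(pid, 0)");  /* e.g. EPERM: pid reused by another user's process */
}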
when you spawn these processes yourself, the classical waitpid(2) family can be used
when not used anywhere else, you can move the processes that are going to be killed into a cgroup of their own; notifiers can be registered on these cgroups which get triggered when a process exits
to find out whether a process has been killed, you can chdir(2) into /proc/<pid> or open(2) this directory. After process termination, the status files there cannot be accessed anymore. This method is racy (between your check and the action, the process can terminate and a new one with the same pid be spawned); a sketch of it follows
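The sketch uses stat(2) instead of chdir(2)/open(2); any of the three works, since they all just test whether the directory still exists:

#include <stdio.h>
#include <sys/stat.h>
#include <sys/types.h>

/* Returns 1 while /proc/<pid> exists (note: a zombie still has an
   entry), 0 once the process is fully gone. Subject to the pid-reuse
   race described above. */
int pid_alive(pid_t pid)
{
    char path[32];
    struct stat st;

    snprintf(path, sizeof path, "/proc/%d", (int)pid);
    return stat(path, &st) == 0;
}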
Trying to build a debugger in C for fuzzing.
Basically, in Linux, I just want to start a process via fork and then execve(), then monitor this process to see if it crashes within 1 second.
On Linux, is this done by creating the process and then monitoring the signals it generates for anything that looks like a crash? Or is it about monitoring the application some other way? I'm not sure.
Use the ptrace(2) system call:
While being traced, the child will stop each time a signal is delivered, even if the signal is being ignored. (The exception is SIGKILL, which has its usual effect.) The parent will be notified at its next wait(2) and may inspect and modify the child process while it is stopped. The parent then causes the child to continue, optionally ignoring the delivered signal (or even delivering a different signal instead).
The signals you should be interested in, with regard to the process having crashed, are SIGSEGV (invalid memory access), SIGBUS (unaligned data access), SIGILL (illegal instruction), SIGFPE (erroneous arithmetic operation), etc.
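A rough sketch of the whole loop in C, where ./target is a placeholder path for the program under test (execl is used here as a convenience wrapper around execve):

#include <signal.h>
#include <stdio.h>
#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();
    if (pid == 0) {
        ptrace(PTRACE_TRACEME, 0, NULL, NULL);  /* child asks to be traced */
        execl("./target", "./target", (char *)NULL);
        _exit(127);                             /* exec failed */
    }

    int status;
    waitpid(pid, &status, 0);                   /* initial stop at execve */
    ptrace(PTRACE_CONT, pid, NULL, NULL);       /* let it run */

    for (int i = 0; i < 100; i++) {             /* poll for ~1 second */
        pid_t r = waitpid(pid, &status, WNOHANG);
        if (r == pid && WIFSTOPPED(status)) {
            int sig = WSTOPSIG(status);
            if (sig == SIGSEGV || sig == SIGBUS ||
                sig == SIGILL  || sig == SIGFPE || sig == SIGABRT) {
                printf("crash: signal %d\n", sig);
                kill(pid, SIGKILL);
                waitpid(pid, &status, 0);
                return 0;
            }
            /* Unrelated signal: deliver it and keep watching. */
            ptrace(PTRACE_CONT, pid, NULL, (void *)(long)sig);
        } else if (r == pid && WIFEXITED(status)) {
            printf("exited with status %d\n", WEXITSTATUS(status));
            return 0;
        } else if (r == pid && WIFSIGNALED(status)) {
            printf("killed by signal %d\n", WTERMSIG(status));
            return 0;
        }
        usleep(10000);                          /* 10 ms */
    }

    printf("no crash within 1s; killing\n");
    kill(pid, SIGKILL);
    waitpid(pid, &status, 0);
    return 0;
}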
While running a threaded program and repeatedly killing the main program using Ctrl + C, I see unexpected results in the program on the second run. However, if I let the program run and exit voluntarily, there are no issues.
So, my doubt is: does Ctrl + C kill the threads along with the main process?
Thanks in advance.
In multithreaded programming, signals are delivered to a single thread (usually chosen unpredictably among the threads that don't have that particular signal blocked). However, this does not mean that a signal whose default action is to kill the process only terminates one thread. In fact, there is no way to kill a single thread without killing the whole process.
As long as you leave SIGINT with its default action of terminating the process, it will do so as long as at least one thread leaves SIGINT unblocked. It doesn't matter which thread has it unblocked as long as at least one does, so library code creating threads behind the application's back should always block all signals before calling pthread_create and restore the signal mask in the calling thread afterwards.
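A minimal sketch of that pattern in C; spawn_worker is a hypothetical wrapper name:

#include <pthread.h>
#include <signal.h>

static void *worker(void *arg)
{
    /* This thread starts with every signal blocked, so SIGINT will
       only ever be delivered to a thread that left it unblocked. */
    (void)arg;
    /* ... */
    return NULL;
}

int spawn_worker(pthread_t *tid)
{
    sigset_t all, old;
    int err;

    sigfillset(&all);
    pthread_sigmask(SIG_SETMASK, &all, &old);  /* block everything */
    err = pthread_create(tid, NULL, worker, NULL);
    pthread_sigmask(SIG_SETMASK, &old, NULL);  /* restore the caller's mask */
    return err;
}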
Well, the only thing that Ctrl + C does is send SIGINT to one thread in the process that is not masking the signal. Signals can be handled or ignored.
If the program does handle Ctrl+C, the usual behavior is self-termination, but once again, it could be used for anything else.
In your case, SIGINT is being received by one thread, which probably kills itself but does not kill the others.
Under Linux 2.6 using NPTL threads, assuming the process uses the default signal handler or calls exit() in its handler: yes, it does. The C library's exit() call maps to the exit_group system call, which exits all the threads immediately; the default signal handler calls this or something similar.
Under Linux 2.4 using Linuxthreads (or using 2.6 if your app still uses Linuxthreads for some weird reason): Not necessarily.
The Linuxthreads library implements threads using clone(), creating a new process which happens to share its address space with the parent. This does not necessarily die when the parent dies. To fix this, there is a "master thread" which pthreads creates. This master thread does various things, one of which is to try to ensure that all the threads get killed when the process exits (for whatever reason).
It does not necessarily succeed.
If it does succeed, it is not necessarily immediate, particularly if there are a large number of threads.
So if you're using Linuxthreads, possibly not.
The other threads might not exit immediately, or indeed at all.
However, no matter what thread library you use, forked child processes will continue (they might receive the signal if they are still in the same process group, but they can freely ignore it).
Hi people. For an academic exercise I have to implement a program in C for a *nix platform which is to synchronize multiple processes through signal handling, using only the basic functions signal, pause, kill and fork.
I searched Google and have not found any clear example; I hope the wisdom of one of you will light my way.
Thanks!
pause doesn't return until a signal is received. The basic design is thus (a C sketch appears below):
fork to create the necessary workers
catch SIGINT in each worker. The handler sets a flag meaning the process should exit after finishing its current job.
each process does work while it can, then pauses. Repeat unless SIGINT is received (test before & after pause).
when one process has work available for another process, it signals the other process with SIGCONT
when there is no more work for a process, signal it with SIGINT.
This doesn't quite include synchronized access to shared data. To do that, you could add an additional rule:
when a process signals another that work is available, it should pause
Of course, this rather defeats the purpose of concurrent programming.
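A minimal sketch of this design with one worker. The parent uses sleep()/wait() for pacing, which are outside the exercise's four functions, and note the classic race: a signal arriving between the test of the flag and pause() is missed (sigsuspend(2) is the robust fix, but the exercise is limited to pause()):

#include <signal.h>
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

static volatile sig_atomic_t done = 0;  /* set by SIGINT: exit when idle */
static volatile sig_atomic_t jobs = 0;  /* bumped by SIGCONT: work available */

static void on_int(int sig)  { (void)sig; done = 1; }
static void on_cont(int sig) { (void)sig; jobs++; }

int main(void)
{
    pid_t pid = fork();
    if (pid == 0) {                    /* worker */
        int done_jobs = 0;
        signal(SIGINT, on_int);
        signal(SIGCONT, on_cont);
        while (!done) {
            while (done_jobs < jobs)   /* do all available work */
                printf("worker: job %d done\n", ++done_jobs);
            if (!done)
                pause();               /* sleep until the next signal */
        }
        printf("worker: exiting\n");
        _exit(0);
    }

    sleep(1);                          /* crude: let the worker install handlers */
    for (int i = 0; i < 3; i++) {      /* hand the worker three jobs */
        kill(pid, SIGCONT);
        sleep(1);
    }
    kill(pid, SIGINT);                 /* no more work: tell it to exit */
    wait(NULL);
    return 0;
}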
Since most system calls are interrupted by signals (causing them to return -1, with errno set to EINTR), you'll have to handle this contingency, repeating each affected system call until it's successful. For example:
while ((readCount = read(...)) < 0 && errno == EINTR) {}
One important thing to be aware of is that Linux (at least, and possibly many other Unices) can collapse multiple signals of the same type into a single instance. So if you send a process one signal of value x, that process is guaranteed to receive it; but if you send 2 or more signals of value x, the process is only guaranteed to receive at least one of those signals.
Also, signals are not guaranteed to be received in the order they are sent.
(Why? Under the hood, Linux maintains a bitmask for each process recording which outstanding signals have been sent. Whenever a process is woken up by the scheduler, signal handlers for all outstanding signals are run, in some arbitrary order.)
What all this means is that signals are generally inappropriate for synchronising processes. They only work reliably when time intervals between signals are large with respect to the interval between wake-up times of the receiving process. And if a process spends a lot of time blocked, wake-up events can be arbitrarily far apart.
Conclusion: don't use signals for IPC.