Linux: FIFO scheduler isn't working as expected - c

I am currently trying to work with the Linux FIFO scheduler.
I want to run two processes, process A and process B, with the same priority under a FIFO policy.
To do this, I have made a shell script in which I run process A first, followed by process B. Under FIFO, process B should start its execution only after the completion of process A, i.e., there should be no overlap between the executions of these processes. But this isn't what is happening.
What I actually observe is that both processes run in an overlapping fashion, i.e., the print statements from the two processes appear interleaved.
Here is the shell script:
gcc -o process_a process_a.c
gcc -o process_b process_b.c
sudo taskset --cpu-list 0 chrt -f 50 ./process_a &
sleep 0.1
sudo taskset --cpu-list 0 chrt -f 50 ./process_b &
sleep 30
exit
To make sure that both processes run on the same CPU, I have used the taskset command. I am also using the chrt command to set the scheduling policy.
Here is the code for process_a.c
#include <stdio.h>
#include <sys/time.h>
#include <unistd.h>
#include <sched.h>

int main(int argc, char *argv[])
{
    printf("Process A begins!\n");
    fflush(stdout);
    long long int i = 0, m = 1e8;
    while (i < 2e10) {
        if (i % m == 0) {
            printf("Process A running\n");
            fflush(stdout);
        }
        i++;
    }
    printf("Process A ended \n");
    fflush(stdout);
    return 0;
}
Here is the code for process_b.c
#include <stdio.h>
#include <sys/time.h>
#include <unistd.h>
#include <sched.h>

int main(int argc, char *argv[])
{
    printf("Process B begins!\n");
    fflush(stdout);
    long long int i = 0, m = 1e8;
    while (i < 2e10) {
        if (i % m == 0) {
            printf("Process B running\n");
            fflush(stdout);
        }
        i++;
    }
    printf("Process B ended \n");
    fflush(stdout);
    return 0;
}
Please help me understand why this is happening.
Many thanks in advance.

The sched(7) manual page says this:
A SCHED_FIFO thread runs until either it is blocked by an I/O request, it is preempted by a higher priority thread, or it calls sched_yield(2).
In this case, you're performing I/O to the terminal via printf, which calls write under the hood. Your file descriptor is in blocking mode, so it is likely that at least some blocking occurs here: I/O is generally slow, especially to terminals, and whenever the writing process blocks, the other process gets a chance to run.
If you wanted a better example of one process preventing the other from running, you'd probably want to do something like write into a shared memory segment instead, so that the loop never blocks on an I/O request.
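For illustration, here is a minimal sketch of that idea, not from the question: two copies of this program, mapping the same segment, would spin without ever blocking on I/O. The segment name /fifo_demo is made up for the example, and on older glibc you may need to link with -lrt.
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    /* Hypothetical segment name; both processes would open the same one. */
    int fd = shm_open("/fifo_demo", O_CREAT | O_RDWR, 0600);
    ftruncate(fd, sizeof(long long));
    long long *counter = mmap(NULL, sizeof(long long),
                              PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    for (long long i = 0; i < 2e10; i++)
        *counter = i;   /* a plain memory store never blocks */
    munmap(counter, sizeof(long long));
    close(fd);
    return 0;
}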

Related

Why does my RT (Real-Time) Linux freeze when I create a RT process with infinite loop?

I built my RT Linux kernel with the RT-Preempt patch (the kernel is built with the FULL PREEMPT option) on Ubuntu 20.04; the kernel version is 5.9.1. But the RT system freezes when I run my test process. The test process just forks an RT child process with an infinite loop; I really don't know what is happening. The source code is as follows:
#define _GNU_SOURCE
#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <sched.h>
#include <signal.h>   /* for kill() and SIGKILL */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <assert.h>
#include <pthread.h>

void pinCPU(unsigned coreNo) {
    cpu_set_t mask;
    CPU_ZERO(&mask);
    CPU_SET(coreNo, &mask);
    assert(0 == sched_setaffinity(0, sizeof(cpu_set_t), &mask));
}

void setFIFO() {
    pid_t pid = getpid();
    struct sched_param param;
    param.sched_priority = sched_get_priority_max(SCHED_FIFO) - 10;
    assert(0 <= sched_setscheduler(pid, SCHED_FIFO, &param));
}

int main() {
    pid_t pid = fork();
    if (pid == 0) {
        // child process
        pinCPU(1);
        setFIFO();
        assert(-1 != ptrace(PTRACE_TRACEME, 0, 0, 0));
        while (1);
    } else if (pid > 0) {
        int count = 0, status;
        printf("child:%d count: %d\n", pid, count);
        while (++count <= 3) {
            status = -1;
            waitpid(-1, &status, 0);
            printf("child:%d count: %d\n", pid, count);
            assert(-1 != ptrace(PTRACE_CONT, pid, 0, 0));
        }
        kill(pid, SIGKILL);
        puts("kill child and exit!");
    }
}
When I change the body of the infinite loop as follows, the CPU utilization drops from 99% to 40% and the RT Linux no longer freezes. So does an RT process with high CPU utilization crash the RT kernel? I cannot come up with any reasonable explanation for it.
unsigned long long t;
while (1) {
    t = 1ULL << 63;   /* note: 1 << 63 would overflow a plain int */
    while (--t);
    usleep(100);
}
Note: both test processes mentioned above work fine on Ubuntu 20.04 without the RT-Preempt patched kernel.
From man 7 sched:
Limiting the CPU usage of real-time and deadline processes
A nonblocking infinite loop in a thread scheduled under the SCHED_FIFO, SCHED_RR, or SCHED_DEADLINE policy can potentially block all other threads from accessing the CPU forever. Prior to Linux 2.6.25, the only way of preventing a runaway real-time process from freezing the system was to run (at the console) a shell scheduled under a higher static priority than the tested application. This allows an emergency kill of tested real-time applications that do not block or terminate as expected.

Since Linux 2.6.25, there are other techniques for dealing with runaway real-time and deadline processes. One of these is to use the RLIMIT_RTTIME resource limit to set a ceiling on the CPU time that a real-time process may consume. See getrlimit(2) for details.
...
So the answer to
So does an RT process with high CPU utilization crash the RT kernel?
is "no"; the observed effect is exactly what is described above.
If the kernel had crashed, you'd either see some message or experience a reboot.
(I'm treating "the kernel freezes" and "the kernel crashes" as two different things, and I don't even believe the kernel freezes, for the reasons given above.)
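As a safety net while experimenting with SCHED_FIFO, the RLIMIT_RTTIME technique quoted above can be applied from within the process before switching policies. A minimal sketch, with arbitrary example limits (the values are in microseconds of CPU time consumed without blocking):
#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    /* Example values: SIGXCPU after 0.5 s of unblocked CPU time,
       SIGKILL once 1 s is reached. */
    struct rlimit rl = { .rlim_cur = 500000, .rlim_max = 1000000 };
    if (setrlimit(RLIMIT_RTTIME, &rl) == -1)
        perror("setrlimit");
    /* ... switch to SCHED_FIFO and run the real-time work here ... */
    return 0;
}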

Profile a process via its child and kill the child afterwards

I am trying to figure out a way, in C, to profile a process via a child process, where after a moment the parent kills the child to stop profiling. I am using perf to profile my application; perf writes its result to a file when it is killed. In a bash script it looks like this:
./run &
perf stat -o perf.data -p <pid_run> &
kill -9 <pid_perf>
What I have done so far:
#include <stdio.h>
#include <stdlib.h>
#include <signal.h>
#include <sys/types.h>
#include <unistd.h>
#include <fcntl.h>

static pid_t perf_id;

void start() {
    char *filename = "test_data";
    char ppid_str[24];
    pid_t pid = fork();
    if (pid == 0) {
        pid_t ppid = getppid();
        sprintf(ppid_str, "%d", ppid);
        char *args[] = {"/usr/bin/perf", "stat", "-p", ppid_str, "-o", filename, NULL};
        execvp(args[0], args);
    }
    else {
        perf_id = pid;
    }
}

void stop() {
    kill(perf_id, SIGKILL);
}
I have an issue getting the output of perf.
This is an example of code that could run in the parent process:
int main() {
    start();
    int a = 0;
    a += 1;
    stop();
    // ... there are other instructions after the stop
    return 0;
}
I am not getting any output from perf when running this code. I have to kill the parent process to get an output.
If I put a sleep call before killing the child process, then the program will output an empty file.
EDIT:
The stat argument in my command is just an example; I also want to use the record argument.
As mentioned by Zulan, if I use SIGINT instead of SIGKILL I do get an output, but only if the main process sleeps for 1 second.
You should send a SIGINT instead of a SIGKILL in order to allow perf to shut down cleanly and produce a valid output file. The synchronization between the perf child process and the main process will still be imperfect, so if the main process doesn't take significant time, as in your example, it is easily possible that no output file is generated at all. This also affects the accuracy of the collected data. With this setup of running perf as a child process, rather than vice versa, you cannot really improve on that.
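Concretely, a stop() along these lines (a sketch reusing perf_id from the question; waitpid needs <sys/wait.h>) gives perf the chance to flush its counters and close the output file:
void stop() {
    kill(perf_id, SIGINT);      /* perf catches SIGINT and writes its output */
    waitpid(perf_id, NULL, 0);  /* reap the child so the file is complete */
}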
The problem is that perf attaches itself to the process and then waits for process termination to print the counters. Try adding the
-I msec
option to perf, e.g. -I 1000 to print the counters every second.
Changing your args for execvp to
char *args[] = {"/usr/bin/perf", "stat", "-p", ppid_str, "-o", filename, "-I", "1000", NULL};
and your increment to a loop like
while (a < 10) {
    a += 1;
    sleep(1);
}
will yield results, although the file is not properly closed in this approach.
I would create a small binary that execs perf with a timeout and gracefully closes the file, and run that from the child.
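A sketch of that idea under the same assumptions as the question (the helper name, paths and timeout are made up): fork, exec perf against the target pid, wait for a while, then let perf shut down cleanly.
#include <signal.h>
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Hypothetical wrapper: profile target_pid for `seconds`, then stop perf
   gracefully so that its output file is complete. */
int profile_for(pid_t target_pid, unsigned seconds, const char *outfile) {
    char pidstr[24];
    snprintf(pidstr, sizeof pidstr, "%d", (int)target_pid);
    pid_t perf = fork();
    if (perf == 0) {
        char *args[] = {"/usr/bin/perf", "stat", "-p", pidstr,
                        "-o", (char *)outfile, NULL};
        execvp(args[0], args);
        _exit(127);             /* only reached if exec fails */
    }
    sleep(seconds);             /* let perf collect for a while */
    kill(perf, SIGINT);         /* perf flushes and closes the file */
    waitpid(perf, NULL, 0);
    return 0;
}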

Make processes run at the same time using fork

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(int argc, char **argv) {
    FILE *file;
    file = fopen(argv[1], "r");
    char buf[600];
    char *pos;
    pid_t parent = fork();
    if (parent == 0) {
        while (fgets(buf, sizeof(buf), file)) {
            pid_t child = fork();
            if (child == 0) {
                /* is there a function I can put here so that it waits
                   until the parent has exited to then run? */
                printf("%s\n", buf);
                return 0;
            }
        }
        return 0;
    }
    wait(NULL);
    return 0;
}
The goal here is to print out the lines of a file all at the same time, in parallel.
For example:
Given a file
a
b
c
$ gcc -Wall above.c
$ ./a.out file
a
c
b
$ ./a.out file
b
c
a
That is, the processes run at the exact same time. I think I could get this to work if there were a wait of some kind that makes each child wait for the parent to exit before it starts running, as shown in the comment above. Once the parent exited, all the processes would start at the print statement, as wanted.
If you had:
int i = 10;
while (i > 0)
{
    pid_t child = fork();
    if (child == 0) {
        printf("i: %d\n", i);
        exit(0);
    }
    i--;   /* decrement in the parent; the child has already exited */
}
then the child processes run concurrently, and depending on the number of cores and your OS scheduler, they might even run literally at the same time. However, printf is buffered, so the order in which the lines appear on screen cannot be determined and will vary between executions of your program. Because printf is buffered, you will most likely not see lines overlapping each other; if you were using write directly on stdout, however, the outputs might overlap.
In your scenario, however, the children die very fast, and because you are reading from a file (which might take a while to return), by the time the next fork is executed the previous child is already dead. But that doesn't change the fact that if the children ran long enough, they would be running concurrently and the order of the lines on screen could not be determined.
Edit
As Barmar points out in the comments, write is atomic. I looked it up, and the BUGS section of man 2 write says this:
According to POSIX.1-2008/SUSv4 Section XSI 2.9.7 ("Thread Interactions with Regular File Operations"):
All of the following functions shall be atomic with respect to each other in the effects specified in POSIX.1-2008 when they operate on regular files or symbolic links: ...
Among the APIs subsequently listed are write() and writev(2). And among the effects that should be atomic across threads (and processes) are updates of the file offset. However, on Linux before version 3.14, this was not the case: if two processes that share an open file description (see open(2)) perform a write() (or writev(2)) at the same time, then the I/O operations were not atomic with respect to updating the file offset, with the result that the blocks of data output by the two processes might (incorrectly) overlap. This problem was fixed in Linux 3.14.
Several years ago I observed this behaviour of write on stdout with concurrent children printing stuff; that's why I wrote that with write, the lines may overlap.
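If you really do want all the children to start printing at the same moment, a common trick, not shown in the question, is a pipe used as a start barrier: every child blocks in read until the parent closes the write end, which releases them all at once. A minimal sketch of that idea:
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int barrier[2];
    pipe(barrier);
    for (int i = 0; i < 3; i++) {
        if (fork() == 0) {
            char go;
            close(barrier[1]);            /* child keeps only the read end */
            read(barrier[0], &go, 1);     /* blocks until all write ends close */
            char line[32];
            int n = snprintf(line, sizeof line, "child %d\n", i);
            write(STDOUT_FILENO, line, n);
            _exit(0);
        }
    }
    close(barrier[1]);                    /* releases every child at once */
    close(barrier[0]);
    while (wait(NULL) != -1);             /* reap all children */
    return 0;
}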
I am not sure why you have an outer loop. You could rewrite as follows. Once you create the child processes, they can run in any order: you might see the output in "order" in one run and in a different order in another. It depends on the process scheduling by your OS, and for your purposes it's all running in "parallel". So you really don't need to ensure the parent process is dead.
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        printf("Incorrect args\n");
        exit(1);
    }
    char buf[1024];
    FILE *file = fopen(argv[1], "r");
    while (fgets(buf, sizeof buf, file)) {
        pid_t child = fork();
        if (child == 0) {
            write(STDOUT_FILENO, buf, strlen(buf));
            _exit(0);
        }
    }
    /* Wait for all child processes. */
    while (wait(NULL) != -1);
}

How and why can fork() fail?

I'm currently studying the fork() function in C. I understand what it does (I think). Why do we check it in the following program?
#include <stdio.h>
#include <unistd.h>
#include <stdlib.h>
#include <sys/wait.h>

int main()
{
    int pid;
    pid = fork();
    if (pid < 0) /* Why is this here? */
    {
        fprintf(stderr, "Fork failed");
        exit(-1);
    }
    else if (pid == 0)
    {
        printf("Printed from the child process\n");
    }
    else
    {
        printf("Printed from the parent process\n");
        wait(NULL);
    }
}
In this program we check if the PID returned is < 0, which would indicate a failure. Why can fork() fail?
From the man page:
Fork() will fail and no child process will be created if:
[EAGAIN]  The system-imposed limit on the total number of processes under execution would be exceeded. This limit is configuration-dependent.
[EAGAIN]  The system-imposed limit MAXUPRC (<sys/param.h>) on the total number of processes under execution by a single user would be exceeded.
[ENOMEM]  There is insufficient swap space for the new process.
(This is from the OS X man page, but the reasons on other systems are similar.)
fork can fail because you live in the real world, not some infinitely-recursive mathematical fantasy-land, and thus resources are finite. In particular, sizeof(pid_t) is finite, and this puts a hard upper bound of 256^sizeof(pid_t) on the number of times fork could possibly succeed (without any of the processes terminating). Aside from that, you also have other resources to worry about like memory.
Perhaps there is not enough memory available to create the new process.
If the kernel fails to allocate memory, for example, that's pretty bad and will cause fork() to fail.
Have a look at the error codes here:
http://linux.die.net/man/2/fork
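Since fork() reports the reason for failure through errno, the check from the question can also say why it failed. A small sketch:
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();
    if (pid < 0) {
        /* errno tells you the reason, e.g. EAGAIN or ENOMEM */
        fprintf(stderr, "fork failed: %s\n", strerror(errno));
        return 1;
    }
    /* ... child and parent code as usual ... */
    return 0;
}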
Apparently it can fail (not really fail, but hang indefinitely) when the following things come together:
trying to profile some code
many threads
a lot of memory allocation
See also:
clone() syscall infinitely restarts because of SIGPROF signals #97
Hanging in ARCH_FORK with CPUPROFILE #704
SIGPROF keeps a large task from ever completing a fork(). Bug 645528
Example:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main()
{
    size_t sz = 32 * (size_t)(1024 * 1024 * 1024);
    char *p = (char *)malloc(sz);
    memset(p, 0, sz);
    fork();
    return 0;
}
Build:
gcc -pg tmp.c
Run:
./a.out

How to open a new terminal through a C program in Linux

I have written client-server code with many connections; let's say each node represents a different process on the same machine. To do that I have, obviously, used fork().
But now the problem is that all results get displayed on the same terminal.
I want to know whether there is any way such that, after each fork() or process creation, a new terminal opens and all the results for that process get displayed on that particular terminal.
P.S.: I have tried system("gnome-terminal"), but it just opens a new terminal while all results still get displayed on the original terminal. The new terminals simply open and remain blank, without any output.
I have also gone through this link: How to invoke another terminal for output programmatically in C in Linux. But I don't want to run my program with parameters or whatever; it should be run just like ./test.
Here is my code:
for (int i = 0; i < node - 1; i++)
{
    n_number++;
    usleep(5000);
    child_pid[i] = fork();
    if (!child_pid[i])
    {
        system("gnome-terminal");
        file_scan();
        connection();
        exit(0);
    }
    if (child_pid[i] < 0)
        printf("Error Process %d cannot be created", i);
}
for (int i = 0; i < node - 1; i++)
    wait(&status);
So basically what I want is that for each process there should be a new terminal displaying only that process's information or results.
What I exactly want:
After fork() I have some data related to, say, process 1; I want its output to go to one terminal.
The same goes for each process. So if I have 3 processes, there must be 3 terminals, and each must display only its own process's data.
I know it is doable using IPC (inter-process communication), but is there any other way, say just 2-3 commands or so? I do not want to invest too much in coding this part.
Thanks in advance!!!
Maybe you want something like this. The program below uses a Unix98 pseudoterminal (PTS), which is a bidirectional channel between a master and a slave side. For each fork that you do, you will need to create a new PTS, by calling the triad posix_openpt, grantpt, unlockpt on the master side and ptsname on the slave side. Do not forget to redirect the initial file descriptors (stdin, stdout and stderr) on each side.
Note that this is just a program to prove the concept, so I am not doing any form of error checking.
#define _XOPEN_SOURCE 600
#include <unistd.h>
#include <stdlib.h>
#include <stdio.h>
#include <libgen.h>
#include <string.h>
#include <fcntl.h>
#include <sys/wait.h>

int main() {
    pid_t i;
    char buf[32], cmd[64];
    int fds, fdm, status;
    fdm = posix_openpt(O_RDWR);
    grantpt(fdm);
    unlockpt(fdm);
    close(0);
    close(1);
    close(2);
    i = fork();
    if (i != 0) { // father
        dup(fdm);  // master side becomes stdin, stdout and stderr
        dup(fdm);
        dup(fdm);
        printf("Where do I pop up?\n");
        sleep(2);
        printf("Where do I pop up - 2?\n");
        waitpid(i, &status, 0);
    } else { // child
        fds = open(ptsname(fdm), O_RDWR);
        dup(fds);  // slave side becomes stdout and stderr
        dup(fds);
        dup(fds);
        strcpy(buf, ptsname(fdm));
        snprintf(cmd, sizeof cmd, "xterm -S%s/2", basename(buf));
        system(cmd);
        exit(0);
    }
}
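To try the concept (assuming xterm is installed; the filename is just an example):
Build:
gcc pts_demo.c -o pts_demo
Run:
./pts_demo
The parent's printf output should show up in the xterm window that pops up.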
