The code:
for (ii = 0; ii < 24; ++ii) {
    switch (fork()) {
    case -1 : {
        printf("\n\nproblem with fork() !!! \n\n");
        exit(0);
    };
    case 0 : {
        WriteOnShared_Mem(ii);
    } break;
    default : {
        ChildPidTab[ii] = p;
        usleep(50000);
        ReadShared_MemMp(nbSect, 24, ChildPidTab);
    };
    }
}
My problem is that I get too many children (nbenfant = 24): I get far more than 24 :/
This is my 3rd post today here but still not solved :(
Thanks
Read the fork(2) man page carefully. Read that page several times; it is hard to understand. Read also the wikipages on fork (system call) and process (computing).
Please understand (and that takes time) that on success fork returns twice: once in the parent and once in the child.
The fork syscall can fail (and then returns -1) for a number of reasons. On failure of fork, use perror or some other way to show the errno. And you should always keep the result of fork. So code:
for (ii = 0; ii < 24; ++ii) {
    fflush(NULL);
    pid_t p = fork();
    switch (p) {
    case -1: // fork failed
        printf("\n\nproblem with fork() in pid %d error %s!!! \n\n",
               (int) getpid(), strerror(errno));
        exit(EXIT_FAILURE);
        break;
    case 0: // child process
        WriteOnShared_Mem(ii);
        ii = 24; // leave the loop so the child never forks again
        break;
    default: // parent process
        ChildPidTab[ii] = p;
        /// etc.... some synchronization is needed
        break;
    }
}
In particular, fork can fail because:

EAGAIN: fork() cannot allocate sufficient memory to copy the parent's page tables and allocate a task structure for the child.

EAGAIN: It was not possible to create a new process because the caller's RLIMIT_NPROC resource limit was encountered. To exceed this limit, the process must have either the CAP_SYS_ADMIN or the CAP_SYS_RESOURCE capability.

ENOMEM: fork() failed to allocate the necessary kernel structures because memory is tight.
If you want to be able to fork more processes, try to:
increase the RLIMIT_NPROC resource limit with setrlimit(2) (which might be set by system facilities, so look also into /etc/pam.d/login etc.); see the sketch after this list
lower the resources required by the fork-ing program. In particular, lower the heap memory requirements
increase some system resources, like perhaps swap. You could swapon some temporary file for testing.
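For the first point, here is a minimal sketch (my own illustration, not from the original answer) of raising the soft RLIMIT_NPROC limit up to the hard limit; going beyond the hard limit requires privileges:

#include <stdio.h>
#include <sys/resource.h>

int raise_nproc_limit(void)
{
    struct rlimit rl;
    if (getrlimit(RLIMIT_NPROC, &rl) == -1) {
        perror("getrlimit");
        return -1;
    }
    rl.rlim_cur = rl.rlim_max;   /* raise the soft limit to the hard limit */
    if (setrlimit(RLIMIT_NPROC, &rl) == -1) {
        perror("setrlimit");
        return -1;
    }
    return 0;
}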
As Joachim Pileborg replied, you should avoid forking too much (each forked child continues the loop, so it also forks again).
Don't forget that stdio routines are buffered. Use fflush(3) appropriately.
I suggest reading the Advanced Linux Programming book (available online) which has a full chapter explaining process handling on Linux.
BTW, check with ps or top or pstree how many processes you have (and with the free command how much memory is used, but read http://linuxatemyram.com/ before complaining). It could happen that your particular system is not able to fork more than 24 instances of your particular program (because of lack of resources).
Study also the source code of simple shells (like sash) and use strace -f (e.g. on some shell, or on your program) to understand more what syscalls are done. Also learn how to use the gdb debugger.
It's because each child continues with the loop and so in turn forks its own children. When a child is done, it should either return from the main function or call exit.
The child process keeps forking new child processes; you just need to stop it.
Like this:
for (ii = 0; ii < 24; ++ii) {
    pid_t p = fork();
    switch (p) {
    case -1:
        printf("\n\nproblem with fork() !!! \n\n");
        exit(0);
    case 0:
        WriteOnShared_Mem(ii);
        ii = 24; //--- here I set ii = 24, so the child process will stop forking new child processes
        break;
    default:
        ChildPidTab[ii] = p;
        usleep(50000);
        ReadShared_MemMp(nbSect, 24, ChildPidTab);
        break;
    }
}
I'm writing code that mimics shell behavior, specifically & and |.
My function receives user commands and checks whether there's an & at the end; if so, the child process should run in the background and the parent should not wait for it to finish, continuing to execute commands instead.
It's also supposed to check whether there's a | in the input array and, if so, run two child processes while piping their stdin and stdout.
I have implemented the behavior for &, but whenever I compile and run my code, I only get the printf sentence from the parent's process.
I would like to hear ideas on how to fix this; in addition, I would appreciate any suggestions regarding the implementation of | (pipes) and how to prevent zombies.
int process_arglist(int count, char** arglist) {
int pid = fork();
printf("%d", pid);
switch (pid) {
case -1:
fprintf(stderr, "ERROR: fork failed\n");
return 1;
break;
case 0: // Son's process
printf("I got to son");
//check last arglist argument
if (strcmp(arglist[count - 1], "&") == 0) {
setpgid(0, 0);
arglist[count - 1] = NULL;
if (execvp(*arglist, arglist) < 0) { //execute the command
fprintf(stderr, "ERROR: execvp failed\n");
exit(1);
}
} else { //There's no & at the end, look for pipes
int i = 0;
while (i < count) {
if (strcmp(arglist[i], "|") == 0) {
int pid2 = fork();
if (pid2 < 0) {
//fork failed, handle error
}
if (pid2 == 0) { // Son's process
} else { //Parent's code
}
}
}
}
break;
//in case no & and no |, call execvp
default: //Parent's code
printf("I go to parent");
return 1;
break;
}
return 0;
}
The output is always "I go to parent"
I assume your code is for Linux or some other POSIX system. Read some good book on Linux programming (perhaps the old Advanced Linux Programming, freely downloadable, or something newer).
stdio(3) is buffered, and stdout is often (but not always) line-buffered. Buffering happens for efficiency reasons (calling write(2) very often, e.g. once per output byte, is very slow; you should prefer doing writes on chunks of several kilobytes).
BTW, you'd better handle failure of system calls (see intro(2) and syscalls(2)) by using errno(3) through perror(3) (or strerror(3) on errno). You (and the user of your shell) need to be informed of the failure reason (and your current code doesn't show it).
I recommend often ending your printf format control strings with \n (this works when stdout is line-buffered) or calling fflush(3) at appropriate places.
As a rule of thumb, I suggest doing fflush(NULL); before every call to fork(2).
The behavior you observe is consistent with the hypothesis that some printf-ed data is staying in buffers (e.g. of stdout).
You could use strace(1) on your program (or on other ones, e.g. some existing shell process) to understand what system calls are done.
You should compile with all warnings and debug info (e.g. gcc -Wall -Wextra -g with GCC), improve your code to get no warnings, and use the debugger gdb (with care, it can be used on forking processes).
I'm writing code that mimics shell behavior
You probably are coding some shell. Then study for inspiration the source code of existing free software shells (most, probably all, Linux shells are free software).
I would appreciate any suggestions regarding the implementation of | (pipes) and how to prevent zombies.
Explaining all that requires a lot of space (several chapters of a book, or perhaps an entire book) and doesn't fit here or on any other forum. So read a good Linux or POSIX programming book. Regarding pipes, read pipe(7) (the pipe should be created with pipe(2) before the fork). Regarding avoiding zombie processes, you need to carefully call waitpid(2) or some similar call.
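Since the question also asked about |, here is a minimal sketch of a two-process pipeline; it assumes left and right are NULL-terminated argv arrays built by splitting arglist at the "|" (hypothetical names, not from the question's code):

#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

/* Sketch: run "left | right"; left and right are hypothetical
   NULL-terminated argv arrays. */
int run_pipeline(char **left, char **right)
{
    int pipefd[2];
    if (pipe(pipefd) == -1) { perror("pipe"); return 1; }
    pid_t p1 = fork();
    if (p1 == 0) {                       /* left side: writes into the pipe */
        dup2(pipefd[1], STDOUT_FILENO);
        close(pipefd[0]); close(pipefd[1]);
        execvp(left[0], left);
        perror("execvp"); _exit(127);
    }
    pid_t p2 = fork();
    if (p2 == 0) {                       /* right side: reads from the pipe */
        dup2(pipefd[0], STDIN_FILENO);
        close(pipefd[0]); close(pipefd[1]);
        execvp(right[0], right);
        perror("execvp"); _exit(127);
    }
    close(pipefd[0]); close(pipefd[1]);  /* parent must close both ends */
    waitpid(p1, NULL, 0);                /* reaping the children is what */
    waitpid(p2, NULL, 0);                /* prevents zombies */
    return 0;
}

The two waitpid calls are what keep zombies away; for background (&) children, a SIGCHLD handler or a periodic waitpid(-1, &status, WNOHANG) loop plays the same role.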
For child processes, the wait() and waitpid() functions can be used to suspend execution of the current process until a child has exited. But these functions cannot be used for non-child processes.
Is there another function that can wait for the exit of any process?
Nothing equivalent to wait(). The usual practice is to poll using kill(pid, 0), looking for a return value of -1 and an errno of ESRCH to indicate that the process is gone.
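A minimal sketch of that polling loop (my own illustration):

#include <errno.h>
#include <signal.h>
#include <unistd.h>

/* Sketch: poll until the given (non-child) pid is gone.
   Caveat: the pid could be reused by a new process between checks. */
void wait_by_polling(pid_t pid)
{
    while (kill(pid, 0) == 0 || errno != ESRCH)
        usleep(100 * 1000);   /* check every 100 ms */
}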
Update: Since linux kernel 5.3 there is a pidfd_open syscall, which creates an fd for a given pid, which can be polled to get notification when pid has exited.
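A hedged sketch of the pidfd approach, calling pidfd_open through syscall(2) in case the libc has no wrapper:

#include <poll.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Sketch: block until pid exits, using pidfd_open (Linux >= 5.3). */
int wait_via_pidfd(pid_t pid)
{
    int pidfd = (int) syscall(SYS_pidfd_open, pid, 0);
    if (pidfd < 0)
        return -1;                       /* no such process, or old kernel */
    struct pollfd pfd = { .fd = pidfd, .events = POLLIN };
    int r = poll(&pfd, 1, -1);           /* becomes readable when pid exits */
    close(pidfd);
    return r < 0 ? -1 : 0;
}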
On BSDs and OS X, you can use kqueue with EVFILT_PROC+NOTE_EXIT to do exactly that. No polling required. Unfortunately there's no Linux equivalent.
So far I've found three ways to do this on Linux:
Polling: you check for the existence of the process every so often, either by using kill or by testing for the existence of /proc/$pid, as in most of the other answers
Use the ptrace system call to attach to the process like a debugger so you get notified when it exits, as in a3nm's answer
Use the netlink interface to listen for PROC_EVENT_EXIT messages - this way the kernel tells your program every time a process exits and you just wait for the right process ID. I've only seen this described in one place on the internet.
Shameless plug: I'm working on a program (open source of course; GPLv2) that does any of the three.
You could also create a socket or a FIFO and read from it. The FIFO is especially simple: connect the standard output of your child with the FIFO and read. The read will block until the child exits (for any reason) or until it emits some data. So you'll need a little loop to discard the unwanted text data.
If you have access to the source of the child, open the FIFO for writing when it starts and then simply forget about it. The OS will clean the open file descriptor when the child terminates and your waiting "parent" process will wake up.
Now this might be a process which you didn't start or own. In that case, you can replace the binary executable with a script that starts the real binary but also adds monitoring as explained above.
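A sketch of the waiting side, assuming the FIFO already exists (e.g. created with mkfifo(1)) and the watched process holds it open for writing; the path is a hypothetical example:

#include <fcntl.h>
#include <unistd.h>

/* Sketch: block until the writer end of the FIFO is closed, which
   happens at the latest when the watched process terminates. */
int wait_on_fifo(const char *path /* e.g. "/tmp/watched.fifo" */)
{
    int fd = open(path, O_RDONLY);   /* blocks until a writer opens it */
    if (fd < 0)
        return -1;
    char buf[4096];
    ssize_t n;
    while ((n = read(fd, buf, sizeof buf)) > 0)
        ;                            /* discard the unwanted text data */
    close(fd);
    return n == 0 ? 0 : -1;          /* read() == 0: writer closed (exited) */
}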
Here is a way to wait for any process (not necessarily a child) on Linux to exit (or get killed) without polling:
Using inotify to wait for /proc/[pid] to be deleted would be the perfect solution, but unfortunately inotify does not work with pseudo-filesystems like /proc.
However we can use it with the executable file of the process.
While the process still exists, this file is being held open.
So we can use inotify with IN_CLOSE_NOWRITE to block until the file is closed.
Of course it can be closed for other reasons (e.g. if another process with the same executable exits) so we have to filter those events by other means.
We can use kill(pid, 0), but that can't guarantee if it is still the same process. If we are really paranoid about this, we can do something else.
Here is a way that should be 100% safe against pid-reuse trouble: we open the pseudo-directory /proc/[pid], and keep it open until we are done. If a new process is created in the meantime with the same pid, the directory file descriptor that we hold will still refer to the original one (or become invalid, if the old process ceases to exist), but will NEVER refer to the new process with the reused pid. Then we can check whether the original process still exists by checking, for example, if the file "cmdline" exists in the directory with openat(). When a process exits or is killed, those pseudo-files cease to exist too, so openat() will fail.
Here is some example code (includes added):

#include <fcntl.h>
#include <stdio.h>
#include <sys/inotify.h>
#include <unistd.h>

// return -1 on error, or 0 if everything went well
int wait_for_pid(int pid)
{
    char path[32];
    int in_fd = inotify_init();
    if (in_fd < 0)
        return -1;
    sprintf(path, "/proc/%i/exe", pid);
    if (inotify_add_watch(in_fd, path, IN_CLOSE_NOWRITE) < 0) {
        close(in_fd);
        return -1;
    }
    sprintf(path, "/proc/%i", pid);
    int dir_fd = open(path, 0);
    if (dir_fd < 0) {
        close(in_fd);
        return -1;
    }
    int res = 0;
    while (1) {
        struct inotify_event event;
        if (read(in_fd, &event, sizeof(event)) < 0) {
            res = -1;
            break;
        }
        // the watched executable was closed by someone; check whether
        // our process is still alive via the held /proc directory fd
        int f = openat(dir_fd, "fd", 0);
        if (f < 0) break;
        close(f);
    }
    close(dir_fd);
    close(in_fd);
    return res;
}
You could attach to the process with ptrace(2). From the shell, strace -p PID >/dev/null 2>&1 seems to work. This avoids busy-waiting, though it will slow down the traced process, and it will not work on all processes (only yours, which is a bit better than only child processes).
None I am aware of. Apart from the solution from chaos, you can use semaphores if you can change the program you want to wait for.
The library functions are sem_open(3), sem_init(3), sem_wait(3), ...
sem_wait(3) performs a wait, so you don't have to do busy waiting as in chaos' solution. Of course, using semaphores makes your programs more complex and it may not be worth the trouble.
Maybe it could be possible to wait for /proc/[pid] or /proc/[pid]/[something] to disappear?
There are poll() and other file event waiting functions, maybe that could help?
Simply poll fields number 2 and 22 of /proc/[PID]/stat.
Field 2 contains the name of the executable and field 22 the process start time.
If either of them changes, some other process has taken the same (freed) PID. This makes the method very reliable.
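A sketch of reading those two fields (my own parsing, not from the answer); note that field 2 is printed in parentheses and may itself contain spaces, so it is scanned from the last ')':

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Sketch: fetch field 2 (comm) and field 22 (starttime) from
   /proc/PID/stat; returns -1 if the process is gone or on parse error. */
int read_pid_identity(int pid, char *comm, size_t commsz,
                      unsigned long long *starttime)
{
    char path[64], line[1024];
    snprintf(path, sizeof path, "/proc/%d/stat", pid);
    FILE *f = fopen(path, "r");
    if (!f)
        return -1;
    char *ok = fgets(line, sizeof line, f);
    fclose(f);
    if (!ok)
        return -1;
    char *l = strchr(line, '(');
    char *r = strrchr(line, ')');    /* comm may contain spaces or ')' */
    if (!l || !r || r < l)
        return -1;
    snprintf(comm, commsz, "%.*s", (int)(r - l - 1), l + 1);
    char *p = r + 2;                 /* now at field 3 */
    for (int i = 0; i < 19; ++i) {   /* skip fields 3..21 */
        p = strchr(p, ' ');
        if (!p)
            return -1;
        ++p;
    }
    *starttime = strtoull(p, NULL, 10);
    return 0;
}

Record the identity once, then compare on each poll; a change in either field means the PID has been recycled.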
You can use eBPF to achieve this.
The bcc toolkit implements many excellent monitoring capabilities based on eBPF. Among them, exitsnoop traces process termination, showing the command name and reason for termination, either an exit or a fatal signal. It catches processes of all users, processes in containers, as well as processes that become zombies.
This works by tracing the kernel sched_process_exit() function using dynamic tracing, and will need updating to match any changes to this function. Since this uses BPF, only the root user can use this tool.
You can refer to this tool for related implementation.
You can get more information about this tool from the links below:
Github repo: tools/exitsnoop: Trace process termination (exit and fatal signals). Examples.
Linux Extended BPF (eBPF) Tracing Tools
ubuntu manpages: exitsnoop-bpfcc
You can first install this tool and use it to see if it meets your needs, and then refer to its implementation for coding, or use some of the libraries it provides to implement your own functions.
exitsnoop examples:
Trace all process termination
# exitsnoop
Trace all process termination, and include timestamps:
# exitsnoop -t
Exclude successful exits, only include non-zero exit codes and fatal signals:
# exitsnoop -x
Trace PID 181 only:
# exitsnoop -p 181
Label each output line with 'EXIT':
# exitsnoop --label EXIT
Another option
Wait for a (non-child) process' exit using Linux's PROC_EVENTS
Reference project:
https://github.com/stormc/waitforpid
mentioned in the project:
Wait for a (non-child) process' exit using Linux's PROC_EVENTS. Thanks to the CAP_NET_ADMIN POSIX capability permitted to the waitforpid binary, it does not need to be set suid root. You need a Linux kernel having CONFIG_PROC_EVENTS enabled.
I appreciate @Hongli's answer for macOS with kqueue. I implemented it with Swift:
/// Wait any pids, including non-child pid. Block until all pids exit.
/// - Parameters:
/// - timeout: wait until interval, nil means no timeout
/// - Throws: WaitOtherPidError
/// - Returns: isTimeout
func waitOtherPids(_ pids: [Int32], timeout: TimeInterval? = nil) throws -> Bool {
// create a kqueue
let kq = kqueue()
if kq == -1 {
throw WaitOtherPidError.createKqueueFailed(String(cString: strerror(errno)!))
}
// input
// multiple changes is OR relation, kevent will return if any is match
var changes: [Darwin.kevent] = pids.map({ pid in
Darwin.kevent.init(ident: UInt(pid), filter: Int16(EVFILT_PROC), flags: UInt16(EV_ADD | EV_ENABLE), fflags: NOTE_EXIT, data: 0, udata: nil)
})
let timeoutDeadline = timeout.map({ Date(timeIntervalSinceNow: $0)})
let remainTimeout: () -> timespec? = {
if let deadline = timeoutDeadline {
let d = max(deadline.timeIntervalSinceNow, 0)
let fractionalPart = d - TimeInterval(Int(d))
return timespec(tv_sec: Int(d), tv_nsec: Int(fractionalPart * 1000 * 1000 * 1000))
} else {
return nil
}
}
// output
var events = changes.map{ _ in Darwin.kevent.init() }
while !changes.isEmpty {
// watch changes
// sync method
let numOfEvent: Int32
if var timeout = remainTimeout() {
numOfEvent = kevent(kq, changes, Int32(changes.count), &events, Int32(events.count), &timeout);
} else {
numOfEvent = kevent(kq, changes, Int32(changes.count), &events, Int32(events.count), nil);
}
if numOfEvent < 0 {
throw WaitOtherPidError.keventFailed(String(cString: strerror(errno)!))
}
if numOfEvent == 0 {
// timeout. Return directly.
return true
}
// handle the result
let realEvents = events[0..<Int(numOfEvent)]
let handledPids = Set(realEvents.map({ $0.ident }))
changes = changes.filter({ c in
!handledPids.contains(c.ident)
})
for event in realEvents {
if Int32(event.flags) & EV_ERROR > 0 { // #see 'man kevent'
let errorCode = event.data
if errorCode == ESRCH {
// "The specified process to attach to does not exist"
// ignored
} else {
print("[Error] kevent result failed with code \(errorCode), pid \(event.ident)")
}
} else {
// succeeded event, pid exit
}
}
}
return false
}
enum WaitOtherPidError: Error {
case createKqueueFailed(String)
case keventFailed(String)
}
PR_SET_PDEATHSIG can be used to be notified of parent process termination: the kernel delivers a chosen signal to the caller when its parent dies.
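This only covers the parent-death case, and only on Linux. A minimal sketch (my own illustration):

#include <signal.h>
#include <sys/prctl.h>
#include <unistd.h>

/* Sketch (Linux-only): have the kernel deliver SIGHUP to this
   process when its parent terminates. */
int watch_my_parent(void)
{
    if (prctl(PR_SET_PDEATHSIG, SIGHUP) == -1)
        return -1;
    /* Close the race: if the parent already died before the prctl,
       we were reparented (typically to pid 1), so signal ourselves. */
    if (getppid() == 1)
        raise(SIGHUP);
    return 0;
}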
I just realised that the "script" binary on GNU Linux uses two forks instead of one.
It could simply use select instead of doing the first fork(). Why would it use two forks?
Is it simply because select did not exist at the time it was coded and nobody had the motivation to recode it, or is there a valid reason?
man 1 script: http://linux.die.net/man/1/script
script source: http://pastebin.com/raw.php?i=br8QXRUT
The clue is in the code, which I have added some comments to.
child = fork();
sigprocmask(SIG_SETMASK, &unblock_mask, NULL);
if (child < 0) {
    warn(_("fork failed"));
    fail();
}
if (child == 0) {
    /* child of first fork */
    sigprocmask(SIG_SETMASK, &block_mask, NULL);
    subchild = child = fork();
    sigprocmask(SIG_SETMASK, &unblock_mask, NULL);
    if (child < 0) {
        warn(_("fork failed"));
        fail();
    }
    if (child) {
        /* parent of second fork (the first child) runs 'dooutput' */
        if (!timingfd)
            timingfd = fdopen(STDERR_FILENO, "w");
        dooutput(timingfd);
    } else
        /* child of second fork runs 'doshell' */
        doshell();
} else {
    sa.sa_handler = resize;
    sigaction(SIGWINCH, &sa, NULL);
}
/* parent of first fork runs 'doinput' */
doinput();
There are thus three processes running:
dooutput()
doshell()
doinput()
I think you are asking why use three processes, not one process and select(). select() has existed since ancient UNIX history, so the answer is unlikely to be that select() did not exist. The answer is more prosaic: doshell() needs to be in a separate process anyway, as what it does is exec the shell with appropriately piped fds, so you need at least one fork. Writing dooutput() and doinput() within a select() loop looks perfectly possible to me, but it is actually easier to use blocking I/O than to worry about using select etc. As fork() is relatively lightweight (given UNIX's CoW semantics) and there is little need for communication between the two processes, why use select() when fork() is perfectly good and produces smaller code? I.e., the real answer is "why not?".
Does the process begin when fork() is declared? Is anything being killed here?
pid_t child;
child = fork();
kill (child, SIGKILL);
Or do you need to declare actions for the fork process in order for it to actually "begin"?
pid_t child;
child = fork();
if (child == 0) {
// do something
}
kill (child, SIGKILL);
I ask because what I am trying to do is create two children, wait for the first to complete, and then kill the second before exiting:
pid_t child1;
pid_t child2;
child1 = fork();
child2 = fork();
int status;
if (child1 == 0) { //is this line necessary?
}
waitpid(child1, &status, 0);
kill(child2, SIGKILL);
The C function fork is defined in the standard C library (glibc on Linux). When you call it, it performs an equivalent system call (on Linux its name is clone) by means of a special CPU instruction (on x86, sysenter). This causes the CPU to switch to a privileged mode and start executing kernel instructions. The kernel then creates a new process (a record in a list and accompanying structures), which inherits a copy of the memory mappings of the original process (text, heap, stack, and others), its file descriptors, and more.
The memory areas are marked as non-writable, so that when the new or the original process tries to overwrite them, the kernel gets to handle a CPU exception and perform a copy-on-write (therefore delaying the need to copy a memory page until absolutely necessary). That's because the mappings initially point to the same pages (pieces of physical memory) in both processes.
The kernel then gives execution to the scheduler, which decides which process to run next. It could be the original process, the child process, or any other process running in the system.
Note: The Linux kernel actually puts the child process in front of the parent process in the run queue, so it is run earlier than the parent. This is deemed to give better performance when the child calls exec right after forking.
When execution is given to the original process, the CPU is switched back to nonprivileged mode and starts executing the next instruction. In this case it continues with the fork function of the standard library, which returns the PID of the child process (as returned by the clone system call).
Similarly, the child process continues execution in the fork function, but here it returns 0 to the calling function.
After that, the program continues normally in both cases. The child process has the original process as its parent (this is noted in a structure in the kernel). When it exits, the parent process is supposed to do the cleanup (receiving the exit status of the child) by calling wait.
Note: The clone system call is rather complicated, because it unifies fork with the creation of threads, as well as Linux namespaces. Other operating systems have different implementations of fork; e.g. FreeBSD has a fork system call by itself.
Disclaimer: I am not a kernel developer. If you know better, please correct the answer.
See Also
clone (2)
The Design and Implementation of the FreeBSD Operating System (Google Books)
Understanding the Linux Kernel (Google Books)
Is it true that fork() calls clone() internally?
"Declare" is the wrong word to use in this context; C uses that word to talk about constructs that merely assert the existence of something, e.g.
extern int fork(void);
is a declaration of the function fork. Writing that in your code (or having it written for you as a consequence of #include <unistd.h>) does not cause fork to be called.
Now, the statement in your sample code, child = fork(); when written inside a function body, does (generate code to) make a call to the function fork. That function, assuming it is in fact the system primitive fork(2) on your operating system, and assuming it succeeds, has the special behavior of returning twice, once in the original process and once in a new process, with different return values in each so you can tell which is which.
So the answer to your question is that in both of the code fragments you showed, assuming the things I mentioned in the previous paragraph, all of the code after the child = fork(); line is at least potentially executed twice, once by the child and once by the parent. The if (child == 0) { ... } construct (again, this is not a "declaration") is the standard idiom for making parent and child do different things.
EDIT: In your third code sample, yes, the child1 == 0 block is necessary, but not to ensure that the child is created. Rather, it is there to ensure that whatever you want child1 to do is done only in child1. Moreover, as written (and, again, assuming all calls succeed) you are creating three child processes, because the second fork call will be executed by both parent and child! You probably want something like this instead:
pid_t child1, child2;
int status;
child1 = fork();
if (child1 == -1) {
perror("fork");
exit(1);
}
else if (child1 == 0) {
execlp("program_to_run_in_child_1", (char *)0);
/* if we get here, exec failed */
_exit(127);
}
child2 = fork();
if (child2 == -1) {
perror("fork");
kill(child1, SIGTERM);
exit(1);
}
else if (child2 == 0) {
execlp("program_to_run_in_child_2", (char *)0);
/* if we get here, exec failed */
_exit(127);
}
/* control reaches this point only in the parent and only when
both fork calls succeeded */
if (waitpid(child1, &status, 0) != child1) {
perror("waitpid");
kill(child1, SIGTERM);
}
/* only use SIGKILL as a last resort */
kill(child2, SIGTERM);
FYI, this is only a skeleton. If I were writing code to do this for real (which I have: see for instance https://github.com/zackw/tbbscraper/blob/master/scripts/isolate.c ) there would be a whole bunch more code just to comprehensively detect and report errors, plus the additional logic required to deal with file descriptor management in the children and a few other wrinkles.
The fork call spawns a new process identical to the old one and returns in both processes.
This happens automatically so you don't have to take any actions.
But nevertheless, it is cleaner to check if the call indeed succeeded:
A value below 0 indicates failure. In this case, it is not good to call kill().
A value == 0 indicates that we are the child process. In this case, it is not very clean to call kill().
A value > 0 indicates that we are the parent process. In this case, the return value is our child. Here it is safe to call kill().
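A minimal sketch of those three cases (my own illustration):

#include <signal.h>
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
    pid_t child = fork();
    if (child < 0) {
        perror("fork");          /* failure: nothing to kill */
    } else if (child == 0) {
        /* child: would do its work here, then leave */
        _exit(0);
    } else {
        kill(child, SIGKILL);    /* parent: child holds the child's pid */
    }
    return 0;
}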
In your case, you even end up with 4 processes:
Your parent calls fork(), being left with 2 processes.
Both of them call fork() again, resulting in a new child process for each of them.
You should move the second fork() call into the branch where the parent code runs.
The child process begins some time after fork() has been called (there is some setup which happens in the context of the child).
You can be sure that the child is running when fork() returns.
So the code
pid_t child = fork();
kill (child, SIGKILL);
will kill the child. Note that in the child itself fork() returned 0, so the child would execute kill(0, SIGKILL), which does not refer to a single process: it sends SIGKILL to every process in the caller's process group. That is one more reason to branch on fork's return value before calling kill.
There is no way to tell whether the child might ever live long enough to execute its kill. Most likely it won't, since the Linux kernel will set up the process structure for the child and let the parent continue; the child will just be waiting in the ready list of processes, and the kill will then remove it again.
EDIT If fork() returns a value <= 0, then you shouldn't wait or kill.
I'm working on an exercise from the textbook "Operating System Concepts, 7th Edition", and I'm a bit confused about how fork() works. From my understanding, fork() creates a child process which runs concurrently with its parent. But then, how do we know exactly which process runs first? I mean the order of execution.
Problem
Write a C program using the fork() system call that generates the Fibonacci sequence in the child process. The length of the sequence will be provided on the command line.
This is my solution:
#include <sys/types.h>
#include <stdlib.h>
#include <stdio.h>
#include <unistd.h>
void display_fibonacci_sequence( int n ) {
int i = 0;
int a = 1;
int b = 1;
int value;
printf( "%d, %d, ", a, b );
for( ;i < n - 2; ++i ) {
value = a + b;
printf( "%d, ", value );
a = b;
b = value;
}
printf( "\n" );
}
int main( int argc, char** argv ) {
int n;
pid_t pid;
pid = fork();
if( argc != 2 ) {
fprintf( stderr, "Invalid arguments" );
exit( -1 );
}
n = atoi( argv[1] );
if( pid < 0 ) {
fprintf( stderr, "Fork failed" );
exit( -1 );
}
else if( pid == 0 ) {
display_fibonacci_sequence( n );
}
else { // parent process
// what do we need to do here?
}
}
To be honest, I don't see any difference between using fork and not using fork. Besides, if I want the parent process to handle the input from user, and let the child process handle the display, how could I do that?
You are asking many questions, I'll try to answer them in a convenient order.
First question
To be honest, I don't see any difference between using fork and not
using fork.
That's because the example is not a very good one. In your example the parent doesn't do anything so the fork is useless.
Second
else {
// what do we need to do here?
}
You need to wait(2) for the child to terminate. Make sure you read that page carefully.
Third
I want the parent process to handle the input from user, and let the
child process handle the display
Read the input before the fork and "handle" the display inside if (pid == 0). A sketch combining this with the wait from the second point follows.
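This sketch reuses the question's display_fibonacci_sequence; add #include <sys/wait.h> to the question's includes for wait:

int main(int argc, char **argv)
{
    if (argc != 2) {                    /* validate arguments before forking */
        fprintf(stderr, "Invalid arguments\n");
        exit(-1);
    }
    int n = atoi(argv[1]);              /* read the input before the fork */
    pid_t pid = fork();
    if (pid < 0) {
        fprintf(stderr, "Fork failed\n");
        exit(-1);
    } else if (pid == 0) {
        display_fibonacci_sequence(n);  /* child handles the display */
    } else {
        wait(NULL);                     /* parent waits for the child */
    }
    return 0;
}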
Fourth
But then, how do we know exactly which process runs first?
Very few programs should concern themselves with this. You can't know the order of execution; it's entirely dependent on the environment. TLPI says this:

After a fork(), it is indeterminate which process—the parent or the child—next has access to the CPU. On a multiprocessor system, they may both simultaneously get access to a CPU.

Applications that implicitly or explicitly rely on a particular sequence of execution in order to achieve correct results are open to failure due to race conditions.
That said, an operating system can allow you to control this order. For instance, Linux has /proc/sys/kernel/sched_child_runs_first.
We don't know which runs first, the parent or the child. This is why the parent generally has to wait for the child process to complete if there is some dependency on order of execution between them.
In your specific problem, there isn't any particular reason to use fork(). Your professor probably gave you this just for a trivial example.
If you want the parent to handle input and the child to calculate, all you have to do is move the call to fork() below the point at which you handle the command-line args. Using the same basic logic as above, have the child call display_fibonacci_sequence, and have the parent simply wait
The process which is selected by your system scheduler is chosen to run, not unlike any other application running on your operating system. The process spawned is treated like any other process where the scheduler assigns a priority or spot in queue or whatever the implementation is.
But then, how do we know exactly which process runs first? I meant the
order of execution.
There is no guarantee as to which one runs first. fork returns 0 if it is the child and the pid of the child if it is the parent. Theoretically they could run at exactly the same time on a multiprocessor system. If you actually wanted to determine which ran first, you could have a shared lock between the two processes; the one that acquires the lock first could be said to have run first.
In terms of what to do in your else statement. You'll want to wait for the child process to exit using wait or waitpid.
To be honest, I don't see any difference between using fork and not using fork.
The difference is that you create a child process. Another process on the system doing computation. For this simple problem the end user experience is the same. But fork is very different when you are writing systems like servers that need to deal with things concurrently.
Besides, if I want the parent process to handle the input from user, and let the child process handle the display, how could I do that?
You appear to have that setup already. The parent process just needs to wait for the child process to finish. The child process will printf the results to the terminal. And the parent process currently gets user input from the command line.
While you cannot control which process (parent or child) gets scheduled first after the fork (in fact on SMP/multicore it might be both!) there are many ways to synchronize the two processes, having one wait until the other reaches a certain point before it performs any nontrivial operations. One classic, extremely portable method is the following:
Prior to fork, call pipe to create a pipe.
Immediately after fork, the process that wants to wait should close the writing end of the pipe and call read on the reading end of the pipe.
The other process should immediately close the reading end of the pipe, and wait to close the writing end of the pipe until it's ready to let the other process run. (read will then return 0 in the other process)
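A minimal sketch of that classic handshake (my own illustration): the waiter blocks in read until the other process closes the writing end.

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    int fds[2];
    char byte;
    if (pipe(fds) == -1) { perror("pipe"); exit(1); }
    pid_t pid = fork();
    if (pid == 0) {             /* the process the other one waits for */
        close(fds[0]);          /* close the reading end immediately */
        /* ... perform the setup the sibling must wait for ... */
        close(fds[1]);          /* this releases the read() below */
        _exit(0);
    }
    close(fds[1]);              /* waiter: close the writing end first */
    read(fds[0], &byte, 1);     /* blocks; returns 0 once the child closes */
    close(fds[0]);
    printf("the other process reached its checkpoint\n");
    return 0;
}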