I am writing a program in which several processes each take an array, sort it, and print it. What I would like is for each process to sort one line of integers that the main process hands to it, print the result, and then send it back to the main process. The algorithm works fine without processes and forking, but once I add forking, some processes print or execute certain instructions more than once. Please let me know how to manage this.
Here is the code:
if (N <= NumberOfLines)
{
    PortionOfProcess = NumberOfLines / N;
    for (int i = 0; i < N; i++) // making N processes using fork
    {
        for (int j = 0; j < PortionOfProcess; j++) // each process works through its portion
        {
            int pointer = i * PortionOfProcess + j; // this points to the line for each process
            printf("pointer: %d . the i is %d, the j is: %d and the portionprocess is : %d\n", pointer, i, j, PortionOfProcess);
            fileopener(B_result, pointer);
            mypid = fork();
            if (mypid == 0) // child
            {
                // do the sorting
                for (int j = 0; j < (y - 1); j++)
                {
                    for (int i = 0; i < (y - 1); i++)
                    {
                        if (B_result[i + 1] < B_result[i])
                        {
                            t = B_result[i];
                            B_result[i] = B_result[i + 1];
                            B_result[i + 1] = t;
                        }
                    }
                }
                for (int j = 0; j < y; j++)
                {
                    printf("SORTED %d \n", B_result[j]);
                }
                // end sorting
            }
        }
    }
}
Here is what fork() does: it creates an entire new copy of the process that, in most ways, is completely independent of the original. However, the original parent process does not wait for the children to finish, nor does it have any way of communicating with them.
What you want to do is actually quite complex. The parent and child processes need to establish some sort of communications channel. This is most usually done by creating a pipe between them. The child then writes to the pipe like a normal file and the parent reads from the pipe. The logic will look something like this:
create pipe
fork
if parent, close the write end of the pipe
if child, close the read end of the pipe
The children then do their stuff and exit normally. The parent, however, has a load of pipes to read and doesn't know which order the results will arrive in. In your case the children are fairly simple, so you could probably just read from each pipe in the order you created it, but you may also want to look at select so that you read the results in the order they become ready.
Finally, you need to call wait or waitpid so that you get the return status of each child and do not end up with zombie processes. Be careful here: with the parent blocking on input from various pipes, any mistake you make could lead to it waiting forever (or until killed).
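Here is a minimal sketch of that logic in C with a single child, assuming the child sends back a small fixed-size array of ints (the array contents just stand in for one sorted line):

#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int fds[2];
    if (pipe(fds) == -1) { perror("pipe"); return 1; }

    pid_t pid = fork();
    if (pid == -1) { perror("fork"); return 1; }

    if (pid == 0)                    /* child: write end only */
    {
        close(fds[0]);
        int result[3] = { 1, 2, 3 }; /* stands in for the sorted line */
        write(fds[1], result, sizeof result);
        close(fds[1]);
        _exit(0);
    }

    close(fds[1]);                   /* parent: read end only */
    int buf[3];
    if (read(fds[0], buf, sizeof buf) == (ssize_t)sizeof buf)
        printf("child sent: %d %d %d\n", buf[0], buf[1], buf[2]);
    close(fds[0]);
    waitpid(pid, NULL, 0);           /* reap the child */
    return 0;
}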
I want to know the number of processes created by the code below. My instructor says the answer is 41, but I am unable to arrive at it. Please explain it with a process tree.
void main() {
    for (i = 0; i < 2; i++) {
        fork();
        if (!fork()) {
            execl("/bin/ls", "ls", NULL);
            fork();
        }
        fork();
    }
    fork();
}
This looks like a homework question. If we drew a process tree for you, you might get some points now, but you would not learn how to analyze a program, and this may hurt you later. You will learn more by understanding how the program works. (Of course, this program is an academic example and not very useful except for learning.)
I suggest marking the fork calls with letters:
int main(void) {
    for (int i = 0; i < 2; i++) {
        fork();                       /* A */
        if (!fork()) {                /* B */
            execl("/bin/ls", "ls", NULL);
            fork();                   /* C */
        }
        fork();                       /* D */
    }
    fork();                           /* E */
}
Take paper and pencil, write down what happens and draw a tree using the loop counter and the marked fork calls.
Example:
The program runs a loop for two cycles (0 and 1); the loop continues in all processes.
In parent P, loop cycle 0, fork A will create child 1.
P -(0,A)-> 1
Still in loop cycle 0, both P and 1 will run the fork B inside the condition, creating a new child each.
P -(0,B)-> 2, 1 -(0,B)-> 3.
Think about the meaning of the condition and decide which processes run the conditional block.
Think about what happens after execl, e.g. process x executes ls, resulting in ...
Some processes (name them) will reach D and create a child each; all will continue with loop cycle 1...
etc.
To see what happens you could add some output after every fork: which loop index, which fork, whether the process is the parent or the child of that fork, the PID, and the parent PID. And before the execl, display which PID is about to call it. (Note that buffered output like printf may show unexpected behavior in combination with fork, so it might be better to use sprintf and write.) Running the program will produce output that can help you draw a process tree. It is even possible to format the output so that a tree can be generated automatically using graphviz or PlantUML. All of these are advanced topics.
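For example, here is a minimal logging helper along those lines (the name log_fork and the output format are just suggestions; it uses snprintf, a bounded variant of the sprintf mentioned above):

#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

/* Log one fork site without stdio buffering surprises:
   letter identifies the fork call (A..E), i is the loop cycle,
   pid is the value fork() returned (0 in the child). */
static void log_fork(char letter, int i, pid_t pid)
{
    char buf[128];
    int len = snprintf(buf, sizeof buf,
                       "fork %c: cycle=%d role=%s pid=%ld ppid=%ld\n",
                       letter, i, pid == 0 ? "child" : "parent",
                       (long)getpid(), (long)getppid());
    if (len > 0)
        write(STDOUT_FILENO, buf, len);
}

It would be called at each marked site, e.g. pid_t p = fork(); log_fork('A', i, p);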
I want to read a file from several different processes, but when I try it, the first child reads the whole file, so the other processes cannot read anything. For example, I create 3 different processes with process ids 101, 102 and 103.
a read from = 101.
b read from = 101.
c read from = 101.
d read from = 101.
But I wanted the reads to be spread across the processes, like this:
a read from = 101.
b read from = 103.
c read from = 102.
d read from = 103.
I tried to solve it using a semaphore and a mutex, but I couldn't make it work. Could you help me, please?
int i = 0, j = 0, pid;
char buffer[100];

for (i = 0; i < 3; i++) {
    pid = fork();
    if (pid == 0) {
        sem_wait(&mutex); // sem_t mutex is global.
        while (read(fd, &buffer[j], 1) == 1) {
            printf("%c read from = %d\n", buffer[j], getpid());
            j++;
        }
        sem_post(&mutex);
        exit(0);
    }
    else {
        wait(NULL);
    }
}
The problem is that even though each process has its own file descriptor, those file descriptors all share the same open file description ('descriptor' != 'description'), and the read position is stored in the file description, not the file descriptors. Consequently, when any of the children reads the file, it moves the file pointer for all the children.
For more information about this, see the POSIX specifications for:
open()
dup2()
fork()
No mutex or other similar gadget is going to fix this problem for you — at least, not on its own. The easiest fix is to reopen the file in the child processes so that each child has a separate open file description as well as its own file descriptor. Alternatively, each child will have to use a mutex, rewind the file, read the data, and release the mutex when done. It's simpler to (re)open the file in the child processes.
Note that the mutex must be shared between processes for it to be relevant. See the POSIX specification for pthread_mutexattr_setpshared(). That is not set with the default mutex attribute values.
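A minimal sketch of the reopen-in-the-child approach (the filename data.txt is a placeholder): because the open() happens after the fork, each child gets a fresh open file description with its own offset.

#include <fcntl.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    for (int i = 0; i < 3; i++)
    {
        if (fork() == 0)
        {
            /* opened after fork: this child's offset is
               independent of its siblings' offsets */
            int fd = open("data.txt", O_RDONLY);
            if (fd == -1) { perror("open"); _exit(1); }
            char c;
            while (read(fd, &c, 1) == 1)
                printf("%c read from = %d\n", c, getpid());
            close(fd);
            _exit(0);
        }
    }
    while (wait(NULL) > 0) /* reap all children */
        ;
    return 0;
}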
You have two problems that prevent what you appear to want, and that result in the entire file being read by (just) the first child:
the parent process waits for each child immediately after creating it, before forking any more children. So after creating the first child, it will wait for that child to exit before looping and creating a second child. To fix that, you need two loops in the parent: the first just creates the children and the second waits for them:
for (...) {
    if (fork() == 0) {
        // run the child
        exit(0);
    }
}
for (...)
    wait(NULL); // wait for a child
your reading loop is inside the sem_wait/sem_post pair. So the first child will take the mutex, then proceed to read the entire file before releasing it. Subsequent children will not get the mutex until the file is fully read, so they'll see they're at EOF and exit. To fix this you need to move the sem_wait/sem_post calls inside the while loop:
while (!done) {
    sem_wait(&mutex);
    if (read(...) == 1) {
        ...
    } else {
        done = true;
    }
    sem_post(&mutex);
}
You might not even need the semaphore at all -- the kernel will synchronize reads between the different processes, so each byte will be read by exactly one child. This would allow the children to proceed in parallel (processing the bytes) rather than only allowing one child at a time to run.
Of course, even with the above, one child may process many bytes before another child starts to run and process them, so if the processing is fast and there are few bytes, they might still all be consumed by the first child.
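Putting the two fixes together, a minimal sketch without the semaphore, relying on the shared offset as just described (the filename data.txt is a placeholder):

#include <fcntl.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    /* opened before fork: all children share this offset */
    int fd = open("data.txt", O_RDONLY);
    if (fd == -1) { perror("open"); return 1; }

    for (int i = 0; i < 3; i++) /* first loop: only create children */
    {
        if (fork() == 0)
        {
            char c;
            /* each 1-byte read atomically advances the shared offset,
               so every byte is consumed by exactly one child */
            while (read(fd, &c, 1) == 1)
                printf("%c read from = %d\n", c, getpid());
            _exit(0);
        }
    }

    for (int i = 0; i < 3; i++) /* second loop: wait for them all */
        wait(NULL);

    close(fd);
    return 0;
}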
I've been searching around on this topic, but with how many different techniques there are it's quite confusing, and I'm not sure how to approach my problem.
I have a function that computes some value based on random numbers. I want to compute that value multiple times, say a few dozen or a few hundred, and take the average. Since it takes quite a while, I wanted to use multiprocessing: each process executes the function and saves the result, then the main process simply sums the results and divides by the number of worker processes.
Quite simple in theory, but I have no idea how to do it in practice. It seems that a simple way would be to just do something like
loop that creates pipes
if (fork())
loop that reads the outputs of pipes
else
code of function that computes the desired value
but that somehow seems wrong? I'm really not sure how to do it
EDIT:
To address the comments, I've been thinking about something like this:
for (int i = 0; i < n_children; ++i) {
    if (fork() == 0) { // child process
        x += estimation();
    }
}
for (int i = 0; i < n_children; ++i) // waiting for each process to end
    wait(NULL);
x /= n_children;
but I know that it won't work properly; I don't know how to store/synchronize the results.
As William Pursell mentioned in the comments, a single pipe is what you want. The parent will close the write end, and each forked child will close the read end. Each child writes its result to the pipe. The parent calls wait(2) on each child and, if the status indicates data was written to the pipe, reads the pipe and updates the average.
It could also be done with POSIX anonymous shared memory. Allocate an array of results in shared memory. Each child has a unique value of the loop variable i when its process is created; the child writes to array[i]. The parent waits for each child, and when they have all completed, iterates over the array and computes the average.
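A minimal sketch of the single-pipe version, assuming estimation() returns a double (a write of sizeof(double) bytes is far below PIPE_BUF, so the children's writes will not interleave):

#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

double estimation(void); /* the question's function, defined elsewhere */

int main(void)
{
    enum { N_CHILDREN = 8 };
    int fds[2];
    if (pipe(fds) == -1) { perror("pipe"); return 1; }

    for (int i = 0; i < N_CHILDREN; ++i)
    {
        if (fork() == 0)
        {
            close(fds[0]);               /* child: write end only */
            double r = estimation();
            write(fds[1], &r, sizeof r); /* one small, atomic write */
            close(fds[1]);
            _exit(0);
        }
    }

    close(fds[1]); /* parent must close its write end, or read never sees EOF */
    double sum = 0.0, r;
    int got = 0;
    while (read(fds[0], &r, sizeof r) == (ssize_t)sizeof r)
    {
        sum += r;
        got++;
    }
    close(fds[0]);
    while (wait(NULL) > 0)
        ;
    if (got > 0)
        printf("average over %d runs: %f\n", got, sum / got);
    return 0;
}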
Say I have a function called worker(int in, int out) that performs some task based on the given in file descriptor and takes the result and writes it to out.
It might look something like:
while (read(in, buffer, some_max_length) > 0) {
    // Do something with buffer
    .
    .
    write(out, some_other_info, some_size);
}
Say I also have a main process that spawns a variable number of processes; it might look something like:
// Assume I have done error checking.
array_of_write_pipes[n][2];
array_of_read_pipes[n][2]; // Assume these have already been populated.

while (there is a word from stdin) {
    // Spawn n processes
    for (int i = 0; i < n; i++) {
        n = fork();
        if (n == 0) {
            write(array_of_read_pipes[i][1], some_data, some_max_length);
            worker(some_other_data, array_of_read_pipes[i][0], array_of_write_pipes[i][1]);
            exit(0);
        }
    }
    // Block of code waiting for all children to terminate.
    // Collect the results of each child process
    for (i = 0; i < n; i++) {
        while (read(array_of_write_pipes[i][0], buffer, some_max_length) > 0) {
            // Do something with buffer
            .
            .
        }
    }
}
At the moment, it seems to hang.
My end goal is to have the main process send the same word to each child process. Each child process then performs the task via worker(). Then, the child process sends its results back to the main process for further processing.
At this moment, I'm not sure if I'm even remotely going in the right direction.
I tried to keep this question as general as possible except for the parts dealing with piping. If I'm missing any information, please let me know. Just a disclaimer: this is a homework-related problem and I do not want the full answer, just whether or not my thought process is correct. If not, what am I missing?
Any help is appreciated.
So I'm forking a couple of child processes, and each of them is supposed to take a line that I've read from a file and do operations on it.
What I have is a struct containing the line, like:
struct query {
    char lines[LINESIZE];
};
and I have an array of these structs, so each struct serves one child process.
This is how I forked my child processes :
for (i = 0; i < 5; i++) {
    n = fork();
}
And say I have five structs, one for each of these processes.
struct query query[5];
So the first process takes query[0].lines and does some operations on it, the second process gets query[1].lines and does the same operations, and so on...
Should I use a pipe to pass values between processes? I feel like there's a much simpler solution to this, but my lack of practice and knowledge in C is really slowing me down.
I suppose you're trying to spawn 5 processes, but with the code that you posted you'll end up creating way more than 5; in fact, in:
for (i = 0; i < 5; ++i) {
    n = fork();
}
when i = 0 you fork a process; since the forked child is an exact copy of the parent, it continues the for loop, so at that point you have two processes, each with i = 1, and each forking a new process. After that iteration you have 4 processes, and by the time the loop completes you have 2^5 = 32 processes.
Allocating and initializing the array query before forking is perfectly fine; what you have to fix is the spawning. The fork() call returns 0 in the child process, the process id of the child in the parent process, or -1 if there was an error. Knowing whether the current process is the parent or the child, we can continue or break out of the loop and do the computation:
for (i = 0; i < 5; ++i) {
    if (fork() == 0) {
        /* child process */
        process_query(query[i]);
        exit(0);
    }
}
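After the spawning loop the parent should also reap its children so they don't become zombies; a minimal fragment in the same style (it needs <sys/wait.h>):

/* in the parent, after the spawning loop */
for (i = 0; i < 5; ++i)
    wait(NULL); /* collect each child's exit status */

As for the pipe question: each child already receives its own copy of the query array at fork time, so no pipe is needed to hand out the input; you would only need pipes (or shared memory) if the children's results had to travel back to the parent.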