C Processes Programming [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 9 years ago.
I'm taking an Operating Systems class for university and we have an assignment as follows:
Write a program that can be used to create a child process.
The child process should create a file called “Listx.txt” and ask the user for data to write to it. The parent process should read the data from the file and display it on the screen.
Modify the program to make the parent read the file and display the contents five times. It should pause for 1 second between each display.
Modify the program to make the parent read the file and display the contents over and over again until the user sends SIGSTOP. It should pause for 1 second between each display.
And this is the code I've come up with:
#include <unistd.h>
#include <stdlib.h>
#include <stdio.h>
#include <fcntl.h>

int main()
{
    int x;
    int y = 0;
    pid_t pid = fork();
    if (pid==0)
    {
        printf("Hi, i am the child\n");
        int fd;
        fd = open("listx.txt", O_RDWR |O_CREAT |O_TRUNC);
        printf ("enter Number");
        scanf("%d\n",x);
        char wd [100];
        ssize_t nr;
        wd[0]=x;
        nr = write (fd, wd, sizeof (wd));
    }
    else
        printf(" I am the parent, the child is %d\n",pid);
    {
        int fd;
        fd = open ("listx.txt", O_RDONLY);
        if (fd == -1)
        {
            printf("file not opened \n");
        }
        else
        {
            printf("file found \n");
        }
        char wd[100];
        ssize_t nr;
        nr = read (fd, wd, sizeof (wd));
        if (nr == -1)
        {
            printf("file not read \n");
        }
        else
        {
            while (y < 5){
                printf("The file has %s \n",wd);
                sleep(1);
            }
        }
        return 0;
    }
}
The program compiles (through GCC) but I think I have the logic wrong.
Could you kindly help me figure out where it goes wrong?

This:
scanf("%d\n",x);
char wd [100];
ssize_t nr;
wd[0]=x;
is rather wrong, in more ways than one:
You must pass &x to scanf(), since it can't store the value unless given an address. Instead you pass the current value of x, causing undefined behavior.
You assign the value of x into a single character, which is going to drop lots of bits. This is probably not what you want to do.
You use file descriptors even after detecting that they are not valid.
Please figure out how to maximize the diagnostics (warnings and errors) from your compiler, and observe what it says. Many of these problems will generate warnings. For GCC, this manual page is informative. Basically, start out by adding -Wall -Wpedantic -Wextra to your compiler invocation.
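For the first two points, the child's input-and-write part could be reshaped roughly like this (only a sketch; it assumes fd was opened successfully, and note that open() with O_CREAT also needs a third mode argument such as 0644):
int x;
char wd[100];

printf("enter Number: ");
if (scanf("%d", &x) != 1) {          /* pass the ADDRESS of x to scanf */
    fprintf(stderr, "bad input\n");
    _exit(1);
}
int len = snprintf(wd, sizeof(wd), "%d\n", x);   /* format the whole int as text */
if (write(fd, wd, len) == -1)
    perror("write");
close(fd);
_exit(0);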

exit the child (_exit(0))
wait in the parent until the child has finished (waitpid(2) et al.)

Apart from the scanf problems, I see printf(" I am the parent, the child is %d\n",pid); which I suspect you wanted inside the curly brackets.
Moreover, you need to ensure that the child has written the file before the parent starts reading, so the first instruction in the parent should be waitpid(pid,&status,0);, which waits for the child to terminate (and, indirectly, for the file to be written). Note that the fact that the child's code comes first in the source doesn't mean it will be executed first (I think this is what the exercise wants to highlight).
Another thing that you should always do as a good programmer is close your file after writing.
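Put together, the parent branch might then look roughly like the following sketch (minimal error handling; the file name and buffer size are taken from the question):
#include <sys/wait.h>   /* for waitpid() */

/* ... inside main(), in the parent branch ... */
int status;
waitpid(pid, &status, 0);                 /* wait until the child has written the file */

int fd = open("listx.txt", O_RDONLY);
if (fd == -1) {
    perror("open");
    return 1;                             /* don't keep using an invalid descriptor */
}

char wd[100];
ssize_t nr = read(fd, wd, sizeof(wd) - 1);
if (nr == -1) {
    perror("read");
} else {
    wd[nr] = '\0';                        /* make the buffer a proper string */
    for (int y = 0; y < 5; y++) {         /* display five times, one second apart */
        printf("The file has %s\n", wd);
        sleep(1);
    }
}
close(fd);                                /* close the file when done */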

Related

problem in writing to terminal after using execvp and dup2 syscalls

Line number 15, { printf("This goes to the terminal\n"); }, is not getting printed anywhere, neither in the terminal nor in the file.
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

//inputs argc = 3 :- ./executable_file output_file command
int main(int argc, char **argv)
{
if(argc < 3)
{
return 0;
}
int stdout_copy = dup(1);
int fd = open(argv[1], O_CREAT | O_RDWR | O_TRUNC, 0644);
if (fd < 0)
{
printf("ERROR\n");
return 0;
}
printf("This goes to the standard output(terminal).\n");
printf("Now the standard output will go to \"%s\" file .\n", argv[1]);
dup2(fd, 1);
printf("This output goes to \"%s\"\n",argv[1]);
close(fd);
execvp(argv[2],argv+2);
dup2(stdout_copy,1);
printf("This goes to the terminal\n");
return 0;
}
Apologies for the Previous Question :
I'm really sorry, it was my mistake in analysing it.
And special thanks for all answers and hints.
Neither:
execvp(argc[2],argc+2);
dup2(stdout_copy,1);
printf("This goes to the terminal\n");
Nor:
dup2(stdout_copy,1);
execvp(argc[2],argc+2);
printf("This goes to the terminal\n");
...will output to stdout if the call to execvp(argc[2],argc+2); succeeds.
However, both will output to stdout if it fails.
(Unless command line arguments are incorrect, dup2() likely has nothing to do with failure to output to stdout. See additional content below for how to check this.)
Read all about it here: execvp.
In a nutshell, execvp() replaces the current process with a new process. If it is successful the current process is no longer what you are viewing on the terminal. Only when it is not successful will the commands following it be executed.
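In other words, any code placed after execvp() is effectively the error path. A small sketch of that path, reusing stdout_copy and argv from the question's code:
execvp(argv[2], argv + 2);        /* on success this call never returns */

/* we only get here if execvp() failed */
dup2(stdout_copy, 1);             /* restore the terminal */
perror("execvp");                 /* errno explains why, e.g. "No such file or directory" */
return 1;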
The following suggestions are not precisely on-topic, but important nonetheless...
Change:
int main(int argv, char **argc)
To:
int main(int argc, char **argv) //or int main(int argc, char *argv[]), either are fine.
This is the foundation for seeing normal behavior. Anything else is very confusing to future maintainers of your code, and to people trying to understand what you are doing here.
These names are easily remembered by keeping in mind that argc is used for the count of command line arguments, and argv is the vector that is used to store them.
Also, your code shows no indications that you are checking/validating these arguments, but given the nature of your program, they should be validated before going on. For example:
//verify required number of command line arguments was entered
if (argc < 3) //requires at least two additional command line arguments (the output file and a command)
{
printf("Usage: prog.exe [path_filename]\nInclude path_filename and try again.\nProgram will exit.");
return 0;
}
//check if file exists before going on
if( access( argv[1], F_OK ) != -1 )
{
// file exists
} else {
// file doesn't exist
}
//do same for argv[2]
(The second example, checking whether a file exists in a Linux environment, is from here.)
BTW, knowing the command line arguments that were passed into the program would help to provide a more definitive answer here. Their syntax and content, and whether or not the files that they reference exist, determine how the call to execvp will behave.
Suggestions
It is generally good practice to always look at the return values of functions that have them. But execvp() has unique behavior: if it is successful it does not return, and if it fails it will always return -1. So in this case pay special attention to the value of errno for error indications, again all of which are covered in the link above.
As mentioned in comments (in two places), it is a good idea to use fflush(stdout) to empty buffers when intermixing standard I/O and file descriptor I/O, and before using any of the exec*() family of calls.
Take time to read the man pages for the functions and shell commands that are used. It will save time, and guide you during debugging sessions.
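Regarding the fflush(stdout) suggestion, the idea is simply to drain stdio's buffer before the descriptor underneath it is changed, and again before the process image is replaced. For example:
fflush(stdout);                   /* drain anything still buffered for the terminal */
dup2(fd, 1);                      /* now redirect stdout to the file */

printf("This output goes to \"%s\"\n", argv[1]);
fflush(stdout);                   /* drain again before the process image is replaced */
close(fd);
execvp(argv[2], argv + 2);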

merging two files with write and read in C [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 2 years ago.
I would like to merge two files using open and file descriptors. Moreover, I would like to separate the content of the first file with a line of - characters before writing the content of the second file.
I did the following :
void merge (char* fileName, char *fileName1) {
int fd = open(fileName, O_RDWR);
char c;
while (read(fd, &c, 1) > 0) {//going to the end of the first file
}
char next[] = "\n";
char charc[] = "-";
write (fd, next, strlen(next));
for (int i = 0; i < 80; i++) {
if (write (fd, charc, strlen(charc)) == -1) {
perror("error : ");
}
}
write (fd, next, strlen(next));
int fd1 = open(fileName1, O_RDWR);
while(read(fd1, &c, 1) > 0) {
write(fd, &c, sizeof(c));
}
close(fd1);
close(fd);
}
Is there a better way to write this code? Moreover, I have a little problem: even though it works, it seems I don't have the right to read the new file. For example, if I do cat newFile I get a permission denied.
Is there a better way to write this code?
You are not handling the errors of all calls. The syscalls open, write, read and close all return -1 on error and set errno, and they may do so at any time. EINTR in particular could be handled.
Instead of reading to the end of the first file, open has an O_APPEND flag that is made for appending data.
Copying one character at a time is far from optimal. With the glibc standard library you could copy BUFSIZ bytes at a time, a size chosen for fast I/O. More generally, copy a big chunk at a time whose size is a power of 2, like 2048 or 4096.
There is little reason to use file descriptors here - prefer standard FILE * handling, which would make your code portable and also buffer the data for faster I/O.
If you wish to create the file, use O_CREAT and add the third argument to open, which is the permission mask of the new file.
On Linux there is the splice(2) system call that can be used to append data on the kernel side for maximum efficiency.
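Putting several of these suggestions together, a sketch of the same merge using O_APPEND, a chunked copy, and an explicit permission mask could look like this (the 4096-byte buffer and the 0644 mode are just example choices, and short writes are still not handled):
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

void merge(const char *fileName, const char *fileName1)
{
    /* O_APPEND: every write lands at the end, no need to read through the file first;
       the 0644 mask gives a readable file if it has to be created */
    int fd = open(fileName, O_WRONLY | O_APPEND | O_CREAT, 0644);
    if (fd == -1) {
        perror("open");
        return;
    }
    int fd1 = open(fileName1, O_RDONLY);
    if (fd1 == -1) {
        perror("open");
        close(fd);
        return;
    }

    char sep[83];                         /* "\n" + 80 dashes + "\n" */
    sep[0] = '\n';
    memset(sep + 1, '-', 80);
    sep[81] = '\n';
    sep[82] = '\0';
    if (write(fd, sep, 82) == -1)
        perror("write");

    char buf[4096];                       /* copy a big chunk at a time */
    ssize_t n;
    while ((n = read(fd1, buf, sizeof(buf))) > 0) {
        if (write(fd, buf, n) == -1) {
            perror("write");
            break;
        }
    }
    if (n == -1)
        perror("read");

    close(fd1);
    close(fd);
}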

Chaos when fork() meets fopen()

I've found that an open file stream will get messed up if we fork() before closing it. It is well known that concurrency problems, i.e., race conditions, can happen when the parent and child process both want to modify the file stream. However, even when the child process doesn't ever touch the file stream, the behavior is still undefined. I was wondering if someone could explain this, maybe from the perspective of how the kernel deals with a file stream during the stages where the child process is forked and exits.
Below is a quick snippet of a strange behavior:
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
int main() {
// Open file
FILE* fp = fopen("test.txt", "r");
int count = 0;
char* buffer = NULL;
size_t capacity = 0;
ssize_t line = 0;
while ( (line = getline(&buffer, &capacity, fp)) != -1 ) {
if (line > 0 && buffer[line - 1] == '\n') // remove the end '\n'
buffer[line - 1] = 0;
pid_t pid = fork();
if (pid == 0) {
// fclose(fp); // Magic line here: when you add this, everything is fine
if (*buffer == '2')
execlp("xyz", "xyz", NULL);
else
execlp("pwd", "pwd", NULL);
exit(1);
} else {
waitpid(pid, NULL, 0);
}
count++;
}
printf("Loops: %d\n", count);
return 0;
}
Just copy the code into a new file (e.g., test.c). And create a .txt file test.txt with the simple content
1
2
3
4
and run
$ gcc test.c && ./a.out
There are 4 lines in the file. The loop is expected to read each line and execute exactly 4 times (1 2 3 4). And I chose to let it exec an invalid command "xyz" in the 2nd iteration. Then, you will find the loop actually executes 6 times (1 2 3 4 3 4)! The fact is that when all four commands executed are valid, nothing goes wrong. But if there is an invalid command executed, every command after it will be executed twice. (Please note that this strange behavior only occurs on my Linux machine; my Mac OS is doing okay, and I'm not sure about Windows. So is the problem platform-dependent?)
It looks like whenever I fork(), the file stream in the parent is no longer guaranteed to be the old fp (non-deterministic behavior), even when my child process doesn't touch it.
A temporary solution I found is to fclose(fp) in the child process. This silences the strange behavior above, but in more complex situations there are still other oddities that can be observed. I would appreciate it if somebody could give me some insight into this problem. Thanks.
As already said in the comments, you need to close open file descriptors before calling exec.
In this blog post (section 4) there is a neat code sample you can use to ensure all fds are closed, even in complex applications where you don't always know which files are open at the moment:
for ( i=getdtablesize(); i>2; --i)
close(i); /* close all descriptors */
(slightly modified to keep stdin, stdout, stderr open)
It's kind of hacky, but it works. If you want to avoid that, you can also set the O_CLOEXEC flag on each file descriptor that you open. Since you do not call open() directly when using fopen, you can accomplish this by adding the 'e' flag to the mode string (when using glibc >= 2.7):
FILE* fp = fopen("test.txt", "er");
When calling exec*() all file descriptors with this flag are automatically closed.
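For descriptors obtained directly from open() the same effect can be achieved with the O_CLOEXEC flag or with fcntl(), e.g.:
#include <fcntl.h>

int fd = open("test.txt", O_RDONLY | O_CLOEXEC);   /* closed automatically on exec */

/* or, for an already-open descriptor: */
fcntl(fd, F_SETFD, FD_CLOEXEC);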

Yet another minishell pipeline in C

As many before me, I'm trying to implement a basic shell in C. Overall things are working nicely, and I'm now trying to add pipes and redirections.
I've read a lot about the pipe() function and have successfully written a side program to pipe a function's output into a second function's input.
Where I have trouble is when it comes to looping over an undetermined number of functions.
Here's the last version of my function, as well as the main I use to test it:
char **g_env;
int ft_pipeline(char **cmd, unsigned int pos)
{
int in;
int pfd[2];
pid_t pid;
char **cur_cmd;
in = 0;
while (cmd[pos])
{
if (pipe(pfd) != 0)
return (1);
close(pfd[0]);
dup2(pfd[1], in);
close(pfd[1]);
pid = fork();
if (pid == -1)
return (2);
if (pid == 0) //child
{
close(pfd[1]);
dup2(pfd[0], 0);
close(pfd[0]);
cur_cmd = ft_strsplit_blank(cmd[pos]);
execve(cur_cmd[0], cur_cmd, g_env);
}
wait(NULL);
in = pfd[0];
pos++;
}
return (0);
}
int main(void)
{
char **cmd = ft_strsplit("/bin/ls -l /dev | /bin/grep std", '|');
g_env = NULL;
ft_pipeline(cmd, 0);
return (0);
}
In its current form, ls is properly executed but writes to stdout, and grep returns:
/bin/grep: (standard input): Bad file descriptor
This is the fifth time I've rewritten my code, and I've been tweaking it for a few days now. I've also read several other posts here to try to grasp the logic behind this small program, to no avail.
I'd really like it if you could tell me where I'm making the mistake and how I could fix it.
Note: You will very likely find many ways to improve this code's form. I know, but in most cases that's something I cannot do. Although this is not homework, it is still something I do for school (see it as voluntary practice), and I have to respect standards in the way I write my code and the functions I use.
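For reference, the usual wiring for such a loop is sketched below: the child duplicates the previous pipe's read end onto its stdin and the new pipe's write end onto its stdout, while the parent closes its copy of the write end and keeps only the read end for the next command. The sketch reuses the names from the question (cmd, pos, g_env, ft_strsplit_blank), needs <unistd.h> and <sys/wait.h>, and defers the waiting until after the loop so a full pipe cannot stall the parent; it is only a sketch, not a drop-in replacement.
int in = 0;                               /* the first command reads from stdin */
while (cmd[pos])
{
    int pfd[2];
    int has_next = (cmd[pos + 1] != NULL);
    if (has_next && pipe(pfd) != 0)
        return (1);
    pid_t pid = fork();
    if (pid == -1)
        return (2);
    if (pid == 0)                         /* child */
    {
        if (in != 0) {
            dup2(in, 0);                  /* previous pipe becomes stdin */
            close(in);
        }
        if (has_next) {
            close(pfd[0]);                /* child never reads the new pipe */
            dup2(pfd[1], 1);              /* new pipe becomes stdout */
            close(pfd[1]);
        }
        char **cur_cmd = ft_strsplit_blank(cmd[pos]);
        execve(cur_cmd[0], cur_cmd, g_env);
        _exit(127);                       /* reached only if execve failed */
    }
    if (in != 0)
        close(in);                        /* parent is done with the old read end */
    if (has_next) {
        close(pfd[1]);                    /* parent must close the write end...     */
        in = pfd[0];                      /* ...and keep the read end for next time */
    }
    pos++;
}
while (wait(NULL) > 0)
    ;                                     /* reap every child after the last fork */
return (0);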

C strange anomaly, when writing to file (works normally when writing to stdout)

I'm very new to C, so please bear with me. I have been struggling with this for a really long time and had a hard time narrowing down the cause of the error.
I noticed that when forking a process and writing to a file (only the original process writes to the file), a strange thing happens: the output is nearly multiplied by the number of forks. It's hard to explain, so I made a small test program that you can run to recreate the problem.
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h> /* for fork() */
void foo()
{
FILE* file = fopen("test", "w");
int i=3;
int pid;
while (i>0)
{
pid=fork();
if(pid==0)
{
printf("Child\n");
exit(0);
}
else if(pid > 0)
{
fputs("test\n", file);
i=i-1;
}
}
}
int main()
{
foo();
exit(EXIT_SUCCESS);
}
Compile and run it once the way it is and once with file=stdout. When writing to stdout the output is:
test
test
test
But when writing to the file the output is:
test
test
test
test
test
test
Also, if you add indexing and change i to a larger number, you can see some kind of pattern, but that doesn't help me.
Frankly, I have no idea why this could happen, nor how to fix it. But I am a total novice at C, so there might be a perfectly normal explanation for all this =).
Thank you for all your time and answers.
stdout is usually unbuffered or line buffered; other files are typically block buffered. You need to fflush() them before fork(), or every child will flush its own copy of the buffer, leading to this multiplication.
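Concretely, for the loop above, flushing right before each fork() is enough to make the file match the stdout case (a sketch of the changed part of foo()):
while (i > 0)
{
    fflush(file);            /* the child must not inherit buffered, unwritten data */
    pid = fork();
    if (pid == 0)
    {
        printf("Child\n");
        exit(0);
    }
    else if (pid > 0)
    {
        fputs("test\n", file);
        i = i - 1;
    }
}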
