I have an assignment requiring me to write a multi-process program that works with a memory-mapped file containing a string of characters. After the parent process maps the file into memory, it spawns two child processes to modify the file. Child 1 outputs the contents of the file, converts the contents to upper case, then outputs the file's new contents. Child 2 waits 1 second to let child 1 finish, outputs the file's contents, replaces any hyphen '-' characters with spaces, then outputs the file's new contents. My problem with both child processes is that after first displaying the file's contents, each process attempts to modify the file, but neither child outputs the file's new contents. I get no errors when compiling or running, so I can't figure out what the problem is. And of course, I'm new to memory mapping, so feel free to let me know what I'm doing wrong. Here is my source code:
#include <stdio.h>
#include <unistd.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/mman.h>
#include <fcntl.h>
#include <sys/stat.h>
#include <signal.h>
#include <string.h>
int main (int argc, char *argv[]) {
    struct stat buf;
    int fd, length, status, i, j, k;
    char *mm_file;
    char *string = "this is a lowercase-sentence.";
    length = strlen(string);
    fd = open(argv[1], O_CREAT | O_RDWR, 0666); //Creates file with name given at command line
    write(fd, string, strlen(string)); //Writes the string to be modified to the file
    fstat(fd, &buf); //used to determine the size of the file
    //Establishes the mapping
    if ((mm_file = mmap(0, (size_t) buf.st_size, PROT_READ|PROT_WRITE, MAP_SHARED, fd, 0)) == (caddr_t) - 1) {
        fprintf(stderr, "mmap call fails\n");
    }
    //Initializes child processes
    pid_t MC0;
    pid_t MC1;
    //Creates series of child processes which share the same parent ID
    if ((MC0 = fork()) == 0) {
        printf("Child 1 %d reads: \n %s\n", getpid(), mm_file);
        //{convert file content to uppercase string};
        for (i = 0; i < length; i++) {
            string[i] = toupper(string[i]);
        }
        //sync the new contents to the file
        msync(0, (size_t) buf.st_size, MS_SYNC);
        printf("Child 1 %d reads again: \n %s\n", getpid(), mm_file);
        exit(EXIT_SUCCESS); //Exits process
    } else if ((MC1 = fork()) == 0) {
        sleep(1); //so that child 2 will perform its task after child 1 finishes
        printf("Child 2 %d reads: \n %s\n", getpid(), mm_file);
        //{remove hyphens}
        for (j = 0; j < length; i++) {
            if (string[i] == '-') {
                string[i] = ' ';
            }
        }
        //sync the new contents to the file
        msync(0, (size_t) buf.st_size, MS_SYNC);
        printf("Child 2 %d reads again: \n %s\n", getpid(), mm_file);
        exit(EXIT_SUCCESS); //Exits process
    }
    // Waits for all child processes to finish before continuing.
    waitpid(MC0, &status, 0);
    waitpid(MC1, &status, 0);
    return 0;
}
Then my output is as follows:
**virtual-machine:~$** ./testt file
Child 1 3404 reads:
this is a lowercase-sentence.
Child 2 3405 reads:
this is a lowercase-sentence.
All child processes have finished. Now exiting program.
**virtual-machine:~$**
But my desired result would be:
**virtual-machine:~$** ./testt file
Child 1 3404 reads:
this is a lowercase-sentence.
Child 1 3404 reads again:
THIS IS A LOWERCASE-SENTENCE.
Child 2 3405 reads:
THIS IS A LOWERCASE-SENTENCE.
Child 2 3405 reads again:
THIS IS A LOWERCASE SENTENCE.
All child processes have finished. Now exiting program.
**virtual-machine:~$**
Any help is greatly appreciated.
There are a few errors here. Firstly, you write into the file and then map it into memory. The mapping is correct, but the writing is not. If the string has n characters, you have to write n+1 characters, since strings in C are null-terminated. Right now you only write n, so all C string functions will try to access at least one more byte, which is not good. And if that one extra byte is not null (zero), the functions will go even further. In debug mode those bytes might happen to be zeroed, but in optimized code usually not. So you have to use
write(fd, string, strlen(string)+1); //Writes the string to be modified to the file
Then you do this:
for (i = 0; i < length; i++) {
string[i] = toupper(string[i]);
}
This only changes the data that is referred by the pointer string, which has nothing to do with the memory mapped file. You should have:
for (i = 0; i < length; i++) {
mm_file[i] = toupper(mm_file[i]);
}
The same applies to the second child process.
Also your msync() call is a bit suspect. You give the memory address as 0, which is not within your memory mapped file, so it will not sync the contents. You need to call msync(mm_file, (size_t) buf.st_size, MS_SYNC);
Also, many compilers will put string literals into read-only memory, so you might not even be allowed to change the data referred to by string. In this case it seems you are allowed.
Also remember that the length of the file is one byte larger than the length of the string, so use the variables consistently. Currently you do, since you sync the file with the file length and handle the string with the string length.
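Putting those corrections together, child 1 might look roughly like this (a sketch only, reusing the question's variable names; toupper() also needs <ctype.h>):

if ((MC0 = fork()) == 0) {
    printf("Child 1 %d reads: \n %s\n", getpid(), mm_file);
    for (i = 0; i < length; i++) {
        mm_file[i] = toupper(mm_file[i]); // modify the mapping, not the string literal
    }
    msync(mm_file, (size_t) buf.st_size, MS_SYNC); // sync the mapped region itself
    printf("Child 1 %d reads again: \n %s\n", getpid(), mm_file);
    exit(EXIT_SUCCESS);
}

With MAP_SHARED the modified bytes are visible to the other process even before msync(); the msync() call mainly ensures the changes are written back to the file.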
You have let the memory mapping get in the way of the logic.
To get this working, comment out all the memory-mapping code and just work on the file.
This will show you that neither child reads from the input file, never mind writing new contents to it.
On some operating systems, Linux being one, if you are mixing reads and writes on a stream you need a seek between them to keep the read and write pointers in the same position. This may have to be fseek(stream, 0, SEEK_CUR);
Child one should be something like
// Lock file here
rewind(file);
printf("child 1 reads ");
int ch;
while (1) {
    ch = fgetc(file);
    if (ch == EOF) break;
    fputc(ch, stdout);
}
fputc('\n', stdout);
rewind(file);
while (1) {
    ch = fgetc(file);
    if (ch == EOF) break;
    fseek(file, -1, SEEK_CUR);
    fputc(toupper(ch), file);
    fseek(file, 0, SEEK_CUR);
}
rewind(file);
printf("child 1 reads again ");
while (1) {
    ch = fgetc(file);
    if (ch == EOF) break;
    fputc(ch, stdout);
}
fputc('\n', stdout);
// Unlock file here
Because you have multiple processes acting on the same object, you have to implement write locking (exclusive locks).
Read the man pages flock(2), fcntl(2) and lockf(3).
These cooperative locks could also be implemented as a semaphore.
Without locking, both children may try to write to the same character simultaneously; in this example it shouldn't matter, as one child handles hyphens and the other letters.
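For example, the cooperative locking around each child's read/modify/rewrite pass could look something like this (just a sketch, assuming file is the stdio stream each child has open on the shared file; flock() is declared in <sys/file.h>):

int fd = fileno(file);   /* descriptor behind the stdio stream */
flock(fd, LOCK_EX);      /* take an exclusive (write) lock */
/* ... read, modify and rewrite the file as above ... */
fflush(file);            /* push stdio buffers out before unlocking */
flock(fd, LOCK_UN);      /* release the lock */

fcntl(2) record locks or lockf(3) would do the same job; all of these are advisory, so both children must take the lock for it to mean anything.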
Once that's working, uncomment your memory-mapping code.
Related
The aim is to create a program that uses 10 processes (the original and 9 child processes) to concurrently write to an "output.txt" file.
The idea is that each process writes a character string with a decimal digit repeated 5 times. So the initial process will write 5 zeros ("00000"), the first child process 5 ones ("11111"), the second 5 twos ("22222"), and so on.
So the content of the file at the end should be: 00000111112222233333444445555566666777778888899999.
But when I checked whether the program worked correctly by executing it 10 times in a row, the results were not what I was expecting, because the program did not print some numbers in different executions.
This is my code.
numbers.c
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/stat.h>
#include <sys/wait.h>

int main(void)
{
    int fd1, fd2, i, pos;
    char c;
    char buffer[6];

    fd1 = open("output.txt", O_CREAT | O_TRUNC | O_RDWR, S_IRUSR | S_IWUSR);
    write(fd1, "00000", 5);
    for (i = 1; i < 10; i++) {
        pos = lseek(fd1, 0, SEEK_CUR);
        if (fork() == 0) {
            /* Child */
            sprintf(buffer, "%d", i*11111);
            lseek(fd1, pos, SEEK_SET);
            write(fd1, buffer, 5);
            close(fd1);
            exit(0);
        } else {
            /* Parent */
            lseek(fd1, 5, SEEK_CUR);
        }
    }
    //wait for all children to finish
    while (wait(NULL) != -1);
    lseek(fd1, 0, SEEK_SET);
    printf("File contents are:\n");
    while (read(fd1, &c, 1) > 0)
        printf("%c", (char) c);
    printf("\n");
    close(fd1);
    exit(0);
}
Then I checked whether the program worked correctly by executing it 10 times in a row, and these were the results:
$ for i in $(seq 10); do ./numbers ; done
File contents are:
0000011111222223333355555666668888899999
File contents are:
00000111112222255555666668888899999
File contents are:
0000011111222223333355555666668888899999
File contents are:
00000111112222244444666667777799999
File contents are:
00000444447777755555666668888899999
File contents are:
00000222224444455555777778888899999
File contents are:
0000011111222224444455555777778888899999
File contents are:
0000011111222225555544444888887777799999
File contents are:
0000011111222224444455555777778888899999
File contents are:
0000011111222225555544444888887777799999
I think some processes are not working correctly, so what am I doing wrong?
As I noted in a comment:
You're running into erratic scheduling and some dubious assumptions. You should probably calculate the position where process i should write and do a positioned write (using pwrite()) to that position.
The processes all share one open file description (which is a separate structure hidden behind the open file descriptors), so when a child moves the write position by using write() or lseek(), it affects all the processes. The pwrite() function doesn't move the current file position and guarantees atomic writes.
Your code could become:
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    const char filename[] = "output.txt";
    char buffer[6];
    int fd1 = open(filename, O_CREAT | O_TRUNC | O_RDWR, S_IRUSR | S_IWUSR);
    if (fd1 < 0)
    {
        fprintf(stderr, "failed to open/create file '%s'\n", filename);
        exit(EXIT_FAILURE);
    }
    write(fd1, "00000", 5);
    for (int i = 1; i < 10; i++) {
        int offset = 5 * i;
        if (fork() == 0) {
            /* Child */
            sprintf(buffer, "%d", i*11111);
            pwrite(fd1, buffer, 5, offset);
            close(fd1);
            exit(0);
        }
    }
    while (wait(NULL) != -1)
        ;
    lseek(fd1, 0, SEEK_SET);
    printf("File contents are:\n");
    while (read(fd1, buffer, 5) > 0)
        printf("%.*s", 5, buffer);
    printf("\n");
    close(fd1);
    exit(0);
}
The output is predictable and boring:
File contents are:
00000111112222233333444445555566666777778888899999
You could get more interesting results if one or more of the children failed to do the writing; then there'd be null bytes in the file in place of the digits that should have been written (except for the last process).
If, for some unstated reason, you are not allowed to use pwrite(), then life is a lot harder. You should eliminate the extraneous lseek() operations in the parent. You can make the code close to deterministic by using nanosleep() and having each child sleep for i milliseconds before it seeks and writes. Or you could use a formal synchronization mechanism of some sort: a semaphore, or a pthread_mutex_t (pthread_mutex_init()) mutex with the 'process shared' attribute set (pthread_mutexattr_setpshared()). If those aren't allowed either and the timing trick isn't acceptable, then you are definitely up against a wall. It can be done, but there comes a point at which the prohibitions are pointless.
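As a rough illustration of that timing trick (an approximation only, not part of the recommended pwrite() solution; nanosleep() is declared in <time.h>), each child could stagger itself like this before seeking and writing:

if (fork() == 0) {
    /* Child: sleep i milliseconds so the writes tend to happen in order */
    struct timespec delay;
    delay.tv_sec = 0;
    delay.tv_nsec = i * 1000000L;
    nanosleep(&delay, NULL);
    sprintf(buffer, "%d", i*11111);
    lseek(fd1, 5 * i, SEEK_SET); /* seek to this child's own slot */
    write(fd1, buffer, 5);
    close(fd1);
    exit(0);
}

Because all the processes still share one file position, the sleep only makes collisions unlikely rather than impossible; the pwrite() version above is the one that is actually correct.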
I am trying to write to a file and display what I wrote from another process. Here is the code I came up with:
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <fcntl.h>

void readLine (int fd, char *str) {
    int n;
    do {
        n = read (fd, str, 1);
    } while (*str++ != '\0');
}

int main(int argc, char ** argv){
    int fd = open("sharedFile", O_CREAT|O_RDWR|O_TRUNC, 0600);
    if (fork() == 0) {
        char buf[1000];
        while (1) {
            readLine(fd, buf);
            printf("%s\n", buf);
        }
    } else {
        while (1) {
            sleep(1);
            write(fd, "abcd", strlen("abcd")+1);
        }
    }
}
The output I want (with each line appearing one second after the previous):
abcd
abcd
abcd
....
Unfortunately this code doesn't work; it seems that the child process (the reader of the file "sharedFile") reads junk from the file, because somehow it reads values even when the file is empty.
When I try to debug the code, the readLine function never reads the written file correctly; it always reads 0 bytes.
Can someone help?
First of all, when a file descriptor becomes shared after forking, both the parent and child are pointing to the same open file description, which means in particular that they share the same file position. This is explained in the fork() man page.
So whenever the parent writes, the position is updated to the end of the file, and thus the child is always attempting to read at the end of the file, where there's no data. That's why read() returns 0, just as normal when you hit the end of a file.
(When this happens, you should not attempt to do anything with the data in the buffer. It's not that you're "reading junk", it's that you're not reading anything but are then pretending that whatever junk was in the buffer is what you just read. In particular your code utterly disregards the return value from read(), which is how you're supposed to tell what you actually read.)
If you want the child to have an independent file position, then the child needs to open() the file separately for itself and get a new fd pointing to a new file description.
But still, when the child has read all the data that's currently in the file, read() will again return 0; it won't wait around for the parent to write some more. The fact that some other process has a file open for writing doesn't affect the semantics of read() on a regular file.
So what you'll need to do instead is that when read() returns 0, you manually sleep for a while and then try again. When there's more data in the file, read() will return a positive number, and you can then process the data you read. Or, there are more elegant but more complicated approaches using system-specific APIs like Linux's inotify, which can sleep until a file's contents change. You may be familiar with tail -f, which uses some combination of these approaches on different systems.
Another dangerous bug is that if someone else writes text to the file that doesn't contain a null byte where expected, your child will read more data than the buffer can fit, thus overrunning it. This can be an exploitable security vulnerability.
Here is a version of the code that fixes these bugs and works for me:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>

void readLine (int fd, char *str, size_t max) {
    size_t pos = 0;
    while (pos < max) {
        ssize_t n = read(fd, str + pos, 1);
        if (n == 0) {
            sleep(1);
        } else if (n == 1) {
            if (str[pos] == '\0') {
                return;
            }
            pos++;
        } else {
            perror("read() failure");
            exit(2);
        }
    }
    fprintf(stderr, "Didn't receive null terminator in time\n");
    exit(2);
}

int main(int argc, char ** argv){
    int fd = open("sharedFile", O_CREAT|O_RDWR|O_TRUNC, 0600);
    if (fd < 0) {
        perror("parent opening sharedFile");
        exit(2);
    }
    pid_t pid = fork();
    if (pid == 0){
        int newfd = open("sharedFile", O_RDONLY);
        if (newfd < 0) {
            perror("child opening sharedFile");
            exit(2);
        }
        char buf[1000];
        while (1) {
            readLine(newfd, buf, 1000);
            printf("%s\n", buf);
        }
    } else if (pid > 0) {
        while (1){
            sleep(1);
            write(fd, "abcd", strlen("abcd")+1);
        }
    } else {
        perror("fork");
        exit(2);
    }
    return 0;
}
I am developing a simple shell program (a command-line interpreter), and I want to read input from a file line by line, so I used the getline() function. At first the program works correctly; however, when it reaches the end of the file, instead of terminating, it starts reading the file from the start again and runs infinitely.
Here is the code in main() that is related to getline():
int main(int argc, char *argv[]){
    int const IN_SIZE = 255;
    char *input = NULL;
    size_t len = IN_SIZE;
    // get file address
    fileAdr = argv[2];
    // open file
    srcFile = fopen(fileAdr, "r");
    if (srcFile == NULL) {
        printf("No such file!\n");
        exit(-1);
    }
    while (getline(&input, &len, srcFile) != -1) {
        strtok(input, "\n");
        printf("%s\n", input);
        // some code that parses input, firstArgs == input
        execSimpleCmd(firstArgs);
    }
    fclose(srcFile);
}
I am using fork() in my program and most probably it causes this problem.
void execSimpleCmd(char **cmdAndArgs) {
    pid_t pid = fork();
    if (pid < 0) {
        // error
        fprintf(stderr, "Fork Failed");
        exit(-1);
    } else if (pid == 0) {
        // child process
        if (execvp(cmdAndArgs[0], cmdAndArgs) < 0) {
            printf("There is no such command!\n");
        }
        exit(0);
    } else {
        // parent process
        wait(NULL);
        return;
    }
}
In addition, sometimes the program reads and prints a combination of multiple lines. For example, with an input file like this:
ping
ww
ls
ls -l
pwd
it prints something like pwdg, pwdww, etc. How can I fix this?
It appears that closing a FILE in some cases seeks the underlying file descriptor back to the position where the application actually read to, effectively undoing the effect of the read buffering. This matters, since the OS level file descriptors of the parent and the child point to the same file description, and the same file offset in particular.
The POSIX description of fclose() has this phrase:
[CX] [Option Start] If the file is not already at EOF, and the file is one capable of seeking, the file offset of the underlying open file description shall be set to the file position of the stream if the stream is the active handle to the underlying file description.
(Where CX means an extension to the ISO C standard, and exit() of course runs fclose() on all streams.)
I can reproduce the odd behavior with this program (on Debian 9.8):
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(int argc, char *argv[]){
    FILE *f;
    if ((f = fopen("testfile", "r")) == NULL) {
        perror("fopen");
        exit(1);
    }
    int right = 0;
    if (argc > 1)
        right = 1;

    char *line = NULL;
    size_t len = 0;

    // first line
    getline(&line, &len, f);
    printf("%s", line);

    pid_t p = fork();
    if (p == -1) {
        perror("fork");
    } else if (p == 0) {
        if (right)
            _exit(0); // exit the child
        else
            exit(0);  // wrong way to exit
    } else {
        wait(NULL);   // parent
    }

    // rest of the lines
    while (getline(&line, &len, f) > 0) {
        printf("%s", line);
    }
    fclose(f);
}
Then:
$ printf 'a\nb\nc\n' > testfile
$ gcc -Wall -o getline getline.c
$ ./getline
a
b
c
b
c
Running it with strace -f ./getline clearly shows the child seeking the file descriptor back:
clone(child_stack=NULL, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, child_tidptr=0x7f63794e0710) = 25117
strace: Process 25117 attached
[pid 25116] wait4(-1, <unfinished ...>
[pid 25117] lseek(3, -4, SEEK_CUR) = 2
[pid 25117] exit_group(1) = ?
(I didn't see the seek back with code that didn't involve forking, but I don't know why.)
So, what happens is that the C library on the main program reads a block of data from the file, and the application prints the first line. After the fork, the child exits, and seeks the fd back to where the application level file pointer is. Then the parent continues, processes the rest of the read buffer, and when it's finished, it continues reading from the file. Because the file descriptor was seeked back, the lines starting from the second are again available.
In your case, the repeated fork() on every iteration seems to result in an infinite loop.
Using _exit() instead of exit() in the child fixes the problem in this case, since _exit() only exits the process, it doesn't do any housekeeping with the stdio buffers.
With _exit(), any output buffers are also not flushed, so you'll need to call fflush() manually on stdout and any other files you're writing to.
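Applied to the question's execSimpleCmd(), the change is confined to the child branch; a sketch (with _exit() declared in <unistd.h>):

} else if (pid == 0) {
    // child process
    if (execvp(cmdAndArgs[0], cmdAndArgs) < 0) {
        printf("There is no such command!\n");
    }
    fflush(stdout); // _exit() will not flush stdio buffers for us
    _exit(0);       // skip exit handlers and stdio cleanup, leaving srcFile's offset alone
}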
However, if you did this the other way around, with the child reading and buffering more than it processes, then it would be useful for the child to seek the fd back, so that the parent could continue from where the child actually left off.
Another solution would be not to mix stdio with fork().
I am trying to create a separate child process for each letter that needs to be counted in a file. I have the file being read in a parent process, but the output is all zeros. I don't understand what I am doing wrong. I need to use child processes for each of the letters, but I am not exactly sure how to create separate processes for each letter. Please help! Here is my code:
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <string.h>
#include <sys/types.h>
#include <ctype.h>
#include <sys/wait.h>

int main(int argc, char **argv){
    char characters[26] = { "abcdefghijklmnopqrstuvwxyz" };
    int counter[26] = { 0 };
    int n = 26;
    int c;
    char ch;
    if (argc < 2)
        return 1;
    pid_t pids[26];
    for (int i = 0; i < n; i++){
        pids[i] = fork();
        if (pids[i] < 0) {
            printf("Error");
            exit(1);
        } else if (pids[i] == 0) {
            while (c = fgetc(file) != EOF){
                if (c == characters[i])
                    counter[i]++;
            }
            exit(0);
        } else {
            FILE *file = fopen(argv[1], "r");
            if(file == NULL)
                printf("File not found\n");
            while (c = fgetc(file) != EOF);
            fclose(file);
            wait(NULL);
        }
    }
    for (int i = 0; i < n; ++i){
        printf("%c: %i\n", characters[i], counter[i]);
    }
    return 0;
}
The problem with forking when the parent has opened a file for reading is that
although all children inherit copies of open file descriptors, they all
share the same file description.
man fork
The child process is an exact duplicate of the parent process except for the following points:
[...]
The child inherits copies of the parent's set of open file descriptors. Each file descriptor in the child refers to the same open
file description (see open(2)) as the corresponding file descriptor in the parent. This means that the two file descriptors share
open file status flags, file offset, and signal-driven I/O attributes (see the description of F_SETOWN and F_SETSIG in fcntl(2)).
You can do such a program, but you would have to synchronize the children with each other, because every time a child does fgetc(file), the file offset of the shared description advances for all children. The synchronization would have to be written so that all children wait for the others to stop reading, do a rewind, and only then read. In that case having all these children is no gain at all.
For more information about that, see this excellent answer from this
question: Can anyone explain a simple description regarding 'file descriptor' after fork()?
Another problem with your code is this:
printf("%c: %i\n", characters[i], counter[i]);
fork duplicates the process and they both run in separate memory spaces.
The children's counter is a copy of the parent's counter, but a modification
of counter in a child process will only affect the counter for that process,
the parent's counter is not affected by that. So in this case you are always
printing 0, because the parent never changed counter.
Also, even if the modification of a child's counter would somehow propagate to
the parent, the parent should wait for the child process to make the
modification before accessing the variable. Again synchronization would be
needed for that.
For the parent to benefit from the work of the children, it must communicate with the children. One way to do it is by creating a pipe for each of the children. The parent closes the writing end of the pipes, the children close the reading end of their pipe. When a child has done its work, it writes the result to the writing end of its pipe, back to the parent, and exits. The parent must then wait for every child and read from the reading end of each pipe.
This program does exactly that:
#include <stdio.h>
#include <unistd.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <string.h>
#include <errno.h>

int main(int argc, char **argv)
{
    char characters[26] = "abcdefghijklmnopqrstuvwxyz";
    if(argc != 2)
    {
        fprintf(stderr, "usage: %s file\n", argv[0]);
        return 1;
    }
    size_t i;
    int pipes[26][2];
    // creating the pipes for all children
    for(i = 0; i < 26; ++i)
    {
        if(pipe(pipes[i]) < 0)
        {
            perror("unable to create a pipe");
            return 1;
        }
    }
    pid_t pids[26];
    memset(pids, -1, sizeof pids);
    for(i = 0; i < 26; ++i)
    {
        pids[i] = fork();
        if(pids[i] < 0)
        {
            fprintf(stderr, "Unable to fork for child %lu: %s\n", i, strerror(errno));
            continue;
        }
        if(pids[i] == 0)
        {
            // CHILD process
            // closing reading end of pipe
            close(pipes[i][0]);
            FILE *fp = fopen(argv[1], "r");
            if(fp == NULL)
            {
                close(pipes[i][1]);
                exit(1);
            }
            int n = 0, c;
            while((c = getc(fp)) != EOF)
            {
                if(c == characters[i])
                    n++;
            }
            // sending answer back to parent through the pipe
            write(pipes[i][1], &n, sizeof n);
            fclose(fp);
            close(pipes[i][1]);
            exit(0);
        }
        // PARENT process
        // closing writing end of pipe
        close(pipes[i][1]);
    }
    printf("Frequency of characters for %s\n", argv[1]);
    for(i = 0; i < 26; ++i)
    {
        if(pids[i] < 0)
        {
            fprintf(stderr, "%c: could not create child worker\n", (char) i + 'a');
            close(pipes[i][0]);
            continue;
        }
        int status;
        waitpid(pids[i], &status, 0);
        if(WIFEXITED(status) && WEXITSTATUS(status) == 0)
        {
            // child ended normally and wrote result
            int cnt;
            read(pipes[i][0], &cnt, sizeof cnt);
            printf("%c: %d\n", (char) i + 'a', cnt);
        } else {
            printf("%c: no answer from child\n", (char) i + 'a');
        }
        close(pipes[i][0]);
    }
    return 0;
}
The parent creates 26 pipes, one for each child. Then it creates an array for the pids and initializes them to -1 (used later for error checking). It then enters the loop, creates a new child, and closes the writing end of the parent's pipe for the i-th child. Afterwards it goes into another loop and checks whether a child process was created for every character. If that's the case, it waits for that child to exit and checks its exit status. If and only if the child exited normally (with an exit status of 0), it reads from the reading end of the pipe and prints the result; otherwise it prints an error message. Then it closes the reading end of the pipe and exits.
Meanwhile every child closes its reading end of the pipe and opens the file for reading. By doing this, the children don't share a file description and can read the contents of the file independently of each other, each calculating the frequency of the letter assigned to it. If something goes wrong when opening the file, the child closes the writing end of the pipe and exits with a status of 1, signalling to the parent that something went wrong and that it won't send any result through the pipe. If everything goes well, the child writes the result to the writing end of the pipe and exits with an exit status of 0.
The output of this program, run on its own source file, is:
$ ./counter counter.c
Frequency of characters for counter.c
a: 44
b: 5
c: 56
d: 39
e: 90
f: 40
g: 17
h: 26
i: 113
j: 1
k: 5
l: 35
m: 6
n: 68
o: 45
p: 59
q: 2
r: 78
s: 71
t: 65
u: 25
v: 5
w: 10
x: 3
y: 6
z: 5
I want the parent process to take the arguments to main() and send the characters in them one at a time to the child process through a pipe, starting with argv[1] and continuing through the rest of the arguments (one call to write for each character).
I want the child process to count the characters sent to it by the parent process and print out the number of characters it received from the parent. The child process should not use the arguments to main() in any way whatsoever.
What am I doing wrong? Do I need to use exec()?
Output that isn't correct:
~ $ gc a03
gcc -Wall -g a03.c -o a03
~ $ ./a03 abcd ef ghi
child: counted 12 characters
~ $
Here is the program:
#include <sys/wait.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <string.h>

int main(int argc, char *argv[])
{
    int length = 0;
    int i, count;
    int fdest[2]; // for pipe
    pid_t pid; // process IDs
    char buffer[BUFSIZ];

    if (pipe(fdest) < 0) /* attempt to create pipe */
        printf("pipe error");

    if ((pid = fork()) < 0) /* attempt to create child / parent process */
    {
        printf("fork error");
    }
    /* parent process */
    else if (pid > 0) {
        close(fdest[0]);
        for(i = 1; i < argc; i++) /* write to pipe */
        {
            write(fdest[1], argv[i], strlen(argv[1]));
        }
        wait(0);
    } else {
        /* child Process */
        close(fdest[1]);
        for(i = 0; i < argc; i++)
        {
            length += strlen(argv[i]); /* get length of arguments */
        }
        count = read(fdest[0], buffer, length);
        printf("\nchild: counted %d characters\n", count);
    }
    exit(0);
}
You said that "the child process should not use the arguments to main() in any way whatsoever". However, I see that your child process is using argc. Doesn't this defeat your restriction?
You also say that you want "one call to write for each character". Your current implementation uses one call to write for each argument, not each character. Was this a typo? If not, you will want to use something more like this:
int a, c;
char nul = '\0', endl = '\n';
for (a = 1; a < argc; ++a) {
    for (c = 0; c < strlen(argv[a]); ++c) {
        write(fdest[1], &argv[a][c], 1);
    }
    write(fdest[1], &nul, 1);
}
write(fdest[1], &endl, 1);
This will write one character at a time, with each argument as a NULL-terminated string and a newline character at the end. The newline is only there to serve as a marker to indicate that there is no more data to send (and is safe to use since you won't be passing in a newline in a CLI argument).
The child process will just need to be a loop that reads incoming bytes one by one and increments a counter if the byte is not '\0' or '\n'. When it reads the newline character, it breaks out of the input processing loop and reports the value of the counter.
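A rough sketch of that counting loop on the child side (this is my own illustration, assuming fdest[0] is the read end of the pipe as in the question's code):

int count = 0;
char byte;
while (read(fdest[0], &byte, 1) == 1) {
    if (byte == '\n')
        break;     /* end-of-data marker sent by the parent */
    if (byte != '\0')
        count++;   /* don't count the NUL separators */
}
printf("\nchild: counted %d characters\n", count);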
You have an error here:
write(fdest[1], argv[i], strlen(argv[1]));
You should use strlen(argv[i]) instead; otherwise you're telling write() to read past the end of argv[i] and invoking undefined behavior.
Note that you're only calling read() once. By the time you call read(), perhaps only one of the argv[]s has been written by the parent. Or two. Or any number of them.
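One way to address that (my suggestion, not something the question's code already does): have the parent close(fdest[1]) after its write loop, and have the child keep calling read() until it returns 0 because the pipe has been closed, summing up whatever each call returns:

int total = 0;
ssize_t n;
while ((n = read(fdest[0], buffer, sizeof buffer)) > 0)
    total += n; /* accumulate however many bytes this read() returned */
printf("\nchild: counted %d characters\n", total);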
The problem is here
write(fdest[1], argv[i], strlen(argv[1]));
Notice that this is strlen of argv[1]; it should be argv[i]. You are actually reading past the end of argv[2] and argv[3] in this loop.
You are effectively writing strlen("abcd") * 3 characters, which is 12 characters.
Here:
for(i = 1; i < argc; i++) /* write to pipe */
{
    write(fdest[1], argv[i], strlen(argv[1]));
}
strlen(argv[1]) should in fact be strlen(argv[i]).