piping multiple statements together only performs first command - c

Edit: half of the problem has been fixed and the question has been edited to reflect that fix. I am still only seeing it perform the first command in a series of pipes.
I am stuck on the very last piece of the shell I am writing from scratch. I want to be able to parse command lines like "finger | tail -n +2 | cut -c1-8 | uniq | sort", and I think that I am close. What I have right now, when given that line as an array of strings, only executes the "finger" part.
Here is the part of code that isn't working as it needs to be, I've narrowed it down to just this for loop and the part after:
int mypipe[2];
//start out with stdin
oldinfd = 0;
//run the beginning commands; connect the pipes' in/out ends to each other
for (int i = 0; i < max; i++) {
    pipe(mypipe);
    processline(commands[i], oldinfd, mypipe[1]); //doesn't wait on children
    close(mypipe[1]);
    oldinfd = mypipe[0];
}
//run the final command, put its output on stdout
processline(commands[max], oldinfd, 1); //waits on children
...
This loop runs the commands, and the descriptors should end up chained like this:
stdin -> [0]pipe[write] -> [read]pipe[write] -> ... -> [read]pipe[1] -> stdout
My processline function takes a line ('finger') and execs it. For instance, processline("finger", 0, 1) runs the finger command without flaws. For the purposes of this question assume that processline works perfectly; what I'm having trouble with is how I'm using the pipes and their write and read ends.
Also with print statements, I have confirmed that the correct words are being sent to the correct parts. (the for loop receives "finger", "tail -n +2", "cut -c1-8", and "uniq", while the commands[max] that the last line receives is "sort").

Why doesn't stdbuf line buffer the output of some simple c programs

I'm trying to use stdbuf to line buffer the output of a program but I can't seem to make it work as I would expect. Using this code:
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int i = 0;
    for (i = 0; i < 10; i++)
    {
        printf("This is part one");
        fflush(stdout);
        sleep(1);
        printf(" and this is part two\n");
    }
    return 0;
}
I see This is part one, a one second wait then and this is part two\nThis is part one.
I expected that running it as
stdbuf --output=L ./test.out
would cause the output to be a 1 second delay and then This is part one and this is part two\n repeating at one second intervals. Instead I see the same output as in the case when I don't use stdbuf.
Am I using stdbuf incorrectly, or does the call to fflush count as "adjusting" the buffering as described in the stdbuf man page?
If I can't use stdbuf to line buffer in this way is there another command line tool that makes it possible?
Here are a couple of options that work for me, given the sample code, and run interactively (the output was to a pseudo-TTY):
./program | grep ^
./program | while IFS= read -r line; do printf "%s\n" "$line"; done
In a couple of quick tests, both output a complete line at a time. If you need to pipe it further, grep's --line-buffered option should be useful.

What Is the Use Of ( \r ) carriage return in c program

Can you explain this code to me?
What is the use of \r in this program?
#include <stdio.h>

int main(void)
{
    printf("This Is \r Amarendra Deo");
    return 0;
}
The \r has no inherent meaning in the C language itself, but terminals (a.k.a. the console) can react to this character in different ways. The most common behavior is that a carriage return sets the cursor to the start of the current line. So when you execute this line, you'll get
Amarendra Deo
Because printf prints This Is, the \r sets the cursor back to the beginning of the line, and Amarendra Deo overwrites whatever has been printed on that line. And since Amarendra Deo is longer than This Is, all you see is
Amarendra Deo
This is for example a very useful trick for when you want to print something
repeatedly on the same line, for example a status message:
for (size_t i = 0; i < 5; ++i)
{
    printf("Processing task %zu...\r", i + 1);
    fflush(stdout);
    execute_task(i); // can take several seconds to finish
}
In that case you'll see the Processing task ... text updating on the same line, which is a nice visual cue for the user. Try executing it yourself (replace execute_task(i) with sleep(1) or something similar to add a delay).

Segmentation fault when I pipe stdout to my program

I don't know if I have to say it again, but English is not my native language; I've seen that you are able to correct my message, so that's fine, but I'd like to apologize once more.
Here is the deal: I have a program that converts a graph passed in its arguments to DIMACS format, which I store in a .cnf file. (We use it to solve the graph with a SAT solver.)
It works perfectly when I run it by myself, so I'd like to have another program, graph_generator, whose output I pipe into myprogram to get random graphs.
I've made my graph_generator program, and it correctly prints graphs in the format I want, so I tried ./graph_generator | ./myprogram, but I instantly get a segmentation fault and I can't see why. graph_generator outputs exactly what is expected, and when I copy-paste its output by hand, myprogram correctly generates my .cnf file. I also don't see how to use a debugger here, given that I'm piping the input.
I don't know where the problem could come from. I have a theory, but it's a bit lame: once graph_generator's stdout is piped in, myprogram might treat the spaces as argument separators. Could anyone help me, please?
int main(int argc, char *argv[]) {
    graph *mygraph;
    int taille, nbEdge;
    int i;
    FILE *resultat;
    printf("mark 1");
    taille = atoi(argv[1]);
    nbEdge = atoi(argv[2]);
    printf("mark 2");
    mygraph = build_empty_graph(taille);
    for (i = 3; i < argc; i += 2)
        add_edge(atoi(argv[i]), atoi(argv[i+1]), mygraph);
    resultat = fopen("resultat.cnf", "w");
    write_result_comments(resultat);
    write_result_header(resultat, mygraph);
    write_first_stack(resultat, mygraph);
    write_second_stack(resultat, mygraph);
    fclose(resultat);
    return 0;
}
Here is the main of myprogram; when I use it with the pipe, the message "mark 1" doesn't even appear.
It is segfaulting because you don't check argc and are passing no arguments: argv[1] is NULL, and atoi dereferences it. ("mark 1" never appears because stdout is buffered and the buffer is lost when the process crashes before flushing; a trailing \n or fflush would show it.)
Please note that stdin is a separate stream from the arguments in argv.
The best way to fix this is to build it up hierarchically:
tokenizer: read stdin in a loop with getchar until you get to whitespace (space, tab or newline).
parser: atoi is fine, since you only pass ints.
state machine: the first two values go to taille and nbEdge; consume the rest in pairs (x, y) for the add_edge calls. Maybe use a switch statement and a state variable in a loop.
program: the rest of your program pretty much as is.

awk read data from file with getline as it's being written

I have a script that is running two commands. The first command is writing data to a temp file. The second command is piping to awk while the first command is running in the background. awk, in the second command, needs to read the data from the temp file, but it's parsing its own data faster than data is getting written to the temp file.
Here's an example:
#!/bin/bash
command1 > /tmp/data.txt &
# command1 takes several minutes to run, so start command2 while it runs in the background
command2 | awk '
/SEARCH/ {
    # Matched input, so pull the next line from the temp file
    getline temp_line < "/tmp/data.txt"
}
'
This works, unless awk parses the data from command2 so fast that command1 can't keep up with it. I.e. awk is getting an EOF from /tmp/data.txt before command1 has finished writing to it.
I've also tried wrapping some checks around getline, like:
while ((getline temp_line < "/tmp/data.txt") < 0) {
    system("sleep 1") # let command1 write more to the temp file
}
# Keep processing now that we have read the next line
But I think once it hits an EOF in the temp file, it stops trying to read from it. Or something like that.
The overall script works as long as command1 writes to the temp file faster than awk tries to read from it. If I put a sleep 10 command between the two commands, then the temp file builds enough buffer and the script produces the output I need. But I may be parsing files much larger than what I've tested on, or the commands might run at different speeds on different systems, etc, so I'd like a safety mechanism to wait for the file until data has been written to it.
Any ideas how I can do this?
I think you'd need to close the file between iterations and then read it from the start again, back to where you had read to before; something like this (untested):
sleepTime = 0
while ((getline temp_line < "/tmp/data.txt") <= 0) {
    close("/tmp/data.txt")
    system("sleep " ++sleepTime) # let command1 write more to the temp file
    numLines = 0
    while (++numLines < prevLines) {
        if ((getline temp_line < "/tmp/data.txt") <= 0) {
            print "Aaargghhh, my file is gone!" | "cat>&2"
            exit
        }
    }
}
++prevLines
Note that I built in a variable sleepTime to make your command sleep longer each time through the loop, so if your tmp file is taking a long time to fill up, your second command waits longer for it on each iteration. Use that or not as you like.
Using getline in nested loops with system() commands all seems a tad clumsy and error-prone though - I can't help thinking there's probably a better approach but I don't know what off the top of my head.
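To see the retry idea in action, here's a tiny self-contained demo with made-up stand-ins for command1 (a slow writer loop) and command2 (three SEARCH lines); the one-second sleeps and the /tmp/data.txt path are just for the demo:

```shell
#!/bin/sh
: > /tmp/data.txt            # start from an empty file

# slow writer standing in for command1
(for i in 1 2 3; do echo "line$i"; sleep 1; done) > /tmp/data.txt &

# fast reader standing in for command2 | awk
out=$(printf 'SEARCH\nSEARCH\nSEARCH\n' | awk '
/SEARCH/ {
    # retry until the writer has produced the next line
    while ((getline temp_line < "/tmp/data.txt") <= 0) {
        close("/tmp/data.txt")
        system("sleep 1")
        # after reopening, skip the lines we already consumed
        for (n = 0; n < prev; n++)
            if ((getline junk < "/tmp/data.txt") <= 0)
                break
    }
    prev++
    print temp_line
}')
wait
echo "$out"
```

Each SEARCH line pulls the next line from the temp file, sleeping and rereading from the top whenever the writer hasn't caught up yet, so the demo prints line1 through line3 even though the reader is faster than the writer.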

C Minishell Command Expansion Printing Gibberish

I'm writing a unix minishell in C, and am at the point where I'm adding command expansion. What I mean by this is that I can nest commands in other commands, for example:
$> echo hello $(echo world! ... $(echo and stuff))
hello world! ... and stuff
I think I have it mostly working; however, it isn't marking the end of the expanded string correctly. For example, if I do:
$> echo a $(echo b $(echo c))
a b c
$> echo d $(echo e)
d e c
See, it prints the c even though I didn't ask it to. Here is my code:
msh.c - http://pastebin.com/sd6DZYwB
expand.c - http://pastebin.com/uLqvFGPw
I have more code, but there's a lot of it, and these are the parts that I'm having trouble with at the moment. I'll describe the basic way I'm doing this.
Main is in msh.c; it gets a line of input from either the command line or a shell file, and then calls processline(char *line, int outFD, int waitFlag), where line is the line we just got, outFD is the file descriptor of the output file, and waitFlag tells us whether or not we should wait if we fork. When we call this from main, we do it like this:
processline (buffer, 1, 1);
In processline, we allocate a new line:
char expanded_line[EXPANDEDLEN];
We then call expand, in expand.c:
expand(line, expanded_line, EXPANDEDLEN);
In expand, we copy the characters literally from line to expanded_line until we find a $(, which then calls:
static int expCmdOutput(char *orig, char *new, int *oldl_ind, int *newl_ind)
orig is line, and new is expanded_line; oldl_ind and newl_ind are the current positions in the line and the expanded line, respectively. We then create a pipe and recursively call processline, passing it the nested command (for example, if we had "echo a $(echo b)", we would pass processline "echo b").
This is where I get confused: each time expand is called, does it allocate a new chunk of memory EXPANDEDLEN long? If so, that is bad, because I'll run out of stack space very quickly (in the case of hugely nested command-line input). In expand I insert a null character at the end of the expanded string, so why is it printing past it?
If you guys need any more code, or explanations, just ask. Secondly, I put the code in pastebin because there's a ton of it, and in my experience people don't like it when I fill up several pages with code.
Your problem lies in expCmdOutput. As you already noticed, you do not get NUL-terminated strings when reading the output of your child process with read. What you want to do is terminate the string manually, by adding something like
buf[bytes_read] = '\0';
after your call to read on line 29 (expand.c). Since you need space for the NUL, you can then only read up to BUF_SIZE - 1 bytes, of course.
You should probably rethink the whole loop you do afterwards, though:
/* READ OUTPUT OF COMMAND FROM READ END OF PIPE, THEN CLOSE READ END */
bytes_read = read(fd[0], buf, BUF_SIZE);
while (bytes_read > 0)
{
    bytes_read = read(fd[0], buf, BUF_SIZE);
    if (bytes_read == -1) perror("read");
}
close(fd[0]);
close(fd[0]);
If the output of your command is longer than BUF_SIZE, you simply read into buf again, overwriting the output you just read. What you really want here is to allocate memory and append to the end using strcat (or, for efficiency, by keeping a pointer to the end of your string).
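A sketch of that accumulate-and-append loop, written as a hypothetical read_all helper (it grows the buffer with realloc rather than strcat, which amounts to the same idea):

```c
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* Read everything from fd into one growing, NUL-terminated buffer.
 * Returns a malloc'd string the caller must free, or NULL on error. */
static char *read_all(int fd)
{
    size_t cap = 64, len = 0;
    char *buf = malloc(cap);
    if (!buf) return NULL;
    ssize_t n;
    while ((n = read(fd, buf + len, cap - len - 1)) > 0) {
        len += (size_t)n;
        if (cap - len - 1 == 0) {        /* buffer full: double it */
            char *tmp = realloc(buf, cap *= 2);
            if (!tmp) { free(buf); return NULL; }
            buf = tmp;
        }
    }
    buf[len] = '\0';                     /* always NUL-terminate */
    return buf;
}
```

In expCmdOutput this would replace the read loop: call read_all(fd[0]), copy the result into expanded_line, then free it.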