I'm writing a shell for a school assignment!
I have figured out how to implement redirections like this:
ls | cat | wc
My question is: how can I combine piping with file redirection, like:
ls | cat < input1 < input2 > output | wc
The only way I see to make this work is to create subprocesses with intermediate pipes (tees). Am I right, or is there an easier way?
Thank you in advance, and sorry for my bad English!
I've been stuck on this for a while now: is it possible to redirect stdout to two different places? I am writing my own shell for practice, and it can currently run commands like ps aux | wc -l or ps aux | wc -l > output.file. However, when I try to run ps aux > file.out | wc -l, the second command does not receive the input from the first.
In the last example, the first command is run in a child process whose output goes to one end of the pipe. The logic is similar to what follows:
close(STDOUT_FILENO);             // optional: dup2 would close it anyway
dup2(fd[1], STDOUT_FILENO);       // stdout -> pipe write end
//If a file output is also found
filewriter = open(...);
dup2(filewriter, STDOUT_FILENO);  // stdout -> file, replacing the pipe
//Execute the command
Normal UNIX shells don't work with that syntax either. UNIX (and some other OSes) provides the tee[1] command to send output to a file and also to stdout.
Example:
ps aux | tee file.out | wc -l
[1] See http://en.wikipedia.org/wiki/Tee_(command)
The tee command does just that in UNIX. To see how to do it in straight C, why not look at tee's source code?
I want to count the number of files in a directory in a shell script.
This command works well:
let number_of_files=`ls $direc -l| wc -l`
My problem is that when I use this command with nohup, it doesn't work well.
The same happens when trying to get a specific file name:
file_name=` ls -1 $direc | head -$file_number | tail -1`
Do you know any other option to do it?
I know that in C there is the function:
num_of_files=scandir(directory,&namelist,NULL,NULL);
I also include the full command-line:
nohup sh script_name.sh > log.txt &
Do you know any other way in shell-script that works well with nohup?
Thanks.
Try something like this,
NUMBER_OF_FILES=$(find . -maxdepth 1 -type f | wc -l)
echo $NUMBER_OF_FILES
That is: find, starting from the current directory, to a max depth of 1 (i.e. the current directory only), everything that is of type "file", and then count the number of lines. Finally, assign the result to NUMBER_OF_FILES.
I'm trying to implement a shell, and I've got everything working perfectly fine, with the exception of multiple pipes.
e.g. ls -l -a -F | tr [a-z] [A-Z] | sort. How can I approach this? I know I have to create multiple pipes to solve it, but how exactly do I do that?
Can someone guide me in the right direction?
I currently manage only one pipe, but I'm not too sure how to approach this when I have more than two pipes.
I was wondering if anyone could provide me with some pseudo-code on how to approach this problem.
Just parse the string in order; when you get to a pipe symbol, fork off the last command and store its stdin and stdout. If there was a prior command, pump that command's stdout into the stdin of the new command. Then you loop.
Additional note:
The only difference between
A) thing1 > thing2
and
B) thing1 | thing2
is this: in A) you are running thing1 (with fork) and sending its output to a file called thing2.
In B) you are running both thing1 and thing2 with fork, and connecting the output of thing1 to the input of thing2.
So,
C) thing1 | thing2 | thing3
is just the same: you need to run (fork) thing1, thing2, and thing3, and connect the output of thing1 to the input of thing2, and the output of thing2 to the input of thing3.
A pipe works just like > except that you run the "target" with fork.
If you've indeed got everything working but multiple pipes, you can reduce them to single pipes using grouping or subshells.
{ ls -l -a -F | tr a-z A-Z; } | sort
(ls -l -a -F | tr a-z A-Z) | sort
I want to implement multi-pipes in C so I can do something like this (where ||| means duplicate the stdin to N piped commands):
cat /tmp/test.log ||| wc -l ||| grep test1 ||| grep test2 | grep test3
This will give me the number of lines in the file, the lines that contain the string 'test1', and the lines that contain both 'test2' and 'test3'.
In other words this would have the effect of these 3 regular pipelines:
cat /tmp/test.log | wc -l                   --> stdout
cat /tmp/test.log | grep test1              --> stdout
cat /tmp/test.log | grep test2 | grep test3 --> stdout
Has someone already implemented something like this? I didn't find anything...
NOTE: I know it can be done with scripting languages or with bash multiple file descriptors, but I am searching C code to do it.
Thanks!
Maybe you should start off with the tee command and examine its code.
Because it is impossible in C to have more than one process (or thread) read the same file descriptor without draining the data, any solution has to copy the data it reads to each consumer, either into one pipe per command (the way tee copies to stdout plus a file) or into a temporary file that each command then reads.
Penz says in that thread that the problem could be solved by the multios and coproc features.
However, I am unsure about the solution.
I do know that you can use multios as
ls -1 > file | less
but I have never used it in a form that has two inputs.
How can you use these features to have a pipe loop in Zsh?
I am having trouble understanding the question.
Are you trying to do the following:
(ls -1 && file) | less
Where && is used to run multiple commands on a single line (the second only if the first succeeds).
Or are you trying to do the following:
ls -1 | tee file | less
Where tee writes the output both to the file and to standard out.