problem in writing to terminal after using execvp and dup2 syscalls - c

The statement at line 15, printf("This goes to the terminal\n");, is not getting printed anywhere, neither on the terminal nor in the file.
//inputs argc = 3 :- ./executable_file output_file command
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

int main(int argc, char **argv)
{
if(argc < 3)
{
return 0;
}
int stdout_copy = dup(1);
int fd = open(argv[1], O_CREAT | O_RDWR | O_TRUNC, 0644);
if (fd < 0)
{
printf("ERROR\n");
return 0;
}
printf("This goes to the standard output(terminal).\n");
printf("Now the standard output will go to \"%s\" file .\n", argv[1]);
dup2(fd, 1);
printf("This output goes to \"%s\"\n",argv[1]);
close(fd);
execvp(argv[2],argv+2);
dup2(stdout_copy,1);
printf("This goes to the terminal\n");
return 0;
}
Apologies for the previous question:
I'm really sorry, it was my mistake in analysing it.
And special thanks for all answers and hints.

Neither:
execvp(argv[2], argv + 2);
dup2(stdout_copy, 1);
printf("This goes to the terminal\n");
nor:
dup2(stdout_copy, 1);
execvp(argv[2], argv + 2);
printf("This goes to the terminal\n");
...will reach the printf() if the call to execvp() succeeds.
However, both will print to stdout if it fails.
(Unless the command-line arguments are incorrect, dup2() likely has nothing to do with the failure to output to stdout. See the additional content below for how to check this.)
Read all about it in the execvp() man page.
In a nutshell, execvp() replaces the current process image with a new one. If it succeeds, the program you started is no longer what is running in that process; the statements following the call are executed only when execvp() fails.
The following suggestions are not precisely on-topic, but important nonetheless...
If your declaration has the parameter names swapped:
int main(int argv, char **argc)
change it to:
int main(int argc, char **argv) //or int main(int argc, char *argv[]); either is fine.
This will be the foundation of seeing normal behavior. Anything else is very confusing to future maintainers of your code, and to people trying to understand what you are doing here.
These names are easily remembered by keeping in mind that argc holds the count of command line arguments, and argv is the vector used to store them.
Also, beyond counting them, your code does not validate these arguments; given the nature of your program, they should be validated before going on. For example:
//verify the required number of command line arguments was entered
if(argc < 3)//requires an output file and a command
{
printf("Usage: prog.exe output_file command\nInclude both arguments and try again.\nProgram will exit.\n");
return 0;
}
//check if file exists before going on
if( access( argv[1], F_OK ) != -1 )
{
// file exists
} else {
// file doesn't exist
}
//do same for argv[2]
(The second example, for checking whether a file exists in a Linux environment, is from here.)
BTW, knowing the command line arguments that were passed into the program would help to provide a more definitive answer here. Their syntax and content, and whether or not the files they reference exist, determine how the call to execvp() will behave.
Suggestions
It is generally a good idea to look at the return values of functions that have them. But execvp() has unique behavior: if it is successful it does not return, and if it fails it always returns -1. So in this case pay special attention to the value of errno for error indications, all of which are covered in the execvp man page mentioned above.
As mentioned in comments (in two places), it is a good idea to use fflush(stdout) to empty the buffers when intermixing standard I/O and file descriptor I/O, and before using any of the exec*() family of calls; see the sketch after these suggestions.
Take time to read the man pages for the functions and shell commands that are used. It will save time and guide you during debugging sessions.
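Putting those suggestions together, here is a minimal sketch of the program from the question with the buffers flushed and the exec failure reported via errno (the exit code 127 is just the usual convention for a command that could not be run):

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc < 3)
        return 1;

    int fd = open(argv[1], O_CREAT | O_RDWR | O_TRUNC, 0644);
    if (fd < 0) {
        perror(argv[1]);
        return 1;
    }

    printf("This goes to the standard output (terminal).\n");
    fflush(stdout);   /* drain the stdio buffer while fd 1 is still the terminal */

    dup2(fd, 1);
    close(fd);

    printf("This output goes to \"%s\"\n", argv[1]);
    fflush(stdout);   /* drain again, so this line lands in the file before the exec */

    execvp(argv[2], argv + 2);

    /* Reached only if execvp() failed; perror() reports the errno-based reason. */
    perror(argv[2]);
    return 127;
}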

Related

How to write a program for input and output redirection wc < f1.txt > f2.txt in c

I was trying something like this but got stuck and don't know in which direction to proceed. I even tried using fork() and then assigning the tasks separately to the child and parent, but the redirection in that case is not working as intended.
#include<stdio.h>
#include<stdlib.h>
#include<unistd.h>
#include<sys/types.h>
#include<fcntl.h>
int main()
{
execlp("cat","<","f1.txt",">","f2.txt",NULL);
return 0;
}
In Linux and other POSIXy operating systems, standard input corresponds to file descriptor 0 (STDIN_FILENO), standard output to file descriptor 1 (STDOUT_FILENO), and standard error to file descriptor 2 (STDERR_FILENO).
Standard file handles stdin, stdout, and stderr are the standard C abstraction, and in Linux are implemented on top of those file descriptors.
To redirect standard input, output, or error, first you need to get an open file descriptor to whatever you want to redirect from/to. In the case of files, you do this via the open() function, which returns the file descriptor number. Then, you use the dup2() function to duplicate (copy) that to the descriptor you want.
Consider the following example.c:
// SPDX-License-Identifier: CC0-1.0
#define _POSIX_C_SOURCE 200809L
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <fcntl.h>
#include <string.h>
#include <errno.h>
/* Duplicate oldfd to newfd, and close oldfd.
In error cases, tries to close both descriptors.
Returns 0 if success, error code (with errno set) otherwise.
*/
static inline int move_descriptor(int oldfd, int newfd)
{
if (oldfd == -1 || newfd == -1) {
if (oldfd != -1)
close(oldfd);
if (newfd != -1)
close(newfd);
return errno = EBADF;
}
if (oldfd == newfd)
return 0;
if (dup2(oldfd, newfd) == -1) {
const int saved_errno = errno;
close(oldfd);
close(newfd);
return errno = saved_errno;
}
if (close(oldfd) == -1) {
const int saved_errno = errno;
close(newfd);
return errno = saved_errno;
}
return 0;
}
/* Write a message to standard error, keeping errno unchanged.
This is async-signal safe.
Returns 0 if success, error code otherwise.
*/
static inline int wrerr(const char *msg)
{
const char *end = (msg) ? msg + strlen(msg) : msg;
if (end == msg)
return 0;
const int saved_errno = errno;
while (msg < end) {
ssize_t n = write(STDERR_FILENO, msg, (size_t)(end - msg));
if (n > 0) {
msg += n;
} else
if (n != -1) {
errno = saved_errno;
return EIO;
} else
if (errno != EINTR) {
const int retval = errno;
errno = saved_errno;
return retval;
}
}
errno = saved_errno;
return 0;
}
static inline void errormessage(const char *name, const char *cause)
{
wrerr(name);
wrerr(": ");
wrerr(cause);
wrerr(".\n");
}
int main(int argc, char *argv[])
{
int fd;
if (argc < 4 || !strcmp(argv[1], "-h") || !strcmp(argv[1], "--help")) {
const char *arg0 = (argc > 0 && argv && argv[0] && argv[0][0]) ? argv[0] : "(this)";
wrerr("\n");
wrerr("Usage: "); wrerr(arg0); wrerr(" [ -h | --help ]\n");
wrerr(" "); wrerr(arg0); wrerr(" INPUT OUTPUT COMMAND [ ARGUMENTS ... ]\n");
wrerr("\n");
return EXIT_FAILURE;
}
fd = open(argv[1], O_RDONLY | O_NOCTTY);
if (fd == -1) {
errormessage(argv[1], strerror(errno));
return EXIT_FAILURE;
}
if (move_descriptor(fd, STDIN_FILENO)) {
errormessage(argv[1], strerror(errno));
return EXIT_FAILURE;
}
fd = open(argv[2], O_WRONLY | O_CREAT, 0666);
if (fd == -1) {
errormessage(argv[2], strerror(errno));
return EXIT_FAILURE;
}
if (move_descriptor(fd, STDOUT_FILENO)) {
errormessage(argv[2], strerror(errno));
return EXIT_FAILURE;
}
if (strchr(argv[3], '/'))
execv(argv[3], (char *const *)(argv + 3));
else
execvp(argv[3], (char *const *)(argv + 3));
errormessage(argv[3], strerror(errno));
return EXIT_FAILURE;
}
The move_descriptor() function is just a wrapper around dup2() and close(). I included it to show how to do the descriptor moving (copying, then closing the old one) safely, with sufficient error checking.
The wrerr(msg) function is analogous to fputs(msg, stderr), except that it uses the file descriptor interface (write()) directly, bypassing the C stderr stream abstraction completely. It is also async-signal safe*, meaning you can use it inside signal handlers.
*: Technically, one could argue whether strlen() is async-signal safe or not. In Linux using glibc, newlib, or avr-libc, it is.
Many Linux/Unix/POSIXy error messages use the format "filename: Error message." Since the wrerr() function takes only one parameter, I included the errormessage(filename, message) helper function to print such error messages. Splitting commonly used tasks into helper functions like this makes the code easier to read, and easier to maintain too.
The program itself takes at least four command-line arguments. (The first argument, argv[0], is the command itself; the first parameter is argv[1]. For example, if you compiled this and ran it as ./example arg1 arg2 arg3, then argv[0]=="./example", argv[1]=="arg1", argv[2]=="arg2", and argv[3]=="arg3".) In Linux and POSIXy systems, argv[argc] == NULL, so we can use the argv array directly in execv() and execvp() and related functions.
If there are fewer than four command-line arguments (argc < 4), or if argv[1] matches "-h" or "--help", we print the usage and exit.
Otherwise, the first parameter (argv[1]) names the file we redirect input from (and it must exist), and the second parameter (argv[2]) the file we redirect output to (which we'll create if it does not exist yet).
The O_NOCTTY flag may look confusing at first, but I included it, because it is so common when redirecting input from file-like objects. It basically means that even if the pathname refers to a terminal, don't do any terminal and session related magic when opening it: "if it is a terminal, and we don't happen to have a controlling terminal, don't make it our controlling terminal". (It only affects programs run in a new session (via setsid) or by services like cron, udev, et cetera, since programs you normally run from a terminal have that terminal as their controlling terminal. The main thing about terminals and sessions is that if the controlling terminal gets closed, each process having that terminal as their controlling terminal will receive a hangup (SIGHUP) signal.)
When opening the file we redirect output to, we use the O_CREAT flag and add an extra parameter, the file access mode. The leading zero means that 0666 is an octal constant, i.e. base 8, and refers to the decimal value 6·8² + 6·8¹ + 6·8⁰ = 438. It is the standard value that you see most often used. It is modified (by the kernel) by the current umask (whose value you can see in the shell by running umask). It is written in octal because then the third digit from the right specifies the owner (user) rights, the second from the right the group rights, and the rightmost the rights for everyone else; 4 being read access, 2 being write access, and 1 being execute (for files) or pass through/work in (for directories). (Each file has an owner (user) and group in Linux, as they do in all Unix and POSIXy systems.)
Whenever the open() flags include O_CREAT, the additional access mode value must be supplied. It will almost always be 0666, except for some rare cases where you want to use a more restrictive value; but the general rule is to use 0666 and let the user make it more restrictive if they want by modifying their umask: that is what just about all utilities do anyway. For example, with the common umask of 022, open(..., O_CREAT, 0666) creates the file with mode 0644 (rw-r--r--).
The third command line parameter (fourth argument, argv[3]), contains the name or path to the executable we'll run. If it contains a slash (/), we assume it is a pathname reference, and use execv(). If it does not contain a slash, we assume it is a name, and use execvp(), which uses the PATH environment variable to look for an executable with that name.
If the exec succeeds, the process is replaced with the new program; the execution ends at the exec.
If the exec fails for any reason, both execv() and execvp() will set errno. It makes sense to print the command (without any arguments, if there were any), and the error message, and exit with failure in that case.
AFAIK, redirection is a shell feature.
To achieve what you are trying to do, you need to run
sh -c 'cat < f1.txt > f2.txt'
via one of the exec functions. (I am not too familiar with the exec*() family in C.)
You can use bash instead of sh. The -c flag takes a string argument, called the command string, and executes it.
My guess at the exec form would be:
execlp("sh", "sh", "-c", "cat < f1.txt > f2.txt", (char *)NULL);
(The second "sh" is the argv[0] seen by the shell.) Note that the whole command string is passed as a single argument: the exec*() functions hand arguments to the new program verbatim, with no word splitting, so no extra layer of quotes is needed around it.
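Put together, a minimal compilable sketch of that approach (file names taken from the question, error handling kept to a bare minimum) might be:

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* Let the shell perform the redirections; the lines below run only on failure. */
    execlp("sh", "sh", "-c", "cat < f1.txt > f2.txt", (char *)NULL);
    perror("execlp");
    return 1;
}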

Using stdin in C exclusively through a piped in file

I wrote a file parser for a project that parses a file provided on the command line.
However, I would like to allow the user to enter their input via stdin as well, but exclusively through redirection via the command line.
Using a Linux based command prompt, the following commands should yield the same results:
./check infile.txt (Entering filename via command line)
./check < infile.txt
cat infile.txt | ./check
The executable should accept a filename as the first and only command-line argument. If no filename is specified, it should read from standard input.
Edit: I realized how simple it really was, and posted an answer. I will leave this up for anyone else who might need it at some point.
This is dangerously close to "Please write my program for me". Or perhaps it even crossed that line. Still, it's a pretty simple program.
We assume that you have a parser which takes a single FILE* argument and parses that file. (If you wrote a parsing function which takes a const char* filename, then this is by way of explaining why that's a bad idea. Functions should only do one thing, and "open a file and then parse it" is two things. As soon as you write a function which does two unrelated things, you will immediately hit a situation where you really only wanted to do one of them, like just parsing a stream without opening a file.)
So that leaves us with:
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include "myparser.h"
/* Assume that myparser.h includes
* int parseFile(FILE* input);
* which returns non-zero on failure.
*/
int main(int argc, char* argv[]) {
FILE* input = stdin; /* If nothing changes, this is what we parse */
if (argc > 1) {
if (argc > 2) {
/* Too many arguments */
fprintf(stderr, "Usage: %s [FILE]\n", argv[0]);
exit(1);
}
/* The convention is that using `-` as a filename is the same as
* specifying stdin. Just in case it matters, follow the convention.
*/
if (strcmp(argv[1], "-") != 0) {
/* It's not -. Try to open the named file. */
input = fopen(argv[1], "r");
if (input == NULL) {
fprintf(stderr, "Could not open '%s': %s\n", argv[1], strerror(errno));
exit(1);
}
}
}
return parseFile(input);
}
It would probably have been better to have packaged most of the above into a function which takes a filename and returns an open FILE*.
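For instance, a sketch of such a helper might look like this (the name open_input() is only for illustration; it returns stdin for a missing name or "-", and NULL on failure):

#include <errno.h>
#include <stdio.h>
#include <string.h>

static FILE *open_input(const char *name)
{
    /* Convention: no name, or "-", means standard input. */
    if (name == NULL || strcmp(name, "-") == 0)
        return stdin;

    FILE *fp = fopen(name, "r");
    if (fp == NULL)
        fprintf(stderr, "Could not open '%s': %s\n", name, strerror(errno));
    return fp;
}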
I guess my brain is fried because this was a very basic question and I realized it right after I posted it. I will leave it up for others who might need it.
ANSWER:
You can fgets() from stdin just as from any other stream. Rather than using feof(stdin) as the loop condition, check the return value of fgets() itself, which becomes NULL at end of file (or on a read error):
while (fgets(line, sizeof line, stdin) != NULL)
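A minimal self-contained sketch of that loop (the buffer size is arbitrary and the processing step is just a placeholder):

#include <stdio.h>

int main(void)
{
    char line[1024];   /* arbitrary line buffer */

    while (fgets(line, sizeof line, stdin) != NULL) {
        fputs(line, stdout);   /* placeholder: parse the line here */
    }

    if (ferror(stdin)) {
        perror("stdin");
        return 1;
    }
    return 0;
}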

Fail to read command output using popen function

In Linux, I am finding the PID of a process by opening a pipe with the "pidof process_name" command and then reading its output using the fgets function. But it fails to find the PID once in a while. Below is my code for finding the PID of my process.
int FindPidByProcessName(char *pName)
{
int pid = -1;
char line[30] = { 0 };
char buf[64] = { 0 };
sprintf(buf, "pidof %s", pName);
//pipe stream to process
FILE *cmd = popen(buf, "r");
if (NULL != cmd)
{
//get line from pipe stream
fgets(line, 30, cmd);
//close pipe
pclose(cmd); cmd = NULL;
//convert string to unsigned LONG integer
pid = strtoul(line, NULL, 10);
}
return pid;
}
In the output, sometimes pid = 0 comes back even though the process is present in the "ps" command output.
So I tried to find the root cause behind this issue, and I found that the input/output buffering mechanism may be creating the problem in my scenario.
So I tried calling sync() before popen(), and strangely my function started working with 100% accuracy.
But sync() takes too much time (approximately 2 minutes sometimes) to complete its execution, which is not desirable. So I tried fflush(), fsync() and fdatasync(), but none of these work appropriately.
So please can anyone tell me what the exact root cause behind this issue is, and how to solve it appropriately?
OK, the root cause of the error is stored in the errno variable (which, by the way, you do not need to initialize). You can get an informative message using the function
perror("Error: ");
If you use perror, the errno variable is interpreted and you get a descriptive message.
Another way (the right way!) of finding the root cause is compiling your program with the -g flag and running the binary with gdb.
Edit: I strongly suggest using the gdb debugger so that you can see exactly what path your code follows, and explain the strange behaviour you described.
Second Edit: errno stores the code of the last error (return value). Instead of calling the functions the way you do, check the return value, and errno, immediately:
if ((<function>) < 0) {
perror("<function>: ");
exit(1);
}
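Applied to the function from the question, a sketch with those checks in place might look like this (same names as the question; this illustrates the checking, it is not a guaranteed fix for the intermittent pidof behaviour):

#include <stdio.h>
#include <stdlib.h>

int FindPidByProcessName(char *pName)
{
    char buf[64];
    char line[30];

    if (snprintf(buf, sizeof buf, "pidof %s", pName) >= (int)sizeof buf)
        return -1;   /* process name too long for the buffer */

    FILE *cmd = popen(buf, "r");
    if (cmd == NULL) {
        perror("popen");
        return -1;
    }

    if (fgets(line, sizeof line, cmd) == NULL) {
        /* pidof printed nothing (no such process), or the read failed */
        pclose(cmd);
        return -1;
    }

    pclose(cmd);
    return (int)strtoul(line, NULL, 10);
}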

How do I read file into a command line?

Basically what I want to do is have a program with int main(int argc, char *argv[]), and instead of typing the words on the command line, I want to have my program read those words from a file. How could I accomplish this? Is there a special command in Linux for that?
You can use standard redirect operations in a *nix shell to pass files as input:
./myprogram < inputfile.txt
This statement executes your program (myprogram) and pumps the data inside inputfile.txt to your program's standard input.
You can also redirect the output of program to a file in a similar fashion:
./myprogram > outputfile.txt
Instead of doing
for(int i = 1; i < argc; i++)
{
insert(&trie, argv[i]);
}
you could do something like
FILE *input;
char *line;
....
while (fscanf(input, "%ms", &line) != EOF) {
insert(&trie, line);
/* %ms is a POSIX extension: fscanf() allocates the buffer for you.
* If insert() makes its own copy of line, free it here as shown;
* if insert() keeps the pointer, free it later instead. */
free(line);
}
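A self-contained sketch of that approach, with a stand-in for the question's insert() so it compiles on its own (it tests for a successful conversion rather than for EOF):

#include <stdio.h>
#include <stdlib.h>

/* Stand-in for the question's insert(&trie, word); here it just prints the word. */
static void insert_word(const char *word)
{
    puts(word);
}

int main(int argc, char *argv[])
{
    FILE *input = (argc > 1) ? fopen(argv[1], "r") : stdin;
    if (input == NULL) {
        perror(argv[1]);
        return 1;
    }

    char *word;
    /* %ms is a POSIX extension: fscanf() allocates the buffer for us. */
    while (fscanf(input, "%ms", &word) == 1) {
        insert_word(word);
        free(word);
    }

    if (input != stdin)
        fclose(input);
    return 0;
}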
Use redirection
yourprogram < yourtextfile
will offer the content of yourtextfile as standard input (stdin) to yourprogram. Likewise
yourprogram > yourothertextfile
will send everything the program writes to standard output (stdout) to yourothertextfile
You'll notice when reading the man pages that most of the standard I/O functions have a version that works with an explicit stream as well as one that works directly with stdin or stdout.
For example consider the printf family:
printf ("hello world\n");
is a shorter version of
fprintf (stdout,"hello world\n");
and the same goes for scanf and stdin.
This is only the most basic usage of redirection, which in my opinion is one of the key aspects of "the unix way of doing things". As such, you'll find lots of articles and tutorials that show examples that are a lot more advanced than what I wrote here. Have a look at this Linux Documentation Project page on redirection to get started.
EDIT: getting fed input via redirection or interactively "looks" the same to the program, so it will react the same to redirected input as it does to console input. This means that if your program expects data line-wise (e.g. because it uses gets() to read lines), the input text file should be organized in lines.
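For instance, a tiny program like the following cannot tell whether its input was typed interactively or redirected from a file (a minimal sketch):

#include <stdio.h>

int main(void)
{
    char word[128];

    /* Reads whitespace-separated words from stdin, whether typed
     * interactively or redirected with:  ./a.out < yourtextfile */
    while (scanf("%127s", word) == 1)
        printf("read: %s\n", word);

    return 0;
}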
By default, every program you execute on POSIX-compliant systems has three file descriptors open (see <unistd.h> for the macros' definitions): the standard input (STDIN_FILENO), the standard output (STDOUT_FILENO), and the error output (STDERR_FILENO), all of which are initially tied to the console.
Since you said you want read lines, I believe the ssize_t getline(char **lineptr, size_t *n, FILE *stream) function can do the job. It takes a stream (FILE pointer) as a third argument, so you must either use fopen(3) to open a file, or a combination of open(2) and fdopen(3).
Getting inspiration from man 3 getline, here is a program demonstrating what you want:
#define _GNU_SOURCE
#include <stddef.h>
#include <stdio.h>
#include <stdlib.h>
int main(int argc, char *argv[])
{
FILE *fp;
size_t len;
char *line;
ssize_t bytes_read;
len = 0;
line = NULL;
if (argc > 1)
{
fp = fopen(argv[1], "r");
if (fp == NULL)
{
perror(*argv);
exit(EXIT_FAILURE);
}
}
else
fp = stdin;
while ((bytes_read = getline(&line, &len, fp)) != -1)
printf("[%2zi] %s", bytes_read, line);
free(line);
exit(EXIT_SUCCESS);
}
Without arguments, this program reads lines from the standard input: you can either feed it lines like echo "This is a line of 31 characters" | ./a.out or execute it directly and write your input from there (finish with ^D).
With a file as an argument, it will output every line from the file, and then exit.
You can have your executable read its arguments on the command line and use xargs, the standard utility for passing the contents of a file (or of standard input) to a command as arguments, for example: xargs ./myprogram < inputfile.txt.
An alternative to xargs is parallel.

How to intercept SSH stdin and stdout? (not the password)

I realize this question is asked frequently, mainly by people who want to intercept the password-asking phase of SSH. This is not what I want. I'm after the post-login text.
I want to write a wrapper for ssh, that acts as an intermediary between SSH and the terminal. I want this configuration:
(typing on keyboard / stdin) ----> (wrapper) ----> (ssh client)
and the same for output coming from ssh:
(ssh client) -----> (wrapper) -----> stdout
I seem to be able to attain the effect I want for stdout by doing a standard trick I found online (simplified code):
pipe(fd)
if (!fork()) {
close(fd[READ_SIDE]);
close(STDOUT_FILENO); // close stdout ( fd #1 )
dup(fd[WRITE_SIDE]); // duplicate the writing side of my pipe ( to lowest # free pipe, 1 )
close(STDERR_FILENO);
dup(fd[WRITE_SIDE]);
execv(argv[1], argv + 1); // run ssh
} else {
close(fd[WRITE_SIDE]);
output = fdopen(fd[READ_SIDE], "r");
while ( (c = fgetc(output)) != EOF) {
printf("%c", c);
fflush(stdout);
}
}
Like I said, I think this works. However, I can't seem to do the opposite. I can't close(STDIN_FILENO) and dup the read side of a pipe. It seems that SSH detects this and prevents it. I've read I can use the "-t -t" option to force SSH to ignore the non-terminal nature of its input; but when I try this it still doesn't work.
Any hints?
Thanks very much!
Use popen (instead of execv) to execute the ssh cmd and be able to read and write to the session.
A pipe will not work if you want to allow any interactive use of ssh with the interceptor in place. In this case, you need to create a pseudo-tty. Look up the posix_openpt, ptsname, and grantpt functions. There's also a nonstandard but much-more-intuitive function called openpty, and a wrapper for it called forkpty, which make what you're trying to do extremely easy.
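For illustration, a minimal sketch of the forkpty() route might look like this (the host string is a placeholder; on Linux, forkpty() is declared in <pty.h> and you link with -lutil; forwarding your own keyboard input to the master descriptor is omitted for brevity):

#include <pty.h>     /* forkpty() */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int master;
    pid_t pid = forkpty(&master, NULL, NULL, NULL);
    if (pid < 0) {
        perror("forkpty");
        return 1;
    }
    if (pid == 0) {
        /* Child: stdin/stdout/stderr are now the slave side of the pseudo-tty. */
        execlp("ssh", "ssh", "user@host", (char *)NULL);   /* placeholder host */
        _exit(127);
    }

    /* Parent: everything the child writes arrives on 'master'; this is
     * where a wrapper could inspect or filter the traffic. */
    char buf[256];
    ssize_t n;
    while ((n = read(master, buf, sizeof buf)) > 0)
        write(STDOUT_FILENO, buf, (size_t)n);

    return 0;
}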
Python's Paramiko does all of this with SSH but it is in Python source code. However, for a C programmer, reading Python is a lot like reading pseudocode so go to the source and learn exactly what works.
Here's a working example that writes to ssh:
#include <sys/wait.h>
#include <unistd.h>
int main(int argc, char **argv)
{
int pid;
int fds[2];
if (pipe(fds))
return -1;
pid = fork();
if (!pid)
{
close(fds[1]);
close(STDERR_FILENO);
dup2(fds[0], STDIN_FILENO);
execvp(argv[1], argv + 1);
_exit(127); /* only reached if execvp() fails */
}
else
{
char buf[256];
int rc;
close(fds[0]);
while ((rc = read(STDIN_FILENO, buf, 256)) > 0)
{
write(fds[1], buf, rc);
}
}
wait(NULL);
return 0;
}
This line is probably wrong:
execv(argv[1], argv + 1); // run ssh
The array must be terminated by a NULL pointer. If you are passing argv[], the parameter from main(), I wasn't sure there is any guarantee that this is the case. Edit: I just checked the C99 standard, and argv is NULL terminated.
execv() does not search the path for the file to execute, so if you are passing ssh as the parameter, it is equivalent to ./ssh which is probably not what you want. You could use execvp() but that is a security risk if a malicious program called ssh appears in $PATH before /bin/ssh. Better to use execv() and force the correct path.

Resources