Building a C file in Sublime Text 3 and using fork() - c

(running on a mac) My C.sublime-build file looks like this:
{
    "cmd" : ["gcc -Wall -g $file_name -o ${file_base_name} && ./${file_base_name}"],
    "selector" : "source.c",
    "shell": true,
    "working_dir" : "$file_path"
}
and I have a simple program with the following code:
#include <stdio.h>
#include <unistd.h>

int main(int argc, char *argv[]) {
    printf("hi\n");
    fork();
    printf("bye\n");
    return 0;
}
and Sublime will execute it and give me
hi
bye
hi
bye
while executing it from the shell gives me the correct result:
hi
bye
bye
Why is this happening?

According to the C standard, stdin and stdout are fully buffered only when they can be determined not to refer to an interactive device; in practice, when they refer to a terminal they are line buffered.
When you run it from ST3, stdout does not refer to a terminal device, so it is fully buffered. That means hi\n is still sitting in the buffer when fork() copies it to the child process; each process then appends bye\n and flushes its buffer on exit, so both lines are output twice.
When you run it from the shell, stdout is a terminal device and is line buffered. During execution, hi\n is output first and the buffer is flushed because of the \n. Then bye\n is written after the fork, once by the parent and once by the child, so it appears twice.

It may be that when Sublime executes it, stdout, for whatever reason, is not using line-buffered output but fully buffered output instead. So when you fork(), the "hi\n" still resides in the child's FILE buffer too. The output of both processes is only flushed when they exit, and they both print the same output.
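One way to make the output consistent regardless of how stdout is buffered is to flush the buffer before calling fork(), so the child never inherits pending output. A minimal sketch of that change (an illustration, not the only fix; setvbuf(stdout, NULL, _IOLBF, 0) at the top of main would work as well):
#include <stdio.h>
#include <unistd.h>

int main(int argc, char *argv[]) {
    printf("hi\n");
    fflush(stdout);   /* empty the stdio buffer so the child does not inherit "hi\n" */
    fork();
    printf("bye\n");  /* printed once by the parent and once by the child */
    return 0;
}
With this change the Sublime build prints hi, bye, bye, matching the terminal.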

Related

How to make an ncurses program work with other Linux utils?

Suppose I have an ncurses program which does some work on the curses screen and finally prints something to stdout. Call this program c.c, compiled to a.out.
I expect cat $(./a.out) to first fire up ncurses; after some interaction, a.out quits and prints c.c to stdout, which the shell substitutes as cat's argument, so cat prints the contents of the file c.c.
#include <stdio.h>
#include <ncurses.h>

int main() {
    initscr();
    noecho();
    cbreak();
    printw("hello world");
    refresh();
    getch();
    endwin();
    fprintf(stdout, "c.c");
    return 0;
}
I also expect ./a.out | xargs vim and ls | ./a.out | xargs less to work.
But when I type ./a.out | xargs vim, hello world never shows up. The commands do not seem to execute in order, and vim does not open c.c.
What is the correct way to make an ncurses program work with other Linux utils?
Pipes use the standard output (stdout) and standard input (stdin).
The simplest way: rather than using initscr, which initializes the output to use the standard output, use newterm, which allows you to choose the output and input streams, e.g.,
newterm(NULL, stderr, stdin);
rather than
initscr();
which is (almost) the same as
newterm(NULL, stdout, stdin);
By the way, when you include <ncurses.h> (or <curses.h>), there is no need to include <stdio.h>.
If you wanted to use your program in the middle of a pipe, that is more complicated: you would have to drain the standard input and open the actual terminal device. But that's another question (and has already been answered).
Further reading:
initscr, newterm, endwin, isendwin, set_term, delscreen -
curses screen initialization and manipulation routines
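For concreteness, here is a minimal sketch of the question's program adapted as suggested, with the curses screen drawn on stderr via newterm so that stdout stays free for the pipe (this assumes stderr is still connected to the terminal):
#include <ncurses.h>   /* no need for <stdio.h>; curses.h already provides it */

int main() {
    /* Send curses output to stderr; stdout is left untouched for the pipe. */
    SCREEN *scr = newterm(NULL, stderr, stdin);
    if (scr == NULL)
        return 1;
    noecho();
    cbreak();
    printw("hello world");
    refresh();
    getch();
    endwin();
    delscreen(scr);
    /* Only this reaches stdout, so the next program in the pipe sees just "c.c". */
    fprintf(stdout, "c.c");
    return 0;
}
With this version the curses screen is drawn on the terminal and only c.c reaches the pipe.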
ncurses works by writing a bunch of ANSI escapes to stdout, which the terminal will interpret. You can run ./a.out > file and then inspect the file to see what you're actually writing. It'll be immediately obvious why programs are confused:
$ cat -vE file
^[(B^[)0^[[?1049h^[[1;24r^[[m^O^[[4l^[[H^[[Jhello world^[[24;1H^[[?1049l^M^[[?1l^[>c.c
The correct way of doing this is to skip all the graphical/textual UI parts when you detect that stdout is not a terminal, i.e. it's consumed by a program instead of a user:
#include <unistd.h>
#include <stdio.h>
#include <ncurses.h>

int main() {
    if (isatty(1)) {
        // Output is a terminal. Show stuff to the user.
        initscr();
        noecho();
        cbreak();
        printw("hello world");
        refresh();
        getch();
        endwin();
    } else {
        // Output is consumed by a program.
        // Skip UI.
    }
    fprintf(stdout, "c.c");
    return 0;
}
This is the canonical Unix behavior.
If you instead want to force your UI to be shown regardless, you can draw your UI on stderr.

Why does printf() not output to a file when stdout is redirected to that file?

The following is a simple C program:
#include <unistd.h>
#include <stdio.h>

int main(void)
{
    while (1)
    {
        printf("Hello World\n");
        sleep(1);
    }
}
Build and run it, and "Hello World" will be printed to the terminal:
$ ./a.out
Hello World
Hello World
Hello World
But if stdout is redirected to a file, there is still nothing in the file even after the program has been running for a while:
$ ./a.out > log.txt
^C
$ cat log.txt
$
Why doesn't printf output to the file that stdout is redirected to?
stdout is line buffered by default only when it refers to a terminal. Here you redirected stdout to a file, so it no longer points to a terminal but to a file, and for a file it is fully buffered by default. So you have to flush stdout after writing to it (or change its buffering mode).
Refer to the answer to this question.
As #js1 said, you have to call fflush(stdout) after writing.
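A minimal sketch of the loop with the buffering addressed, showing both options (an illustration, not the only possible fix):
#include <unistd.h>
#include <stdio.h>

int main(void)
{
    /* Option 1: request line buffering even when stdout is a file or pipe. */
    setvbuf(stdout, NULL, _IOLBF, 0);

    while (1)
    {
        printf("Hello World\n");
        fflush(stdout);   /* Option 2: keep default buffering and flush explicitly */
        sleep(1);
    }
}
Either option on its own is enough; with it, log.txt fills up one line per second.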

Why does using pipes with `who` cause mom not to like me?

In a program I'm writing, I fork() and execl() to determine who mom likes. I noticed that if I set up pipes to write to who's stdin, it produces no output. If I don't set up pipes to write to stdin, then who produces output as normal. (Yes, I know writing to who's stdin is pointless; it was residual code from executing other processes that made me discover this.)
Investigating this, I wrote this simple program (edit: for a simpler example, just run: true | who mom likes):
$ cat t.c
#include <unistd.h>
#include <assert.h>

int main()
{
    int stdin_pipe[2];
    assert( pipe(stdin_pipe) == 0 );
    assert( dup2(stdin_pipe[0], STDIN_FILENO) != -1 );
    assert( close(stdin_pipe[0]) == 0 );
    assert( close(stdin_pipe[1]) == 0 );
    execl("/usr/bin/who", "/usr/bin/who", "mom", "likes", (char*)NULL);
    return 0;
}
Compiling and running results in no output, which is what surprised me initially:
$ cc t.c
$ ./a.out
$
However, if I compile with -DNDEBUG (to remove the piping work in the assert()s) and run, it works:
$ cc -DNDEBUG t.c
$ ./a.out
batman pts/0 2014-08-15 12:57 (:0)
$
As soon as I call dup2(stdin_pipe[0], STDIN_FILENO), who stops producing output. The only explanation I could come up with is that dup2 affects the tty, and who uses the tty to determine who I am (given that the -m flag prints "only hostname and user associated with stdin"). My main question is:
Why can't who mom likes/who am i/who -m determine who I am when I give it a pipe for stdin? What mechanism is it using to determine its information, and why does using a pipe ruin this mechanism? I know it's using stdin somehow, but I don't understand exactly how or exactly why stdin being a pipe matters.
Let's look at the source code for GNU coreutils who:
if (my_line_only)
  {
    ttyname_b = ttyname (STDIN_FILENO);
    if (!ttyname_b)
      return;
    if (STRNCMP_LIT (ttyname_b, DEV_DIR_WITH_TRAILING_SLASH) == 0)
      ttyname_b += DEV_DIR_LEN;   /* Discard /dev/ prefix. */
  }
When -m (my_line_only) is used, who finds the tty device connected to stdin, and then proceeds to find the entry for that tty in utmp.
When stdin is not a terminal, there is no name to look up in utmp, so it exits without printing anything.
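To see that mechanism for yourself, here is a small stand-alone demonstration (not part of who's source) that prints what ttyname() reports for stdin:
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* Run directly: prints something like /dev/pts/0.
       Run as  true | ./a.out  : stdin is a pipe, so ttyname() returns NULL. */
    char *name = ttyname(STDIN_FILENO);
    printf("ttyname(stdin) = %s\n", name ? name : "(not a tty)");
    return 0;
}
The NULL case is exactly where who -m bails out without printing anything.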

Redirecting output of a C program as input of another program in Linux command shell

I wrote a program p1.c which takes input from the Linux command line (using char *n = argv[1]). I want the character output of p1.c to be taken as the input of program p2.c. How can I do this? I used the command
./p2.out < ./p1.out T > output.txt. It doesn't seem to work, as 'T' is taken as an argument to p2.out and its output is written to output.txt.
Use a pipeline: ./p1.out T | ./p2.out
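For illustration, a minimal sketch of what the receiving side might look like, assuming p2.c simply echoes a single character arriving on stdin (the question does not show the real p1.c/p2.c, so the details here are hypothetical):
#include <stdio.h>

int main(void)
{
    int c = getchar();                  /* reads whatever p1 wrote to the pipe */
    if (c != EOF)
        printf("p2 received: %c\n", c);
    return 0;
}
Run as ./p1.out T | ./p2.out: the pipe connects p1's stdout to p2's stdin, and 'T' stays an argument to p1.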

execve("/bin/sh", 0, 0); in a pipe

I have the following example program:
#include <stdio.h>
#include <unistd.h>   /* for execve() */

int
main(int argc, char **argv)
{
    char buf[100];
    printf("Please enter your name: ");
    fflush(stdout);
    gets(buf);
    printf("Hello \"%s\"\n", buf);
    execve("/bin/sh", 0, 0);
}
And when I run it without any pipe it works as it should and gives me an sh prompt:
bash$ ./a.out
Please enter your name: warning: this program uses gets() which is unsafe.
testName
Hello "testName"
$ exit
bash$
But this does not work in a pipe. I think I know why, but I cannot figure out a solution. Example run below:
bash$ echo -e "testName\npwd" | ./a.out
Please enter your name: warning: this program uses gets() which is unsafe.
Hello "testName"
bash$
I figure this has something to do with the fact that gets empties stdin in such a way that /bin/sh receives an EOF and promptly quits without an error message.
But how do I get around this (without modifying the program if possible, and without removing gets if modification is needed) so that I get a prompt even though I supply input through a pipe?
P.S. I am running this on a FreeBSD (4.8) machine D.S.
You can run your program without any modifications like this:
(echo -e 'testName\n'; cat ) | ./a.out
This way you ensure that your program's standard input doesn't end after what echo outputs. Instead, cat continues to supply input to your program. The source of that subsequent input is your terminal since this is where cat reads from.
Here's an example session:
bash-3.2$ cc stdin_shell.c
bash-3.2$ (echo -e 'testName\n'; cat ) | ./a.out
Please enter your name: warning: this program uses gets(), which is unsafe.
Hello "testName"
pwd
/home/user/stackoverflow/stdin_shell_question
ls -l
total 32
-rwxr-xr-x 1 user group 9024 Dec 14 18:53 a.out
-rw-r--r-- 1 user group 216 Dec 14 18:52 stdin_shell.c
ps -p $$
PID TTY TIME CMD
93759 ttys000 0:00.01 (sh)
exit
bash-3.2$
Note that because the shell's standard input is not connected to a terminal, sh decides it is not being run interactively and hence does not display a prompt. You can type your commands normally, though.
Using execve("/bin/sh", 0, 0); is cruel and unusual punishment for the shell. It gives it no arguments or environment at all - not even its own program name, nor even such mandatory environment variables as PATH or HOME.
Not 100% sure of this (the precise shell being used and the OS might throw these answers off a bit; I believe that FreeBSD uses GNU bash by default as /bin/sh?), but
sh may be detecting that its input is not a tty.
or
Your version of sh might also go into non-interactive mode like that if called as sh, expecting login to prepend a - onto argv[0] for it. Setting it up as execve("/bin/sh", (char *[]){ "-sh", NULL }, NULL) might convince it that it's being run as a login shell.
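If modifying the program is acceptable, here is a minimal sketch of a better-behaved launch (illustrative only, and separate from the pipe/EOF issue, which the (echo; cat) trick above handles): pass a real argv, reuse the current environment, and use "-sh" as argv[0] if you want the login-shell hint:
#include <stdio.h>
#include <string.h>
#include <unistd.h>

extern char **environ;   /* reuse the current environment: PATH, HOME, ... */

int main(void)
{
    char buf[100];

    printf("Please enter your name: ");
    fflush(stdout);
    if (fgets(buf, sizeof buf, stdin) == NULL)   /* safer stand-in for gets() */
        return 1;
    buf[strcspn(buf, "\n")] = '\0';              /* drop the trailing newline */
    printf("Hello \"%s\"\n", buf);

    char *sh_argv[] = { "-sh", NULL };           /* "-sh" as argv[0] hints "login shell" */
    execve("/bin/sh", sh_argv, environ);
    perror("execve");                            /* only reached if execve fails */
    return 1;
}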
