gdb - debugging with pipe - c

Say I have two programs named blah and ret. I want to debug the blah program, which receives its input from the ret program via I/O redirection. How do I debug blah in the following case using gdb?
bash> ret | blah

One approach is to run the program and then attach the debugger to it by PID. This solution, of course, doesn't cover all cases.
Another approach is to use Linux's facilities for inter-process communication. In short, you redirect the output of ret to a FIFO special file (a "named pipe") and then have the debugged process read from that FIFO. Here's how it's done. From bash, run:
mkfifo foo
This creates a special file in your directory that will serve as a named pipe. When you write text to this file (for example, echo "Hello" > foo), the writing program blocks until someone reads the data from the file (cat < foo, for instance). In our case, a gdb-controlled process will read from this file.
After you have created the FIFO, run from bash:
ret > foo & # ampersand because it may block as nobody is reading from foo
gdb blah
Then, in gdb prompt, run
run <foo
And you get the desired effect. Note that you can't read the data from the FIFO twice (the same holds for an ordinary pipe): once you've read all the data, the blah process hits end-of-file and exits, and you have to repeat the command that writes to foo (you can do that from another shell window).
When you're done, remove the FIFO with rm foo (or create it in a directory that is automatically cleaned up on reboot, such as /tmp).

GDB's run command uses bash to perform redirection. A simple way to achieve the equivalent of ret | blah is to use bash's process substitution feature.
$ gdb blah
...
(gdb) run < <(ret)
Explanation: bash substitutes <(ret) with something like /dev/fd/123, a path that refers to a file descriptor connected to the stdout of ret. We can use that fd much like the named FIFO described in the other answer, except that we don't have to create it manually ourselves, nor worry about the lifetime of the ret process.
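For readers who want to reproduce the session above, here is a minimal stand-in for blah (the file name blah.c and the line-echoing behaviour are assumptions, not from the question; any program that reads stdin will do):
/* blah.c -- hypothetical stand-in for the program being debugged:
 * it simply echoes every line it reads from stdin */
#include <stdio.h>

int main(void) {
    char line[256];
    while (fgets(line, sizeof line, stdin) != NULL)
        printf("blah got: %s", line);
    return 0;
}
Compile it with gcc -g blah.c -o blah so that gdb has debugging symbols to work with.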

Related

Hijacking system("/bin/sh") to run arbitrary commands

I'm trying to perform a privilege escalation attack using a binary which performs the call:
system("/bin/sh");
Is there a way to pass commands as "arguments" or similar to the opened shell?
(I don't see the shell opening; I guess it runs and exits as soon as it has nothing to do, which is immediately.)
Edit: I cannot edit the code; it's already compiled.
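For reference, a minimal stand-in for such a binary might look like the following (the file name vuln.c is an assumption; the point of the question is that the real binary is already compiled and cannot be changed):
/* vuln.c -- hypothetical stand-in for the compiled binary that spawns a shell */
#include <stdlib.h>

int main(void) {
    system("/bin/sh");
    return 0;
}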
If you execute
system("/bin/bash");
the shell enters interactive mode. It reads commands from standard input and writes answers to standard output. The standard input and output are inherited from the calling (your) program. Your program will wait until the shell finishes (i.e. until you enter the exit command or type ^D at the beginning of a line). The shell will run with the same privileges as the calling program.
If you control stdin
What you'll need to do is connect stdin to something that will, when read, provide a source of commands before invoking that code.
I'm writing the below in bash, but you can convert it to whatever language you actually intend to do this in:
# create a file with the commands you want to run
cat >/tmp/commands <<'EOF'
echo "Hello world"
EOF
# open that file and copy its file descriptor to FD 0 (stdin)
exec </tmp/commands
# then invoke your compiled executable that starts a shell.
run-your-command-that-starts-a-shell
If the program controls or overrides its stdin
Another option is to pass the ENV variable with the name of a file for the shell to source:
cat >/tmp/commands <<'EOF'
echo "Hello world"
EOF
ENV=/tmp/commands run-your-command-that-starts-a-shell

Is it possible to connect more than the two standard streams to a terminal in Linux?

Consider the following simple program, and suppose it is in a file called Test.c.
#include <stdio.h>
int main() {
    fprintf(stdout, "Hello stdout\n");
    fprintf(stderr, "Hello stderr\n");
}
Suppose I compile this program into an executable called Test and run it as follows.
./Test > Out 2> Err
After this run, I will have two files Out and Err, which will contain the two messages respectively.
This is wonderful because I can have two different types of messages normally printed to the console, and then filter one or both by using bash redirection. However, the fact that I can only do this kind of filtering with two file descriptors seems very limiting.
Is there some way to open a third or nth file descriptor which points at the terminal output, so I could filter it separately?
The syntax might be something like this.
./Test > Out 2> Err 3> Err2
I speculate that bash may have some rudimentary support for this because of the following test, which seems to imply that bash will treat the number after a & as a file descriptor.
$ ./Test >&2
Hello stdout
Hello stderr
$ ./Test >&3
bash: 3: Bad file descriptor
At a shell, running either
exec 3>/dev/tty
...or...
exec 3>&1
...will open file descriptor 3, pointing it either to your TTY explicitly (in the first case) or to wherever stdout is currently writing (in the second).
If you want to use this in a program, I would strongly suggest taking an FD number to write extra logs to as an optional argument:
yourprogram --extra-logs-fd=3
...combining that output with stderr, or suppressing it entirely (as appropriate) if no such option is given. (A user who wanted extra logging to go to stdout could thus use --extra-logs-fd=1, or --extra-logs-fd=2 for stderr).
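A sketch of what that might look like in C follows (the option name --extra-logs-fd and the fallback to stderr are assumptions for illustration, not an existing program):
/* extra_logs.c -- hypothetical sketch: write extra logs to a caller-supplied FD */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(int argc, char *argv[]) {
    int logfd = 2;                       /* default: extra logs go to stderr */
    for (int i = 1; i < argc; i++)
        if (strncmp(argv[i], "--extra-logs-fd=", 16) == 0)
            logfd = atoi(argv[i] + 16);  /* the caller promises this FD is open */
    FILE *logs = fdopen(logfd, "w");
    if (logs == NULL) { perror("fdopen"); return 1; }
    printf("normal output\n");
    fprintf(logs, "extra log line\n");
    fflush(logs);
    return 0;
}
Invoked as ./extra_logs --extra-logs-fd=3 3>extra_logs.txt, the extra line lands in extra_logs.txt while normal output still goes to stdout.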
Even better, if your only target OS is Linux, is just to accept a filename to write to:
# to write to a file
yourprogram --extra-logs=extra_logs.txt
# to write to FD 3
yourprogram --extra-logs=/dev/fd/3
# to write to a tee program, then to stderr (in ksh or bash)
yourprogram --extra-logs=>(tee extra_logs.txt >&2)
...of course, you can do all that with the FD mode (just redirect 3>extra_logs.txt in your shell in the first case, and 3> >(tee extra_logs.txt >&2) in the third), but this makes you do the work of managing FD numbers by hand, and with what advantage?

How to tail stdout or any of the standard streams?

I'm trying to understand how standard streams work in Linux, specifically how I can capture, say, stdout directly from my terminal with something like tail. Since everything in Linux is a file, isn't it possible to simply do a tail -f /dev/stdout?
So to test this, I wrote a trivial program:
#include <stdio.h>
#include <unistd.h>

int main(int argc, char *argv[]) {
    while (1) {
        printf("This takes advantage of stdout\n");
        sleep(1);
    }
    return 0;
}
Then in a separate terminal I did a tail -f stdout, but nothing is printed. Am I doing something wrong?
Each process has its own stdout. So you need to capture the standard output of some running process. The usual way is to make a pipeline, but you may also redirect the stdout to some file.
Learn more about tee(1), and read some bash scripting tutorial.
A typical use might be through batch(1) and a here document; I often do
batch << EOJ
make >& _make.out
EOJ
The >& is a bash-ism (or zsh-ism) that redirects both stdout and stderr.
Then, in a separate terminal (or even in the same terminal, after the batch command above, in the same current directory), run
tail -f _make.out
This lets me watch the progress of, e.g., a long-running compilation (and of course I can interrupt the tail -f command with Ctrl-C without harming the compilation). YMMV.
Read also the Advanced Bash-Scripting Guide.
Actually, /dev/stdout (notice that stdout alone means nothing particular to the shell) is a symlink to /proc/self/fd/1. Read proc(5).
BTW, in your C code, get into the habit of calling fflush(3) before any important syscall like sleep or fork (your code doesn't strictly need it here because stdout on a terminal is line buffered, but it may be buffered differently when stdout is not a tty). Read stdout(3). Terminals are weird beasts; read The TTY demystified.
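Concretely, the test program from the question with that advice applied looks like this (the only change is the added fflush call):
#include <stdio.h>
#include <unistd.h>

int main(void) {
    while (1) {
        printf("This takes advantage of stdout\n");
        fflush(stdout);   /* push the line out even when stdout is not a tty */
        sleep(1);
    }
}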
addenda
To test in your C or C++ code if stdout is a terminal, use isatty(3), i.e.
if (isatty(STDOUT_FILENO))
    stdout_is_a_terminal();
Notice that stdout could instead be a pipe or a file (in which case, of course, the isatty() test above would be false).
To write to the terminal, use /dev/tty (see tty(4)). To write on the console use /dev/console (see console(4)...). For system logging, learn about syslog(3) and e.g. rsyslogd(8).
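As a small illustration of the last point, here is a sketch that writes directly to the controlling terminal, bypassing any redirection of stdout (it assumes the process actually has a controlling terminal):
#include <stdio.h>

int main(void) {
    FILE *tty = fopen("/dev/tty", "w");
    if (tty == NULL) { perror("/dev/tty"); return 1; }  /* no controlling terminal */
    fprintf(tty, "this goes to the terminal even if stdout is redirected\n");
    fclose(tty);
    return 0;
}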

Redirecting the output of a child process

There are several ways of redirecting the output of a child process:
using freopen(3)
using dup(2)
using popen(3)
...
What should one pick if all that is wanted is to execute a child process and have its output saved in a given file, pretty much like ls > files.txt works?
What is normally used by shells?
You can discover what your favorite shell uses by strace(1)ing your shell.
In one terminal:
echo $$
In another terminal:
strace -o /tmp/shell -f -p [PID from the first shell]
In the first terminal again:
ls > files.txt
In the second terminal, ^C your strace(1) command and then edit the /tmp/shell output file to see what system calls it made to do the redirection.
freopen(3) manipulates the C standard IO FILE* pointers. All this will be thrown away on the other side of the execve(2) call, because it is maintained in user memory. You could use this after the execve(2) call, but that would be awkward to use generically.
popen(3) opens a single unidirectional pipe(7). This is useful, but extremely limited -- you get either the standard output descriptor or the standard input descriptor. This would fail for something like ls | grep foo | sort where both input and output must be redirected. So this is a poor choice.
dup2(2) will manage file descriptors -- a kernel-implemented resource -- so it will persist across execve(2) calls and you can set up as many file descriptors as you need, which is nice for ls > /tmp/output 2> /tmp/error or handling both input and output: ls | sort | uniq.
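A sketch of the dup2() approach, roughly what a shell does for ls > files.txt, is shown below (error handling is minimal and the output file name is just an example):
/* redirect_child.c -- hypothetical sketch of fork + open + dup2 + exec */
#include <fcntl.h>
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();
    if (pid == 0) {                        /* child */
        int fd = open("files.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0) { perror("open"); _exit(1); }
        dup2(fd, STDOUT_FILENO);           /* stdout now refers to files.txt */
        close(fd);                         /* descriptor 1 keeps the file open */
        execlp("ls", "ls", (char *)NULL);  /* fds survive execve: they are kernel state */
        perror("execlp");
        _exit(1);
    }
    waitpid(pid, NULL, 0);                 /* parent waits, like a shell would */
    return 0;
}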
There is another mechanism: pty(7) handling. The forkpty(3), openpty(3), functions can manage a new pseudo-terminal device created specifically to handle another program. The Advanced Programming in the Unix Environment, 2nd edition book has a very nice pty example program in its source code, though if you're having trouble understanding why this would be useful, take a look at the script(1) program -- it creates a new pseudo-terminal and uses it to record all input and output to and from programs and stores the transcript to a file for later playback or documentation. You can also use it to script actions in interactive programs, similar to expect(1).
I would expect to find dup2() used mainly.
Neither popen() nor freopen() is designed to handle redirections such as 3>&7. Up to a point, dup() could be used, but the 3>&7 example shows where dup() starts to creak; you'd have to ensure that file descriptors 4, 5, and 6 are open (and 7 is not) before it would handle what dup2() would do without fuss.

How to redirect the output of a c program to a file?

I am trying to redirect the output of a C program to a file, even when it generates some errors because of problems with the input data. I can send the output, but not the error messages, to a file.
Does somebody know how to do it?
From within C source code, you can redirect outputs using freopen():
General outputs:
freopen("myfile.txt", "w", stdout);
Errors:
freopen("myfile_err.txt", "w", stderr);
(This answer applies to bash shell, and similar flavors. You didn't specify your environment and this sort of question needs that detail.)
I assume you know about basic redirection with ">". To also capture STDERR in addition to STDOUT, use the following syntax:
command > file-name 2>&1
For some more background on standard streams and numbers:
http://en.wikipedia.org/wiki/Standard_streams#Standard_input_.28stdin.29
This depends on what you mean and what platform you are using. Very often you can accomplish this from the command line, which has been covered in another answer. If you use that method, be aware that FILE * stderr is typically written immediately (unbuffered), while FILE * stdout may be buffered (usually line buffered), so some of your error messages can appear to have been printed earlier than other messages, when in fact the other messages are just being printed late.
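You can see this buffering effect with a tiny test (the ordering described in the trailing comment is what typically happens when both streams go to the same file; the exact order is not guaranteed):
#include <stdio.h>

int main(void) {
    printf("to stdout\n");            /* line buffered on a tty, block buffered to a file */
    fprintf(stderr, "to stderr\n");   /* unbuffered: written immediately */
    return 0;
}
/* Running ./a.out > out.txt 2>&1 and then cat out.txt typically shows
 * "to stderr" before "to stdout", because stdout is only flushed at exit. */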
From within a C program you can also do something similar within the stdio system using freopen, which will affect the FILE *, so you could make fprintf(stderr, "fungus"); print to something other than what stderr would normally print to.
But if you want to know how to make a program redirect the actual file descriptors under a unix like system you need to learn about the dup and dup2 system calls. They allow you to duplicate a file descriptor.
#include <fcntl.h>    /* for open() */
#include <unistd.h>   /* for dup2() and close() */
int fd = open("some_file", O_WRONLY | O_CREAT, 0644);
dup2(fd, 2);  /* make descriptor 2 (stderr) refer to some_file */
close(fd);
This code will make "some_file" the new stderr at the OS level. The dup2 call closes file descriptor 2 and replaces it with a duplicate of fd (descriptor 2 is what FILE * stderr usually uses, but not necessarily, if you have called freopen(x, y, stderr), since that may make FILE * stderr use a different file descriptor).
This is how shell programs redirect the input and output of programs. They open all of the files that the new program will need, fork, and then the child uses dup2 to set up the file descriptors for the new program, closes any files that the new program won't need (usually leaving just 0, 1, and 2 open), and then uses one of the exec functions to become the program that the shell was told to run. (Some of this isn't entirely accurate because some shells rely on close-on-exec flags.)
Using a simple Linux command, you can save the output into a file. Here is a simple terminal command:
ls > file.txt
The output of this command will be stored in the file.
In the same way you can store the output of your program. Suppose the executable is named a; run the following command to save its output to a file:
./a > file.txt
