Launch interactive session via script bridge - lldb

I am trying to launch an interactive debugging session from a python script via the SWIG-generated lldb module. The program to debug is nothing but an empty main function. Here is my current attempt:
import lldb
import sys
import os
debugger = lldb.SBDebugger.Create()
debugger.SetAsync(False)
target = debugger.CreateTargetWithFileAndArch("a.out", "")
# The breakpoint itself works fine:
fileSpec = lldb.SBFileSpecList()
mainBp = target.BreakpointCreateByName("main", 4, fileSpec, fileSpec)
mainBp.SetAutoContinue(False)
# Use the current terminal for IO
stdout = os.ttyname(sys.stdout.fileno())
stdin = os.ttyname(sys.stdin.fileno())
stderr = os.ttyname(sys.stderr.fileno())
flag = lldb.eLaunchFlagNone
target.Launch(target.GetDebugger().GetListener(), [], [], stdin, stdout,
              stderr, os.getcwd(), flag, False, lldb.SBError())
It seems to me that whatever flag I pass to target.Launch (I tried several of them), there is no way to switch to an interactive editline session. I do understand that the primary purpose of the Python bindings is non-interactive scripting, but I am nevertheless curious whether this scenario can be made possible.

There is a method on SBDebugger to do this (RunCommandInterpreter). That's how Xcode and similar tools create lldb console windows. But so far it has only been used from C++, and there's something wrong with the C++ -> Python bindings for this function, such that when you try to call it from Python you get a weird error about the 5th argument being of the wrong type. The argument is an int&, and that gives SWIG (the interface generator) errors at runtime.
Of course, you could just start reading from stdin after launch and, every time you get a complete line, pass it to SBCommandInterpreter::HandleCommand (a sketch of this follows below). But getting RunCommandInterpreter working is the preferable solution.
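Until the RunCommandInterpreter binding is fixed, a minimal sketch of that fallback might look like this (untested; it reuses the debugger object and the sys import from the script above):
interpreter = debugger.GetCommandInterpreter()
result = lldb.SBCommandReturnObject()
for line in sys.stdin:  # feed each complete line to the command interpreter
    interpreter.HandleCommand(line.strip(), result)
    sys.stdout.write(result.GetOutput() or "")
    sys.stderr.write(result.GetError() or "")
    result.Clear()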

Related

Running cat /proc/cpuinfo in glib

I've been trying to find questions on how to use g_spawn_sync(), and it is said to be a good choice when you want to execute a terminal command without wiring up pipes yourself.
The only thing I can't figure out is why the command cat /proc/cpuinfo doesn't work. error->message returns (No such file or directory), but if I use commands like ls or cat alone, it works. I also tried running cd /proc && cat cpuinfo, but it gives me the same error.
I'm no expert in GLib, but I read in the manual that I can use G_SPAWN_SEARCH_PATH so that my PATH is checked for the commands I use, without my having to include the absolute path to the command.
I have the following code:
gchar *argv[] = { "cat /proc/cpuinfo", NULL };
char *output = NULL; // will contain command output
GError *error = NULL;
int exit_status = 0;
if (!g_spawn_sync(NULL, argv, NULL, G_SPAWN_SEARCH_PATH, NULL, NULL,
                  &output, NULL, &exit_status, &error))
{
    printf("[getHardwareInfo] DEBUG: Error on g_spawn_sync %s.\n", error->message);
}
tl;dr: Do not use g_spawn_command_line_sync() unless you really know what you are doing.
Firstly, the actual problem you are hitting: John Szakmeister’s comment was correct: g_spawn_sync() takes an array of arguments, the first one of which is the path to the program to execute (or to look for in $PATH, if you’ve specified G_SPAWN_SEARCH_PATH). By passing the array { "cat /proc/cpuinfo", NULL }, you are saying that you want to run the program cat /proc/cpuinfo with no arguments, not the program cat with the argument /proc/cpuinfo.
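In other words, the corrected call would look something like this (minimal sketch; error handling trimmed to match the original):
gchar *argv[] = { "cat", "/proc/cpuinfo", NULL };  /* each argument is its own element */
gchar *output = NULL;
GError *error = NULL;
gint exit_status = 0;

if (!g_spawn_sync(NULL, argv, NULL, G_SPAWN_SEARCH_PATH, NULL, NULL,
                  &output, NULL, &exit_status, &error))
{
    printf("[getHardwareInfo] DEBUG: Error on g_spawn_sync %s.\n", error->message);
}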
However, there are many other problems here, and I think it’s important to mention them before people start cargo-culting this code, because they have security implications:
As LegalProgrammer says, why are you spawning cat when you could just call g_file_get_contents()? (See the first sketch below, after these points.)
Failing that, use GSubprocess instead of g_spawn_*(). It's a more modern API, which allows you to monitor the lifecycle of the spawned process more easily, as well as getting streaming I/O in and out of the subprocess. (The second sketch below shows this.)
Do not ignore the warnings in the manual about the security implications of using g_spawn_command_line_sync(). There are several:
It will run the first matching program found in your $PATH, so if an attacker has control of your $PATH, or write access to any directory in that $PATH (such as ~/.local/bin), you will end up running an attacker-controlled program.
It’s a synchronous function, so will block on the subprocess completing, which could take unbounded time. Your program will be unresponsive for that time.
It returns the output in a single allocation, rather than as a stream, so if the subprocess returns many megabytes of output, you may hit allocation failures and abort.
The obvious next step from "g_spawn_command_line_sync() seems to do what I want" is "let's use g_strdup_printf() to put together a command line to run with it", and then you have shell-injection vulnerabilities, where an attacker who controls any of the parameters to that printf() can twist the entire shell command into executing arbitrary code of their choosing.
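For what it's worth, here is a minimal sketch of the g_file_get_contents() route mentioned above (no subprocess at all):
#include <glib.h>

int main(void)
{
    gchar *contents = NULL;
    gsize length = 0;
    GError *error = NULL;

    if (g_file_get_contents("/proc/cpuinfo", &contents, &length, &error))
    {
        g_print("%s", contents);
        g_free(contents);
    }
    else
    {
        g_printerr("Error: %s\n", error->message);
        g_clear_error(&error);
    }
    return 0;
}
And a sketch of the GSubprocess route (assumes GLib/GIO 2.40 or later; compile against gio-2.0):
#include <gio/gio.h>

int main(void)
{
    GError *error = NULL;
    gchar *stdout_buf = NULL;
    GSubprocess *proc = g_subprocess_new(G_SUBPROCESS_FLAGS_STDOUT_PIPE, &error,
                                         "cat", "/proc/cpuinfo", NULL);

    /* Blocks until the child exits; collects its stdout into one buffer. */
    if (proc && g_subprocess_communicate_utf8(proc, NULL, NULL,
                                              &stdout_buf, NULL, &error))
    {
        g_print("%s", stdout_buf);
        g_free(stdout_buf);
    }
    else
    {
        g_printerr("Error: %s\n", error->message);
        g_clear_error(&error);
    }
    g_clear_object(&proc);
    return 0;
}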
I'm answering my own question here. After reading the manual again, I decided to use another function, g_spawn_command_line_sync(), which is simpler to use than g_spawn_sync().
A simple version of g_spawn_sync() with little-used parameters removed, taking a command line instead of an argument vector. See g_spawn_sync() for full details. command_line will be parsed by g_shell_parse_argv(). Unlike g_spawn_sync(), the G_SPAWN_SEARCH_PATH flag is enabled. Note that G_SPAWN_SEARCH_PATH can have security implications, so consider using g_spawn_sync() directly if appropriate. Possible errors are those from g_spawn_sync() and those from g_shell_parse_argv().
Here is my new code:
char *output = NULL; // will contain command output
GError *error = NULL;
gint exit_status = 0;
if (!g_spawn_command_line_sync("cat /proc/cpuinfo", &output, NULL, &exit_status, &error))
{
    printf("[getHardwareInfo] DEBUG: Error on g_spawn_command_line_sync %s.\n", error->message);
}

embedded perl in C, perlapio - interoperability with STDIO

I just realized that the PerlIO layer does more than just (more or less) thinly wrap the stdio.h functions.
If I try to use a file descriptor obtained via PerlIO_stdout() and PerlIO_fileno() with functions from stdio.h, it fails.
For example:
PerlIO* perlStdErr = PerlIO_stderr();
fdStdErrOriginal = PerlIO_fileno(perlStdErr);
relocatedStdErr = dup(fdStdErrOriginal);
_write(relocatedStdErr, "something", 9); //<-- this fails
I've tried this with VC10. The embedded Perl program is executed from a different context, so it's not possible to use PerlIO from the context where the write to relocatedStdErr is performed.
For the curious: I need to execute a Perl script and forward the output of the script's stdout/stderr to a log while keeping the ability to write to stdout myself. Moreover, this should work platform-independently (Linux, Windows console application, Win32 desktop application). Simply forwarding stdout/stderr doesn't work in Win32 desktop applications, since there is none ;) - you need to use Perl's stdout/stderr.
Needed solution: Be able to write on a filehandle (or descriptor) derived from perlio NOT using the PerlIO stack.
EDIT - my solution:
As Story Teller pointed out, PerlIO_findFILE did the trick.
So here an excerpt of the code - see the comments inside for descriptions:
FILE* stdErrFILE = PerlIO_findFILE(PerlIO_stderr()); // convert Perl's stderr to a stdio FILE handle
fdStdErrOriginal = _fileno(stdErrFILE); // get descriptor using MSVC
if (fdStdErrOriginal >= 0)
{
    relocatedStdErr = _dup(fdStdErrOriginal); // relocate stdErr for external writing using MSVC
    if (relocatedStdErr >= 0)
    {
        if (pipe(fdPipeStdErr) == 0) // create pipe for forwarding stdErr - USE PERL'S IO, since the Win32 subsystem (non-console) "_pipe" doesn't work
        {
            if (dup2(fdPipeStdErr[1], fdStdErrOriginal) >= 0) // hang the pipe on stdErr - USE PERL'S IO (since it was created by Perl)
            {
                close(fdPipeStdErr[1]); // close the now-duplicated writer on stdErr for further usage - USE PERL'S IO (since it was created by Perl)
                // "StreamForwarder" creates a thread that catches/reads the pipe's input and
                // forwards it to the processStdErrOutput function (using the PerlIO)
                stdErrForwarder = new StreamForwarder(fdPipeStdErr[0], &processStdErrOutput, PerlIO_stderr());
                return relocatedStdErr; // return the relocated stdErr to be able to '_write' onto it
            }
        }
    }
}
...
...
_write(relocatedStdErr, "Hello Stackoverflow!", 20); //that works :)
One interesting thing that I actually don't understand: the Perl documentation says it's necessary to #define PERLIO_NOT_STDIO 0 to be able to use PerlIO_findFILE(). For me it works fine without it, and furthermore I like to use PerlIO and stdio together anyway. That's a point where I haven't figured out what is going on.
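For reference, the define mentioned in the perlapio documentation would go before the Perl embedding headers; something like this sketch (I am not certain it is required on every platform, as noted above):
#define PERLIO_NOT_STDIO 0  /* per perlapio: allow using PerlIO and stdio together */
#include <EXTERN.h>
#include <perl.h>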

How to debug cgi program written in C and running in Apache2?

I have a complex CGI executable written in C, which I configured in Apache2, and it is now running successfully. How can I debug this program at the source level, e.g. set breakpoints and inspect variables? Are there tools like gdb or Eclipse for this? Is there a tutorial on how to set up the debugging environment?
Thanks in advance!!
The CGI interface basically consists of passing the HTTP request to the executable's standard input and getting the response on the standard output. Therefore you can write test requests to files and manually execute your CGI without having to use Apache. The debugging can then be done with GDB:
gdb ./my_cgi
>> break some_func
>> run < my_req.txt
with my_req.txt containing the full request:
GET /some/func HTTP/1.0
Host: myhost
If you absolutely need the CGI to be run by Apache it may become tricky to attach GDB to the right process. You can for example configure Apache to have only one worker process, attach to it with gdb -p and use set follow-fork-mode child to make sure it switches to the CGI process when a request arrives.
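A hypothetical session for that approach might look like the following (apachectl -X runs the server in single-process debug mode; the process name and the exact moment to set breakpoints vary by distribution and MPM):
sudo apachectl -X &
sudo gdb -p "$(pgrep -n apache2)"
(gdb) set follow-fork-mode child
(gdb) continue
Then trigger the request in a browser or with a test client; gdb follows the fork into the CGI child, where you can interrupt and set breakpoints in your own code.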
I did this: in the CGI's main function I added code that checks for an existing file, such as /var/tmp/flag. While the file exists, the program waits in a loop, which leaves enough time to attach to the CGI process via gdb. I then delete /var/tmp/flag, and from that point on I can debug my CGI code.
#include <fstream>
#include <unistd.h>

bool file_exists(const char *filename)
{
    std::ifstream ifile(filename);
    return ifile.good();
}

int cgiMain()
{
    while (file_exists("/var/tmp/flag"))
        sleep(1);
    // ... your code ...
}
Unless FastCGI or SCGI is used, the CGI process is short-lived and you need to delay its exit to have enough time to attach the debugger while the process is still running. For casual debugging the easiest option is to simply use sleep() in an endless loop at the breakpoint location and exit the loop with the debugger once it is attached to the program.
Here's a small example CGI program:
#include <stdio.h>
#include <unistd.h>
void wait_for_gdb_to_attach() {
int is_waiting = 1;
while (is_waiting) {
sleep(1); // sleep for 1 second
}
}
int main(void) {
wait_for_gdb_to_attach();
printf("Content-Type: text/plain;charset=us-ascii\n\n");
printf("Hello!");
return 0;
}
Suppose it is compiled into cgi-debugging-example; this is how you would attach the debugger once the application enters the endless loop:
sudo cgdb cgi-debugging-example $(pgrep cgi-debugging)
Next you need to exit the infinite loop and wait_for_gdb_to_attach() function to reach the "breakpoint" in your application. The trick here is to step out of sleep functions until you reach wait_for_gdb_to_attach() and set the value of the variable is_waiting with the debugger so that while (is_waiting) exits:
(gdb) finish
Run till exit from 0x8a0920 __nanosleep_nocancel () at syscall-template.S:81
0x8a07d4 in __sleep (seconds=0) at sleep.c:137
(gdb) finish
Run till exit from 0x8a07d4 in __sleep (seconds=0) at sleep.c:137
wait_for_gdb_to_attach () at cgi-debugging-example.c:6
Value returned is $1 = 0
(gdb) set is_waiting = 0 # <<<<<< to exit while
(gdb) finish
Run till exit from wait_for_gdb_to_attach () cgi-debugging-example.c:6
main () at cgi-debugging-example.c:13
Once you are out of wait_for_gdb_to_attach(), you can continue debugging the program or let it run to completion.
Full example with detailed instructions here.
I'm not sure how to use gdb or other frontends in Eclipse, but I just debugged my CGI program with gdb. I'd like to share something that the other answers didn't mention: CGIs usually need to read the request meta-variables defined in RFC 3875, §4.1, with getenv(3). Popular request variables in my mind are:
SCRIPT_NAME
QUERY_STRING
CONTENT_LENGTH
CONTENT_TYPE
REMOTE_ADDR
These variables are provided by HTTP servers such as Apache. When debugging with gdb, we need to set these values on our own with set environment. In my case, only a few variables were needed (and the source code is very old; it still uses SCRIPT_URL instead of SCRIPT_NAME), so here's my example:
gdb cgi_name
set environment SCRIPT_URL /path/to/sub/cgis
set environment QUERY_STRING p1=v1&p2=v2
break foo.c:42
run
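For completeness, a minimal sketch of how a CGI reads such a meta-variable with getenv(3), so you can see what the set environment commands above are feeding:
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const char *query = getenv("QUERY_STRING");  /* NULL when the variable is unset */

    printf("Content-Type: text/plain\n\n");
    printf("QUERY_STRING=%s\n", query ? query : "(unset)");
    return 0;
}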
For me, neither of the solutions presented above for debugging the CGI in gdb without a web server worked.
Maybe the second solution works for a GET request.
I needed a combination of both: first setting the environment variables from RFC 3875 (I am not sure whether all of them are really needed).
Then I was able to pass only the params (not the complete request) via STDIN from a file.
gdb cgi_name
set environment REQUEST_METHOD=POST
set environment CONTENT_LENGTH=1337
set environment CONTENT_TYPE=application/json
set environment SCRIPT_NAME=my.cgi
set environment REMOTE_ADDR=127.0.0.1
run < ./params.txt
With params.txt containing the POST body (note that CONTENT_LENGTH should match its actual size):
{"user":"admin","pass":"admin"}

Can't find "syslog.conf" in Linux kernel 2.6.37.6 created with BusyBox v1.19.3

I created a tiny OS for my controller with Linux kernel 2.6.37.6 with the help of BusyBox and a toolchain. I am writing a logging module (a C program) in it, and I want customized logs (custom paths for different logs), e.g. under /log/.
I have syslogd on my machine, and /etc/syslog.conf is supposed to be present, but it's not there. I created a new syslog.conf under /etc, but I still can't find my logs in the desired place.
However, if I run syslogd -O /log/Controller.log, all logs are redirected to that (specified) file. So I want to know where the configuration file for this syslogd is; I can't find it.
Is there any way I can write a logging module (program) without requiring syslog.conf, other than the traditional printf way? The problem is that, for customized log paths, we need to pass the facility LOG_LOCAL1 as an argument to openlog(), but it's not working.
I followed procedure from this examples http://www.codealias.info/technotes/syslog_simple_example
If you are using BusyBox's syslogd, there is no support for syslog.conf; all logs are written to /var/log/messages by default.
You can modify the syslogd code in BusyBox, which is located in busybox/sysklogd/syslogd.c, to get the behaviour you want.
For example, you can change the default log file like this:
static const struct init_globals init_data = {
    .logFile = {
        .path = "your desired path",
        .fd = -1,
    },
    /* ... remaining fields unchanged ... */
};
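Regarding the openlog() part of the question: with a full (non-BusyBox) syslogd that does read syslog.conf, routing a facility to a custom file would look roughly like this sketch. The local1.* rule is an assumed syslog.conf entry (classic sysklogd syntax), which BusyBox's syslogd will simply ignore:
/* Assumed /etc/syslog.conf entry on a full sysklogd:
 *   local1.*    /log/Controller.log
 */
#include <syslog.h>

int main(void)
{
    openlog("controller", LOG_PID, LOG_LOCAL1); /* LOG_LOCAL1 = facility used for routing */
    syslog(LOG_INFO, "controller started");
    closelog();
    return 0;
}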

C - Program fails to get file descriptor only when running with GDB

I'm not an expert C programmer. I'm having trouble debugging a program using GDB. (The bug I am trying to fix is unrelated to the problem I am asking about here.) My problem is that the program runs fine when I run the binary directly from a shell, but the program crashes when I run it using GDB.
Here is some information about the program which may be useful: it is a 20+ year old piece of database software, originally written for Solaris (I think) but since ported to Linux, which is setuid (but not to root, thank god).
The program crashes in GDB when trying to open a file for writing. Using GDB, I was able to determine that the crash occurs because the following system call fails:
fd = open(path, O_WRONLY|O_CREAT|O_TRUNC, 0644);
For clarification: path is the path to a lockfile which should not exist. If the lock file exists, then the program shuts down cleanly before it even reaches this system call.
I do not understand why this system call would fail, since 1) The user this program runs as has rwx permissions on the directory containing path (I have verified this by examining the value of the variable stored in path), and 2) the program successfully opens the file for writing when I am not using GDB to debug it.
Are there any reasons why I cannot open this file for writing when running under GDB?
The key turns out to be this bit:
... is setuid (but not to root, thank god).
When you run a program under (any) debugger (using any of the stop-and-inspect/modify program facilities), the kernel disables setuid-ness, even for non-root setuid.
If you think about this a bit it makes sense. Consider a game that keeps a "high scores" file, and uses "setuid games" to do this, with:
fd = open(GAME_SCORE_FILE, open_mode, file_mode);
score_data = read_scores(fd);
/* set breakpoint here or so */
if (check_for_new_high_score(current_score, score_data)) {
printf("congratulations, you've entered the High Scores records!\n");
save_scores(fd, score_data);
}
close(fd);
Access to the "high scores" file is protected by file permissions: only the "games" user can write to it.
If you run the game under a debugger, though, you can set a breakpoint at the marked line, and set the current_score data to some super-high value and then resume the program.
To avoid allowing debuggers to corrupt the internal data of setuid programs, the kernel simply disables setuid-ness when running code with debug facilities enabled. If you can su (or sudo or whatever) to the user, indicating that you have permission regardless of any debugging, you can then run gdb itself as that user, so that the program runs as the user it "would have" setuid-ed to.
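For example (hypothetical names, assuming the binary is setuid to the games user):
sudo -u games gdb ./game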
