I'm working on a patch for FFmpeg and need to debug my code. I'm loading an external library, and in order to test different library versions, I have them in different folders. To select which one I want to use, I've been using DYLD_LIBRARY_PATH=/path/to/lib/dir ./ffmpeg and that works okay. But when I try it within lldb, it crashes saying dyld: Library not loaded and Reason: image not found. This used to work pre-Xcode 7.1, but I just recently upgraded and it stopped working.
Here's my MVCE:
#include <stdio.h>
#include <stdlib.h>

int main() {
    char* str = getenv("DYLD_LIBRARY_PATH");
    if (str) puts(str);
    else puts("(null)");
    return 0;
}
Running this program as follows produces the output:
$ ./a.out
(null)
$ DYLD_LIBRARY_PATH=/tmp ./a.out
/tmp
That looks okay. But when I try to use lldb it fails:
$ DYLD_LIBRARY_PATH=/tmp lldb ./a.out
(lldb) target create "./a.out"
Current executable set to './a.out' (x86_64).
(lldb) run
Process 54255 launched: './a.out' (x86_64)
(null)
Process 54255 exited with status = 0 (0x00000000)
Trying to set the environment variable inside lldb works:
lldb ./a.out
(lldb) target create "./a.out"
Current executable set to './a.out' (x86_64).
(lldb) env DYLD_LIBRARY_PATH=/tmp
(lldb) run
Process 54331 launched: './a.out' (x86_64)
/tmp
Process 54331 exited with status = 0 (0x00000000)
lldb version (it's from Xcode 7.1):
$ lldb --version
lldb-340.4.110
Question: Is this an intended new "feature," or is this a new bug in lldb (or am I totally crazy and this never used to work)? I'm quite positive lldb used to forward the DYLD_LIBRARY_PATH environment variable, so how come it isn't anymore?
Edit: This is on OS X 10.11.1.
If this is on El Capitan (OS X 10.11), then it's almost certainly a side effect of System Integrity Protection. From the System Integrity Protection Guide: Runtime Protections article:
When a process is started, the kernel checks to see whether the main
executable is protected on disk or is signed with a special system
entitlement. If either is true, then a flag is set to denote that it
is protected against modification. …
… Any dynamic linker (dyld)
environment variables, such as DYLD_LIBRARY_PATH, are purged when
launching protected processes.
Everything in /usr/bin is protected in this fashion. Therefore, when you invoke /usr/bin/lldb, all DYLD_* environment variables are purged.
It should work to run lldb from within Xcode.app or the Command Line Tools, like so:
DYLD_LIBRARY_PATH=whatever /Applications/Xcode.app/Contents/Developer/usr/bin/lldb <whatever else>
I don't believe that copy of lldb is protected. /usr/bin/lldb is actually just a trampoline that executes the version in Xcode or the Command Line Tools, so you're ultimately running the same thing. But /usr/bin/lldb is protected so the DYLD_* environment variables are purged when running that.
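A quick way to see the purge in action (assuming /usr/bin/printenv exists and is SIP-protected on your system, which it normally is on 10.11):
$ DYLD_LIBRARY_PATH=/tmp /usr/bin/printenv DYLD_LIBRARY_PATH
$ DYLD_LIBRARY_PATH=/tmp ./a.out
/tmp
The first command prints nothing because printenv is a protected binary, while your own (unprotected) a.out from the MVCE above still sees the variable.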
Otherwise, you will have to set the environment variable inside lldb as shown in this thread:
(lldb) process launch --environment DYLD_LIBRARY_PATH=<mydylibpath> -- arg1 arg2 arg3
or using the short -v option:
(lldb) process launch -v DYLD_LIBRARY_PATH=<mydylibpath> -- arg1 arg2 arg3
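If you'd rather not retype that for every launch, lldb also has a persistent target setting for environment variables; the following should work, but double-check the syntax on your lldb version:
(lldb) settings set target.env-vars DYLD_LIBRARY_PATH=<mydylibpath>
(lldb) run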
Or, you can disable System Integrity Protection, although it serves a good purpose.
You can set an environment variable inside lldb once it is started.
$ lldb
(lldb) help env
Shorthand for viewing and setting environment variables. Expects 'raw' input (see 'help raw-input'.)
Syntax:
_regexp-env // Show environment
_regexp-env <name>=<value> // Set an environment variable
'env' is an abbreviation for '_regexp-env'
(lldb)
So that
lldb ./a.out
(lldb) env DYLD_LIBRARY_PATH=/path/to/lib/dir
(lldb) r
should work.
Tested with
$ lldb --version
lldb-1400.0.38.17
Apple Swift version 5.7.2 (swiftlang-5.7.2.135.5 clang-1400.0.29.51)
Related
I am trying to debug my C code on CodeBlocks with GDB Debugger.
Unfortunately it does not work; I've tried almost everything, including uninstalling and reinstalling CodeBlocks (debugger included).
Below I'll put the debugger log so you can see better where the problem is:
Active debugger config: GDB/CDB debugger:Default
Building to ensure sources are up-to-date
Selecting target:
Release
Adding source dir: D:\20201105_es8_randomWalk\
Adding source dir: D:\20201105_es8_randomWalk\
Adding file: D:\20201105_es8_randomWalk\bin\Release\20201105_es8_randomWalk.exe
Changing directory to: D:/20201105_es8_randomWalk/.
Set variable: PATH=.;D:\programms\CodeBlocks\MinGW\bin;D:\programms\CodeBlocks\MinGW;C:\Windows\System32;C:\Windows;C:\Windows\System32\wbem;C:\Windows\System32\WindowsPowerShell\v1.0;C:\Windows\System32\OpenSSH;C:\Users\pnmat\AppData\Local\Microsoft\WindowsApps
Starting debugger: D:\programms\CodeBlocks\MinGW\bin\gdb.exe -nx -fullname -quiet -args D:/20201105_es8_randomWalk/bin/Release/20201105_es8_randomWalk.exe
done
Setting breakpoints
Reading symbols from D:/20201105_es8_randomWalk/bin/Release/20201105_es8_randomWalk.exe...(no debugging symbols found)...done.
Debugger name and version: GNU gdb (GDB) 8.1
No symbol table is loaded. Use the "file" command.
Temporary breakpoint 3 ("D:/20201105_es8_randomWalk/main.c:23") pending.
Child process PID: 8516 [Inferior 1 (process 8516) exited normally]
Debugger finished with status 0
It seems like it does not hit any breakpoint I've inserted in the code.
I'm trying to use lldb with openocd/jtag board but I'm in trouble.
I already use openocd with gdb to develop on L0 STMicroelectronics board and it works perfectly.
Now I want the same with lldb.
I do that on LLDB host side
$ lldb bin/token.elf
(lldb) target create "bin/token.elf"
Current executable set to 'bin/token.elf' (arm).
(lldb) platform select remote-gdb-server
Platform: remote-gdb-server
Connected: no
(lldb) platform connect connect://localhost:5557
Platform: remote-gdb-server
Hostname: (null)
Connected: yes
(lldb) target list
Current targets:
* target #0: /home/cme/Projects/Tacos/ledger/trunk/se/build/st31_bolos/bin/token.elf ( arch=arm-unknown-unknown, platform=host )
On the openocd/GDB server side I correctly see the "Info : accepting 'gdb' connection on tcp/5557"
But now I can't figure out how to continue:
(lldb) process launch
error: process launch failed: Child exec failed.
I also tried "process continue", but lldb complains there is no process
With gdb, the process is considered to be already running, and I use the reset/continue commands, never the 'run' command.
Does anybody know how to use lldb with openocd/jtag gdb-server?
Thanks for your help
C/M.
From what we researched, it is not possible to debug remote (bare-metal!) targets with lldb without writing extra code.
For basic functionality, lldb needs to recognize at least one thread context.
The same is true for gdb, but gdb has a sort of stub implemented that fakes an existing thread on the remote system. [1]
From a conversation on the lldb mailing list [2], the answer boils down to:
we have to write some (Python) code to get remote bare metal working with lldb.
[1] https://github.com/bminor/binutils-gdb/blob/28170b88cc8b40fdea2b065dafe6e1872a47ee4e/gdb/remote.c#L1808
[2] http://comments.gmane.org/gmane.comp.debugging.lldb.devel/3405
So far this works for me on my STM32G0B1 Nucleo board with pyOCD, but I also tested it with OpenOCD:
$ lldb --local-lldbinit Build/temp.elf
(lldb) target create "Build/temp.elf"
Current executable set to '/home/diego/Workspace/genesis/Build/temp.elf' (arm).
(lldb) gdb-remote 127.0.0.1:3333
Process 1 stopped
(lldb) target modules load --load .text 0x08000000
section '.text' loaded at 0x8000000
(lldb) process plugin packet monitor reset halt
packet: qRcmd,72657365742068616c74
response: 526573657474696e672074617267657420776974682068616c740a5375636365737366756c6c792068616c74656420646576696365206f6e2072657365740a
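To avoid retyping that sequence, one option (a sketch reusing exactly the commands shown above; connect.lldb is an assumed filename) is to put the commands in a file and pass it with lldb's -s/--source option, which runs them after the file given on the command line has been loaded:
$ cat connect.lldb
gdb-remote 127.0.0.1:3333
target modules load --load .text 0x08000000
process plugin packet monitor reset halt
$ lldb -s connect.lldb Build/temp.elf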
I'm trying to attach a program with gdb but it returns:
Attaching to process 29139
Could not attach to process. If your uid matches the uid of the target
process, check the setting of /proc/sys/kernel/yama/ptrace_scope, or try
again as the root user. For more details, see /etc/sysctl.d/10-ptrace.conf
ptrace: Operation not permitted.
gdb-debugger returns "Failed to attach to process, please check privileges and try again."
strace returns "attach: ptrace(PTRACE_ATTACH, ...): Operation not permitted"
I changed "kernel.yama.ptrace_scope" 1 to 0 and /proc/sys/kernel/yama/ptrace_scope 1 to 0 and tried set environment LD_PRELOAD=./ptrace.so with this:
#include <stdio.h>
int ptrace(int i, int j, int k, int l) {
printf(" ptrace(%i, %i, %i, %i), returning -1\n", i, j, k, l);
return 0;
}
But it still returns the same error. How can I attach it to debuggers?
If you are using Docker, you will probably need these options:
docker run --cap-add=SYS_PTRACE --security-opt seccomp=unconfined
If you are using Podman, you will probably need its --cap-add option too:
podman run --cap-add=SYS_PTRACE
This is due to kernel hardening in Linux; you can disable this behavior by echo 0 > /proc/sys/kernel/yama/ptrace_scope or by modifying it in /etc/sysctl.d/10-ptrace.conf
See also this article about it in Fedora 22 (with links to the documentation) and this comment thread about Ubuntu and .
I would like to add that I needed --security-opt apparmor=unconfined along with the options that @wisbucky mentioned. This was on Ubuntu 18.04 (both Docker client and host). Therefore, the full invocation for enabling gdb debugging within a container is:
docker run --cap-add=SYS_PTRACE --security-opt seccomp=unconfined --security-opt apparmor=unconfined
Just want to emphasize a related answer. Let's say that you're root and you've done:
strace -p 700
and get:
strace: attach: ptrace(PTRACE_SEIZE, 700): Operation not permitted
Check:
grep TracerPid /proc/700/status
If you see something like TracerPid: 12, i.e. not 0, that's the PID of the program that is already using the ptrace system call. Both gdb and strace use it, and there can only be one active at a time.
Not really addressing the above use-case but I had this problem:
Problem: It happened that I started my program with sudo, so when launching gdb it was giving me ptrace: Operation not permitted.
Solution: sudo gdb ...
As most of us land here for Docker issues I'll add the Kubernetes answer as it might come in handy for someone...
You must add the SYS_PTRACE capability in your pod's security context
at spec.containers.securityContext:
securityContext:
  capabilities:
    add: [ "SYS_PTRACE" ]
There are 2 securityContext keys in 2 different places. If it tells you that the key is not recognized, then you misplaced it; try the other one.
You probably also need to run as the root user by default, so in the other security context (spec.securityContext) add:
securityContext:
  runAsUser: 0
  runAsGroup: 0
  fsGroup: 101
FYI: 0 is root. The fsGroup value is unknown to me; for what I'm doing I don't care, but you might.
Now you can do:
strace -s 100000 -e write=1 -e trace=write -p 16
You won't get the permission denied error anymore!
BEWARE: this opens Pandora's box. Having this in production is NOT recommended.
I was running my code with higher privileges to deal with raw Ethernet sockets, granted via the setcap command on a Debian distribution. I tried the solution above, echo 0 > /proc/sys/kernel/yama/ptrace_scope,
or modifying it in /etc/sysctl.d/10-ptrace.conf, but that did not work for me.
Additionally, I tried setting the capability on gdb itself in its installed location (/usr/bin/gdb), and that works: /sbin/setcap CAP_SYS_PTRACE=+eip /usr/bin/gdb.
Be sure to run this command with root privileges.
Jesup's answer is correct; it is due to Linux kernel hardening. In my case, I am using Docker Community for Mac, and in order to change the flag I must enter the LinuxKit shell using Justin Cormack's nsenter (ref: https://www.bretfisher.com/docker-for-mac-commands-for-getting-into-local-docker-vm/).
docker run -it --rm --privileged --pid=host justincormack/nsenter1
/ # cat /etc/issue
Welcome to LinuxKit
                        ##         .
                  ## ## ##        ==
               ## ## ## ## ##    ===
           /"""""""""""""""""\___/ ===
          {                       /  ===-
           \______ O           __/
             \    \         __/
              \____\_______/
/ # cat /proc/sys/kernel/yama/ptrace_scope
1
/ # echo 0 > /proc/sys/kernel/yama/ptrace_scope
/ # exit
Maybe someone has already attached to this process with gdb. Check with:
ps -ef | grep gdb
gdb can't attach to the same process twice.
I was going to answer this old question since it is unaccepted and the other answers miss the point. The real answer may already be written in /etc/sysctl.d/10-ptrace.conf, as is the case for me under Ubuntu. That file says:
For applications launching crash handlers that need PTRACE, exceptions can
be registered by the debugee by declaring in the segfault handler
specifically which process will be using PTRACE on the debugee:
prctl(PR_SET_PTRACER, debugger_pid, 0, 0, 0);
So just do what it says: keep /proc/sys/kernel/yama/ptrace_scope at 1 and add prctl(PR_SET_PTRACER, debugger_pid, 0, 0, 0); in the debuggee. Then the debuggee will allow that debugger to debug it. This works without sudo and without a reboot.
Usually, the debuggee also needs to call waitpid (or otherwise avoid exiting after the crash) so the debugger can find the debuggee's PID.
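For illustration, a minimal sketch of a debuggee doing this (the debugger_pid value is a placeholder; in practice you pass the PID of the gdb/strace process that will attach, or PR_SET_PTRACER_ANY to allow any process, which is less safe):
/* debuggee.c -- allow a specific debugger to attach while ptrace_scope stays 1 */
#include <stdio.h>
#include <unistd.h>
#include <sys/prctl.h>

int main(void) {
    pid_t debugger_pid = 12345;  /* placeholder: the PID of your debugger */
    if (prctl(PR_SET_PTRACER, debugger_pid, 0, 0, 0) == -1)
        perror("prctl(PR_SET_PTRACER)");
    printf("pid %d waiting to be attached...\n", (int)getpid());
    pause();  /* stay alive so the debugger can attach */
    return 0;
}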
If permissions are a problem, you probably will want to use gdbserver. (I almost always use gdbserver when I gdb, docker or no, for numerous reasons.) You will need gdbserver (Deb) or gdb-gdbserver (RH) installed in the docker image. Run the program in docker with
$ sudo gdbserver :34567 myprogram arguments
(pick a port number, 1025-65535). Then, in gdb on the host, say
(gdb) target remote 172.17.0.4:34567
where 172.17.0.4 is the IP address of the docker image as reported by /sbin/ip addr list run in the docker image. This will attach at a point before main runs. You can tb main and c to stop at main, or wherever you like. Run gdb under cgdb, emacs, vim, or even in some IDE, or plain. You can run gdb in your source or build tree, so it knows where everything is. (If it can't find your sources, use the dir command.) This is usually much better than running it in the docker image.
gdbserver relies on ptrace, so you will also need to do the other things suggested above. --privileged --pid=host sufficed for me.
If you deploy to other OSes or embedded targets, you can run gdbserver or a gdb stub there, and run gdb the same way, connecting across a real network or even via a serial port (/dev/ttyS0).
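Since this question is about attaching to an already-running process, note that gdbserver can also attach to an existing PID instead of launching a program; a sketch (port and PID are placeholders):
$ sudo gdbserver --attach :34567 29139
Then connect from the host with target remote exactly as above.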
I don't know what you are doing with LD_PRELOAD or your ptrace function.
Why don't you try attaching gdb to a very simple program? Make a program that simply repeatedly prints Hello or something and use gdb --pid [hello program PID] to attach to it.
If that does not work then you really do have a problem.
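For example, a minimal sketch of such a test program (hello.c is an assumed name):
/* hello.c -- prints its PID in a loop so you can practice attaching */
#include <stdio.h>
#include <unistd.h>

int main(void) {
    for (;;) {
        printf("Hello from pid %d\n", (int)getpid());
        fflush(stdout);
        sleep(1);
    }
    return 0;
}
Compile with gcc -g hello.c -o hello, run it in one terminal, and attach from another with gdb --pid <the PID it prints>.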
Another issue is the user ID. Is the program that you are tracing setting itself to another UID? If it is then you cannot ptrace it unless you are using the same user ID or are root.
I faced the same problem and tried a lot of solutions; finally I found one, but honestly I don't know what the problem was. First I modified the ptrace configuration and logged into Ubuntu as root, but the problem still appeared. The strangest thing is that gdb showed me a message saying:
Could not attach to process. If your uid matches the uid of the target process, check the setting of /proc/sys/kernel/yama/ptrace_scope, or try again as the root user.
For more details, see /etc/sysctl.d/10-ptrace.conf
warning: process 3767 is already traced by process 3755 ptrace: Operation not permitted.
The process 3755 was not listed by the ps command.
I found process 3755 under /proc/$pid, but I don't understand what it was!
Finally, I deleted the target file (foo.c) that I was trying to attach to, both via gdb and via a tracer C program using the PTRACE_ATTACH syscall, and in another folder I created another C program and compiled it.
The problem was solved and I was able to attach to the other process, either with gdb or with the ptrace_attach syscall.
(gdb) attach 4416
Attaching to process 4416
and I sent a lot of signals to process 4416. I tested it with both gdb and ptrace; both of them ran correctly.
I really don't know what the problem was, but I don't think it is a bug in Ubuntu, even though a lot of sites refer to it as one, such as https://askubuntu.com/questions/143561/why-wont-strace-gdb-attach-to-a-process-even-though-im-root
Extra information
If you want to make changes to the network interfaces, such as adding an OVS bridge, you must use --privileged instead of --cap-add NET_ADMIN.
sudo docker run -itd --name=testliz --privileged --cap-add=SYS_PTRACE --security-opt seccomp=unconfined ubuntu
If you are using FreeBSD, edit /etc/sysctl.conf, change the line
security.bsd.unprivileged_proc_debug=0
to
security.bsd.unprivileged_proc_debug=1
Then reboot.
I am on CentOS 6.4 32 bit and am trying to cause a buffer overflow in a program. Within GDB it works. Here is the output:
[root@localhost bufferoverflow]# gdb stack
GNU gdb (GDB) Red Hat Enterprise Linux (7.2-60.el6_4.1)
Copyright (C) 2010 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law. Type "show copying"
and "show warranty" for details.
This GDB was configured as "i686-redhat-linux-gnu".
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>...
Reading symbols from /root/bufferoverflow/stack...done.
(gdb) r
Starting program: /root/bufferoverflow/stack
process 6003 is executing new program: /bin/bash
Missing separate debuginfos, use: debuginfo-install glibc-2.12-1.107.el6_4.2.i686
sh-4.1#
However when I run the program stack just on its own it seg faults. Why might this be?
Exploit development can lead to serious headaches if you don't adequately account for factors that introduce non-determinism into the debugging process. In particular, the stack addresses in the debugger may not match the addresses during normal execution. This artifact occurs because the operating system loader places both environment variables and program arguments before the beginning of the stack.
Since your vulnerable program does not take any arguments, the environment variables are likely the culprit. Make sure they are the same in both invocations, in the shell and in the debugger. To this end, you can wrap your invocation in env:
env - /path/to/stack
And with the debugger:
env - gdb /path/to/stack
(gdb) show env
LINES=24
COLUMNS=80
In the above example, there are two environment variables set by gdb, which you can further disable:
unset env LINES
unset env COLUMNS
Now show env should return an empty list. At this point, you can start the debugging process to find the absolute stack address you envision to jump to (e.g., 0xbffffa8b), and hardcode it into your exploit.
One further subtle but important detail: there's a difference between calling ./stack and /path/to/stack: since argv[0] holds the program exactly how you invoked it, you need to ensure equal invocation strings. That's why I used /path/to/stack in the above examples and not just ./stack and gdb stack.
When learning to exploit memory safety vulnerabilities, I recommend using the wrapper program below, which does the heavy lifting and ensures equal stack offsets:
$ invoke stack # just call the executable
$ invoke -d stack # run the executable in GDB
Here is the script:
#!/bin/sh

while getopts "dte:h?" opt ; do
    case "$opt" in
        h|\?)
            printf "usage: %s -e KEY=VALUE prog [args...]\n" $(basename $0)
            exit 0
            ;;
        t)
            tty=1
            gdb=1
            ;;
        d)
            gdb=1
            ;;
        e)
            env=$OPTARG
            ;;
    esac
done

shift $(expr $OPTIND - 1)
prog=$(readlink -f $1)
shift

if [ -n "$gdb" ] ; then
    if [ -n "$tty" ]; then
        touch /tmp/gdb-debug-pty
        exec env - $env TERM=screen PWD=$PWD gdb -tty /tmp/gdb-debug-pty --args $prog "$@"
    else
        exec env - $env TERM=screen PWD=$PWD gdb --args $prog "$@"
    fi
else
    exec env - $env TERM=screen PWD=$PWD $prog "$@"
fi
Here is a straightforward way of running your program with identical stacks in the terminal and in gdb:
First, make sure your program is compiled without stack protection,
gcc -m32 -fno-stack-protector -z execstack -o shelltest shelltest.c -g
and ASLR is disabled:
echo 0 > /proc/sys/kernel/randomize_va_space
NOTE: default value on my machine was 2, note yours before changing this.
Then run your program like so (terminal and gdb respectively):
env -i PWD="/root/Documents/MSec" SHELL="/bin/bash" SHLVL=0 /root/Documents/MSec/shelltest
env -i PWD="/root/Documents/MSec" SHELL="/bin/bash" SHLVL=0 gdb /root/Documents/MSec/shelltest
Within gdb, make sure to unset LINES and COLUMNS.
Note: I got those environment variables by playing around with a test program.
Those two runs will give you identical pointers to the top of the stack, so no need for remote script shenanigans if you're trying to exploit a binary hosted remotely.
The address of the stack frame pointer when running the code under gdb is different from when running it normally. So you may corrupt the return address correctly in gdb mode, yet that same address may be wrong when running in normal mode. The main reason is that the environment variables differ between the two situations.
As this is just a demo, you can change the victim code and print the address of the buffer. Then change your return address to offset + address of buffer.
In reality, however, you need to guess the return address and add a NOP sled before your malicious code. You may have to guess multiple times to get a correct address, as your guess may be incorrect.
Hope this can help you.
The reason your buffer overflow works under gdb and segfaults otherwise is that gdb disables address space layout randomization. I believe this was turned on by default in gdb version 7.
You can check this by running this command:
show disable-randomization
And set it with
set disable-randomization on
or
set disable-randomization off
I tried the solution accepted here and it doesn't work (for me). I knew that gdb added environment variables, and that for this reason the stack addresses don't match, but even after removing those variables I couldn't get my exploit to work without gdb (I also tried the script posted in the accepted solution).
But searching in the web I found other script that work for me: https://github.com/hellman/fixenv/blob/master/r.sh
Its usage is basically the same as the script in the accepted solution:
r.sh gdb ./program [args] to run the program in gdb
r.sh ./program [args] to run the program without gdb
And this script works for me.
I am on CentOS 6.4 32 bit and am trying to cause a buffer overflow in a program... However when I run the program stack just on its own it seg faults.
You should also ensure FORTIFY_SOURCE is not affecting your results. The seg fault sounds like FORTIFY_SOURCE could be the issue because FORTIFY_SOURCE will insert "safer" function calls to guard against some types of buffer overflows. If the compiler can deduce destination buffer sizes, then the size is checked and abort() is called on a violation (i.e., your seg fault).
To turn off FORTIFY_SOURCE for testing, you should compile with -U_FORTIFY_SOURCE or -D_FORTIFY_SOURCE=0.
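For example, adapting the gcc invocation from another answer in this thread (the other flags are just carried over for the exploit exercise and are not required to disable FORTIFY_SOURCE):
gcc -m32 -fno-stack-protector -z execstack -U_FORTIFY_SOURCE -g -o stack stack.c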
One of the main things that gdb does that doesn't happen outside gdb is zeroing memory. More than likely, somewhere in the code you are not initializing your memory and it is getting garbage values. gdb automatically clears all memory that you allocate, hiding those types of errors.
For example: the following should work in gdb, but not outside it:
#include <stdlib.h>

int main(){
    int **temp = (int**)malloc(2*sizeof(int*)); // temp[0] and temp[1] are NULL in gdb, but not outside
    if (temp[0] != NULL){
        *temp[0] = 1; // segfault outside of gdb
    }
    return 0;
}
Try running your program under valgrind to see if it can detect this issue.
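For illustration, a sketch of the kind of fix this answer is hinting at: zero-initialize the allocation yourself (calloc or memset) so the pointers really are NULL both inside and outside gdb:
#include <stdlib.h>

int main(void) {
    int **temp = calloc(2, sizeof(int *)); /* zero-initialized: temp[0] and temp[1] are NULL everywhere */
    if (temp && temp[0] != NULL) {
        *temp[0] = 1;  /* never reached, since temp[0] is NULL */
    }
    free(temp);
    return 0;
}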
The approach that works best for me is to run the binary with setarch -R <binary>, which temporarily disables ASLR only for that binary, and then attach gdb to its process. This way the stack frame should be the same inside and outside gdb.
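A sketch of that workflow, assuming a Linux setarch that takes the architecture from uname -m, and using the stack binary from the question:
$ setarch $(uname -m) -R ./stack         # terminal 1: run with ASLR disabled for this process
$ gdb --pid $(pidof stack)               # terminal 2: attach to it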
I use a debugging script that runs several related processes in succession with the debugger. I'm currently using -x to execute several commands automatically (such as run). How can I make gdb quit automatically when the debugged process successfully terminates? Adding a quit command to the command file will cause that command to be handled not just on successful termination, but when errors occur also (when I'd rather take over at that point).
Here's an extract of what's going on:
+ gdb -return-child-result -x gdbbatch --args ./mkfs.cpfs /dev/loop0
GNU gdb (GDB) 7.1-ubuntu
Reading symbols from /home/matt/cpfs/mkfs.cpfs...done.
Program exited normally.
Breakpoint 2 at 0x805224f: file log.c, line 32.
(gdb)
Contents of gdbbatch:
start
b cpfs_log if level >= WARNING
I think I have found a complete solution to your question while looking for something similar in How to make gdb send an external notification on receiving a signal?. None of the other answers here seem to have mentioned or discovered gdb hooks.
Based on Matthew's tip about $_exitcode, this is now my app/.gdbinit, which achieves exactly the behavior wanted: a normal quit on successful termination, and dropping to the gdb prompt (send email, whatnot) on everything else:
set $_exitcode = -999
set height 0
handle SIGTERM nostop print pass
handle SIGPIPE nostop
define hook-stop
  if $_exitcode != -999
    quit
  else
    shell echo | mail -s "NOTICE: app has stopped on unhandled signal" root
  end
end
echo .gdbinit: running app\n
run
gdb sets $_exitcode when the program successfully terminates. You can make use of that - set it to an unlikely value at the start of your script, and only quit at the end if it has changed:
set $_exitcode = -999
# ...
run
# ...
if $_exitcode != -999
quit
end
(Setting $_exitcode to an unlikely value is a bit ugly, but it will otherwise not be defined at all if the program doesn't terminate, and there doesn't seem to be any obvious way of asking "is this variable defined?" in a conditional.)
GDB has a different "language" for interacting with automated programs, called GDB/MI (detailed here), but unfortunately it doesn't look like it supports conditionals; it is meant to be driven from another program that does the parsing and branching. So it looks like Expect is the easiest (or at least a working) solution:
$ cat gdbrunner
#!/usr/bin/expect -f
#spawn gdb -return-child-result --args ./mkfs.cpfs /dev/loop0
spawn gdb -return-child-result --args [lindex $argv 0]
#send "start\n"
#send "b cpfs_log if level >= WARNING"
send "run\n"
expect {
    normally\.          { send "quit\n" }
    "exited with code"  { interact -nobuffer }
}
I tested this with the simple programs:
$ cat prog1.c
int main(void) { return 0; }
$ cat prog2.c
int main(void) { return 1; }
With the following results:
$ ./gdbrunner ./prog1
spawn gdb -return-child-result --args ./prog1
run
(gdb) run
Starting program: /home/foo/prog1
Program exited normally.
(gdb) quit
$ ./gdbrunner ./prog2
spawn gdb -return-child-result --args ./prog2
run
(gdb) run
Starting program: /home/foo/prog2
Program exited with code 01.
(gdb)
Essentially, you have to parse the output and branch using something else. This would of course work with any other program capable of handling the input/output of another process, but the above expect script should get you started, if you don't mind Tcl. Ideally it would also wait for the first (gdb) prompt before sending run, but it works anyway due to stdin buffering.
You can also modify it to use that GDB/MI interface with the -i command-line argument to GDB; its commands and output are a bit more readily parsable, if you will expand to need more advanced features, as you can see in the previously linked documentation.