LLDB — evaluate and continue

Xcode has functionality to set a breakpoint, run an lldb command, and "Automatically continue after evaluating".
How do I set up the same functionality via --source? I found a --command quote in the manual, but there are no examples and no reference in the sub-command help.
By default, the breakpoint command add command takes lldb command line commands. You can also specify this explicitly by passing the "--command" option.
Syntax: command <sub-command> [<sub-command-options>] <breakpoint-id>

I'm not entirely clear what you are asking.
But if you want to put commands in a text file somewhere that will set a breakpoint and add commands to it, you want something like:
> cat /tmp/cmds.lldb
break set -F main
break command add
frame var
continue
DONE
> lldb -s /tmp/cmds.lldb myBinary
Or if you want to do this in Xcode, just use:
(lldb) command source /tmp/cmds.lldb
once you are in the Xcode debugging session.
This relies on one trick: the "breakpoint command add" command operates on the most recently set breakpoint, which is why I didn't have to specify the breakpoint number.
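If you want to be explicit instead of relying on that trick, you can give the breakpoint ID at the end of the command, e.g.:
break command add 1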

I think you are asking about auto-continue with lldb.
I used the breakpoint modify command to add auto-continue:
(lldb) b CCCryptorCreate
Breakpoint 1: where = libcommonCrypto.dylib`CCCryptorCreate, address = 0x000000011047e1b7
(lldb) breakpoint modify --auto-continue true 1
(lldb) br list
Current breakpoints:
1: name = 'CCCryptorCreate', locations = 1, resolved = 1, hit count = 0 Options: enabled auto-continue
1.1: where = libcommonCrypto.dylib`CCCryptorCreate, address = 0x000000011047e1b7, resolved, hit count = 0
Then, to add some commands, I used:
(lldb) breakpoint command add -s python 1
Enter your Python command(s). Type 'DONE' to end.
print "Hit this breakpoint!"
DONE
The help has some good examples: (lldb) help breakpoint command add

help breakpoint command add reveals the option is called --one-liner; --command must be a typo?
-o <one-line-command> ( --one-liner <one-line-command> )
Specify a one-line breakpoint command inline.
The question still stands: how do I automatically continue when --source is used?
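Putting the two answers above together, a command file for --source that sets the breakpoint, marks it auto-continue, and attaches commands might look like this (a sketch only; the function name main and the binary name are placeholders, and it assumes this is the first breakpoint in the session, so its ID is 1):
> cat /tmp/cmds.lldb
break set -F main
break modify --auto-continue true 1
break command add
frame var
DONE
> lldb -s /tmp/cmds.lldb myBinary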

Related

How do I run a program under GDB with an environment variable set to the contents of a file?

How do I run gdb with environment variables set such as in the example below?
gdb (env -i SHELLCODE="`cat ~/shellcode.bin`" ./vulnerable)
In general, you can use the set environment command before starting the program you want to debug:
set environment MYVAR abc
However, from your question it looks like you want to get the content of the environment variable from a file, and there's no way to do that from the GDB shell. You can, however, just start GDB with the variable already set, and it will be kept when starting the program to debug. You can verify this with the show environment command.
$ MYVAR="$(cat x.txt)" gdb ./vulnerable
or:
$ export MYVAR="$(cat x.txt)"
$ gdb ./vulnerable
or even using env:
$ env -i MYVAR="$(cat x.txt)" gdb ./vulnerable
Then:
(gdb) show environment MYVAR
MYVAR=...
(gdb) run
You might want to check that your shellcode does not contain \x00 bytes though, as that can cause some problems (not 100% sure since I didn't test it).

shell script "Thread: command not found"

So I am trying to write my first shell script that can automatically run some C code for me. I read some materials online and here is my short shell script:
#!/bin/sh
# script for grading assignment 3
echo -n "Enter the student's index >"
read index
echo "You entered: $index"
#### Functions
function question_one
{
gcc -pthread -o $index.1 $index.1.c
taskset -c 1 ./$index.1 5 5
}
#### Main
$(question_one)
As you can see, the shell script is quite simple and what it does is also quite easy to understand: first compile a C source file named like 1.1.c, 2.1.c or 3.1.c, and then run the output file on just one single CPU.
When I run this script, it looks like it can successfully compile the file but is unable to run the output file correctly. The error message is "assignment_three_grading: line 18: Thread: command not found". However, if I type in the commands manually by myself, everything is fine.
$(question_one)
Change this to simply:
question_one
To invoke a function you just name it as if it were a regular command. Using $(...) captures its output and tries to execute that output as another command name: definitely not what you want here.
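Putting that together, a corrected version of the script (a sketch keeping the original naming and taskset invocation; printf is used instead of echo -n, which is not portable under /bin/sh, and the POSIX function syntax replaces the bash-only "function" keyword) would be:
#!/bin/sh
# script for grading assignment 3
printf "Enter the student's index > "
read index
echo "You entered: $index"
#### Functions
question_one() {
    # quote $index in case it contains unexpected characters
    gcc -pthread -o "$index.1" "$index.1.c"
    taskset -c 1 "./$index.1" 5 5
}
#### Main
question_one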

Redirect lldb output to file

I'm using lldb inside Xcode, and one of my variables contains a huge chunk of JSON data. Using po myVar isn't much help for analysing this data, as it will output in the tiny Xcode debug console.
Is there a way to redirect lldb output to a file?
I saw here that such a feature seems to be available in gdb as:
(gdb) set logging on
(gdb) set logging file /tmp/mem.txt
(gdb) x/512bx 0xbffff3c0
(gdb) set logging off
and is "translated" in lldb as :
(lldb) memory read --outfile /tmp/mem.txt --count 512 0xbffff3c0
(lldb) me r -o/tmp/mem.txt -c512 0xbffff3c0
(lldb) x/512bx -o/tmp/mem.txt 0xbffff3c0
However, the memory read command will not help in my case, and apparently, --outfile is not available for the print command.
You can use a Python script to do so (and much more), as explained here:
LLDB Python scripting in Xcode
Create a file named po.py in a directory of your choice (for example "~/.lldb"):
import lldb

def print_to_file(debugger, command, result, dict):
    # Change the output file to a path/name of your choice
    f = open("/Users/user/temp.txt", "w")
    debugger.SetOutputFileHandle(f, True)
    # Change command to the command you want the output of
    command = "po self"
    debugger.HandleCommand(command)

def __lldb_init_module(debugger, dict):
    debugger.HandleCommand('command script add -f po.print_to_file print_to_file')
Then in lldb write:
command script import ~/.lldb/po.py
print_to_file
I found session save <filename> to be a much better, easier option than those listed here. It's not quite the same, as (to my knowledge) you can't use it selectively, but for generating logs it's quite handy.
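For example (session save is only available in more recent lldb versions; the path is just an example):
(lldb) session save /tmp/lldb-session.txt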
Here is a slight modification incorporating some of the comments from above:
import lldb
import sys

def toFile(debugger, command, result, dict):
    f = open("/Users/user/temp.txt", "w")
    debugger.SetOutputFileHandle(f, True)
    debugger.HandleCommand(command)
    f.close()
    debugger.SetOutputFileHandle(sys.stdout, True)
This allows the command to be supplied as an argument, and reverts the output file handle to stdout after the command is run.
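As with print_to_file above, the function still has to be registered before it can be called from the lldb prompt; a usage sketch (assuming toFile lives in the same ~/.lldb/po.py module):
(lldb) command script import ~/.lldb/po.py
(lldb) command script add -f po.toFile toFile
(lldb) toFile po myVar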
Assuming that you have a variable named jsonData (which has a Data type) you can save it to a file with this command:
expr jsonData.write(to: URL(fileURLWithPath: "/tmp/datadump.bin"))
Alternatively, instead of the above command, you could dump the memory used by this variable to a file, as in the example below:
(lldb) po jsonData
▿ Optional<Data>
▿ some : 32547 bytes
- count : 32547
▿ pointer : 0x00007fe8b69bb410
- pointerValue : 140637472797712
(lldb) memory read --force --binary --outfile /tmp/datadump.bin --count 32547 0x00007fe8b69bb410
32547 bytes written to '/tmp/datadump.bin'

Application accepts command line argument of the form : *argument but not of the form argument* or *argument*

For example, if my program name is test.c, then for the following run command argc is 2 instead of 4:
$ test abc pqr* *xyz*
Try to run:
$ echo abc pqr* *xyz*
and you will understand why you don't get the argc value you were expecting
It is probably because your shell / cmd.exe (no specifics are given!) uses the * as a file glob. If there are no files found that match the glob, the result will be empty.
Try calling your program like this:
test abc 'pqr*' '*xyz*'
Refer to http://en.wikipedia.org/wiki/Glob_%28programming%29 for details about globbing, and to your shell's manual for details about escaping globs.
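A minimal test program (a hypothetical args.c, not from the question) makes the difference easy to see; with the arguments quoted, the patterns reach the program untouched:
/* args.c - print argc and each argument the shell passed in */
#include <stdio.h>

int main(int argc, char *argv[])
{
    printf("argc = %d\n", argc);
    for (int i = 0; i < argc; i++)
        printf("argv[%d] = %s\n", i, argv[i]);
    return 0;
}
Compile and run it with quoted arguments:
$ gcc -o args args.c
$ ./args abc 'pqr*' '*xyz*'
argc = 4
argv[0] = ./args
argv[1] = abc
argv[2] = pqr*
argv[3] = *xyz*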

How to save all console output to file in R?

I want to redirect all console text to a file. Here is what I tried:
> sink("test.log", type=c("output", "message"))
> a <- "a"
> a
> How come I do not see this in log
Error: unexpected symbol in "How come"
Here is what I got in test.log:
[1] "a"
Here is what I want in test.log:
> a <- "a"
> a
[1] "a"
> How come I do not see this in log
Error: unexpected symbol in "How come"
What am I doing wrong? Thanks!
You have to sink "output" and "message" separately (the sink function only looks at the first element of type).
Now if you want the input to be logged too, put it in a script:
script.R
1:5 + 1:3 # prints and gives a warning
stop("foo") # an error
And at the prompt:
con <- file("test.log")
sink(con, append=TRUE)
sink(con, append=TRUE, type="message")
# This will echo all input and not truncate 150+ character lines...
source("script.R", echo=TRUE, max.deparse.length=10000)
# Restore output to console
sink()
sink(type="message")
# And look at the log...
cat(readLines("test.log"), sep="\n")
If you have access to a command line, you might prefer running your script from the command line with R CMD BATCH.
== begin contents of script.R ==
a <- "a"
a
How come I do not see this in log
== end contents of script.R ==
At the command prompt ("$" in many un*x variants, "C:>" in Windows), run:
$ R CMD BATCH script.R &
The trailing "&" is optional and runs the command in the background.
The default name of the log file has "out" appended to the extension, i.e., script.Rout
== begin contents of script.Rout ==
R version 3.1.0 (2014-04-10) -- "Spring Dance"
Copyright (C) 2014 The R Foundation for Statistical Computing
Platform: i686-pc-linux-gnu (32-bit)
R is free software and comes with ABSOLUTELY NO WARRANTY.
You are welcome to redistribute it under certain conditions.
Type 'license()' or 'licence()' for distribution details.
Natural language support but running in an English locale
R is a collaborative project with many contributors.
Type 'contributors()' for more information and
'citation()' on how to cite R or R packages in publications.
Type 'demo()' for some demos, 'help()' for on-line help, or
'help.start()' for an HTML browser interface to help.
Type 'q()' to quit R.
[Previously saved workspace restored]
> a <- "a"
> a
[1] "a"
> How come I do not see this in log
Error: unexpected symbol in "How come"
Execution halted
== end contents of script.Rout ==
If you are able to use the bash shell, you can consider simply running the R code from within a bash script and piping the stdout and stderr streams to a file. Here is an example using a heredoc:
File: test.sh
#!/bin/bash
# this is a bash script
echo "Hello World, this is bash"
test1=$(echo "This is a test")
echo "Here is some R code:"
Rscript --slave --no-save --no-restore - "$test1" <<EOF
## R code
cat("\nHello World, this is R\n")
args <- commandArgs(TRUE)
bash_message<-args[1]
cat("\nThis is a message from bash:\n")
cat("\n",paste0(bash_message),"\n")
EOF
# end of script
Then when you run the script with both stderr and stdout piped to a log file:
$ chmod +x test.sh
$ ./test.sh
$ ./test.sh &>test.log
$ cat test.log
Hello World, this is bash
Here is some R code:
Hello World, this is R
This is a message from bash:
This is a test
Other things to look at for this would be to try simply piping the stdout and stderr right from the R heredoc into a log file; I haven't tried this yet but it will probably work too.
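For instance (untested, as noted above), the redirection can be attached directly to the Rscript invocation inside test.sh so only the R output lands in the log:
Rscript --slave --no-save --no-restore - "$test1" >>test.log 2>&1 <<EOF
## R code as before
EOF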
You can't. At most you can save output with sink and input with savehistory separately. Or use an external tool like script, screen or tmux.
Run R in Emacs with ESS (Emacs Speaks Statistics) R mode. I have one window open with my script and R code; another has R running. Code is sent from the syntax window and evaluated. Commands, output, errors, and warnings all appear in the running R session window. At the end of a work period, I save all the output to a file. My own naming system is *.R for scripts and *.Rout for saved output files.
You can print to a file and, at the same time, see progress (with or without screen) while running an R script.
When not using screen, use R CMD BATCH yourscript.R & and then step 4 below.
When using screen:
1. In a terminal, start screen:
screen
2. Run your R script:
R CMD BATCH yourscript.R
3. Go to another screen by pressing Ctrl-A, then c.
4. Look at your output in real time with:
tail -f yourscript.Rout
Switch among screens with Ctrl-A, then n.
To save text from the console: run the analysis and then choose (Windows) "File>Save to File".
Set your Rgui preferences for a large number of lines, then timestamp and save as file at suitable intervals.
If you want to get error messages saved in a file:
zz <- file("Errors.txt", open="wt")
sink(zz, type="message")
the output will be:
Error in print(errr) : object 'errr' not found
Execution halted
This output will be saved in a file named Errors.txt
In case you want the values printed to the console to go to a file as well, you can use the 'split' argument:
zz <- file("console.txt", open="wt")
sink(zz, split=TRUE)
print("cool")
print(errr)
The output will be:
[1] "cool"
in the console.txt file. So all your console output will be printed in a file named console.txt.
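Combining the two snippets, a minimal sketch that captures regular output and messages in separate files and then restores normal behaviour might look like this:
# divert regular output (echoed to the console via split) and messages
out <- file("console.txt", open = "wt")
msg <- file("Errors.txt", open = "wt")
sink(out, split = TRUE)
sink(msg, type = "message")

print("cool")                  # captured in console.txt
warning("something happened")  # captured in Errors.txt

# restore the console and close the files
sink(type = "message")
sink()
close(msg)
close(out)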
This may not work for your needs, but one solution might be to run your code from within an Rmarkdown file. You could write both the code and console output to HTML/PDF/Word.
