LLDB: run a command from a script

I created a scripted step (step-scripted) that prints the name of each function the debugger calls.
Now I want to automate the part where I have to type:
thread step-scripted -C MyTrace.Trace
How can I run the above command from a script, so that I can do something like this:
script
while True:
    thread step-scripted -C MyTrace.Trace

First off, there's no reason that a step plan has to do just one step. If you want to step forever, just have the step plan do it: never set the plan to complete, and return False from should_stop. Even more conveniently, if you are using a recent lldb, you can pass arguments to your scripted step plan using the -k <key> -v <value> arguments. So you could also have your plan take a "count" input and step that many times.
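For instance, a step-forever plan might look like this (a minimal sketch along the lines of the scripted_step.py examples shipped with lldb; the class name matches the question's MyTrace.Trace, and printing function names is an assumption about what the tracer should do):

class Trace:
    def __init__(self, thread_plan, dict):
        self.thread_plan = thread_plan
        self.fn_name = None

    def explains_stop(self, event):
        # We requested the single steps, so we explain these stops.
        return True

    def should_stop(self, event):
        frame = self.thread_plan.GetThread().GetFrameAtIndex(0)
        name = frame.GetFunctionName()
        if name != self.fn_name:
            print(name)  # only report when the current function changes
            self.fn_name = name
        # Never call SetPlanComplete() and always return False:
        # the plan keeps stepping until the process exits.
        return False

    def should_step(self):
        # True = keep single-stepping.
        return True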
Otherwise, the easiest way to do this is to use the Python interface to implement a custom command that automates this step. SBThreads are the things you step. If you use the command form that takes an SBExecutionContext, described here:
https://lldb.llvm.org/use/python-reference.html#id6
you can get the thread from SBExecutionContext.thread, then use SBThread.StepUsingScriptedThreadPlan to call your thread plan to do the step. Once you are in Python, writing a loop to do this forever, or until some condition is met, should be easy. Your command could also take the number of times to step, etc.
Note, you can also run commands in the script interpreter using SBCommandInterpreter.HandleCommand if that seems easier to you.
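Putting this together, a custom command might look like the following (a hedged sketch: the command name trace-step and the file name trace_cmd.py are made up, and it assumes a variant of MyTrace.Trace that marks itself complete after a single step so the loop can run):

import lldb

def trace_step(debugger, command, exe_ctx, result, internal_dict):
    """Usage: trace-step [<count>]"""
    thread = exe_ctx.thread
    count = int(command) if command.strip() else 1
    for _ in range(count):
        err = thread.StepUsingScriptedThreadPlan("MyTrace.Trace")
        if err.Fail():
            result.SetError(err.GetCString())
            return

def __lldb_init_module(debugger, internal_dict):
    # Registered with: (lldb) command script import trace_cmd.py
    debugger.HandleCommand(
        "command script add -f trace_cmd.trace_step trace-step")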

Related

(LLDB on macOS Catalina) Shell Expansion Failed

When trying to use the r or run commands in lldb I get an error like this: error: shell expansion failed (reason: invalid JSON). consider launching with 'process launch'.
It works when I just use process launch but I really do not feel like doing that.
Is there any way I could make either an alias or make shell expansions not fail?
The way lldb does shell expansion is to run a little tool called lldb-argdumper (on macOS it is in Xcode.app/Contents/SharedFrameworks/LLDB.framework/Resources) with the command arguments that you passed. lldb-argdumper wraps the contents of argv as JSON and writes that to stdout. lldb then parses the JSON back into args and inserts the args one by one into the argc/argv array when it launches the process.
Something in the output is not getting properly wrapped. You can probably see what it is by looking at the output of lldb-argdumper with your arguments. Whatever it is, it's a bug, so if you can reproduce it, please file a report with your example at http://bugs.llvm.org.
(lldb) command alias run-no-shell process launch -X 0 --
will produce an alias that doesn't do shell expansion. You can also put this in your ~/.lldbinit.
I ran into this recently. TL;DR: make sure your shell does not echo anything during initialization. Run <your-shell> -c date to confirm; only the date should be printed.
The problem was that my shell's initialization file was echoing some stuff, which was getting prepended to lldb-argdumper's JSON output. (lldb doesn't run lldb-argdumper directly; it invokes your default shell to run lldb-argdumper.)
Specifically, I use fish as my shell, which does not have separate initialization paths for interactive and non-interactive sessions. (See this issue for discussion of whether this is good.) bash and zsh have separate init files for interactive/non-interactive sessions, which makes avoiding this problem slightly easier.

Control output from makefile

I'm trying to write a makefile to replace one of the scripts used in building a fairly large application.
The current script compiles one file at a time, and the primary reason for using make is to parallelise the build process. Using make -j 16 I currently get a factor of 4 speedup on our office server.
But what I've lost is some readability of the output. The compilation program for a file bundles up a few bits and pieces of work, including running custom pre-compilers and running the gcc command. Each of these steps outputs some information, and I would prefer make to buffer the output from each command and then show the whole lot in one go.
Is it possible to make make do this?
If you upgrade to GNU make 4.0 or later, you can use the built-in output synchronization feature (the -O / --output-sync option) to get what you want.
If you don't want to upgrade, then you'll have to modify each of your recipes to be wrapped with a small program that manages the output. Or you can set the SHELL variable to something that does it for you. Searching the internet should give you some examples.
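For illustration, here is a minimal sketch of such a wrapper in Python (buffered.py is a hypothetical name; GNU make invokes $(SHELL) with -c and the recipe line, so mark the script executable and set SHELL := ./buffered.py in the makefile):

#!/usr/bin/env python3
# buffered.py: run the recipe under a real shell, capture everything it
# prints, then emit it in one chunk so parallel jobs don't interleave.
import subprocess
import sys

def main():
    # make invokes us as: ./buffered.py -c 'recipe line'
    proc = subprocess.run(["/bin/sh"] + sys.argv[1:],
                          stdout=subprocess.PIPE,
                          stderr=subprocess.STDOUT)
    # A single write keeps the chunk contiguous for typical output sizes.
    sys.stdout.buffer.write(proc.stdout)
    sys.stdout.flush()
    sys.exit(proc.returncode)

if __name__ == "__main__":
    main()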
A simple way to accomplish this is to send all the log output to a log directory, with each file named, say:
log_file_20131104_12013478_b.txt // log_file_<date>_<time>_<sequence letter>.txt
and then simply cat them all together as your last make job in the dependency chain:
cat log_dir/log_file_20131104_12013478_*.txt > log_file_20131104_12013478.txt
With makepp this is the default behaviour as soon as you use -j. All the individual outputs (and "entering directory" messages) get collected and are output together as soon as each command terminates.

Hooks on terminal. Can I call a method before a command is run in the terminal?

I want to make a terminal app that stores information about files/directories, and I need a way to keep that information when a file is moved or renamed.
What I thought I could do is have a function execute before any command is run. I found this:
http://www.twistedmatrix.com/users/glyph/preexec.bash.txt
But I was wondering if this would be a good way to go about it. Or should I do something else?
I suppose I would like to call that function from a C program whenever mv is entered.
If what you're trying to do is attach some sort of metadata to files, there's a much better-supported way to do that: extended attributes.
Another solution might be to use the file's inode number as an index into a database you maintain yourself.
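A rough sketch of both ideas in Python (Linux-specific; the path and attribute name are made up):

import os

path = "tracked_file"  # hypothetical file

# Extended attributes: the user.* namespace is writable by the file's owner.
os.setxattr(path, "user.myapp.note", b"some metadata")
print(os.getxattr(path, "user.myapp.note"))

# The inode number survives renames and moves within one filesystem, so it
# can serve as a stable key into a database you maintain yourself.
st = os.stat(path)
print((st.st_dev, st.st_ino))  # device + inode identify the file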
Can you alias the mv command in .profile or .bashrc?
alias mv=/usr/local/bin/mymv
where mymv is a compiled executable that runs your C code function and calls /usr/bin/mv.
precmd and preexec add some overhead to every bash script that gets run, even if the script never calls mv. The downside to an alias is that it requires new code in /usr/local, and if scripts or users invoke /usr/bin/mv instead of mv it will not do what you want. Generally, wanting something like this often means there is a better way to handle the problem with some kind of service (daemon) or driver. Plus, what happens if your C code cannot correctly handle interesting input like:
mv somefile /dev/null
If you want to run a command each time after some command is executed in the terminal, just put the following in ~/.bashrc:
PROMPT_COMMAND="your_command;$PROMPT_COMMAND"
If you want your command to be executed each time before mv runs, put the following in ~/.bashrc:
alias mv="your_script"
Make sure that your script will execute real mv if needed.
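A minimal sketch of such a wrapper in Python (the log path is made up; the important part is handing the untouched arguments to the real mv at the end):

#!/usr/bin/env python3
import subprocess
import sys

# Record the rename somewhere your app can find it.
with open("/tmp/mv_log.txt", "a") as log:
    log.write(" ".join(sys.argv[1:]) + "\n")

# Always finish by running the real mv with the original arguments.
sys.exit(subprocess.call(["/bin/mv"] + sys.argv[1:]))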
You can use the inotify library to track filesystem changes. It's a good solution, but once the user removes a file, it's already gone.
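For example, a sketch using the third-party inotify_simple binding (an assumption; any other inotify wrapper exposes the same kernel events):

from inotify_simple import INotify, flags

inotify = INotify()
inotify.add_watch("/some/dir", flags.MOVED_FROM | flags.MOVED_TO)

# A rename inside the watched directory produces a MOVED_FROM/MOVED_TO
# pair sharing the same cookie, which lets you match old and new names.
for event in inotify.read():
    print(event.name, event.cookie)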
You might be able to make use of the DEBUG trap in Bash.
From man bash:
If a sigspec is DEBUG, the command arg is executed before every
simple command, for command, case command, select command, every
arithmetic for command, and before the first command executes in
a shell function
I found this article when I was forced to work in tcsh and wanted to ensure a specific environment variable was present when the user ran a program from a certain folder (without setting that variable globally).
tcsh can do this.
tcsh has special aliases, one of which is precmd.
This can be used to run a script just before the shell prompt is printed.
e.g. I used alias precmd 'bash $HOME/.local/bin/on_cd.sh'
This might be one of the very few useful features in csh.
It is a shame, but I don't think the same or a similar feature exists in bash or other sh derivatives (ash, dash, etc.). Related answer.

create process independent of bash

I have written a program which calculates the battery level available in my laptop. I have also defined a threshold value in the program; whenever the battery level falls below the threshold, I would like to call another process. I have used system("./invoke.o"), where invoke.o is the program that has to run. I am running a script which runs the battery-level checker program every 5 seconds. Everything is working fine, but when I close the bash shell, the automatic invocation of invoke.o stops happening. How can I make invoke.o be invoked irrespective of whether bash is closed or not? I am using Ubuntu Linux.
Try running it as: nohup ./myscript.sh, where the nohup command allows you to close the shell without terminating the process.
You could run your script as a cron job. This lets cron set up standard input and output for you, reschedule the job, and it will send you email if it fails.
The alternative is to run a script in the background with all input and output, including standard error output, redirected.
While you could make a proper daemon out of your program that kind of effort is probably not necessary.
man nohup
man upstart
man 2 setsid (more complex, leads to longer trail of breadcrumbs on daemon launching).
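For reference, the classic setsid dance looks like this (a Python sketch of what a C daemon does with fork() and setsid(); see the man pages above):

import os
import sys

def daemonize():
    if os.fork() > 0:
        sys.exit(0)  # parent returns to the shell immediately
    os.setsid()      # new session: no controlling terminal
    if os.fork() > 0:
        sys.exit(0)  # make sure we can never reacquire a terminal
    devnull = os.open(os.devnull, os.O_RDWR)
    for fd in (0, 1, 2):  # detach stdin/stdout/stderr
        os.dup2(devnull, fd)

daemonize()
# ... check the battery level every 5 seconds and launch the helper ...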

Using a Single system() Call to Execute Multiple Commands in C

In an information security lab I'm working on, I've been tasked with executing multiple commands with a single call to "system()" (written in C, running on Fedora). What is the syntax that will allow me to execute more than command through system()? (The idea being you could execute arbitrary commands through a program running on a remote computer, if the program interacts with the OS through the system() call.)
I.e.:
char command[] = "????? \r\n";
system(command);
That depends on the shell being invoked to execute the commands, but in general most shells use ; to separate commands so something like this should work:
command1; command2; command3
[EDIT]
As #dicroce mentioned, you can use && instead of ; which will stop execution at the first command that returns a non-zero value. This may or may not be desired (and some commands may return non-zero on success) but if you are trying to handle commands that can fail you should probably not string multiple commands together in a system() call as you don't have any way of determining where the failure occured. In this case your best bet would either be to execute one command at a time or create a shell script that performs the appropriate error handling and call that instead.
Use && between your commands. It has the advantage that it only continues executing commands as long as they return a successful (zero) exit status. Example:
"cd /proc && cat cpuinfo"
One possibility comes immediately to mind: you could write all the commands to a script, then run it with:
system ("cmd.exe /c \"x.cmd\"");
or, now that I've noticed you're running on Fedora:
system ("x.sh");
