After I quit GDB, every user-defined function disappears. I'm sure there must be some way to make them available between sessions.
GDB reads the following init files on startup: ~/.config/gdb/gdbinit and ~/.gdbinit.
It is common practice to define user-defined functions in e.g. ~/.gdbinit with an external editor and run source ~/.gdbinit in a GDB session to reload that file. Once a function works as you expect, just leave it in your ~/.gdbinit and it will be available in all future GDB sessions.
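For example, a minimal ~/.gdbinit entry could look like this (the helper name and body are purely illustrative):

# illustrative helper: dump eight bytes at the given address as hex
define xxd8
  x/8xb $arg0
end

After editing, run source ~/.gdbinit in your current session; xxd8 $sp then prints the eight bytes at the stack pointer.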
Previously, I asked for and received advice on invoking the Clang Static Analyzer for cross-translation-unit analysis. But this is now a separate issue.
What I want to ask here is: do I need to include linker commands when using the newer CodeChecker?
CodeChecker dev here. Linker information is not used during the analysis. You can read about the CTU mode in this user guide: https://github.com/Ericsson/codechecker/blob/master/docs/analyzer/user_guide.md#cross-translation-unit-ctu-analysis-mode
The workflow with CTU mode is more or less the same as without it. The standard workflow is a CodeChecker log, then a CodeChecker analyze, then a CodeChecker parse or store to view the results; just add the --ctu flag to the analyze command.
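As a sketch, assuming a make-based build and ./reports as the report directory (adjust both to your project):

CodeChecker log --build "make" --output ./compile_commands.json
CodeChecker analyze ./compile_commands.json --ctu --output ./reports
CodeChecker parse ./reports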
I'm trying to execute a python function after target create so I can iterate over all modules of the target, download missing symbols from the internet (based on GetUUIDString) and then override the GetSymbolFileSpec() directory and path to what I downloaded.
Unfortunately, I cannot figure out how to get a function invoked at the right time: after the target is created and lldb.target.modules is populated (so that I can modify the symbols), but before the program has started executing.
Is there some documentation on how to hook into this? I am aware that there is, in theory, a way to fetch symbols on demand via a shell script; however, that is only implemented on macOS and not on other platforms.
You probably don't want to do this on target create, since there's no guarantee that a target will know all the libraries that will load into it before it actually runs. Plus, you probably also want to handle libraries that are dynamically loaded as the program runs. The real place to do this is on Module add (which is where the hook for DebugSymbols happens in lldb).
It looks like Linux and Windows don't have the notion of a call-out to some agent to pull in debug symbols. They do look in /usr/local/debug for pre-cached symbols, but there's no mechanism to have a call-out like with dsymForUUID.
If you're up for a little lldb hacking, it would be pretty straightforward to add such a callout. Just make a setting that takes the name of a program. That program would take a UUID as input and return the file name of the debug info as output. Then you could have lldb run it in the same place where lldb currently calls LocateMacOSXFilesUsingDebugSymbols (in LocateSymbols.cpp).
Perhaps a simpler way to do this would be to add a target stop-hook that calls some Python-based command you've written, which looks at the module list and fetches debug information for any new libraries that have shown up. If you want to use this for debugging running programs, you only care that the symbols get added before control returns to the user, so a stop hook would be an appropriate place to do this.
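A rough sketch of that idea, assuming a script file named fetchsyms.py; download_symbols() is a placeholder for your own fetch logic, and the "missing symbols" check is only a heuristic:

import lldb

def download_symbols(uuid):
    # placeholder: fetch debug info for this UUID and return a local path, or None
    return None

def fetch_missing_symbols(debugger, command, result, internal_dict):
    target = debugger.GetSelectedTarget()
    for module in target.module_iter():
        uuid = module.GetUUIDString()
        sym = module.GetSymbolFileSpec()
        # heuristic: no separate symbol file yet, so try to fetch one by UUID
        if not sym.IsValid() or sym.fullpath == module.GetFileSpec().fullpath:
            path = download_symbols(uuid)
            if path:
                debugger.HandleCommand('target symbols add "%s"' % path)

def __lldb_init_module(debugger, internal_dict):
    debugger.HandleCommand(
        'command script add -f fetchsyms.fetch_missing_symbols fetch_missing_symbols')

Load it with command script import fetchsyms.py, then after target create run target stop-hook add -o "fetch_missing_symbols" so the command fires every time the target stops.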
I have a custom board running Yocto (Jethro) and would like to run a single U-Boot command, preboot. Obviously, breaking into the boot sequence with the space bar and running it manually works, but how do I get it to run automatically? More specifically, where is the startup command sequence defined by default?
Edit: Also, I am aware I can edit the environment at runtime. However, I am trying to build this change into the image so I can distribute it.
When you are in the U-Boot environment, enter printenv; it will list the environment variables that U-Boot uses.
There is a variable named bootcmd. Currently, mine contains a bunch of if-else commands. Add your preferred command there so it runs at boot.
After it is finished and tested, use saveenv to store the edit.
Here is a syntax reference for U-Boot.
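For example, at the U-Boot prompt, keeping whatever your original bootcmd already does (preboot here is the command from the question):

setenv bootcmd 'run preboot; <original bootcmd contents here>'
saveenv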
Edit:
U-Boot allows you to store commands or command sequences in a plain text file. Using the mkimage tool, you can then convert this file into a script image, which can be executed using U-Boot's autoscr command. See U-Boot Scripting Capabilities.
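A sketch of that flow on the build host (the architecture flag and file names are just examples):

mkimage -A arm -T script -C none -n "boot script" -d boot.cmd boot.scr

Here boot.cmd is the plain text command file and boot.scr is the script image that U-Boot can load and execute.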
Typically, your U-Boot recipe will build U-Boot for a single machine; in that case, I'd normally just patch the compiled-in default U-Boot environment to do the right thing. This is achieved by
SRC_URI_machine += "file://mydefenv.patch"
Or (even better) use your own Git tree. This would also have the additional benefit that your system might be able to boot up and do something useful even if the environment were totally corrupted.
Another possibility is to do it as Charles suggested in a comment on another answer: create an environment offline and have U-Boot load it; see denx.de/wiki/view/DULG/UBootScripts
A third possibility, which I've also used sometimes, is to construct the environment offline (possibly using the same or a similar mechanism as in the link above) and then write that environment to flash during the normal flash programming process. Most of the time I've done this on AT91s, using a Tcl script similar to the at91 Sam-Ba TCL script.
No matter which method you choose, the bootcmd variable in U-Boot should hold your boot script.
The general answer is that bootcmd is run by default, and if there is a persistent environment, you can change the command and run saveenv so that it's kept.
It is easiest to modify said bootcmd, which is executed anyway.
As an alternative to patching the U-Boot source, it is possible to override the command in U-Boot's build configuration.
Create a file, e.g. platform-top.h, in the same place where you would put the patch file (it might already exist), and override CONFIG_BOOTCOMMAND in it.
The result will look something like this:
/* ... */
/* replace the memory write with any other valid command */
#define CONFIG_BOOTCOMMAND "mw 0x1 0x1 && run default_bootcommand"
Don't forget to make the file known in your bbappend: SRC_URI += "file://platform-top.h"
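A minimal bbappend sketch (the recipe name and files directory are assumptions for your layer; Jethro still uses the underscore override syntax):

# <your-u-boot-recipe>_%.bbappend
FILESEXTRAPATHS_prepend := "${THISDIR}/files:"
SRC_URI += "file://platform-top.h"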
I'm attempting to set up my computer so that I can authenticate myself using an external device connected to a Python script. I started by replacing the login program in inittab with my own program, and I've been able to get into a bash shell. The problem is that it doesn't get a fresh environment like the one that is (I presume) given by login. I know there are ways for me to mess with the environment, but I haven't seen a way to give it a "default" configuration, if such a thing even makes sense.
Some ideas:
First of all, in most cases it would be better to use the pluggable authentication architecture, PAM. This ensures that all PAM-enabled applications and services (ssh, for example) can use the authentication method, and that there is no way to bypass it using regular services.
If you really want to replace login, I'd suggest clearing the environment yourself, calling unsetenv for each environment variable that is set (you can walk environ to determine which variables are currently set). After cleaning up the environment, use an exec-like call to replace your program with bash; the environment is carried over unchanged in this case. You may want to add the command-line argument -l so that bash starts up the way it would have been invoked by login.
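A minimal C sketch of that idea (error handling kept short; /bin/bash is an assumption):

#include <stdlib.h>
#include <string.h>
#include <unistd.h>

extern char **environ;

int main(void)
{
    /* unsetenv() removes the entry and shifts environ, so keep
       clearing the first entry until none are left */
    while (environ != NULL && environ[0] != NULL) {
        char name[1024];
        const char *eq = strchr(environ[0], '=');
        if (eq == NULL)
            break;              /* malformed entry; bail out rather than loop forever */
        size_t len = (size_t)(eq - environ[0]);
        if (len >= sizeof(name))
            len = sizeof(name) - 1;
        memcpy(name, environ[0], len);
        name[len] = '\0';
        if (unsetenv(name) != 0)
            break;
    }

    /* replace this process with a login-style bash; the (now empty)
       environment is carried over, and -l makes bash run its login scripts */
    execl("/bin/bash", "bash", "-l", (char *)NULL);
    return 1;                   /* reached only if execl() failed */
}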
Bash runs some init scripts on startup. You may want to check /etc/profile, /etc/bashrc, and similar files for environment variables you don't want to be set.
If you are willing to depend on env (which is not so bad, since it should be present on every Linux system out there), you can use env -i bash to start bash in a clean environment.
When main(int argc, char *argv[], char *envp[]) is called by the operating system, the third parameter contains the environment. So just save a copy of it until you need to call bash.
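A sketch of that approach (the authentication step is a placeholder; /bin/bash is an assumption):

#include <unistd.h>

int main(int argc, char *argv[], char *envp[])
{
    (void)argc;
    (void)argv;

    /* ... run the external-device authentication here ... */

    /* hand bash exactly the environment init gave us */
    char *const bash_argv[] = { "bash", "-l", NULL };
    execve("/bin/bash", bash_argv, envp);
    return 1;   /* reached only if execve() failed */
}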
I'm writing a DLL in ANSI C, and I'm using curl to make HTTP connections.
This DLL should be able to connect to my server and send some info just before the application terminates. I would like to know the best approach to achieve this. I have tried registering a callback function using atexit and also calling the connectToServer method in a destructor. I also tried using DllMain with DLL_PROCESS_DETACH. In all cases, curl is unable to connect to the server (error code 7) because Windows has already unloaded libraries that curl requires.
I have heard people say that the only way to achieve this on Windows is to create a separate process (a watchdog) that monitors when the main process terminates and then opens the connection from that other process.
The library I'm writing works on both Linux (.so) and Windows (.dll). On Linux, I'm using atexit and everything works fine. I would prefer not to use a watchdog process, since I won't need it for the .so.
Is there a way to do this?
Thanks in advance.
You shouldn't do anything in DLL_PROCESS_DETACH for exactly the reasons you've found. The best way to handle this is an explicit call from the application to say "I'm shutting down, do any clean up needed".
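For example, the DLL could export a shutdown routine that the host application calls right before it exits, while curl's dependencies are still loaded. The function name and URL below are illustrative, not an existing API:

#include <curl/curl.h>

/* hypothetical exported entry point; the application calls this before exiting */
__declspec(dllexport) void MyLib_Shutdown(void)
{
    CURL *curl = curl_easy_init();
    if (curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "https://example.invalid/shutdown-report");
        curl_easy_setopt(curl, CURLOPT_POSTFIELDS, "status=shutting-down");
        curl_easy_perform(curl);    /* runs while the process is still fully loaded */
        curl_easy_cleanup(curl);
    }
}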
If you really want to mess up the semantics of DLL_PROCESS_DETACH, you could manually use LoadLibrary to load the curl library directly from there, regardless of the existing linker setup, and call it through your own late binding.
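A sketch of that late-binding idea; note that doing network I/O (or even calling LoadLibrary) inside DllMain is explicitly discouraged, so treat this as an illustration of the mechanism rather than a recommendation. The DLL name "libcurl.dll" and the URL are assumptions:

#include <windows.h>
#include <curl/curl.h>   /* only for the CURL/CURLoption types and option constants */

typedef CURL *(*curl_easy_init_t)(void);
typedef CURLcode (*curl_easy_setopt_t)(CURL *, CURLoption, ...);
typedef CURLcode (*curl_easy_perform_t)(CURL *);
typedef void (*curl_easy_cleanup_t)(CURL *);

static void report_shutdown_late_bound(void)
{
    HMODULE h = LoadLibraryA("libcurl.dll");
    if (!h)
        return;

    curl_easy_init_t    p_init    = (curl_easy_init_t)GetProcAddress(h, "curl_easy_init");
    curl_easy_setopt_t  p_setopt  = (curl_easy_setopt_t)GetProcAddress(h, "curl_easy_setopt");
    curl_easy_perform_t p_perform = (curl_easy_perform_t)GetProcAddress(h, "curl_easy_perform");
    curl_easy_cleanup_t p_cleanup = (curl_easy_cleanup_t)GetProcAddress(h, "curl_easy_cleanup");

    if (p_init && p_setopt && p_perform && p_cleanup) {
        CURL *curl = p_init();
        if (curl) {
            p_setopt(curl, CURLOPT_URL, "https://example.invalid/shutdown-report");
            p_perform(curl);
            p_cleanup(curl);
        }
    }
    FreeLibrary(h);
}

report_shutdown_late_bound() would then be called from the DLL_PROCESS_DETACH branch of DllMain.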