How can I get LLDB to pass its environment to the executable it runs?
As in, if I run FOO=BAR lldb a.out, I want a.out's environment to have FOO=BAR.
I know I can do this using process launch -v FOO=BAR, but I have quite a few env vars and don't want to type it every time.
lldb should do this by default. There is a setting to control this behavior:
settings set target.inherit-env {true/false}
but the default is "true", so this should already be happening (it does for me...)
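For example, you can confirm the current value from the lldb prompt before launching (a sketch of a typical session; the exact output formatting may vary by version):

(lldb) settings show target.inherit-env
target.inherit-env (boolean) = true
(lldb) settings set target.inherit-env true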
Note it doesn't make as much sense to pass the environment to a remote process, so Xcode may defeat this setting for iOS debugging.
During initialization of OPAM I got this message:
A hook can be added to opam's init scripts to ensure that the shell
remains in sync with the opam environment when they are loaded. Set
that up? [y/N]
Could you clarify what this means?
I've tried to find an answer here: https://github.com/ocaml/opam/search?q=hook&unscoped_q=hook
I've heard this word before, but what is a "shell hook"? What is it used for? Why does it ask me to choose? :)
env_hook.sh adds the hook, at least in bash; the scripts for other shells are similar.
This will run eval $(opam env) every time the prompt is displayed.
You can run opam env and check what it does; it sets some environment variables.
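For instance (the paths and the exact set of variables here are illustrative; the real output depends on your switch and shell):

$ opam env
OPAM_SWITCH_PREFIX='/home/user/.opam/default'; export OPAM_SWITCH_PREFIX;
PATH='/home/user/.opam/default/bin:/usr/bin'; export PATH;
$ eval $(opam env)

The hook just automates that eval step each time the prompt is displayed.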
You need this hook for local switches: it will update the active switch based on your current directory. See issue #3573 for discussion of this topic.
It's possible to change this configuration option later; just re-run opam init.
I'm having trouble getting PATH automatically set to programFiles\veyon after installation. I would like to use the veyon-ctl command line without having to link it manually.
The code that you have highlighted seems to be working exactly as expected. I have just taken that code, and added it to a package and installed it. The result was the following...
As mentioned in the output, this environment variable will not be available in the current shell until you close and reopen the shell. I suspect that this is the problem you are running into.
Chocolatey does provide a helper function called refreshenv which allows you to force the reloading of the environment variables into the current shell; however, this isn't enabled by default. You can find out how to enable it by reading the article here:
https://chocolatey.org/docs/troubleshooting#i-cant-get-the-powershell-tab-completion-working
But what it comes down to is that you need to load the following into your PowerShell Profile:
# Chocolatey profile
$ChocolateyProfile = "$env:ChocolateyInstall\helpers\chocolateyProfile.psm1"
if (Test-Path($ChocolateyProfile)) {
Import-Module "$ChocolateyProfile"
}
Once this is loaded, after installing a Chocolatey package you can execute the command refreshenv and the new environment variables will be available in the current shell.
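A minimal PowerShell session might look like this (veyon-ctl is the command from the question; the comments describe the expected behavior, not literal output):

choco install veyon -y
veyon-ctl      # may not resolve yet: this shell still has the old PATH
refreshenv     # reload machine/user environment variables into this shell
veyon-ctl      # should now be found via the updated PATH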
I have searched high and low for what I thought would be a common question but can only find answers regarding user confirmation, not system confirmation.
I would like the following commands to run in sequential order, waiting for a response before moving onto the next command:
npm config set https-proxy http://example.com:8080
npm config set proxy http://example.com:8080
npm config set sslVerify false
npm config set strict-ssl false
set HTTP_PROXY=http://example.com:8080
set HTTPS_PROXY=http://example.com:8080
I have added the commands to the batch file in sequential order on new lines, but when executing it does not pause on each command to wait for a response. How do I force the script to wait on each command until it’s confirmed by the system?
Unqualified names like npm or doSomething may map to scripts written in various languages, including batch files. When one batch file runs another without call, control transfers to the second script and never returns; use the call command to invoke these, and cmd.exe will always wait for whatever child process is started.
It's not uncommon for .exe files to be scattered across multiple directories, which would bloat the PATH environment variable, so many installers lay down alias scripts in a single directory that is added to the PATH; when you invoke the command, the script figures out which executables to run and launches them.
It's also common to use wrapper scripts to simplify executable invocations, add some logging, or temporarily map the command to a different version (upgrades/testing).
In the case of npm, I believe it's mostly written in JavaScript, so an appropriate scripting engine has to be launched to run the npm commands. This may be bootstrapped from a batch script, or it could be invoked automatically by the OS based on the file extension. The details may vary from one version or installation to the next and usually don't matter to the casual user typing commands at the prompt, but the behavior when invoked from a script can vary noticeably.
Unless you use a fully qualified path/filename to launch something from a command script, and generally even if you do, you should simply default to using the call command to invoke it. Then all of the above circumstances are covered and your script will always behave as expected.
call npm config set https-proxy http://example.com:8080
call npm config set proxy http://example.com:8080
call npm config set sslVerify false
call npm config set strict-ssl false
set HTTP_PROXY=http://example.com:8080
set HTTPS_PROXY=http://example.com:8080
Note that it is still possible that a script or program could pass work along to another process and then return immediately, but that kind of behavior will normally be the same, whether launched interactively or from a script.
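A quick way to see the difference for yourself (parent.cmd and child.cmd are hypothetical file names):

rem parent.cmd -- without call, control transfers to child.cmd and never returns
echo before
child.cmd
echo after

rem parent-fixed.cmd -- call runs child.cmd as a subroutine and comes back
echo before
call child.cmd
echo after

With a trivial child.cmd, the first version never prints "after"; the fixed version does.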
This is actually a three-part question, which I'll explain below, but the questions are:
Using gdb, how can I run part of a program with root authority, and the rest with normal authority?
Why would I get "permission denied" using mkstemp to create a file in /tmp in a setuid (to root) program?
Why would "sudo program_name" perform any differently from just ./program_name with setuid to root?
I have a C program running on Linux (multiple distributions) that normally is run by a user with normal privileges, but some parts of the program must run with root authority. For this, I have used the set-UID flag, and that works fine, as far as it goes.
However, now I would like to debug the program with normal user authority, and I find I have a catch-22. I have just added a function to create a temporary file (/tmp/my_name-XXXXXX), and that function is called from many points within the program. For whatever reason, this function issues the following message when running:
sh: /tmp/my_name-hhnNuM: Permission denied
(of course, the actual name varies.) And yet, the program is able to execute raw socket functions that I absolutely know cannot be done by users other than root. (If I remove the setuid flag, the program fails miserably.)
If I run this program via gdb without sudo, it dies on the raw socket stuff (since gdb apparently doesn't, or probably cannot, honor the setuid flag on the program). If I run it under "sudo gdb", everything works fine. If I run it as "sudo ./my_name", everything works fine.
Here is the ls -l output for that program:
-rwsr-xr-x 1 root root 48222 Jun 23 08:14 my_name
So my questions, in no particular order:
(How) can I run different parts of a program with different effective UIDs under gdb?
Why is "sudo ./program" different from "./program" when ./program has set-uid to root?
Why would mkstemp fail when called by a normal user in a setuid (to root) program?
1. The only way to debug a setuid application properly under gdb is to run gdb as root. The most sensible way to do this for a setuid application is to attach to it once it starts. A quick trick for doing this is to add a line to the setuid application:
kill(getpid(), SIGSTOP);
This causes it to stop at that point; you then attach gdb using:
sudo gdb <application> <pid>
Then you are attached to the application and can debug it as normal.
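A slightly fuller sketch of the stop-and-attach pattern, with the stop guarded by an environment variable so normal runs are unaffected (DEBUG_WAIT is just an illustrative name):

#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    if (getenv("DEBUG_WAIT") != NULL) {
        /* print the pid so you know which process to attach to, then suspend */
        fprintf(stderr, "stopped; attach with: sudo gdb -p %d\n", (int)getpid());
        kill(getpid(), SIGSTOP);
        /* execution resumes here once gdb attaches and you type "continue" */
    }
    /* ... the rest of the program ... */
    return 0;
}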
2. sudo changes the rules, as it allows a variety of items from the current user's environment to be exported into the root user's environment. This is wholly dependent on the current sudo configuration and can leave you with a very different environment than a setuid application gets, which is why you need to rely on tricks like stopping the application and then attaching to it at run time.
Additionally, there may be logic in the application to detect whether it's running in a setuid environment, which is not actually the case when run under sudo; remember that sudo sets all of the process's id fields (real uid, effective uid and saved uid) to the same value, which setuid doesn't (the real uid is still that of the original caller). You can use the getresuid() call to determine the state of the three values.
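A minimal probe of those three values (glibc requires _GNU_SOURCE to expose getresuid):

#define _GNU_SOURCE
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    uid_t ruid, euid, suid;
    /* fills in the real, effective and saved user ids of this process */
    if (getresuid(&ruid, &euid, &suid) == 0)
        printf("real=%d effective=%d saved=%d\n",
               (int)ruid, (int)euid, (int)suid);
    return 0;
}

Run as a setuid-root binary by an ordinary user, this prints something like real=1000 effective=0 saved=0; under sudo, all three are 0.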
3. Note that the Permission Denied message has a prefix of sh:; this implies that a sub-process is being executed that is trying to access the file. mkstemp creates the file with mode 0600, owned by the effective uid (root here), so a child process that is not running as root cannot open it. After you've invoked mkstemp, you may want to loosen the file's permissions so that the subprocess is able to read it.
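A sketch of that fix, assuming the subprocess only needs read access (the mode and the minimal error handling are illustrative):

#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    char path[] = "/tmp/my_name-XXXXXX";
    int fd = mkstemp(path);       /* created mode 0600, owned by the effective uid */
    if (fd == -1) { perror("mkstemp"); return 1; }
    if (fchmod(fd, 0644) == -1)   /* widen to world-readable so the sh subprocess can open it */
        perror("fchmod");
    printf("temp file: %s\n", path);
    close(fd);
    return 0;
}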
I've followed the directions in Installing MacPorts to install MacPorts using the pkg installer. The installation apparently goes fine; it goes through the multi-step process, eventually saying "Installation Successful" or something to that effect.
And now there's just the "little" problem that neither of these commands work:
man ports
which ports
I've checked in /usr/local, /bin, and /usr/bin, and I don't see where this has been installed to. Ideas?
They're in /opt/local/bin, so as not to overwrite stuff that came with Mac OS X or that you might have gotten from elsewhere. They won't be in your $PATH until you close that Terminal and open another (nothing can alter the environment of a running program except the program itself).
It's in /opt/local/bin. MacPorts updates .bash_profile to include this in the path, but obviously existing shells don't see the updated PATH variable...
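Assuming the default MacPorts prefix, you can also fix the current session by hand instead of opening a new Terminal window:

export PATH=/opt/local/bin:/opt/local/sbin:$PATH
which port    # should now print /opt/local/bin/port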
It's probably because you're typing ports, but the command is called port: see http://guide.macports.org/#using.port