LD_LIBRARY_PATH

Can I set LD_LIBRARY_PATH for an individual application?
I am looking into a system call failure, so is there any way I can set the correct path for just that application using the LD_LIBRARY_PATH setting?

The simplest way would be to create a shell script.
Have the shell script export your new LD_LIBRARY_PATH variable and then launch your application,
e.g. (where foo is your app):
#!/bin/sh
LD_LIBRARY_PATH=some_path:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH
foo

As simple as:
LD_LIBRARY_PATH=new_path:$LD_LIBRARY_PATH foo
which works in bash. I think it works in all Bourne shell derivatives, but I can't guarantee it.
Of course, with this approach, you have to type the path every time. To do it repeatedly, prefer Glen's approach.

One item to be aware of: you cannot set LD_LIBRARY_PATH within a program and make it have any effect on the current program. This is because the dynamic loader (ld.so.1 or some similar name) is already loaded and has read and processed the environment variable before any of your code is run. You can set it in the current process's environment, and that value will then affect any child processes, and you could use one of the exec() family of functions to run a program with the environment set. In an extreme case, you could re-execute the current program - but that is extreme!
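For illustration, here is a minimal sketch of that extreme re-exec approach, assuming a Linux system (the library path /opt/myapp/lib and the RELAUNCHED guard variable are made up for the example; /proc/self/exe is Linux-specific, and argv[0] would be a portable fallback):
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
int main(int argc, char *argv[])
{
    (void)argc;
    if (getenv("RELAUNCHED") == NULL) {
        /* Too late for this process's loader, but the re-exec'd copy will see it. */
        setenv("LD_LIBRARY_PATH", "/opt/myapp/lib", 1);
        setenv("RELAUNCHED", "1", 1);   /* guard against exec'ing ourselves forever */
        execv("/proc/self/exe", argv);
        perror("execv");                /* only reached if execv fails */
        return 1;
    }
    /* Second run: the dynamic loader processed the new LD_LIBRARY_PATH at startup. */
    printf("LD_LIBRARY_PATH=%s\n", getenv("LD_LIBRARY_PATH"));
    return 0;
}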


What is export for?
What is the difference between:
export name=value
and
name=value
export makes the variable available to sub-processes.
That is,
export name=value
means that the variable name is available to any process you run from that shell process. If you want a process to make use of this variable, use export, and run the process from that shell.
name=value
means the variable scope is restricted to the shell, and is not available to any other process. You would use this for (say) loop variables, temporary variables etc.
It's important to note that exporting a variable doesn't make it available to parent processes. That is, specifying and exporting a variable in a spawned process doesn't make it available in the process that launched it.
To illustrate what the other answers are saying:
$ foo="Hello, World"
$ echo $foo
Hello, World
$ bar="Goodbye"
$ export foo
$ bash
bash-3.2$ echo $foo
Hello, World
bash-3.2$ echo $bar
bash-3.2$
It has been said that it's not necessary to export in bash when spawning subshells, while others said the exact opposite. It is important to note the difference between subshells (those that are created by (), ``, $() or loops) and subprocesses (processes that are invoked by name, for example a literal bash appearing in your script).
Subshells will have access to all variables from the parent, regardless of their exported state.
Subprocesses will only see the exported variables.
What is common in these two constructs is that neither can pass variables back to the parent shell.
$ noexport=noexport; export export=export; (echo subshell: $noexport $export; subshell=subshell); bash -c 'echo subprocess: $noexport $export; subprocess=subprocess'; echo parent: $subshell $subprocess
subshell: noexport export
subprocess: export
parent:
There is one more source of confusion: some think that 'forked' subprocesses are the ones that don't see non-exported variables. Usually fork()s are immediately followed by exec()s, and that's why it would seem that the fork() is the thing to look for, while in fact it's the exec(). You can run commands without fork()ing first with the exec command, and processes started by this method will also have no access to unexported variables:
$ noexport=noexport; export export=export; exec bash -c 'echo execd process: $noexport $export; execd=execd'; echo parent: $execd
execd process: export
Note that we don't see the parent: line this time, because we have replaced the parent shell with the exec command, so there's nothing left to execute that command.
This answer is wrong but retained for historical purposes. See 2nd edit below.
Others have answered that export makes the variable available to subshells, and that is correct but merely a side effect. When you export a variable, it puts that variable in the environment of the current shell (ie the shell calls putenv(3) or setenv(3)).
The environment of a process is inherited across exec, making the variable visible in subshells.
Edit (with 5 years' perspective):
This is a silly answer. The purpose of 'export' is to make variables "be in the environment of subsequently executed commands", whether those commands be subshells or subprocesses. A naive implementation would be to simply put the variable in the environment of the shell, but this would make it impossible to implement export -p.
2nd Edit (with another 5 years in passing).
This answer is just bizarre. Perhaps I had some reason at one point to claim that bash puts the exported variable into its own environment, but those reasons were not given here and are now lost to history. See Exporting a function local variable to the environment.
export NAME=value for settings and variables that have meaning to a subprocess.
NAME=value for temporary or loop variables private to the current shell process.
In more detail, export marks the variable name in the environment, which is copied to subprocesses and their subprocesses upon creation. No name or value is ever copied back from the subprocess.
A common error is to place a space around the equal sign:
$ export FOO = "bar"
bash: export: `=': not a valid identifier
Only the exported variable (B) is seen by the subprocess:
$ A="Alice"; export B="Bob"; echo "echo A is \$A. B is \$B" | bash
A is . B is Bob
Changes in the subprocess do not change the main shell:
$ export B="Bob"; echo 'B="Banana"' | bash; echo $B
Bob
Variables marked for export have values copied when the subprocess is created:
$ export B="Bob"; echo '(sleep 30; echo "Subprocess 1 has B=$B")' | bash &
[1] 3306
$ B="Banana"; echo '(sleep 30; echo "Subprocess 2 has B=$B")' | bash
Subprocess 1 has B=Bob
Subprocess 2 has B=Banana
[1]+ Done echo '(sleep 30; echo "Subprocess 1 has B=$B")' | bash
Only exported variables become part of the environment (man environ):
$ ALICE="Alice"; export BOB="Bob"; env | grep "ALICE\|BOB"
BOB=Bob
So, now it should be as clear as is the summer's sun! Thanks to Brian Agnew, alexp, and William Pursell.
It should be noted that you can export a variable and later change the value. The variable's changed value will be available to child processes. Once export has been set for a variable you must do export -n <var> to remove the property.
$ K=1
$ export K
$ K=2
$ bash -c 'echo ${K-unset}'
2
$ export -n K
$ bash -c 'echo ${K-unset}'
unset
export will make the variable available to all shells forked from the current shell.
As you might already know, UNIX allows processes to have a set of environment variables, which are key/value pairs, both key and value being strings.
The operating system is responsible for keeping these pairs for each process separately.
A program can access its environment variables through this UNIX API:
char *getenv(const char *name);
int setenv(const char *name, const char *value, int override);
int unsetenv(const char *name);
Processes also inherit environment variables from their parent processes. The operating system is responsible for creating a copy of all "envars" at the moment the child process is created.
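As a minimal sketch of that API (the variable name MY_GREETING is made up, and printenv is assumed to be on the PATH): the parent sets a variable with setenv(), and a child created by fork() plus exec inherits a copy.
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>
int main(void)
{
    setenv("MY_GREETING", "hello from the parent", 1);
    pid_t pid = fork();
    if (pid == 0) {
        /* The child's copy of the environment survives exec(). */
        execlp("printenv", "printenv", "MY_GREETING", (char *)NULL);
        perror("execlp");
        _exit(127);
    }
    waitpid(pid, NULL, 0);
    printf("parent still sees: %s\n", getenv("MY_GREETING"));
    return 0;
}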
Bash, among other shells, is capable of setting its environment variables on user request. This is what export exists for.
export is a Bash command that sets an environment variable for Bash. All variables set with this command are inherited by all processes that this Bash creates.
More on Environment in Bash
Another kind of variable in Bash is the internal variable. Since Bash is not just an interactive shell but in fact a script interpreter, it is capable, like any other interpreter (e.g. Python), of keeping its own set of variables. It should be mentioned that Bash (unlike Python) supports only string variables.
The notation for defining Bash variables is name=value. These variables stay inside Bash and have nothing to do with the environment variables kept by the operating system.
More on Shell Parameters (including variables)
Also worth noting that, according to Bash reference manual:
The environment for any simple command or function may be augmented
temporarily by prefixing it with parameter assignments, as described
in Shell Parameters. These assignment statements affect only the
environment seen by that command.
To sum things up:
export is used to set an environment variable in the operating system. This variable will be available to all child processes created by the current Bash process from then on.
Bash variable notation (name=value) is used to set local variables available only to the current Bash process.
Bash variable notation prefixing another command creates the environment variable only for the scope of that command.
The accepted answer implies this, but I'd like to make explicit the connection to shell builtins:
As mentioned already, export will make a variable available to both the shell and children. If export is not used, the variable will only be available in the shell, and only shell builtins can access it.
That is,
tango=3
env | grep tango # prints nothing, since env is a child process
set | grep tango # prints tango=3 - "type set" shows `set` is a shell builtin
Brian Kernighan and Rob Pike, two central figures of the Bell Labs UNIX group, explain this in their book "The UNIX Programming Environment". Google for the title and you'll easily find a pdf version.
They address shell variables in section 3.6, and focus on the use of the export command at the end of that section:
When you want to make the value of a variable accessible in sub-shells, the shell's export command should be used. (You might think about why there is no way to export the value of a variable from a sub-shell to its parent).
Here's yet another example:
VARTEST="value of VARTEST"
#export VARTEST="value of VARTEST"
sudo env | grep -i vartest
sudo echo ${SUDO_USER} ${SUDO_UID}:${SUDO_GID} "${VARTEST}"
sudo bash -c 'echo ${SUDO_USER} ${SUDO_UID}:${SUDO_GID} "${VARTEST}"'
Only by using export VARTEST is the value of VARTEST available in sudo bash -c '...'!
For further examples see:
http://mywiki.wooledge.org/SubShell
bash-hackers.org/wiki/doku.php/scripting/processtree
Just to show the difference between an exported variable being in the environment (env) and a non-exported variable not being in the environment:
If I do this:
$ MYNAME=Fred
$ export OURNAME=Jim
then only $OURNAME appears in the env. The variable $MYNAME is not in the env.
$ env | grep NAME
OURNAME=Jim
but the variable $MYNAME does exist in the shell
$ echo $MYNAME
Fred
By default, variables created within a script are only available to the current shell; child processes will not have access to values that have been set or modified. Allowing child processes to see the values requires use of the export command.
As yet another corollary to the existing answers here, let's rephrase the problem statement.
The answer to "should I export" is identical to the answer to the question "Does your subsequent code run a command which implicitly accesses this variable?"
For a properly documented standard utility, the answer to this can be found in the ENVIRONMENT section of the utility's man page. So, for example, the git manual page mentions that GIT_PAGER controls which utility is used to browse multi-page output from git. Hence,
# XXX FIXME: buggy
branch="main"
GIT_PAGER="less"
git log -n 25 --oneline "$branch"
git log -p "$branch"
will not work correctly, because you did not export GIT_PAGER. (Of course, if your system already declared the variable as exported somewhere else, the bug is not reproducible.)
We are explicitly referring to the variable $branch, and the git program code doesn't refer to a system variable branch anywhere (as also suggested by the fact that its name is written in lower case; but many beginners erroneously use upper case for their private variables, too! See Correct Bash and shell script variable capitalization for a discussion) so there is no reason to export branch.
The correct code would look like
branch="main"
export GIT_PAGER="less"
git log -n 25 --oneline "$branch"
git log -p "$branch"
(or equivalently, you can explicitly prefix each invocation of git with the temporary assignment
branch="main"
GIT_PAGER="less" git log -n 25 --oneline "$branch"
GIT_PAGER="less" git log -p "$branch"
In case it's not obvious, the shell script syntax
var=value command arguments
temporarily sets var to value for the duration of the execution of
command arguments
and exports it to the command subprocess, and then afterwards, reverts it back to the previous value, which could be undefined, or defined with a different - possibly empty - value, and unexported if that's what it was before.)
For internal, ad-hoc or otherwise poorly documented tools, you simply have to know whether they silently inspect their environment. This is rarely important in practice, outside of a few specific use cases, such as passing a password or authentication token or other secret information to a process running in some sort of container or isolated environment.
If you really need to know, and have access to the source code, look for code which uses the getenv system call (or on Windows, with my condolences, variations like getenv_s, w_getenv, etc). For some scripting languages (such as Perl or Ruby), look for ENV. For Python, look for os.environ (but notice also that e.g. from os import environ as foo means that foo is now an alias for os.environ). In Node, look for process.env. For C and related languages, look for envp (but this is just a convention for what to call the optional third argument to main, after argc and argv; the language lets you call them anything you like). For shell scripts (as briefly mentioned above), perhaps look for variables with uppercase or occasionally mixed-case names, or usage of the utility env. Many informal scripts have undocumented but discoverable assignments usually near the beginning of the script; in particular, look for the ?= default assignment parameter expansion.
For a brief demo, here is a shell script which invokes a Python script which looks for $NICKNAME, and falls back to a default value if it's unset.
#!/bin/sh
NICKNAME="Handsome Guy"
demo () {
python3 <<\____
from os import environ as env
print("Hello, %s" % env.get("NICKNAME", "Anonymous Coward"))
____
}
demo
# prints "Hello, Anonymous Coward"
# Fix: forgot export
export NICKNAME
demo
# prints "Hello, Handsome Guy"
As another tangential remark, let me reiterate that you only ever need to export a variable once. Many beginners cargo-cult code like
# XXX FIXME: redundant exports
export PATH="$HOME/bin:$PATH"
export PATH="/opt/acme/bin:$PATH"
but typically, your operating system has already declared PATH as exported, so this is better written
PATH="$HOME/bin:$PATH"
PATH="/opt/acme/bin:$PATH"
or perhaps refactored to something like
for p in "$HOME/bin" "/opt/acme/bin"
do
case :$PATH: in
*:"$p":*) ;;
*) PATH="$p:$PATH";;
esac
done
# Avoid polluting the variable namespace of your interactive shell
unset p
which avoids adding duplicate entries to your PATH.
Although not explicitly mentioned in the discussion, it is NOT necessary to use export when spawning a subshell from inside bash since all the variables are copied into the child process.

How to export an environment variable for child processes inside a ksh script?

I'm working on an old piece of C software. There is one ksh script which executes a C program, which then creates some other processes and ends. These processes remain alive.
I'm trying to set an environment variable inside my ksh script, so that it could be accessible in the newly created processes that are still alive.
I have tried this way :
#!/bin/ksh
VARIABLE=value
export VARIABLE
my_c_program
But that doesn't work... I have tried to:
1. change my ksh script to bash
2. create a wrapper script that creates and exports the variable and then executes the original ksh script (which just executes the C program)
3. source my ksh script (or my wrapper script when trying option 2) instead of executing it
But nothing from that worked.
The only thing that works for now is when I explicitly, by hand, execute the command:
export VARIABLE
in the current bash terminal.
Why? Isn't it possible to do the export inside a script instead of doing it manually?
Everything is ok actually...
The fact is that the process I thought was the child of the C program executed in my ksh script was the child of another process executed before. The C program was just sending a message via shared memory to tell the other program to execute its child.
So indeed the environment variable never went from my C program to the other program's child. The only time I had that variable set in the child was when I executed the other program (the one which is the real parent of the child) in a shell where the variable was exported.
The code above looks correct and it should work. Another way to do it is:
VARIABLE=value my_c_program
which exports the variable just for the program. Afterwards, the variable will be set but other external processes don't get a copy.
So why doesn't your script work? It's hard to tell but here are some tips to debug the issue:
Use #!/bin/ksh -x to enable debug output. Save the output in a file and then grep VARIABLE to see what happens with it.
Check for typos.
Another shell script is like an external process. So create a script
#!/bin/ksh
echo $VARIABLE
and call it instead of my_c_program just to make sure passing the variable on works.
Maybe the C program does something unexpected. Use a debugger to make sure it does what you expect.

Hooks on terminal. Can I call a method before a command is run in the terminal?

I want to make a terminal app that stores information about files/directories. I want a way to keep the information if the file is moved or renamed.
What I thought I could do is have a function execute before any command is run. I found this:
http://www.twistedmatrix.com/users/glyph/preexec.bash.txt
But I was wondering if this would be a good way to go about it. Or should I do something else?
I would like to call that function from a C program whenever mv is entered I suppose.
If what you're trying to do is attach some sort of metadata to files, there's a much better supported way to do that -- extended attributes.
Another solution might be to use the file's inode number as an index into a database you maintain yourself.
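If you go the inode route, a minimal sketch of reading the identifying pair with stat(2) might look like this (the path somefile is just a placeholder, and the database itself is left out):
#include <stdio.h>
#include <sys/stat.h>
int main(void)
{
    struct stat st;
    if (stat("somefile", &st) != 0) {
        perror("stat");
        return 1;
    }
    /* (st_dev, st_ino) would be the key into your own metadata database. */
    printf("dev=%llu inode=%llu\n",
           (unsigned long long)st.st_dev, (unsigned long long)st.st_ino);
    return 0;
}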
Can you alias the mv command in .profile or .bashrc?
alias mv=/usr/local/bin/mymv
where mymv is a compiled executable that runs your C code function and calls /usr/bin/mv.
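A hypothetical sketch of such a mymv wrapper, which does its own bookkeeping and then hands the untouched arguments to the real /usr/bin/mv (record_move() is a placeholder for whatever metadata update you have in mind):
#include <stdio.h>
#include <unistd.h>
static void record_move(int argc, char *argv[])
{
    /* Placeholder: update your own metadata store here. */
    for (int i = 1; i < argc; i++)
        fprintf(stderr, "mymv: argument %d: %s\n", i, argv[i]);
}
int main(int argc, char *argv[])
{
    record_move(argc, argv);
    argv[0] = "mv";                 /* present ourselves as mv to the real binary */
    execv("/usr/bin/mv", argv);     /* replaces this process; only returns on error */
    perror("execv /usr/bin/mv");
    return 127;
}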
precmd and preexec add some overhead to every bash script that gets run, even if the script never calls mv. The downside to alias is that it requires new code in /usr/local, and if scripts or users employ /usr/bin/mv instead of mv it will not do what you want. Generally, doing something like this often means there is a better way to handle the problem with some kind of service (daemon) or driver. Plus, what happens if your C code cannot correctly handle interesting input like
mv somefile /dev/null
If you want to run a command each time after some command is executed in the terminal, just put the following in ~/.bashrc:
PROMPT_COMMAND="your_command;$PROMPT_COMMAND"
If you want your command to be executed each time before mv executes, put the following in ~/.bashrc:
alias mv="your_script"
Make sure that your script executes the real mv when needed.
You can use the inotify library to track filesystem changes. It's a good solution, but once the user removes a file, it's already gone.
You might be able to make use of the DEBUG trap in Bash.
From man bash:
If a sigspec is DEBUG, the command arg is executed before every
simple command, for command, case command, select command, every
arithmetic for command, and before the first command executes in
a shell function
I found this article when I was forced to work in tcsh and wanted to ensure a specific environment variable was present when the user ran a program from a certain folder (without setting that variable globally).
tcsh can do this.
tcsh has special aliases, one of which is precmd.
This can be used to run a script just before the shell prompt is printed.
e.g. I used alias precmd 'bash $HOME/.local/bin/on_cd.sh'
This might be one of the very few useful features in csh.
It is a shame, but I don't think the same or similar feature is in bash or other sh derivatives (ash, dash etc). Related answer.

Using the exec() family to run the "cd" command

I know that cd is a shell built-in, and I can run it by using system().
But is that possible to run the cd command by the exec() family, like execvp()?
Edit: And I just noticed that system("cd") is also meaningless. Thanks to everyone for the help.
exec loads an executable file and replaces the current program image with it. As you rightly noted, cd is not an executable file, but rather a shell builtin. So the executable that you want to run is the shell itself. This is of course what system() does for you, but if you want to be explicit about it, you can use exec:
execl("/bin/sh", "sh", "-c", "cd", (char *)NULL);
Since this replaces your current process image, you should do this after fork()ing off a new process.
However, this entire procedure has absolutely no effect. If you want to change the directory in your current process, use chdir().
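For completeness, a minimal sketch of doing it with chdir() in the current process (the target /tmp is just an example):
#include <stdio.h>
#include <unistd.h>
int main(void)
{
    if (chdir("/tmp") != 0) {
        perror("chdir");
        return 1;
    }
    char buf[4096];
    if (getcwd(buf, sizeof buf) != NULL)
        printf("now in %s\n", buf);
    return 0;
}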
You're better off using int chdir(const char *path); found in unistd.h.
No it is not, and it would be of no use. chdir (the function that changes a process's current directory) only affects the process that calls it (and its children). It does not affect its parent in particular.
So execing cd has no point, since the process would exit immediately after having changed directories.
(You could exec something like bash -c 'cd /tmp' if you really want to, but as I said, this is fruitless.)
While, as already stated, system("cd xxx") wouldn't change your application's current directory, it is not completely useless.
You can still use the exit status of system() to know whether changing your current directory to the one stated would succeed or not.
Similarly, if you like complex solutions, you could also do the same with fork/exec, either exec'ing /bin/sh -c cd xxx or simply /bin/cd xxx on OSes that provide an independent cd executable.
I would, however, recommend the simpler and faster equivalent: access("xxx", X_OK|R_OK)
Note: All POSIX-compliant OSes must provide an independent cd executable. This is at least the case with Solaris, AIX, HP-UX and Mac OS X.
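Putting the two checks mentioned in this answer side by side, here is a rough sketch (the directory /some/dir is a placeholder; note that neither call changes this process's working directory):
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
int main(void)
{
    const char *dir = "/some/dir";
    char cmd[512];
    snprintf(cmd, sizeof cmd, "cd '%s'", dir);
    /* system() runs the cd in a throwaway shell; only its exit status matters. */
    printf("system(\"cd ...\"): %s\n", system(cmd) == 0 ? "would succeed" : "would fail");
    /* access() asks the same question without spawning anything. */
    printf("access():          %s\n", access(dir, X_OK | R_OK) == 0 ? "would succeed" : "would fail");
    return 0;
}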
When a fork is done, the current working directory is inherited by the child from the parent (it is a per-process attribute, not an environment variable). If fork and exec are done as usual and the child calls chdir(), that simply changes the child's directory before it exits, but it does not affect the parent. Hence, the new working directory is lost.

Execute the set command in a C program

I created a little mini shell. It lets the user enter a command like 'ls' and it will list the contents of the directory like it's supposed to, using execv() in my code, but that doesn't seem to work when the user enters something like 'set name="bob"'. I've been looking all over the place for what I should use in my code to execute a set command when the user enters it, and the best I can find is system(), but that still isn't working for me. Any ideas?
set is a shell-builtin command, not an external command (indeed it needs to be to have the intended effect, which is to modify a shell variable within the shell process itself).
This means that you need to look for and handle set within your shell itself, by adding the named variable to some internal data structure that tracks shell variables (or updating it if it already exists there).
Since you're doing a fork-and-exec or a system(), the command is really being run in a separate process. What happens in that process (like setting an environment variable) does not affect the parent's environment. (A separate issue is that set doesn't actually create an environment variable. You'd need export in [ba]sh or setenv in [t]csh to do that.)
So you need to code your mini-shell to handle the set command explicitly, rather than passing it off to another program.
You might want to look at setenv(3) and getenv(3). These are functions for changing and reading environment variables from within a C program.
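As a hedged sketch of the idea, assuming your mini shell accepts input of the form set NAME=VALUE (the parsing here is deliberately minimal, and handle_builtin_set() is a made-up helper name):
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
static int handle_builtin_set(const char *line)
{
    if (strncmp(line, "set ", 4) != 0)
        return 0;                          /* not our builtin; caller forks/execs as usual */
    char buf[256];
    snprintf(buf, sizeof buf, "%s", line + 4);
    char *eq = strchr(buf, '=');
    if (eq == NULL) {
        fprintf(stderr, "usage: set NAME=VALUE\n");
        return 1;
    }
    *eq = '\0';
    setenv(buf, eq + 1, 1);                /* visible to this shell and to future children */
    return 1;                              /* handled in-process; do not fork/exec */
}
int main(void)
{
    handle_builtin_set("set name=bob");
    printf("name=%s\n", getenv("name"));   /* prints name=bob */
    return 0;
}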

Resources