Set environment variables in C

Is there a way to set environment variables in Linux using C?
I tried setenv() and putenv(), but they don't seem to be working for me.

I'm going to make a wild guess here, but the normal reason that these functions appear to not work is not because they don't work, but because the user doesn't really understand how environment variables work. For example, if I have this program:
#include <stdlib.h>

int main(int argc, char **argv)
{
    putenv("SomeVariable=SomeValue");
    return 0;
}
And then I run it from the shell, it won't modify the shell's environment - there's no way for a child process to do that. That's why the shell commands that modify the environment are builtins, and why you need to source a script that contains variable settings you want to add to your shell, rather than simply running it.
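To see that the call itself does work, you can check from within the same process and from any children it spawns. Here is a minimal sketch (my own illustration, assuming a POSIX system; SomeVariable is just the name from the example above):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* set the variable in this process's own environment */
    setenv("SomeVariable", "SomeValue", 1);

    /* visible to this process ... */
    printf("in-process: %s\n", getenv("SomeVariable"));

    /* ... and to children it starts (the shell run by system() is a child) */
    system("echo child sees: $SomeVariable");

    return 0;
}

After this program exits, echo $SomeVariable in the launching shell still prints nothing, because the shell's own environment was never touched.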

Any unix program runs in a separate process from the process which starts it; this is a 'child' process.
When a program is started up -- be that at the command line or any other way -- the system creates a new process which is (more-or-less) a copy of the parent process. That copy includes the environment variables in the parent process, and this is the mechanism by which the child process 'inherits' the environment variables of its parent. (this is all largely what other answers here have said)
That is, a process only ever sets its own environment variables.
Others have mentioned sourcing a shell script as a way of setting environment variables in the current process, but if you need to set variables in the current (shell) process programmatically, then there is a slightly indirect way to do it.
Consider this:
% cat envs.c
#include <stdio.h>

int main(int argc, char **argv)
{
    int i;
    for (i = 1; i < argc; i++) {
        printf("ENV%d=%s\n", i, argv[i]);
    }
}
% echo $ENV1
% ./envs one two
ENV1=one
ENV2=two
% eval `./envs one two`
% echo $ENV1
one
%
The built-in eval evaluates its argument as if that argument were typed at the shell prompt. This is a sh-style example; the csh-style variant is left as an exercise!

The environment variable set by setenv()/putenv() will be set for the process that executed these functions and will be inherited by any processes it launches. However, it will not be propagated back to the shell that executed your program.
Why isn't my wrapper around setenv() working?

The environment block is process-local, and it is copied to child processes. So if you change variables, the new value only affects your process and the child processes spawned after the change. It certainly will not change the shell you launched your program from.
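To illustrate that direction of inheritance (a sketch of my own, assuming a Linux/POSIX system where printenv is on the PATH; MYVAR is a made-up name), a variable set before fork()/exec() is visible in the new program, but nothing flows back up:

#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    setenv("MYVAR", "hello", 1);            /* change our own environment */

    pid_t pid = fork();
    if (pid == 0) {
        /* child: the copied environment contains MYVAR */
        execlp("printenv", "printenv", "MYVAR", (char *)NULL);
        _exit(127);                         /* only reached if exec fails */
    }

    wait(NULL);
    return 0;                               /* the shell that started us never sees MYVAR */
}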

Not an answer to this question, but a word of caution: putenv() is dangerous; use setenv() instead.
putenv(char *string) is dangerous because all it does is store the address of your key-value string in the environ array. Therefore, if we subsequently modify the bytes pointed to by string, the change affects the process environment.
#include <stdlib.h>

int main(void) {
    char new_env[] = "A=A";
    putenv(new_env);
    // modifying your `new_env` also modifies the environment
    // variable
    new_env[0] = 'B';
    return EXIT_SUCCESS;
}
Since environ only stores the address of our string argument, the string must have static (or otherwise permanent) storage duration, otherwise environ is left holding a dangling pointer.
#include <stdlib.h>

void foo(void);

int main(void) {
    foo();
    return EXIT_SUCCESS;
}

void foo(void) {
    char new_env[] = "A=B";
    putenv(new_env);
}
When the stack frame for the foo function is deallocated, the bytes of new_env are gone, and the address stored in environ becomes a dangling pointer.
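For completeness, here is a sketch of the safer alternatives (my own illustration; the variable name A is just carried over from the examples above). setenv() copies its arguments, and if you must use putenv(), hand it storage that outlives the call:

#include <stdlib.h>
#include <string.h>

void set_safely(void) {
    /* setenv() copies both name and value, so locals are fine */
    setenv("A", "B", 1);
}

void set_with_putenv(const char *value) {
    /* heap storage outlives this function; it is deliberately never freed,
       because environ keeps pointing at it */
    char *kv = malloc(strlen("A=") + strlen(value) + 1);
    if (kv != NULL) {
        strcpy(kv, "A=");
        strcat(kv, value);
        putenv(kv);
    }
}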

Related

Virtual memory of a created process with ASLR disabled

I am trying to understand why the addresses of stack variables in a given process have different values when the process is executed on its own from the command line and when another process starts it with an execl call (without forking). I am running these programs on an Intel 64-bit Kali Linux machine with ASLR disabled.
Here is the code for the program that has its stack variable address printed:
#include <stdio.h>

int main(int argc, char *argv[]) {
    char buffer;
    printf("buffer: %p\n", &buffer);
    return 1;
}
Here is the code for the program that executes the code above without forking:
#include <stdio.h>
#include <unistd.h>

int main(int argc, char *argv[]) {
    int var;
    printf("var is at %p\n", &var);
    execl("./a", "a", NULL);
    return 1;
}
Since executing a process in the shell, to my knowledge, is the same as the shell executing an exec after forking itself, I tested having the code above fork itself first then have the child perform an exec. Here is the code for the third experiment:
#include <stdio.h>
#include <unistd.h>

int main(int argc, char *argv[]) {
    int var;
    printf("var is at %p\n", &var);
    if (fork() == 0) {
        execl("./a", "a", NULL);
    }
    return 1;
}
All three programs above are compiled with gcc with the options: -fno-stack-protector -z execstack.
Here are the results of the various tests I conducted:
1.) Executing first program on shell:
buffer: 0x7fffffffe2df
2.) Executing first program on a process using exec (without forking):
var is at 0x7fffffffe2dc
buffer: 0x7fffffffe2ef
3.) Executing first program on a process using exec (with forking):
var is at 0x7fffffffe2cc
buffer: 0x7fffffffe2df
I expected the location of buffer in the first two tests to be the same, since each process supposedly has no effect on the other's memory. I tested whether the location of global variables changed, and sure enough they stayed at the same addresses in every test.
To my knowledge, one reason the stack is shifted when a process executes is inheriting environment variables from the shell. So I theorize that executing a process by having a different process call exec causes the former to inherit variables located on the stack.
However, my third test says otherwise. It simulated how the shell executes a process and got the same address as executing the process from the shell directly. I expected the result to be the same as my second experiment, since it uses the same process context (and thus inherits the same environment). The only difference was that I forked first and had the child perform the exec call.
I would deeply appreciate it If someone can point me to the direction as to why the stack behaves this way.

Globally static int in self calling module

(Disclaimer: This is homework)
I am creating a shell program, let's call it fancysh. I am trying to add PATH (and other env vars) functionality to my shell; so far all is good. My naive approach was to store all these variables as static variables in fancysh.c. Now, however, I am trying to implement the environment variable SHLVL, which holds the current "depth" of the shell. For example, I can be running in the first instance of fancysh and SHLVL should read 1; upon calling fancysh again SHLVL should increment (and decrement when a shell is exited).
What I have tried...
fancysh.h
#ifndef FANCYSH_H
#define FANCYSH_H
extern int SHLVL;
#endif
fancysh.c
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

#include "fancysh.h"

int SHLVL;

int main() {
    /* some fancy code to determine if SHLVL is initialized */
    /* if not, init to 0 */
    SHLVL++;
    printf("%d\n", SHLVL);

    /* Test Code Only */
    int pid = fork();
    if (pid == 0 && SHLVL < 10)
        execlp("fancysh", "fancysh", (char *)NULL);
    wait(NULL);
    /* Test Code Only */

    /* shell code */
    SHLVL--;
    printf("%d\n", SHLVL);
    exit(0);
}
I used the answers here and here as part of this solution.
So how would I go about implementing the fancy code to determine if SHLVL is initialized? I had some ideas about using a combination of #ifdef and #define but I'm not 100% sure how to do this.
You need to get a grasp of the fact that different shell processes are different processes. Just because one instance of the shell is started within the scope of another instance of the shell does not mean that the former automatically inherits any data from the latter.
Or not directly, anyway. Any new instance of your shell will receive an environment from the process that starts it. If that environment contains a SHLVL variable then the new shell process can of course read that value, and it may possibly present a different value of that environment variable within its own scope.
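A minimal sketch of that inheritance-based approach (my own illustration, assuming a POSIX system; the helper name init_shlvl and the default of 0 when SHLVL is unset are my choices, not part of the question): read the value handed down by the parent, bump it, and export the new value so that any fancysh you spawn sees the incremented level.

#include <stdio.h>
#include <stdlib.h>

/* Hypothetical helper: derives this shell's level from the inherited
   environment and exports the new value for children. */
static int init_shlvl(void) {
    const char *inherited = getenv("SHLVL");   /* NULL if no parent set it */
    int level = (inherited != NULL) ? atoi(inherited) : 0;
    level++;

    char buf[32];
    snprintf(buf, sizeof buf, "%d", level);
    setenv("SHLVL", buf, 1);                   /* children will inherit this */
    return level;
}

int main(void) {
    int shlvl = init_shlvl();
    printf("SHLVL = %d\n", shlvl);
    return 0;
}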

Forks and Pointers in C

Can someone help me understand how the system handles variables that are set before a process makes a fork() call. Below is a small test program I wrote to try understanding what is going on behind the scenes.
I understand that the current state of a process is "cloned", variables included, at the time of the forking. My thought was, that if I malloc'd a 2D array before calling fork, I would need to free the array both in the parent and the child processes.
As you can see from the results below the sample code, the two values act as if they are totally separate from each other, yet they have the exact same address. I expected that my final result for tmp would be -4 no matter which process completed first.
I am newer to C and Linux, so could someone explain how this is possible? Perhaps the variable tmp becomes a pointer to a pointer which is distinct in each process? Thanks so much.
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <unistd.h>

int main()
{
    int tmp = 1;
    pid_t forkReturn = fork();
    if (!forkReturn) {
        /* child */
        tmp = tmp + 5;
        printf("Value for child %d\n", tmp);
        printf("Address for child %p\n", &tmp);
    }
    else if (forkReturn > 0) {
        /* parent */
        tmp = tmp - 10;
        printf("Value for parent %d\n", tmp);
        printf("Address for parent %p\n", &tmp);
    }
    else {
        /* Error calling fork */
        printf("Error calling fork\n");
    }
    return 0;
}
RESULTS of standard out:
Value for child 6
Address for child 0xbfb478d8
Value for parent -9
Address for parent 0xbfb478d8
It did indeed copy the entire address space, and changing memory in the child process does not affect the parent. The key to understanding this is to remember that a pointer can only point to something in your own process, and the copy happens at a lower level.
However, you should not call malloc() or free() at all in the child of fork. This can deadlock (another thread was in malloc() when you called fork()). The only functions safe to call in the child are the ones also listed as safe for signal handlers. I used to be able to claim this was true only if you wrote multithreaded code; however Apple was kind enough to spawn a background thread in the standard library, so the deadlock is real all the time. The child of fork should never be allowed to drop out of the if block. Call _exit to make sure it doesn't.
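As a sketch of the pattern that last paragraph recommends (my own illustration; /bin/echo is just a stand-in for whatever the child is supposed to exec): keep the child's code path tiny, stick to async-signal-safe calls, and make sure it can never fall out of the if block.

#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();
    if (pid == 0) {
        /* child: only async-signal-safe work, then exec or _exit */
        execl("/bin/echo", "echo", "hello from the child", (char *)NULL);
        _exit(127);    /* never fall through into the parent's code */
    }

    /* parent: reap the child and carry on with its own copy of memory */
    waitpid(pid, NULL, 0);
    printf("parent continues\n");
    return 0;
}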

Why use pthread_exit?

I'm trying to figure out the usage of pthread_exit using this example code:
#include <stdio.h>
#include <pthread.h>

void *PrintVar(void *arg)
{
    int *a = (int *) arg; // we can access memory of a!!!
    printf("%d\n", *a);
    return NULL;
}

int main(int argc, char *argv[])
{
    int a, rc;
    a = 10;
    pthread_t thr;
    pthread_create(&thr, NULL, PrintVar, &a);
    // why do I need it here?
    pthread_exit(&rc); /* process continues until the last
                          thread terminates */
}
There are two things I'm not quite sure about:
when we are using pthread_create - I'm passing 'a' parameter's address, but is this parameter being "saved" under "arg" of the PrintVar function?
for example, if I was using PrintVar(void *blabla) and wanted to pass 2 parameters from the main function, int a = 10 and int b = 20, how can I do that?
Why is the pthread_exit needed? It means - wait for the process to end - but what scenario can I get if I don't use that line?
thanks a lot!
when we are using pthread_create - I'm passing 'a' parameter's address, but is this parameter being "saved" under "arg" of the PrintVar function?
The "original" a (the one defined in main) is not being copied, you are only passing around a pointer to it.
for example, if I was using PrintVar(void *blabla) and wanted to pass 2 parameters from the main function, int a = 10 and int b = 20, how can I do that?
Put those two values in a struct and pass a pointer to such struct as argument to pthread_create (PrintVar, thus, will receive such a pointer and will be able to retrieve the two values).
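A minimal sketch of that struct approach (my own illustration; the names pair_t and PrintPair are not from the question):

#include <stdio.h>
#include <pthread.h>

typedef struct {
    int a;
    int b;
} pair_t;

void *PrintPair(void *arg)
{
    pair_t *p = (pair_t *) arg;    /* recover the struct pointer */
    printf("%d %d\n", p->a, p->b);
    return NULL;
}

int main(void)
{
    pair_t pair = { 10, 20 };      /* must stay alive while the thread runs */
    pthread_t thr;
    pthread_create(&thr, NULL, PrintPair, &pair);
    pthread_join(thr, NULL);
    return 0;
}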
and my second question is why is the pthread_exit needed? It means - wait for the process to end - but what scenario can I get if I don't use that line?
pthread_exit terminates the current thread without terminating the process if other threads are still running; returning from main, instead, is equivalent to calling exit which, as far as the standard is concerned, should "terminate the program" (thus implicitly killing all the threads).
Now, with the C standard being thread-agnostic until C11 and support for threading in the various Unixes being a relatively recent addition, exit might, depending on the libc/kernel version, kill only the current thread or kill all the threads.
Still, in current versions of libc, exit (and thus return from main) should terminate the process (and thus all its threads), actually using the syscall exit_group on Linux.
Notice that a similar discussion applies for the Windows CRT.
The detached attribute merely determines the behavior of the system when the thread terminates; it does not prevent the thread from being terminated if the process terminates using exit(3) (or equivalently, if the main thread returns).
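A small sketch of the difference described above (my own illustration; the worker just sleeps for a second so it outlives main): because main ends with pthread_exit instead of return, the process stays alive until the worker finishes, whereas a plain return from main would have torn it down immediately.

#include <stdio.h>
#include <pthread.h>
#include <unistd.h>

void *worker(void *arg)
{
    (void) arg;
    sleep(1);                      /* outlive main's stack frame */
    printf("worker still ran to completion\n");
    return NULL;
}

int main(void)
{
    pthread_t thr;
    pthread_create(&thr, NULL, worker, NULL);

    /* terminate only this (main) thread; the process lives on
       until the last thread exits */
    pthread_exit(NULL);
}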

Exit Handler in C

All,
I want to develop an exit handler in my program.
I'm really new to C; is it all about managing signals in C?
How do I know if my program ended in a good way or not?
If not, how do I get the maximum information when exiting?
C (C89 and C99 standards) provides atexit() to register a function to be called when the program exits. This has nothing to do with signals. Unlike signal handlers, you can register multiple exit handlers, and they are called in reverse order of how they were registered with atexit().
The convention is that when a program exits cleanly it returns exit status 0. This can be done by returning 0 from main() or by calling exit(0) from anywhere in your program.
In Unix/Linux/POSIX type operating systems (not sure about Windows), the parent process gets exit status information about the child process using the wait() system call or one of its variants.
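A minimal sketch of that parent-side view (my own illustration, assuming a POSIX system; the child here just exits with status 42 to have something to report):

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();
    if (pid == 0) {
        exit(42);                  /* child: pretend this is our program's exit status */
    }

    int status;
    wait(&status);                 /* parent: collect the child's status */
    if (WIFEXITED(status)) {
        printf("child exited with status %d\n", WEXITSTATUS(status));
    }
    return 0;
}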
Example: Here is a simple program and its output to demonstrate atexit():
#include <stdlib.h>
#include <stdio.h>

static void exit_handler1(void)
{
    printf("Inside exit_handler1()!\n");
}

static void exit_handler2(void)
{
    printf("Inside exit_handler2()!\n");
}

int main(int argc, char *argv[])
{
    atexit(exit_handler1);
    atexit(exit_handler2);
    return 0;
}
Output generated by the program:
Inside exit_handler2()!
Inside exit_handler1()!
Look here, you will find what you want:
http://www.cplusplus.com/reference/cstdlib/exit/
I added a new link here, take a look:
Exception libraries for C (not C++)
If I understand correctly, you are asking about returning a result from your program when it exits. You should use the exit(x) function to return a value from your program; x can be any integer value. And don't forget to put #include <stdlib.h> at the start of your program.
