Hiding command line arguments for a C program in Linux

How can I hide the command line arguments for a C program running on Linux so that they aren't visible to other users via "w", "ps auxwww", or similar commands?

It's actually rather difficult (I'll stop short of saying impossible since there may be a way I'm not aware of) to do this, especially if a user has access to the /proc file system for your process.
Perhaps the best way to prevent people from seeing your command line arguments is to not use command line arguments :-)
You could stash your arguments in a suitably protected file called (for example) myargs.txt, then run your program with:
myprog @myargs.txt
(The @ prefix is just a convention borrowed from tools like gcc for "read the real arguments from this file"; note that a plain # would be swallowed by the shell as a comment.) Of course, you'll have to modify myprog to handle the "arguments in a file" scenario.
Alternatively, you could set the arguments into environment variables and have your program use getenv.
However, I'm not aware of any method that can protect you from a suitable-empowered process (such as one run by root).

Modify the contents of argv in your program:
#include <stdio.h>
#include <time.h>

/* Busy-wait for roughly msecs milliseconds. */
void delay(long int msecs)
{
    clock_t delay = msecs * CLOCKS_PER_SEC / 1000;
    clock_t start = clock();
    while (clock() - start < delay)
        ;
}

int main(int argc, char **argv)
{
    if (argc == 2)
    {
        printf("%s\n", argv[1]);
        delay(6000);
        argv[1][0] = 'x';
        argv[1][1] = '.';
        argv[1][2] = 'x';
        printf("%s\n", argv[1]);
        delay(5000);
        printf("done\n");
    }
    else
        printf("argc != 2: %d\n", argc);
    return 0;
}
Invocation:
./argumentClear foo
foo
x.x
done
Result, viewed by ps:
asux:~ > ps auxwww | grep argu
stefan 13439 75.5 0.0 1620 352 pts/5 R+ 17:15 0:01 ./argumentClear foo
stefan 13443 0.0 0.0 3332 796 pts/3 S+ 17:15 0:00 grep argu
asux:~ > ps auxwww | grep argu
stefan 13439 69.6 0.0 1620 352 pts/5 R+ 17:15 0:02 ./argumentClear x.x
stefan 13446 0.0 0.0 3332 796 pts/3 S+ 17:15 0:00 grep argu
Remark: my delay function doesn't work as expected. Instead of 11 seconds, the program runs for about 2-3. I'm not a big C programmer. :) The delay function needs improvement here.

As far as I know, that information is stored in kernel space. Short of writing a kernel module, you will not be able to hide this information because any program can query the proc filesystem to see the command line arguments (this is what ps does).
As an alternative, you can read in your command line args on stdin then populate an array to pass to the command line argument handler. Or, better yet, add support for your program to read a configuration file that contains the same command line argument information and set the permissions so that only the owner can read the file.
I hope this helps.

To hide the arguments from the ps command, you could use the hack I always use:
sprintf(argv[0], "My super long argument list                                                                  ");
Be sure to pad the string with a few lines' worth of spaces (typed with the space bar); a literal line break inside the string literal will make the compiler throw an error!
Keep in mind to change argv[0] only after parsing the command line!
59982 pts/1 SLl+ 0:00 My super long argument list
strings /proc/59982/cmdline
My super long argument list
It's a hack, but an intruder will issue a "ps axw" first.
Always monitor mission-critical servers and check the logged-in users!

Related

C & bash redirection process communication

Look at this bash:
mkfifo fifo
./processA <fifo | processB >fifo
In process A, I generate a file which is sent to process B. Then I want to process the result of process B.
So in A, I just send info to B with printf calls to stdout. Then I create a thread which just does read(stdin). After creating this thread, I send the data to B via printf.
I do not understand why this whole thing blocks. The read never receives anything. Why? The two processes are tested and work fine separately. The whole pipeline also works perfectly (doesn't block) if I don't read (but then I can't process B's output).
Can somebody explain what I am getting wrong?
Sorry for my approximate English. I am also interested in your clean solution if you have one (but I would prefer to understand why this one is not working).
Edit: here is the main (process A):
// managing some argument processing, constructing objects ...
pthread_t thread; // creation of the thread supposed to read
if (pthread_create(&thread, NULL, IsKhacToolKit::threadRead, solver) != 0) {
    fprintf(stderr, "\nSomething went wrong while creating Reader thread\n");
}
solver->generateDimacFile(); // printing to stdout
pthread_exit(0);
}
The function executed by the thread is just supposed to read stdin and print the resulting string to stderr (for now). Nothing is printed to stderr right now.
generateDimacFile prints a char* to stdout (and calls fflush(stdout) at the end), which process B uses. Process B is this one: http://www.labri.fr/perso/lsimon/glucose/
Here is the function executed by the thread now:
char *satResult = (char *)malloc(sizeof(char) * solutionSize);
for (int i = 0; i < 2; i++) {
    read(0, satResult, solutionSize);
    fprintf(stderr, "\n%s\n", satResult);
}
DEBUGFLAG("Getting result from glucose");
OK, so now, thanks to Maxim Egorushkin, I discovered that the first read doesn't block, but the next one blocks, using this bash instead:
./processA <fifo | stdbuf -o0 ./processB >fifo
and if I use this one:
stdbuf -o0 ./processA <fifo | stdbuf -o0 ./processB >fifo
most of the time I can read twice without blocking (sometimes it blocks). I still can't read three times. I don't understand why this changes anything, because I fflush stdout in generateDimacFile.
Look at what's actually printed to stderr when it doesn't block (reading twice):
c
c This is glucose 4.0 -- based on MiniSAT (Many thanks to MiniSAT team)
c
c This is glucose 4.0 -- based on MiniSAT (Many thanks to MiniSAT team)
c
c Reading from standard input... Use '--help' for help.
s to MiniSAT team)
c
The corresponding expected result:
c
c This is glucose 4.0 -- based on MiniSAT (Many thanks to MiniSAT team)
c
c Reading from standard input... Use '--help' for help.
c | |
s UNSATISFIABLE
You have a potentially blocking race condition. If processB needs to read a large amount of data before it produces anything, then it is possible that processA will be data-starved before it produces enough data. Once that happens, there's a deadlock. Or, if processA never generates any data until it reads something, then both processes will just sit there. It really depends on what processA and processB are doing.
If the processes are sufficiently simple, what you are doing should work. For instance:
$ cat a.sh
#!/bin/sh
echo "$$"
while read line; do echo $(($line + 1 )); echo $$ read: $line >&2; sleep 1; done
$ ./a.sh < fifo | ./a.sh > fifo
26385 read: 26384
26384 read: 26385
26385 read: 26386
26384 read: 26385
26385 read: 26386
26384 read: 26387
26385 read: 26388
26384 read: 26387
^C
Using | or > in bash redirects stdout away from the terminal, which makes stdio block-buffered, so the process does not output anything until the buffer is full or fflush is invoked.
Try disabling all buffering with stdbuf -o0 ./processA <fifo | stdbuf -o0 processB >fifo.
stderr does not get redirected in your command line, I am not sure why you write into it. Write into stdout.
Another issue is that
read(0, satResult, solutionSize);
fprintf(stderr, "\n%s\n", satResult);
is incorrect: satResult is not zero-terminated and errors are not handled. A fix:
ssize_t r = read(0, satResult, solutionSize);
if (r > 0)
    fwrite(satResult, r, 1, stdout);
else {
    /* Handle read error or end of file. */
}

Why might a background process be interrupted after some time of successful execution? (exit code 248)

I wrote a C program for the Raspberry Pi which reads Wiegand card IDs from two readers and puts them in a text file. The program is based on the pigpio library and is in fact just a modified example:
#include <stdio.h>
#include <unistd.h>   /* for sleep() */
#include <pigpio.h>
#include "wiegand.h"

void callback1(int bits, uint32_t value)
{
    FILE *saved = stdout;
    stdout = fopen("log_readers.txt", "a");
    printf("Reader_1: bits=%d value=%u\n", bits, value);
    fclose(stdout);
    stdout = saved;
}

void callback2(int bits, uint32_t value)
{
    FILE *saved = stdout;
    stdout = fopen("log_readers.txt", "a");
    printf("Reader_2: bits=%d value=%u\n", bits, value);
    fclose(stdout);
    stdout = saved;
}

int main(int argc, char *argv[])
{
    Pi_Wieg_t *w1;
    Pi_Wieg_t *w2;
    if (gpioInitialise() < 0) return 1;
    w1 = Pi_Wieg(14, 15, callback1, 5);
    w2 = Pi_Wieg(23, 24, callback2, 5);
    sleep(300);
    Pi_Wieg_cancel(w1);
    Pi_Wieg_cancel(w2);
    gpioTerminate();
}
When I compile and run the program everything works fine
(checked log_readers.txt file with tail -f)
When I run the binary in background mode
sudo ./all_readers.bin &
it's also executed correctly, but after some time it stops working.
Immediately after running it, ps shows the process:
pi@raspberrypi ~/sandbox $ ps ax | grep all_readers
3768 pts/0 S 0:00 sudo ./all_readers.bin
3769 pts/0 SLl 0:00 ./all_readers.bin
But if I run the same command after 5 minutes, there is no output from ps:
pi@raspberrypi ~/sandbox $ ps ax | grep all_readers
3782 pts/0 S+ 0:00 grep --color=auto all_readers
[2]- Exit 248 sudo ./all_readers.bin
Looks like the process was terminated. According to my observations it does not depend on program-related events like reading a card. Also, there is enough free memory in RAM and on disk. I tried to catch the problem via the pidstat utility, but didn't see any error string.
What does exit code 248 mean? And what could be the reason for the background process terminating, and how can I diagnose it?
Any suggestion is much appreciated.
I hadn't noticed the obvious thing: the program simply exits once sleep(300) returns, i.e. after five minutes. I need to remove the sleep(300) and make an infinite loop instead, like this: while (1) { sleep(1); }

Advanced I/O redirection in a C program (simple shell)

I'm designing a simple shell, but I have a problem with advanced redirection.
I am able to do this: ls -al > a.txt
But I couldn't do this: wc < a.txt > b.txt
How can I do that?
Here is where I perform my I/O redirection:
char *inpu = NULL; // inpu is a global variable
#define CREATE_FLAGS  (O_WRONLY | O_CREAT | O_TRUNC)
#define CREATE_FLAGS1 (O_WRONLY | O_CREAT | O_APPEND)
#define CREATE_FLAGS2 (O_RDONLY | O_CREAT | O_APPEND)
#define CREATE_MODE   (S_IRUSR | S_IWUSR | S_IRGRP | S_IROTH)
#define MAXCHARNUM 128
#define MAXARGNUM 32

char *argsexec[MAXARGNUM]; /* This stores my executable arguments, like cd, ls. */

void ioredirection(int f) {
    int k, i, m;
    int input = -1;
    int output = -1;
    int append = -1;
    int fdin, fdout;
    for (k = 0; k < f; k++) {
        if (strcmp(argsexec[k], "<") == 0) {
            input = k;       // argument position of "<"
            m = 1;
            argsexec[k] = NULL;
        }
        else if (strcmp(argsexec[k], ">") == 0) {
            output = k;      // argument position of ">"
            m = 2;
            argsexec[k] = NULL;
        }
        else if (strcmp(argsexec[k], ">>") == 0) {
            append = k;      // argument position of ">>"
            m = 3;
            argsexec[k] = NULL;
        }
    }
    if (m == 1) {
        int inp1;
        fdin = open(argsexec[input + 1], O_RDONLY, CREATE_MODE);
        dup2(fdin, STDIN_FILENO);
        close(fdin);
        inp1 = execlp(argsexec[0], argsexec[0], NULL);
    }
    if (m == 2) {
        fdout = open(argsexec[output + 1], CREATE_FLAGS, CREATE_MODE);
        dup2(fdout, STDOUT_FILENO);
        close(fdout);
        execvp(argsexec[0], argsexec);
    }
    if (m == 3) {
        fdout = open(argsexec[append + 1], CREATE_FLAGS1, CREATE_MODE);
        dup2(fdout, STDOUT_FILENO);
        close(fdout);
        execvp(argsexec[0], argsexec);
    }
}
b is set like this:
inpu = strtok(str, " ");
while (inpu != NULL) {
    argsexec[b] = inpu;
    b++;
    inpu = strtok(NULL, " ");
}
And I'm calling it from the child process:
if (pid == 0) {
    ioredirection(b);
I hope it is clear to understand; my full code is really long, so I tried to cut it down like this. Any suggestion will be appreciated.
I ran your code through a formatter and got (greatly simplified):
void ioredirection(int f) {
for (k over all arguments) {
set m based on argsexec[k]
}
test m against 1, 2, and 3
}
Thus, if there are two (or more) "re-direction" operations, the loop (for k) will set m two (or more) times. Then, the loop having terminated, you (finally) test m once against each of three possible values.
The first problem should now be clear (but since this is a school project I'm not going to solve it for you :-) ).
The second problem is clear only in looking at the three tests performed on m. Looking at just one will suffice here:
if (m == 1) {
int inp1;
fdin = open(argsexec[input + 1], O_RDONLY, CREATE_MODE);
dup2(fdin, STDIN_FILENO);
close(fdin);
inp1 = execlp(argsexec[0], argsexec[0], NULL);
}
(The other two use execvp rather than execlp. [1]) If an exec* family function call succeeds, the current process will be replaced immediately, so that the exec* never returns. So if you need to redirect two (or more) *_FILENO values, the eventual exec* call has to be put off until after all other redirections are also done.
[1] One of these is not the appropriate function. OK, no dancing around here: execvp is the proper choice. :-)
A third problem occurs if a redirect is not followed by a file name (see below).
The last two potential problems with this code snippet are much less obvious and need a look at the whole thing. Whether one is a "real bug" depends on how simple your shell is meant to be.
The argsexec array can hold up to 32 (MAXARGNUM) char * pointer values. The entry in argsexec[0] should be the name of the binary to run—it's supplied to execvp after all—and to use execvp, argsexec[0] must be the first of however many char *s in sequence:
argsexec[0] becomes argv[0] (in the program you invoke),
argsexec[1] becomes argv[1] (the program's first argument),
argsexec[2] becomes argv[2],
and so on ... until:
argsexec[i], for the smallest integer i, is NULL: and that tells the execv* family of functions: "OK, you can stop copying now."
Let's skip over definite bug #3 for a bit longer, and talk about potential bug #4: It's not clear here whether any argsexec[i] has been set to NULL. The argument f to ioredirection is the number of valid entries, so argsexec[f - 1] is the last entry that must not be NULL; we can't tell, from this code fragment, whether the next one (the f+1'th, i.e. argsexec[f]) is NULL. To use the execvp function, it will need to be NULL.
If there are some I/O redirections, you will set some argsexec[i] to NULL. That will terminate the array and make the execvp call work correctly. If not ... ?
This leads to potential bug #5: In "real" Unix shells you can place I/O redirections "in the middle" of a command:
prog > stdout arg1 2> stderr arg2 < stdin arg3
This runs prog with three arguments (arg1, arg2, and arg3). In fact, you can even put the redirections first:
<stdin >stdout 2>stderr prog arg1 arg2 arg3
and in some shells you can "re-direct" more than once (and then not even bother running a command, if you don't want one):
$ ls
$ > foo > bar > baz
$ ls
bar baz foo
(but other shells forbid it:
$ rm *; exec csh -f
% > foo > bar > baz
Ambiguous output redirect.
not that csh is anything to emulate. :-) )
If—this is a big "if"—you want to allow this or something similar, you'll need to "execute and remove" each I/O redirection as it appears, moving remaining arguments down so that argsexec remains "densely populated" with the various char * values that are to be supplied to the program. For instance, if len is the length of the valid, non-NULL entries in the array, so that argsexec[len] is NULL, and you need to "remove" argsexec[j] and argsexec[j + 1] (which contain a redirection like ">", and a file name, respectively), then:
for (i = j + 2; i <= len; i++) {
argsexec[i - 2] = argsexec[i];
}
would do the trick (the loop runs to i <= len so that it copies the terminating NULL as well).
So, finally, definite bug #3: what happens if a redirect is at the very last position, argsexec[f - 1]? The for (k ...) loop runs k from 0 to f - 1 inclusive. If, when k == f - 1, argsexec[k] is a redirect, the file name must be in argsexec[f].
But we just noted (above) that argsexec[f] needs to be NULL. That is, if someone tries:
ls >
then argsexec[] should contain "ls", ">", and NULL.
Here's what the "real shells" sh and csh do in that case:
$ ls >
Syntax error: newline unexpected
% ls >
Missing name for redirect.
You'll need something similar: a way to reject the attempt, if the file name after the redirection is missing.

Unix/Linux pipe behavior when reading process terminates before writing process

I have this:
$ ls -lh file
-rw-r--r-- 1 ankur root 181M Sep 23 20:09 file
$ head -6 file
z
abc
abc
abc
abc
abc
$ cat file | grep -m 1 z
z
Question:
Why is the cat command in the last line not dying prematurely with SIGPIPE? I think this should happen because grep terminates in no time compared to cat file, which cats 181 MB of file. With the reading process gone, cat will try to write to a broken pipe and should die with SIGPIPE.
Update:
I ended up writing this: readstdin.c
#include <unistd.h>
#include <stdio.h>

int main() {
    ssize_t n;
    char a[5];
    n = read(0, a, 3);
    printf("read %zd bytes\n", n);
    return 0;
}
I use it like this:
$ cat file | ./readstdin
$ yes | ./readstdin
But still, cat or yes does not die prematurely. I expect it to, because the reading process terminates before the writing process is done writing.
If the read end of some pipe(2) is close(2)-ed, further write(2)s will get a SIGPIPE signal(7). Read also pipe(7).
They would get the SIGPIPE when the pipe buffer becomes full.
In the yes | ./readstdin command, the yes command gets a SIGPIPE signal. Just try yes in a terminal: it spits output indefinitely, ad nauseam, till you kill it.
In the cat file | ./readstdin command, it could happen (notably if file is quite small, less than sysconf(_POSIX_PIPE_BUF) bytes, which might be 4096 bytes) that the cat command is close(2)-ing the STDOUT_FILENO descriptor while the pipe is still not full. Then cat may not get any SIGPIPE.
Normally, the reading process closing the stream causes a SIGPIPE. But the grep man page mentions that -m stops reading and "ensures that standard input is positioned to just after the last matching line before exiting". So it doesn't actually close the stream. You can demonstrate it like this:
cat file | (grep -m1 z && grep -m1 c)
You'll get the first c after the first z, which is sometimes useful. After the last grep exits, there is no place for the stream to go, so it's left unread and the whole group of commands exits. You can demonstrate:
(while true; do echo z; sleep 1; done) | grep -m3 z
(while true; do echo z; sleep 1; done) | grep --line-buffered z | head -3

Any idea why this code isn't running in parallel

Below is my current code which, as the title says, I thought would be running in parallel. I am working on Mac OS X, and in the terminal I am using bash. The code is written in C and I am trying to use OpenMP. It compiles and runs without any errors, but I do not believe it is running in parallel.
To explain the code for easier understanding. First block is just declarations of a bunch of variables. The next chunk is the for loop, which runs commands in terminal.
First command is to run an executable program with four parameters: a double, a fixed integer, a string, and another fixed integer. The double is dependent on which iteration of the for loop you are on.
The second, third, fourth and fifth commands all deal with renaming and moving files which the executable program spits out. And this completes the for loop. My hope was that this for loop would run in parallel, since each iteration takes about 30 seconds.
Once outside the for loop, a file which has been written to in each iteration is then moved. I realize the order in which the file is written might be faulty, but that is only going to be a concern once it is actually running in parallel!
#include <stdio.h>
#include <string.h>

int main() {
    int spot;
    double th;
    char command[50];
    char path0[] = "/home/path0";
    char path1[] = "/home/path1";
    char path2[] = "/home/path2";
    char path3[] = "/home/path3";

    #pragma omp parallel for private(command,path)
    for (th = 0.004, spot = 0; th < 1; th += 0.005, spot++) {
        sprintf(command, "./program %lf 19 %s 418", th, path0);
        system(command);
        sprintf(command, "mv fileA.ppm a.%04d.ppm", spot);
        system(command);
        sprintf(command, "mv a.%04d.ppm %s", spot, path1);
        system(command);
        sprintf(command, "mv fileB.ppm b.%04d.ppm", spot);
        system(command);
        sprintf(command, "mv b.%04d.ppm %s", spot, path2);
        system(command);
    }
    sprintf(command, "mv FNums.txt %s", path3);
    system(command);
    return 0;
}
Thanks for any insight and help you guys can offer.
Since this is basically shell script based already, consider using xargs:
First of all, make sure multiple instances of ./program don't overwrite each other's fileA.ppm if you run it in parallel. I'll assume you'll start writing them out as fileA.ppm.0.004 in this example.
Then make a script you can invoke with the spot number:
#!/bin/sh
spot=$1
th=$(echo "$spot" | awk '{print 0.004 + 0.005*$1 }')
./program "$th" 19 /home/path0 418
mv "fileA.ppm.$th" "$(printf '/home/path1/a.%04d.ppm' "$spot")"
mv "fileB.ppm.$th" "$(printf '/home/path2/b.%04d.ppm' "$spot")"
chmod a+x yourscript, and you can now run and test each instance using ./yourscript 0, ./yourscript 1, etc.
When it works, run them 8 (or more) in parallel using:
printf "%s\n" {0..199} | xargs -P 8 -n 1 ./yourscript
