Below is my current code, which, as the title says, I thought would be running in parallel. I am working on Mac OS X, using bash in Terminal. The code is written in C and I am trying to use OpenMP. It compiles and runs without any errors, but I do not believe it is actually running in parallel.
To explain the code for easier understanding: the first block is just declarations of a bunch of variables. The next chunk is the for loop, which runs commands in the terminal.
The first command runs an executable program with four parameters: a double, a fixed integer, a string, and another fixed integer. The double depends on which iteration of the for loop you are on.
The second, third, fourth and fifth commands all deal with renaming and moving files which the executable program spits out, and that completes the for loop. My hope was that this for loop would run in parallel, since each iteration takes about 30 seconds.
Once outside the for loop, a file which has been written to in each iteration is then moved. I realize the order in which that file gets written might be faulty, but that is only going to be a concern once it is actually running in parallel!
#include <stdio.h>
#include <string.h>
int main(){
int spot;
double th;
char command[50];
char path0[] = "/home/path0";
char path1[] = "/home/path1";
char path2[] = "/home/path2";
char path3[] = "/home/path3";
#pragma omp parallel for private(command,path)
for (th=0.004, spot =0; th<1; th += 0.005, spot++) {
sprintf(command, "./program %lf 19 %s 418", th, path0);
system(command);
sprintf(command, "mv fileA.ppm a.%04d.ppm", spot);
system(command);
sprintf(command, "mv a.%04d.ppm %s", spot, path1);
system(command);
sprintf(command, "mv fileB.ppm b.%04d.ppm", spot);
system(command);
sprintf(command, "mv b.%04d.ppm %s", spot, path2);
system(command);
}
sprintf(command, "mv FNums.txt %s", path3);
system(command);
return(0);
}
Thanks for any insight and help you guys can offer.
Since this is basically shell script based already, consider using xargs:
First of all, make sure multiple instances of ./program don't overwrite each other's fileA.ppm if you run it in parallel. I'll assume you'll start writing them out as fileA.ppm.0.004 in this example.
Then make a script you can invoke with the spot number:
#!/bin/sh
spot=$1
th=$(echo "$spot" | awk '{print 0.004 + 0.005*$1 }')
./program "$th" 19 /home/path0 418
mv "fileA.ppm.$th" "$(printf '/home/path1/a.%04d.ppm' "$spot")"
mv "fileB.ppm.$th" "$(printf '/home/path2/b.%04d.ppm' "$spot")"
chmod a+x yourscript, and you can now run and test each instance using ./yourscript 0, ./yourscript 1, etc.
When it works, run them 8 (or more) in parallel using:
printf "%s\n" {0..199} | xargs -P 8 -n 1 ./yourscript
I am trying to write a script that takes a file of inputs and sends them one by one to a C program whenever the program asks for input (scanf).
I want my script to print every input before it sends it to the program.
The whole output of the C program (including the inputs I provided) should be printed to a file.
I am looking for a solution that does not change my C code.
For example:
My C Program: test.c
#include <stdio.h>
int main()
{
char a[50];
int b;
printf("Enter your name:\n");
scanf("%s",a);
printf("HELLO: %s\n\n", a);
printf("Enter your age:\n");
scanf("%d",&b);
printf("Your age is: %d\n\n", b);
return 0;
}
My Script: myscript.sh
#!/bin/bash
gcc test.c -o mytest
cat input | ./mytest >> outputfile
I also tried
#!/bin/bash
gcc test.c -o mytest
./mytest < input > outputfile
My Input File: input
Itzik
25
My Output File: outputfile
Enter your name:
HELLO: Itzik
Enter your age:
Your age is: 25
Desired outputfile:
Enter your name:
Itzik
HELLO: Itzik
Enter your age:
25
Your age is: 25
Thanks a lot!
Oh my .. this is going to be a bit ugly.
You can start the program in the background, with it reading from a pipe, then hijack that pipe and write to it, but only when the program is waiting for input. And before you write to the pipe, you write to standard output.
# Launch program in background
#
# The tail command hangs forever and does not produce output, thus
# prog will wait.
tail -f /dev/null | ./prog &
# Capture the PIDs of the two processes
PROGPID=$!
TAILPID=$(jobs -p %+)
# Hijack the write end of the pipe (standard out of the tail command).
# Afterwards, the tail command can be killed.
exec 3>/proc/$TAILPID/fd/1
kill $TAILPID
# Now read line by line from our own standard input
while IFS= read -r line
do
# Check the state of prog ... we wait while it is Running. More
# complex programs than prog might enter other states which you
# need to take care of!
state=$(ps --no-headers -wwo stat -p $PROGPID)
while [[ "x$state" == xR* ]]
do
sleep 0.01
state=$(ps --no-headers -wwo stat -p $PROGPID)
done
# Now prog is waiting for input. Display our line, and then send
# it to prog.
echo "$line"
echo "$line" >&3
done
# Close the pipe
exec 3>&-
I've compiled your source code above to an executable named prog and saved the above code into pibs.sh. The result:
$ bash pibs.sh < input
Enter your name:
Daniel
HELLO: Daniel
Enter your age:
29
Your age is: 29
What you are asking is really not possible without writing a program that parses the output from test.c and knows what is a prompt for input, and what is not.
Depending on how complicated your program is, you may have some luck with the chat program (see man chat) or GNU expect.
Your best bet is, as "BobRun" says, to modify your program. No matter how much you don't want to modify your program, it is time to put all those scanf() calls you might have littered through your code behind proper input functions like this:
int input_int(const char *prompt)
{
    printf("%s:\n", prompt);
    int i = 0;
    scanf("%d", &i);
    /* Eat the rest of the line */
    int ch;
    do
        ch = fgetc(stdin);
    while (ch != '\n' && ch != EOF);
    return i;
}
Now, adding error checking and echoing of input becomes trivial, and your program might become easier to read.
Getting rid of that scanf("%s", ...) bug/security hole would also become easy.
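For example, a string-reading counterpart along the same lines, sketched with fgets instead of scanf("%s", ...), which also echoes what it read so the input shows up in a redirected transcript (the function name and buffer handling are just illustrative; it needs <stdio.h> and <string.h>):

char *input_string(const char *prompt, char *buf, size_t size)
{
    printf("%s:\n", prompt);
    if (fgets(buf, size, stdin) == NULL)
        buf[0] = '\0';                 /* treat EOF as an empty answer */
    buf[strcspn(buf, "\n")] = '\0';    /* strip the trailing newline */
    printf("%s\n", buf);               /* echo the input into the output */
    return buf;
}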
And if you think this is a large job, well, suck it up. You should really have done it from the beginning when the job was small, and if you delay the job any more, it will soon be humongous.
I'm trying to use stdbuf to line buffer the output of a program but I can't seem to make it work as I would expect. Using this code:
#include <stdio.h>
#include <unistd.h>
int main (void)
{
int i=0;
for(i=0; i<10; i++)
{
printf("This is part one");
fflush(stdout);
sleep(1);
printf(" and this is part two\n");
}
return 0;
}
I see This is part one, a one-second wait, then and this is part two\nThis is part one.
I expected that running it as
stdbuf --output=L ./test.out
would cause the output to be a one-second delay and then This is part one and this is part two\n repeating at one-second intervals. Instead I see the same output as in the case when I don't use stdbuf.
Am I using stdbuf incorrectly, or does the call to fflush count as "adjusting" the buffering as described in the stdbuf man page?
If I can't use stdbuf to line buffer in this way is there another command line tool that makes it possible?
Here are a couple of options that work for me, given the sample code, and run interactively (the output was to a pseudo-TTY):
./program | grep ^
./program | while IFS= read -r line; do printf "%s\n" "$line"; done
In a couple of quick tests, both output a complete line at a time. If you need to pipe it further, grep's --line-buffered option should be useful.
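As for the fflush question: the explicit fflush(stdout) is what pushes the partial line out, and no buffering mode (stdbuf's or your own) will hold back data that you flush yourself. If you can drop the fflush from the source, then either stdbuf --output=L or an explicit setvbuf gives the behaviour you expected; a small sketch of the in-program version:

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* Line-buffer stdout explicitly (roughly what stdbuf --output=L arranges
       at program startup). It must be done before the first write to stdout. */
    setvbuf(stdout, NULL, _IOLBF, 0);

    for (int i = 0; i < 10; i++) {
        printf("This is part one");
        /* no fflush here: flushing would emit the partial line immediately,
           which is exactly what defeats line buffering in the original code */
        sleep(1);
        printf(" and this is part two\n");   /* the whole line appears now */
    }
    return 0;
}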
I am using gcc version 5.3.0 20151204 (Ubuntu 5.3.0-3ubuntu1~14.04).
I read this and found this line:
int exit_status = system("gnome-terminal");
So when I add it to my code, it only opens a new terminal window (well, that's what he was asking for), but my program keeps running in the old one.
Is there any way to run my program in a new terminal window?
Also, when the program finishes executing, the terminal window gets closed, as if I had typed the exit command.
system("gnome-terminal"); will run the given command, wait for it to exit, and then continue with your program. That's why your program continues to run in the current terminal window.
Rather than trying to do this in C, it probably makes more sense to write a shell script wrapper for your program, and use that script to launch your program in a new terminal window:
#!/bin/bash
gnome-terminal -e "./your-program-name your program arguments"
Make the script executable (chmod +x script-name), and then you can run it just like you would a C program. You can even have it forward the arguments from the script to your actual program:
#!/bin/bash
gnome-terminal -e "./your-program-name $*"
Note that rather than using gnome-terminal (which assumes the user has gnome installed), you can use the more neutral x-terminal-emulator command instead (see How can I make a script that opens terminal windows and executes commands in them?).
If you really want to do this from your C program, then I'd recommend doing something like this:
#include <stdio.h>
#include <stdlib.h>
char cmd[1024];
int main(int argc, char *argv[]){
// re-launch in new window, if needed
char *new_window_val = getenv("IN_NEW_WINDOW");
const char *user_arg = argc < 2 ? "" : argv[1];
if (!new_window_val || new_window_val[0] != '1') {
snprintf(cmd, sizeof(cmd), "gnome-terminal -e \"env IN_NEW_WINDOW=1 %s %s\"", argv[0], user_arg);
printf("RELAUNCH! %s\n", cmd);
return system(cmd);
}
// do normal stuff
printf("User text: %s\n", argv[1]);
return 0;
}
Using an environment variable (IN_NEW_WINDOW in this case) to check if you've already launched in a new window should make it so that the new window only opens once. Note that the above code assumes a program with only one argument.
However, I still think using the wrapper script is a better solution.
I'm trying to make a simple userspace program that dynamically generates file contents when a file is read, much like a virtual filesystem. I know there are programs like FUSE, but they seem a bit heavy for what I want to do.
For example, a simple counter implementation would look like:
$ cat specialFile
0
$ cat specialFile
1
$ cat specialFile
2
I was thinking that specialFile could be a named pipe, but I haven't had much luck. I was also thinking select may help here, but I'm not sure how I would use it. Am I missing some fundamental concept?
#include <stdio.h>
int main(void)
{
char stdoutEmpty;
char counter;
while (1) {
if (stdoutEmpty = feof(stdout)) { // stdout is never EOF (empty)?
printf("%d\n", counter++);
fflush(stdout);
}
}
return 0;
}
Then usage would be something like:
shell 1 $ mkfifo testing
shell 1 $ ./main > testing
shell 2 $ cat testing
# should be 0?
shell 2 $ cat testing
# should be 1?
You need to use FUSE. A FIFO will not work, because either your program keeps pushing content to stdout (in which case cat will never stop), or it closes stdout, in which case you obviously can't write to it anymore.
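For what it's worth, the FUSE version of the counter is not much code. Here is a rough sketch against the libfuse 2 high-level API (the file name, size and permissions are just illustrative); build with gcc counterfs.c -o counterfs $(pkg-config fuse --cflags --libs), then ./counterfs -s mountpoint and cat mountpoint/specialFile (-s keeps it single-threaded so the counter updates stay simple):

#define FUSE_USE_VERSION 26
#include <fuse.h>
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>

static int counter = 0;                 /* the dynamically generated content */

static int counter_getattr(const char *path, struct stat *st)
{
    memset(st, 0, sizeof(*st));
    if (strcmp(path, "/") == 0) {
        st->st_mode = S_IFDIR | 0755;
        st->st_nlink = 2;
    } else if (strcmp(path, "/specialFile") == 0) {
        st->st_mode = S_IFREG | 0444;
        st->st_nlink = 1;
        st->st_size = 32;               /* generous upper bound; direct_io makes it moot */
    } else {
        return -ENOENT;
    }
    return 0;
}

static int counter_readdir(const char *path, void *buf, fuse_fill_dir_t filler,
                           off_t offset, struct fuse_file_info *fi)
{
    (void)offset; (void)fi;
    if (strcmp(path, "/") != 0)
        return -ENOENT;
    filler(buf, ".", NULL, 0);
    filler(buf, "..", NULL, 0);
    filler(buf, "specialFile", NULL, 0);
    return 0;
}

static int counter_open(const char *path, struct fuse_file_info *fi)
{
    if (strcmp(path, "/specialFile") != 0)
        return -ENOENT;
    if ((fi->flags & O_ACCMODE) != O_RDONLY)
        return -EACCES;
    fi->direct_io = 1;                  /* bypass the page cache so every cat re-reads */
    return 0;
}

static int counter_read(const char *path, char *buf, size_t size, off_t offset,
                        struct fuse_file_info *fi)
{
    char tmp[32];
    int len;
    (void)path; (void)fi;
    len = snprintf(tmp, sizeof(tmp), "%d\n", counter);
    if (offset == 0)
        counter++;                      /* advance once per read-from-the-start */
    if (offset >= len)
        return 0;
    if (size > (size_t)(len - offset))
        size = len - offset;
    memcpy(buf, tmp + offset, size);
    return size;
}

static struct fuse_operations counter_ops = {
    .getattr = counter_getattr,
    .readdir = counter_readdir,
    .open    = counter_open,
    .read    = counter_read,
};

int main(int argc, char *argv[])
{
    return fuse_main(argc, argv, &counter_ops, NULL);
}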
How can I hide the command line arguments of a C program running on Linux so that they aren't visible to other users via "w", "ps auxwww" or similar commands?
It's actually rather difficult (I'll stop short of saying impossible since there may be a way I'm not aware of) to do this, especially if a user has access to the /proc file system for your process.
Perhaps the best way to prevent people from seeing your command line arguments is to not use command line arguments :-)
You could stash your arguments in a suitably protected file called (for example) myargs.txt, then run your program with:
myprog @myargs.txt
Of course, you'll have to modify myprog to handle the "arguments in a file" scenario. (The @ prefix is just a marker for that; a leading # would be swallowed by the shell as a comment.)
Alternatively, you could put the arguments into environment variables and have your program read them with getenv.
However, I'm not aware of any method that can protect you from a suitably empowered process (such as one run by root).
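A minimal sketch of the environment-variable route (MYPROG_SECRET is just an example name; note the value still appears in /proc/<pid>/environ, which is readable only by the owner and root):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    /* Invoked as:  MYPROG_SECRET=value ./myprog   (nothing secret on the command line) */
    const char *secret = getenv("MYPROG_SECRET");
    if (secret == NULL) {
        fprintf(stderr, "MYPROG_SECRET is not set\n");
        return 1;
    }
    /* Use the value without echoing it anywhere visible. */
    printf("got a secret %zu characters long\n", strlen(secret));
    return 0;
}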
Modify the content of argv in your program:
#include <stdio.h>
#include <time.h>

/* Busy-wait for roughly msecs milliseconds. Dividing before multiplying
   avoids overflowing clock_t on 32-bit systems. */
void delay(long int msecs)
{
    clock_t ticks = msecs / 1000 * CLOCKS_PER_SEC;
    clock_t start = clock();
    while (clock() - start < ticks)
        ;
}

int main(int argc, char **argv)
{
    if (argc == 2)
    {
        printf("%s\n", argv[1]);
        delay(6000);
        /* Overwrite the argument in place (assumes it is at least 3 characters
           long); from now on ps shows "x.x" instead of the real value. */
        argv[1][0] = 'x';
        argv[1][1] = '.';
        argv[1][2] = 'x';
        printf("%s\n", argv[1]);
        delay(5000);
        printf("done\n");
    }
    else
        printf("usage: %s <argument>\n", argv[0]);
    return 0;
}
Invocation:
./argumentClear foo
foo
x.x
done
Result, viewed by ps:
asux:~ > ps auxwww | grep argu
stefan 13439 75.5 0.0 1620 352 pts/5 R+ 17:15 0:01 ./argumentClear foo
stefan 13443 0.0 0.0 3332 796 pts/3 S+ 17:15 0:00 grep argu
asux:~ > ps auxwww | grep argu
stefan 13439 69.6 0.0 1620 352 pts/5 R+ 17:15 0:02 ./argumentClear x.x
stefan 13446 0.0 0.0 3332 796 pts/3 S+ 17:15 0:00 grep argu
Remark: my original delay function did not work as expected (the program ran for about 2-3 seconds instead of 11), most likely because msecs * CLOCKS_PER_SEC overflows a 32-bit clock_t; dividing by 1000 before multiplying, as above, avoids that.
As far as I know, the kernel exposes that information through the proc filesystem, so short of writing a kernel module you will not be able to hide it completely: any program can read the command line from /proc (this is what ps does).
As an alternative, you can read your command line args from stdin and populate an array to pass to your argument handler. Or, better yet, add support for your program to read a configuration file that contains the same command line argument information, and set the permissions so that only the owner can read the file.
I hope this helps.
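A sketch of the stdin variant, assuming the program already has some handle_args(argc, argv)-style entry point (that name is made up here); the "arguments" are read one per line and the list ends at a blank line or EOF:

#include <stdio.h>
#include <string.h>

#define MAX_ARGS 64
#define MAX_LEN  256

int main(int argc, char *argv[])
{
    static char storage[MAX_ARGS][MAX_LEN];
    char *fake_argv[MAX_ARGS];
    int fake_argc = 0;

    (void)argc;                                    /* the real command line stays empty */
    fake_argv[fake_argc++] = argv[0];              /* keep the real program name */

    /* Read "arguments" one per line from stdin; a blank line or EOF ends the list. */
    while (fake_argc < MAX_ARGS &&
           fgets(storage[fake_argc], MAX_LEN, stdin) != NULL) {
        storage[fake_argc][strcspn(storage[fake_argc], "\n")] = '\0';
        if (storage[fake_argc][0] == '\0')
            break;
        fake_argv[fake_argc] = storage[fake_argc];
        fake_argc++;
    }

    /* handle_args(fake_argc, fake_argv);          hypothetical existing handler */
    for (int i = 1; i < fake_argc; i++)
        printf("hidden arg %d: %s\n", i, fake_argv[i]);
    return 0;
}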
To hide the arguments from the ps command, you could use the hack I always use:
sprintf(argv[0], "My super long argument list                                                            ");   /* note the padding spaces */
Be sure to pad the string with plenty of trailing spaces (a few lines' worth), so it is long enough to cover the whole original command line; otherwise the tail of the real arguments can still show up.
Keep in mind to change argv[0] only after you have parsed the command line!
59982 pts/1 SLl+ 0:00 My super long argument list
strings /proc/59982/cmdline
My super long argument list
It's a hack, but an intruder will issue a "ps axw" first.
Always monitor mission-critical servers and check the logged-in users!