Trying to write a script that takes a file of inputs and sends them one-by-one to a C program whenever the program asks for input (scanf).
I want my script to print every input before it sends it to the program.
The whole output of the C program (including the inputs I provided) should be printed to a file.
I'm looking for a solution that does not require changing my C code.
For example:
My C Program: test.c
#include <stdio.h>
int main()
{
    char a[50];
    int b;
    printf("Enter your name:\n");
    scanf("%s",a);
    printf("HELLO: %s\n\n", a);
    printf("Enter your age:\n");
    scanf("%d",&b);
    printf("Your age is: %d\n\n", b);
    return 0;
}
My Script: myscript.sh
#!/bin/bash
gcc test.c -o mytest
cat input | ./mytest >> outputfile
I also tried
#!/bin/bash
gcc test.c -o mytest
./mytest < input > outputfile
My Input File: input
Itzik
25
My Output File: outputfile
Enter your name:
HELLO: Itzik
Enter your age:
Your age is: 25
Desired outputfile:
Enter your name:
Itzik
HELLO: Itzik
Enter your age:
25
Your age is: 25
Thanks a lot!
Oh my .. this is going to be a bit ugly.
You can start the program in the background, with it reading from a pipe, then hijack that pipe and write to it, but only when the program is waiting for input. And before you write to the pipe, you write to standard output.
# Launch program in background
#
# The tail command hangs forever and does not produce output, thus
# prog will wait.
tail -f /dev/null | ./prog &
# Capture the PIDs of the two processes
PROGPID=$!
TAILPID=$(jobs -p %+)
# Hijack the write end of the pipe (standard out of the tail command).
# Afterwards, the tail command can be killed.
exec 3>/proc/$TAILPID/fd/1
kill $TAILPID
# Now read line by line from our own standard input
while IFS= read -r line
do
    # Check the state of prog ... we wait while it is Running. More
    # complex programs than prog might enter other states which you
    # need to take care of!
    state=$(ps --no-headers -wwo stat -p $PROGPID)
    while [[ "x$state" == xR* ]]
    do
        sleep 0.01
        state=$(ps --no-headers -wwo stat -p $PROGPID)
    done
    # Now prog is waiting for input. Display our line, and then send
    # it to prog.
    echo "$line"
    echo "$line" >&3
done
# Close the pipe
exec 3>&-
I've compiled your source code above to an executable named prog and saved the above code into pibs.sh. The result:
$ bash pibs.sh < input
Enter your name:
Daniel
HELLO: Daniel
Enter your age:
29
Your age is: 29
What you are asking is really not possible without writing a program that parses the output from test.c and knows what is a prompt for input, and what is not.
Depending on how complicated your program is, you may have some luck with the chat program (see man chat) or GNU expect.
Your best bet is, as "BobRun" says, to modify your program. No matter how much you don't want to modify your program, it is time to put all those scanf() calls you might have littered through your code behind proper input functions like this:
int input_int(const char *prompt)
{
    printf("%s:\n", prompt);
    int i = 0;
    scanf("%d", &i);
    /* Eat rest of line */
    int ch;
    do
        ch = fgetc(stdin);
    while (ch != '\n' && ch != EOF);
    return i;
}
Now, adding error checking and echoing of input becomes trivial. And your program might become easier to read.
Getting rid of that bug/security hole waiting to happen, scanf("%s", ...), would also become easy.
And if you think this is a large job, well, suck it up. You should really have done it from the beginning, when the job was small. And if you delay it any more, it will soon be humongous.
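For the string input the question started with, a companion reader along the same lines is easy to sketch. This is only a sketch of mine (the name input_string is an example, not something from the original code); it swaps scanf("%s", ...) for fgets and echoes what it read, which also covers the "print every input" requirement:
#include <stdio.h>
#include <string.h>

/* Sketch: prompt, read a whole line safely with fgets, strip the
   trailing newline, and echo the input so it shows up in redirected output. */
char *input_string(const char *prompt, char *buf, size_t size)
{
    printf("%s:\n", prompt);
    if (fgets(buf, size, stdin) == NULL)
        return NULL;
    buf[strcspn(buf, "\n")] = '\0'; /* drop the trailing newline, if any */
    printf("%s\n", buf);            /* echo the input */
    return buf;
}
It is used the same way as input_int above, just with a caller-supplied buffer and its size.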
Related
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
void eat() // clears stdin upto and including \n OR EOF
{
    int eat;
    while ((eat = getchar()) != '\n' && eat != EOF);
}

int main()
{
    printf("\n COMMAND : ");
    char cmd[21] = "";
    scanf("%20s", cmd);
    if (strcmp(cmd, "shell") == 0 || strcmp(cmd, "sh") == 0)
    {
        getchar(); // absorb whitespace char separating 'shell' and the command, say 'ls'
        while (1)
        {
            printf("\n sh >>> ");      // print prompt
            char shellcmd[1024] = "";  // str to store command
            scanf("%1023[^\n]", shellcmd);
            eat();                     // take input of command and clear stdin
            if (strcmp("close", shellcmd) == 0 || strcmp("x", shellcmd) == 0)
                break;
            else
                system(shellcmd);
        }
    }
}
In the code, some anomalous behavior is occurring which I'm unable to catch.
Upon entering sh ls and pressing [ENTER], the expected response is:
1st scanf() stores sh in cmd[] and leaves ls\n in stdin.
getchar() takes up the space.
printf() prints \n sh >>> to Terminal
second scanf() stores ls in shellcmd[], leaves \n in stdin
eat() reads the \n from stdin, leaving it empty
system("ls") is executed
I.e. results should be like this:
COMMAND : sh ls
sh >>>
file1 file 2 file3 ...
sh >>> | (cursor)
BUT
What I get:
COMMAND : sh ls
file1 file2 file3 ...
sh >>>
sh >>> |
Apparently, the 2nd scanf() and system() are executing before printf(), or at least that's my assumption.
What's amiss?
Compiled with Clang and GCC using cc -Wall -Wextra -pedantic, and tested in bash on macOS as well as on Linux.
As you can find in the man page:
If a stream refers to a terminal (as stdout normally does) it is line buffered
So you might experience a delay in seeing the message printed by printf whenever it doesn't contain a newline. On the other hand, the previous message is displayed as soon as the leading newline of the next printf is sent.
Solutions:
Add a newline at the end of your message: printf("\n sh >>> \n");
Force the current buffer to be displayed even in the absence of a newline by calling the fflush() function: fflush(stdout) (a small sketch of this follows below)
Change the current stdout buffering behavior with the setvbuf() function:
setvbuf(stdout, NULL, _IONBF, 0);
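For instance, here is option 2 applied to a stripped-down version of the prompt loop (just a minimal sketch of the idea, not the original program):
#include <stdio.h>

int main(void)
{
    char shellcmd[1024] = "";

    /* Option 3 would be a single setvbuf(stdout, NULL, _IONBF, 0); at the top of main() instead. */
    printf("\n sh >>> ");  /* prompt without a trailing newline */
    fflush(stdout);        /* force it out before scanf() blocks waiting for input */
    if (scanf("%1023[^\n]", shellcmd) == 1)
        printf("\n got: %s\n", shellcmd);
    return 0;
}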
This is related to a stack smash attack.
Basically, I am trying to smash the stack by giving a program a particular input. The program takes in a user input like this, using getchar:
for (i = 0; (c = getchar()) != '\n'; i++) buf[i] = c;
I want to overwrite memory to become 0x000000a1. Unfortunately, 0xa1 is not an ASCII character, so I cannot just input something like ¡ (inverted exclamation) because that ends up giving 0x0000a1c2 in memory. How can I overwrite the value to be just 0x000000a1 without changing how the user input is processed in the program?
You can use bash to inject arbitrary characters:
echo -e '\xA1' | /path/to/program
You can add additional input, put the echo in a loop, etc.
echo -e 'Something\xA1\xA1\xA1' | /path/to/program
Your system's information is not provided, but usually the standard input is just a byte stream. This means that you can send an arbitrary byte stream, not just valid characters.
For example, if your victim program is ./a.out, you can create a program to emit a payload
#include <stdio.h>
int main(void) {
    putchar(0xa1);
    putchar('\n'); /* to have the victim finish reading input */
    return 0;
}
and compile it to, for example, ./b.out, and execute it using a pipe
$ ./b.out | ./a.out
($ is your terminal's prompt)
I'm trying to use stdbuf to line buffer the output of a program but I can't seem to make it work as I would expect. Using this code:
#include <stdio.h>
#include <unistd.h>
int main(void)
{
    int i = 0;
    for (i = 0; i < 10; i++)
    {
        printf("This is part one");
        fflush(stdout);
        sleep(1);
        printf(" and this is part two\n");
    }
    return 0;
}
I see This is part one, then a one-second wait, then and this is part two\nThis is part one.
I expected that running it as
stdbuf --output=L ./test.out
would cause the output to be a one-second delay and then This is part one and this is part two\n repeating at one-second intervals. Instead I see the same output as when I don't use stdbuf.
Am I using stdbuf incorrectly or does the call to fflush count as "adjusting" the buffering as described in the stdbuf man page?
If I can't use stdbuf to line buffer in this way is there another command line tool that makes it possible?
Here are a couple of options that work for me, given the sample code, and run interactively (the output was to a pseudo-TTY):
./program | grep ^
./program | while IFS= read -r line; do printf "%s\n" "$line"; done
In a couple of quick tests, both output a complete line at a time. If you need to pipe it further, grep's --line-buffered option should be useful.
I'm trying to make a simple userspace program that dynamically generates file contents when a file is read, much like a virtual filesystem. I know there are programs like FUSE, but they seem a bit heavy for what I want to do.
For example, a simple counter implementation would look like:
$ cat specialFile
0
$ cat specialFile
1
$ cat specialFile
2
I was thinking that specialFile could be a named pipe, but I haven't had much luck. I was also thinking select may help here, but I'm not sure how I would use it. Am I missing some fundamental concept?
#include <stdio.h>
int main(void)
{
    char stdoutEmpty;
    char counter;

    while (1) {
        if (stdoutEmpty = feof(stdout)) { // stdout is never EOF (empty)?
            printf("%d\n", counter++);
            fflush(stdout);
        }
    }
    return 0;
}
Then usage would be something like:
shell 1 $ mkfifo testing
shell 1 $ ./main > testing
shell 2 $ cat testing
# should be 0?
shell 2 $ cat testing
# should be 1?
You need to use FUSE. A FIFO will not work, because either your program keeps pushing content to stdout (in which case cat will never stop), or it closes stdout, in which case you obviously can't write to it anymore.
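To give an idea of what that involves, below is a minimal, untested sketch of a one-file FUSE filesystem (assuming libfuse 2.x; the program name counterfs and the file name /counter are my own choices, not anything standard). The file's contents are regenerated every time it is opened, which gives exactly the cat specialFile behavior asked for; the counter is rendered at a fixed width so the size reported by getattr stays constant.
#define FUSE_USE_VERSION 26
#include <fuse.h>
#include <sys/stat.h>
#include <string.h>
#include <stdio.h>
#include <errno.h>

static int counter;        /* value shown by the next open of /counter */
static char content[32];   /* rendered file contents */

static int counter_getattr(const char *path, struct stat *st)
{
    memset(st, 0, sizeof(*st));
    if (strcmp(path, "/") == 0) {
        st->st_mode = S_IFDIR | 0755;
        st->st_nlink = 2;
    } else if (strcmp(path, "/counter") == 0) {
        st->st_mode = S_IFREG | 0444;
        st->st_nlink = 1;
        st->st_size = 11;  /* fixed width: "%10d\n" is always 11 bytes */
    } else {
        return -ENOENT;
    }
    return 0;
}

static int counter_readdir(const char *path, void *buf, fuse_fill_dir_t filler,
                           off_t offset, struct fuse_file_info *fi)
{
    (void)offset; (void)fi;
    if (strcmp(path, "/") != 0)
        return -ENOENT;
    filler(buf, ".", NULL, 0);
    filler(buf, "..", NULL, 0);
    filler(buf, "counter", NULL, 0);
    return 0;
}

static int counter_open(const char *path, struct fuse_file_info *fi)
{
    (void)fi;
    if (strcmp(path, "/counter") != 0)
        return -ENOENT;
    /* Regenerate the contents each time the file is opened. */
    snprintf(content, sizeof(content), "%10d\n", counter++);
    return 0;
}

static int counter_read(const char *path, char *buf, size_t size,
                        off_t offset, struct fuse_file_info *fi)
{
    size_t len = strlen(content);
    (void)path; (void)fi;
    if ((size_t)offset >= len)
        return 0;
    if (offset + (off_t)size > (off_t)len)
        size = len - offset;
    memcpy(buf, content + offset, size);
    return size;
}

static struct fuse_operations counter_ops = {
    .getattr = counter_getattr,
    .readdir = counter_readdir,
    .open    = counter_open,
    .read    = counter_read,
};

int main(int argc, char *argv[])
{
    return fuse_main(argc, argv, &counter_ops, NULL);
}
A build/run sketch would be gcc counterfs.c $(pkg-config fuse --cflags --libs) -o counterfs, then ./counterfs mountpoint, after which every cat mountpoint/counter should print the next number.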
Below is my current code which, as the title says, I thought would be running in parallel. I am working on Mac OS X, and in the terminal I am using bash. The code is written in C and I am trying to use OpenMP. It compiles and runs without any errors, but I do not believe it is running in parallel.
To explain the code for easier understanding: the first block is just declarations of a bunch of variables. The next chunk is the for loop, which runs commands in the terminal.
The first command runs an executable program with four parameters: a double, a fixed integer, a string, and another fixed integer. The double depends on which iteration of the for loop you are on.
The second, third, fourth and fifth commands all deal with renaming and moving files which the executable program spits out, and this completes the for loop. My hope was that this for loop would run in parallel, since each iteration takes about 30 seconds.
Once outside the for loop, a file which has been written to in each iteration is then moved. I realize the order in which the file is written to might be faulty, but that is only going to be a concern once it is actually running in parallel!
#include <stdio.h>
#include <string.h>
int main()
{
    int spot;
    double th;
    char command[50];
    char path0[] = "/home/path0";
    char path1[] = "/home/path1";
    char path2[] = "/home/path2";
    char path3[] = "/home/path3";

    #pragma omp parallel for private(command,path)
    for (th = 0.004, spot = 0; th < 1; th += 0.005, spot++) {
        sprintf(command, "./program %lf 19 %s 418", th, path0);
        system(command);
        sprintf(command, "mv fileA.ppm a.%04d.ppm", spot);
        system(command);
        sprintf(command, "mv a.%04d.ppm %s", spot, path1);
        system(command);
        sprintf(command, "mv fileB.ppm b.%04d.ppm", spot);
        system(command);
        sprintf(command, "mv b.%04d.ppm %s", spot, path2);
        system(command);
    }
    sprintf(command, "mv FNums.txt %s", path3);
    system(command);
    return(0);
}
Thanks for any insight and help you guys can offer.
Since this is basically shell script based already, consider using xargs:
First of all, make sure multiple instances of ./program don't overwrite each other's fileA.ppm if you run it in parallel. I'll assume you'll start writing them out as fileA.ppm.0.004 in this example.
Then make a script you can invoke with the spot number:
#!/bin/sh
spot=$1
th=$(echo "$spot" | awk '{print 0.004 + 0.005*$1 }')
./program "$th" 19 /home/path0 418
mv "fileA.ppm.$th" "$(printf '/home/path1/a.%04d.ppm' "$spot")"
mv "fileB.ppm.$th" "$(printf '/home/path2/b.%04d.ppm' "$spot")"
chmod a+x yourscript, and you can now run and test each instance using ./yourscript 0, ./yourscript 1, etc.
When it works, run them 8 (or more) in parallel using:
printf "%s\n" {0..199} | xargs -P 8 -n 1 ./yourscript