Bash shell program starts C programs

Is there a way for a bash shell script that takes a command-line argument x to start x processes of a C program?

It's fairly simple:
#!/bin/bash
$1
If you want to pass the rest of the parameters through to the command, use "$@":
"$@"
(i.e. foo.sh echo hi executes echo hi)
If you want to steal some parameters and pass others, use shift:
param1=$1
shift
echo "$@" # now contains parameters 2+
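Putting those pieces together, here is a minimal sketch of a wrapper (the name foo.sh is just for illustration) that runs its first argument as a command and passes the rest through:
#!/bin/bash
prog=$1   # the program to start
shift     # drop it; "$@" now holds the remaining arguments
"$prog" "$@"
So ./foo.sh echo hi prints hi.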

#!/bin/bash
(( $# != 1 )) && echo "Usage: $0 num" && exit 1
for (( c=1; c<=$1; c++ ))
do
./run_c_program &
done
wait
$1 represents the first command line argument
$# represents the number of arguments
$0 is the name of the script
run_c_program is the executable of the C program
with & the C programs are executed in the background
with wait the script waits for all the C programs to terminate (optional)
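For example, assuming the script is saved as start.sh (the name is just for illustration):
$ ./start.sh 4
starts four instances of run_c_program in parallel and returns once all of them have finished.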

Going the other way (starting the shell script from a C program), you can try the system function declared in <stdlib.h>:
system("./script.sh");

Related

getopt not getting parameters in some order

I'm using getopt to parse the parameters from the command line, and I'm having trouble making it recognize the options in any order.
I want to achieve these cases:
$ ./program -i file.html -o output.html #ex 1
$ ./program -i -o file.html output.html #ex 2
$ ./program -o output.html -i file.html #ex 3
my code looks like this
while((c = getopt(argc, argv, "hi:o:")) != -1) {
    switch(c) {
    case 'h':
        //prints the help file
        break;
    case 'i':
        ivalue = optarg;
        break;
    case 'f':
        fvalue = optarg;
        break;
    case '?':
        //prints an error
        break;
    default:
        abort();
    }
}
To debug this better I've also written, outside the while:
for(int i = optind; i < argc; i++) {
    printf("non optional argument %s\n", argv[i]);
}
return 0;
So examples 1 and 3 work correctly, while example 2 does not get the parameters straight. At first I thought it simply wasn't possible with this function, but then in this example I saw it was.
There's also a bonus question: how come calling the program with no parameters doesn't abort()?
I'm using Ubuntu 15.10 and gcc 5.2.1 installed with apt-get (don't know if that's useful, but better safe than sorry).
in this example I saw it was [possible].
No, you didn't. Your option -i requires an argument, but in your non-working case you try to insert a different option between the -i and its argument. That is not allowed -- if an option takes an argument, whether optional or required, that argument (if provided) must immediately follow the option letter. The example you linked does not show differently.
What you are trying to do not only is not supported by getopt(), it does not follow standard Unix convention. Moreover, although it could conceivably work the way you imagine for options with required arguments, it is wholly unworkable for options with optional arguments. When arguments are optional, there is no way to correctly match them with their options if they are not required to follow directly.
There's also a bonus question: how come calling the program with no parameters doesn't abort()?
Why would it? If no options are specified on the command line then getopt() returns -1 on the first call, and the loop body is never executed. To get an abort() from your code, you would need to specify an option that is in the option string but for which you do not provide a specific case. As the code is presently written, that means -o together with its argument: getopt() returns 'o', which falls through to the default case. Note that -i without an argument does not abort: since the option string does not begin with ':', getopt() returns '?' for a missing option-argument, and your '?' case handles that.
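For example (hypothetical invocations of the compiled program):
$ ./program -o output.html   # getopt() returns 'o', no matching case, so default: abort()
$ ./program -i               # missing argument: getopt() returns '?', handled by the '?' case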
What you're trying to do isn't possible with getopt. The "i:" in your optstring means that if the -i option is present, it must have an argument. In your second example this means that -o is interpreted as the argument to -i and not as an option.
If you're on Linux and linking with glibc you can use the :: GNU extension (e.g. "hi::o:") to make -i take an optional argument. However, this would break your first and third examples, since an optional argument is required to appear immediately after the option (e.g. -ifile.html).
To get around this "limitation" (I'd sooner call it intended functionality), I used flags and functions. You move through each getopt argument, setting flags and variables as you go (in some cases calling functions to get return values, and unsetting other flags if you want a particular option to take precedence over another). Then, once parsing is complete, you work out what happens in what order.
For example, for a script to make ACL setting easier for an end user (pseudocode areas are clearly marked):
getHelp () {
printf "\nThis does stuff with [OPTIONS] and stuff...\n\n"
exit 0
}
main () {
if [[ $targetFile == "-h" ]]; then
getHelp
exit 0
fi
[[ ! -f $targetFile ]] && [[ ! -d $targetFile ]] && {
printf "\nYou must specify a target File or Folder. Exiting.\n"
exit 1
}
modeOctal=1
if [[ $readMode -eq 1 ]] || [[ $writeMode -eq 1 ]]; then
printf "We're doing stuff to this file or folder...\n"
if [[ $readMode -eq 1 ]]; then
modeOctal=$((modeOctal + 4))
elif [[ $writeMode -eq 1 ]]; then
modeOctal=$((modeOctal + 2))
fi
*code to do stuff to setfacl on target*
fi
if [[ $removeMode -eq 1 ]]; then
*code to do stuff to setfacl -x permissions on target*
fi
while true; do
case "$1" in
-h)
getHelp
exit 0
;;
-r)
readMode=1
;;
-w)
writeMode=1
;;
-R)
readMode=0
writeMode=0
removeMode=1
;;
-t)
shift
targetFile=$1
targetIsSet=1
;;
--)
shift
break
;;
esac
shift
done
if [[ $targetIsSet == 1 ]]; then
main
exit 0
fi
exit 0
Since you can count on things happening in the same order every time, you can be sure that execution will be consistent no matter what order the options are given in. In this example, you also allow someone using the -t option (which requires a second argument) to use '-h' to get help instead.
You can use similar logic in the functions that handle your '-i' or '-o' to behave differently if the next item is the other option (e.g. -i -o output.file or -o -i input.file).

Get the return code of a C program in my shell program

Suppose I have a C program Foo.c which prints a few things and returns a value rc, and which I am executing in my shell program as follows:
foobar=$(Foo | tail -1)
Now the variable foobar has the last value printed by the program Foo. Without disturbing this, I want to get the return code rc of the program in my shell program.
You can use "set -o pipefail" option.
[root#myserver Test]# set -o pipefail
[root#myserver Test]# ./a.out | tail -l
[root#myserver Test]# echo $?
100
Here my program a.out returns 100.
Another option is to use the PIPESTATUS internal variable. You can read about it here:
http://www.linuxnix.com/2011/03/pipestatus-internal-variable.html
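Applied to the question's command substitution, a minimal sketch (assuming Foo is on the PATH):
#!/bin/bash
set -o pipefail
foobar=$(Foo | tail -1)
rc=$?   # with pipefail: Foo's exit code when tail succeeds, otherwise tail's
The subshell created by the command substitution inherits the pipefail setting, so the assignment's exit status is the rightmost non-zero status in the pipeline.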
If you are using bash shell, you can use PIPESTATUS array variable to get the status of the pipe process.
$ tail sat | wc -l
tail: cannot open ‘sat’ for reading: No such file or directory
0
$ echo "${PIPESTATUS[0]} ${PIPESTATUS[1]}"
1 0
$
From man bash:
PIPESTATUS
An array variable containing a list of exit status values from the processes in the most-recently-executed foreground pipeline (which may contain only a single command).
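Since PIPESTATUS describes only the most recent pipeline of the current shell, for the question's command substitution it has to be read inside the substitution and propagated out. A sketch along those lines:
foobar=$(Foo | tail -1; exit "${PIPESTATUS[0]}")
rc=$?   # Foo's exit code, passed out as the substitution's exit status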
This assigns the last line of the output of Foo to foobar and Foo's exit code is assigned to code:
{ read -r foobar; read code; } < <( (Foo; echo $? ) | tail -2)
The <(...) construct is called process substitution. In the code above, the read commands receive their stdin from the process substitution. Because of the tail -2, the process substitution produces a total of two lines. The first line is the last line produced by Foo and it is assigned to foobar. The second is assigned to code.
The space between the first and second < is essential.
Example
After creating a function Foo, the above can be tested:
$ Foo() { echo "Testing"; false; }
$ { read -r foobar; read code; } < <( (Foo; echo $? ) | tail -2)
$ echo "foobar=$foobar code=$code"
foobar=Testing code=1
And:
$ Foo() { echo "2nd Test"; true; }
$ { read -r foobar; read code; } < <( (Foo; echo $? ) | tail -2)
$ echo "foobar=$foobar code=$code"
foobar=2nd Test code=0
I'm afraid you have to use a temporary file to store the output of the Foo program, get the return code, and then perform the tail -1, like the following (using mktemp to create the temporary file safely):
tmpfile=$(mktemp)
Foo > "$tmpfile"
ret=$?
foobar=$(tail -1 "$tmpfile")
rm "$tmpfile"
$? gives the return value of the last executed command.

Creating unit tests using a makefile

gcc 4.7.2
c89
Hello,
I am wondering if anyone knows of any tutorials or textbooks that cover using a makefile to create simple unit tests for my C programs.
I would like to run some automated testing that creates a test suite and adds it to my Makefile.
Just want some ideas on how to get started.
Many thanks for any suggestions
Yes indeed: in less than 30 lines of makefile you can build a generic unit test engine.
Note that I wrote the following for testing gawk and lisp scripts, but it can easily be customized for C. Actually, IMHO, the whole thing is a nice example of the power of shell scripting.
To begin, you place all your tests as executable files in some $Testdir. In this example, all the tests have file names 001, 002, etc. (with no extension).
Next, you need some set up stuff:
Here=$(PWD)
Testdir=$(Here)/eg
Tmp=$(Here)/tmp  # assumed value; the original did not show where Tmp is defined
ready: $(Testdir) $(Tmp)
$(Tmp) :
	@- [ ! -d $(Tmp) ] && mkdir $(Tmp)
Then you'll need to collect all the tests into a list called $Tests
Tests:=$(shell cd $(Testdir); ls | egrep '^[0-9]+$$' | sort -n )
(Note the use of :=. This is a slight optimization that builds $Tests once and uses it many times.)
Each file $(X) in my list of $Tests can be executed in two ways. Firstly, you can just run it. Secondly, you can run it and cache the results in $(X).want.
run : ready $(Testdir)/$(X)
	@echo $X 2>&1
	@cat $(Testdir)/$(X) | $(Run)
cache : ready
	@$(MAKE) run | tee $(Testdir)/$X.want
	@echo new test result cached to $(Testdir)/$X.want
I cache a test outcome once the test is ready to go and is producing the right output.
The actual execution is defined by a magic variable called $(Run). This is something you have to write specifically for the language being tested. For the record, I'm testing Gawk scripts so my $(Run) is just as follows (and you can change it to whatever you need).
Run= gawk -f mycode.awk
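For testing C programs, $(Run) could simply be the compiled binary under test, assuming each test file is fed to it on stdin as above (myprog is a placeholder name):
Run= ./myprog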
Once that is done, then to run one test, I just compare what I get after running $(X) to the cached copy:
test : ready $(Testdir)/$(X).want
	@$(MAKE) run > $(Tmp)/$X.got
	@if diff -s $(Tmp)/$X.got $(Testdir)/$X.want > /dev/null; \
	then echo PASSED $X ; \
	else echo FAILED $X, got $(Tmp)/$X.got; \
	fi
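(So a single test can also be run by hand with, e.g., make X=001 test.)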
This is how I run all my tests:
tests:; @$(foreach x, $(Tests), $(MAKE) X=$x test;)
You can also batch-cache all the current outputs; the target needs a name other than cache (here caches) so that it does not override the per-test cache rule above (warning: do not do this unless your tests are currently generating the right output):
caches :
	@$(foreach x, $(Tests), $(MAKE) X=$x cache;)
Finally, if you want, you can also get final score of the PASSEDs and FAILEDs:
score :
	@$(MAKE) tests | cut -d' ' -f 1 | egrep '(PASSED|FAILED)' | sort | uniq -c
That's it: as promised, a generic unit test tool in make in under 30 lines. Share and enjoy.
Generating Test Function Snippets
Generating function snippets is easy with the help of gccxml and nm. The following bash script, generate_snippets.sh, can be called with one command-line argument to generate a test function snippet for each function defined in a source file:
#!/bin/bash
#
# Generate stub functions for one file
#
# # Initialize
FILE=$1
[ ! -e "$FILE" ] && echo "file doesn't exist" && exit 1
# # Compile
OBJECT=$(mktemp)
gcc -c $FILE -o $OBJECT
# # Get Functions
# ## Get all symbols in the text sections
Y=$(mktemp)
nm $OBJECT | grep " T " | awk '{print $3;}' | sort > $Y
# ## Get functions defined in the file (including #includes)
# get all functions defined in the compilation unit of $FILE, excluding included
# dynamically linked functions
XML=$(mktemp)
X=$(mktemp)
gccxml $FILE -fxml=$XML
grep "<Function" $XML | sed 's/^.*name="\([^"]*\).*/\1/g' | sort > $X
# ## get the common lines
# This is done to get those functions which are defined in the source file and end up in
# the compiled object file.
COMMON=$(comm $Y $X -1 -2)
# # Create stubs
for func in $COMMON;
do
cat <<_
// Test stub for $func. Returns 1 if it fails.
char test_$func() {
return 1;
}
_
done
# # Clean up
rm $OBJECT $XML $X $Y
TODOs
The script is not yet perfect. You should probably include a check to only generate test functions for those functions which aren't tested yet. As this is done analogously to finding the common names between $X and $Y, I leave it as an exercise. Once that is implemented, it makes sense to run this script from a Makefile. See the other answer for pointers on that.
Example Usage
Consider the C file hello.c:
#include <stdio.h>
int foo();
int bar();
int main() {
    printf("hello, world\n");
    return 0;
}
int foo() {
    printf("nothing\n");
    return 1;
}
int bar() {
    printf("still nothing\n");
    return 1;
}
Running the script above with this file as input yields the following output:
// Test stub for bar. Returns 1 if it fails.
char test_bar() {
return 1;
}
// Test stub for foo. Returns 1 if it fails.
char test_foo() {
return 1;
}
// Test stub for main. Returns 1 if it fails.
char test_main() {
return 1;
}
Just put those snippets into the appropriate file and fill them with logic as needed. After that, compile the test suite and run it.

Replace char arrays with an index using pre-processor

I've got a known, predetermined set of calls to a function
FUNC_A("ABCD");
FUNC_A("EFGH");
And what I was hoping to do was something like
#define FUNC_A("ABCD") 0
#define FUNC_A("EFGH") 1
#define FUNC_A(X) 0xFF
So that the whole thing gets replaced by the integer before compiling, and I can switch on the value and not have to store the string or do the comparison at run-time.
I realize that we can't do this in the preprocessor, but I was just wondering if anyone has come across some nifty way of getting around this seemingly solvable problem.
You may handcraft your comparison if you need that, but this will be tedious. For simplicity let us suppose that we want to do it for the string "AB":
#define testAB(X) ((X) && (X)[0] == 'A' && (X)[1] == 'B' && !(X)[2])
this will return 1 when the string is equal to "AB" and 0 otherwise, and it also takes care that the string is of the correct length, does not access beyond array bounds, etc.
The only thing you'd have to worry about is that the argument X is evaluated multiple times. This isn't a problem if you pass in a string literal, but it would be for expressions with side effects.
For string literals any decent compiler should be able to replace such an expression at compile time.
For doing what you describe, avoiding strings and run-time comparisons, I can only think of a pre-preprocessor. If it were just for quick hacking around in a Unix environment, I'd try a simple wrapper for the preprocessor: a bash script that in turn uses sed or awk to replace the functions and arguments mentioned, and then calls the real cpp preprocessor. I'd consider this just a quick hack.
Update: On Linux with gcc, it seems easier to do a post-preprocessor, because we can replace the generated .i file (we can't generally do that with the original .c file). To do that, we can make a cc1 wrapper.
Warning: this is another dangerous and ugly hack. Also see Custom gcc preprocessor
This is a cc1 wrapper for doing that. It's a bash script for linux and gcc 4.6:
#!/bin/bash
# cc1 that does post preprocessing on generated .i files, replacing function calls
#
# note: doing post preprocessing is easier than pre preprocessing, because in post preprocessing we can replace the temporary .i file generated by the preprocessor (in case of doing pre preprocessing, we should change the original .c file -this is unacceptable-; or generate a new temp .c file with our preprocessing before calling the real preprocessor, but then eventual error messages are now referring to the temp .c file..)
convert ()
{
local i=$1
local o=$2
ascript=$(cat <<- 'EOAWK'
{
FUNCT=$1;
ARGS=$2;
RESULT=$3;
printf "s/%s[ \\t]*([ \\t]*%s[ \\t]*)/%s/g\n", FUNCT, ARGS, RESULT;
}
EOAWK
)
seds=$(awk -F '|' -- "$ascript" << EOFUNCS
FUNC_A|"ABCD"|0
FUNC_A|"EFGH"|1
FUNC_A|X|0xFF
EOFUNCS
)
sedfile=$(mktemp --tmpdir prepro.sed.XXX)
echo -n "$seds" > "$sedfile"
sed -f "$sedfile" "$i" > "$o"
rc=$?
rm "$sedfile"
return $rc
}
for a
do
if [[ $a = -E ]]
then
isprepro=1
elif [[ $isprepro && $a = -o ]]
then
getfile=1
elif [[ $isprepro && $getfile && $a =~ ^[^-].*[.]i ]]
then
ifile=$a
break
fi
done
#echo "args:$#"
#echo "getfile=$getfile"
#echo "ifile=$ifile"
realcc1=/usr/lib/gcc/i686-linux-gnu/4.6/cc1
$realcc1 "$@"
rc=$?
if [[ $rc -eq 0 && $isprepro && $ifile ]]
then
newifile=$(mktemp --tmpdir prepro.XXX.i)
convert "$ifile" "$newifile" && mv "$newifile" "$ifile"
fi
exit $rc
How to use it: call gcc using flags -B (directory where cc1 wrapper resides) and --no-integrated-cpp
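For example (the paths are illustrative; the wrapper script must be installed as cc1 in the -B directory):
$ gcc -B /path/to/wrapper-dir --no-integrated-cpp -o program program.c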

Why is getlogin() succeeding even if there is no controlling terminal

I wrote a simple C program:
#include <unistd.h>
#include <stdio.h>
int main( int argc, char *argv[] ) {
    printf( "%s\n", getlogin() );
    return 0;
}
... to try some things out. I've tried making getlogin() fail by making sure there isn't a controlling terminal, but it still gets the login name and prints it. The most extreme example to demonstrate this:
#!/bin/bash
for i in $(env | grep -vP '^PATH\b' | awk -F= '{print $1}'); do
unset $i;
done;
(tty; perl -e 'setpgrp; sleep( 1 ); exec( qw( getlogin_test ) );' ) &
By way of explanation: it unsets all environment variables except PATH, then runs a sub-shell that executes tty and then a perl instance; the sub-shell is backgrounded. The perl code calls setpgrp to make sure it isn't using the process group to find the parent's controlling terminal (I don't believe it does, but I put it in there in case that assumption was wrong).
At this point, I'm at a loss. It still prints the username. A simpler example that I've seen from quite a few sources shows the same behavior:
sh -c 'time perl -e '"'"'$|=1; print getlogin(), chr(10);'"'"' &'
sh -c 'time perl -e '"'"'$|=1; print getlogin(), chr(10);'"'"' & wait'
Both of these still print the username, under both Solaris 10 and Redhat 6 with differing versions of perl, bash, sh, and tcsh.
Closing or redirecting STDIN to a file does the trick.
$ perl -wE'say getlogin()'
eric
$ perl -wE'open STDIN, "<", "/dev/null" or die $!; say getlogin()'
Use of uninitialized value in say at -e line 1.
This is a self-built Perl (default options) on a Debian box.
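The same effect can be had by redirecting stdin when invoking the program; for instance, with the test program from the question (assuming it was built as getlogin_test):
$ ./getlogin_test < /dev/null
which is consistent with the answer above: getlogin() fails once stdin no longer refers to a tty.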
