gcc 4.7.2
c89
Hello,
I am wondering, does anyone know of any tutorials or textbooks that cover using a Makefile to create some simple unit testing for my C programs?
I would like to run some automated testing that will create a test suite and add this to my Makefile.
I just want some ideas on how to get started.
Many thanks for any suggestions
Yes indeed: in less than 30 lines of Makefile you can build a generic unit-test engine.
Note that I wrote the following for testing Gawk and Lisp scripts, but it can easily be customized for C. Actually, IMHO, the whole thing is a nice example of the power of shell scripting.
To begin, you place all your tests in executable files in some $(Testdir). In this example, all the tests have file names 001, 002, etc. (with no extension).
Next, you need some setup (note that recipe lines in a Makefile must start with a tab; the @ prefix keeps Make from echoing the command, and - tells Make to ignore errors):

Here=$(PWD)
Testdir=$(Here)/eg
Tmp=$(Here)/tmp

ready: $(Testdir) $(Tmp)

$(Tmp) :
	@- [ ! -d $(Tmp) ] && mkdir $(Tmp)
Then you'll need to collect all the tests into a list called $(Tests):
Tests:=$(shell cd $(Testdir); ls | egrep '^[0-9]+$$' | sort -n )
(Note the use of :=. This is a slight optimization that builds $(Tests) once and uses it many times.)
Each file $(X) in my list of $(Tests) can be executed in two ways. First, you can just run it. Second, you can run it and cache the results in $(X).want.
run : ready $(Testdir)/$(X)
	@echo $X 2>&1
	@cat $(Testdir)/$(X) | $(Run)

cache : ready
	@$(MAKE) run | tee $(Testdir)/$X.want
	@echo new test result cached to $(Testdir)/$X.want
I cache a test outcome once the test is ready to go and is producing the right output.
The actual execution is defined by a magic variable called $(Run). This is something you have to write specifically for the language being tested. For the record, I'm testing Gawk scripts, so my $(Run) is as follows (change it to whatever you need):
Run= gawk -f mycode.awk
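If you are testing C rather than Gawk, one possible adaptation (the names myprog and myprog.c are placeholders, not part of the recipe above) is to compile the unit under test once, point $(Run) at the binary, and let each numbered file in eg/ provide the stdin for one run:
$ gcc -o myprog myprog.c
$ grep '^Run' Makefile
Run= ./myprog
$ make X=001 test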
Once that is done, then to run one test, I just compare what I get after running $(X) to the cached copy:
test : ready $(Testdir)/$(X).want
	@$(MAKE) run > $(Tmp)/$X.got
	@if diff -s $(Tmp)/$X.got $(Testdir)/$X.want > /dev/null; \
	then echo PASSED $X ; \
	else echo FAILED $X, got $(Tmp)/$X.got; \
	fi
This is how I run all my tests:
tests:; @$(foreach x, $(Tests), $(MAKE) X=$x test;)
You can also do a batch cache of all the current outputs; the target here is named caches so that it does not collide with the per-test cache rule above (warning: do not do this unless your tests are currently generating the right output):

caches :
	@$(foreach x, $(Tests), $(MAKE) X=$x cache;)
Finally, if you want, you can also get a final score of the PASSEDs and FAILEDs:
score :
	@$(MAKE) tests | cut -d' ' -f 1 | egrep '(PASSED|FAILED)' | sort | uniq -c
That's it: as promised, a generic unit-test tool in Make in under 30 lines. Share and enjoy.
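A session might then look like this (output illustrative and trimmed; it assumes two tests eg/001 and eg/002 that currently produce correct output):
$ make X=001 cache
$ make X=002 cache
$ make tests
PASSED 001
PASSED 002
$ make score
      2 PASSED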
Generating Test Function Snippets
Generating function snippets is easy with the help of gccxml and nm. The following bash script, generate_snippets.sh, can be called with one command-line argument to generate a test function snippet for each function defined in a source file:
#!/bin/bash
#
# Generate stub functions for one file
#
# # Initialize
FILE=$1
[ ! -e "$FILE" ] && echo "file doesn't exist" && exit 1
# # Compile
OBJECT=$(mktemp)
gcc -c $FILE -o $OBJECT
# # Get Functions
# ## Get all symbols in the text sections
Y=$(mktemp)
nm $OBJECT | grep " T " | awk '{print $3;}' | sort > $Y
# ## Get functions defined in the file (including #includes)
# get all functions defined in the compilation unit of $FILE, excluding included
# dynamically linked functions
XML=$(mktemp)
X=$(mktemp)
gccxml $FILE -fxml=$XML
grep "<Function" $XML | sed 's/^.*name="\([^"]*\).*/\1/g' | sort > $X
# ## get the common lines
# This is done to get those functions which are defined in the source file and end up in
# the compiled object file.
COMMON=$(comm -12 $Y $X)
# # Create stubs
for func in $COMMON;
do
cat <<_
// Test stub for $func. Returns 1 if it fails.
char test_$func() {
    return 1;
}
_
done
# # Clean up
rm $OBJECT $XML $X $Y
TODOs
The script is not yet perfect. You should probably add a check so that test functions are generated only for those functions which aren't tested yet. As this is done analogously to finding the common names between $X and $Y, I leave it as an exercise. Once that is implemented, it makes sense to run this script from a Makefile. See the other answer for pointers on that.
Example Usage
Consider the C file hello.c:
#include <stdio.h>
int foo();
int bar();
int main() {
    printf("hello, world\n");
    return 0;
}
int foo() {
    printf("nothing\n");
    return 1;
}
int bar() {
    printf("still nothing\n");
    return 1;
}
Running the script above with this file as input yields the following output:
// Test stub for bar. Returns 1 if it fails.
char test_bar() {
    return 1;
}
// Test stub for foo. Returns 1 if it fails.
char test_foo() {
    return 1;
}
// Test stub for main. Returns 1 if it fails.
char test_main() {
    return 1;
}
Just put those snippets into the appropriate file and fill them with logic as needed. After that, compile the test suite and run it.
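As a rough sketch of that last step (the driver main below and the file name test_hello.c are my own additions, not something the script emits):
$ ./generate_snippets.sh hello.c > test_hello.c
$ cat >> test_hello.c <<'EOF'
int main(void) {
    /* each stub still returns 1 (fail) until real checks are filled in */
    return test_foo() + test_bar() + test_main();
}
EOF
$ gcc -Wall -o test_hello test_hello.c
$ ./test_hello; echo "failures: $?"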
Related
I am trying to create a simple unit test library in C, similar to googletest (yes, I know I could just use that; that's not the point of the exercise).
/* In unit_test.h */
#define UNIT_TEST_HELPER(SuiteName, TestName) void SuiteName##TestName
#define UNIT_TEST(SuiteName, TestName) UNIT_TEST_HELPER(SuiteName, TestName)()
/* in some other file */
#include "unit_test.h"
/* This successfully creates a function 'void HelloTest()' */
UNIT_TEST(Hello, Test) {
    /* This is where testing code goes */
    printf("Calling from HelloTest()\n");
}
/* In main I then do the following */
int main() {
    HelloTest();
}
What I would like to do is either:
a) Somehow call HelloTest() after it's fully defined, or
b) Add HelloTest() to a list of functions to call (a list of void function pointers)
int main() {
    UnitTestRun(); /* Loop through function pointers */
}
I have no idea if this is possible without a lot of work (you'd have to look for functions with a certain signature or something). The goal is to avoid having to call each UNIT_TEST() function explicitly, which is what I am currently doing, and to make life simpler.
"Accumulate a list" is not within the capabilities of the C preprocessor, I'm afraid.
But it's pretty easy to do with an external preprocessor, and any decent build system will have a way to automatically run an external preprocessor as part of the build process.
If you're careful not to use the sequence UNIT_TEST( anywhere in your source code other than in the definition of a unit test, and also to ensure that you never spread UNIT_TEST(a, b) over two lines (which doesn't seem like a big restriction), then you could use a simple shell script to build a stand-alone unit-test caller:
#!/bin/sh
echo '#define CONCAT_(a,b) a##b'
echo '#define UNIT_TEST(a,b) void CONCAT_(a,b)(void); CONCAT_(a,b)();'
echo 'void UnitTestRun(void) {';
grep -hEo 'UNIT_TEST\([^)]*\)' "$@"
echo '}'
This allows you to define unit tests in various files, and accumulate them into a single file containing only the function UnitTestRun. The file is completely stand-alone; it contains both declarations and invocations of each unit test, so it doesn't require any #includes. (This does require C99; if you needed to separate the declarations from the invocations, you could do two scans over the files.)
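For what it's worth, here is a sketch of that two-scan variant (same caveats as above; untested): the first pass emits only the declarations, the second only the calls, so the output no longer relies on C99 mixed declarations:
#!/bin/sh
echo '#define CONCAT_(a,b) a##b'
echo '#define UNIT_TEST(a,b) void CONCAT_(a,b)(void);'
grep -hEo 'UNIT_TEST\([^)]*\)' "$@"
echo '#undef UNIT_TEST'
echo '#define UNIT_TEST(a,b) CONCAT_(a,b)();'
echo 'void UnitTestRun(void) {'
grep -hEo 'UNIT_TEST\([^)]*\)' "$@"
echo '}'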
Here's a simple sample run of the original single-pass script. I placed declarations of the test functions in suite1.c and suite2.c, and that script in unit_test_maker.sh:
$ # The input files
$ cat suite1.c
UNIT_TEST(Suite1, FirstTest) {
/* blah, blah, blah */
}
UNIT_TEST(Suite1, SecondTest) {
/* More blah */
}
$ cat suite2.c
UNIT_TEST(Suite2, OnlyTest) {
/* blah, blah, blah */
}
$ # The output of the script
$ ./unit_test_maker.sh suite1.c suite2.c
#define CONCAT_(a,b) a##b
#define UNIT_TEST(a,b) void CONCAT_(a,b)(void); CONCAT_(a,b)();
void UnitTestRun(void) {
UNIT_TEST(Suite1, FirstTest)
UNIT_TEST(Suite1, SecondTest)
UNIT_TEST(Suite2, OnlyTest)
}
$ # The output from preprocessing the script
$ ./unit_test_maker.sh suite1.c suite2.c | gcc -E -x c -
# 1 "<stdin>"
# 1 "<built-in>"
# 1 "<command-line>"
# 31 "<command-line>"
# 1 "/usr/include/stdc-predef.h" 1 3 4
# 32 "<command-line>" 2
# 1 "<stdin>"
void UnitTestRun(void) {
void Suite1FirstTest(void); Suite1FirstTest();
void Suite1SecondTest(void); Suite1SecondTest();
void Suite2OnlyTest(void); Suite2OnlyTest();
}
$ # Compiles without warnings with `-Wall`.
$ ./unit_test_maker.sh suite1.c suite2.c | gcc -Wall -x c -c -o unit_test_run.o -
$
I'd like to know if it's possible to output 'preprocessed' code with gcc, but 'ignoring' (not expanding) the includes.
E.g., given this main:
#include <stdio.h>
#define prn(s) printf("this is a macro for printing a string: %s\n", s);
int main(){
    char str[5] = "test";
    prn(str);
    return 0;
}
I run gcc -E main.c -o out.c
I got:
/*
all stdio stuff
*/
int main(){
    char str[5] = "test";
    printf("this is a macro for printing a string: %s\n", str);
    return 0;
}
I'd like to output only:
#include <stdio.h>
int main(){
    char str[5] = "test";
    printf("this is a macro for printing a string: %s\n", str);
    return 0;
}
or, at least, just
int main(){
    char str[5] = "test";
    printf("this is a macro for printing a string: %s\n", str);
    return 0;
}
PS: it would be great if it were possible to expand "local" ("") includes but not "global" (<>) includes.
I agree with Matteo Italia's comment that if you just prevent the #include directives from being expanded, then the resulting code won't represent what the compiler actually sees, and therefore it will be of limited use in troubleshooting.
Here's an idea to get around that. Add a variable declaration before and after your includes. Any variable that is reasonably unique will do.
int begin_includes_tag;
#include <stdio.h>
... other includes
int end_includes_tag;
Then you can do:
> gcc -E main.c | sed '/begin_includes_tag/,/end_includes_tag/d'
The sed command will delete everything between those variable declarations.
When cpp expands includes, it adds # directives (linemarkers) so that errors can be traced back to the original files.
You can add a post processing step (it can be trivially written in any scripting language, or even in C if you feel like it) to parse just the linemarkers and filter out the lines coming from files outside of your project directory; even better, one of the flags (3) marks system header files (stuff coming from paths provided through -isystem, either implicitly by the compiler driver or explicitly), so that's something you could exploit as well.
For example in Python 3:
#!/usr/bin/env python3
import sys
skip = False
for l in sys.stdin:
    if not skip:
        sys.stdout.write(l)
    if l.startswith("# "):
        toks = l.strip().split(" ")
        linenum, filename = toks[1:3]
        flags = toks[3:]
        skip = "3" in flags
Using gcc -E foo.c | ./filter.py I get
# 1 "foo.c"
# 1 "<built-in>"
# 1 "<command-line>"
# 31 "<command-line>"
# 1 "/usr/include/stdc-predef.h" 1 3 4
# 1 "foo.c"
# 1 "/usr/include/stdio.h" 1 3 4
# 4 "foo.c"
int main(){
    char str[5] = "test";
    printf("this is a macro for printing a string: %s\n", str);;
    return 0;
}
Protect the #includes from getting expanded, run the preprocessor textually, remove the # 1 "<stdin>" etc. junk the textual preprocessor generates, and re-expose the protected #includes.
This shell function does it:
expand_cpp(){
    sed 's|^\([ \t]*#[ \t]*include\)|magic_fjdsa9f8j932j9\1|' "$@" \
    | cpp | sed 's|^magic_fjdsa9f8j932j9||; /^# [0-9]/d'
}
as long as you keep the word include together, instead of doing crazy stuff like
#i\
ncl\
u??/
de <iostream>
(above you can see two backslash continuation lines plus one trigraph (??/ == \) backslash continuation line).
If you wish, you can protect #if, #ifdef, #ifndef, #endif and #else the same way.
Applied to your example
example.c:
#include <stdio.h>
#define prn(s) printf("this is a macro for printing a string: %s\n", s);
int main(){
    char str[5] = "test";
    prn(str);
    return 0;
}
Calling it as expand_cpp < example.c or expand_cpp example.c generates:
#include <stdio.h>
int main(){
    char str[5] = "test";
    printf("this is a macro for printing a string: %s\n", str);;
    return 0;
}
You can use -dI to show the #include directives and post-process the preprocessor output.
Assuming the name of your file is foo.c:
SOURCEFILE=foo.c
gcc -E -dI "$SOURCEFILE" | awk '
/^# [0-9]* "/ { if ($3 == "\"'"$SOURCEFILE"'\"") show=1; else show=0; }
{ if(show) print; }'
or to suppress all # line_number "file" lines for $SOURCEFILE:
SOURCEFILE=foo.c
gcc -E -dI "$SOURCEFILE" | awk '
/^# [0-9]* "/ { ignore = 1; if ($3 == "\"'"$SOURCEFILE"'\"") show=1; else show=0; }
{ if(ignore) ignore=0; else if(show) print; }'
Note: The AWK scripts do not work for file names that include whitespace. To handle file names with spaces you could modify the AWK script to compare $0 instead of $3.
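A sketch of that modification (untested; it searches for the quoted name anywhere in the linemarker line, so spaces are fine as long as the file name contains no quote characters):
gcc -E -dI "$SOURCEFILE" | awk -v src="\"$SOURCEFILE\"" '
/^# [0-9]* "/ { show = index($0, src) > 0; }
{ if (show) print; }'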
Supposing the file is named c.c:
gcc -E c.c | tail -n +`gcc -E c.c | grep -n -e "#*\"c.c\"" | tail -1 | awk -F: '{print $1}'`
It seems that # <number> "c.c" marks the lines coming after each #include.
Of course you can also save gcc -E c.c to a file to avoid running it twice.
The advantage is that it neither modifies the source nor requires removing the #includes before running gcc -E; it just removes all the lines from the top up to the last linemarker produced by an #include, if I am right.
Many previous answers went in the direction of using the tracing # directives.
It's actually a one-liner in classical Unix (with awk):
gcc -E file.c | awk '/# [1-9][0-9]* "file.c"/ {skip=0; next} /# [1-9][0-9]* ".*"/ {skip=1} (skip<1) {print}'
TL;DR
Assign the file name to fname and run the following commands in a shell. Throughout this answer, fname is assumed to be a sh variable containing the name of the source file to be processed.
fname=file_to_process.c ;
grep -G '^#include' <./"$fname" ;
grep -Gv '^#include[ ]*<' <./"$fname" | gcc -x c - -E -o - $(grep -G '^#include[ ]*<' <./"$fname" | xargs -I {} -- expr "{}" : '#include[ ]*<[ ]*\(.*\)[ ]*>' | xargs -I {} printf '-imacros %s ' "{}" ) | grep -Ev '^([ ]*|#.*)$'
Everything except gcc here is pure POSIX sh, with no bashisms or nonportable options. The first grep is there to output the #include directives.
GCC's -imacros
From gcc documentation:
-imacros file: Exactly like '-include', except that any output produced by scanning file is thrown away. Macros it defines remain defined. This allows you to acquire all the macros from a header without also processing its declarations.
So, what is -include anyway?
-include file: Process file as if #include "file" appeared as the first line of the primary source file. However, the first directory searched for file is the preprocessor's working directory instead of the directory containing the main source file. If not found there, it is searched for in the remainder of the #include "..." search chain as normal.
Simply speaking, because you cannot use <> or "" in the -include directive, it will always behave as if #include "file" were in the source code.
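A small throwaway experiment makes the difference visible (the header h.h and its contents are mine, purely for illustration): with -include, the extern declaration from the header survives in the output; with -imacros it is thrown away, yet ANSWER expands to 42 either way.
$ printf '#define ANSWER 42\nextern int x;\n' > h.h
$ printf 'int main(){ return ANSWER; }\n' | gcc -x c -E - -include h.h
$ printf 'int main(){ return ANSWER; }\n' | gcc -x c -E - -imacros h.h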
First approach
ANSI C guarantees assert to be a macro, so it is perfect for a simple test:
printf 'int main(){\nassert(1);\nreturn 0;}\n' | gcc -x c -E - -imacros assert.h
The options -x c and - tell gcc to read the source file from stdin and that the language used is C. The output doesn't contain any declarations from assert.h, but there is still some mess, which can be cleaned up with grep:
printf 'int main(){\nassert(1);\nreturn 0;}\n' | gcc -x c -E - -imacros assert.h | grep -Ev '^([ ]*|#.*)$'
Note: in general, gcc won't expand tokens that are intended to be macros but whose definition is missing. Nevertheless, assert happens to expand entirely: __extension__ is a GCC keyword, __assert_fail is a function, and __PRETTY_FUNCTION__ behaves like a string literal.
Automation
The previous approach works, but it can be tedious:
1. each #include needs to be deleted from the file manually, and
2. it has to be added to the gcc call as the argument of -imacros.
The first part is easy to script: pipe grep -Gv '^#include[ ]*<' <./"$fname" to gcc.
The second part takes some exercising (at least without awk):
2.1 Drop the -v negative matching from the previous grep command: grep -G '^#include[ ]*<' <./"$fname"
2.2 Pipe the previous output to expr inside xargs to extract the header name from each include directive: xargs -I {} -- expr "{}" : '#include[ ]*<[ ]*\(.*\)[ ]*>'
2.3 Pipe again to xargs, and printf each name with an -imacros prefix: xargs -I {} printf '-imacros %s ' "{}"
2.4 Enclose it all in a command substitution "$()" and place it inside the gcc call.
Done. This is how you end up with the lengthy command from the beginning of my answer.
Solving subtle problems
This solution still has flaws: if local header files themselves contain global includes, those globals will be expanded. One way to solve this problem is to use grep + sed to transfer all global includes out of the local headers and collect them in each *.c file.
printf '' > std ;
for header in *.h ; do
    grep -G '^#include[ ]*<' <./$header >> std ;
    sed -i '/#include[ ]*</d' $header ;
done;
for source in *.c ; do
    cat std > tmp;
    cat $source >> tmp;
    mv -f tmp $source ;
done
Now the processing script can be called on any *.c file inside the working directory without worry that anything from the global includes will leak in. The final problem is duplication: local headers that include other local headers might introduce duplicates, but this can only occur when headers aren't guarded, and in general every header should always be guarded.
Final version and example
To show these scripts in action, here is a small demo:
File h1.h:
#ifndef H1H
#define H1H
#include <stdio.h>
#include <limits.h>
#define H1 printf("H1:%i\n", h1_int)
int h1_int=INT_MAX;
#endif
File h2.h:
#ifndef H2H
#define H2H
#include <stdio.h>
#include "h1.h"
#define H2 printf("H2:%i\n", h2_int)
int h2_int;
#endif
File main.c:
#include <assert.h>
#include "h1.h"
#include "h2.h"
int main(){
    assert(1);
    H1;
    H2;
}
Final version of the script preproc.sh:
fname="$1"
printf '' > std ;
for source in *.[ch] ; do
    grep -G '^#include[ ]*<' <./$source >> std ;
    sed -i '/#include[ ]*</d' $source ;
    sort -u std > std2;
    mv -f std2 std;
done;
for source in *.c ; do
    cat std > tmp;
    cat $source >> tmp;
    mv -f tmp $source ;
done
grep -G '^#include[ ]*<' <./"$fname" ;
grep -Gv '^#include[ ]*<' <./"$fname" | gcc -x c - -E -o - $(grep -G '^#include[ ]*<' <./"$fname" | xargs -I {} -- expr "{}" : '#include[ ]*<[ ]*\(.*\)[ ]*>' | xargs -I {} printf '-imacros %s ' "{}" ) | grep -Ev '^([ ]*|#.*)$'
Output of the call ./preproc.sh main.c:
#include <assert.h>
#include <limits.h>
#include <stdio.h>
int h1_int=0x7fffffff;
int h2_int;
int main(){
((void) sizeof ((
1
) ? 1 : 0), __extension__ ({ if (
1
) ; else __assert_fail (
"1"
, "<stdin>", 4, __extension__ __PRETTY_FUNCTION__); }))
;
printf("H1:%i\n", h1_int);
printf("H2:%i\n", h2_int);
}
This should always compile. If you really want to print every #include "file" as well, then delete the < from the grep pattern '^#include[ ]*<' in the 16th line of preproc.sh, but be warned that the content of the headers will then be duplicated, and the code might fail if headers contain initialisation of variables. This is purposely the case in my example, to highlight the problem.
Summary
There are plenty of good answers here, so why yet another? Because this seems to be a unique solution with the following properties:
Local includes are expanded
Global includes are discarded
Macros defined either in local or global includes are expanded
The approach is general enough to be usable not only with toy examples, but in actual small and medium projects that reside in a single directory.
I would like to expand include directives of a C file of my working directory only; not the system directory.
I tried the following:
gcc -E -nostdinc -I./ input.c
But it stops preprocessing when it fails to find the system headers included by input.c. I would like it to copy the include directive when it can't find the header and keep preprocessing the file.
If your input.c file includes some system headers, it's normal that the preprocessor fails when it cannot find them.
You could first use grep -v to remove all #includes of system headers from your code (the list here is non-exhaustive):
grep -vE "(stdio|stdlib)\.h" code.c > code_.c
You then get, for instance:
#define EXITCODE 0
int main(){
    int i = EOF;
    printf("hello\n");
    return EXITCODE;
}
then pre-process:
S:\c>gcc -E code_.c
# 1 "code_.c"
# 1 "<built-in>"
# 1 "<command-line>"
# 1 "code_.c"
int main(){
    int i = EOF;
    printf("hello\n");
    return 0;
}
Note that the preprocessor doesn't care about functions or macros that aren't defined. You get your code preprocessed (and your macros expanded), but not the system ones.
You have to process all included files as well of course. That means an extra layer of tools to create temp source files and work from there.
I found a utility that does exactly what I was looking for:
$ cpphs --nowarn --nomacro -I./ input.c | sed -E 's|#line 1 "missing file: (.*)"|#include <\1>|'
I have two (Ubuntu Linux) bash scripts which take input arguments. They need to be run simultaneously. I tried execve with arguments, e.g.:
char *argv[10] = { "/mnt/hgfs/F/working/script.sh", "file1", "file2", NULL };
execve(argv[0], argv, NULL);
but the bash script can't seem to find any arguments at e.g. $0, $1, $2.
printf "gcc -c ./%s.c -o ./%s.o\n" $1 $1;
gcc -c ./$1.c -o ./$1.o -g
exit 0;
The output is gcc -c ./main.c -o ./main.o
and then a lot of errors like /usr/include/libio.h:53:21: error: stdarg.h: No such file or directory
What's missing?
Does your script start with the hashbang line? I think that's a must, something like:
#!/bin/bash
For example, see the following C program:
#include <stdio.h>
#include <unistd.h>
char *argv[10] = { "./qq.sh", "file1", NULL };
int main (void) {
    int rc = execve (argv[0], argv, NULL);
    printf ("rc = %d\n", rc);
    return 0;
}
When this is compiled and run with the following qq.sh file, it outputs rc = -1:
echo $1
When you change the file to:
#!/bin/bash
echo $1
it outputs:
file1
as expected.
The other thing you need to watch out for is using these VMware shared folders, evidenced by /mnt/hgfs. If the file was created with a Windows-type editor, it may have the "DOS" line endings of carriage-return/line-feed; that may well be causing problems with the execution of the scripts.
You can check for this by running:
od -xcb /mnt/hgfs/F/working/script.sh
and seeing if any \r characters appear.
For example, if I use the shell script with the hashbang line in it (but append a carriage return to that line), I also get the rc = -1 output, meaning it couldn't find the shell.
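If \r characters do show up, one quick way to strip them (or use dos2unix, if it is installed):
$ tr -d '\r' < /mnt/hgfs/F/working/script.sh > /tmp/script.fixed &&
      mv /tmp/script.fixed /mnt/hgfs/F/working/script.sh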
And, now, based on your edits, your script has no trouble interpreting the arguments at all. The fact that it outputs:
gcc -c ./main.c -o ./main.o
is proof positive of this since it's seeing $1 as main.
The problem you actually have is that the compiler is running but cannot find stdarg.h, included from your libio.h file; this has nothing to do with whether bash can see those arguments.
My suggestion is to try and compile it manually with that command and see if you get the same errors. If so, it's a problem with what you're trying to compile rather than a bash or exec issue.
If it does compile okay, it may be because of the destruction of the environment variables in your execve call: passing NULL as the envp argument gives the new process an empty environment, so variables like PATH are gone; passing your own environment (e.g. environ) instead may help.
I've got a known, predetermined set of calls to a function
FUNC_A("ABCD");
FUNC_A("EFGH");
And what I was hoping to do was something like
#define FUNC_A("ABCD") 0
#define FUNC_A("EFGH") 1
#define FUNC_A(X) 0xFF
So that the whole thing gets replaced by the integer before compiling, and I can switch on the value and not have to store the string or do the comparison at run-time.
I realize that we can't do this in the preprocessor, but I was just wondering if anyone has come across some nifty way of getting around this seemingly solvable problem.
You may handcraft your comparison if you need that, but this will be tedious. For simplicity let us suppose that we want to do it for the string "AB":
#define testAB(X) ((X) && (X)[0] == 'A' && (X)[1] == 'B' && !(X)[2])
This will return 1 when the string is equal to "AB" and 0 otherwise; it also takes care that the string is of the correct length, does not access beyond array bounds, etc.
The only thing that you'd have to worry, is that the argument X is evaluated multiple times. This isn't a problem if you pass in a string literal, but would be for expressions with side effects.
For string literals any decent compiler should be able to replace such an expression at compile time.
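One way to convince yourself of that constant folding (the file t.c and the function f are just an illustration): compile to assembly with optimization on and check that the whole comparison chain collapses to a constant:
$ cat > t.c <<'EOF'
#define testAB(X) ((X) && (X)[0] == 'A' && (X)[1] == 'B' && !(X)[2])
int f(void) { return testAB("AB"); }
EOF
$ gcc -O1 -S -o - t.c | grep -A3 '^f:'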
For doing what you describe, avoiding strings and run-time comparisons, I can only think of a pre-preprocessor. If it were just for quick hacking around, in a Unix environment I'd try a simple wrapper for the preprocessor: a bash script that in turn uses sed or awk to replace the functions and arguments mentioned and then calls the real cpp preprocessor. I'd consider this just a quick hack.
Update: In Linux with gcc, it seems easier to do a post-preprocessor, because we can replace the generated .i file (but we can't generally do that with the original .c file). To do that, we can make a cc1 wrapper.
Warning: this is another dangerous and ugly hack. Also see Custom gcc preprocessor
This is a cc1 wrapper for doing that. It's a bash script for linux and gcc 4.6:
#!/bin/bash
# cc1 that does post preprocessing on generated .i files, replacing function calls
#
# note: doing post preprocessing is easier than pre preprocessing, because in post preprocessing we can replace the temporary .i file generated by the preprocessor (in case of doing pre preprocessing, we should change the original .c file -this is unacceptable-; or generate a new temp .c file with our preprocessing before calling the real preprocessor, but then eventual error messages are now referring to the temp .c file..)
convert ()
{
    local i=$1
    local o=$2
    ascript=$(cat <<- 'EOAWK'
{
    FUNCT=$1;
    ARGS=$2;
    RESULT=$3;
    printf "s/%s[ \\t]*([ \\t]*%s[ \\t]*)/%s/g\n", FUNCT, ARGS, RESULT;
}
EOAWK
)
    seds=$(awk -F '|' -- "$ascript" << EOFUNCS
FUNC_A|"ABCD"|0
FUNC_A|"EFGH"|1
FUNC_A|X|0xFF
EOFUNCS
)
    sedfile=$(mktemp --tmpdir prepro.sed.XXX)
    echo -n "$seds" > "$sedfile"
    sed -f "$sedfile" "$i" > "$o"
    rc=$?
    rm "$sedfile"
    return $rc
}
for a
do
    if [[ $a = -E ]]
    then
        isprepro=1
    elif [[ $isprepro && $a = -o ]]
    then
        getfile=1
    elif [[ $isprepro && $getfile && $a =~ ^[^-].*[.]i ]]
    then
        ifile=$a
        break
    fi
done
#echo "args:$#"
#echo "getfile=$getfile"
#echo "ifile=$ifile"
realcc1=/usr/lib/gcc/i686-linux-gnu/4.6/cc1
$realcc1 "$@"
rc=$?
if [[ $rc -eq 0 && $isprepro && $ifile ]]
then
    newifile=$(mktemp --tmpdir prepro.XXX.i)
    convert "$ifile" "$newifile" && mv "$newifile" "$ifile"
fi
exit $rc
How to use it: call gcc with the flags -B (the directory where the cc1 wrapper resides) and --no-integrated-cpp.