Is there a file include mechanism for YACC files? - c

I have three programs that are currently using YACC files to do configuration file parsing. For simplicity, they all read the same configuration file; however, they each respond to keys/values uniquely, so the same .y file can't be used for more than one program. It would be nice not to have to repeat the %token declarations for each one - if I want to add one token, I have to change 3 files? What year is it??
These methods aren't working or are giving me issues:
The C preprocessor obviously runs AFTER yacc has processed the file, so pulling the declarations in with #include or a #define/other macro will not work.
I've tried to script up something similar using sed:
REPLACE_DATA=$(cat <file>)
NEW_FILE=<file>.tmp
sed 's/$PLACEHOLDER/$REPLACE_DATA/g' <file> > $NEW_FILE
However, it seems to strip the newlines in REPLACE_DATA, and because of the single quotes sed searches for the literal text $PLACEHOLDER instead of the contents of the PLACEHOLDER variable, so the replacement I want never happens.
Is there a real include mechanism in YACC, or are there other solutions I'm missing? This is a maintenance nightmare and I'm hoping someone else has run into a similar situation. Thanks in advance.

Here's a sed version, from http://www.grymoire.com/Unix/Sed.html#uh-37:
#!/bin/sh
# watch out for a '/' in the parameter
# use alternate search delimiter
sed -e '\_#INCLUDE <'"$1"'>_{
r '"$1"'
d
}'
But traditionally, we used the m4 preprocessor before yacc.

Related

Get all the functions' names from c/cpp files

For example, there is a C file a.c with three functions in it: funA(), funB() and funC().
I want to get all the function names from this file.
Additionally, I also want to get the start line number and end line number of each function.
Is there any solution?
Can I use clang to implement it?
You can compile the file and use nm http://en.wikipedia.org/wiki/Nm_(Unix) on the generated binary. You can then just parse the output of nm to get the function names.
If you want to get line numbers, you can use the function names to parse the source file for line numbers.
All of this can be accomplished with a short Perl script that makes system calls to gcc and nm.
This is assuming you are using a *nix system of course...
One solution that works well for the job is cproto. It will scan source files (in K&R or ANSI-C format) and output the function prototypes. You can process entire directories of source files with a find command similar to:
find "$dirname" -type f -name "*.c" \
-exec /path/to/cproto -s \
-I/path/to/extra/includes '{}' >> "$outputfile" \;
While the cproto project is no longer actively developed, the cproto application continues to work very, very well. It provides function output in a reasonable form that can be fairly easily parsed/formatted as you desire.
Note: this is just one option based on my use. There are many others available.

Bash script for extracting function calls from c files

I'm new to scripting, and I'm attempting to extract all function calls from the C files present in a directory.
Here is my code so far, but it seems to be giving no output.
#!/bin/bash
awk '/[ \t]*[a-zA-Z_]*\(([a-zA-Z_]*[ \t]*,?)*\);/ {print $0}' *.c
I'm stumped.
Also the c files all have at least one function call.
You should debug your regexp. Reduce it until you get some matches, then add the other parts back one at a time, checking that you still get the expected results.

Shell redirection vs explicit file handling code

I am not a native English speaker, so please excuse the awkward title of this question. I just did not know how to phrase it better.
I am on a FreeBSD box and I have a little filter tool written in C which reads a list of data via stdin and outputs a processed list via stdout. I invoke it somewhat like this: find . -type f | myfilter > /tmp/processed.txt.
Now I want to give my filter a little bit more exposure and publish it. Convention says that tools should allow something like this: find . -type f | myfilter -f - -o /tmp/processed.text
This would force me to write code that simply is not needed, since the shell can do the job, so I tend to leave it out.
My question is: am I missing some argument (other than convention) for why the reading and writing of files should be done in my code and not delegated to shell redirection?
There's absolutely nothing wrong with this. Your filter would have an interface similar to, say, c++filt.
You might consider file handling if you wanted to automatically choose an output file based on the name of an input file, or if you wanted special handling for processing multiple files in a single command.
If you don't want to do either of these, there's nothing wrong with being a simple filter. Anyone can provide a set of simple shell wrappers to provide a cmd infile outfile syntax if they wish.
That's a needlessly limiting interface. Accepting arguments from the command line is more flexible,
grep foo file | myfilter > /tmp/processed.text
and it doesn't preclude find from being used
find . -type f -exec myfilter {} + > /tmp/processed.text
Actually to have the same effect as shell redirection you can do this:
freopen( "filename" , "wb" , stdout );
and then, if you have used printf throughout your code, the output will be redirected to the file. So you don't need to modify any of the code you've written before, and you can easily adapt to the convention.
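For instance, a minimal sketch of that approach; the command-line handling here is made up for illustration, the answer itself only requires the freopen call:
#include <stdio.h>

int main(int argc, char *argv[])
{
    /* if an output filename was given, send stdout there;
       otherwise keep writing to the terminal / pipe as before */
    if (argc > 1 && freopen(argv[1], "w", stdout) == NULL) {
        perror("freopen");
        return 1;
    }

    printf("processed line 1\n");   /* existing printf calls are unchanged */
    printf("processed line 2\n");
    return 0;
}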
It is nice to have the option of running the command with filename arguments, as in your example:
myfilter [-f ./infile] [-o ./outfile] #or
myfilter [-o outfile] [filename] #and (the best one)
myfilter [-f file] [-o file] #so, when the input and output are the same file, the filter should still work correctly
For a nice example, check the sort command. It is usually used as a filter in pipes, but it can do [-o output] and correctly handles the same-input/output problem too...
And why is this good? For example, when you want to run the command from C via fork/exec and don't want to start a shell to handle the I/O. In that case it is much easier (and faster) to execve(...) with arguments than to start the command through a shell wrapper.
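For illustration, a rough sketch of that point, using the hypothetical myfilter and its -f/-o options from this thread:
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();

    if (pid == 0) {
        /* child: run the filter directly with explicit input/output files */
        char *args[] = { "myfilter", "-f", "in.txt", "-o", "out.txt", NULL };
        execvp("myfilter", args);
        perror("execvp");           /* only reached if the exec failed */
        _exit(127);
    } else if (pid > 0) {
        int status;
        waitpid(pid, &status, 0);   /* parent: wait for the filter to finish */
    } else {
        perror("fork");
        return 1;
    }
    return 0;
}
No /bin/sh is started here; the filter itself opens the named files.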

How to search and replace a particular code line in C source using Perl/anything else?

I have C source code spread over many source files (*.c). For porting reasons, say I need to comment out statements like the one below in all source files:
fprintf(stderr,"......",....);
The problem is that these fprintf calls can be multiline statements, i.e. broken across two or more lines in the source files (with a newline in the middle of the statement).
How can I find such fprintfs scattered across all the source files and replace them with a multiline C comment, like:
/*
*/
Since they are multiline, the find and replace command of source editors did not help.
I have been trying to read the source files with a Perl script and parse them to do this, but could not get it to work.
Any pointers would be useful.
thank you.
-AD.
What you are looking for is called "coccinelle", a semantic patch tool for C; with it you can do this easily. See http://coccinelle.lip6.fr/
Just
#undef fprintf
#define fprintf(stream, format, ...) 42
at the top of your files and be happy.
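For illustration, a small sketch of how that macro behaves; the message text is made up, and note that the trick silences every fprintf call, not only the ones writing to stderr (the compiler may also warn that the resulting statement has no effect):
#include <stdio.h>

#undef fprintf
#define fprintf(stream, format, ...) 42

int main(void)
{
    fprintf(stderr, "debug: %d\n", 7);      /* expands to the constant 42 and does nothing */
    printf("normal output still works\n");  /* printf is unaffected */
    return 0;
}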
For a pure perl solution try something like this:
my $all_lines;
{ # (limit scope of $/, if appropriate)
local $/; #slurp entire file in one go
$all_lines = <$file_handle>;
$all_lines =~ s|(\bfprintf\s*\(\s*stderr\s*,.*?\)\s*;)|/* $1 */|sg;
}

implementing globbing in a shell prototype

I'm implementing a Linux shell for my weekend assignment and I am having some problems implementing wildcard matching as a feature of the shell. As we all know, shells are a complete language by themselves, e.g. bash, ksh, etc. I don't need to implement the complete feature set like control structures, jobs, etc. But how do I implement the *?
A quick analysis gives you the following result:
echo *
lists all the files in the current directory. Is this the only logical manifestation of the shell? I mean, not considering the language-specific features of bash, is this what a shell does, internally? Replace a * with all the files in the current directory matching the pattern?
Also, I have heard about Perl Compatible Regular Expressions, but it seems overly complex to use a third-party library for this.
Any suggestions, links, etc.? I will try to look at the source code as well, for bash.
This is called "globbing" and the function performing this is named the same: glob(3)
Yes, that's what the shell does. It replaces '*' with the names of all the files and folders in the cwd that match the pattern. It is in fact a very basic pattern syntax, supporting only '?' and '*', matched against the file and folder names in the cwd.
Note that a backslashed \* and a '*' enclosed in single or double quotes (' or ") are not replaced (the backslash and the quotes are removed before the arguments are passed to the command being executed).
If you want more control than glob gives you, the standard function fnmatch performs just the matching part: it tests a single name against a pattern without touching the filesystem.
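A small sketch of that approach, walking the current directory with opendir/readdir and testing each entry with fnmatch(3); the "*.c" pattern is again just an example:
#include <dirent.h>
#include <fnmatch.h>
#include <stdio.h>

int main(void)
{
    DIR *dir = opendir(".");
    if (dir == NULL)
        return 1;

    struct dirent *entry;
    while ((entry = readdir(dir)) != NULL) {
        /* FNM_PERIOD: a leading '.' must be matched explicitly,
           which is what a shell normally wants */
        if (fnmatch("*.c", entry->d_name, FNM_PERIOD) == 0)
            printf("%s\n", entry->d_name);
    }

    closedir(dir);
    return 0;
}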
Note that shells also perform word expansion (e.g. "~" → "/home/user"), which should be done before glob expansion if you're doing the filename matching manually. (Or use wordexp.)
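A minimal sketch of wordexp(3), which does the tilde expansion and the globbing in one call; the "~/*.c" pattern is only an example:
#include <stdio.h>
#include <wordexp.h>

int main(void)
{
    wordexp_t we;

    /* wordexp performs tilde expansion first, then pathname expansion */
    if (wordexp("~/*.c", &we, 0) == 0) {
        for (size_t i = 0; i < we.we_wordc; i++)
            printf("%s\n", we.we_wordv[i]);  /* fully expanded words */
        wordfree(&we);
    }
    return 0;
}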
