Strange cscope command line limitation

Cscope has eleven search input fields in interactive mode, but when I try to use it in line-oriented output mode and specify the "Find all symbol assignments:" field using the -10 switch, it does not work. Any ideas?
Thanks.

I also see something a little strange.
In a terminal,
cscope -d
gives the following options:
Find this C symbol:
Find this global definition:
Find functions called by this function:
Find functions calling this function:
Find this text string:
Change this text string:
Find this egrep pattern:
Find this file:
Find files #including this file:
But, using my cscope plugin in gvim,
:cs help
gives the following options:
find : Query for a pattern (Usage: find c|d|e|f|g|i|s|t name)
c: Find functions calling this function
d: Find functions called by this function
e: Find this egrep pattern
f: Find this file
g: Find this definition
i: Find files #including this file
s: Find this C symbol
t: Find assignments to
The "Find assignments to" option is available only in the second.
So, for line-oriented output mode, the closest seems to be the "Find this text string:" option. That can be done as
cscope -d -L -4 <text>
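For reference, each match in -L mode comes back as one line of the form "file scope line text". An illustrative (made-up) result for the command above:
a.c funA 42 x = counter;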

The assignment option was added by a Red Hat patch; it is not part of the original cscope. It seems they patched only the ncurses interface without updating the corresponding command-line options.

Related

Get all the functions' names from c/cpp files

For example, suppose there is a C file a.c that contains three functions: funA(), funB() and funC().
I want to get all the function names from this file.
Additionally, I also want to get the start line number and end line number of each function.
Is there any solution?
Can I use clang to implement it?
You can compile the file and use nm (http://en.wikipedia.org/wiki/Nm_(Unix)) on the generated binary, then parse the output of nm to get the function names.
If you want line numbers, you can use the function names to search the source file for them.
All of this can be accomplished with a short perl script that makes system calls to gcc and nm.
This is assuming you are using a *nix system, of course...
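A minimal shell sketch of the same idea (shell instead of perl, using the a.c from the question):
cc -c a.c && nm a.o | awk '$2 == "T" { print $3 }'
The T entries in nm's output are the globally defined functions; static functions appear with a lowercase t.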
One solution that works well for the job is cproto. It will scan source files (in K&R or ANSI-C format) and output the function prototypes. You can process entire directories of source files with a find command similar to:
find "$dirname" -type f -name "*.c" \
-exec /path/to/cproto -s \
-I/path/to/extra/includes '{}' >> "$outputfile" \;
While the cproto project is no longer actively developed, the cproto application continues to work very, very well. It provides function output in a reasonable form that can be fairly easily parsed/formatted as you desire.
Note: this is just one option based on my use. There are many others available.
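For illustration, run against the a.c above, cproto emits one prototype per function, roughly like this (exact types and formatting depend on the definitions and the cproto version):
/* a.c */
int funA(void);
int funB(void);
int funC(void);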

Bash script for extracting function calls from c files

I'm new to scripting, and I'm attempting to extract all function calls from the C files present in a directory.
Here is my code so far, but it seems to be giving no output.
#!/bin/bash
awk '/[ \t]*[a-zA-Z_]*\(([a-zA-Z_]*[ \t]*,?)*\);/ {print $0}' *.c
I'm stumped.
Also, the C files all have at least one function call.
You should debug your regexp: reduce it until you get some matches, then add the other parts back one at a time, checking that you still get the expected results.
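For instance, a minimal starting pattern (assuming you just want lines that look like calls, without validating the argument syntax) would be:
awk '/[a-zA-Z_][a-zA-Z0-9_]*[ \t]*\(/' *.c
which prints every line containing an identifier followed by an opening parenthesis. Digits in arguments, string literals, and nested calls are exactly the inputs that the stricter original pattern fails to match.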

How can I convert path containing wildcard to corresponding file entries in C program?

I'm trying to implement the ls command with wildcard support (*).
I have just learned that most shells expand an argument containing * into the matching entries before running the ls command.
For example, say the directory foo contains a.file, b.file, and the directory bar,
and the directory bar contains c.file, d.file, and e.file.
Assume that the current directory is foo.
Then the argument */* is expanded to the following entries:
"bar/c.file", "bar/d.file", "bar/e.file"
How can a program perform this expansion? I don't know where to start, and there are many possible cases:
*/../*, ../../*, */*/*, etc.
Any advice would be awesome. Thank you.
You can of course use glob() to do a lot of this work.
Such patterns are called globs, for some reason I won't dig up now. :)
POSIX provides glob(3) for programmatic wildcard path expansion.
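A minimal sketch of the programmatic route in C, using the question's */* pattern (error handling kept to the basics):

#include <glob.h>
#include <stdio.h>

int main(void)
{
    glob_t g;

    /* Expand the pattern against the filesystem, the same way the shell would. */
    if (glob("*/*", 0, NULL, &g) == 0) {
        for (size_t i = 0; i < g.gl_pathc; i++)
            puts(g.gl_pathv[i]);   /* e.g. bar/c.file, bar/d.file, bar/e.file */
        globfree(&g);              /* free the memory allocated by glob() */
    }
    return 0;
}

Patterns such as */../* or ../../* need no special casing; glob() resolves them against the directory tree just as the shell does.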

Is there a file include mechanism for YACC files?

I have three programs that currently use YACC files to do configuration file parsing. For simplicity, they all read the same configuration file; however, they each respond to keys/values uniquely (so the same .y file can't be used for more than one program). It would be nice not to have to repeat the %token declarations for each one: if I want to add one token, I have to change three files? What year is it??
These methods aren't working or are giving me issues:
The C preprocessor is obviously run AFTER we YACC the file, so #include for a #define or other macro will not work.
I've tried to script up something similar using sed:
REPLACE_DATA=$(cat <file>)
NEW_FILE=<file>.tmp
sed 's/$PLACEHOLDER/$REPLACE_DATA/g' <file> > $NEW_FILE
However, it seems to strip the newlines in REPLACE_DATA, and it matches the literal text $PLACEHOLDER rather than the contents of the PLACEHOLDER variable (the single quotes keep the shell from expanding anything inside the sed expression).
Is there a real include mechanism in YACC, or are there other solutions I'm missing? This is a maintenance nightmare and I'm hoping someone else has run into a similar situation. Thanks in advance.
Here's a sed version from http://www.grymoire.com/Unix/Sed.html#uh-37:
#!/bin/sh
# watch out for a '/' in the parameter
# use alternate search delimiter
sed -e '\_#INCLUDE <'"$1"'>_{
r '"$1"'
d
}'
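Used as a filter (file names here are hypothetical), with the shared %token declarations kept in tokens.inc and a line such as #INCLUDE <tokens.inc> in each grammar template, that line is replaced by the file's contents:
sh include.sh tokens.inc < parser1.y.in > parser1.y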
But traditionally, we used the m4 preprocessor before yacc.
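A sketch of the m4 route (file names again hypothetical): keep the shared %token declarations in tokens.m4, start each grammar source, say parser1.y.m4, with include(`tokens.m4'), and generate the real grammar before running yacc:
m4 parser1.y.m4 > parser1.y
yacc -d parser1.y
One caveat: m4 will happily expand its own macro names (such as define) if they appear in your C actions, so quote or rename where needed.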

Shell redirection vs explicit file handling code

I am not a native English speaker, so please excuse the awkward title of this question; I simply did not know how to phrase it better.
I am on a FreeBSD box and I have a little filter tool written in C which reads a list of data via stdin and outputs a processed list via stdout. I invoke it somewhat like this: find . -type f | myfilter > /tmp/processed.txt.
Now I want to give my filter a little bit more exposure and publish it. Convention says that tools should allow something like this: find . -type f | myfilter -f - -o /tmp/processed.text
This would force me to write code that simply is not needed, since the shell can do the job, so I tend to leave it out.
My question is: am I missing some argument (other than convention) for why the reading and writing of files should be done in my code and not delegated to shell redirection?
There's absolutely nothing wrong with this. Your filter would have an interface similar to, say, c++filt.
You might consider file handling if you wanted to automatically choose an output file based on the name of an input file, or if you wanted special handling for processing multiple files in a single command.
If you don't want to do either of these, then there's nothing wrong with being a simple filter. Anyone can provide a set of simple shell wrappers to give a cmd infile outfile syntax if they wish.
That's a needlessly limiting interface. Accepting file names as ordinary command-line arguments is more flexible; the tool still works in a pipeline:
grep foo file | myfilter > /tmp/processed.text
and it doesn't preclude find from being used:
find . -type f -exec myfilter {} + > /tmp/processed.text
Actually, to get the same effect as shell redirection you can do this:
freopen("filename", "wb", stdout);
If you have used printf throughout your code, output will then be redirected to the file, so you don't need to modify any of the code you've written before, and you can easily adapt to the convention.
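For instance, a minimal sketch of wiring a -o option to freopen (the option handling is deliberately simplistic; real code would use getopt):

#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
    /* If "-o FILE" is given, point stdout at FILE; otherwise leave it alone. */
    for (int i = 1; i + 1 < argc; i++) {
        if (strcmp(argv[i], "-o") == 0 && freopen(argv[i + 1], "w", stdout) == NULL) {
            perror(argv[i + 1]);
            return 1;
        }
    }
    printf("processed output\n");   /* every existing printf now writes to the file */
    return 0;
}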
It is nice to have the option of running the command with filename arguments, as in your example:
myfilter [-f ./infile] [-o ./outfile] #or
myfilter [-o outfile] [filename] #and (the best one)
myfilter [-f file] [-o file] #so even when the input and the output are the same file, the filter should still work correctly
For a nice example, check the sort command. It is usually used as a filter in pipes, but it can do [-o output] and correctly handles the same-input-and-output problem too...
And why is this good? For example, when you want to run the command from C via fork/exec and don't want to start a shell just to handle the I/O. In that case it is much easier (and faster) to execve(...) the command with arguments than to start it through a shell wrapper.
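A sketch of that case (the binary path and flags are hypothetical):

#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();

    if (pid == 0) {
        /* Child: exec the filter directly; no shell is needed for the I/O plumbing. */
        char *args[] = { "myfilter", "-f", "in.txt", "-o", "out.txt", NULL };
        char *envp[] = { NULL };
        execve("/usr/local/bin/myfilter", args, envp);
        perror("execve");           /* reached only if the exec fails */
        _exit(127);
    }
    if (pid > 0)
        waitpid(pid, NULL, 0);      /* wait for the filter to finish */
    return 0;
}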
