Are there any libraries out there that I can pass my .c files through that will count the visible number of, for example, "if" statements?
We don't have to worry about "if" statements in other files called from the current file; just count those in the current file.
I can do a simple grep or regex, but I wanted to check whether there is something better (yet still simple).
If you want to be sure it's done right, I'd make use of Clang and walk the AST. A URL to get you started:
http://clang.llvm.org/docs/IntroductionToTheClangAST.html
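For example, here is a minimal sketch using libclang's C API (assuming the libclang headers are installed; build with something like cc count_ifs.c -lclang, where the file name is just illustrative). It parses one file and counts the if statements that appear in that file itself:

    #include <stdio.h>
    #include <clang-c/Index.h>

    /* Visitor: bump the counter for every if statement in the main file. */
    static enum CXChildVisitResult count_ifs(CXCursor cursor, CXCursor parent,
                                             CXClientData data)
    {
        (void)parent;
        if (clang_getCursorKind(cursor) == CXCursor_IfStmt &&
            clang_Location_isFromMainFile(clang_getCursorLocation(cursor)))
            ++*(unsigned *)data;
        return CXChildVisit_Recurse;
    }

    int main(int argc, char **argv)
    {
        if (argc < 2) {
            fprintf(stderr, "usage: %s file.c\n", argv[0]);
            return 1;
        }
        CXIndex index = clang_createIndex(0, 0);
        CXTranslationUnit tu = clang_parseTranslationUnit(
            index, argv[1], NULL, 0, NULL, 0, CXTranslationUnit_None);
        if (!tu) {
            fprintf(stderr, "failed to parse %s\n", argv[1]);
            return 1;
        }
        unsigned count = 0;
        clang_visitChildren(clang_getTranslationUnitCursor(tu),
                            count_ifs, &count);
        printf("%u if statement(s)\n", count);
        clang_disposeTranslationUnit(tu);
        clang_disposeIndex(index);
        return 0;
    }

Because this walks the real AST, it is immune to "if" appearing in comments, strings, or identifiers.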
First off, there is no way to use regular expressions or grep to get the correct answer you are looking for. They will happily find those strings, but the matches can be buried in escape sequences, string literals, comments, and so on.
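For example, a plain text search for "if" happily counts all of these, and none of them is a live if statement:

    /* what if the comment mentions one? */
    const char *msg = "if only grep knew better";
    #if 0
    if (never_compiled) { /* excluded by the preprocessor */ }
    #endif
    int verify(int x) { return x; }   /* "if" buried inside an identifier */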
As some commenters have stated, you will need a parser/lexer that understands the C language. You said you want something simple, so you won't be writing one yourself :)
This seems like it might be usable for you:
http://wiki.tcl.tk/3891
From the page:
lexes a string containing C source into a list of tokens
That will probably get you what you want, but even then it's not going to be trivial.
What everyone has said so far is correct; it seems a lot easier to just grep the shit out of your file. The performance hit of this is negligible compared to the alternative, which is to get the gcc source code (or whichever compiler you're using), go through the parsing code, and hook in what you want while it builds the syntax tree. That seems like a pain in the ass, especially when all you're worried about is conditional statements.

If you actually care about the branches, you could instead look at the object code and count the conditional branch instructions in the assembly. That would correctly tell you the number of branches, rather than relying on how many times you typed a conditional, which does not translate exactly to the branching of the program.
For some time now, I've been looking to build a logging framework in C (not C++!) for small microcontrollers or other devices with a small footprint. My idea is to hash the strings being logged to a certain value and save just the hashed value with the timestamp, instead of the complete ASCII string. The hash can then be correlated with a 'database' file generated by an external process that parses the logged strings out of the C source files and saves them along with their hash values.
After doing a little research I found that this idea is not new, but I have not found an implementation of it in C. It has been worked out in other languages, but that is not the goal of my exercise. An example is this talk, where the same concept is worked out in C++: youtube.com/watch?v=Dt0vx-7e_B0
Some of the requirements that I've set myself for this library are the following:
as portable C code as possible
COMPILE-TIME hashing for the string-to-hash conversion; a single log statement with no parameters/arguments should compile down to the equivalent of just printf("%d\n", hashed_value).
arguments can be passed to the logging statement similar to the printf function.
user can define their own output function (console, file descriptor, sending the data directly over a UART connection, ...)
fast to run!! Fast to compile is nice to have, but compilation should not be terribly slow.
very easy to use, with no complicated API.
But what is a good approach to achieve this in C? I've tried several things now, but have not found a good method.
An overview of things I've tried so far, along with the drawbacks are:
Full pre-processor string hashing: I did get it working, but compile times were terribly slow. The code also does not feel very portable across C compilers.
Semi pre-processor string hashing: the idea was to generate a hash for each string and put a #define for each string/hash pair in an external, generated header file. The problem is that I cannot figure out a way to map a given string literal back to the correct preprocessor define.
Letting go of the default logging macro with a string pointer: instead of the usual LOG_DEBUG("Some logging statement"), an external parser converts it to /*LOG_DEBUG("Some logging statement") */ LOG_RAW(45). This solves the string-hashing problem, since the external parser substitutes the correct hash, but it is not the cleanest to read because the original statement survives only as a comment (a sketch of this shape follows below).
Expanding this idea to take care of arguments also proved tricky: how do you handle multiple types of variables as efficiently as possible?
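For concreteness, a minimal sketch of the shape that approach produces; log_emit and the hash value 45 are illustrative, not from a real implementation:

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical user-defined output hook; it could just as well write
       to a file descriptor or push bytes out over a UART. */
    static void log_emit(uint32_t hash)
    {
        printf("%u\n", hash);
    }

    #define LOG_RAW(hash) log_emit((uint32_t)(hash))

    int main(void)
    {
        /* LOG_DEBUG("Some logging statement") */ LOG_RAW(45);
        return 0;
    }

With no arguments, a log statement really does boil down to a single printf-equivalent call; the hard parts described above (hashing at compile time and passing typed arguments) are exactly what this sketch leaves out.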
I've tried some other methods, but all without success. Especially when I want to add arguments to log the value of a variable, it gets very complicated, and I do not get the required result...
I'm currently creating Linux shell to learn more about system calls.
I've already figured out most of the things. Parser, token generation, passing appropriate things to appropriate system calls - works.
The thing is that even before I start making tokens, I split the whole command string into separate words. It's based on an array of separators, and it works surprisingly well. But I'm struggling to add extra functionality to it, like escape sequences or quotes, and I can't really live without those, since even people using basic grep commands quote their arguments. I'll need to add functionality for:
' ' - ignore every other separator, operator or double quotes found between those two, pass this as one string, don't include these quotation marks into resulting word,
" "- same as above, but ignore single quotes,
\\ - escape this into single backslash,
\(space) - escape this into space, do not parse resulting space as separator
\", \' - analogously to the above.
many other things that I haven't yet figured out I need,
and every single one of them seems like an exception of its own. Each must handle a diversity of possible positions in commands, being included in the result or not, and influencing the rest of the parsing. It makes my code look like a big ball of mud.
Is there a better approach to do this? Is there a more general algorithm for that purpose?
You are trying to solve a classic problem in program analysis (lexing and parsing) using a nontraditional lexer structure ("I split whole command string into separate words..."). Expect, then, to have nontraditional troubles getting the lexer right.
That doesn't mean the approach is doomed to failure, and without seeing specific instances of your problem (you list a set of constructs you want to handle, but don't say why they are hard to process), it is hard to give specific advice. It also doesn't mean the approach will lead to success: splitting the line first may break tokens that shouldn't be broken, usually by getting confused about what has been escaped.
The point of using a standard lexer generator (such as Flex or any of the thousand variants you can get) is that it provides a proven approach to complex lexing problems, based on the idea that regular expressions can describe the shape of individual lexemes. You get one regexp per lexeme type, so there may be an ocean of them, but each one is pretty easy to specify by itself.
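For instance, a minimal flex specification for shell-like words might look as follows (illustrative only, not a complete shell lexer; the token names are invented):

    %option noyywrap
    %%
    \"([^"\\]|\\.)*\"    { printf("DQUOTED %s\n", yytext); }
    '[^']*'              { printf("SQUOTED %s\n", yytext); }
    \\.                  { printf("ESCAPED %s\n", yytext); }
    [ \t\n]+             { /* separators: skip */ }
    [^ \t\n"'\\]+        { printf("WORD %s\n", yytext); }
    %%
    int main(void) { return yylex(); }

Build with flex words.l && cc lex.yy.c -o words. Each rule is a single regexp, which is the point: the messy interactions between the quoting rules are handled by the generated automaton rather than by hand-written special cases.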
I've done ~40 languages using strong lexers and parsers (one of the many tools of that kind). I assure you the standard approach is empirically pretty effective: the types of surprises are well understood and manageable. A nonstandard approach always carries the risk that it will surprise you in a bad way.
Last remark: people have been adding crazy stuff to Unix shell languages for 40 years. Expect the job to be at least medium-hard, and don't expect the result to be as pretty as Wirth's original Pascal.
I'm searching for information about how to compare two pieces of code and decide whether code submitted by someone is correct or not (based on a solution code defined beforehand).
I could compare the output, but many different programs can produce the same output. So I think I must somehow compare the code itself and give a percentage of similarity.
Can anybody help me?
(The language is C, but I don't think that matters.)
Some of my teachers used online automated program grading systems like http://web-cat.org/
In the assignment they would specify a public API you must provide, and then they would write tests against your functions, much like unit tests. They would intentionally pick tests that exploit boundary conditions and other things students are notorious for not thinking about, and call your code with many different inputs to try to make it fail.
Sometimes they would hardcode the expected values, other times they would allow values within a range, and other times they just did the assignment themselves and made it so your own code has to match the results produced by their code.
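A sketch of what such a harness can look like; student_max, the stand-in implementation, and the test cases are all invented for illustration, and a real grader would link against the student's file instead:

    #include <stdio.h>

    int student_max(int a, int b);   /* the required public API */

    /* Stand-in "student" implementation so this sketch links and runs. */
    int student_max(int a, int b) { return a > b ? a : b; }

    int main(void)
    {
        struct { int a, b, want; } cases[] = {
            { 1, 2, 2 },
            { -5, -9, -5 },          /* negatives: a classic boundary case */
            { 7, 7, 7 },             /* equal inputs */
        };
        int failed = 0;
        for (unsigned i = 0; i < sizeof cases / sizeof cases[0]; ++i) {
            int got = student_max(cases[i].a, cases[i].b);
            if (got != cases[i].want) {
                printf("FAIL: max(%d,%d) = %d, want %d\n",
                       cases[i].a, cases[i].b, got, cases[i].want);
                ++failed;
            }
        }
        printf("%d test(s) failed\n", failed);
        return failed != 0;
    }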
Obviously, not all programs can be effectively graded this way. It's also somewhat error-prone: sometimes even the teacher made a mistake, overflowed an int or something, and then correct student submissions wouldn't match the teacher's incorrect results. But a system doesn't need to be perfect to be useful, and this raises an important point: manually grading by reading the code won't necessarily reveal all mistakes either.
Another possibility is to copy the submitted code, strip out all of the whitespace, and search for substrings that must exist for the code to be correct and/or substrings that must not exist for the code to be considered correct. The troublesome bit might be allowing for trickier requirements such as [(a or c), ((a or b) and c)], where each variable is the result of a boolean check on whether its associated substring exists in the code.
For example, [("printf"), ("for"), (not "1,2,3,4,5,6,7,9,10")] would require that "printf" and "for" appear as substrings in the code, while the literal "1,2,3,4,5,6,7,9,10" must not appear (which would suggest hard-coded output rather than a loop). I'm not familiar with C, so I'm assuming here that "printf" is required to print anything without involving output streams; that could be accounted for with something like [("printf" or "out"), ("for"), (not "1,2,3,4,5,6,7,9,10")], where "out" is the part of C code required to make use of output streams.
It might be possible to automatically find required substrings based on a "correct" code, but as others have mentioned, there are alternative ways to do things. Which is why hard-coding the "solution" is probably required. Even so, it's quite possible that you'll miss a required substring, and it'll be marked as wrong, but it's probably the only way you can do what you ask with some degree of success.
Regular expressions might be useful here.
When I have to parse text (e.g. config files or other rather simple/descriptive languages), there are several solutions that come to my mind:
using library functions, e.g. strtok(), sscanf() (a sketch follows after this list)
a finite state machine which processes one char at a time, tokenizing and parsing
using the explode() function I once wrote out of pure boredom
using lex/yacc (read: flex/bison) to generate an appropriate parser
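(To make the first option concrete, here is a hedged sketch for a key=value config file; the file name, format, and comment syntax are invented.)

    #include <stdio.h>

    int main(void)
    {
        FILE *f = fopen("app.conf", "r");
        if (!f) { perror("app.conf"); return 1; }

        char line[256], key[64], val[192];
        while (fgets(line, sizeof line, f)) {
            if (line[0] == '#' || line[0] == '\n')
                continue;                     /* skip comments and blanks */
            if (sscanf(line, " %63[^= \t] = %191[^\n]", key, val) == 2)
                printf("key=%s value=%s\n", key, val);
        }
        fclose(f);
        return 0;
    }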
I don't like the "library functions" approach. It feels clumsy and awkward. explode(), while it doesn't take much new code, feels even more blown up. And flex/bison often seems like sheer overkill.
I usually implement a FSM, but at the same time I already feel sorry for the poor guy that may have to maintain my code at a later point.
Hence my question:
What is the best way to parse relatively simple text files?
Does it matter at all?
Is there a commonly agreed-upon approach?
I'm going to break the rules a bit and answer your questions out of order.
Is there a commonly agreed-upon approach?
Absolutely not. IMHO the solution you choose should depend on (to name a few) your text, your timeframe, your experience, even your personality. If the text is simple enough to make flex and bison overkill, maybe C is itself overkill. Is it more important to be fast, or robust? Does it need to be maintained, or can it start quick and dirty? Are you a passionate C user, or can you be enticed away with the right language features? &c., &c.
Does it matter at all?
Again, this is something only you can answer. If you're working closely with a team of people, with particular skills and abilities, and the parser is important and needs to be maintained, it sure does matter! If you're writing something "out of pure boredom," I would suggest that it doesn't matter at all, no. :-)
What is the best way to parse relatively simple text files?
Well, I don't know that you're going to like my answer. Maybe first read some of the other fine answers here.
No, really, go ahead. I'll wait.
Ah, you're back and relaxed. Let's ease into things, shall we?
Never write it in 'C' if you can do it in 'awk';
Never do it in 'awk' if 'sed' can handle it;
Never use 'sed' when 'tr' can do the job;
Never invoke 'tr' when 'cat' is sufficient;
Avoid using 'cat' whenever possible.
-- Taylor's Laws of Programming
If you're writing it in C, but C feels like the wrong tool...it really might be the wrong tool. awk or perl will likely do what you're trying to do without all the aggravation. You may even be able to do it with cut or something similar.
On the other hand, if you're writing it in C, you probably have a good reason to write it in C. Maybe your parser is a tiny part of a much larger system, which, for the sake of argument, is embedded, in a refrigerator, on the moon. Or maybe you loooove C. You may even hate awk and perl, heaven forfend.
If you don't hate awk and perl, you may want to embed them into your C program. This is doable in principle, though I've never done it myself. For awk, try libmawk. For perl, there are probably a few ways (TMTOWTDI): you can run perl separately, using popen to start it, or you can actually embed a Perl interpreter into your C program; see man perlembed.
Anyhow, as I've said, "the best way to parse" entirely depends on you and your team, the problem space, and your approach to the issue. What I can offer is my opinion.
I'm going to assume that in your C-only solutions (library functions and FSM (considering your explode to essentially be a library function)) you've already done your best at isolating the relevant code, designing the code and files well, and so forth.
Even so, I'm going to recommend lex and yacc.
Library functions feel "clumsy and awkward." A state machine seems unmaintainable. But you say that lex and yacc feel like overkill.
I think you should approach your complaints differently. What you're really doing is specifying a FSM. However, you're also hiring someone to write and maintain it for you, thereby solving most of the maintainability problem. Overkill? Did I mention they'll work for free?
I suspect, but do not know, that the reason lex and yacc originally felt like overkill was that your config / simple files just felt too, well, simple. If I'm right (a big if), you may be able to do most of your work in the lexer. (It's even conceivable that you can do all of your work in the lexer, but I know nothing about your input.) If your input is not only simple but widespread, you may be able to find a lexer/parser combination freely available for what you need.
In short: if you can do this not in C, try something else. If you want C, use lex and yacc--they have a little overhead, but they're a very good solution.
If you can get it to work, I'd go with an FSM, but with a huge assist from Perl-compatible regular expressions. This library is easy to understand, and you ought to be able to trim back sufficient extraneous spaghetti to give your monster that aerodynamic flair to which all flying monsters aspire. That, and plenty of comments in well-structured spaghetti, ought to make your code-maintaining successor comfortable. (And, as I'm sure you know, that code-maintaining successor is you after six months, when you've moved on to something else and the details of this code have slipped your mind.)
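If you go that route, the modern incarnation of that library is PCRE2. A minimal, hedged sketch (build with cc demo.c -lpcre2-8; the pattern and subject are invented):

    #define PCRE2_CODE_UNIT_WIDTH 8
    #include <pcre2.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        int errcode;
        PCRE2_SIZE erroffset;
        /* Compile a Perl-compatible pattern with two capture groups. */
        pcre2_code *re = pcre2_compile(
            (PCRE2_SPTR)"^(\\w+)\\s*=\\s*(.*)$", PCRE2_ZERO_TERMINATED,
            0, &errcode, &erroffset, NULL);
        if (!re) {
            fprintf(stderr, "bad pattern at offset %lu\n",
                    (unsigned long)erroffset);
            return 1;
        }

        const char *subject = "timeout = 30";
        pcre2_match_data *md = pcre2_match_data_create_from_pattern(re, NULL);
        int rc = pcre2_match(re, (PCRE2_SPTR)subject, strlen(subject),
                             0, 0, md, NULL);
        if (rc >= 3) {                        /* whole match + 2 captures */
            PCRE2_SIZE *ov = pcre2_get_ovector_pointer(md);
            printf("key=%.*s value=%.*s\n",
                   (int)(ov[3] - ov[2]), subject + ov[2],
                   (int)(ov[5] - ov[4]), subject + ov[4]);
        }
        pcre2_match_data_free(md);
        pcre2_code_free(re);
        return 0;
    }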
My short answer is to use the right tool for the problem. If you have configuration files, use existing standards and formats, e.g. INI files, and parse them using Boost program_options.
If you enter the world of "own" languages, use lex/yacc, since they provide the required features, but consider the cost of maintaining the grammar and the language implementation.
As a result, I would recommend further narrowing your problem scope to find the right tool.
I regularly run into C code without folding. It is irritating to read, particularly with long files. How can I fold the code?
To fold according to syntax
:set foldmethod=syntax
If you want to do it manually on the bits you want to fold away
:set foldmethod=manual
then create new folds by selecting / moving and pressing zf
e.g.
shift-v j j zf
(ignoring the spaces)
Edit: Also see the comments of this answer for indent and marker foldmethods.
I think you may have mixed up the terminology. Do you need "wrapping" or "folding"? Wrapping is when lines that wouldn't usually fit on screen due to their length are wrapped, i.e. shown on several consecutive screen lines (it is actually one line, displayed as several; hard to explain, best seen in practice).
In vim wrapping is set by
:set wrap
to turn it on, and
:set textwidth=80
to determine where vim should wrap the text (80 characters is usually a nice measure).
Folding on the other hand is a completely different matter. It is the one where vim folds several lines of code (for example, a function) into one line of code. It is useful for increasing readability of code. Vim has several folding methods, you can see all of them if you
:help folding
What you are looking for is, I think, syntax folding, but I could be wrong. I recommend reading the help page; it is not long, and very useful.
Actually, there is another very straightforward and effective way, which is to use foldmethod=marker and set the foldmarker to {,}.
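For example:
:set foldmethod=marker
:set foldmarker={,}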
Then all of the functions are folded; basically, it looks like the outline view in an IDE. (You can also set foldlevel=1 or more if you do not want everything folded at the beginning.) Opening a fold with zo shows the normal function body again at level 1.
In addition, folding by syntax needs a bit of extra work, and there is a good tutorial about it. But I think folding by marker {,} is quite enough, and most importantly, it's simple and neat.
I've rolled up a fold plugin for C and C++. It goes beyond what syntax folding does (maybe syntax folding could be improved, I don't know), and it leaves fewer noisy, not-really-useful things unfolded compared to indentation- and marker-based folding.
The caveat: in order to get decent reaction times, I had to make some simplifications, and sometimes the result is quite messed up (we have to type zx to fix it).
Here is a little screencast to see how the plugin folds a correctly balanced C++ source code, which is not currently being modified :(
In vi (as opposed to vim) the answer was:
:set wm=1
This sets the wrap margin to one character before the end of the line. It isn't the world's best specification with variable-sized windows (it made sense on green-screen terminals, where it was hard to change the size).
That means there is also an alternative way to do it in vim:
:set textwidth=30
See: VimDoc User Manual Section 25.1
Then you probably want the setting
:set foldmethod=syntax
But don't put that in manually! That's missing out on one of Vim's biggest features, which is having custom settings for hundreds of file types already built in. To get them, add this to your ~/.vimrc:
filetype plugin on
filetype indent on
Filetype detection is mostly based on extension, in this case *.c files. See :help :filetype for more info. You can also customize these filetype-based settings.