Which static code analyzer (if any) do you use? I've been using PyLint for Python and I'm pretty satisfied with it; now I need something similar for C code.
How much of its output do you have to suppress for normal daily usage?
Wikipedia maintains a list of static code analysis tools for various languages (including C).
Personally, I have used both PC-Lint and Splint. The best choice depends on the type of application you have written. However, no matter which tool you use, there will be a low signal-to-noise ratio until you properly tune the tool and your code.
PC-Lint is the most powerful Lint tool I have used. If you add it to an existing project, the signal-to-noise ratio can be low. However, once the tool and your code are properly configured, it can be used as part of your standard build process. On the last major project where I used it, we set things up so that PC-Lint warnings would break the build. Licenses for PC-Lint cost $389, but it is worth the cost.
Splint is a great open-source tool. I have used it on several projects, but found that it can be difficult to configure when using a compiler with non-ANSI C extensions (e.g. on embedded systems projects).
Valgrind is also worth considering as a dynamic analysis tool.
You specifically requested feedback on SourceMonitor. This tool provides interesting metrics on your code, but it should be used as a supplement to a good Lint tool, as it does not provide that kind of analysis.
As stated on their home page, SourceMonitor will:
...find out how much code you have and to identify the relative complexity of your modules. For example, you can use SourceMonitor to identify the code that is most likely to contain defects and thus warrants formal review.
I used it on a recent project and found it to be easy to use (even for embedded systems code). The complexity metric is an excellent resource for developing code that will be less error-prone and easier to maintain.
SourceMonitor provides nice graphs of its output as well as well-formatted XML if you want to automate metrics collection. The only downside is that the tool only runs on Windows.
We use PC-Lint and are very happy with it.
There seem to be a few camps regarding message suppression and tuning:
suppress everything, then unsuppress only what you're interested in
unsuppress everything, then suppress warnings you're not interested in
keep everything unsuppressed
We tend to fall somewhere between the second and third categories. This does mean a ludicrous 100MiB+ text dump (one error per line) per lint run across the core libraries (lots of old code).
A custom diff-like tool watches for changes and emails those out to the commit's author, which keeps the amount that most people have to look at down to a few lines. We gather interesting statistics about errors-over-time with some basic data mining.
You can get really polished here, hyperlinking the errors back to more detailed descriptions, providing "points" for fixing existing warnings, etc...
There's Splint, although, to be honest, I've never been able to get it to work; on my platform it really is too overactive. In practice, my most-used "lint" is the following set of warning flags for gcc:
-std=c89 -pedantic -W -Wall -Wstrict-prototypes -Wunreachable-code -Wwrite-strings -Wpointer-arith -Wbad-function-cast -Wcast-align -Wcast-qual
Of course, I've mostly forgotten what half of them mean. But they catch quite a few things.
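To make that concrete, a typical invocation with those flags (the file name is just a placeholder) might look like:
gcc -std=c89 -pedantic -W -Wall -Wstrict-prototypes -Wunreachable-code -Wwrite-strings -Wpointer-arith -Wbad-function-cast -Wcast-align -Wcast-qual -c myfile.c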
I'm a big fan of David Evans's work on LCLint, which has since been renamed Splint. It is very aggressive, and you can tell it a lot of useful information by adding annotations to your code. It is designed to be used with programmer annotations. It will function without them, but if you try to use it as a simple checker without providing any annotations, you will probably be disappointed. If what you want is totally automated checking, and if you can deal with a Windows-only tool, you're better off with Gimpel's PC-Lint. Jim Gimpel has had happy customers for over 25 years.
I used PCLint forever and really liked it. I wish they'd get into C#... They are the ones with the pop quizzes on C or C++ code in all the magazines.
There is one in the LLVM Clang project: http://clang-analyzer.llvm.org. I have not tried it myself, but I intend to.
It looks pretty good in action:
http://www.mikeash.com/?page=pyblog/friday-qa-2009-03-06-using-the-clang-static-analyzer.html
The above is for Objective-C, but it should be much the same for C.
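If your project builds with make, the usual way to run the analyzer (assuming the clang tools are installed) is to wrap the build with scan-build, which intercepts compiler invocations and writes an HTML report:
scan-build make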
I understand that portability and customisation are among Vim's main goals. The abundance of comments in the manual about things working differently on various platforms does not surprise me. However, I don't understand why almost every feature can be disabled with a compile flag; what purpose does that serve? Wouldn't it be easier to be able to disable features at runtime, with some kind of configuration?
As far as I understand (I tried diving into vim code, but didn't write any patches), it makes the code base much more complicated, and that's exactly what Neovim developers are trying to remove. Why did vim developers follow this approach?
There are several forces for having many flags (some of which have already been mentioned in the comments):
portability: with quite different (and exotic, like DOS and Amiga) operating systems to support, different (GUI) libraries must be included, so there's a definite need for conditional compilation via #ifdef
on many Linux distributions, a tiny build of Vim serves as the default implementation for vi; that one should not have any of the extended features
it allows for utter flexibility: developers / packagers can mix and match freely (though I guess most will stick to the default tiny / big / huge feature bundles)
So, my (unofficial) guess is that a useful and necessary (see above) feature was increasingly used for more and more features as Vim grew and matured. Initially, this is good (consistency, a single mechanism to configure). When problems started to appear, it was too late to retroactively switch to a (better?) approach (like runtime configuration); the (development and testing) effort would now be prohibitive.
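As a rough illustration of the mechanism (option names may vary between Vim versions), the feature bundles mentioned above are chosen at configure time, for example:
./configure --with-features=huge
./configure --with-features=tiny   # roughly the minimal vi-like build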
By pure chance I stumbled over an article mentioning you can "enable" ASLR with -pie -fPIE (or, rather, make your application ASLR-aware). -fstack-protector is also commonly recommended (though I rarely see explanations of how, and against which kinds of attacks, it protects).
Is there a list of useful options and explanations how they increase the security?
...
And how useful are such measures anyway, when your application uses about 30 libraries that use none of those? ;)
Hardened Gentoo uses these flags:
CFLAGS="-fPIE -fstack-protector-all -D_FORTIFY_SOURCE=2"
LDFLAGS="-Wl,-z,now -Wl,-z,relro"
I saw about a 5-10% performance drop compared to an optimized Gentoo Linux (incl. PaX/SELinux and other measures, not just CFLAGS) in the default Phoronix benchmark suite.
As for your final question:
And how useful are such measures anyway, when your application uses about 30 libraries that use none of those? ;)
PIE is only necessary for the main program to be able to be loaded at a random address. ASLR always works for shared libraries, so the benefit of PIE is the same whether you're using one shared library or 100.
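If you want to check whether a given binary was actually built as PIE, one quick way (myprog is a placeholder name) is to look at the ELF type; a PIE executable is reported as DYN, a traditional one as EXEC:
readelf -h ./myprog | grep 'Type:'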
Stack protector will only benefit the code that's compiled with stack protector, so using it just in your main program will not help if your libraries are full of vulnerabilities.
In any case, I would encourage you not to consider these options part of your application, but instead part of the whole system integration. If you're using 30+ libraries (probably most of which are junk when it comes to code quality and security) in a program that will be interfacing with untrusted, potentially-malicious data, it would be a good idea to build your whole system with stack protector and other security hardening options.
Do keep in mind, however, that the highest levels of _FORTIFY_SOURCE and perhaps some other new security options break valid things that legitimate, correct programs may need to do, and thus you may need to analyze whether it's safe to use them. One known-dangerous thing that one of the options does (I forget which) is making it so the %n specifier to printf does not work, at least in certain cases. If an application is using %n to get an offset into a generated string and needs to use that offset to later write in it, and the value isn't filled in, that's a potential vulnerability in itself...
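To make the %n issue concrete, here is a small, legal C program of the kind a fortified glibc may refuse at runtime, because the format string containing %n lives in writable memory (the variable names are invented for illustration):

#include <stdio.h>

int main(void)
{
    char fmt[] = "user=%s%n rest of record\n";  /* format string in writable memory */
    int offset = 0;

    /* %n stores the number of characters written so far into 'offset';
       with -D_FORTIFY_SOURCE=2, glibc may abort on this call. */
    printf(fmt, "alice", &offset);
    printf("offset after the name field: %d\n", offset);
    return 0;
}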
The Hardening page on the Debian wiki explains at least the most common ones that are usable on Linux. Missing from your list are at least -D_FORTIFY_SOURCE=2, -Wformat, -Wformat-security, and, for the dynamic loader, the relro and now features.
Variations of this question have been asked, but not specific to GNU/Linux and C. I use Komodo Edit as my usual Editor, but I'd actually prefer something that can be used from CLI.
I don't need C++ support; it's fine if the tool can only handle plain C.
I really appreciate any direction, as I was unable to find anything.
I hope I'm not forced to 'roll' something myself.
NOTE: Please refrain from mentioning vim; I know it exists and what its capabilities are. I purposefully choose to avoid vim, which is why I use Komodo (or nano on the servers).
I don't think a pure console refactoring tool would be nice to use. I use Eclipse CDT on Linux to write and refactor C code.
There is also Xrefactory for Emacs (http://www.xref.sk/xrefactory/main.html), if a non-console refactoring tool is OK for you as well.
C-xrefactory was an open source version of xrefactory, covering C and Java, made available on SourceForge by Marián Vittek under GPLv2.
For those interested, there's an actively maintained c-xrefactory fork on GitHub:
https://github.com/thoni56/c-xrefactory
The goal of the GitHub fork is to refactor c-xrefactory itself, add a test suite, and try to document the original source code (which is rather obscure). Maybe, in the future, also convert it into an LSP C language server and refactoring tool.
C-xrefactory works on Emacs; setup scripts and instructions can be found at the repository. Windows users can run it via WSL/WSL2.
You could consider coding a GCC plugin or a MELT extension (MELT is a domain-specific language for extending GCC) for your needs.
However, such an approach would take some time, because you'll need to understand some of GCC's internals.
For Windows only, and not FOSS but you said "any direction..."
Our DMS Software Reengineering Toolkit with its C Front End can apply transformations to C source code. DMS can be configured to carry out custom, complex, reliable transformations, although the configuration isn't as easy as typing a command like "refactor frazzle by doobaz".
One of the principal stumbling blocks is still the preprocessor. DMS can transform code that has preprocessor directives in typical places (around statements, expressions, if/for/while loop heads, declarations, etc.), but other "unstructured conditionals" give it trouble. You can run DMS by expanding the preprocessor directives out of existence, or, more importantly, by expanding out only the ones that give it trouble, but mostly people don't like this because they prefer to keep their preprocessor directives. So it isn't perfect.
[Another answer suggested Coccinelle, which looks pretty good from my point of view. As far as I know, it doesn't handle preprocessor directives at all; I could be wrong, and it might handle some cases as DMS does, but I'm sure it can't handle all the cases.]
You don't want to consider rolling your own. Building a transformation/refactoring tool is much harder than you might guess having never tried it. You need full, accurate parsers for the (C) dialect of interest and just that is pretty hard to get right. You need a preprocessor, symbol tables, flow analysis, transformation, code regeneration machinery, ... this stuff takes years of effort to build and get right. Trust me, been there, done that.
I'm really sorry if this sounds kinda dumb. I just finished reading K&R and I worked on some of the exercises. This summer, for my project, I'm thinking of re-implementing a linux utility to expand my understanding of C further so I downloaded the source for GNU tar and sed as they both seem interesting. However, I'm having trouble understanding where it starts, where's the main implementation, where all the weird macros came from, etc.
I have a lot of time so that's not really an issue. Am I supposed to familiarize myself with the GNU toolchain (ie. make, binutils, ..) first in order to understand the programs? Or maybe I should start with something a bit smaller (if there's such a thing) ?
I have little bit of experience with Java, C++ and python if that matters.
Thanks!
The GNU programs are big and complicated. The size of GNU Hello World shows that even the simplest GNU project needs a lot of code and configuration around it.
The autotools are hard to understand for a beginner, but you don't need to understand them to read the code. Even if you modify the code, most of the time you can simply run make to compile your changes.
To read code, you need a good editor (Vim, Emacs) or an IDE (Eclipse) and some tools to navigate through the source. The tar project contains a src directory; that is a good place to start. A program always starts at the main function, so do
grep main *.c
or use your IDE to search for this function. It is in tar.c. Now, skip all the initialization stuff, until
/* Main command execution. */
There, you see a switch for subcommands. If you pass -x it does this, if you pass -c it does that, etc. This is the branching structure for those commands. If you want to know what these macros are, run
grep EXTRACT_SUBCOMMAND *.h
There you can see that they are listed in common.h.
Below EXTRACT_SUBCOMMAND you see something funny:
read_and (extract_archive);
The definition of read_and() (again obtained with grep):
read_and (void (*do_something) (void))
The single parameter is a function pointer used as a callback, so read_and will presumably read something and then call the function extract_archive. Again, grep for it and you will see this:
if (prepare_to_extract (current_stat_info.file_name, typeflag, &fun))
  {
    if (fun && (*fun) (current_stat_info.file_name, typeflag)
        && backup_option)
      undo_last_backup ();
  }
else
  skip_member ();
Note that the real work happens when calling fun. fun is again a function pointer, which is set in prepare_to_extract. fun may point to extract_file, which does the actual writing.
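If the function-pointer indirection is new to you, here is a stripped-down, hypothetical sketch of the same pattern; the names read_and and extract_archive mirror tar's, but the bodies are invented purely for illustration:

#include <stdio.h>

/* Dispatcher: "read" something, then hand control to whatever callback was passed in. */
static void read_and(void (*do_something)(void))
{
    printf("reading archive header...\n");  /* stand-in for the real reading code */
    do_something();                         /* invoke the callback */
}

static void extract_archive(void)
{
    printf("extracting one member...\n");
}

int main(void)
{
    read_and(extract_archive);  /* same shape as the call you saw in tar.c */
    return 0;
}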
I hope this walks you a good way through it and shows you how I navigate through source code. Feel free to contact me if you have related questions.
The problem with programs like tar and sed is twofold (this is just my opinion, of course!). First of all, they're both really old. That means they've had multiple people maintain them over the years, with different coding styles and different personalities. For GNU utilities, it's usually pretty good, because they usually enforce a reasonably consistent coding style, but it's still an issue. The other problem is that they're unbelievably portable. Usually "portability" is seen as a good thing, but when taken to extremes, it means your codebase ends up full of little hacks and tricks to work around obscure bugs and corner cases in particular pieces of hardware and systems. And for programs as widely ported as tar and sed, that means there are a lot of corner cases and obscure hardware/compilers/OSes to take into account.
If you want to learn C, then I would say the best place to start is not trying to study code that others have written. Rather, try to write code yourself. If you really want to start with an existing codebase, choose one that's being actively maintained where you can see the changes that other people are making as they make them, follow along in the discussions on the mailing lists and so on.
With well-established programs like tar and sed, you see the result of the discussions that would've happened, but you can't see how software design decisions and changes are being made in real-time. That can only happen with actively-maintained software.
That's just my opinion of course, and you can take it with a grain of salt if you like :)
Why not download the source of the coreutils (http://ftp.gnu.org/gnu/coreutils/) and take a look at tools like yes? Less than 100 lines of C code and a fully functional, useful and really basic piece of GNU software.
GNU Hello is probably the smallest, simplest GNU program and is easy to understand.
I know it can sometimes be a mess to navigate through C code, especially if you're not familiar with it. I suggest you use a tool that will help you browse through the functions, symbols, macros, etc. Then look for the main() function.
You need to familiarize yourself with the tools, of course, but you don't need to become an expert.
Learn how to use grep if you don't know it already and use it to search for the main function and everything else that interests you. You might also want to use code browsing tools like ctags or cscope which can also integrate with vim and emacs or use an IDE if you like that better.
I suggest using ctags or cscope for browsing. You can use them with vim/emacs. They are widely used in the open-source world.
They should be in the repositories of every major Linux distribution.
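For example, assuming the tools are installed and you are at the top of the source tree:
ctags -R .      # build a tags file for the whole tree
cscope -R -b    # build a cscope cross-reference database
vim -t main     # open vim directly at the definition of main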
Making sense of some code which uses a lot of macros, utility functions, etc, can be hard. To better browse the code of a random C or C++ software, I suggest this approach, which is what I generally use:
Install Qt development tools and Qt Creator
Download the sources you want to inspect, and set them up for compilation (usually just ./configure for GNU stuff).
Run qmake -project in the root of the source directory to generate a Qt .pro file for Qt Creator (see the command sketch after these steps).
Open the .pro file in Qt Creator (do not use shadow build, when it asks).
Just to be safe, in Qt Creator Projects view, remove the default build steps. The .pro file is just for navigation inside Qt Creator.
Optional: set up custom build and run steps, if you want to build and run/debug under Qt Creator. Not needed for navigation only.
Use Qt Creator to browse the code. Note especially the locator (kb shortcut Ctrl+K) to find stuff by name, and "follow symbol under cursor" (kb shortcut F2), and "find usages" (kb shortcut Ctrl-Shift-U).
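In terms of shell commands, steps 2 and 3 above boil down to roughly this (the directory name is only an example):
cd sed-4.2/      # or whatever source tree you are inspecting
./configure      # typical for GNU projects
qmake -project   # writes a .pro file that Qt Creator can open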
I had to take a look at "sed" just to see what the problem was; it shouldn't be that big. I looked, and I see what the issue is, and I feel like Charlton Heston catching first sight of a broken statue on the beach. All of what I'm about to describe for "sed" might also apply to "tar". But I haven't looked at it (yet).
A lot of GNU code got seriously grunged up - to the point of unmaintainable morbid legacy - for reasons I don't know. I don't know exactly when it happened, maybe late 1990's or early 2000's, but it was like someone flipped a switch and suddenly, all the nice modular mostly self-contained code widgets got massively grunged with all sorts of extraneous entanglements having little or no connection to what the application itself was trying to do.
In your case, "sed": an entire library got (needlessly) dragged in with the application. This was the case at least as early as version 4.2 (the last version predating your query), probably before that - I'd have to check.
Another thing that got grunged up was the build system (again) to the point of unmaintainability.
So, you're really talking about legacy rescue here.
My advice ... which is generic for any codebase that's been around a long time ... is to dig as deep as you can and go back to its earliest forms first; and to branch out and look at other "sed"'s - like those in the UNIX archive.
https://www.tuhs.org/Archive/
or in the BSD archive:
https://github.com/freebsd
https://github.com/weiss/original-bsd
(the second one goes deeper into early BSD in its earlier commits.)
Many of the "sed"'s on the GNU page - but not all of them - may be found under "Downloads" as a link "mirrors" on the GNU sed page:
https://www.gnu.org/software/sed/
Version 1.18 is still intact. Version 1.17 is implicitly intact, since there is a 1.17-to-1.18 diff present there. Neither version has all the extra stuff piled on top of it. It's more representative of what GNU software looked like before becoming knotted up with all the entanglements.
It's actually pretty small - only 8863 lines for the *.c and *.h files, in all. Start with that.
For me the process of analysis of any codebase is destructive of the original and always entails a massive amount of refactoring and re-engineering; and simplification coming from just writing it better and more natively, while yet keeping or increasing its functionality. Almost always, it is written by people who only have a few years' experience (by which I mean: less than 20 years, for instance) and have thus not acquired full-fledged native fluency in the language, nor the breadth of background to be able to program well.
For this, if you do the same, it's strongly advised that you have some kind of test suite already in place or added. There's one in the version 4.2 software, for instance, though it may be stress-testing new capabilities added between 1.18 and 4.2. Just be aware of that. (So it might require reducing the test suite to fit 1.18.) Every change you make has to be validated by whatever tests you have in your suite.
You need to have native fluency in the language ... or else the willingness and ability to acquire it by carrying out the exercise and others like it. If you don't have enough years behind you, you're going to hit a soft wall. The deeper you go, the harder it might be to move forward. That's an indication that you're not experienced enough yet, and that you don't have enough breadth. So, this exercise then becomes part of your learning experience, and you'll just have to plod through.
Because of how early the first versions date from, you will have to do some rewriting anyhow, just to bring the code up to standard. Later versions can be used as a guide for this process. At a bare minimum, it should be brought up to C99, as this is virtually mandated as part of POSIX. In other words, you should be at least as up to date as the present century!
Just the challenge of getting it to be functional will be exercise enough. You'll learn a lot of what's in it, just by doing that. The process of getting it to be operational is establishing a "baseline". Once you do that, you have your own version, and you can start with the "analysis".
Once a baseline is established, then you can proceed full throttle forward with refactoring and re-engineering. The test suite helps to provide cover against stumbles and inserted errors. You should keep all the versions that you have (re)made in a local repository so that you can jump back to earlier ones, in case you need to track down the sudden emergence of test failures or other bugs. Some bugs, you may find, were rooted all the way back in the beginning (thus: the discovery of hidden bugs).
After you have the baseline (re)written to your satisfaction, then you can proceed to layer in the subsequent versions. On GNU's archive, 1.18 jumps straight to 2.05. You'll have to make a "diff" between the two to see where all the changes were, and then graft them into your version of 1.18 to get your version of 2.05. This will help you better understand both the issues that the changes made addressed, and what changes were made.
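Mechanically, that layering step is just diff and patch; the version numbers below match GNU's archive, and the directory names are placeholders for wherever you unpacked things:
diff -ru sed-1.18 sed-2.05 > 1.18-to-2.05.diff   # capture everything that changed
cd my-sed-baseline                               # your reworked 1.18 tree
patch -p1 < ../1.18-to-2.05.diff                 # graft the changes in; resolve rejects by hand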
At some point you're going to hit GNU's Grunge Wall. Version 2.05 jumped straight to 3.01 in GNU's historical archive. Some entanglements started slipping in with version 3.01. So, it's a soft wall we have here. But there's also an early test suite with 3.01, which you should use with 1.18, instead of 4.2's test suite.
When you hit the Grunge Wall, you'll see directly what the entanglements were, and you'll have to decide whether to go along for the ride or cast them aside. I can't tell you which direction is the rabbit hole, except that SED has been perfectly fine for a long time, most or all of it is what is listed in and mandated by the POSIX standard (even the current one), and what's there before version 3 serves that end.
I ran diffs. Between 2.05 and 3.01, the diff file is 5000 lines. OK. That's (mostly) fine and is natural for code that's in development, but some of that may be coming from the soft Grunge Wall. Running a diff of 3.01 against 4.2 yields a diff file that is over 60000 lines. You need only ask yourself: how can a program that's under 10000 lines - and that abides by an international standard (POSIX) - produce 60000 lines of differences? The answer is: that's what we call bloat. So, between 3.01 and 4.2, you're witnessing a problem that is very common to code bases: the rise of bloat.
So, that pretty much tells you which direction ("go along for the ride" versus "cast it aside") is the rabbit hole. I'd probably just stick with 3.01, and do a cursory review of the differences between 3.01 and 4.2 and of the change logs to get an overview of what the changes were, and just leave it at that, except maybe to find a different way to write in what they thought was necessary to change, if the reason for it was valid.
I've done legacy rescue before, before the term "legacy" was even in most people's vocabulary and am quick to recognize the hallmark signs of it. This is the kind of process one might go through.
We've seen it happen with some large codebases already. In effect, the superseding of X11 by Wayland was a massive exercise in legacy rescue. It's also possible that the ongoing superseding of GNU's gcc by clang may be considered an instance of that.
I'm working on a project where I'm coding in C in a UNIX environment. I've been using the lint tool to check my source code. Lint has been around a long time (since 1979); can anyone suggest a more recent code analysis tool I could use? Preferably a tool that is free.
Don't overlook the compiler itself. Read the compiler's documentation and find all the warnings and errors it can provide, and then enable as many as make sense for you.
Also make sure to tell your compiler to treat warnings as errors so you're forced to fix them right away (-Werror on gcc).
By the way, don't be fooled: -Wall on gcc does not enable all warnings.
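For gcc, a reasonable starting point (add or drop warnings to taste; the file name is a placeholder) is something like:
gcc -Wall -Wextra -Werror -c mymodule.c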
You may want to check valgrind (free!) — it "automatically detect[s] many memory management and threading bugs, and profile[s] your programs in detail." It isn't a static checker, but it's a great tool!
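A typical run against an existing test binary looks like this (myprog is a placeholder):
valgrind --leak-check=full ./myprog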
For C code, you should definitely use Flexelint. I used it for nearly 15 years and swear by it. One of the really great features it has is that warnings can be selectively turned off and on via comments in the code ("/*lint -e123 */"). This turned out to be a powerful documentation tool when you wanted to do something out of the ordinary: "I am turning off warning X; therefore, there is some good reason I'm doing X."
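A sketch of how that looks in practice; message 123 is just the example number from above, and the function name is invented for illustration:
/*lint -e123 */   /* message 123 off: doing X deliberately here, for reason Y */
do_something_unusual();
/*lint +e123 */   /* message 123 back on for the rest of the file */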
For anybody into interesting C/C++ questions, look at some of their examples on their site and see if you can figure out the bugs without looking at the hints.
I've heard good things about the Clang Static Analyzer, which IIRC uses LLVM as its backend. If that's implemented on your platform, it might be a good choice.
From what I understand, it does a bit more than just syntax analysis. "Automatic Bug Finding", for instance.
You can use cppcheck. It is an easy-to-use static code analysis tool. For example:
cppcheck --enable=all .
will check all C/C++ files under the current folder.
I recently compiled a list of all the static analysis tools I had at my disposal, I am still in the process of evaluating them all. Note, these are mostly security analysis tools.
splint
RATS
SMATCH
Uno
We've been using Coverity Prevent to check our C++ source code.
It's not a free tool (although I believe they offer free scanning for open source projects), but it's one of the best static analysis tools you'll find. I've heard it's even more impressive on C than on C++, but it's helped us avoid quite a number of bugs so far.
Lint-like tools generally suffer from a "false alarm" problem: they report a lot more issues than really exist. If the proportion of genuinely-useful warnings is too low, the user learns to just ignore the tool. More modern tools expend some effort to focus on the most likely/interesting warnings.
PC-lint/Flexelint are very powerful and useful static analysis tools, and highly configurable, though sadly not free.
When first using a tool like this, they can produce huge numbers of warnings, which can make it hard to differentiate between major and minor ones. Therefore, it is best to start using the tool on your code as early in the project as possible, and then to run it on your code as often as possible, so that you can deal with new warnings as they come up.
With continual use like this, you soon learn how to write your code in a way which conforms to the rules applied by the tool.
Because of this, I prefer tools like Lint which run relatively quickly, and so encourage continual use, rather than the more cumbersome tools which you may end up using less often, if at all.
You can try CppDepend, a pretty complete static analyzer available on Windows and Linux, through a VS plugin, IDE, or the command line; it's free for open-source contributors.
You might find the Uno tool useful. It's one of the few free non-toy options. It differs from lint, Flexelint, etc. in focusing on a small number of "semantic" errors (null pointer derefs, out-of-bounds array indices, and use of uninitialized variables). It also allows user-defined checks, like lock-unlock discipline.
I'm working towards a public release of a successor tool, Orion (CONTENT NOT AVAILABLE ANYMORE)
lint is constantly updated... so why would you want a more recent one?
BTW, Flexelint is lint.
G'day,
I totally agree with the suggestions to read and digest what the compiler is telling you after setting -Wall.
A good static analysis tool for security is Flawfinder, written by David Wheeler. It does a good job of looking for various security exploits.
However, it doesn't replace having a knowledgeable someone read through your code. As David says on his web page, "A fool with a tool is still a fool!"
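Typical usage is simply to point it at a source tree (src/ is a placeholder):
flawfinder src/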
cheers,
Rob
I've found that it's generally best to use multiple static analysis tools to find bugs. Every tool is designed differently, and they can find very different things from each other.
There are some good discussions in some of the talks here. It's from a conference held by the US Department of Homeland Security on static analysis.
Sparse is a computer software tool, already available on Linux, designed to find possible coding faults in the Linux kernel.
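Inside a kernel tree, sparse is normally run through the build system's C= switch (assuming sparse is installed):
make C=1    # check files that are about to be recompiled
make C=2    # check all source files, even ones that are already compiled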
There are two active projects at the Linux Verification Center aimed at improving the quality of loadable kernel modules.
Linux Driver Verification (LDV) - a comprehensive toolset for static source code verification of Linux device drivers.
KEDR Framework - an extensible framework for dynamic analysis and verification of kernel modules.
Another ongoing project is Linux File System Verification that aims to develop a dedicated toolset for verification of Linux file system implementations.
There is a "-Weffc++" option for gcc which according to the Mac OS X man page will:
Warn about violations of the following style guidelines from Scott Meyers' Effective C++ book:
[snip]
I know you asked about C, but this is the closest I know of.
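For what it's worth, it is enabled like any other warning flag (the file name is just an example):
g++ -Weffc++ -c widget.cpp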