C99 mixed declarations and code in open source projects?

Why are C99 mixed declarations and code still not used in open source C projects like the Linux kernel or GNOME?
I really like mixed declarations and code, since it makes the code more readable and prevents hard-to-see bugs by restricting the scope of variables to the narrowest possible. This is what Google recommends for C++.
For example, Linux requires at least GCC 3.2, and GCC 3.1 already supports C99 mixed declarations and code.
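To make the question concrete, here is a minimal sketch (my own illustration, not code from any of these projects) of the C99 style I mean:
    int sum_doubled(const int *data, int n)  /* hypothetical function */
    {
        int sum = 0;
        for (int i = 0; i < n; i++) {        /* i is scoped to the loop (C99) */
            int scaled = data[i] * 2;        /* declared exactly where it is needed */
            sum += scaled;
        }
        return sum;                          /* i and scaled are no longer visible here */
    }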

You don't need mixed declarations and code to limit scope. You can do:
{
    int c;
    c = 1;
    {
        int d = c + 1;
    }
}
in C89. As for why these projects haven't used mixed declarations (assuming this is true), it's most likely a case of "If it ain't broke don't fix it."

This is an old question, but I'm going to suggest that inertia is the reason most of these projects still use ANSI C declaration rules.
However there are a number of other possibilities, ranging from valid to ridiculous:
Portability. Many open source projects work under the assumption that pedantic ANSI C is the most portable way to write software.
Age. Many of these projects predate the C99 spec and the authors may prefer a consistent coding style.
Ignorance. The programmers' habits predate C99 and they are unaware of the benefits of mixed declarations and code. (Alternate interpretation: developers are fully aware of the potential tradeoffs and decide that mixed declarations and statements are not worth the effort. I highly disagree, but it's rare that two programmers will agree on anything.)
FUD. Programmers view mixed declarations and code as a "C++ism" and dislike it for that reason.

There is little reason to rewrite the Linux kernel to make cosmetic changes that offer no performance gains.
If the code base is working, why change it for cosmetic reasons?

There is no benefit. Declaring all variables at the beginning of the function (Pascal-like) is much clearer. In C89 you can also declare variables at the beginning of each scope (inside loops, for example), which is both practical and concise.
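For instance, the C89 style being described looks like this (a sketch; use(), data and n are placeholders):
    int i;
    for (i = 0; i < n; i++) {
        int tmp;            /* C89 allows declarations at the start of any block */
        tmp = data[i] * 2;
        use(tmp);
    }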

I don't remember any interdictions against this in the style guide for kernel code. However, it does say that functions should be as small as possible, and only do one thing. This would explain why a mixture of declarations and code is rare.
In a small function, declaring variables at the start of scope acts as a sort of Introit, telling you something about what's coming soon after. In this case the movement of the variable declaration is so limited that it would likely either have no effect, or serve to hide some information about the functionality by pushing the barker into the crowd, so to speak. There is a reason that the arrival of a king was declared before he entered a room.
OTOH, a function which must mix variables and code to be readable is probably too big. This is one of the signs (along with too-nested blocks, inline comments and other things) that some sections of a function need to be abstracted into separate functions (and declared static, so the optimizer can inline them).
Another reason to keep declarations at the beginning of the functions: should you need to reorder the execution of statements in the code, you may move a variable out of its scope without realizing it, since the scope of a variable declared in the middle of code is not evident in the indentation (unless you use a block to show the scope). This is easily fixed, so it's just an annoyance, but new code often undergoes this kind of transformation, and annoyance can be cumulative.
And another reason: you might be tempted to declare a variable to take the error return code from a function, like so:
void_func();
int ret = func_may_fail();
if (ret) { handle_fail(ret); }
Perfectly reasonable thing to do. But:
void_func();
int ret = func_may_fail();
if (ret) { handle_fail(ret); }
....
int ret = another_func_may_fail();
if (ret) { handle_other_fail(ret); }
Oops! ret is defined twice. "So? Remove the second declaration," you say. But this makes the code asymmetric, and you end up with more refactoring limitations.
Of course, I mix declarations and code myself; no reason to be dogmatic about it (or else your karma may run over your dogma :-). But you should know what the concomitant problems are.

Maybe it's not needed, maybe the separation is good? I do it in C++, which has this feature as well.

There is no reason to change working code like this, and C99 is still not widely supported by compilers. It is mostly about portability.

Related

Why was mixing declarations and code forbidden up until C99?

I have recently become a teaching assistant for a university course which primarily teaches C. The course standardized on C90, mostly due to widespread compiler support. One of the very confusing concepts to C newbies with previous Java experience is the rule that variable declarations and code may not be intermingled within a block (compound statement).
This limitation was finally lifted with C99, but I wonder: does anybody know why it was there in the first place? Does it simplify variable scope analysis? Does it allow the programmer to specify at which points of program execution the stack should grow for new variables?
I assume the language designers wouldn't have added such a limitation if it had absolutely no purpose at all.
In the very beginning of C, available memory and CPU resources were really scarce, so the language had to compile really fast with minimal memory requirements.
Therefore the C language was designed to require only a very simple compiler which compiles fast. This in turn led to the "single-pass compiler" concept: the compiler reads the source file and translates everything into assembler code as soon as possible, usually while reading the source file. For example: when the compiler reads the definition of a global variable, the appropriate code is emitted immediately.
This trait is visible in C up until today:
C requires "forward declarations" of everything. A multi-pass compiler could look ahead and deduce the declarations of variables and functions in the same file by itself (see the sketch after this list).
This in turn makes the *.h files necessary.
When compiling a function, the layout of the stack frame must be computed as soon as possible; otherwise the compiler would have to do several passes over the function body.
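A small sketch of what the single-pass model forces on you (my own illustration; helper is an invented name):
    /* Without the forward declaration, a single-pass compiler reaches the
       call in main() before it has ever seen helper(). */
    int helper(int x);              /* forward declaration */

    int main(void)
    {
        return helper(42);
    }

    int helper(int x)
    {
        return x + 1;
    }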
Nowadays no serious C compiler is still "single pass", because many important optimizations cannot be done within one pass. A little more can be found on Wikipedia.
The standards body took quite some time to relax that "single-pass" constraint with regard to the function body. I assume other things were more important.
It was that way because it had always been done that way, it made writing compilers a little easier, and nobody had really thought of doing it any other way. In time people realised that it was more important to favour making life easier for language users rather than compiler writers.
I assume the language designers wouldn't have added such a limitation if it had absolutely no purpose at all.
Don't assume that the language designers set out to restrict the language. Often restrictions like this arise by chance and circumstance.
I guess it should be easier for a non-optimising compiler to produce efficient code this way:
int a;
int b;
int c;
...
Although 3 separate variables are declared, the stack pointer can be adjusted just once, without optimising strategies such as reordering, etc.
Compare this to:
int a;
foo();
int b;
bar();
int c;
To increment the stack pointer just once, this requires a kind of optimisation, although not a very advanced one.
Moreover, as a stylistic issue, the first approach encourages a more disciplined way of coding (no wonder that Pascal too enforces this) by letting you see all the local variables in one place and possibly inspect them together as a whole. This provides a clearer separation between code and data.
Requiring that variable declarations appear at the start of a compound statement did not impair the expressiveness of C89. Anything that one could legitimately do using a mid-block declaration could be done just as well by adding an open brace before the declaration and doubling up the closing brace of the enclosing block. While such a requirement may sometimes have cluttered source code with extra opening and closing braces, such braces would not have been just noise; they would have marked the beginning and end of the variables' scopes.
Consider the following two code examples:
{
    do_something_1();
    {
        int foo;
        foo = something1();
        if (foo) do_something_1(foo);
    }
    {
        int bar;
        bar = something2();
        if (bar) do_something_2(bar);
    }
    {
        int boz;
        boz = something3();
        if (boz) do_something_3(boz);
    }
}
and
{
    do_something_1();
    int foo;
    foo = something1();
    if (foo) do_something_1(foo);
    int bar;
    bar = something2();
    if (bar) do_something_2(bar);
    int boz;
    boz = something3();
    if (boz) do_something_3(boz);
}
From a run-time perspective, most modern compilers probably wouldn't care whether foo is syntactically in scope during the execution of do_something_3(), since they can determine that any value it held before that statement would not be used after. On the other hand, encouraging programmers to write declarations in a way which would generate sub-optimal code in the absence of an optimizing compiler is hardly an appealing concept.
Further, while handling the simpler cases involving intermixed variable declarations would not be difficult (even a 1970's compiler could have done it, if the authors wanted to allow such constructs), things become more complicated if the block which contains intermixed declarations also contains any goto or case labels. The creators of C probably thought allowing intermixing of variable declarations and other statements would complicate the standards too much to be worth the benefit.
Back in the days of C's youth, when Dennis Ritchie worked on it, computers (the PDP-11, for example) had very limited memory (e.g. 64K words), and the compiler had to be small, so it could only optimize very few things, very simply. And at that time (I coded in C on a Sun-4/110 in the 1986-89 era), declaring register variables was really useful for the compiler.
Today's compilers are much more complex. For example, a recent version of GCC (4.6) has more than 5 or 10 million lines of source code (depending upon how you measure it), and does a great many optimizations which did not exist when the first C compilers appeared.
And today's processors are also very different (you cannot assume that today's machines are just like machines from the 1980s, only thousands of times faster and with thousands of times more RAM and disk). Today the memory hierarchy is very important: cache misses are what the processor spends most of its time on (waiting for data from RAM). But in the 1980s access to memory was almost as fast (or as slow, by current standards) as the execution of a single machine instruction. This is completely false today: to read your RAM module, your processor may have to wait several hundred nanoseconds, while for data in the L1 cache it can execute more than one instruction per nanosecond.
So don't think of C as a language very close to the hardware: this was true in the 1980s, but it is false today.
Oh, but you could (in a way) mix declarations and code; declaring new variables was just limited to the start of a block. For example, the following is valid C89 code:
void f()
{
    int a;
    do_something();
    {
        int b = do_something_else();
    }
}

Managing without Objects in C - And, why can I declare variables anywhere in a function in C?

Hello, everyone. I actually have two questions, somewhat related.
Question #1: Why is gcc letting me declare variables after action statements? I thought the C89 standard did not allow this. (GCC Version: 4.4.3) It even happens when I explicitly use --std=c89 on the compile line. I know that most compilers implement things that are non-standard, i.e. C compilers allowing // comments, when the standard does not specify that. I'd like to learn just the standard, so that if I ever need to use just the standard, I don't snag on things like this.
Question #2: How do you cope without objects in C? I program as a hobby, and I have not yet used a language that does not have objects (a.k.a. OO concepts?). I already know some C++, and I'd like to learn how to use C on its own. Supposedly, one way is to make a POD struct and write functions like StructName_constructor(), StructName_doSomething(), etc., and pass the struct instance to each function. Is this the 'proper' way, or am I totally off?
EDIT: Due to some minor confusion, I am defining what my second question is more clearly: I am not asking How do I use Objects in C? I am asking How do you manage without objects in C?, a.k.a. how do you accomplish things without objects, where you'd normally use objects?
In advance, thanks a lot. I've never used a language without OOP! :)
EDIT: As per request, here is an example of the variable declaration issue:
/* includes, or whatever */
int main(int argc, char *argv[]) {
    int myInt = 5;
    printf("myInt is %d\n", myInt);
    int test = 4; /* This does not result in a compile error */
    printf("Test is %d\n", test);
    return 0;
}
C89 doesn't allow this, but C99 does. Although it's taken a long time to catch on, some compilers (including gcc) are finally starting to implement C99 features. (gcc also accepts mixed declarations with -std=c89 unless you add -pedantic, which is why you see no error.)
IMO, if you want to use OOP, you should probably stick to C++ or try out Objective C. Trying to reinvent OOP built on top of C again just doesn't make much sense.
If you insist on doing it anyway, yes, you can pass a pointer to a struct as an imitation of this -- but it's still not a good idea.
It does often make sense to pass (pointers to) structs around when you need to operate on a data structure. I would not, however, advise working very hard at grouping functions together and having them all take a pointer to a struct as their first parameter, just because that's how other languages happen to implement things.
If you happen to have a number of functions that all operate on/with a particular struct, and it really makes sense for them to all receive a pointer to that struct as their first parameter, that's great -- but don't feel obliged to force it just because C++ happens to do things that way.
Edit: As far as how you manage without objects: well, at least when I'm writing C, I tend to operate on individual characters more often. For what it's worth, in C++ I typically end up with a few relatively long lines of code; in C, I tend toward a lot of short lines instead.
There is more separation between the code and data, but to some extent they're still coupled anyway -- a binary tree (for example) still needs code to insert nodes, delete nodes, walk the tree, etc. Likewise, the code for those operations needs to know about the layout of the structure, and the names given to the pointers and such.
Personally, I tend more toward using a common naming convention in my C code, so (for a few examples) the pointers to subtrees in a binary tree are always just named left and right. If I use a linked list (rare) the pointer to the next node is always named next (and if it's doubly-linked, the other is prev). This helps a lot with being able to write code without having to spend a lot of time looking up a structure definition to figure out what name I used for something this time.
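For instance, that naming convention might look like this (my own sketch):
    struct tree_node {
        struct tree_node *left;    /* subtree pointers are always 'left' and 'right' */
        struct tree_node *right;
        int key;
    };

    struct list_node {
        struct list_node *next;    /* the next-node pointer is always 'next' */
        struct list_node *prev;    /* and 'prev' when doubly linked */
        int value;
    };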
Question #1: I don't know why there is no error, but you are right: variables have to be declared at the beginning of a block. The good thing is you can open blocks anywhere you like :). E.g.:
{
    int some_local_var;
}
Question #2: Actually, programming C without inheritance is sometimes quite annoying, but there are possibilities to have OOP to some degree. For example, look at the GTK source code and you will find some examples.
You are right, functions like the ones you have shown are common, but the constructor is commonly divided into an allocation function and an initialization function. E.g.:
someStruct* someStruct_alloc() { return (someStruct*)malloc(sizeof(someStruct)); }
void someStruct_init(someStruct* this, int arg1, int arg2) { /* ... */ }
In some libraries, I have even seen some sort of polymorphism, where function pointers are stored within the struct (and have to be set in the initializing function, of course). This results in a C++-like API:
someStruct* str = someStruct_alloc();
someStruct_init(str);
str->someFunc(10, 20, 30);
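Fleshed out, that pattern might look like the following sketch (invented names; note that in plain C the object usually has to be passed explicitly, since there is no implicit this):
    #include <stdlib.h>

    typedef struct someStruct someStruct;
    struct someStruct {
        int data;
        int (*someFunc)(someStruct *self, int a, int b, int c);  /* "method" slot */
    };

    static int someFunc_impl(someStruct *self, int a, int b, int c)
    {
        return self->data + a + b + c;
    }

    void someStruct_init(someStruct *s)
    {
        s->data = 0;
        s->someFunc = someFunc_impl;     /* wired up during initialization */
    }

    /* usage: str->someFunc(str, 10, 20, 30); */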
Regarding OOP in C, have you looked at some of the topics on SO? For instance, Can you write object oriented code in C?.
I can't put my finger on an example, but I think an OO-like discipline is enforced in Linux kernel programming as well.
In terms of learning how C works, as opposed to OO in C++, you might find it easier to take a short course in some other language that doesn't have an OO derivative -- say, Modula-2 (one of my favorites) or even BASIC (if you can still find a real BASIC implementation -- last time I wrote BASIC code it was with the QBASIC that came with DOS 5.0, later compiled in full Quick BASIC).
The methods you use to get things done in Modula-2 or Pascal (barring the strong typing, which protects against certain types of errors but makes it more complicated to do certain things) are exactly those used in non-OO C, and working in a language with different syntax might (probably will, IMO) make it easier to learn the concepts without your "programming reflexes" kicking in and trying to do OO operations in a nearly-familiar language.

Why are nested functions not supported by the C standard?

It doesn't seem like it would be too hard to implement in assembly.
gcc also has a flag (-fnested-functions) to enable their use.
It turns out they're not actually all that easy to implement properly.
Should an internal function have access to the containing scope's variables?
If not, there's no point in nesting it; just make it static (to limit visibility to the translation unit it's in) and add a comment saying "This is a helper function used only by myfunc()".
If you want access to the containing scope's variables, though, you're basically forcing it to generate closures (the alternative is restricting what you can do with nested functions enough to make them useless).
I think GCC actually handles this by generating (at runtime) a unique thunk (a "trampoline") for every invocation of the containing function, which sets up a context pointer and then calls the nested function. This ends up being a rather icky hack, and something that some perfectly reasonable implementations can't do (for example, on a system that forbids execution of writable memory, which a lot of modern OSs do for security reasons).
The only reasonable way to make it work in general is to force all function pointers to carry around a hidden context argument, and all functions to accept it (because in the general case you don't know when you call it whether it's a closure or an unclosed function). This is inappropriate to require in C for both technical and cultural reasons, so we're stuck with the option of either using explicit context pointers to fake a closure instead of nesting functions, or using a higher-level language that has the infrastructure needed to do it properly.
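Concretely, the "explicit context pointer" workaround mentioned above looks something like this (a sketch with invented names):
    struct counter_ctx {
        int x;                    /* the state a real closure would capture */
    };

    static int bar(void *ctx)     /* the context is passed explicitly */
    {
        struct counter_ctx *c = ctx;
        c->x = c->x + 1;
        return c->x;
    }

    void demo(void)
    {
        struct counter_ctx c = { 5 };
        bar(&c);                  /* returns 6 */
        bar(&c);                  /* returns 7; the caller owns the state's lifetime */
    }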
I'd like to quote something from the BDFL (Guido van Rossum):
This is because nested function definitions don't have access to the local variables of the surrounding block -- only to the globals of the containing module. This is done so that lookup of globals doesn't have to walk a chain of dictionaries -- as in C, there are just two nested scopes: locals and globals (and beyond this, built-ins). Therefore, nested functions have only a limited use. This was a deliberate decision, based upon experience with languages allowing arbitrary nesting such as Pascal and both Algols -- code with too many nested scopes is about as readable as code with too many GOTOs.
Emphasis is mine.
I believe he was referring to nested scope in Python (and as David points out in the comments, this was from 1993, and Python does support fully nested functions now) -- but I think the statement still applies.
The other part of it could have been closures.
If you have a function like this C-like code:
int (*foo(void))(void) {       /* foo returns a pointer to a function */
    int x = 5;
    int bar(void) {            /* nested function (a GNU C extension) */
        x = x + 1;
        return x;
    }
    return &bar;
}
If you use bar in a callback of some sort, what happens with x? This is well-defined in many newer, higher-level languages, but AFAIK there's no well-defined way to track that x in C -- does bar return 6 every time, or do successive calls to bar return incrementing values? That could have potentially added a whole new layer of complication to C's relatively simple definition.
See C FAQ 20.24 and the GCC manual for potential problems:
If you try to call the nested function through its address after the containing function has exited, all hell will break loose. If you try to call it after a containing scope level has exited, and if it refers to some of the variables that are no longer in scope, you may be lucky, but it's not wise to take the risk. If, however, the nested function does not refer to anything that has gone out of scope, you should be safe.
This is not really more severe than some other problematic parts of the C standard, so I'd say the reasons are mostly historical (C99 isn't really that different from K&R C feature-wise).
There are some cases where nested functions with lexical scope might be useful (consider a recursive inner function which doesn't need extra stack space for the variables in the outer scope, without the need for a static variable), but hopefully you can trust the compiler to correctly inline such functions, i.e. a solution with a separate function will just be more verbose.
Nested functions are a very delicate thing. Will you make them closures? If not, then they have no advantage to regular functions, since they can't access any local variables. If they do, then what do you do to stack-allocated variables? You have to put them somewhere else so that if you call the nested function later, the variable is still there. This means they'll take memory, so you have to allocate room for them on the heap. With no GC, this means that the programmer is now in charge of cleaning up the functions. Etc... C# does this, but they have a GC, and it's a considerably newer language than C.
It also wouldn't be too hard to add member functions to structs, but they are not in the standard either.
Features are not added to the C standard based solely on whether or not they are easy to implement. It's a combination of many other factors, including the point in time at which the standard was written and what was common/practical then.
One more reason: it is not at all clear that nested functions are valuable. Twenty-odd years ago I used to do large-scale programming and maintenance in (VAX) Pascal. We had lots of old code that made heavy use of nested functions. At first, I thought this was way cool (compared to the K&R C I had been working in before) and started doing it myself. After a while, I decided it was a disaster, and stopped.
The problem was that a function could have a great many variables in scope, counting the variables of all the functions in which it was nested. (Some old code had ten levels of nesting; five was quite common, and until I changed my mind I coded a few of the latter myself.) Variables in the nesting stack could have the same names, so that "inner" function local variables could mask variables of the same name in more "outer" functions. A local variable of a function, that in C-like languages is totally private to it, could be modified by a call to a nested function. The set of possible combinations of this jazz was near infinite, and a nightmare to comprehend when reading code.
So, I started calling this programming construct "semi-global variables" instead of "nested functions", and telling other people working on the code that the only thing worse than a global variable was a semi-global variable, and please do not create any more. I would have banned it from the language, if I could. Sadly, there was no such option for the compiler...
ANSI C has been established for 20 years. Perhaps between 1983 and 1989 the committee discussed it in the light of the state of compiler technology at the time, but if they did, their reasoning is lost in the dim and distant past.
I disagree with Dave Vandervies.
Defining a nested function is much better coding style than defining it in global scope, making it static and adding a comment saying "This is a helper function used only by myfunc()".
What if you needed a helper function for this helper function? Would you add a comment "This is a helper function for the first helper function used only by myfunc"? Where would you get the names for all those functions without completely polluting the namespace?
How confusing can code get?
But of course, there is the problem with how to deal with closuring, i.e. returning a pointer to a function that has access to variables defined in the function from which it is returned.
Either you don't allow references to local variables of the containing function in the contained one, and the nesting is just a scoping feature without much use, or you do. If you do, it is not such a simple feature: you have to be able to call a nested function from another one while accessing the correct data, and you also have to take recursive calls into account. That's not impossible: the techniques for that are well known and were well mastered when C was designed (Algol 60 already had the feature). But it complicates the run-time organization and the compiler, and prevents a simple mapping to assembly language (a function pointer must carry information about that; well, there are alternatives, such as the one gcc uses). It was out of scope for the system implementation language C was designed to be.

When is "inline" ineffective? (in C)

Some people love using the inline keyword in C, and put big functions in headers. When do you consider this to be ineffective? I consider it sometimes even annoying, because it is unusual.
My principle is that inline should be used for small functions accessed very frequently, or in order to have real type checking. Anyhow, my taste guides me, but I am not sure how best to explain the reasons why inline is not so useful for big functions.
In this question people suggest that the compiler can do a better job of guessing the right thing to do. That was also my assumption. When I try to use this argument, people reply that it does not work with functions coming from different object files. Well, I don't know (for example, with GCC).
Thanks for your answers!
inline does two things:
gives you an exemption from the "one definition rule" (see below). This always applies.
gives the compiler a hint to avoid a function call. The compiler is free to ignore this.
#1 can be very useful (e.g. put the definition in a header if it's short) even if #2 is ignored.
In practice compilers often do a better job of working out what to inline themselves (especially if profile guided optimisation is available).
[EDIT: Full References and relevant text]
The two points above both follow from the ISO/ANSI standard (ISO/IEC 9899:1999(E), commonly known as "C99").
In §6.9 "External Definition", paragraph 5:
An external definition is an external declaration that is also a definition of a function (other than an inline definition) or an object. If an identifier declared with external linkage is used in an expression (other than as part of the operand of a sizeof operator whose result is an integer constant), somewhere in the entire program there shall be exactly one external definition for the identifier; otherwise, there shall be no more than one.
While the equivalent definition in C++ is explicitly named the One Definition Rule (ODR), it serves the same purpose. Externals (i.e. not "static", which would make them local to a single Translation Unit, typically a single source file) can be defined only once, unless the identifier is a function and inline.
In §6.7.4, "Function Specifiers", the inline keyword is defined:
Making a function an inline function suggests that calls to the function be as fast as possible.[118] The extent to which such suggestions are effective is implementation-defined.
And the footnote (non-normative, but it provides clarification):
By using, for example, an alternative to the usual function call mechanism, such as ‘‘inline substitution’’. Inline substitution is not textual substitution, nor does it create a new function. Therefore, for example, the expansion of a macro used within the body of the function uses the definition it had at the point the function body appears, and not where the function is called; and identifiers refer to the declarations in scope where the body occurs. Likewise, the function has a single address, regardless of the number of inline definitions that occur in addition to the external definition.
Summary: what most users of C and C++ expect from inline is not what they get. Its apparent primary purpose, to avoid function-call overhead, is completely optional. But to allow separate compilation, a relaxation of the single-definition rule is required.
(All emphasis in the quotes from the standard.)
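For reference, the idiom these rules lead to in C99 looks like this (a sketch; clamp is an invented example, and note this differs from the older GNU89 inline semantics):
    /* util.h -- inline definition, visible in every translation unit */
    inline int clamp(int v, int lo, int hi)
    {
        return v < lo ? lo : (v > hi ? hi : v);
    }

    /* util.c -- exactly one TU provides the external definition */
    #include "util.h"
    extern inline int clamp(int v, int lo, int hi);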
EDIT 2: A few notes:
There are various restrictions on external inline functions. You cannot have a static variable in the function, and you cannot reference static TU scope objects/functions.
Just seen this on VC++'s "whole program optimisation", which is an example of a compiler doing its own inline thing, rather than the author.
The important thing about an inline declaration is that it doesn't necessarily do anything. A compiler is free, in many cases, to inline a function that is not declared inline, and to generate an ordinary call to a function that is declared inline.
Another reason why you shouldn't use inline for large functions is the case of libraries. Every time you change an inline function, you might lose ABI compatibility, because an application compiled against an older header still has the old version of the function inlined. If inline functions are used as typesafe macros, chances are good that the function never needs to be changed in the life cycle of the library. But for big functions this is hard to guarantee.
Of course, this argument only applies if the function is part of your public API.
An example to illustrate the benefits of inline. sinCos.h:
    #include <stdint.h>

    /* TWO_PI and PI_OVER_TWO are fixed-point constants assumed defined elsewhere */
    int16_t sinLUT[ TWO_PI ];

    static inline int16_t sin_LUT( int16_t x ) {
        return sinLUT[(uint16_t)x];
    }

    static inline int16_t cos_LUT( int16_t x ) {
        return sin_LUT( x + PI_OVER_TWO );
    }
When doing some heavy number crunching and you want to avoid wasting cycles on computing sin/cos you replace sin/cos with a LUT.
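The loop in question had roughly this shape (a hypothetical reconstruction; the real code isn't shown, and N_SAMPLES, in[] and phase_step are placeholders):
    int32_t acc = 0;
    uint16_t phase = 0;
    for (i = 0; i < N_SAMPLES; i++) {
        acc += in[i] * cos_LUT((int16_t)phase);  /* this call disqualifies the loop unless inlined */
        phase += phase_step;
    }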
When you compile without inline, the compiler will not optimize the loop, and the output .asm will show something along the lines of:
;*----------------------------------------------------------------------------*
;* SOFTWARE PIPELINE INFORMATION
;* Disqualified loop: Loop contains a call
;*----------------------------------------------------------------------------*
When you compile with inline, the compiler has knowledge of what happens in the loop and will optimize, because it knows exactly what is happening.
The output .asm will have an optimized "pipelined" loop (i.e. it will try to fully utilize all the processor's ALUs and keep the processor's pipeline full, without NOPs).
In this specific case, I was able to increase my performance by about 2x to 4x, which got me within what I needed for my real-time deadline.
p.s. I was working on a fixed point processor... and any floating point operations like sin/cos killed my performance.
Inline is ineffective when you use a pointer to the function.
Inline is effective in one case: when you've got a performance problem, ran your profiler with real data, and found the function call overhead for some small functions to be significant.
Outside of that, I can't imagine why you'd use it.
That's right. Using inline for big functions increases compile time and brings little extra performance to the application. Inline functions are meant to tell the compiler that a function is to be included without a call; such functions should be small code repeated many times. In other words: for big functions, the cost of making the call is negligible compared to the cost of the function body itself.
I mainly use inline functions as typesafe macros. There's been talk about adding support for link-time optimizations to GCC for quite some time, especially since LLVM came along. I don't know how much of it actually has been implemented yet, though.
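As an example of the typesafe-macro use just mentioned (my own sketch):
    #define MAX_MACRO(a, b) ((a) > (b) ? (a) : (b))  /* no type checking; may evaluate an argument twice */

    static inline int max_int(int a, int b)           /* arguments type-checked and evaluated exactly once */
    {
        return a > b ? a : b;
    }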
Personally I don't think you should ever inline, unless you have first run a profiler on your code and have proven that there is a significant bottleneck on that routine that can be partially alleviated by inlining.
This is yet another case of the Premature Optimization Knuth warned about.
Inline can be used for small and frequently used functions, such as getter and setter methods. For big functions it is not advisable to use inline, as it increases the executable size.
Also, for recursive functions, even if you mark them inline, the compiler will ignore it.
inline acts as a hint only.
It was added to the standard only in C99, so it works only with recent standards-compliant compilers.
Inline functions should be approximately 10 lines or less, give or take, depending on your compiler of choice.
You can tell your compiler that you want something inlined; it's up to the compiler to do so. There is no force-inline option that I know of which the compiler can't ignore. That is why you should look at the assembler output and see whether your compiler actually did inline the function, and if not, why not? Many compilers just silently say "screw you!" in that respect.
so if:
static inline unsigned int foo(const char *bar)
.. does not improve things over static int foo(), it's time to revisit your optimizations (and likely your loops) or argue with your compiler. Take special care to argue with your compiler first, not the people who develop it, or you're just in store for lots of unpleasant reading when you open your inbox the next day.
Meanwhile, when making something (or attempting to make something) inline, does doing so actually justify the bloat? Do you really want that function expanded every time it's called? Is the jump really so costly? Your compiler is usually correct 9 times out of 10; check the intermediate output (or asm dumps).

When to use function-like macros in C

I was reading some code written in C this evening, and at the top of the file was the function-like macro HASH:
#define HASH(fp) (((unsigned long)fp)%NHASH)
This left me wondering: why would somebody choose to implement a function this way, using a function-like macro instead of a regular vanilla C function? What are the advantages and disadvantages of each implementation?
Thanks a bunch!
Macros like that avoid the overhead of a function call.
It might not seem like much. But in your example, the macro turns into 1-2 machine language instructions, depending on your CPU:
Get the value of fp out of memory and put it in a register
Take the value in the register, do a modulus (%) calculation by a fixed value, and leave that in the same register
whereas the function equivalent would be a lot more machine language instructions, generally something like
Stick the value of fp on the stack
Call the function, which also puts the next (return) address on the stack
Maybe build a stack frame inside the function, depending on the CPU architecture and ABI convention
Get the value of fp off the stack and put it in a register
Take the value in the register, do a modulus (%) calculation by a fixed value, and leave that in the same register
Maybe take the value from the register and put it back on the stack, depending on CPU and ABI
If a stack frame was built, unwind it
Pop the return address off the stack and resume executing instructions there
A lot more code, eh? If you're doing something like rendering every one of the tens of thousands of pixels in a window in a GUI, things run an awful lot faster if you use the macro.
Personally, I prefer using C++ inline as being more readable and less error-prone, but inlines are also really more of a hint to the compiler which it doesn't have to take. Preprocessor macros are a sledge hammer the compiler can't argue with.
One important advantage of macro-based implementation is that it is not tied to any concrete parameter type. A function-like macro in C acts, in many respects, as a template function in C++ (templates in C++ were born as "more civilized" macros, BTW). In this particular case the argument of the macro has no concrete type. It might be absolutely anything that is convertible to type unsigned long. For example, if the user so pleases (and if they are willing to accept the implementation-defined consequences), they can pass pointer types to this macro.
Anyway, I have to admit that this macro is not the best example of the type-independent flexibility of macros, but in general that flexibility comes in handy quite often. Again, when certain functionality is implemented as a function, it is restricted to specific parameter types. In many cases, in order to apply a similar operation to different types it is necessary to provide several functions with different parameter types (and different names, since this is C), while the same can be done with just one function-like macro. For example, the macro
#define ABS(x) ((x) >= 0 ? (x) : -(x))
works with all arithmetic types, while a function-based implementation has to provide quite a few of them (I'm implying the standard abs, labs, llabs and fabs). (And yes, I'm aware of the traditionally mentioned dangers of such a macro.)
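For the record, the traditionally mentioned danger is double evaluation of the argument:
    int i = 5;
    int a = ABS(i++);   /* expands to ((i++) >= 0 ? (i++) : -(i++)): i is incremented twice */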
Macros are not perfect, but the popular maxim about "function-like macros no longer being necessary because of inline functions" is just plain nonsense. In order to fully replace function-like macros, C would need function templates (as in C++) or at least function overloading (as in C++ again). Without that, function-like macros are, and will remain, an extremely useful mainstream tool in C.
On one hand, macros are bad because they're handled by the preprocessor, which doesn't understand anything about the language and just does text replacement. They usually have plenty of limitations. I can't see any in the one above, but usually macros are ugly solutions.
On the other hand, they are at times even faster than a static inline method. I was heavily optimizing a short program and found that calling a static inline method takes about twice as much time (just overhead, not actual function body) as compared with a macro.
The most common (and most often wrong) reason people give for using macros (in "plain old C") is the efficiency argument. Using them for efficiency is fine if you have actually profiled your code and are optimizing a true bottleneck (or are writing a library function that might be a bottleneck for somebody someday). But most people who insist on using them have not actually analyzed anything and are just creating confusion where it adds no benefit.
Macros can also be used for some handy search-and-replace type substitutions which the regular C language is not capable of.
Some problems I have had in maintaining code written by macro abusers is that the macros can look quite like functions but do not show up in the symbol table, so it can be very annoying trying to trace them back to their origins in sprawling codebases (where is this thing defined?!). Writing macros in ALL CAPS is obviously helpful to future readers.
If they are more than fairly simple substitutions, they can also create some confusion if you have to step-trace through them with a debugger.
Your example is not really a function at all:
#define HASH(fp) (((unsigned long)fp)%NHASH)
// this is a cast ^^^^^^^^^^^^^^^
// this is your value 'fp' ^^
// this is a MOD operation ^^^^^^
I'd think this was just a way of writing more readable code, with the casting and mod operation wrapped into a single macro HASH(fp).
Now, if you decide to write a function for this, it would probably look like:
int hashThis(int fp)
{
    return ((fp) % NHASH);
}
Quite an overkill for a function, as it:
introduces a call point
introduces call-stack setup and restore
The C Preprocessor can be used to create inline functions. In your example, the code will appear to call the function HASH, but instead is just inline code.
The benefits of macro functions were eliminated when C++ introduced inline functions. Many older APIs like MFC and ATL still use macro functions to do preprocessor tricks, but they just leave the code convoluted and harder to read.
