Effect of splitting code into functions on running time - C

I am writing DSP code in C (Windows environment). The code will be modified by another engineer to run on a Cortex-M4. This engineer claims that, to reduce running time, many of the functions I have implemented should be merged into one function. I prefer to avoid that, for clarity and testability.
Does his claim make sense? If so, where can I read about it? Otherwise, can I show that he is wrong without comparing running times?

Does his claim make sense?
Depends on context. Modern compilers are perfectly able to inline function calls, but that usually requires the functions to be placed in the same translation unit (essentially the same .c file).
If your functions are in the same .c file, then their claim is wrong; if the functions are scattered across multiple files, then their claim is likely correct.
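As a minimal sketch (illustrative names, not from the question's code), two functions in the same .c file that the optimizer can merge on its own:

static int gain(int x)      /* helper in the same translation unit */
{
    return x * 3;
}

int process(int x)
{
    return gain(x) + 1;     /* at -O2 this typically compiles to x * 3 + 1, with no call */
}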
If it is, where I can read about it.
Function inlining has been around for some 30 years. C even added an inline keyword for it in 1999 (C++ had one earlier still), though during the 2000s compilers became smarter than programmers at determining when and what to inline. Nowadays, with modern compilers, inline is mostly considered obsolete.
Otherwise, can I show that he is wrong without a comparison of running time?
By disassembling the optimized code and checking whether any function calls remain. Still, function calls are relatively cheap on Cortex-M (unless there is a large number of parameters), so manually removing them would be a very tiny optimization.
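For example, assuming the GNU Arm toolchain, you can generate the assembly and search for call instructions (illustrative code):

/* check.c -- compile with: arm-none-eabi-gcc -O2 -S check.c */
static int scale(int x)
{
    return x * 5;
}

int filter(int x)
{
    return scale(x) + 1;
}

/* In the generated check.s, a surviving call would appear as a
   "bl scale" instruction; if scale() was inlined, no such branch appears. */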

As always there's a choice between code size and execution speed.
If you wish to remove the call overhead of invoking a function but wish to keep your code modular, then consider using the inline function attribute suitable for your compiler, e.g.
#include <stdint.h>
#include <string.h>   /* memset */
#include "nrf_log.h"  /* NRF_LOG_DEBUG (Nordic nRF5 SDK) */

static inline void com_ClearMessageBuffer(uint8_t* pBuffer, uint32_t length)
{
    NRF_LOG_DEBUG("com_ClearMessageBuffer");
    memset(pBuffer, 0, length);
}
Then at compile time your inline function code will be inserted into the code flow wherever it is called.
This will speed up execution but, when the function is called multiple times, will increase the code size.

Related

Why is optimizing inline functions easier than normal functions?

I'm reading What Every Programmer Should Know About Memory (https://people.freebsd.org/~lstewart/articles/cpumemory.pdf) and it says that inline functions make your code more optimizable,
for example:
Inlining of functions, in particular, allows the compiler to optimize larger chunks of code at a time which, in turn, enables the generation of machine code which better exploits the processor’s pipeline architecture
and:
The handling of both code and data (through dead code elimination or value range propagation, and others) works better when larger parts of the program can be considered as a single unit.
and this also:
If a function is only called once it might as well be inlined. This gives the compiler the opportunity to perform more optimizations (like value range propagation, which might significantly improve the code).
After reading these it seems, to me at least, that inline functions are easier to optimize, but why? Why is it easier to optimize something that is inlined?
The reason that it is easier to do a better job when optimizing inlined functions than out-of-line ones is that you know the complete context in which the function is called and can use this information to tune the generated code to match that exact context. This often allows more efficient code not only for the inlined copy of the function but also for the calling function. The caller and the callee can be optimized to fit each other in a way that is not possible for out-of-line functions.
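A minimal sketch of that effect (illustrative code, assuming gcc or clang with optimization enabled):

static int scale(int x, int factor)
{
    return x * factor;
}

int twice(int x)
{
    /* After inlining, the compiler sees that factor == 2 at this call site
       and can reduce the multiply to a shift or an add; the generic,
       out-of-line scale() never has that contextual information. */
    return scale(x, 2);
}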
There is no difference!
All functions are subject to being inlined by gcc in -O3 optimization mode, whether declared inline, static, neither or both.
see: https://stackoverflow.com/a/40783656/9925764
or here is a modified version of the example by @Eugene Sh., without the noinline option:
https://godbolt.org/z/arPEf7rd4

Advantage of #define instead of creating a function in embedded

Recently I was looking at some embedded code in which they use
#define print() printf("hello world")
instead of
void print() { printf("hello world"); }
My question is: what is the gain from using #define instead of creating a function?
It may be related to performance.
A function call has some overhead (calling, saving things on the stack, returning, etc.), while a macro is a direct substitution of the macro name with its contents, i.e. no overhead.
In this example the functions foo and bar do exactly the same thing: foo uses a macro while bar uses a function call.
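The original answer showed the compiled output; a minimal reconstruction consistent with the description (the names foo, bar and printY come from the answer, the bodies are assumed) is:

#include <stdio.h>

#define PRINTY() printf("y")    /* macro version */

void printY(void)               /* function version */
{
    printf("y");
}

void foo(void) { PRINTY(); }    /* the macro expands in place: no call */
void bar(void) { printY(); }    /* without optimization: an actual call */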
As you can see in the compiled output, bar and printY together require more instructions than foo.
So by using a macro, the performance got a little better.
But... there are downsides to this approach:
Macros are hard to debug, as you can't single-step through a macro.
Extensive use of a macro increases the size of the binary (compared to using a function call), which can impact performance in a negative direction.
Also notice that modern compilers (with optimization on) are really good at figuring out when it's a good idea to automatically inline a function (i.e. your code is written with a function call, but the compiler decides to inline the function as if it were a macro). So you might get the same performance using a function call.
Further, you can use the inline keyword as a hint to the compiler that you think it would be good to inline a function. But even with that keyword, the compiler may decide not to inline. The only way to make sure that the code gets inlined is by using a macro.
There is no advantage. Using #define like this is quite ancient C programming style.
In 1999, the C language got the inline keyword, which makes all such macros obsolete. And with modern compilers, inline is often superfluous too, since the compiler is nowadays better than the programmer at determining when to inline.
Some of the embedded compilers out there can still be rather bad at such optimizations though, and that's why embedded C code tends to lag behind in modernization.
In general, doing micro-optimizations like this is called "premature optimization", meaning the programmer is meddling with optimizations that should be left to the compiler, even in hard real-time systems. Such optimization should only be a last resort, once you have 1) detected an actual bottleneck, and 2) disassembled the code to check whether manual inlining actually does anything good for performance.
Sometimes you want to stub out functionality at compile time. Macros give you an easy way to do this.
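For instance (a hypothetical logging macro, not from the original code):

#include <stdio.h>

#ifdef NDEBUG
#define LOG(msg) ((void)0)            /* release build: stubbed out, generates no code */
#else
#define LOG(msg) printf("%s\n", msg)  /* debug build: real logging */
#endif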

Header-only and static-inline-only library in C

I write small header-only and static-inline-only libraries in C. Would this be a bad idea when applied to big libraries? Or is it likely that the running time will be faster with the header-only version? Well, without considering the obvious compilation time difference.
Yes, it is a bad idea -- especially when integrated with larger libraries.
The complexity problem with inline functions generally grows as these libraries are included in, and visible to, more translation units via more complex header-inclusion graphs, which is quite common in larger projects. Builds become much more time-consuming as translation-unit counts and dependencies increase, and the increase is typically not linear.
There are reasons this flies in C++ but not in C: the inline export semantics differ. In short, in C you will end up producing tons of copies of functions (as well as of the functions' variables). C++ deduplicates them; C does not.
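A sketch of the issue, assuming the usual static inline approach (illustrative header):

/* util.h -- included by many .c files */
static inline int clamp(int v, int lo, int hi)
{
    return v < lo ? lo : (v > hi ? hi : v);
}

/* Every translation unit that includes util.h gets its own private copy of
   clamp(); any copies the optimizer does not inline away (for example when
   the function's address is taken) remain duplicated in the final binary. */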
Also, inlining isn't a silver bullet for speed. The approach will often increase your code size and executable size. Large inlined functions can create slower code, and extra copies of functions can also make your program slower. Larger binaries take more time to link and to initialize (= launch). Smaller is usually better.
It's better to consider alternatives, such as Link Time Optimizations, Whole Program Optimizations, Library design, using C++ -- and to avoid C definitions in headers.
Also keep in mind that the compiler can eliminate dead code, and the linker can eliminate unused functions.
I wrote a unit testing framework* as a single C89 header file. Essentially everything is a macro or marked static and link time optimisation (partly) deduplicates the result.
This is a win for ease of use as integration with build systems is trivial.
Compile times are OK, since this is C, but the resulting function duplication does bother me a little. The library can therefore be used as header + source instead, by defining a macro before #including it in a single source file, e.g.
#define MY_LIB_HEADER_IMPLEMENTATION
#include "my_lib.h"
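For reference, the header is structured roughly like this (a sketch of the common single-header pattern with illustrative names, not the author's actual framework):

/* my_lib.h */
#ifndef MY_LIB_H
#define MY_LIB_H

int my_lib_add(int a, int b);           /* declarations, visible everywhere */

#endif /* MY_LIB_H */

#ifdef MY_LIB_HEADER_IMPLEMENTATION     /* defined in exactly one .c file */
int my_lib_add(int a, int b)
{
    return a + b;                       /* definitions, compiled only once */
}
#endif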
I don't think I would take this approach for a larger project, but I think it's optimal for what is essentially a set of unit testing macros.
in the "don't call us, we'll call you" sense

Is there, as in JavaScript, a performance penalty for creating functions in C?

In JavaScript, there are often huge performance penalties for writing functions. For example, if you use this function:
function double(x){ return x*2; }
inside an inner loop, you are probably hurting your performance considerably, so it is really profitable to inline that kind of function in intensive applications. Does this, in general, hold for C? Am I free to create that kind of function for everything, and rest assured the compiler will do the job, or is hand inlining still important?
The answer is: it depends.
I'm currently using MSVC compiler and GCC for a project at work and my experience is that they both do a pretty good job. Furthermore, the cost of a function call in native code can be pretty small, especially in functions that do not need to be accessible outside the executable (like functions not exported in a shared library). For these functions, there is more flexibility with how the call is actually implemented.
A few things to note: it's much easier for a compiler to optimize calls to static functions. Functions with external linkage often require link time optimization since one must know how and where the function is actually called, as well as the implementation, to do much optimization or inlining. This requires examining more than one compilation unit at a time.
I would say that you should use functions where it makes sense and makes the code easier to read and maintain. In general, it is safe to assume that the cost is smaller than it would be in JavaScript. But in the end, you'd have to profile the code to say anything more precise.
UPDATE: I want to emphasize that functions can be inlined across compilation units, but this requires link-time optimization (or whole program optimization). This is supported in both GCC (https://gcc.gnu.org/wiki/LinkTimeOptimization) and MSVC (http://msdn.microsoft.com/en-us/library/0zza0de8.aspx).
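A minimal sketch of cross-unit inlining with LTO (illustrative files and names):

/* helper.c */
int twice(int x) { return x * 2; }

/* main.c */
int twice(int x);

int main(void)
{
    return twice(21);   /* with LTO, this call can be inlined across files */
}

/* Build with link-time optimization enabled:
   gcc -O2 -flto -c helper.c
   gcc -O2 -flto -c main.c
   gcc -O2 -flto helper.o main.o -o prog */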
These days, if you can beat the compiler by copying the body of a function and pasting it everywhere you call that function, you probably need a different compiler.
In general, with optimizations turned on, gcc will tend to inline short functions provided that they are defined in the same compilation unit that they are called in.
Moreover, if the calling function and called function are in different compilation units, the compiler does not have a chance to inline them (unless link-time optimization is used), regardless of what you request.
So, if you want to maximize the chance of the compiler optimizing away a function call (without manually inlining), you should define the function in a .h file or in the same .c file that it is called in.
There are no inner functions in C. Period. So the rest of your question is kind of irrelevant.
Anyway, as for "normal" functions in C, the compiler may or may not inline them (replace the function invocation with its body). If you compile your code with "optimize for size" (e.g. gcc -Os), it may decide not to inline, for the obvious reason.

Difference between macros and functions in C in relation to instruction memory and speed

To my understanding, the difference between a macro and a function is that a macro call will be replaced by the instructions in its definition, while a function does the whole push, branch and pop thing. Is this right, or have I misunderstood something?
Additionally, if this is right, it would mean that macros take more space but are faster (because of the lack of push, branch and pop instructions), wouldn't they?
What you wrote about the performance implications is correct if the C compiler is not optimizing. But optimizing compilers can inline functions just as if they were macros, so an inlined function call runs at the same speed as a macro, and there is no push/pop overhead. To trigger inlining, enable optimization in your compiler settings (e.g. gcc -O2), and put your functions into the .h file as static inline.
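For example (illustrative header, assuming gcc -O2):

/* mymath.h */
#ifndef MYMATH_H
#define MYMATH_H

static inline int square(int x)
{
    /* The definition is visible in every translation unit that includes
       this header, so the optimizer can inline each call. */
    return x * x;
}

#endif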
Please note that sometimes inlining/macros is faster, sometimes a real function call is faster, depending on the code and the compiler. If the function body is very short (and most of it will be optimized away), usually inlining is faster than a function call.
Another important difference is that macros can take arguments of different types, and the macro definition can make sense for multiple types (but the compiler won't do type checking for you, so you may get undesired behavior or a cryptic error message if you use a macro with the wrong argument type). This polymorphism is hard to mimic with functions in C (but easy in C++ with function overloading and function templates).
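For instance (a classic illustration, not from the original answer):

/* One macro definition works for any type supporting ">", with no type checking. */
#define MAX(a, b) ((a) > (b) ? (a) : (b))

int    i = MAX(2, 3);       /* works for int */
double d = MAX(2.5, 1.0);   /* and for double, from the same definition */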
This might have been right in the 1980s, but modern compilers are much better.
Functions don't always push and pop the stack, especially if they're leaf functions or have tail calls. Also, functions are often inlined, and can be inlined even if they are defined in other translation units (this is called link-time optimization).
But you're right that, in general, when optimizations are turned off, a macro will be inlined and a function won't be. Either version may take more space; it depends on the particulars of the macro/function.
A function uses space in two ways: the body uses space, and the function call uses space. If the function body is very small, it may actually save space to inline it.
Yes, your understanding is right. But you should also note that there is no type checking in macros, and that they can lead to side-effect surprises. You should also be very careful about parenthesizing macros.
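For example (classic pitfalls, illustrative code):

#define SQUARE_BAD(x) x * x          /* badly parenthesized */
#define SQUARE(x)     ((x) * (x))    /* properly parenthesized */

int a = SQUARE_BAD(1 + 2);   /* expands to 1 + 2 * 1 + 2 == 5, not 9 */
int b = SQUARE(1 + 2);       /* 9, as intended */

/* Side effects are still dangerous: SQUARE(i++) expands to ((i++) * (i++)),
   which evaluates i++ twice -- undefined behavior in C. */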
Your understanding is half correct. The point is that macros are resolved before compilation. You should think of them as sophisticated text replacement tools (that's oversimplifying it, but is mostly what it comes down to).
So the difference is when in the build process your code is used.
This is orthogonal to the question of what the compiler really does with it when it creates the final binary code. It is more or less free to do whatever it thinks is correct to produce the intended behaviour. In C++, you can only hint at your preference with the inline keyword. The compiler is free to ignore that hint.
Again, this is orthogonal to the whole preprocessor business. Nothing stops you from writing macros which result in C++ code using the inline keyword, after all. Likewise, nobody stops you from writing macros which result in a lot of recursive C++ functions which the compiler will probably not be able to inline even if it wanted to.
The conclusion is that your question is framed wrongly. It's really a general question of binaries with a lot of inlined functions vs. binaries with a lot of real function calls. Macros are just one technique you can use to shift the tradeoff in one direction or the other, and you would face the same general question without macros.
The assumption that inlining a function will always trade space for speed is wrong. Inlining the wrong (i.e. too big) functions will even have a negative impact on speed. As is always the case with such optimisations, do not guess but measure.
You should read the FAQ on this: "Do inline functions improve performance?"
