Effects of very large functions in C programs?

Ignoring modularity and readability, what are the effects of having large functions in terms of performance against many sub divided functions? (C Language in general).

A large function probably has a small performance gain over many small functions, due to fewer function calls. But my general rule is: let the compiler deal with optimizations, and concentrate on functionality and security.

Functions are an important part of code organization in any programming language. Performance-wise, a single large function avoids function-call overhead and the associated stack jumps, which can yield slightly faster code. But not every project has the luxury of abandoning modularity and living with code that is unreadable, or worse, confusing or misleading. Over time, the cost of a project built on large functions will be far greater than that of a project with small functions, in terms of maintenance, refactoring, feature enhancements, etc. Again, a function is "big" only when analyzed in a certain context, and there are situations where a big function cannot be broken down into smaller pieces; that is perfectly acceptable, as long as it is well designed and simple.
Remember the first rule of writing a function: do one thing and one thing well.

In C, each function call creates a stack frame. With a single function there is only one frame and no call/return jumps; with many subdivided functions, each call sets up a separate frame and jumps into it, so performance may be reduced, depending on how well the compiler optimizes (for example, by inlining).

Related

Is dividing the task into functions beneficial or harmful? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 4 years ago.
I am working with embedded systems (mostly ARM Cortex-M3/M4) using the C language, and I was wondering what the advantages/disadvantages are of dividing a task into many functions, such as:
void Handle_Something(void)
{
    // do Task-1
    // do Task-2
    // do Task-3
    // etc.
}
to
void Handle_Something(void)
{
    Handle_Task1();
    Handle_Task2();
    // etc.
}
How can these two approaches be examined with respect to stack usage and overall processing speed, and which is safer/better, and for what reason? (You can assume this is outside of an ISR.)
From what I know, stack memory is allocated/deallocated for local variables on each call/return cycle, so dividing the task seems reasonable in terms of memory usage. But when I do this, I sometimes get hard faults from different sources (mostly bus or undefined-instruction errors) that I haven't been able to figure out.
Also, working speed is crucial for many applications in my field, so I need to know which method provides faster responses.
I would appreciate some enlightenment. Thanks everybody in advance.
This is what's known as "premature optimization".
In the old days, when compilers were horrible, they couldn't inline functions by themselves. So the keyword inline was added to C (similar non-standard versions also existed before 1999). It was used to tell a bad compiler how to generate better code.
Nowadays this is mostly history. Compilers are better than programmers at determining what to inline and when. They may, however, struggle when a called function is located in a different "translation unit" (basically, a different .c file). But I take it that is not the case here, and Handle_Task1() etc. can be regarded as functions in the same file.
With the above in mind:
How can these two approaches be examined with respect to stack usage and overall processing speed
They are to be regarded as identical. They use the same stack space and take the same time to execute.
Unless you have a bad, older compiler, in which case function calls always take extra space and execution time. Since you are working with modern MCUs, this should not be the case; otherwise you desperately need to get a better compiler.
As a rule of thumb, it is better practice to split larger functions into several smaller ones, for the sake of readability and maintenance. Even in hard real-time systems, there exist very few cases where function-call overhead is an actual bottleneck, even when bad compilers are used.
Memory on the stack isn't allocated/deallocated from some complex memory pool; the stack pointer is simply increased/decreased. That operation is basically free in all but the tightest, smallest loops imaginable (and those will probably be optimized by the compiler anyway).
Don't group functions together just because they could reuse variables; for example, don't create a bunch of int tempInt; long tempLong; variables that you use throughout your entire program. A variable should serve only a single purpose, and its scope should be kept as tight as possible. Also see: Is it good or bad to reuse variables?
Expanding on that, keeping the scope of every variable as local as possible may even allow your compiler to keep it in a CPU register only. A short-lived variable might never be allocated on the stack at all!
Try to limit functions to a single purpose and try to avoid side effects: if you avoid global variables, a function becomes easier to test, optimize, and understand, because each time you call it with the exact same set of arguments it will perform the exact same action. Have a look at: Why are global variables bad, in a single-threaded, non-OS, embedded application
Each solution has advantages and disadvantages.
The first approach allows the code to execute (a priori) faster, because the assembly won't contain jump instructions for calls. However, you have to take readability into account: it mixes different kinds of functionality in the same function (and creates large functions, which is not a good idea from a guidelines point of view).
The second solution can be easier to understand, because each function contains a simple task, and it is easier to document (that is, you don't have to explain different "purposes" in the same function). As I said, this solution is slower, because your "scheduler" contains jumps. Nevertheless, you can declare the simple tasks as inline: provided you split the code into several simple tasks with proper documentation, the compiler will generate the same assembly as the first approach, avoiding the jumps.
Another point is memory use. If your simple tasks are called from different parts of the code, the first solution and the second solution with inline are worse (in terms of memory) than the second solution without inline, because an inlined function body is duplicated at every place it is called.
Working with modules is always more efficient in terms of error handling, debugging, and re-reading. Considering some heavyweight libraries (SLAM, PCL, etc.): their routines are used as external functions and don't cause a significant loss of performance (to be honest, it's sometimes almost impossible to embed such large functions into your code). You may see slightly higher stack use, as @Colin commented.

Many small-sized functions

In the computer-science literature it is generally recommended to write functions as short as possible. I understand this may increase readability (although not always), and such an approach also provides more flexibility. But does it have something to do with optimization as well? I mean: does it matter to a compiler whether it compiles a bunch of small routines rather than a few large routines?
Thanks.
That depends on the compiler. Many older compilers only optimized a single function at a time, so writing larger functions (up to some limit) could improve optimization -- but (with most of them) exceeding that limit turned optimization off completely.
Most reasonably current compilers can generate inline code for functions (and C99 added the inline keyword to facilitate that) and do global (cross-function) optimization, in which case it normally makes no difference at all.
@twain249 and @Jerry are both correct; breaking a program into multiple functions can have a negative effect on performance, but it depends on whether or not the compiler can optimize the functions into inline code.
The only way to know for sure is to examine the assembler output of your program and do some profiling. For example, if you know a particular code path is causing a performance problem, you can look at the assembler, and see how many functions are getting called, how many times parameters are being pushed onto the stack, etc. In that case, you may want to consolidate small functions into one larger one.
This has been a concern for me in the past: doing very tight optimization for embedded projects, I have consciously tried to reduce the number of function calls, especially in tight loops. However, this does produce ungainly functions, sometimes several pages long. To mitigate the maintenance cost, you can use macros, which I have leveraged heavily and successfully to ensure there are no function calls while preserving readability.

Is it okay to use functions to stay organized in C?

I'm a relatively new C programmer, and I've noticed that many conventions from other higher-level OOP languages don't exactly hold true on C.
Is it okay to use short functions to have your coding stay organized (even though it will likely be called only once)? An example of this would be 10-15 lines in something like void init_file(void), then calling it first in main().
I would have to say, not only is it OK, but it's generally encouraged. Just don't overly fragment the train of thought by creating myriads of tiny functions. Try to ensure that each function performs a single cohesive, well... function, with a clean interface (too many parameters can be a hint that the function is performing work that is not sufficiently separate from its caller).
Furthermore, well-named functions can serve to replace comments that would otherwise be needed. As well as providing re-use, functions can also (or instead) provide a means to organize the code and break it down into smaller units which can be more readily understood. Using functions in this way is very much like creating packages and classes/modules, though at a more fine-grained level.
Yes. Please. Don't write long functions. Write short ones that do one thing and do it well. The fact that they may only be called once is fine. One benefit is that if you name your function well, you can avoid writing comments that will get out of sync with the code over time.
If I can take the liberty to do some quoting from Code Complete:
(These reason details have been abbreviated and in spots paraphrased, for the full explanation see the complete text.)
Valid Reasons to Create a Routine
Note the reasons overlap and are not intended to be independent of each other.
Reduce complexity - The single most important reason to create a routine is to reduce a program's complexity (hide away details so you don't need to think about them).
Introduce an intermediate, understandable abstraction - Putting a section of code into a well-named routine is one of the best ways to document its purpose.
Avoid duplicate code - The most popular reason for creating a routine. Saves space and is easier to maintain (only have to check and/or modify one place).
Hide sequences - It's a good idea to hide the order in which events happen to be processed.
Hide pointer operations - Pointer operations tend to be hard to read and error prone. Isolating them into routines shifts focus to the intent of the operation instead of the mechanics of pointer manipulation.
Improve portability - Use routines to isolate nonportable capabilities.
Simplify complicated boolean tests - Putting complicated boolean tests into a function makes the code more readable because the details of the test are out of the way and a descriptive function name summarizes the purpose of the tests.
Improve performance - You can optimize the code in one place instead of several.
To ensure all routines are small? - No. With so many good reasons for putting code into a routine, this one is unnecessary. (This is the one thrown into the list to make sure you are paying attention!)
And one final quote from the text (Chapter 7: High-Quality Routines)
One of the strongest mental blocks to creating effective routines is a reluctance to create a simple routine for a simple purpose. Constructing a whole routine to contain two or three lines of code might seem like overkill, but experience shows how helpful a good small routine can be.
If a group of statements can be thought of as a thing - then make them a function
I think it is more than OK; I would recommend it! Short, easy-to-prove-correct functions with well-thought-out names lead to code that is more self-documenting than long, complex functions.
Any compiler worth using will be able to inline these calls to generate efficient code if needed.
Functions are absolutely necessary to stay organized. You need to design the solution first, and then split it into functions according to the different pieces of functionality. A segment of code that is used multiple times probably needs to be written as a function.
First think about the problem at hand, break down its components, and try writing a function for each component. When writing a function, check whether some code segment does the same thing elsewhere; if so, break it out into a sub-function. A sub-module is likewise a candidate for another function. But at some point this decomposition should stop, and where it stops depends on you. In general, avoid both too many big functions and too many tiny ones.
When constructing a function, aim for a design with high cohesion and low coupling.
EDIT 1:
You might also want to consider separate modules. For example, if you need a stack or queue for some application, make it a separate module whose functions can be called from other functions. This way you avoid re-coding commonly used modules, by programming them as a group of functions stored separately.
Yes
I follow a few guidelines:
DRY (aka DIE)
Keep Cyclomatic Complexity low
Functions should fit in a Terminal window
Each one of these principles at some point will require that a function be broken up, although I suppose #2 could imply that two functions with straight-line code should be combined. It's somewhat more common to do what is called method extraction than actually splitting a function into a top and bottom half, because the usual reason is to extract common code to be called more than once.
#1 is quite useful as a decision aid. It's the same thing as saying, as I do, "never copy code".
#2 gives you a good reason to break up a function even if there is no repeated code. If the decision logic passes a certain complexity threshold, we break it up into more functions that make fewer decisions.
It is indeed a good practice to refactor code into functions, irrespective of the language being used. Even if your code is short, it will make it more readable.
If your function is quite short, you can consider inlining it.
IBM Publib article on inlining

Why is C slow with function calls?

I am new here so apologies if I did the post in a wrong way.
I was wondering if someone could please explain why C is so slow with function calling?
It's easy to give a shallow answer to the standard question about recursive Fibonacci, but I would appreciate knowing the "deeper" reason, as deep as possible.
Thanks.
Edit 1: Sorry for that mistake; I misunderstood an article on Wikipedia.
When you make a function call, your program has to put several registers on the stack, maybe push some more stuff, and adjust the stack pointer. That's about all that can be "slow". Which is, actually, pretty fast: about 10 machine instructions on an x86_64 platform.
It's slow if your code is sparse and your functions are very small. This is the case for the Fibonacci function. However, you have to distinguish between "slow calls" and "slow algorithm": calculating the Fibonacci sequence with a naive recursive implementation is pretty much the slowest straightforward way of doing it. There is almost as much code in the function body as in the function prologue and epilogue (where the pushing and popping takes place).
There are cases in which calling functions will actually make your code faster overall. When you deal with large functions and your registers are crowded, the compiler may have a rough time deciding which registers to store data in. However, isolating code inside a function call simplifies the compiler's task of deciding which registers to use.
So, no, C calls are not slow.
Based on the additional information you posted in the comment, it seems that what is confusing you is this sentence:
"In languages (such as C and Java)
that favor iterative looping
constructs, there is usually
significant time and space cost
associated with recursive programs,
due to the overhead required to manage
the stack and the relative slowness of
function calls;"
In the context of a recursive implementation fibonacci calculations.
What this is saying is that making recursive function calls is slower than looping but this does not mean that function calls are slow in general or that function calls in C are slower than function calls in other languages.
Fibonacci generation is naturally a recursive algorithm, so the most obvious and natural implementation involves many function calls, but it can also be expressed as an iteration (a loop) instead.
The Fibonacci algorithm can be written so that the recursion has a special property called tail recursion (note that the naive two-call version is not tail recursive; it must first be restructured with accumulator arguments). A tail-recursive function can be easily and automatically converted into an iteration. Some languages, particularly functional languages where recursion is very common and iteration is rare, guarantee that they will recognize this pattern and automatically transform such a recursion into an iteration "under the hood". Some optimizing C compilers will do this as well, but it is not guaranteed. In C, since iteration is both common and idiomatic, and since the tail-recursion optimization is not necessarily going to be made for you by the compiler, it is better to write it explicitly as an iteration to achieve the best performance.
So interpreting this quote as a comment on the speed of C function calls, relative to other languages, is comparing apples to oranges. The other languages in question are those that can take certain patterns of function calls (which happen to occur in Fibonacci number generation) and automatically transform them into something that is faster, but is faster because it is actually not a function call at all.
C is not slow with function calls.
The overhead of calling a C function is extremely low.
I challenge you to provide evidence to support your assertion.
There are a couple of reasons C can be slower than some other languages for a job like computing Fibonacci numbers recursively. Neither really has anything to do with slow function calls though.
In quite a few functional languages (and languages where a more or less functional style is common), recursion (often very deep recursion) is quite common. To keep speed reasonable, many implementations of such languages do a fair amount of work optimizing recursive calls to (among other things) turn them into iteration when possible.
Quite a few also "memoize" results from previous calls -- i.e., they keep track of the results from a function for a number of values that have been passed recently. When/if the same value is passed again, they can simply return the appropriate value without re-calculating it.
It should be noted, however, that the optimization here isn't really faster function calls -- it's avoiding (often many) function calls.
The recursive Fibonacci is the reason, not the C language. Recursive Fibonacci looks something like:
int f(int i)
{
    return i < 2 ? 1 : f(i-1) + f(i-2);
}
This is the slowest algorithm for calculating a Fibonacci number, and the stack bookkeeping for the exponentially many recursive calls makes it slower still.
I'm not sure what you mean by "a shallow answer to the standard question about Recursive Fibonacci".
The problem with the naive recursive implementation is not that the function calls are slow, but that you make an exponentially large number of calls. By caching the results (memoization) you can reduce the number of calls, allowing the algorithm to run in linear time.
Of all the languages out there, C is probably among the fastest (unless you are an assembly language programmer). Most C function calls boil down to simple stack and register operations. When you call a function, the generated code passes your arguments in registers or pushes them onto the stack (depending on the calling convention), then calls the function. The function reads its parameters, executes its body, places any return value in a register or on the stack, and returns, after which the stack is cleaned up. Stack and register operations are usually among the fastest things a CPU does.
If a profiler or similar tool is telling you that a function call you are making is slow, then it HAS to be the code inside the function. Try posting your code here and we will see what is going on.
I'm not sure what you mean. C is basically one abstraction layer on top of CPU assembly instructions, which is pretty fast.
You should clarify your question really.
In some languages, mostly of the functional paradigm, function calls made at the end of a function body can be optimized so that the same stack frame is re-used. This can potentially save both time and space. The benefit is particularly significant when the function is both short and recursive, so that the stack overhead might otherwise dwarf the actual work being done.
A tail-recursive Fibonacci will therefore run much faster where such optimization is available (the naive two-call version must first be restructured so its recursive call is in tail position). C does not guarantee this optimization, so performance could suffer.
BUT, as has been stated already, the naive algorithm for the Fibonacci numbers is horrendously inefficient in the first place. A more efficient algorithm will run much faster, whether in C or another language. Other Fibonacci algorithms probably will not see nearly the same benefit from the optimization in question.
So in a nutshell, there are certain optimizations that C does not generally support that could result in significant performance gains in certain situations, but for the most part, in those situations, you could realize equivalent or greater performance gains by using a slightly different algorithm.
I agree with Mark Byers, since you mentioned the recursive Fibonacci. Try adding a printf so that a message is printed each time you do an addition. You will see that the recursive Fibonacci does a lot more additions than may appear at first glance.
What the article is talking about is the difference between recursion and iteration.
This is under the topic called algorithm analysis in computer science.
Suppose I write the fibonacci function and it looks something like this:
// finds the nth fibonacci
int rec_fib(int n) {
    if (n == 1)
        return 1;
    else if (n == 2)
        return 1;
    else
        return rec_fib(n - 1) + rec_fib(n - 2);
}
Which, if you write it out on paper (I recommend this), you will see this pyramid-looking shape emerge.
It's taking A Whole Lotta Calls to get the job done.
However, there is another way to write fibonacci (there are several others too)
int fib(int n) // this one taken from scriptol.com, since it takes more thought to write out
{
    int first = 0, second = 1;
    int tmp;
    while (n--)
    {
        tmp = first + second;
        first = second;
        second = tmp;
    }
    return first;
}
This one only takes time directly proportional to n, instead of the big pyramid shape you saw earlier, which grew out in two dimensions.
With algorithm analysis you can determine exactly the speed of growth in terms of run-time vs. size of n of these two functions.
Also, some recursive algorithms are fast (or can be tricked into being faster). It depends on the algorithm, which is why algorithm analysis is important and useful.
Does that make sense?

Have you written very long functions? If so, why?

I am writing an academic project about extremely long functions in the Linux kernel.
For that purpose, I am looking for examples of real-life functions that are extremely long (a few hundred lines of code) that you don't consider bad programming (i.e., they wouldn't benefit from decomposition or use of a dispatch table).
Have you ever written or seen such code? Can you post or link to it, and explain why it is so long?
I have been getting amazing help from the community here - any idea that will be taken into the project will be properly credited.
Thanks,
Udi
The longest functions that I have ever written all have one thing in common: a very large switch statement. There are times when you have to switch on a long list of items, and it would only make things harder to understand if you tried to refactor some of the options into separate functions. Large switch statements make the cyclomatic complexity go through the roof, but they are often better than the alternative implementations.
It was the last one before I got fired.
A previous job: An extremely long case statement, IIRC 1000+ lines. This was long before objects. Each option was only a few lines long. Breaking it up would have made it less clear. There were actually a pair of such routines doing different things to the same underlying set of data types.
Sorry, I don't have the code anymore and it isn't mine to post, anyway.
The longest function I have seen that I didn't consider horrible was the key method of a custom CPU VM. As with @epotter, this involved a big switch statement. In fact, I'd say a lot of the methods that I find resist being cleanly broken down or improved in readability involve switch statements.
Unfortunately, you won't often find this type of subroutine checked in or posted somewhere if it's auto-generated during a build step using some sort of code generator.
So look for projects that have C generated from another language.
Besides performance, I believe the call stack in kernel space is 8 KB (please verify the size). Also, as far as I know, kernel code is fairly specific. If some code is unlikely to be reused in the future, why bother making it a function, considering the function-call overhead?
I could imagine that when speed is important (such as when holding some sort of lock in the kernel) you would not want to break up a function, because of the overhead of making a function call. When compiled, parameters have to be pushed onto the stack and data has to be popped off before returning. Therefore you may have a large function for efficiency reasons.
