I'm a relatively new C programmer, and I've noticed that many conventions from other, higher-level OOP languages don't exactly hold true in C.
Is it okay to use short functions to keep your code organized (even though each one will likely be called only once)? An example of this would be 10-15 lines in something like void init_file(void), then calling it first in main().
I would have to say, not only is it OK, but it's generally encouraged. Just don't overly fragment the train of thought by creating myriads of tiny functions. Try to ensure that each function performs a single cohesive, well... function, with a clean interface (too many parameters can be a hint that the function is performing work which is not sufficiently separate from its caller).
Furthermore, well-named functions can serve to replace comments that would otherwise be needed. As well as providing re-use, functions can also (or instead) provide a means to organize the code and break it down into smaller units which can be more readily understood. Using functions in this way is very much like creating packages and classes/modules, though at a more fine-grained level.
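For instance, the init_file() idea from the question might look something like this (a minimal sketch; the file name, the FILE pointer, and the error handling are invented for illustration):

#include <stdio.h>
#include <stdlib.h>

static FILE *log_file;   /* hypothetical global used by the rest of the program */

/* Open the log file and abort on failure; called exactly once from main(). */
void init_file(void)
{
    log_file = fopen("run.log", "w");
    if (log_file == NULL) {
        perror("fopen");
        exit(EXIT_FAILURE);
    }
}

int main(void)
{
    init_file();
    /* ... rest of the program ... */
    fclose(log_file);
    return 0;
}

Even though init_file() is called only once, main() now reads as a sequence of named steps instead of a block of setup details.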
Yes. Please. Don't write long functions. Write short ones that do one thing and do it well. The fact that they may only be called once is fine. One benefit is that if you name your function well, you can avoid writing comments that will get out of sync with the code over time.
If I can take the liberty to do some quoting from Code Complete:
(These reasons have been abbreviated and in spots paraphrased; for the full explanation, see the complete text.)
Valid Reasons to Create a Routine
Note the reasons overlap and are not intended to be independent of each other.
Reduce complexity - The single most important reason to create a routine is to reduce a program's complexity (hide away details so you don't need to think about them).
Introduce an intermediate, understandable abstraction - Putting a section of code into a well-named routine is one of the best ways to document its purpose.
Avoid duplicate code - The most popular reason for creating a routine. Saves space and is easier to maintain (only have to check and/or modify one place).
Hide sequences - It's a good idea to hide the order in which events happen to be processed.
Hide pointer operations - Pointer operations tend to be hard to read and error prone. Isolating them into routines shifts focus to the intent of the operation instead of the mechanics of pointer manipulation.
Improve portability - Use routines to isolate nonportable capabilities.
Simplify complicated boolean tests - Putting complicated boolean tests into a function makes the code more readable because the details of the test are out of the way and a descriptive function name summarizes the purpose of the tests (see the sketch after this list).
Improve performance - You can optimize the code in one place instead of several.
To ensure all routines are small? - No. With so many good reasons for putting code into a routine, this one is unnecessary. (This is the one thrown into the list to make sure you are paying attention!)
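To illustrate the "simplify complicated boolean tests" point, here is a small sketch (the struct and field names are invented for illustration):

#include <stdbool.h>

struct order { double total; int item_count; bool customer_verified; };

/* The descriptive name summarizes what the test means,
   so callers don't have to re-read the details. */
static bool order_qualifies_for_discount(const struct order *o)
{
    return o->customer_verified
        && o->item_count >= 3
        && o->total > 100.0;
}

/* Instead of: if (o->customer_verified && o->item_count >= 3 && o->total > 100.0) */
/* Call site:  if (order_qualifies_for_discount(&o)) { ... } */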
And one final quote from the text (Chapter 7: High-Quality Routines):
One of the strongest mental blocks to creating effective routines is a reluctance to create a simple routine for a simple purpose. Constructing a whole routine to contain two or three lines of code might seem like overkill, but experience shows how helpful a good small routine can be.
If a group of statements can be thought of as a thing, then make them a function.
I think it is more than OK; I would recommend it! Short, easy-to-prove-correct functions with well-thought-out names lead to code which is more self-documenting than long, complex functions.
Any compiler worth using will be able to inline these calls to generate efficient code if needed.
Functions are absolutely necessary to stay organized. You need to design the solution first, and then split it into functions according to the different pieces of functionality. Any segment of code that is used multiple times probably needs to be written as a function.
First think about the problem you have at hand, break it down into components, and for each component try writing a function. When writing a function, check whether some code segment does the same thing elsewhere; if so, break it out into a sub-function, and if there is a sub-module, that is also a candidate for another function. At some point this decomposition should stop, and where exactly is up to you. Generally, avoid both functions that are far too big and masses of functions that are far too small.
When constructing functions, aim for a design with high cohesion and low coupling.
EDIT 1:
You might also want to consider separate modules. For example, if you need a stack or a queue for some application, make it a separate module whose functions can be called from other functions. This way you avoid re-coding commonly used modules, by programming them as a group of functions stored separately.
Yes
I follow a few guidelines:
DRY (aka DIE)
Keep Cyclomatic Complexity low
Functions should fit in a Terminal window
Each one of these principles at some point will require that a function be broken up, although I suppose #2 could imply that two functions with straight-line code should be combined. It's somewhat more common to do what is called method extraction than actually splitting a function into a top and bottom half, because the usual reason is to extract common code to be called more than once.
#1 is quite useful as a decision aid. It's the same thing as saying, as I do, "never copy code".
#2 gives you a good reason to break up a function even if there is no repeated code. If the decision logic passes a certain complexity threshold, we break it up into more functions that make fewer decisions.
It is indeed a good practice to refactor code into functions, irrespective of the language being used. Even if your code is short, it will make it more readable.
If your function is quite short, you can consider inlining it.
IBM Publib article on inlining
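For example, in C99 and later a short helper can be declared static inline in a header, which lets the compiler expand it at the call site (a minimal sketch; the name is invented, and the compiler is still free to ignore the hint):

/* util.h -- hypothetical header */
static inline int clamp_int(int value, int lo, int hi)
{
    if (value < lo) return lo;
    if (value > hi) return hi;
    return value;
}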
I am responsible for designing the software architecture of an embedded system in C90 (which is dictated by the target hardware compiler). It must be easy to build against a couple of targets (traditional testing, Software-In-The-Loop, final hardware). Therefore I took a top-down approach, or rather, designed for an interface:
Once I had defined the data flows of the system (inputs, outputs, ...), I created generic interfaces in the form of .h files that need to be implemented by the targets.
Therefore, and for the sake of the question, let them be two:
imeasures.h --> Measures needed by the algorithm
icomm.h --> Data flow to and from the algorithm to other devices
For the production target, suppose that all the measures but one (e.g. Engine Speed) are taken using the ADCmeasures module, and that the remaining one (Engine Speed) is provided by the RS232comm module.
Question 1
Is it OK if imeasures.h is implemented using both ADCmeasures and RS232comm modules in the following form?
imeasures.h <--is implemented BY-- imeasuresImpl.c
imeasuresImpl.c --> calls functions from ADCmeasures.h and RS232comm.h
Therefore, switching targets would imply changing imeasuresImpl and the modules it calls.
Question 2
Due to the overhead the previous method may introduce (which could be mitigated using inline functions, admittedly), I also thought about a (perhaps less elegant?) form:
imeasures.h <-- is partially implemented by ADCmeasures.c
imeasures.h <-- is partially implemented by RS232comm.c
What pitfalls do you see? I can see that, for example, if imeasures.h consists of a single getter method which returns a struct, I would have to partially fill the struct in both of the partial implementations. Or, alternatively, provide different getter methods, and then I would be deciding on a layout of the implementation beforehand, which would break the top-down principle.
Thank you for your attention.
First, some assumptions about the situation and the requirements
So I assume that through imeasures.h you would ideally like an interface with a single get function returning a structure nicely populated with the freshest measurements. You might also accept some other functions, such as a run function that drives the processes necessary for the measurements, and an init function to initialize things (I say "might" because there are ways, which I have sometimes explored, to get by without these two latter functions).
As you say, I assume you would like to separate out as thin a hardware interface as possible, so you can more easily apply simulation for testing, and later have less to reimplement when porting to different hardware.
As the interface suggests, you would like to hide the split (the fact that one of your measurements comes from RS232).
Solving it with something like Q1: the architecture
Your take in Q1 seems to be an okay approach for laying down the architecture to meet these requirements. For Q2 I would say "forget it"; I can't conceive of any reasonable solution that would look like that.
My approach, just like your Q1, would require at least three implementation files.
On top would be an imeasures.c (I would stick to this name, since that's the usual way of doing these things, and there is no very good reason to do anything different here). This file would implement the whole imeasures.h interface, containing the logic for assembling the measurements and dispatching to the hardware-specific components. It would not contain anything hardware-specific itself.
An RS232comm.c (and .h) would realize the RS232 hardware interface. I would make this as generic as is reasonable within the necessities of meeting the requirements (for example, if it only needs to receive, I would only implement a receiver adequate for the project here). The goal is to have something which meets the project's requirements, yet may be re-used for other projects on the same (or similar) hardware if needed.
An ADCcomm.c (and .h). Note that I did not name it ADCmeasures.c, for a good reason: I don't want anything specific to the actual measurements here. Just like above: something required by the requirements, but generic enough that it might be reused.
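A minimal sketch of how imeasures.c might tie the pieces together (the function names, channel ids, and struct fields here are assumptions for illustration, not your actual interface):

/* imeasures.c -- implements imeasures.h; no hardware-specific code of its own */
#include "imeasures.h"
#include "ADCcomm.h"    /* hypothetical: adc_read_channel() */
#include "RS232comm.h"  /* hypothetical: rs232_get_engine_speed() */

void imeasures_get(struct measures *out)
{
    out->voltage      = adc_read_channel(ADC_CH_VOLTAGE);      /* assumed channel id */
    out->temperature  = adc_read_channel(ADC_CH_TEMPERATURE);  /* assumed channel id */
    out->engine_speed = rs232_get_engine_speed();
}

Only this file knows that one measurement arrives over RS232; callers of imeasures.h never see the split.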
Following this, you will likely end up with an imeasures.c which does not need to be altered in any way for the simulation (it has no hardware-specific code), so it can also be tested in that environment. You also get useful little hardware-specific components which you can reuse for new projects (in my case this happened quite frequently, as electrical engineers would often iterate on the same piece of hardware for later projects).
Usually you shouldn't have to be concerned about overhead. Design first, and optimize only where it is actually necessary. If you design well, you are in fact likely to end up with an end product that performs better, simply because you don't have to battle messy performance code (or "I thought it would perform better" code) that steals time from recognizing the real bottlenecks, and from either discovering better algorithms or optimizing the parts which actually need it.
Well, I hope this helps!
When I read open-source code (Linux C code), I see that a lot of functions are used instead of performing all operations in main(), for example:
void function1(void);
void function2(void);
void function3(void);
void function4(void);

int main(void) {
    function1();
    return 0;
}

void function1(void) {
    // do something
    function2();
}

void function2(void) {
    function3();
    // do something
    function4();
}

void function3(void) {
    // do something
}

void function4(void) {
    // do something
}
Could you tell me what are the pros and cons of using functions as much as possible?
easy to add/remove functions (or new operations)
readability of the code
source efficiency(?) as the variables in the functions will be destroyed (unless dynamic allocation is done)
would nested function calls slow the code flow?
Easy to add/remove functions (or new operations)
Definitely; it's also easy to see where the context for an operation starts and finishes. It's much easier to see that way than from some arbitrary range of lines in the source.
Readability of the code
You can overdo it. There are cases where having a function or not does not make a difference in line count, but does in readability, and it depends on the person whether that difference is positive or not.
For example, if you did lots of set-bit operations, would you make:
some_variable = some_variable | (1 << bit_position);
a function? Would it help?
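Spelled out, such a helper is tiny; whether the name pays for itself is exactly the judgment call (a sketch):

static inline void set_bit(unsigned int *value, unsigned int bit_position)
{
    *value |= 1u << bit_position;
}

/* some_variable = some_variable | (1 << bit_position);   becomes   */
/* set_bit(&some_variable, bit_position);                           */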
Source efficiency(?) due to the variables in the functions being destroyed (unless dynamic allocation is done)
If the source is reasonable (as in, you're not reusing variable names past their real context), then it shouldn't matter. The compiler should know exactly where usage of a value stops and where it can be ignored or destroyed.
Would nested function calls slow the code flow?
In some cases where address aliasing cannot be properly determined it could. But it shouldn't matter in practice in most programs. By the time it starts to matter, you're probably going to be going through your application with a profiler and spotting problematic hotspots anyway.
Compilers are quite good these days at inlining functions, though. You can trust them to do at least a decent job of getting rid of all cases where the calling overhead is comparable to the function body itself (and many other cases).
This practice of using functions becomes really important as the amount of code you write increases. Separating code out into functions improves code hygiene and makes it easier to read. I read somewhere that there is really no point to code that is readable only by you (although in some situations that is fine). If you want your code to live on, it must be maintainable, and maintainability is created, in the simplest sense, by writing functions. Imagine a code base that exceeds 100k lines, which is quite common, with everything in the main function; that would be an absolute nightmare to maintain. Dividing the code into functions creates a degree of separation so that many developers can work on different parts of the code base. So, the short answer is yes: it is good to use functions where appropriate.
Functions should help you structure your code. The basic idea is that when you identify some place in the code which does something that can be described in a coherent, self-contained way, you should think about putting it into a function.
Pros:
Code reuse. If you do many times some sequence of operations, why don't you write it once, use it many times?
Readability: it's much easier to understand strlen(st) than while (st[i++] != 0);
Correctness: the code in the previous line is actually buggy. If it is scattered around, you may not even see the bug, and if you fix it in one place, it will remain somewhere else. But with this code inside a function named strlen, you will know what it should do, and you can fix it once (a corrected sketch follows this list).
Efficiency: sometimes, in certain situations, compilers may do a better job when compiling code inside a function. You probably won't know it in advance, though.
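A corrected version of that loop, wrapped in a function, might look like this (a sketch of the idea, not the actual library implementation):

#include <stddef.h>

size_t my_strlen(const char *st)
{
    size_t i = 0;
    while (st[i] != '\0') {   /* test before advancing; the buggy one-liner overshoots by one */
        i++;
    }
    return i;
}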
Cons:
Splitting code into functions just because it is A Good Thing is not a good idea. If you find it hard to give the function a good name (in your mother tongue, not only in C), that is suspicious. doThisAndThat() is probably two functions, not one. part1() is simply wrong.
A function call may cost you execution time and stack memory. This is not as severe as it sounds; most of the time you should not care about it, but it's there.
When abused, it may lead to many functions each doing partial work and delegating the rest from here to there. Too many arguments may impede readability too.
There are basically two types of functions: functions that perform a sequence of operations (these are called "procedures" in some contexts), and functions that do some form of calculation. These two types are often mixed in a single function, but it helps to remember this distinction.
There is another distinction between kinds of functions: those that keep state (like strtok), those that may have side effects (like printf), and those that are "pure" (like sin). Functions like strtok are essentially a special case of a different construct, called an object in object-oriented programming.
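A contrived sketch of the distinction (the names are invented for illustration):

#include <math.h>

/* "Pure": the result depends only on the argument; no hidden state, no side effects. */
static double half_angle_sine(double radians)
{
    return sin(radians / 2.0);
}

/* Keeps state between calls, roughly the way strtok does internally. */
static int next_ticket(void)
{
    static int counter = 0;   /* hidden state that survives across calls */
    return counter++;
}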
You should use functions that perform one logical task each, at a level of abstraction that makes the function of each function easy to logically verify. For instance:
void create_ui() {
create_window();
show_window();
}
void create_window() {
create_border();
create_menu_bar();
create_body();
}
void create_menu_bar() {
for(int i = 0; i < N_MENUS; i++) {
create_menu(menus[i]);
}
assemble_menus();
}
void create_menu(arg) {
...
}
Now, as far as creating a UI is concerned, this isn't quite the way one would do it (you would probably want to pass in and return various components), but the logical structure is what I'm trying to emphasize. Break your task down into a few subtasks, and make each subtask its own function.
Don't try to avoid functions for optimization. If it's reasonable to do so, the compiler will inline them for you; if not, the overhead is still quite minimal. The gain in readability you get from this is a great deal more important than any speed you might get from putting everything in a monolithic function.
As for your title question, "as much as possible," no. Within reason, enough to see what each function does at a comfortable level of abstraction, no less and no more.
One rule of thumb you can use: if part of the code will be reused or rewritten, then put it in a function.
I guess I think of functions like legos. You have hundreds of small pieces that you can put together into a whole. As a result of all of those well designed generic, small pieces you can make anything. If you had a single lego that looked like an entire house you couldn't then use it to build a plane, or train. Similarly, one huge piece of code is not so useful.
Functions are your bricks that you use when you design your project. Well-chosen separation of functionality into small, easily testable, self-contained "functions" makes building and looking after your whole project easy. Their benefits WAYYYYYYY outweigh any possible efficiency issues you may think are there.
To be honest, the art of coding any sizeable project is in how you break it down into smaller pieces, so functions are key to that.
I often write code in MATLAB/Python to test whether my algorithm is feasible (and actually works). I then need to convert the entire code into C and, sometimes, into FORTRAN90.
What would be a good way to manually convert a medium sized code from one language to another?
I have tried :
Converting the entire code from one into another and then testing it.
(Sometimes there are errors and bugs which just won't go away, and finding the source of the error becomes a problem)
Go line by line and check for consistency of outputs every few lines.
(Too time consuming)
Use converters like f2c.
(In my experience, they are extremely horrible. I link to a lot of libraries which have different function calls for C and Fortran)
Also:
I am fairly conversant with the programming languages I deal with so I don't need manuals or reference guides for my work (i.e. I know the syntax).
I am not asking this question specifically about MATLAB and C but rather as a translation paradigm.
Regarding the size, the code is less than 100 lines long.
I don't want to call the code of one language from the other; please don't suggest that.
Different languages call for different paradigms. You definitely don't write and design code the same way in, e.g., Matlab, Python, C# or C++. Even object hierarchies will change a lot depending on the language.
That said, if your code consists of a few interconnected procedures, then you may get away with a direct line-by-line translation (every language allows you to write two or three interconnected functions while remaining idiomatic). But this is the case only for the simplest programs.
Prototyping in a high-level language and then implementing the same idea in a robust and clean way in a "production" language is a very good practice, but it involves two very different things:
Prototype in whatever language you want. Test, experiment, and convince yourself that the idea works. Pay attention to the big picture, don't focus on performance but on the high level ideas. Pay also attention to difficulties that you encounter when implementing, as you'll face them again in step 2.
Implement the idea from scratch in the production environment, in language X. It will be quicker than if you had not done the prototyping stage, since most of the difficulties were already met in stage 1. Use idiomatic X, and focus on correctness. Pay attention to corner cases, general robustness, and, once it works correctly, performance. You'll notice that roughly half of your code consists of new things which did not appear in step 1 (e.g. error checking, corner-case handling, input/output, unit testing, etc.).
You can see that line-by-line translation is obviously not a good idea, since you are not translating into the same program.
Also, when not prototyping, I find myself throwing away the first version and making another one that I like better, i.e. I find myself prototyping! Implementing the same thing twice is not a waste of time; it is the normal development flow.
You may want to consider using a higher level domain specific language with multiple backends (e.g., Matlab, C, Fortran), producing clean and idiomatic code for each target language, probably with some optimisations. If your problem domain is narrow and every piece of code is more or less typical, it should be fairly trivial to design and implement such a DSL.
Break the source down into pseudo-code with input/process/output, and then write your new code base to fit that spec.
As a beginner, I read everywhere to avoid excessive use of global variables. Well, how do I do that? My low skill level fails me: I end up passing tons of structures around, and it is harder to read than using globals. Any tips on working through this problem/application structure design?
Depending on what your variables are doing, global scope might be the best scope. (Think flags to signal that an interrupt has arrived, and should be handled at a convenient time in the middle of a compute loop.)
Small utility programs can often feel much cleaner by using global variables (I'm thinking especially of small language parsers); but this makes it much harder to integrate the small utility programs into larger programs in the future. There are always trade-offs.
But chances are good that the "correct" data organization will not feel quite so cumbersome. If you post code here, someone may be able to suggest a cleaner layout, but the real problems come when code grows beyond easily understood small samples.
I have a LOT of favorite programming style books, but I think the best I know of to address this situation is The Elements of Programming Style, by Kernighan and Plauger. It's quite old, and difficult to find, but short, sweet, and well worth finding used somewhere.
It's not as short and it's not as sweet, but Code Complete, 2nd Edition is still well worth finding. It's much more detailed, provides much more code, and covers much more of the diversity involved in designing software. It's excellent, but might be more intimidating.
There's nothing like studying the masters: the code in Advanced Programming in the Unix Environment, 2nd Edition is phenomenal, well worth every hour of study.
And, of course, there's always experience, but that takes time to acquire. Learning lessons from your own mistakes tends to stick much stronger than learning lessons from other people's mistakes. So keep at it. :)
I'd suggest Structured Design by Yourdon and Constantine. An old book by computer standards (it has examples involving tapes!) but very sound on the problems you are having.
Here are two options that you could use to improve your situation:
For read-only structures, have functions that can control access to the data with a const pointer:
struct my_struct;
const struct my_struct *GetMyStruct(void);
Limit the exposure of a global structure by declaring it static. This way it will only have file scope:
static struct my_struct myStructInstance;
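Put together in one translation unit, the two ideas might look like this (a sketch; the file name and struct members are invented for illustration):

/* my_module.c (hypothetical file) */
struct my_struct {
    int    mode;
    double setpoint;
};

static struct my_struct myStructInstance;   /* file scope only; invisible to other .c files */

/* Callers elsewhere can read the data but cannot modify it through this pointer. */
const struct my_struct *GetMyStruct(void)
{
    return &myStructInstance;
}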
If your program is the sort of "small" project where global variables don't feel so bad, but you think you might need to integrate it into a larger project in the future, a very simple solution is to add a single context pointer argument to each function and store all your "global" variables in there. If you always name it the same thing, you can even do stuff like:
#define current_filename context->current_filename
#define option_flags context->option_flags
etc. and your code will look virtually identical to how it would have looked with globals, except that you'll be able to have multiple instances of it in a single program, integrate it into a library, and so on with minimal fuss. Just keep those defines in a private header used by your source modules, not the public interface header.
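A minimal sketch of that pattern (the header name and struct members below are just examples matching the defines above):

/* internal.h -- private header, hypothetical name */
struct context {
    const char  *current_filename;
    unsigned int option_flags;
};

#define current_filename context->current_filename
#define option_flags     context->option_flags

/* Every function takes the context as its first parameter. */
static void process_line(struct context *context, const char *line)
{
    (void)line;                      /* body elided for the sketch */
    if (option_flags & 0x1) {        /* expands to context->option_flags & 0x1 */
        /* ... */
    }
}

Two instances of the "program" can then coexist in one process, each with its own struct context, which a true global cannot offer.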
@PeterK The problem is that a structure is always presented in C books as a container that can be declared/passed many times to different functions, and that is what confused me; I never thought to use it as a simple one-instance global container (which may make my code more readable).
I am writing 3 phase motor control application to control 1 motor.
Based on what you all wrote, please check whether my current ideas for solving the problem are right:
Pack related global information into structures according to function, e.g. sInverterState, sButtonsState, sInverterParameters, etc.
If I write a menu UI I can use static variables in the C file and not worry about passing structs, since I have only one LCD. I don't want to make it look like GTK++.
Writing reentrant code is not for me yet, and it's overkill for this purpose.
Get a proper education in the IT field.
I may end up with lots of globals but at least they are nicely packed and readable.
I am writing an academic project about extremely long functions in the Linux kernel.
For that purpose, I am looking for examples of real-life functions that are extremely long (a few hundred lines of code) that you don't consider bad programming (i.e., they would not benefit from decomposition or the use of a dispatch table).
Have you ever written or seen such a code? Can you post or link to it, and give explanation of why is it so long?
I have been getting amazing help from the community here - any idea that will be taken into the project will be properly credited.
Thanks,
Udi
The longest functions that I have ever written all have one thing in common: a very large switch statement. There are times when you have to switch on a long list of items, and it would only make things harder to understand if you tried to refactor some of the options into separate functions. Having large switch statements makes the cyclomatic complexity go through the roof, but it is often better than the alternative implementations.
It was the last one before I got fired.
A previous job: An extremely long case statement, IIRC 1000+ lines. This was long before objects. Each option was only a few lines long. Breaking it up would have made it less clear. There were actually a pair of such routines doing different things to the same underlying set of data types.
Sorry, I don't have the code anymore and it isn't mine to post, anyway.
The longest function that I didn't see as being horrible was the key method of a custom CPU VM. As with @epotter, this involved a big switch statement. In fact, I'd say a lot of the methods that I find resist being cleanly broken down or improved in readability involve switch statements.
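The shape of such a dispatch function is roughly this (a toy sketch, not any real VM; a real one would have hundreds of cases, which is where the length comes from):

#include <stdio.h>
#include <stddef.h>

enum opcode { OP_PUSH, OP_ADD, OP_PRINT, OP_HALT };

static void run(const int *code)
{
    int stack[64];
    int sp = 0;
    for (size_t pc = 0; ; pc++) {
        switch (code[pc]) {
        case OP_PUSH:  stack[sp++] = code[++pc];         break;  /* push the following operand */
        case OP_ADD:   sp--; stack[sp - 1] += stack[sp]; break;  /* pop two, push their sum */
        case OP_PRINT: printf("%d\n", stack[sp - 1]);    break;
        case OP_HALT:  return;
        }
    }
}

/* run((const int[]){ OP_PUSH, 2, OP_PUSH, 3, OP_ADD, OP_PRINT, OP_HALT });  prints 5 */

Each case is only a line or two, so extracting them into separate functions would scatter the logic without making it any clearer.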
Unfortunately, you won't often find this type of subroutine checked in or posted somewhere if it's auto-generated during a build step using some sort of code generator.
So look for projects that have C generated from another language.
Besides the performance aspect, I think the size of the call stack in kernel space is 8K (please verify the size). Also, as far as I know, code in the kernel is fairly specific. If some code is unlikely to be re-used in the future, why bother making it a function, considering the function-call overhead?
I could imagine that when speed is important (such as when holding some sort of lock in the kernel) you would not want to break up a function because of the overhead of making a function call. When compiled, parameters have to be pushed onto the stack and data has to be popped off before returning. Therefore you may have a large function for efficiency reasons.