I've recently started working on developing APIs written in C. I see some subroutines that expect 8 (eight) parameters, and to me it looks ugly and cumbersome to pass 8 parameters when calling such a subroutine. I was wondering whether a cleaner, more acceptable approach could be used.
If a number of arguments can be logically grouped together you may consider creating a structure containing them and simply pass that structure as an argument. For example instead of passing two coordinate values x and y you could pass a POINT structure instead.
But if such a grouping isn't applicable, then any number of arguments should be fine if you really need them, although it might be a sign that your function does a little too much and that you could spread work over more, but smaller functions.
Large numbers of arguments in a function call are usually showing up a design problem. There are ways of apparently reducing the number of parameters, by such means as creating structures which are passed instead of individual variables or having global variables. I'd recommend you DON'T do either of these things, and attend to the design. No quick or easy fix there, but the people who have to maintain the code will thank you for it.
Yes, 8 is almost certainly too much.
Here's some old-school software engineering terms for you. Cohesion and coupling. Cohesion is how well a subroutine holds together on its own, and coupling is how clean the interfaces between your routines are (or how self-sufficient your routines are).
With coupling, generally the looser the better. Interfacing only through parameters ("data coupling") is good low coupling, while using global variables ("common coupling") is very high coupling. When you have a high number of parameters, what is usually the case is that someone has tried to hide their common coupling with a thin veneer of data coupling. It's bad design with a paint job.
With cohesion, the higher (more cohesive) the better. Any routine that modifies eight different things is also quite likely to suffer from low cohesion. I'd have to see the code to know for sure, but I'd be willing to bet that it would be very difficult to clearly explain what that routine does in a short sentence. Sight unseen, I'd guess it is temporally cohesive (just a bunch of stuff that needs to be done at roughly the same time).
8 could be a proper number, or it could be that many of those 8 should belong to a proper class as members; then you could pass a single instance of the class. Hard to tell just from this kind of high-level discussion.
Edit: in C, structures would play the role of classes in this case.
I suggest using structures too.
Maybe you want to rethink your APIs design.
Remember that your APIs will be used by developers, and it would be hard to use an 8 parameter function call.
One pattern used by some APIs (like pthreads) is "attribute objects" that are passed into functions instead of a bunch of discrete arguments. These attribute objects are opaque structures, with functions for creating, destroying, modifying and querying them. All in all this takes more code than simply dumping 10 arguments to a function, but it's a more robust approach. Sometimes a bit of extra code that can make your code much more understandable is worth the effort.
Once again, for a good example of this pattern, see the pthreads API. I wrote a lengthy article on the design of the pthreads API a couple of months ago, and this is one of the aspects I addressed in it.
It also seems interesting to consider this question not from the standpoint of ugliness but from the standpoint of performance.
I know that there are some x86 calling conventions that can use registers for passing the first two arguments and the stack for all the others. So I think that if one uses this type of calling convention, and always uses a pointer to a structure whenever a function needs more than two parameters overall, the function call might be faster. On Itanium, registers are always used for passing parameters to a function.
I think it might be worth testing.
If the API seems cumbersome with that many parameters, use a pointer to a structure to pass parameters that are related in some way.
I'm not going to judge your design without seeing it firsthand. There are legitimate functions that require a large number of parameters, for example:
Filters with a large number of polynomial coefficients.
Color space transforms with sub-pixel masks and LUT indices.
Geometric arithmetic for n-dimensional irregular polygons.
There are also some very poor designs that lead to large parameter prototypes. Share more information about your design if you seek more germane responses.
Another thing you can do is convert some of the parameters to state. OpenGL functions have fewer parameters because you have calls like:
glBindFramebuffer(...);
glVertexAttribPointer(...);
glBindTexture(...);
glBindBuffer(...);
glBindVertexArray(...);
// the actual draw call
glDrawArrays(...);
All of these glBind* calls represent "changing state", all of which will affect the next draw call. Imagine a draw call with 20-something arguments... absolutely unmanageable!
The old Windows C API for drawing also had state, stored inside "opaque pointer" objects (HDCs, HWNDs...). An opaque pointer is basically C's way of making private data members that you can't directly access. For example, in the Windows drawing API, you would create an HDC opaque pointer via CreateDC. You could set the DC's internal values via the SetDC* functions, for example SetDCBrushColor.
Now that you have a DC set up with a color and everything, you could use the Rectangle function to draw into the DC. You pass an HDC as the first parameter, which contains information about what color brush to use, etc. Rectangle then only takes 5 parameters: the HDC, x, y, width and height.
Related
I came to know that scanf is coupled and it is coupled because it takes different types of inputs like integer, float, char and others.
But cohesion indicates doing a single task and scanf does it(scanning input from stdin).
I agree that although it does the desired task of reading, it lacks cohesion in the form of reading different types of data.
But how does reading different types of data indicate it is coupled? I saw that coupling is a degree to which a component/module is connected to the other modules or independence of modules. How does reading different types of data make it dependent?
Can anyone explain how can we say a function is coupled or cohesive and is scanf coupled or cohesive?
As the word suggests, I would say that you can only discuss coupling if you have at least two elements involved.
Saying that scanf(), by itself, is coupled, does not make much sense to me.
Given two modules (two functions, two classes, ...) they may be more or less "coupled" depending on how much one depends on the other.
For example, they may share a global variable (or a file) so that if one alters it, the other is affected too. Or they must be called in a certain order (or they won't work).
Too tight coupling is a bad thing from a maintenance perspective: you may change something in one module and later discover that you introduced a bug in another module!
From this perspective I can't think of any function in the standard C library on which scanf() may depend. But even if there were one, it would be a problem for the standard library maintainers, not for programmers.
Cohesion, instead, refers to the fact that a module (again: a function, a class, ...) performs a single, identified task. The worst you can have is when you have a function that performs two (or more) unrelated tasks just because they can be performed at the same time. For example you have a function that computes the average of a set of numbers and cleans up the directory where you will store the results.
This is bad from a clarity point of view (and hence, you are increasing the chance of bugs) and from a reuse point of view (little chance you will ever call that function again in the program).
As far as I can tell, scanf() does a single job (reading a set of values from stdin according a pattern) and does it well.
I am responsible for designing the software architecture of an embedded system in C90 (which is dictated by the target hardware's compiler). It shall be easy to build against a couple of targets (traditional testing, Software-In-The-Loop, final hardware). Therefore I took a top-down approach, that is, designing for an interface:
Once I had defined the data flows of the system (inputs, outputs, ...), I created generic interfaces in the form of .h files that need to be implemented by the targets.
Therefore, and for the sake of the question, let them be two:
imeasures.h --> Measures needed by the algorithm
icomm.h --> Data flow to and from the algorithm to other devices
For the production target, suppose that all the measures but one (e.g. Engine Speed) are taken using ADCmeasures module, and the last mentioned one (Engine Speed) is provided by RS232comm module.
Question 1
Is it OK if imeasures.h is implemented using both ADCmeasures and RS232comm modules in the following form?
imeasures.h <--is implemented BY-- imeasuresImpl.c
imeasuresImpl.c --> calls functions from ADCmeasures.h and RS232comm.h
Therefore, switching targets would imply changing imeasuresImpl and the modules it calls.
Question 2
Due to the overhead the previous method may introduce (which could admittedly be mitigated using inline functions), I also thought about a (perhaps less elegant) form:
imeasures.h <-- is partially implemented by ADCmeasures.c
imeasures.h <-- is partially implemented by RS232comm.c
What pitfalls do you see? I can see that, for example, if imeasures.h consists of a single getter method that returns a struct, I would have to partially fill the struct in both of the partial implementations. Or, in turn, provide different getter methods, but then I would be deciding on a layout of the implementation beforehand, which would break the top-down principle.
Thank you for your attention.
First, some assumptions on the situation, the requirements
So I assume that through imeasures.h you would preferably get an interface with a single get function that returns a structure nicely populated with the freshest measurements. While that is possible, you may accept some other functions, like a run function that drives the processes necessary for the measurements, and an init function to initialize things (by "possible" I mean that there are ways, which I have sometimes explored, to get by without these latter two functions).
As you say, I assume you would like to keep the hardware interface as thin as possible, so you can more easily apply simulation for testing, or later have less to reimplement when porting to different hardware.
As the interface suggests, you would like to hide the split (that one of your measurements comes from RS232).
Solving with something like Q1, the architecture
Your take with Q1 seems to be an okay approach for laying down the architecture to meet these requirements. For Q2 I think "forget that", I can't conceive any reasonable solution which would appear like that.
My approach, just like your Q1, would require at least three implementation files.
On top would be an imeasures.c (I would stick to this name, since that's the usual way of doing these things, and there is no very good reason to do anything different here). This file would implement the whole imeasures.h interface, containing the logic for assembling the measurements and dispatching to the hardware-specific components. It would not contain anything hardware-specific itself.
An RS232comm.c (and .h) would realize the RS232 hardware interface. I would make this as generic as is reasonable within the necessities of meeting the requirements (for example, if it only needs to receive, I would only implement a receiver adequate for the project here). The goal is to have something that meets the project's requirements, yet, if needed, may be reused for other projects on the same (or similar) hardware.
An ADCcomm.c (and .h). Note that I did not name it ADCmeasures.c, for a good reason: I don't want anything specific to the actual measurements here. Just like above: something necessitated by the requirements, but generic enough that it might be reused.
Following this, you will likely get an imeasures.c that does not need to be altered in any way for simulation (it has no hardware-specific code), so it can also be tested in that testing environment. You also get useful little hardware-specific components that you can reuse for new projects (in my case this happened quite frequently, as electrical engineers would often iterate on the same piece of hardware for later projects).
Usually you shouldn't be concerned about overhead. Design first; optimize only where it is actually necessary. If you design well, you may well end up with an end product that performs better, simply because you don't have to battle with messy performance code (or "I thought it would perform better" code), which takes your time away from recognizing the real bottlenecks and from either discovering better algorithms or optimizing the parts that actually need it.
Well, hope it helps in getting across this!
When I read open source code (Linux C code), I see that a lot of functions are used instead of performing all operations in main(), for example:
int main(void) {
    function1();
    return 0;
}
void function1() {
    // do something
    function2();
}
void function2() {
    function3();
    // do something
    function4();
}
void function3() {
    // do something
}
void function4() {
    // do something
}
Could you tell me what are the pros and cons of using functions as much as possible?
easy to add/remove functions (or new operations)
readability of the code
source efficiency(?) as the variables in the functions will be destroyed (unless dynamic allocation is done)
would the nested function slow the code flow?
Easy to add/remove functions (or new operations)
Definitely - it's also easy to see where the context for an operation starts and finishes. It's much easier to see that way than from some arbitrary range of lines in the source.
Readability of the code
You can overdo it. There are cases where having a function or not does not make a difference in line count, but does in readability - and it depends on the person whether that's positive or not.
For example, if you did lots of set-bit operations, would you make:
some_variable = some_variable | (1 << bit_position);
a function? Would it help?
Source efficiency(?) due to the variables in the functions being destroyed (unless dynamic allocation is done)
If the source is reasonable (as in, you're not reusing variable names past their real context), then it shouldn't matter. The compiler should know exactly where a value's usage stops and where it can be ignored/destroyed.
Would the nested function slow the code flow?
In some cases where address aliasing cannot be properly determined it could. But it shouldn't matter in practice in most programs. By the time it starts to matter, you're probably going to be going through your application with a profiler and spotting problematic hotspots anyway.
Compilers are quite good these days at inlining functions though. You can trust them to do at least a decent job at getting rid of all cases where calling overhead is comparable to function length itself. (and many other cases)
This practice of using functions becomes really important as the amount of code you write increases. Separating code out into functions improves code hygiene and makes it easier to read. I read somewhere that there is little point to code that is readable only by you (though in some situations that is okay, I'm assuming). If you want your code to live on, it must be maintainable, and maintainability is created, in the simplest sense, by breaking code into functions. Also imagine a code base that exceeds 100k lines, which is quite common, with everything in the main function. That would be an absolute nightmare to maintain. Dividing the code into functions creates degrees of separation so that many developers can work on different parts of the code base. So basically the short answer is yes, it is good to use functions when necessary.
Functions should help you structure your code. The basic idea is that when you identify some place in the code which does something that can be described in a coherent, self-contained way, you should think about putting it into a function.
Pros:
Code reuse. If you perform some sequence of operations many times, why not write it once and use it many times?
Readability: it's much easier to understand strlen(st) than while (st[i++] != 0);
Correctness: the code in the previous line is actually buggy. If it is scattered around, you may probably not even see this bug, and if you will fix it in one place, the bug will stay somewhere else. But given this code inside a function named strlen, you will know what it should do, and you can fix it once.
Efficiency: sometimes, in certain situations, compilers may do a better job when compiling a code inside a function. You probably won't know it in advance, though.
Cons:
Splitting a code into functions just because it is A Good Thing is not a good idea. If you find it hard to give the function a good name (in your mother language, not only in C) it is suspicious. doThisAndThat() is probably two functions, not one. part1() is simply wrong.
Function call may cost you in execution time and stack memory. This is not as severe as it sounds, most of the time you should not care about it, but it's there.
When abused, it may lead to many functions doing partial work and delegating other parts from here to there. Too many arguments may impede readability too.
There are basically two types of functions: functions that perform a sequence of operations (these are called "procedures" in some contexts), and functions that perform some form of calculation. These two types are often mixed in a single function, but it helps to remember the distinction.
There is another distinction between kinds of functions: those that keep state (like strtok), those that may have side effects (like printf), and those that are "pure" (like sin). Functions like strtok are essentially a special kind of a different construct, called an object in object-oriented programming.
You should use functions that each perform one logical task, at a level of abstraction that makes the purpose of each function easy to verify logically. For instance:
void create_ui() {
create_window();
show_window();
}
void create_window() {
create_border();
create_menu_bar();
create_body();
}
void create_menu_bar() {
for(int i = 0; i < N_MENUS; i++) {
create_menu(menus[i]);
}
assemble_menus();
}
void create_menu(menu_t *menu) {
    ...
}
Now, as far as creating a UI is concerned, this isn't quite the way one would do it (you would probably want to pass in and return various components), but the logical structure is what I'm trying to emphasize. Break your task down into a few subtasks, and make each subtask its own function.
Don't try to avoid functions for optimization. If it's reasonable to do so, the compiler will inline them for you; if not, the overhead is still quite minimal. The gain in readability you get from this is a great deal more important than any speed you might get from putting everything in a monolithic function.
As for your title question, "as much as possible," no. Within reason, enough to see what each function does at a comfortable level of abstraction, no less and no more.
One condition you can use: if part of the code will be reused or rewritten, then put it in a function.
I guess I think of functions like legos. You have hundreds of small pieces that you can put together into a whole. As a result of all of those well designed generic, small pieces you can make anything. If you had a single lego that looked like an entire house you couldn't then use it to build a plane, or train. Similarly, one huge piece of code is not so useful.
Functions are the bricks you use when you design your project. Well-chosen separation of functionality into small, easily testable, self-contained functions makes building and looking after your whole project easy. Their benefits WAYYYYYYY outweigh any possible efficiency issues you may think are there.
To be honest, the art of coding any sizeable project is in how you break it down into smaller pieces, so functions are key to that.
I'm a relatively new C programmer, and I've noticed that many conventions from other higher-level OOP languages don't exactly hold true on C.
Is it okay to use short functions to have your coding stay organized (even though it will likely be called only once)? An example of this would be 10-15 lines in something like void init_file(void), then calling it first in main().
I would have to say, not only is it OK, but it's generally encouraged. Just don't overly fragment the train of thought by creating myriads of tiny functions. Try to ensure that each function performs a single cohesive, well... function, with a clean interface (too many parameters can be a hint that the function is performing work that is not sufficiently separate from its caller).
Furthermore, well-named functions can serve to replace comments that would otherwise be needed. As well as providing re-use, functions can also (or instead) provide a means to organize the code and break it down into smaller units which can be more readily understood. Using functions in this way is very much like creating packages and classes/modules, though at a more fine-grained level.
Yes. Please. Don't write long functions. Write short ones that do one thing and do it well. The fact that they may only be called once is fine. One benefit is that if you name your function well, you can avoid writing comments that will get out of sync with the code over time.
If I can take the liberty to do some quoting from Code Complete:
(These reason details have been abbreviated and in spots paraphrased, for the full explanation see the complete text.)
Valid Reasons to Create a Routine
Note the reasons overlap and are not intended to be independent of each other.
Reduce complexity - The single most important reason to create a routine is to reduce a program's complexity (hide away details so you don't need to think about them).
Introduce an intermediate, understandable abstraction - Putting a section of code into a well-named routine is one of the best ways to document its purpose.
Avoid duplicate code - The most popular reason for creating a routine. Saves space and is easier to maintain (only have to check and/or modify one place).
Hide sequences - It's a good idea to hide the order in which events happen to be processed.
Hide pointer operations - Pointer operations tend to be hard to read and error prone. Isolating them into routines shifts focus to the intent of the operation instead of the mechanics of pointer manipulation.
Improve portability - Use routines to isolate nonportable capabilities.
Simplify complicated boolean tests - Putting complicated boolean tests into a function makes the code more readable because the details of the test are out of the way and a descriptive function name summarizes the purpose of the tests.
Improve performance - You can optimize the code in one place instead of several.
To ensure all routines are small? - No. With so many good reasons for putting code into a routine, this one is unnecessary. (This is the one thrown into the list to make sure you are paying attention!)
And one final quote from the text (Chapter 7: High-Quality Routines)
One of the strongest mental blocks to creating effective routines is a reluctance to create a simple routine for a simple purpose. Constructing a whole routine to contain two or three lines of code might seem like overkill, but experience shows how helpful a good small routine can be.
If a group of statements can be thought of as a thing, then make them a function.
I think it is more than OK; I would recommend it! Short, easy-to-prove-correct functions with well-thought-out names lead to code that is more self-documenting than long, complex functions.
Any compiler worth using will be able to inline these calls to generate efficient code if needed.
Functions are absolutely necessary to stay organized. You need to first design the problem and then, depending on the functionality needed, split it into functions. A segment of code that is used multiple times probably needs to be written as a function.
First think about the problem at hand, break down its components, and for each component try writing a function. When writing a function, check whether some code segment does the same thing elsewhere; if so, break it into a sub-function. If there is a sub-module, it is also a candidate for another function. But at some point this breaking down has to stop, and where it stops depends on you. Generally, do not make too many overly big functions, nor too many overly small ones.
When constructing a function, please consider the design so as to have high cohesion and low coupling.
EDIT 1:
You might want to also consider separate modules. For example, if you need to use a stack or a queue for some application, make it a separate module whose functions can be called from other functions. This way you avoid re-coding commonly used modules by programming them as a group of functions stored separately.
Yes
I follow a few guidelines:
DRY (aka DIE)
Keep Cyclomatic Complexity low
Functions should fit in a Terminal window
Each one of these principles at some point will require that a function be broken up, although I suppose #2 could imply that two functions with straight-line code should be combined. It's somewhat more common to do what is called method extraction than actually splitting a function into a top and bottom half, because the usual reason is to extract common code to be called more than once.
#1 is quite useful as a decision aid. It's the same thing as saying, as I do, "never copy code".
#2 gives you a good reason to break up a function even if there is no repeated code. If the decision logic passes a certain complexity threshold, we break it up into more functions that make fewer decisions.
It is indeed a good practice to refactor code into functions, irrespective of the language being used. Even if your code is short, it will make it more readable.
If your function is quite short, you can consider inlining it.
IBM Publib article on inlining
I am implementing a call-graph program for C using a Perl script. I wonder how to resolve call graphs for function pointers using the output of objdump.
How do different call-graph applications resolve function pointers?
Are function pointers resolved at run time, or can they be resolved statically?
EDIT
How do call graphs resolve cycles in static evaluation of a program?
It is easy to build a call graph of A-calls-B when the call statement explicitly mentions B. It is much harder to handle indirect calls, as you've noticed.
Good static analysis tools form estimates of the contents of pointer variables by propagating pointer assignments/copies/arithmetic across program data flows (inter and intra-procedural ["global"]) using a variety of schemes, often conservative ("you get too much").
Without such an estimate, you cannot have any idea what a pointer contains and therefore simply cannot make a useful prediction (well, you can use the ultimate conservative estimate that it will go anywhere, but I think you've already rejected that solution).
Our DMS Software Reengineering Toolkit has static control-flow/dataflow/points-to/call-graph analysis that has been applied to huge systems (~25 million lines) of C code and produced such call graphs. The machinery to do this is pretty complex, but you can find it in advanced topics in the compiler literature. I doubt you want to implement this in Perl.
This is easier when you have source code, because you at least reliably know what is code, and what is not. You're trying to do this on object code, which means you can't even eliminate data.
Using function pointers is a way of choosing the actual function to call at runtime, so in general, it wouldn't be possible to know what would actually happen statically.
However, you could look at all the functions that are possible to call and perhaps show those in some way. Often the callbacks have a unique enough signature (though not always).
If you want to do better, you have to analyze the source code, to see which functions are assigned to pointers to begin with.