I'm following a guide to learn curses, and all of the C code within prototypes its functions before main(), then defines them afterward. In my C++ studies I had heard about function prototyping but never done it, and as far as I know it doesn't make much of a difference to how the code is compiled. Is it a programmer's personal choice more than anything else? If so, why was it included in C at all?
Function prototyping originally wasn't included in C. When you called a function, the compiler just took your word that it would exist, and inferred the argument types from whatever you passed. If you got the argument order, number, or type wrong, too bad – your code would fail, possibly in mysterious ways, at run time.
Later versions of C added function prototyping to address these problems. With a prototype in scope, your arguments are implicitly converted to the declared types (under some circumstances) or flagged as incompatible with the prototype, and the compiler can flag the wrong order or number of arguments as an error. This had the side effect of enabling varargs functions and the special argument handling they require.
Note that, in C (and unlike in C++), a function declared foo_t func() is not the same as a function declared as foo_t func(void). The latter is prototyped to have no arguments. The former declares a function without a prototype.
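A minimal sketch of the difference, using int in place of foo_t:

int f();       /* no prototype: arguments unspecified, so f(1, 2.0) compiles quietly */
int g(void);   /* prototype: takes no arguments, so g(1) is a compile-time error */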
In C, prototyping is needed so that your program knows you have a function called x() before you've gotten to defining it; that way y() knows that an x() exists. The short answer is that C compiles top-down, so a function needs to be declared before it is used:
void x(void);
void y(void);

int main(void)
{
}

void y(void)
{
    x();
}

void x(void)
{
    /* ... more code ... */
    /* maybe even y(); */
}
I was under the impression that it was so customers could have access to the .h file for libraries and see what functions were available to them, without having to see the implementation (which would be in another file).
It's also useful to see what a function returns and what parameters it takes.
Function prototyping is a remnant from the olden days of compiler writing. It used to be considered horribly inefficient for a compiler to have to make multiple passes over a source file to compile it.
In C, in certain contexts, referring to a function in one manner is syntactically equivalent to referring to a variable: consider taking a pointer to a function versus taking a pointer to a variable. In the compiler's intermediate representation, the two are semantically distinct, but syntactically, whether an identifier is a variable, a function name, or an invalid identifier cannot be determined from the context.
Since it's not determinable from the context, without function prototypes, the compiler would need to make an extra pass over each one of your source files each time one of them compiles. This would add an extra O(n) factor for any compilation (that is, if compilation were O(m), it would now be O(m*n)), where n is the number of files in your project. In large projects, where compilation is already on the order of hours, having a two-pass compiler is highly undesirable.
Forward declaring all your functions would allow the compiler to build a table of functions as it scanned the file, and be able to determine when it encountered an identifier whether it referred to a function or a variable.
As a result of this, C (and by extension, C++) compilers can be extremely efficient in compilation.
It allows situations in which, say, you have an iterator class defined in a separate .h file that includes the parent container's header. Since the iterator's header includes the parent's, the parent can't have a method like, say, getIterator(): the return type would have to be the iterator class, which would require including the iterator header inside the parent header, creating a cyclic loop of inclusions (one includes the other, which includes the first, which includes the other again, etc.).

If you put a forward declaration of the iterator class inside the parent container's header, you can have such a method without including the iterator header. It works only because you're simply saying that such a type exists and will be defined.

There are ways of getting around it, like having a precompiled header, but in my opinion that's less elegant and comes with a slew of disadvantages. Of course this is C++, not C. However, in practice you might have a situation in which you'd like to arrange code in this fashion, classes aside.
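The same pattern is available in C with forward-declared structs; here is a minimal sketch (the names are hypothetical):

/* container.h */
struct iterator;                       /* forward declaration: the type exists,
                                          defined elsewhere */

struct container {
    int *items;
    int  count;
};

struct iterator *container_begin(struct container *c);   /* legal: returns a
                                                             pointer to an
                                                             incomplete type */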
Related
In C, when a function is declared like void main();, passing an argument to it (as the first and only argument) doesn't cause a compilation error; to prevent this, the function can be declared like void main(void);. By the way, I think this also applies to Objective-C, but not to C++. (With Objective-C I am referring to functions outside classes.) Why is this? I imagine it's something like how, in Fortran, variables whose names start with i, j, k, l, m, or n are implicitly of integer type (unless you add an implicit none).

Edit: Does Objective-C allow this for greater compatibility with C, or for a reason similar to C's reason for having it?
Note: I've kept the mistake in the question so that answers and comments wouldn't need to be changed.
Another note: As pointed out by @Steve Summit and @matt (here), Objective-C is a strict superset of C, which means that all C code is also valid Objective-C code and thus has to show this behavior regarding functions.
Because function prototypes were not a part of pre-standard C, functions could be declared only with empty parentheses:
extern double sin();
All existing code used that sort of notation. The standard would have failed had such code been made invalid, or made to mean “zero arguments”.
So, in standard C, a function declaration like that means “takes an undefined list of zero or more arguments”. The standard does specify that all functions with a variable argument list must have a prototype in scope, and the prototype will end with , ...). So, a function declared with an empty argument list is not a variadic function (whereas printf() is variadic).
Because the compiler is not told about the number and types of the arguments, it cannot complain when the function is called, regardless of the arguments in the call.
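For instance, given the old-style declaration, nonsense calls are accepted. A sketch; a modern compiler may still warn about sin specifically because it knows the standard library, but the language requires no diagnostic here:

int main(void)
{
    extern double sin();              /* arguments unspecified */
    double d = sin("not a number");   /* accepted by the compiler; undefined
                                         behavior at run time */
    (void)d;
    return 0;
}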
In early (pre-ANSI) C, a correct match of function arguments between a function's definition and its calls was not checked by the compiler.
I believe this was done for two reasons:
It made the compiler considerably simpler
C was always designed for separate compilation, and checking consistency across translation units (that is, across multiple source files) is a much harder problem.
So, in those early days, making sure that a function's call(s) matched its definition was the responsibility of the programmer, or of a separate program, lint.
The lax checking of function arguments also made varargs functions like printf possible.
At any rate, in the original C, when you wrote
extern int f();
you were not saying "f is a function accepting no arguments and returning int". You were simply saying "f is a function returning int". You weren't saying anything about the arguments.
Basically, early C's type system didn't even have a way of recording the parameters expected by a function. And that was especially true when separate compilation came into play, because the linker resolved external symbols based pretty much on their names only.
C++ changed this, of course, by introducing function prototypes. In C++, when you say extern int f();, you are declaring a function that explicitly takes 0 arguments. (Also a scheme of "name mangling" was devised, which among other things let the linker do some consistency checking at link time.)
Now, this was all somewhat of a deficiency in old C, and the biggest change that ANSI C introduced was to adopt C++'s function prototype notation into C. It was slightly different, though: to maintain compatibility, in C saying extern int f(); had to be interpreted as meaning "function returning int and taking unspecified arguments". If you wanted to explicitly say that a function took no arguments, you had to (and still have to) say extern int f(void);.
There was also a new ... notation to explicitly mark a function as taking variable arguments, like printf, and the process of getting rid of "implicit int" in declarations was begun.
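For example, printf's declaration in <stdio.h> is such a prototype, ending in the ... notation:

int printf(const char *format, ...);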
All in all it was a significant improvement, although there are still a few holes. In particular, there's still some responsibility placed on the programmer, namely to ensure that accurate function prototypes are always in scope, so that the compiler can check them. See also this question.
Two additional notes: You asked about Objective C, but I don't know anything about that language, so I can't address that point. And you said that for a function without a prototype, "trying to input an argument to it (as the first and the only argument) doesn't cause a compilation error", but in fact, you can pass any number of arguments to such a function, without error.
I'm a beginner at linking, sorry if my questions are too basic. Let's say I have two .c files.
file1.c is
int main(int argc, char *argv[])
{
    int a = function2();
    return 0;
}
file2.c is
int function2()
{
    return 2018;
}
I know the norm is to create a file2.h and include it in file1.c. I have some questions:
Q1. #include-ing file2.h in file1.c doesn't seem to make much difference or improve much for me. I can still compile file1.c correctly without file2.h; the compiler just warns me about the implicit declaration of function 'function2'. But does this warning help a lot? Programmers might know that function2 is defined in another .c file (if you use function2 but don't define it, you certainly know the definition is somewhere else), and the linker will do its job to produce the final executable. So the only purpose of including file2.h, to me, is to suppress warnings during compilation. Is my understanding correct?

Q2. Imagine this scenario: a programmer defines function2 in file1.c, and he doesn't know that his function2 conflicts with the one in file2.c until the linker throws an error (obviously he can compile his file1.c alone correctly). But if we want him to learn of his mistake when he compiles file1.c, adding file2.h still doesn't help, so what's the purpose of adding the header file?

Q3. What should we add to let the programmer know he should choose a different name for function2, rather than being informed of the error by the linker in the final stage?
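For reference, the conventional file2.h for the files above would be just a few lines; a sketch with the usual include guard:

/* file2.h */
#ifndef FILE2_H
#define FILE2_H

int function2(void);   /* prototype for the definition in file2.c */

#endif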
Per C89 3.3.2.2 Function calls (emphasis mine):
If the expression that precedes the parenthesized argument list in a function call consists solely of an identifier, and if no declaration is visible for this identifier, the identifier is implicitly declared exactly as if, in the innermost block containing the function call, the declaration
extern int identifier();
appeared
Now, remember, an empty parameter list (nothing inside the parentheses) declares a function that takes an unspecified number and type of arguments. Put void inside the parentheses to declare that a function takes no arguments, as in int func(void).
Q1:
does this warning help a lot?
Yes and no. This is a subjective question: it helps those who use it. As a personal note, always make this warning an error. With the gcc compiler, use -Werror=implicit-function-declaration. But you can also ignore the warning and still build the simplest main() { printf("hello world!\n"); } program.
the linker will do its job to produce the final executable file? so the only purpose of including file2.h, to me, is to suppress warnings during compilation, is my understanding correct
No. If the function is called through an incompatible type, that invokes undefined behavior. If the function is declared as void (*function2(void))(int a); then calling ((int(*)())function2)() is UB, as is calling function2() without a previous declaration. Per Annex J.2 (informative):
The behavior is undefined in the following circumstances:
A pointer is used to call a function whose type is not compatible with the pointed-to type (6.3.2.3).
and per C11 6.3.2.3p8:
A pointer to a function of one type may be converted to a pointer to a function of another type and back again; the result shall compare equal to the original pointer. If a converted pointer is used to call a function whose type is not compatible with the referenced type, the behavior is undefined.
So in your case you are lucky: int function2() indeed works, because the implicitly declared type happens to be compatible with the definition. Calling atoi() without a declaration also happens to work, since atoi() really does return int. But calling atol() that way invokes undefined behavior, because atol() returns long.
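A sketch of that point, assuming C89's implicit-declaration rules and no #include <stdlib.h> anywhere in the file:

int main(void)
{
    int  i = atoi("42");   /* happens to work: atoi really returns int */
    long l = atol("42");   /* undefined behavior: atol returns long, but the
                              implicit declaration says int */
    (void)i; (void)l;
    return 0;
}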
Q2:
the linker throws the error
This should happen, but it is really linker dependent. If you compile all sources in a single invocation of the gcc compiler, it will throw an error. But if you create static libraries and then link them with gcc without -Wl,--whole-archive, it will pick the first definition it sees; see this thread.
what's the purpose of adding header file?
I guess simplicity and order. It is a convenient, standard way to share data structures (enums, structs, typedefs) and declarations (function and variable types) between developers and libraries, and to share preprocessor directives. Imagine you are writing a big library with 1000+ files that will work with 100+ other libraries. At the beginning of each file, would you write struct mydata_s { int member1; int member2; ... }; int printf(const char*, ...); int scanf(const char *, ...); and so on, or just #include "mydata.h" and #include <stdio.h>? If you needed to change the mydata_s structure, you would have to change all the files in your project, and all the other developers who use your library would need to change the definition too. I don't say you can't do it, but it would be much more work, and no one would use your library.
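A sketch of what such a shared header might look like, using the hypothetical mydata.h from the example above:

/* mydata.h */
#ifndef MYDATA_H
#define MYDATA_H

struct mydata_s {
    int member1;
    int member2;
};

int mydata_sum(const struct mydata_s *d);   /* hypothetical shared function */

#endif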
Q3:
What should we add to let the programmer know he should choose a different name for function2 rather then be informed the error by linker in the final stage.
In case of a name clash you will (hopefully) be informed by the linker that it found two symbols with the same name. To catch it earlier you would need to create a tool that checks your sources for exactly that. I don't see the need for one: the linker is specifically made to resolve symbols, so it naturally handles the case where two symbols with the same identifier exist.
Short answer:
Take-away: the earlier the compiler alerts, the better.
Q1: meaning of .h: consistency and early alerts. Alerting early on common ways of going wrong improves the reliability of code and adds up to less debugging and fewer production crashes.
Q2: clashing names bring early alerts to developers, and early problems are usually easier to fix.
Q3: Early duplicate definition alerts are not baked into the C standard.
Exercises:
1. Define a function in one file that does printf("%d\n", i) on an int argument i, then call that function from another file with a float of 42.0.
2. Call with (double)42.0.
3. Define a function with a char *str argument printed with %s, then call it with an int argument.
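A sketch of exercise 1 (file and function names are hypothetical). Without a prototype in scope, the float argument undergoes the default argument promotions and is passed as double, while the definition reads an int: the printed value is garbage and the behavior is undefined.

/* print_int.c */
#include <stdio.h>
void print_int(int i) { printf("%d\n", i); }

/* caller.c */
void print_int();          /* empty parentheses: argument types unchecked */

int main(void)
{
    print_int(42.0f);      /* passed as double, read as int */
    return 0;
}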
Longer answers:
Popular convention: in typical use, the name of the .h file is derived from the .c file, or files, it is associated with: file.h and file.c. For .h files with many definitions, say string.h, derive the file name from a higher-level perspective of what's within (as in the str... functions).
My big rule: it's always better to structure your code so compilers can alert on bugs immediately at compile time, rather than letting them slide through to debug or run time, where finding them depends on the code actually running in just the right way. Run-time errors can be very difficult to diagnose, especially if they hit long after the program is in production; they are expensive to maintain and hurt your customers' experience. See also "yoda notation".
Q1: meaning of .h: consistency, early alerts, and improved code reliability.
C .h files allow developers of .c files compiled at different times to share common declarations. No duplicated code. .h files also allow functions to be called consistently from all files while catching improper argument signatures (argument counts, type clashes, etc.). Having the .c files that define functions also #include the .h file helps assure that the arguments in each definition are consistent with the calls; this may sound elementary, but without it all the human errors of signature clashes can sneak through.
Omitting .h files only works if the argument signatures of all callers perfectly match those in the definitions. This is often not the case, so without .h files any clashing signatures would produce bad values, unless you also kept parallel externs in the calling files (bad bad bad). Mismatches like int vs. float can produce spectacularly wrong argument values. Bad pointers can produce segmentation faults and other total crashes.
Advantage: with externs in .h files, compilers can correctly convert mismatched arguments to the declared types, assuring better calls. While you can still botch arguments, it's much less likely. It also helps avoid cases where a mismatch works on one implementation but not another.
Implicit-declaration warnings are hugely helpful to me, as they usually indicate I've forgotten a .h file or spelled an external name wrong.
Q2: Clashing Names. Early alerts.
Clashing names are bad, and it is the developer's responsibility to avoid problems. C++ addresses the issue with namespaces, which C, being a lower-level language, does not have.
Use of .h files lets compiler diagnostics alert developers to clashes early in the game. If compiler diagnostics don't catch them, hopefully linkers will, with multiply-defined-symbol errors, but this is not guaranteed by the standard.
A common way to fake namespaces is to start all potentially clashing definitions in a .h with some prefix (extern int filex_function1(int arg, char *string) or #define FILEX_DEF 42).
What to do if two different external libraries being used share the same names is beyond the scope of this answer.
Q3: early duplicate alerts. Sorry… early alerts are implementation dependent.
This would be difficult for the C standard to define. As C is an old language, there are many creatively different ways in which C programs are written and stored.
Hunting for clashing names before using them is up to the developer. Tools like cross-reference programs can help. Even something as simple as ctags with vim or emacs can help.
You misunderstand the usage of header files and function prototypes.
Header files are needed to share common information between multiple code files. Such information includes macro definitions, data types, and, possibly, function prototypes.
Function prototypes are needed for the compiler to correctly handle return data types and to give you early warnings about misuse of function return types and arguments.
Function prototypes can be declared in header files or in the files which use them (more typing).
You have a very simple example with just two files. Now imagine a project with hundreds of files and thousands of functions; you would be lost in linker errors.
C allows you to use an undeclared function for legacy reasons. In that situation it assumes the function has a return type of int. However, modern data types have a bigger variety than in the early days: a function can return pointers, 64-bit data, structures. To express that, you must use prototypes, or nothing will work; the compiler has to know how to handle function returns correctly.
Also, prototypes let the compiler warn you about incorrect use of argument types. Due to legacy, those are still warnings in C, but they were addressed early in C++ and converted to errors.
Those warnings give you early debugging capabilities. Type mismatch warnings can save you days of debugging in some cases.
So, in your example you do not need the header file: you can prototype the function in the 'main' file using the 'extern' syntax. You can even do without prototyping. However, in the modern programming world you cannot allow the latter, in particular when you work in a team or want your program to be maintainable.
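That is, file1.c from the question could carry its own declaration; a sketch (for functions, the extern keyword is implied and may be omitted):

/* file1.c */
extern int function2(void);   /* local prototype instead of a header */

int main(int argc, char *argv[])
{
    int a = function2();
    return 0;
}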
It is a good idea to store your function prototypes in header files: they become a good source of documentation, in particular with good comments. By the way, function names must make sense to be maintainable.
Q1. Yes. C is a low-level language, and was historically used to bind low-level constructs into higher-level concepts. For example, traditionally the label _end is at the last address in a program. The label is typeless, but you can declare it as any type that is convenient to you. A "properly typed" language would make this sort of abuse difficult.
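For instance, on many Unix-like systems the linker provides _end, and a C file may declare it with whatever type is convenient. A sketch; this is platform-specific behavior, not standard C:

#include <stdio.h>

extern char _end;   /* linker-defined symbol past the end of the program image */

int main(void)
{
    printf("program image ends at %p\n", (void *)&_end);
    return 0;
}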
Q2. By convention, both file1.c and file2.c would include file2.h; one as consumer, the other as producer. Following this simple idiom will catch declaration vs definition errors; although again, the "warning" is not necessarily enforced.
Q3. Many software organizations take a "warnings are errors" rule to socially control their programmers.
I've seen a couple of times that the prototype declaration of a function in the header was literally repeated in the c-file.
It is possible to declare a function more than one time in C - but what sense does it make? Is it just for better readability or is there some deeper insight which I am missing?
It is possible. It doesn't make any sense.
But it does not cause any harm either. You can declare a function as many times as you like, as long as each declaration is compatible with the others. So it is pointless to do so. As someone suggested, maybe it is a copy/paste error.
You can only have one function definition though, which should always be in the c file.
This is how you should do it:
Function declarations that are part of the caller's interface should be in the .h file, and there only.
Function declarations of local (private) functions that are only available from inside the .c file itself should be in the .c file, and there only. Such functions should be declared and defined as static, as in the sketch below.
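A minimal sketch of that layout (mylib and the function names are hypothetical):

/* mylib.h — the caller's interface */
int mylib_process(int x);

/* mylib.c */
#include "mylib.h"

static int helper(int x);   /* private: declared and defined in this file only */

int mylib_process(int x)
{
    return helper(x) * 2;
}

static int helper(int x)
{
    return x + 1;
}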
Duplicate function declarations serve no useful semantic purpose, but they may arise for historical reasons, because of local coding conventions, or for some other reason.
For example, it may be a local coding convention that every function in each source file is prototyped at the beginning of that file. That has some minor practical utility, such as serving as a manifest of the functions defined in each file, and making the functions inside each file able to ignore any concern about whether other functions in the same file are declared in a header.
Additionally, multiple declarations of the same function or object don't necessarily have to be identical; they only need to be compatible. Under some circumstances it may make sense to provide a less specific declaration in a header and a more specific one in the source file containing the function's definition (which itself serves as yet another declaration).
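For example, these two declarations are compatible without being identical: parameter names may differ or be omitted, and an array parameter is adjusted to a pointer:

/* in the header: less specific */
void fill(int arr[], int n);

/* in the .c file: compatible, not identical */
void fill(int *values, int count)
{
    for (int i = 0; i < count; i++)
        values[i] = 0;
}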
I know that it's poor practice not to include function prototypes, but if you don't, the compiler will infer a prototype based on what you pass into the function when you call it (according to this answer). My question is: why does the compiler infer the prototype from what you pass into the function, rather than from the definition of the function itself? I can imagine some kind of preprocessing step where all declared functions are identified and checked to see whether a prototype exists for each one; if one doesn't have a prototype, the first line of the function is copied and stuck under the existing prototypes. Why isn't this done?
Because the C compiler was designed as a single pass compiler, where any given file does not know about the other source files that make up the project.
Although compilers have gotten more sophisticated, and may do multiple passes, the general outline of the compilation process framework remains as it was in K&R's day:
Pre-process each source file (textual transformations only: #include, macro replacement, conditionals).
Compile the processed source into an object file.
Link the objects into an executable or library.
Inferring prototypes would have to happen in the first step, but the compiler does not know about the existence of any other objects which may contain the function definition at that time.
It might be possible to make a compiler which did what you suggest, but not without breaking the existing rules for how to infer prototypes. A change with such big consequences would make the language no longer C.
The major use for prototypes is to declare a function and inform the compiler about the number and type of arguments in cases where the definition is not visible. Since C was originally compiled single-pass, the definition is not visible when it occurs later in the translation unit, but the more important case from a modern perspective is when the definition is not visible at all, due to lying in a separate translation unit, possibly even in a library file that exists only in compiled form and where no information about the function's type is recorded.
I'm very new to C programming, but have limited experience with Python, Java, and Perl. I was just wondering what the advantage is of having the prototype of a function above main() and the definition of that function below main(), rather than just having the definition of said function above main(). From what I have read, the definition can act as a prototype as well.
Thanks in advance,
The use of a prototype above main() (within a single module) is mostly a matter of personal preference. Some people like to see main() at the bottom of the file; other people like to see it at the top. I had a professor in university who complained that I wrote my programs "upside-down" because I put main() at the bottom (and avoided having to write and maintain prototypes for everything).
There is one situation where a prototype may be required:
void b(void); // prototype (declaration) required here

void a(void)
{
    b();
}

void b(void)
{
    a();
}
In this mutually recursive situation, you need at least one of the prototypes to appear prior to the definition of the other function.
When someone else wants to use your library, you can give them the header (containing the prototypes), without giving them your code.
Example? Windows SDK.
If two functions call each other, you will need to have a prototype for at least one of them.
There is no performance impact* on the compiled code.
*Note: Well, there could be, I guess. If the compiler decides to put the method somewhere else in the binary, then you could theoretically see locality issues affecting the performance of your application. It's not something to normally worry about, though.
There's no advantage in splitting a function into a standalone prototype and a definition. In fact, there's a clear and explicit disadvantage to that approach, since it requires extra maintenance efforts to keep the prototype and the definition in perfect sync. For this reason, a reasonable approach would be to provide a separate prototype only when you have to. For example, with external functions declared in header files you have no other choice but to keep the prototype and definition separate.
As for functions internal to a single translation unit (the ones that we'd normally declare static) there's absolutely no reason to create a standalone prototype, since the definition of the function itself includes a prototype. Just define your functions in the natural order: low level functions first, high level functions last, and that's it. If your program consists of a single translation unit, the main function will end up at the very bottom.
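A sketch of that natural order in a single translation unit, with no standalone prototypes at all:

/* low-level function first */
static int square(int x)
{
    return x * x;
}

/* higher-level function next */
static int sum_of_squares(int a, int b)
{
    return square(a) + square(b);
}

/* main last */
int main(void)
{
    return sum_of_squares(3, 4) == 25 ? 0 : 1;
}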
The only case you'd have to declare a standalone prototype in this approach would be if some form of recursion was involved.
If you write your code in the correct order, i.e. main() last, then each function's signature appears in only one place in your code. This prevents having to change the code in two or more places when changes need to be made, which prevents errors and reduces maintenance work. The principle is called "single source of truth".