I've been following a couple of C tutorials, and there is a certain point I'm not sure I understand. Some examples show function prototypes without function definitions. The tutorials say that the code should compile OK, even though the examples won't run.
Is this correct? Should C programs with missing function definitions compile ok?
The source code will compile with declarations only, but if any of the functions are called then a linker error will occur if the functions are not defined somewhere.
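A minimal sketch of what that looks like (the file and function names here are invented for illustration): the translation unit below compiles on its own, but if nothing ever defines the function, the link step fails with an "undefined reference"-style error.

/* main.c */
int get_answer(void);          /* declaration (prototype) only - no body anywhere */

int main(void)
{
    return get_answer();       /* the compiler just emits a call to the symbol */
}

/* gcc -c main.c   -> succeeds and produces main.o                      */
/* gcc main.o      -> fails: undefined reference to `get_answer'        */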
Yes, this is correct. It is the feature that makes it possible to split a big program into multiple source files.
There's a big difference between function declarations and function definitions. To use a function, you have to declare it first, but you can only link the complete program if every function you use has been defined somewhere.
The C compilation process is a series of steps that feed one into another. In a typical compilation process, first the preprocessor runs, then the compiler generates assembly language for each source file, then the assembler turns that assembly language into machine code, and then the linker puts all the necessary pieces together. The compiler step typically won't finish unless you declare functions, but the compiler doesn't care about where the functions are actually implemented - it just generates assembly language code with holes where calls to the real functions can be placed. The linker fills in those holes with calls to the actual functions.
So you can declare a function but define it in a different file, which is what the tutorial was probably doing. However, you still have to define the function somewhere, or else you won't get a full executable binary.
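Here is a rough sketch of that split (the file names are hypothetical): main.c only sees the declaration, util.c holds the definition, and the linker stitches the two object files together.

/* util.c */
int add(int a, int b)
{
    return a + b;              /* the definition */
}

/* main.c */
int add(int a, int b);         /* the declaration - in practice usually kept in a header */

int main(void)
{
    return add(1, 2);          /* resolved against util.c's code at link time */
}

/* build: gcc -c util.c  &&  gcc -c main.c  &&  gcc util.o main.o -o program */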
Yes, there is something called linking. This is the process of resolving references to different symbols - variables, functions, etc. The compiler is happy even if it does not know anything about a function's definition. However, if the compiler knows the function's prototype, it can check whether the function is used correctly, so that mistakes get flagged early.
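As a small illustration of that early checking (the function name is made up): with a prototype in scope, passing the wrong kind of argument is diagnosed at compile time, long before linking.

double square_root(double x);      /* prototype; the definition lives elsewhere */

int main(void)
{
    const char *s = "not a number";
    return (int)square_root(s);    /* compile-time error: passing 'const char *'
                                      where 'double' is expected */
}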
Refer to Wikipedia or search the web to learn more about linking.
Related
When we include header files in C, we actually add the declarations of functions such as printf, scanf, etc. But how does the code for the function (the function definition) get added to the program?
That's done by the process of linking. Individually compiled translation units have a way of referring to dependent names symbolically, so your code would only say "call a function with name 'printf'", and it is the job of the linking procedure to look up those symbols in one of the provided object or library files.
The standard library is usually linked against your code implicitly, so you may not be aware of the fact that you are linking your code with pre-existing library code. You would definitely be aware of this if you used your own libraries.
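A rough sketch of what that means in practice: for the program below, the compiler only needs the declaration from <stdio.h>; the object file merely records an unresolved reference to the name printf, and the definition is supplied from the C standard library at link time (which gcc and clang pull in by default).

#include <stdio.h>             /* brings in the declaration of printf */

int main(void)
{
    printf("hello\n");         /* compiled into a call to an external symbol */
    return 0;
}

/* After "gcc -c hello.c", inspecting the object file on a typical ELF
   platform (e.g. with "nm hello.o") shows the unresolved symbol: U printf */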
Note that there is no standard for linking, so you cannot generally compile one file with one compiler and another file with a different compiler and then link them together. The problem is not just to agree on how names are represented, but also on how to generate code for function calls. There are however several "informal" calling conventions and name mangling rules on popular platforms that offer a degree of interoperability.
I made a C program. And I made a Go file with Go functions defined.
In the C program, I called the Go functions. Is Go called from C compiled or interpreted?
I made a C program. And I made a Go file with Go functions defined. In the C program, I called the Go functions
You made a Go program which calls C functions (the other way around is not yet possible). Then you're apparently calling Go functions from C again, which is a bit weird and doesn't make much sense most of the time. See https://stackoverflow.com/a/6147097/532430.
I'm going to assume you used gccgo to compile your program. Because if you used Go's gc then there wouldn't be any confusion about what language your program is written in.
Is Go called from C compiled or interpreted?
It's compiled. gccgo is a Go front-end for GCC. And GCC stands for GNU Compiler Collection.
It is always compiled. C never runs a function without compilation.
When you first call the Go function in your program, the compiler generates the necessary code for the function call, space for the function arguments, details about the argument types, and so on.
If everything is correct according to the standard, an object file is created, and further processes such as linking follow.
So you can't really reduce it to "Is Go called from C compiled or interpreted?" - it's a series of processes that work together to make your program run.
As the title says, I know what causes this error but I want to know why the compiler gives it in this circumstance.
E.g.:
main.c
void test(){
    test1();
}

void test1(){
    ...
}
Would give an implicit declaration warning, as the compiler reaches the call to test1() before it has read its declaration. I can see the obvious problems with this (not knowing the return type, etc.), but why can't the compiler do a simple first pass to collect all function declarations and then compile the code, eliminating these errors? It seems so simple to do, and I don't believe I've seen similar warnings in other languages.
Does anyone know if there is a specific purpose for this warning in this situation that I am overlooking?
I'd guess that since C is a pretty old language, dating back to 1972, this was intentional because of memory and speed constraints.
The way it's defined, the compiler has to make one scan of your file to know everything that's needed for compilation. Doing two passes would have been more expensive, and so this rule has survived to this day.
Also, as peoro noted, this rule makes a compiler writer's life easier. Not to mention that it makes an IDE's autocompletion easier as well.
So, a small annoyance for program writers means easing life for compiler writers and IDE makers, among others.
Oh, and your programs will compile faster. Not bad when you've got a multimillion code base on your hands.
That's the way C is defined.
There's a declare before use rule that forces you to declare symbols before using them.
It's mostly to make life easier for compilers.
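To make that concrete with the example from the question, the usual fix is simply to satisfy the declare-before-use rule yourself, either by reordering the definitions or by adding a forward declaration (prototype) at the top:

void test1(void);              /* forward declaration - normally kept in a header */

void test(void)
{
    test1();                   /* no implicit-declaration warning now */
}

void test1(void)
{
    /* ... */
}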
Short answer: Because C is ooooold. :-)
Long answer: The C compiler and linker are totally separate. You might be defining different functions across different source files and then linking them together. In this case, say that you were defining test1 in a separate library source file. The compiler wouldn't know about test1 until it has compiled that other file, and it compiles each file separately, so it can't know about test1 while it's compiling test. Therefore you have to tell it: 'yes, there really is a test1 defined elsewhere, and here is its signature'. That's why you usually include a header file (.h) in this one for any other source files whose functions you need to use.
It might not even seem so, but this approach also saves you time! Imagine you are compiling a project with thousands of files: in your scenario the compiler would first have to parse thousands of files only to then conclude "Oh, this function does not exist. Abort." The way it is actually implemented makes the compilation break as soon as it sees an undeclared function. This saves you time.
What is the difference between a compiler and a linker in C?
The compiler converts code written in a human-readable programming language into a machine code representation which is understood by your processor. This step creates object files.
Once this step is done by the compiler, another step is needed to create a working executable that can be invoked and run, that is, associate the function calls (for example) that your compiled code needs to invoke in order to work. For example, your code could call sprintf, which is a routine in the C standard library. Your code has nothing that does the actual service provided by sprintf, it just reports that it must be called, but the actual code resides somewhere in the common C library. To perform this (and many others) linkages, the linker must be invoked. After linking, you obtain the actual executable that can run.
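To sketch how the two steps play out with that sprintf example (build commands shown in comments; exact flags vary by toolchain): the compiler step only needs the declaration from <stdio.h>, and the linker step wires the call to the C standard library's code.

#include <stdio.h>             /* declaration of sprintf */

int main(void)
{
    char buf[32];
    sprintf(buf, "%d", 42);    /* compiled into a call to an external symbol */
    return 0;
}

/* step 1 (compiler): gcc -c demo.c        -> demo.o (object file)           */
/* step 2 (linker):   gcc demo.o -o demo   -> executable; sprintf is resolved
                      against the C standard library                         */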
A compiler generates object code files (machine language) from source code.
A linker combines these object code files into an executable.
Many IDEs invoke them in succession, so you never actually see the linker at work. Some languages/compilers do not have a distinct linker and linking is done by the compiler as part of its work.
In simple words: the linker comes into action whenever a '.obj' file needs to be linked with the library functions it uses, because the compiler doesn't understand what scanf or printf actually are. The compiler just converts the '.c' file into a '.obj' file (if there are no errors) without needing the implementations of the library functions we used. So to turn the '.obj' file into an '.exe' (executable) file we need the linker, because it resolves those library function calls.
I have a fair amount of practice with Java as a programming language, but I am completely new to C. I understand that a header file contains forward declarations for methods and variables. How is this different from an abstract class in Java?
The short answer:
Abstract classes are a concept of object-oriented programming. Header files are a necessity due to the way the C language is constructed. The two cannot really be compared.
The long answer:
To understand the header file, and the need for header files, you must understand the concepts of "declaration" and "definition". In C and C++, a declaration means that you declare that something exists somewhere, for example a function:
void Test(int i);
We have now declared, that somewhere in the program, there exists a function Test, that takes a single int parameter. When you have a definition, you define what it is:
void Test(int i)
{
    ...
}
Here we have defined what the function void Test(int) actually is.
Global variables are declared using the extern keyword:
extern int i;
They are defined without the extern keyword:
int i;
When you compile a C program, you compile each source file (.c file) into an .obj file. Definitions are compiled into the .obj file as actual code. When all these have been compiled, they are linked into the final executable. Therefore, a function should only be defined in one .c file; otherwise the same function ends up in the program more than once, and the linker will typically refuse with a 'multiple definition' error. It is similarly problematic if a global variable is defined in two files that are linked into the same executable: depending on the toolchain you either get that linker error, or half the code silently uses one instance and the other half uses the other.
But functions defined in one .c file cannot see functions defined in other .c files. So if from file1.c you need to access the function Test(int) defined in file2.c, you need to have a declaration of Test(int) present when compiling file1.c. When file1.c is compiled into file1.obj, the resulting .obj file will contain the information that it needs Test(int) to be defined somewhere. When the program is linked, the linker will see that file2.obj contains the function that file1.obj depends on.
If there is no .obj file containing the definition of this function, you will get a linker error, not a compiler error (linker errors are considerably more difficult to find and correct than compiler errors, because you get no filename and line number pointing at the problem).
So you use the header file to store declarations for the definitions stored in the corresponding source file.
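A small sketch of that pattern (the names are hypothetical): the header carries only declarations, the matching .c file carries the single definitions, and any file that needs them just includes the header.

/* counter.h */
#ifndef COUNTER_H
#define COUNTER_H

extern int counter;            /* declaration of a global variable */
void counter_increment(void);  /* declaration of a function        */

#endif

/* counter.c */
#include "counter.h"

int counter = 0;               /* the one and only definition */

void counter_increment(void)
{
    counter++;
}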
IMO it's mainly because many C programmers seem to think that Java programmers don't know how to program “for real”, e.g. handling pointers, memory and so on.
I would rather compare headers to Java interfaces, in the sense that they generally define how the API must be used.
Headers are basically just a way to avoid copy-pasting: the preprocessor simply includes the content of the header in the source file when it encounters an #include directive.
You put in a header every declaration that the user will commonly use.
Here are the answers:
Java has had a bad reputation among some hardcore C programmers mainly because they think:
it's "too easy" (no memory-management, segfaults)
"can't be used for serious work"
"just for the web" or,
"slow".
Java is hardly the easiest language in the world these days, compared to languages like Python, etc.
It is used in many desktop apps - applets aren't even used that often. Finally, Java will generally be slower than C, because it is not compiled ahead of time directly to machine code. Sometimes, though, extreme speed isn't needed. Anyway, the JVM isn't the slowest language VM ever.
When you're working in C, there aren't abstract classes.
All a header file does is contain code which is pasted into other files. The main reason you put it in a header file is so that it is at the top of the file - this way, you don't need to care where you put your functions in the actual implementation file.
While you can kind-of use OO concepts in C, it doesn't have built-in support for classes and similar fundamentals of OO. It is nigh-impossible to implement inheritance in plain C, so you can never really have OO, or abstract classes for that matter. I would suggest sticking to plain old structs.
If it makes it easier for you to learn, by all means think of them as abstract classes (with the implementation file being the inheriting class) - but IMHO it is a difficult mindset to use when working in a language without explicit support for those features.
I'm not sure if Java has them, but I think a closer analogue could be partial classes in C#.
If you forward declare something, you have to actually deliver and implement it somewhere, or the linker will complain when the program is put together. The header allows you to expose a "module"'s public API and make the declarations available (for type checking and so on) to other parts of the program.
Comprehensive reading: Learning C from Java. Recommended reading for developers who are coming from Java to C.
I think that there is much derision (mockery, laughter, contempt, ridicule) for Java simply because it's popular.
Abstract classes and interfaces specify a contract or a set of functions that can be invoked on an object of a certain type. Function prototypes in C only really do compile time type checking of function arguments/return values.
While your first question seems subjective to me, I will answer to the second one:
A header file contains the declarations which are then made available to other files via #inclusion by the preprocessor.
For instance, you will declare a function in a header, and you will implement it in a .c file. Other files will be able to use the function so long as they can see the declaration (by including the header file).
At linking time the linker will look among the object files, or the various libraries linked, for some object which provides the code for the function.
A typical pattern is: you distribute the header files for your library, and a dll (for instance) which contains the object code. Then in your application you include the header, and the compiler will be able to compile because it will find the declaration in the header. No need to provide the actual implementation of the code, which will be available for the linker through the dll.
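A rough sketch of that pattern (the library and function names are invented): your application only ever sees the vendor's header; the machine code for the function arrives at link time from the shipped library.

/* imagelib.h - distributed with the library */
int image_load(const char *path);   /* declaration only, no body here */

/* app.c - your application */
#include "imagelib.h"

int main(void)
{
    return image_load("photo.png"); /* compiles against the declaration; the
                                       linker resolves it against the vendor's
                                       .dll / .so / import library            */
}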
C programs run directly, while Java programs run inside the JVM, so a common belief is that Java programs are slow. Also, in Java you are shielded from some low-level constructs (pointers, direct memory access), memory management, etc...
In C, the declaration and definition of a function are separated. A declaration "declares" that there exists a function that, called with those arguments, returns something. A definition "defines" what the function actually does. The former is done in header files, the latter in the actual code. When you are compiling your code, you must use the header files to tell your compiler that such a function exists, and link in a binary that contains the compiled code for the function.
In Java, the binary code itself also contains the declaration of the functions, so it is enough for the compiler to look at the class files to get both the definition and declaration of the available functions.