Why are the declaration of a predefined function and the code of that predefined function in different files? [duplicate] - c

Why does C++ have header files and .cpp files?

C++ compilation
A compilation in C++ is done in two major phases:
The first is the compilation of "source" text files into binary "object" files: each CPP file is compiled on its own, with no knowledge of the other CPP files (or even of libraries), unless declarations are fed to it directly or through header inclusion. Each CPP file is usually compiled into a .OBJ or a .O "object" file.
The second is the linking together of all the "object" files, and thus, the creation of the final binary file (either a library or an executable).
Where does the HPP fit into all this?
A poor lonesome CPP file...
The compilation of each CPP file is independent from all other CPP files, which means that if A.CPP needs a symbol defined in B.CPP, like:
// A.CPP
void doSomething()
{
    doSomethingElse(); // Defined in B.CPP
}

// B.CPP
void doSomethingElse()
{
    // Etc.
}
It won't compile because A.CPP has no way to know "doSomethingElse" exists... Unless there is a declaration in A.CPP, like:
// A.CPP
void doSomethingElse(); // From B.CPP

void doSomething()
{
    doSomethingElse(); // Defined in B.CPP
}
Then, if you have C.CPP which uses the same symbol, you copy/paste the declaration again...
COPY/PASTE ALERT!
Yes, there is a problem. Copy/pastes are dangerous and difficult to maintain. Which means it would be cool if we had some way to NOT copy/paste, and still declare the symbol... How can we do it? By including some text file, which is commonly suffixed by .h, .hxx, .h++ or, my preference for C++ files, .hpp:
// B.HPP (here, we decided to declare every symbol defined in B.CPP)
void doSomethingElse();

// A.CPP
#include "B.HPP"

void doSomething()
{
    doSomethingElse(); // Defined in B.CPP
}

// B.CPP
#include "B.HPP"

void doSomethingElse()
{
    // Etc.
}

// C.CPP
#include "B.HPP"

void doSomethingAgain()
{
    doSomethingElse(); // Defined in B.CPP
}
How does #include work?
Including a file will, in essence, parse it and then copy-paste its content into the CPP file.
For example, in the following code, with the A.HPP header:
// A.HPP
void someFunction();
void someOtherFunction();
... the source B.CPP:
// B.CPP
#include "A.HPP"

void doSomething()
{
    // Etc.
}
... will become after inclusion:
// B.CPP
void someFunction();
void someOtherFunction();

void doSomething()
{
    // Etc.
}
One small thing - why include B.HPP in B.CPP?
In the current case, this is not needed: B.HPP has the doSomethingElse function declaration, and B.CPP has the doSomethingElse function definition (which is, by itself, a declaration). But in a more general case, where B.HPP is used for declarations (and inline code), there could be no corresponding definition (for example, enums, plain structs, etc.), so the include would be needed if B.CPP uses those declarations from B.HPP. All in all, it is "good taste" for a source file to include its own header by default, not least because it lets the compiler check that the declarations and the definitions match.
Conclusion
The header file is thus necessary, because the C++ compiler is unable to search for symbol declarations on its own, so you must help it by including those declarations.
One last word: you should put header guards around the content of your HPP files, to be sure multiple inclusions won't break anything; but all in all, I believe the main reason for the existence of HPP files is explained above.
#ifndef B_HPP_
#define B_HPP_
// The declarations in the B.hpp file
#endif // B_HPP_
or even simpler (although not standard)
#pragma once
// The declarations in the B.hpp file

Well, the main reason would be for separating the interface from the implementation. The header declares "what" a class (or whatever is being implemented) will do, while the cpp file defines "how" it will perform those features.
This reduces dependencies so that code that uses the header doesn't necessarily need to know all the details of the implementation and any other classes/headers needed only for that. This will reduce compilation times and also the amount of recompilation needed when something in the implementation changes.
It's not perfect, and you would usually resort to techniques like the Pimpl Idiom to properly separate interface and implementation, but it's a good start.
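For a concrete picture, here is a minimal sketch of the Pimpl Idiom; the Widget and Impl names are invented for the example:

// widget.hpp -- clients see only the interface, no implementation details
#include <memory>

class Widget
{
public:
    Widget();
    ~Widget(); // must be defined in widget.cpp, where Impl is complete
    void draw();

private:
    struct Impl;                 // forward declaration only
    std::unique_ptr<Impl> pImpl; // implementation hidden behind a pointer
};

// widget.cpp -- free to change without forcing clients to recompile
#include "widget.hpp"

struct Widget::Impl
{
    int detail = 0; // private state clients never see
};

Widget::Widget() : pImpl(new Impl) {}
Widget::~Widget() = default;

void Widget::draw()
{
    ++pImpl->detail;
}

Because client code only ever sees the forward declaration of Impl, changing the private details in widget.cpp does not force every file that includes widget.hpp to recompile.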

Because C, where the concept originated, is 30 years old, and back then, it was the only viable way to link together code from multiple files.
Today, it's an awful hack which totally destroys compilation time in C++, causes countless needless dependencies (because class definitions in a header file expose too much information about the implementation), and so on.

Because in C++, the final executable code does not carry any symbol information; it's more or less pure machine code.
Thus, you need a way to describe the interface of a piece of code, that is separate from the code itself. This description is in the header file.

Because C++ inherited them from C. Unfortunately.

Because the people who designed the library format didn't want to "waste" space for rarely used information like C preprocessor macros and function declarations.
Since you need that info to tell your compiler "this function is available later when the linker is doing its job", they had to come up with a second file where this shared information could be stored.
Most languages designed after C/C++ either store this information in the compiler output (Java bytecode, for example), or don't use a precompiled format at all: they are always distributed in source form and compiled on the fly (Python, Perl).

It's the preprocessor way of declaring interfaces. You put the interface (method declarations) into the header file, and the implementation into the cpp. Applications using your library only need to know the interface, which they can access through #include.

Often you will want to have a definition of an interface without having to ship the entire code. For example, if you have a shared library, you would ship a header file with it which declares all the functions and symbols used in the shared library. Without header files, you would need to ship the source.
Within a single project, header files are used, IMHO, for at least two purposes:
Clarity. By keeping the interfaces separate from the implementation, it is easier to read the code.
Compile time. By using only the interface where possible, instead of the full implementation, the compile time can be reduced because the compiler can simply make a reference to the interface instead of having to parse the actual code (which, ideally, would only need to be done a single time). A sketch of this idea follows below.
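As a hedged illustration of the compile-time point, a header can often get by with a forward declaration instead of a full include, so that changes behind that declaration don't ripple outward; the Report, Database, and database.hpp names are invented for the example:

// report.hpp
class Database; // forward declaration instead of #include "database.hpp"

class Report
{
public:
    void fill(Database& db); // references and pointers don't need the full type
};

// report.cpp -- only this file needs Database's real definition
#include "report.hpp"
#include "database.hpp"

void Report::fill(Database& db)
{
    // ... query db and build the report ...
}

Files that include report.hpp never see database.hpp, so edits to Database's internals only force report.cpp to be rebuilt.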

Responding to MadKeithV's answer:
"This reduces dependencies so that code that uses the header doesn't necessarily need to know all the details of the implementation and any other classes/headers needed only for that. This will reduce compilation times, and also the amount of recompilation needed when something in the implementation changes."
Another reason is that a header gives a unique id to each class.
So if we have something like:
class A { /* ... */ };
class B : public A { /* ... */ };

// C.cpp, pulling the implementation files in directly:
#include "A.cpp"
#include "B.cpp"

class C {
    // ...
};

we will get errors when we try to build the project, since A is part of B and its definition gets pulled in twice. With headers (and their include guards) we would avoid this kind of headache...

Related

Benefits and drawbacks of making all functions in main.c static?

I have heard that, when you have just 1 (main.c) file (or use a "unity build"), there are benefits to be had if you make all your functions static.
I am kind of confused about why this (allegedly) isn't optimized by default, since it's unlikely that you will include main.c in another file and use one of its functions there.
I would like to know the benefits and dangers of doing this before implementing it.
Example:
main.c
static int my_func(void) { /* stuff */ }

int main(void)
{
    my_func();
    return 0;
}
You have various chunks of wisdom in the comments, assembled here into a Community Wiki answer.
Jonathan Leffler noted:
The primary benefit of static functions is that the compiler can (and will) aggressively inline them when it knows there is no other code that can call the function. I've had error messages from four levels of inlined function calls (three qualifying “inlined from” lines) on occasion. It's staggering what a compiler will do!
and:
FWIW: my rule of thumb is that every function should be static until it is known that it will be called from code in another file. When it is known that it will be used elsewhere, it should be declared in a header file that is included both where the function is defined and where it is used. (Similar rules apply to file scope variables — aka 'global variables'; they should be static until there's a proven need for them elsewhere, and then they should be declared in a header too.)
The main() function is always called from the startup code, so it is never static. Any function defined in the same file as an unconditionally compiled main() function cannot be reused by other programs. (Library code might contain a conditionally compiled test program for the library function(s) defined in the source file — most of my library code has #ifdef TEST / …test program… / #endif at the end.)
Eric Postpischil generalized on that:
General rule: Anytime you can write code that says the use of something is limited, do it. Value will not be modified? Make it const. Name only needs to be used in a certain section? Declare it in the innermost enclosing scope. Name does not need to be linked externally? Make it static. Every limitation both shrinks the window for a bug to be created and may remove complications that interfere with optimization.
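A small sketch of that rule in action; the names and numbers are invented for the example:

#include <stdio.h>

/* Name does not need to be linked externally? Make it static. */
static int times_called = 0;

/* Value will not be modified? Make it const. */
static int scaled(int x)
{
    const int factor = 3; /* never modified, so const */
    times_called++;
    return x * factor;
}

int main(void)
{
    /* Name only needs to be used in a certain section?
       Declare it in the innermost enclosing scope. */
    for (int i = 0; i < 4; i++)
    {
        const int y = scaled(i);
        printf("%d -> %d\n", i, y);
    }
    return 0;
}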

Why is multiple variable definition across different source files a problem, but multiple class definition across different sources is not

I am currently learning C++ (more precisely C++03) at uni, and I came across the initialization of static members. Non-constant static members should be declared inside the class, but defined outside. Moreover, they should be defined in source files, not header files. As far as I understand, that is because if you have a myClass.h header with a myClass class in it, and A.cpp and B.cpp which include it, then you can protect yourself from multiple definition inside the same source file with include guards, but you cannot protect against myClass.h being present once in A.cpp and once in B.cpp. If you define the static member inside myClass.h, but outside myClass, then after the preprocessing you will have copied the definition of the same thing into the global scope of both A.cpp and B.cpp. The linker will "see" the global scope of B.cpp from inside A.cpp and vice versa, so you will have multiple definitions available in a given context, which is a problem.
So my question is, if this is a problem, then how come the definition of the class myClass both in the global scope of A.cpp and B.cpp isn't one?
No, you misunderstand the reasons. static variables that are not constexpr need to be initialized only once, and that happens at runtime. Something is really wrong if the same variable is initialized twice at run-time... Thus they have to be defined in a .cpp so that the compiler/linker knows which library carries them.
Definitions of classes, on the other hand, are compile-time definitions, so the linker throws away all duplicates. (This is in fact very bad and results in poor compilation times and, at times, in ODR issues. C++20 modules were developed to resolve these issues.)
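A minimal sketch of the pattern under discussion; the class and member names are invented for the example:

// myClass.h
#ifndef MYCLASS_H
#define MYCLASS_H

class myClass
{
public:
    static int instanceCount; // declaration only: no storage allocated here
};

#endif

// myClass.cpp
#include "myClass.h"

int myClass::instanceCount = 0; // the one definition, in exactly one source file

A.cpp and B.cpp can then both include myClass.h safely: the header carries only the declaration, and the single definition lives in myClass.cpp.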
You've got some concepts wrong, which is all right and happens to everyone. You see, there's no hard and fast rule that the definitions of non-static methods are to be written inside the class. It's perfectly fine to write them outside the class; the difference is that you will then be able to mask their definitions if you want to, since header files need to be provided in cleartext. By contrast, if you don't write the functions in a .cpp file, you can't build a library and provide just the library, inherently masking the definitions. Also, it is usually recommended to put the definitions in the .cpp file, so that the class stays small in cleartext and the abstraction is better achieved. And please do read how to ask a good question on Stack Overflow.
RE: "If you define the static member inside myClass.h, but outside myClass, then after the preprocess you will have copied the definition of the same thing both inside the global scope of A.cpp and B.cpp" - please read about boilerplate code.
Hope this helps. :)
And if you don't understand anything, I'm always open for discussions; don't hesitate to ask.

Is it possible to have only a header file in C without a source file

I would like to write a C library with fast access by including just header files, without using a compiled library. For that I have included my code directly in my header file.
The header file contains:
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#ifndef INC_TEST_H_
#define INC_TEST_H_

void test(){
    printf("hello\n");
}

#endif
My program doesn't compile because I have multiple references to the function test(). If I had a proper source file to go with my header, it would work without error.
Is it possible to use only header file by including code inside in a C app?
Including code in a header is generally a really bad idea.
If you have file1.c and file2.c, and in each of them you include your coded.h, then at the link part of the compilation, there will be 2 test functions with global scope (one in file1.c and the other one in file2.c).
You can use the word "static" in order to say that the function will be restricted so it is only visible in the .c file which includes coded.h, but then again, it's a bad idea.
Last but not least: how do you intend to make a library without a .so/.a file? This is not a library; this is copy/paste code directly in your project.
And when a bug is found in your "library", you will be left with no solution apart from correcting your code, redistributing it to every project, and recompiling every project, missing the very point of a dynamic library: the ability to "just" correct the library without touching every program using it.
If I understand what you're asking correctly, you want to create a "library" which is strictly source code that gets #included as necessary, rather than compiled separately and linked.
As you have discovered, this is not easy when you're dealing with functions - the compiler complains of multiple definitions (you will have the same problem with object definitions).
You have a couple of options at this point.
You could declare the function static:
static void test( void )
{
    ...
}
The static keyword limits the function's visibility to the current translation unit, so you don't run into multiple definition errors at link time. It means that each translation unit is creating its own separate "instance" of the function, leading to a bit of code bloat and slightly longer build times. If you can live with that, this is the easiest solution.
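A minimal sketch of that approach; the file names are invented for the example:

/* coded.h -- header-only "library"; static gives each including
   file its own private copy of test */
#ifndef CODED_H
#define CODED_H

#include <stdio.h>

static void test(void)
{
    printf("hello\n");
}

#endif

/* file1.c */
#include "coded.h"
void f1(void) { test(); }

/* file2.c -- links fine alongside file1.c: no multiple definition,
   because each copy of test has internal linkage */
#include "coded.h"
void f2(void) { test(); }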
You could use a macro in place of a function:
#define TEST() (printf( "hello\n" ))
except that macros are not functions and do not behave like functions. While macro-based "libraries" do exist, they are not trivial to implement correctly and require quite a bit of thought. Remember that macro arguments are not evaluated, they're just expanded in place, which can lead to problems if you pass expressions with side effects. The classic example is:
#define SQUARE(x) ((x)*(x))
...
y = SQUARE(z++);
SQUARE(z++) expands to ((z++)*(z++)), which leads to undefined behavior.
Separate compilation is a Good Thing, and you should not try to avoid it. Doing everything in one source file is not scalable, and leads to maintenance headaches.
"My program doesn't compile because I have multiple references to the test() function"
That is because the .h file with the function is included and compiled in multiple C source files. As a result, the linker encounters the function with global scope multiple times.
You could have defined the function as static, which means it will have scope only for the current compilation unit, so:
static void test()
{
    printf("hello\n");
}

Function prototype in header file doesn't match definition, how to catch this?

(I found this question which is similar but not a duplicate:
How to check validity of header file in C programming language )
I have a function implementation, and a non-matching prototype (same name, different types) which is in a header file. The header file is included by a C file that uses the function, but is not included in the file that defines the function.
Here is a minimal test case:
header.h:
void foo(int bar);
File1.c:
#include "header.h"
int main (int argc, char * argv[])
{
int x = 1;
foo(x);
return 0;
}
File2.c:
#include <stdio.h>

typedef struct {
    int x;
    int y;
} t_struct;

void foo (t_struct *p_bar)
{
    printf("%x %x\n", p_bar->x, p_bar->y);
}
I can compile this with VS 2010 with no errors or warnings, but unsurprisingly it segfaults when I run it.
The compiler is fine with it (this I understand)
The linker did not catch it (this I was slightly surprised by)
The static analysis tool (Coverity) did not catch it (this I was very surprised by).
How can I catch these kinds of errors?
[Edit: I realise if I #include "header.h" in file2.c as well, the compiler will complain. But I have an enormous code base and it is not always possible or appropriate to guarantee that all headers where a function is prototyped are included in the implementation files.]
Have the same header file included in both file1.c and file2.c. This will pretty much prevent a conflicting prototype.
Otherwise, such a mistake cannot be detected by the compiler because the source code of the function is not visible to the compiler when it compiles file1.c. Rather, it can only trust the signature that has been given.
At least theoretically, the linker could be able to detect such a mismatch if additional metadata is stored in the object files, but I am not aware if this is practically possible.
Use -Werror-implicit-function-declaration, -Wmissing-prototypes, or the equivalent on one of your supported compilers. Then it will either error out or complain if a declaration does not precede the definition of a global.
Compiling the programs in some form of strict C99 mode should also generate these messages. GCC, ICC, and Clang all support this feature (not sure about MS's C compiler and its current status, as VS 2005 or 2008 was the latest I've used for C).
You may use the Frama-C static analysis platform available at http://frama-c.com.
On your examples you would get:
$ frama-c 1.c 2.c
[kernel] preprocessing with "gcc -C -E -I. 1.c"
[kernel] preprocessing with "gcc -C -E -I. 2.c"
[kernel] user error: Incompatible declaration for foo:
different type constructors: int vs. t_struct *
First declaration was at header.h:1
Current declaration is at 2.c:8
[kernel] Frama-C aborted: invalid user input.
Hope this helps!
Looks like this is not possible with a C compiler because of the way function names are mapped to symbolic object names (directly, without considering the actual signature).
But this is possible with C++, because it uses name mangling that depends on the function signature. So in C++, void foo(int) and void foo(t_struct*) will have different names at the linkage stage, and the linker will raise an error about the mismatch.
Of course, it will not be easy to switch a huge C codebase to C++ all at once. But you can use a relatively simple workaround - e.g. add a single .cpp file to your project and include all the C files into it (and actually generate it with some script).
Taking your example and VS2010 I added TestCpp.cpp to project:
#include "stdafx.h"
namespace xxx
{
#include "File1.c"
#include "File2.c"
}
The result is linker error LNK2019:
TestCpp.obj : error LNK2019: unresolved external symbol "void __cdecl xxx::foo(int)" (?foo@xxx@@YAXH@Z) referenced in function "int __cdecl xxx::main(int,char * * const)" (?main@xxx@@YAHHQAPAD@Z)
W:\TestProjects\GenericTest\Debug\GenericTest.exe : fatal error LNK1120: 1 unresolved externals
Of course, this will not be so easy for a huge codebase; there can be other problems leading to compilation errors that cannot be fixed without changing the codebase. You can partially mitigate that by protecting the .cpp file contents with a conditional #ifdef, and using it only for periodic checks rather than for regular builds.
Every (non-static) function defined in every foo.c file should have a prototype in the corresponding foo.h file, and foo.c should have #include "foo.h". (main is the only exception.) foo.h should not contain prototypes for any functions not defined in foo.c.
Every function should be prototyped exactly once.
You can have .h files with no corresponding .c files if they don't contain any prototypes. The only .c file without a corresponding .h file should be the one containing main.
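A minimal sketch of that convention; the file and function names are invented for the example:

/* foo.h -- prototypes only for functions defined in foo.c */
#ifndef FOO_H
#define FOO_H

int foo_twice(int value);

#endif

/* foo.c -- includes its own header so the compiler checks the match */
#include "foo.h"

int foo_twice(int value)
{
    return 2 * value;
}

/* bar.c -- any caller includes foo.h, never writes its own prototype */
#include "foo.h"

int bar(void)
{
    return foo_twice(21);
}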
You already know this, and your problem is that you have a huge code base where this rule has not been followed.
So how do you get from here to there? Here's how I'd probably do it.
Step 1 (requires a single pass over your code base):
For each file foo.c, create a file foo.h if it doesn't already exist. Add #include "foo.h" near the top of foo.c. If you have a convention for where .h and .c files should live (either in the same directory or in parallel include and src directories), follow it; if not, try to introduce such a convention.
For each function in foo.c, copy its prototype to foo.h if it's not already there. Use copy-and-paste to ensure that everything stays consistent. (Parameter names are optional in prototypes and mandatory in definitions; I suggest keeping the names in both places.)
Do a full build and fix any problems that show up.
This won't catch all your problems. You could still have multiple prototypes for some functions. But you'll have caught any cases where two headers have inconsistent prototypes for the same function and both headers are included in the same translation unit.
Once everything builds cleanly, you should have a system that's at least as correct as what you started with.
Step 2:
For each file foo.h, delete any prototypes for functions that aren't defined in foo.c.
Do a full build and fix any problems that show up. If bar.c calls a function that's defined in foo.c, then bar.c needs a #include "foo.h".
For both of these steps, the "fix any problems that show up" phase is likely to be long and tedious.
If you can't afford to do all this at once, you can probably do a lot of it incrementally. Start with one or a few .c files, clean up their .h files, and remove any extra prototypes declared elsewhere.
Any time you find a case where a call uses an incorrect prototype, try to figure out the circumstances in which that call is executed, and how it causes your application to misbehave. Create a bug report and add a test to your regression test suite (you have one, right?). You can demonstrate to management that the test now passes because of all the work you've done; you really weren't just messing around.
Automated tools that can parse C are likely to be useful. Ira Baxter has some suggestions. ctags may also be useful. Depending on how your code is formatted, you can probably throw together some tools that don't require a full C parser. For example, you might use grep, sed, or perl to extract a list of function definitions from a foo.c file, then manually edit the list to remove false positives.
It's obvious ("I have a huge code base") that you cannot do this by hand.
What you need is an automated tool that can read your source files as the compiler sees them, collect all function prototypes and definitions, and verify that all definitions/prototypes match. I doubt you'll find such a tool lying around.
Of course, this match must check the signatures, and that requires something like the compiler's front end to compare them.
Consider
typedef int T;
void foo(T x);
in one compilation unit, and
typedef float T;
void foo(T x);
in another. You can't just compare the signature "lines" for equality; you need something that can resolve the types when checking.
GCCXML may be able to help, if you are using a GCC dialect of C; it extracts top-level declarations from source files as XML chunks. I don't know if it will resolve typedefs, though. You obviously have to build (considerable) support to collect the definitions in a central place (a database) and compare them. Comparing XML documents for equivalence is at least reasonably straightforward, and pretty easy if they are formatted in a regular way. This is likely your easiest bet.
If that doesn't work, you need something that has a full C front end that you can customize. GCC is famously available, and famously hard to customize. Clang is available, and might be pressed into service for this, but AFAIK only works with GCC dialects.
Our DMS Software Reengineering Toolkit has C front ends (with full preprocessing capability) for many dialects of C (GCC, MS, GreenHills, ...) and builds symbol tables with complete type information. Using DMS you might be able (depending on the real scale of your application) to simply process all the compilation units, and build just the symbol tables for each compilation unit. Checking that symbol table entries "match" (are compatible according to compiler rules including using equivalent typedefs) is built-into the C front ends; all one needs to do is orchestrate the reading, and calling the match logic for all symbol table entries at global scope across the various compilation units.
Whether you do this with GCC, Clang, or DMS, it is a fair amount of work to cobble together a custom tool. So you have to decide how critical your need for fewer surprises is, compared to the energy required to build such a custom tool.

Why should I include a header file? And how does #include actually work? [closed]

At first, I was writing my functions in a .h file and then including it with #include "myheader.h". Then someone told me that it's better to put only function prototypes in these files and to put the real code in a separate .c file.
Now I'm able to compile multiple .c files to generate a single executable, but at this point I can't understand why I should add the header files if the code is in another file.
Moreover, I had a look at the standard C headers (like stdlib.h) on my system, and they seemed to contain only structure definitions, constants and similar... I'm not so good with C (and to be honest, stdlib.h was almost Chinese to me, no offense to the Chinese of course :) ), but I didn't spot a single line of 'operative' code. However, I always just include it without adding anything else, and I compile my files as if the 'code' were actually there.
Can someone please explain to me how these things work? Or at least point me to a good guide? I searched on Google and SO as well, but didn't find anything that explains it clearly.
When you compile C code, the compiler has to know that a particular function exists with a defined name, parameter list, return type, and optional modifiers. All of these things together are called the function signature, and the existence of a particular function is declared in the header file. Having this information, when the compiler finds a call to this function, it will know which types of parameters to look for, can check whether they have the appropriate types, and can prepare them into a structure that will be pushed onto the stack before the code actually jumps to your function implementation. However, the compiler does not have to know the actual implementation of the function; it simply puts a "placeholder" in your object file for each function call. (Note: each .c file compiles to exactly one object file.) #include simply takes the header file and replaces the #include line with the contents of the file.
After the compilation, the build script passes all the object files to the linker. The linker is what resolves all the function "placeholders", finding the physical location of each function implementation, be it among your object files, framework libraries, or DLLs. It simply records, for every call, where the function implementation can be found, so your program knows where to continue execution when it arrives at your function call.
Having said all this, it should be clear why you can't put function definitions in header files. If you later #include such a header into more than one .c file, both of them would compile the function implementation into two separate object files. The compiler would work fine, but when the linker tried to link everything together, it would find two implementations of the function and give you an error.
stdlib.h and friends work the same way. The implementation of the functions declared in them can be found in framework libraries which the compiler links to your code "automatically" even if you are not aware of it.
The C language (together with C++) uses a quite obsolete strategy for making the compiler aware of functions defined elsewhere.
This strategy goes like this: the signatures of the functions etc. (this stuff is called declarations in C) go into a special file called a header, and every other file which wants to use them is expected to almost literally include that header (the #include directive just tells the compiler to insert the literal text of the header), so that the compiler sees the function declarations again.
Other languages solve this problem in a different way: the compiler sees all the source code and itself remembers the metadata of the already-compiled classes.
The strategy used in C shifts the task of finding all the dependencies from the compiler to the developer; it's a legacy from the old times when the computers were small, silly and slow, so this kind of help from the developer was really valuable.
Although this strategy has numerous drawbacks, and besides it's theoretically possible to change it now, the standard is not going to change, because gigabytes of code have been written in that style already.
tl;dr: it's a legacy from the 70's.
In C it is required that a function be declared before it is called. The reason for this requirement is that in the 70s it would just take too much time to first parse a file for all its symbols and then parse it a second time to actually compile the code. If all functions are declared before they are called, one single parse is enough. However, on modern systems we no longer face those limitations, and that is the reason why modern languages don't have this requirement.
Imagine you have two files, a.c and b.c, in your project. You implement a function foo which you want to use in both files. You can't just define the function in a.c and use it in b.c, since you have to declare a function before you call it. So you would add a line void foo(); to b.c. But every time you change the signature of your function in a.c, you would have to change the declaration in b.c as well. To circumvent this issue, it is standard strategy in C to declare all the functions your file implements in a separate header file (in this case a.h). The header file is then included by all the other files that want to use that code (so b.c would use this: #include "a.h"), as sketched below.
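A minimal sketch of that layout; the body of foo is invented for the example:

/* a.h -- declares everything a.c implements */
#ifndef A_H
#define A_H

void foo(int times);

#endif

/* a.c */
#include <stdio.h>
#include "a.h"

void foo(int times)
{
    for (int i = 0; i < times; i++)
        printf("foo\n");
}

/* b.c -- uses foo without writing its own declaration */
#include "a.h"

int main(void)
{
    foo(2);
    return 0;
}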
An #include is a preprocessor directive that causes the file to be textually inserted at the point where the #include occurs.
When linking multiple .c files that include the same header files, care must be taken to avoid multiple inclusions of the header files (textually inserting a header file more than once). The #ifndef, #define, and #endif preprocessor directives can be used to prevent multiple inclusions.
#ifndef MY_FILE_H
#define MY_FILE_H
/* This code will not be included more than once. */
#endif /* !MY_FILE_H */
I can't understand why I should add the header files if the code is in another file.
The header file contains the declarations for functions defined in the other file, which is necessary for the code that's calling the function to compile correctly.
For instance, suppose I write the following code:
int main(void)
{
    double *foo = malloc(sizeof *foo * 10);
    if (foo)
    {
        // do something with foo
        free(foo);
    }
    return 0;
}
malloc is a standard library function that dynamically allocates memory and returns a pointer to it. The return type of malloc is void *, any value of which can be assigned to any other pointer type. free is another standard library function that deallocates memory allocated through malloc, and its return type is void (IOW, no return value).
However, the compiler doesn't automatically know what malloc or free return (or don't return); it needs to see the declarations for both functions in the current scope before it can correctly translate the function calls. Under the C89 standard and earlier, if a function is called without a declaration in scope, the compiler assumes that the function returns int; since int is not compatible with double * (you can't assign one to the other directly without a cast), you'll get an "incompatible assignment" diagnostic. Under C99 and later, implicit declarations aren't allowed at all. Either way the compiler won't translate the code.
I need to add the line
#include <stdlib.h>
which includes the declarations for malloc and free (and a bunch of other stuff) to the beginning of the file.
There are several reasons you don't want to put function definitions (or variable definitions) in header files. Suppose you define function foo in header a.h. You include a.h in files a.c and b.c. Each file will compile okay individually, but when you try to link them together to build a library or executable, you'll get a "multiple definition" error from the linker -- you've wound up creating two separate instances of a function with the same name, which is a no-no. Same goes for variable definitions.
It also doesn't scale well. If you put a bunch of functions in their own header files and include them in one source file, you're translating all those functions in one big glob. Furthermore, if you only change the code in the source file or one header file, you still wind up recompiling everything each time you recompile the .c file. By putting each function in its own .c file, you can reduce your overall build times by only recompiling the files that need to be recompiled.
