I have a header file, sample_A.h, which has an include statement of the form #include "sample_B.h". I also have another header file sample_C.h. I would like header file sample_A.h to include sample_C.h instead of sample_B.h, but under no circumstances can I edit anything outside of the Makefile used to build the project. What would be the best way to "redirect" sample_A.h to include sample_C.h instead of sample_B.h by solely editing the Makefile? Assume that both sample_C.h and sample_B.h will allow sample_A.h to compile.
EDIT: I am working on a project which has a vast build structure. Some files are outdated, but due to upper management orders, these files should not be messed with until future meetings take place. In the meantime, I am trying to figure out a way to circumvent these outdated files (and their outdated include directives) without touching the files themselves. I am using the gcc compiler.
I would like header file sample_A.h to include sample_C.h instead of sample_B.h, but under no circumstances can I edit anything outside of the Makefile used to build the project.
In principle, the compiler is at leisure to choose how to interpret the file identifier presented in an #include directive (i.e. "sample_B.h"), but in practice, every compiler you're likely ever to meet interprets it as a file name or path. Thus, if you have a file whose (only) simple name is sample_C.h, and you have no allowable way to provide another name for it (by copying it or creating a symlink to it, for example) then it is unlikely that there is any way that file will ever be chosen to satisfy an #include "sample_B.h" directive.
If, however, the two headers have the same simple name but reside in different directories, e.g. src/sample_B.h and custom/sample_B.h, then it is normally possible to influence which is selected via compiler options that affect the search path for include files. The traditional option of that kind for Unix C compilers is -I.
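For that same-name-in-different-directories case, a minimal sketch of the Makefile side (the directory names override/ and src/ are hypothetical): listing the preferred directory first makes the compiler find its copy of sample_B.h before the original one.

# search override/ before src/ when resolving #include "sample_B.h"
CFLAGS += -Ioverride -Isrc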
I'm putting my Simplicity Studio project into TortoiseSVN. I've been told to create an inc (includes) folder and use it. What is it? Why would I use it?
It's just a convention, not something that is required. For large projects it is often used to hold any file that is included from another file, i.e.:
#include "headerFile.h"
You might ask the person who "told" you to do that - it is not a language or tool-chain requirement.
The preprocessor does not care where the include files are, so long as you tell it (either in the #include directive itself, by command-line options, or by environment variables). Putting all headers in a folder separate from the associated .c files is a common practice, but often a habit rather than something done for any good reason. It is useful when the headers relate to a static library, shared library, or DLL whose headers are to be distributed with the compiled object code.
In other cases it is arguably simpler and more useful to keep the headers and the associated .c files in the same directory as each other. That allows, for example, "localised" headers to be included using double quotes without needing to tell the preprocessor explicitly which paths to search.
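For instance, if a hypothetical util.c and util.h sit side by side, the quoted form finds the header with no extra preprocessor options, because gcc searches the including file's own directory first:

#include "util.h"   /* resolved relative to the including file's directory */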
When I compile a C program, for ease I've been including the source file for a certain header at the end. So, if main.c includes util.h, util.h will have all the headers util.c will use, the types and structs it defines, etc., and then at the very end it includes util.c. Then, when I compile I only have to use gcc main.c -o main, and the rest is all taken care of.
I've been looking up C coding standards, trying to figure out the best way to do things, and there are just so many, and so many conflicting opinions, that I don't know what to think. Why do so many places recommend compiling object files individually instead of including all of them in a web? util never touches anything but util.c, so the two are perfectly independent, and in theory (my theory) it would be fine, but I'm probably wrong since this is computer science and people are wrong even when they're right, so if I'm already wrong I'm probably wrong.
Some people say header files should ONLY be prototypes, and the source file should be the one that includes them along with its necessary system headers. From a purely aesthetic point of view I much prefer having all the info (types, system headers used, prototypes) in the header (in this case util.h) and having ONLY function code in util.c (excluding one #include "util.h" at the very top).
I guess the point I'm getting at is, with all this stuff that works, selecting a method sounds arbitrary to someone who doesn't understand the background (me). Please tell me why and what.
While your program is small, this will work. At some point, however, your program will get large enough that recompiling the whole program every time you change one line is a pain in the rear.
This -- even more than avoiding editing huge files -- is the reason to split up your program. If main.c and util.c are separately compiled into object files, changing one line in a function in main.c will no longer require you to recompile all the code in util.c.
By the time your program is made up of a few dozen files, this will be a big win.
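For instance, with the main.c and util.c from the question, the split build looks roughly like this:

gcc -c main.c                 # produces main.o
gcc -c util.c                 # produces util.o
gcc main.o util.o -o main     # links the two objects into the executable

After a change to main.c, only the first and the last command need to be rerun.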
I think the point is that you want each file to include only what it needs to be independent. This reduces overall compilation time by allowing the compiler to read only the headers that are necessary, rather than repeatedly reading every header whether it is needed or not. For example, if your util.c uses functions and/or types from <stdio.h> but your util.h doesn't, then you would want to include <stdio.h> only in util.c, so that it is read only when util.c is compiled. If you include <stdio.h> in util.h instead, then every source file that includes util.h also includes <stdio.h>, whether it needs it or not.
This is negligible for small projects with only a handful of files, but proper header inclusion can noticeably affect compilation times for larger projects.
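A minimal sketch of that discipline (the function name print_report is hypothetical):

/* util.h: the interface mentions nothing from <stdio.h>, so it is not included here */
void print_report(int value);

/* util.c */
#include "util.h"
#include <stdio.h>   /* needed only in this translation unit, for printf */

void print_report(int value)
{
    printf("value = %d\n", value);
}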
With regards to the question about object files: when you compile a source file into an object file, you create a checkpoint that allows a build system to recompile only the source files whose object files are out of date. This is an effective way to significantly reduce compilation times, especially for large projects.
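As a sketch of that shortcut, a Makefile records which object files depend on which sources and headers, and make rebuilds only what is out of date (file names taken from the earlier question):

main: main.o util.o
	gcc main.o util.o -o main

main.o: main.c util.h
	gcc -c main.c

util.o: util.c util.h
	gcc -c util.c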
First, including a .c file from a .h file is completely bass-ackwards.
The "standard" way of doing it follows a line of thought roughly like this:
You have a library, containing dozens of functions. Keeping everything in one big source file means that anyone using your library would have to link the whole library, even if he uses only a single function of it. (Imagine linking the whole C standard library for a puts( "Hello" ).)
So you split things across multiple source files, which are compiled individually. Whenever you make changes to one of your functions, you have to re-translate only one small source file and update the library archive (or executable) - instead of re-translating the whole thing every time. (This is still an issue, because code sizes have somewhat kept up with CPU improvements. Compiling something like the Boost lib can still take several minutes on not-too-fancy hardware...)
Now you are in a pinch, however. The function is defined inside the .c file, and the corresponding .o file can conveniently be linked (via a .a archive if need be). However, to actually address the function (provided by the .o file) properly from another source file (a.k.a. "translation unit"), your compiler needs to know the function name, its parameter list, and its return type. This is why the declaration of the function (i.e., the function head without its body) is put in a separate header (.h) file.
Other source files can now #include the header file, address the function properly (without the compiler being aware of what the function actually does), and when all parts of your library / program are compiled into .o files, then everything is linked together.
The source file includes its own header basically to make sure the two files agree on the function declaration. ;-)
That's about it, as far as I can be bothered to write it up right now. Putting everything into one monolithic source file is barely acceptable (actually, no, it isn't, not for anything beyond about 200 lines), but including the .c file at the end of the .h file either means you learned your C coding by looking at god-awful code instead of a good book, or whoever tutored you should never tutor another person on C coding in his life. No offense intended. ;-)
PS: Header files also provide a good summary / overview of a piece of code. Languages that don't provide headers - Java, for example - need IDEs or documentation tools to extract this kind of information. Personally, I find header files to be a benefit, not a liability.
Please use *.h and *.c files as customary: *.h files are #included in *.c files and contain only macro definitions, data-type declarations, function declarations, and extern data declarations. All definitions go in *.c files. That is how everybody else organizes C programs; do your fellow humans (who some day might need to understand your program) a favor.

If something in file.c is used outside that file, you write file.h containing the declarations of whatever is to be used outside, and include it in file.c (to check that declarations and definitions agree) and in all using *.c files. If a bunch of *.h files are always included together, it might mean that the split into *.c files isn't right (or at least the split into *.h files); perhaps you should make one .h including all those declarations, and create *.h files for internal use where needed among the group of related *.c files.
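For the extern data declarations mentioned above, a minimal sketch (the name error_count is hypothetical):

/* file.h */
extern int error_count;   /* declaration, visible to every file that includes this header */

/* file.c */
#include "file.h"         /* lets the compiler check the declaration against the definition */
int error_count = 0;      /* the one definition */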
[If a program written as you outline crosses my path, I can assure you I'll avoid it like the plague. The extra obfuscation might be welcome in the IOCCC, but not to me. It is a sure sign of somebody who doesn't know how to organize a program cleanly, and so the program probably isn't worth trying out.]
Re: Separate compilation: You break up a C program so that the pieces are easier to understand and so that you can hide details of how things work inside the C files (think static); this supports Parnas-style modularity. It also means that if you change a file, you don't have to recompile everything.
Re: Differing C programming standards: Yes, there are lots of them around. Pick one you feel comfortable with, and stick to it. If you work on a project, adhere to its standards.
The "include in a single translation unit" approach becomes very inefficient for any significantly sized project, and it is impractical for projects distributed amongst multiple developers.
Moreover, when creating static libraries, if everything in the library came from a single translation unit, any code linked against it would get all the library code regardless of whether it is referenced or not.
A project using a build manager such as make, or the build features available in most IDEs, uses header-file dependencies to perform an incremental build: only those sources that were modified, or that depend on modified files, are recompiled. The dependencies are determined by the file inclusions, so minimising redundant dependencies speeds up the build.
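A sketch of how that dependency tracking is commonly wired up with gcc and GNU make (file names hypothetical): gcc's -MMD -MP options write, next to each object file, a .d file listing the headers it includes, and make reads those files on the next run.

OBJS = main.o util.o
CFLAGS += -MMD -MP

main: $(OBJS)
	gcc $(OBJS) -o main

-include $(OBJS:.o=.d)   # pull in the generated header dependencies, if present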
A typical commercial project can comprise hundreds of thousands of lines of code and a few hundred source files; full rebuild times can vary from minutes to hours. If in your development cycle you have to wait that long between code changes and test, productivity would be very low!
Lately I have been using header files to split up my program into separate files (C files containing functions, and header files declaring them). Everything works fine, but for some reason I need to include <stdio.h> and <stdlib.h> in EVERY C file... or my project fails to compile. Is this expected behavior?
C modules need to know either how something is defined or where a definition can be found. If the definition is in a header file, then you should include that header in the modules that use it.
The answer depends on whether those functions depend on functions declared in other .c/.h files.
For example:
filea.c:

#include "filea.h"

void methodA()
{
    methodB();   /* methodB is called with no declaration of it in scope */
}

fileb.c:

#include <somelibrary.h>
#include "fileb.h"

void methodB()
{
    somelibrarycode();
}
This will not compile unless filea.c includes the header fileb.h, as methodA has an external dependency (methodB) that is otherwise unresolved.
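A minimal fix, assuming fileb.h is where methodB's prototype belongs:

fileb.h:

#ifndef FILEB_H
#define FILEB_H

void methodB(void);

#endif

filea.c (fixed):

#include "filea.h"
#include "fileb.h"   /* brings methodB's declaration into scope */

void methodA()
{
    methodB();
}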
If this is not what you're describing, then there is some other spaghettification happening, or you have accidentally declared functions static, preventing them from being seen outside their .c file.
One possible solution to this problem is to have a single shared.h with all the other includes, but I personally don't recommend this, as it merely masks the issue instead of making it readily apparent which files depend on what, and it blurs the clear lines of dependency.
They must be included one way or another.

Some projects require a long list of includes in each .c file, possibly with a mandatory sort order, even forcing the assumption that no header includes any other header.

Some allow assuming that certain includes come from certain headers.

Some use collection headers (headers that include a list of small headers) and replace the long lists with those.

Some go even further, using the "forced header" option of the compiler, so the include does not appear anywhere and the content is implicitly assumed; see the sketch after this list. This can be applied per project or across the whole codebase, or combined, and it plays well with precompiled headers.

(And there are many more strategies, you get the picture, all with their pros and cons.)
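For reference, the forced-header mechanism in gcc is the -include option, which processes a file as if it were #included at the very top of the source file (config.h here is a hypothetical project header):

gcc -include config.h -c main.c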
Where can I store folder paths so that they can be accessed from every function in a C program?
Ex. I have an executable called do_input.exe in the path c:\tests\myprog\bin\do_input.exe,
another one in C:\tools\degreesToDms.exe, etc. How and where should I store these paths?
I stored them as strings in a header file which I included in every one of the project's files, but someone discouraged me from doing this. Are they right?
I stored them as strings in a header file which I included in every one of the project's files, but someone discouraged me from doing this. Are they right?
Yes, they are absolutely right: "baking" installation-specific strings such as file-system paths into compiled code is not a good decision, because you must recompile simply to change the locations of some key files. This limits the flexibility of other members of your team to run your tests, and may prevent your tests from being run automatically in an automated testing environment.
A better solution would be a plain-text configuration file listing the locations of the key directories, plus functions that read that file and produce the correct locations at run time.
Alternatively, you could provide locations of key directories as command-line parameters to your program. This way, users who run your program would be able to set correct locations without recompiling.
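A minimal sketch of the command-line approach, using the tools directory from the question as the example value:

#include <stdio.h>

int main(int argc, char *argv[])
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s <tools-dir>\n", argv[0]);
        return 1;
    }
    const char *tools_dir = argv[1];   /* e.g. c:\tests\myprog\bin */
    printf("using tools from: %s\n", tools_dir);
    return 0;
}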
If they stay the same, then I don't see any problem defining these paths in a ".h" header file included in all the various .c files that reference the paths. But every computer this thing will be running on may have different paths ("Tests" instead of "test"), so this is super risky programming and probably only safe if you're running it on a single machine or a set of machines that you control directly.
If the paths will change, then you need to create a storage place for these paths (e.g. static character array, etc.) and then have methods to allow these to be fetched and possibly reset dynamically (e.g. instead of writing output files to "results", maybe the user wants to change things to write files to "/tmp"). Totally depends on what you are doing in your code and what the tools you're writing will be doing.
Remembering the names of system header files is a pain...
Is there a way to include all existing header files at once?
Why doesn't anyone do that?
Including unneeded header files is a very bad practice. The issue of slowing down compilation might or might not matter; the bigger issue is that it hides dependencies. The set of header files you include in a source file is the documentation of what functionality the module depends upon, and unlike external documentation or comments, it is automatically checked for completeness by the compiler (failing to include a needed header file will result in an error). Ensuring the absence of unwanted dependencies not only improves portability; it also helps you track down unneeded and potentially dangerous interactions, for instance cases where a module that should be purely computational or purely data-structure management is accessing the filesystem.
These principles apply whether the headers are standard system headers or headers for modules within your own program or third-party libraries.
Your source code files are preprocessed before the compiler looks at them, and the #include statement is one of the directives that the preprocessor uses. When being preprocessed, #include statements are replaced with the entire contents of the file being included. The result of including all of the system files would be very large source files that the compiler then needs to work through, which will cost a lot of time during compilation.
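You can see the effect for yourself with gcc's -E option, which stops after the preprocessing stage:

gcc -E main.c | wc -l   # line count after expansion; a single <stdio.h> adds hundreds of lines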
No one includes all the header files. There are too many, and a few of them are mutually exclusive with other files (like ncurses.h and curses.h).
It really is not that bad, even when writing a program from scratch. A few are quite easy to remember: stdio.h for any FILE stuff, ctype.h for any character classification, stdlib.h for any use of malloc(), etc.
If you don't remember one:
leave the #include out
compile
examine the first few error messages for indications of a missing header file, such as a type not being declared, or a function call with assumed parameter types
figure out which function call is the cause
look at the man page (or whatever documentation your compiler has) for that function
notice the #include shown by the documentation and add it
repeat until all errors are fixed (a small example follows)
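For instance, compiling this fragment triggers exactly that kind of error, and the printf man page then points at the missing header:

/* no #include <stdio.h>: gcc reports an implicit declaration of printf */
int main(void)
{
    printf("hello\n");
    return 0;
}

Adding #include <stdio.h> at the top fixes it.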
It is quite a bit easier when adding to an existing code base. You can go hundreds or thousands of working hours and never have to add a #include.
No, it is a terrible idea: it will massively increase your compile times and possibly make your exe a lot larger by including massive amounts of unused code.
I know what you're talking about, but I need to double-check the function prototypes for the functions I'm using (for ones I don't use daily, anyway) -- I'll just copy and paste the #includes straight out of the manpage for the associated functions. I'm already looking at the manpage (it's a simple K in vim(1)), so it doesn't feel like an extra burden.
You can create a "master" header, where you put all your includes; then, in everything else, include that! Beware of conflicting definitions and circular references... So... master1.h, master2.h, ...
Not advocating it. Just saying.
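For illustration only, a minimal sketch of such a master header (file names hypothetical), guarded against double inclusion:

/* master.h: one header that pulls in everything the project uses */
#ifndef MASTER_H
#define MASTER_H

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#include "util.h"
#include "filea.h"

#endif /* MASTER_H */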