How many headers for a C module? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 1 year ago.
I have a module in my project mymodule.c that provides a lot of functions for the rest of my project.
These functions are declared in a mymodule.h header file.
But mymodule.c also contains a lot of other defines, such as bit masks.
For example:
#define STACKSIZE 1024
#define TIMER1 100
#define TCR_MASK 16U
#define TCR 16U
#define TCR_IR (0ULL << 8)
...
100 other defines or typedefs
I could split it up like this:
mymodule.h ---> all external functions and declarations used from other places.
Rename this to mymodule_public.h?
mymodule_config.h ---> configuration like timers, control parameters or constants.
mymodule_masks.h ---> descriptors.
There could be more headers.
Another way is to keep everything except the external function declarations inside mymodule.c itself.
What is best practice for splitting up into headers and giving header names?

Generally, a header file which is the public interface of a library should contain all the things that the caller needs to use the functions.
If you have a bunch of #defines that are necessary to use the functions, they need to be in the .h file. If they aren't needed but are just used internally, you should keep them in the .c file.
It is OK to make a second header file which isn't the public API but just contains internal constants used by your .c file(s).
As for where to place the #includes, that's a bit subjective. Generally I like to show the user of the library which dependencies the library comes with, so that they can ensure that they have all the necessary files and so that they can troubleshoot strange linker errors more easily. On the other hand, one might feel uneasy about "exposing" includes that are just used privately by the library in the public header (like the second, private header mentioned above). There's no obvious right or wrong here, though try to be consistent with where you place the #includes.
Your idea with multiple headers all named with a certain library-specific prefix "mymodule" is pretty sound overall.

Related

Why does the standard C library feature multiple header files instead of consolidating the contents into a single header? [closed]

Closed 6 months ago.
Why does the standard C library need to feature multiple header files? Would it not be more user-friendly to consolidate it into one header file?
I understand that it is unnecessary to include function prototypes / global variables that go unused; however, won't the compiler eventually remove all of these references if they're unused?
Maybe I'm underestimating the size of the contents spanning all of the header files. But it seems like I'm always googling to figure out what header I need to #include.
Edit: The top comment references a size of 50 MB, which appears to be untrue. The C library is relatively concise compared to other languages' standard libraries, hence the question.
// sarcasm ON
Build your own #include "monster.h" that includes all you want from the collection.
People write obfuscated code all the time. But, you won't be popular with your cohort.
// sarcasm OFF
The real reason? C was developed when storage was barely beyond punch card technology. A multi-user system might have 1/2Mb of (magnetic) core memory and a cycle time that is now laughable.
Yet reams of C code were developed (and the story of the popularity of UNIX is well known).
Although new standards could change the details (and C++ has), there is an ocean of code that might/would no longer be able to be compiled.
Look up "Legacy".
There are (too many) examples of code in the world where the usual collection of, say, stdio.h, stdlib.h, etc. has been "hidden" inside #include "myApp.h". Doing so forces the reader to hunt down a second file to verify what the &^%&%& is going on.
Get used to it. If it was a bad practice, it would not have survived for ~50 years.
EDIT: It is for a similar reason that "functions" have been "grouped together" into different libraries. By extension, pooling those libraries might make things a tiny bit "less cluttered", but the process of linking may require a supercomputer to finish the job before knock-off time.
EDIT2: Brace yourself. Cleanly written C++ code often has a .h and a .cpp for every class used in the app. Sometimes "families" (hierarchies) might live together in a single pair of files, although that can lead to boo-boos when the coder is having a bad day...

C access control practices [closed]

Closed 9 years ago.
I am writing a small shell as a course exercise, which emulates bash's autocompletion and history mechanisms, resulting in a main.c file for managing user commands, and a raw.c file for managing the terminal in raw mode.
It is unlikely any file in the project will ever need to call anything except raw.c's get_line() method, therefore my instinct is to only include this get_line() method in raw.h, to prevent accidental access of another raw.c method and further complexity.
Where can a good primer and or discussion on C access control techniques be found, in particular whether it is a good idea to emulate OO language's private/public concepts, and how it is usually done, if so?
First things first: "OO's private / public concepts" are not "access control". Even if something is "private" it's still there, and can still be accessed. You have protected it against accidental access, but that is a far cry from "securing" it (from an "authorization" point of view). A determined and / or malicious client can still get at it, because "security" is not what those mechanics are for.
Once you understand that, you realize that all those "visibility" things - whether you declare something in a header, or make it public vs. private, or whatever - are basically aiming at maintainability: reducing the number of identifiers in the current scope, reducing the number of functions and variables you have to think about in a given context.
Then, you say that your "instinct is to only include this get_line() method in raw.h". You realize that this is faulty wording? You can declare that function in a header file, you can include that header file, but you don't include a function.
So. You implement functions that belong together in a translation unit (main.c, raw.c). You declare functions that might be called from outside that translation unit in that translation unit's header file (raw.h). All functions not to be called from the outside, you define as static inside the translation unit itself, and don't declare them in a header at all.
As for emulating another language's concepts, don't. Do things the way they are done in the language you are currently using, or use a different language.
Private functions in raw.c should of course be declared static (and omitted from the public header). Then they're only visible and callable from the same "compilation unit", i.e. from within raw.c.
Only public functions should be put into the .h file.
Private functions must be declared static at the top of the .c file.
If your module is made of multiple .c files, you should not put the shared internal functions into the public .h. Instead, create a second, private header, for example mymodule_p.h alongside mymodule.h. Such functions are comparable to protected members.

C header file dependencies [closed]

Closed 9 years ago.
I typically include dependencies in my header files, so that when adding that header to a source file I don't need to dig around for the other required headers to make it compile.
However, after reviewing some other coding standards, it appears that this is often banned, with the requirement that a header file will not contain any #include statements.
I can't really find any discussion on this - so what would be the reason for banning such a practice, or is it purely down to preference?
--
E.g.
typedef.h contains a typedef for U8.
my_header.h declares void display_message(U8 arg);
Should the reference to typedef.h go into my_source_file.c or into my_header.h?
I see no good reason for not allowing headers to include their prerequisites.
Consider deleting an #include from a source file. For example, suppose the code has been modified to no longer use foo.h, so the #include for that is being deleted. But the source file has a dozen #include statements. Which other ones should you delete because they are no longer needed? Hopefully, foo.h documents its prerequisites, so you can identify those candidates for deletion. However, if you delete their #include statements, you might be deleting a prerequisite that is needed by a different header file. So you must check the prerequisites of every header file.
In contrast, if headers include their prerequisites, then you can simply delete #include <foo.h> and be done with it.
Includes should go in the source file. The header should only declare functions and variables of your source code file (that is typically the standard).

best practice for delivering a C API hiding internal functions [closed]

Closed 5 years ago.
I have written a C library which consists in a few .h files and .c files. I compile it as a .a static library.
I would like to expose only certain functions to the user and keep the rest as "obscure" as possible to make reverse engineering reasonably difficult.
Ideally my library would consist of:
1- one .h file with only the functions exposed to the user
2- myLibrary.a: as un-reverse-engineerable as possible
What are the best practices for that? Where should I look, is there a good tutorial/book somewhere?
More specifically:
for - 1
I already have all my .h and .c files working and I would like to avoid changing them around, moving function declarations from .h to .c, and running into potential circular-reference problems. Is that possible?
For instance is it a good idea to create a new .h file which I would use only for distributing with my .a? That .h would contain copies of the functions I want to expose and forward declarations of types I use. Is that a good idea?
for - 2
a) what gcc (or Xcode) flags shall I be aware of (for stripping, not emitting debug symbols, etc.)?
b) a good pointer to learn about how to do code obfuscation?
Any thought will help,
Thanks, baba
The usual practice is to make sure that every function and global variable that is for use only internal to some module is declared static in that module. That limits exposure of internal implementation details from a single module.
If you need internal implementation details that cross between modules, but which are not for public consumption, then declare one or more .h files that are kept private and not delivered to end users. The names of objects defined in that way will still be visible to the linker (and to tools such as objdump and nm) but their detailed signatures will not be.
If you have data structures that are delivered to the end user, but which are opaque, then consider having the API deliver them as pointers to a struct that is declared but not defined in the public API .h file. That will preserve type safety, while concealing the implementation details. Naturally, the complete struct definition is in a private .h file.
With care, you can keep a partially documented publicly known struct that is a type-pun for the real definition but which only exposes the public members. This is more difficult to keep up to date, and if you do it, I would make certain that there are some strong test cases to validate that the public version is in fact equivalent to the private version in all ways that matter.
Naturally, use strip to remove the debug segments so that the internal details are not leaked that way.
There are tools out there that can obfuscate all the names that are intended to be only internal use. If run as part of the build process, you can work with an internal debug build that has sensible names for everything, and ship a build that has named all the internal functions and global variables with names that only a linker can love.
Finally, get used to the fact that anyone that can use your library will be able to reverse engineer your library to some extent. There are anti-debugger measures that can be taken, but IMHO that way lies madness and frustration.
I don't have a quick answer other than to explore the use of "static" functions. I would recommend reading Miro Samek's work on something he calls "C+". Basically object-oriented ANSI C. Great read. He owns Quantum Leaps software.
Erase the headers for these functions, do some obfuscation in the export table, pack your code, and apply some anti-debugger algorithm.
http://upx.sourceforge.net/
http://www.oreans.com/

Good way to organize C source files? [closed]

Closed 9 years ago.
The way I've always organized my C source was to put struct, macro and function prototypes in header files and function implementations in .c files. However, I have recently been reading a lot of other people's code for large projects and I'm starting to see that people often define things like structs and macros in the C source itself, immediately above the functions that make use of them. I can see some benefit to this as you don't have to go searching around to find the definition of structs and macros used by particular functions; everything is right there in roughly the same place as the functions that use it. However, I can also see some disadvantages, as it means that there is not one central repository for struct/macro definitions; they're scattered through the source code.
My question is, what are some good rules of thumb for deciding when to put the macro/struct definition in the C source code as opposed to the header files themselves?
Typically, everything you put in the header file is part of the interface, while everything that you put in the source file is part of the implementation.
That is, if something in the header file is only ever used by the associated source file, it's an excellent candidate for moving to that source file. This prevents "polluting" the namespace of every file that uses your header with macros and types that they were never intended to use.
Interfaces and implementations is what it's all about.
Addendum to the accepted answer: it can be useful to put an incomplete struct declaration in the header but put the definition only in the .c file. Now a pointer to that struct gives you a private type that you have complete control over. Very useful to guarantee separation of concerns; a bit like private members in C++.
For extensive examples, follow the link.
Put your public structures and interface into the .h file.
Put your private bits into the .c file.
If I have more than one .c file that implements a logical set of functionality, I'll put the things that need to be shared among those implementation files into a *p.h file ('p' for private). Client code should not include the *p.h header.
For example, if I have a set of routines that implement an XML parser, I might have the following organization:
xmlparser.h - the public structures, types, enums, and function prototypes
xmlparserp.h - private types, function prototypes, etc. that client code doesn't and shouldn't need
xmlparser.c - implementation of the XML parser
xmlutil.c - some other implementation bits (would include xmlparserp.h)
stuff defining the external interface to the module goes in the header.
stuff just used within the module should stay in the C file.
header files should include the headers required to support their declarations
headers should wrap themselves in #ifndef NAME_H to ensure a single inclusion per compilation unit
modules should include their own headers to ensure consistency
Lots of people don't know this basic stuff BTW, which is required to maintain your sanity on any sized C project.
In the book "C style: Standards and Guidelines" from David Straker (available online here), there are some good ideas on file layout, and on the division between C file and headers.
You may read the chapter 7 and specially chapter 7.4.
And as John Calsbeek said, you can base your organization on how the header parts are used.
If one structure, type, macro, ... is only used by one source, you can move your code there.
You may have header files for the prototypes and some header file for the common declarations (type definitions, etc...)
Mainly, the issue at hand is a sort of encapsulation consideration. You can think of a module's header file as not so much its "overhead", which is what you seem to be doing now, as its public interface declaration, which is presumably how the people whose code you're looking at are seeing it.
From there it follows naturally that what goes in the header file is things that other modules need to know about: prototypes for functions you anticipate being used externally, structs and macros used for interface purposes, extern variable declarations, and so on, while things that are strictly for the module's internal use go in the .c file.
If you define your structs and macros inside the .c file, you won't be able to use them from other .c files
To do so, you have to put them in the .h so that an #include tells the compiler where to find your structs and macros
unless you #include "x.c", which you shouldn't do =)
