XC8 Library organization and #defines across multiple source files

This is a complicated post so please be patient. I have tried to condense it as much as possible...
I am coming to XC8 from a different tool chain for PIC microcontrollers. With the previous compiler, setting up and using my own libraries, and using defines across those libraries, seemed to be much easier.
I want to write my own libraries of functions for re-use. I also want to store them all in a directory structure of my own choosing (this is so that they sync automatically between multiple machines and for various other reasons). Here is a simplified fictional file structure.
\projects\my_project //the current project directory
\some_other_directory\my_library\comms_lib //my communications library
\some_other_directory\my_library\adc_lib //my ADC library
Now let's say, for argument's sake, that each of my libraries needs the __XTAL_FREQ definition. The frequency will likely be different for each project.
Here are my questions:
What is the best/most efficient way to tell the compiler where my library files are located?
Short of adding __XTAL_FREQ to every header file, how do I make the define available to all of them?
Likely someone is going to say that it should be in a separate header file (let's call it project_config.h). This file could then live with each future project and be changed accordingly. If the separate header file is the answer, then the question that follows is: how do I get the library headers (which are not in the same directory as the project) to reference the project_config.h file correctly for each new project?
Thanks in advance, Mark

If you are using MPLABX, you could consider making one or more library projects for your libraries, which can then be included from other MPLABX projects.
As for a global definition of __XTAL_FREQ, I'm thinking it should be possible to pass a symbol definition on the command line, though I'm not sure.
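One pattern that fits both questions (a sketch only, using the names from the question, not XC8-specific advice): keep a per-project project_config.h next to the project sources, have each library header include it, and add both the project directory and the library directories to the compiler's include search path (with -I options or the include-directories field in the project properties). Because the project directory is on the search path, each project's own copy of project_config.h is the one that gets picked up, while the library headers never change.

/* project_config.h -- lives in the project directory; one copy per project */
#ifndef PROJECT_CONFIG_H
#define PROJECT_CONFIG_H

#define __XTAL_FREQ 8000000UL   /* this project's oscillator frequency */

#endif

/* comms_lib.h -- lives with the library, shared by every project */
#ifndef COMMS_LIB_H
#define COMMS_LIB_H

#include "project_config.h"     /* resolved via the project's include path */

void comms_init(void);          /* hypothetical library function */

#endif

The same value can usually also be injected from the build itself (a -Dname=value style macro definition on the compiler command line), which avoids the extra header entirely; check your compiler's documentation for the exact option.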

Related

What, if any, are the disadvantages of having a header file of header files?

I came onto a project that employs the method of using a header file MyProject.h that includes the header of every .c file. Each .c file has its own header file that has #include "MyProject.h", whatever libraries are needed, and any other declarations necessary for the file.
To me it seems redundant and somewhat unnatural, but it compiles and subsequently runs as expected. My thought is that the compiler is doing far more work than necessary and is possibly bloating the .exe. What are the disadvantages, if any, of doing this?
A subsequent question I have is: say I include a library like Time.h in one file using the scheme above. Will the compiler build Time.h into the binary only once, or once for every file, because of MyProject.h? What about structs, enums, etc.?
To have such a header file is poor practice and bad design. The problem is that it creates a tight coupling between every single file of your project, even if they are completely unrelated.
Good program design is to create autonomous modules that only include the resources they are using. They should do this from their own h files, to document exactly which dependencies a particular module has.
The main downside is increased build times. Every source file includes every header of the project, whether it needs it or not.
It's also conceptually unclean. A source file should include the headers it needs. It should be possible to search for a header's name to find the parts of the source code that uses these facilities. Minimizing unnecessary includes is evidence of a loosely coupled system. However, include-what-you-use is hard to enforce, because you cannot prevent transitive availability of facilities through headers in C.
It should not lead to increased executable size. Even if the headers contain code (which is rare in C, but common in C++), the linker should eliminate duplicates and remove unused code.
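To see why the executable does not grow: headers normally contain only declarations, and include guards keep a header's contents from being processed more than once per translation unit, so there is nothing to duplicate. A minimal sketch with made-up names:

/* time_utils.h -- declarations only; the guard makes repeated inclusion harmless */
#ifndef TIME_UTILS_H
#define TIME_UTILS_H

struct timestamp {
    unsigned hours, minutes, seconds;
};

/* Declared here, defined exactly once in time_utils.c, so the linker sees a
   single copy no matter how many source files include this header
   (directly or via MyProject.h). */
unsigned to_seconds(struct timestamp t);

#endif /* TIME_UTILS_H */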
All previous answers were clear enough, but there is one more point worth adding.
The main disadvantage of "one big header file" is the problem with code reusability. For example, you've created some module as a part of your application. Let's say this module implements an API for some hardware. And then you want to make another application which should use this module.
In this case you have to remember which header files are needed to compile this module, or, if you just copy your previous "big header file", it drags in a lot of unnecessary third-party libraries and header files.
Nobody wants to spend a lot of time to make some module working. It's much better if you can use it right out-of-the-box, isn't it?
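What the answers above are pointing towards is letting each module's header list its own dependencies, so the module can be dropped into another project unchanged. A minimal sketch (adc_lib is a made-up module name):

/* adc_lib.h -- a self-contained module header: it includes exactly what its
   own declarations need, and nothing project-specific */
#ifndef ADC_LIB_H
#define ADC_LIB_H

#include <stdint.h>              /* uint8_t, uint16_t used below */

void     adc_init(void);
uint16_t adc_read(uint8_t channel);

#endif /* ADC_LIB_H */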
Most of my libraries have one precompiled header containing external includes and one "main" header for each library that contains the precompiled header items + all the .h files for the library in question. All of my .cpp files first include the precompiled header and then the main header.
That's it. None of my other .h files have any includes at all. My .cpp files only include these two headers. Other libraries that need to use the library will #include that same main header.
Doing this has greatly simplified the header-file complexity headache for me, though I would add that most of my libraries are relatively small in the grand scheme of things. I don't know whether this would work for a very large (monster) project. Perhaps not.
The way this works is actually quite nice. Others above, concerned about "coupling", want to minimise it, but if you don't include any headers at all you're 100% uncoupled, even from your dependencies. If you want to reuse the class, use the library that contains it.
Simple.
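For what it's worth, that layout looks roughly like this in header form (names are made up, and the precompiled-header mechanics themselves are compiler-specific):

/* external_pch.h -- the precompiled header: external/third-party includes only */
#ifndef EXTERNAL_PCH_H
#define EXTERNAL_PCH_H
#include <stdio.h>
#include <stdlib.h>
#endif

/* mylib_main.h -- the one header the library's own sources and its consumers include */
#ifndef MYLIB_MAIN_H
#define MYLIB_MAIN_H
#include "external_pch.h"
#include "mylib_parser.h"        /* every header of the library, in dependency order */
#include "mylib_codec.h"
#endif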

Synchronize single c header file between two projects

I have a radio chip (connected to an embedded processor) which I have written a library for. I want to develop the protocol to use with the rf chip on a PC (Ubuntu). In order to do so I have copied the header file of my library into a new folder, but created an entirely new implementation in a new .c file and compiled it for the PC with gcc. This approach has worked better than expected and I'm able to prototype code that calls the rf lib on the PC and simply copy it right over to the real project with little or no changes.
I do have one small problem. Any changes I make in the library's header file need to be manually copied between the two project folders. Not a big deal, but since this has worked so well, I can see doing things like this again in the future, and I would like to link the API headers between the real and "emulated" environments when doing so. I have thought about using git submodules, but I'm not fond of lots of folders in my projects, especially if most of them only contain one or two files each. I could use the C preprocessor to swap in the right code at compile time, but that doesn't cover the changes in my Makefile to call the right compiler with the right flags.
I'm wondering if anyone else has ever done something similar, and what their approach was?
Thanks guys!
Maybe you should create an "rflib" and treat it as an external library that you use within your embedded project.
Develop on one side and update to the newest version on the other.
An obvious (but fairly hacky) solution is to use a symlink.
I think the best solution, since they will share so much code, would be to just merge the two projects and have two different makefile targets for the binaries.
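A rough sketch of what that merged layout could look like (file and function names here are made up): one shared API header that both builds compile against, and two implementation files, with each makefile target listing a different one of them in its sources.

/* rflib.h -- the single shared API header, used by both targets */
#ifndef RFLIB_H
#define RFLIB_H
#include <stdint.h>

int rf_init(void);
int rf_send(const uint8_t *buf, uint16_t len);
int rf_receive(uint8_t *buf, uint16_t maxlen);

#endif /* RFLIB_H */

/* rflib_hw.c  -- real driver, listed only in the embedded target's sources   */
/* rflib_sim.c -- PC stand-in, listed only in the gcc/Ubuntu target's sources */

The PC target is built with gcc and rflib_sim.c, the embedded target with the cross compiler and rflib_hw.c; since there is only one copy of the header, it can never go out of sync.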

Cross-platform Library

Basically, I want to separate some common functionality from existing projects into a separate library project, but also allow a project to remain cross-platform when I include this library.
I should clarify that when I say "cross-platform" I'm primarily concerned with compiling for multiple CPU architectures (x86/x86_64/ARM).
I have a few useful functions which I use across many of my software projects. So I decided that it was bad practice to keep copying these source code files between projects, and that I should create a separate library project from them.
I decided that a static library would suit my needs better than a shared library. However, it occurred to me that the static library would be platform dependent, and including it with my projects would make those projects platform dependent as well. This is clearly a disadvantage compared to including the source code itself.
Two possible solutions occur to me:
Include a static library compiled for each platform.
Continue to include the source code.
I do have reservations about both of the above options. Option 1 seems overly complex/wasteful. Option 2 seems like bad practice, as it's possible for the "library" to be modified per project and become out-of-sync; especially if the library source code is stored in the same directory as all the other project source code.
I'd be really grateful for any suggestions on how to overcome this problem, or for information on how anyone else has dealt with it before.
You could adopt the standard approach of open source project (even if your project is not open source). There would be one central point where one can obtain the source code, presumably under revision control (subversion, git...). Anyone who wishes to use the library should check out the source code, compile it (a Makefile or something similar should be included), and then they are all set. If someone needs to change something in the library, they do so, test their changes, and send you a patch so that you can apply the change to the project (or not, depending on your opinion on the patch).

How do I call a function from someone else's open source proprietary project in my C project?

Suppose you have some software written in C, say XYZ. The software XYZ is open source proprietary software.
So I can have the source of the software. I can use the software but I cannot modify XYZ's files.
Suppose I am writing my own software, say ABC. And that software uses some of the functionality provided by XYZ.
Now there is a function in the source code of XYZ, say static int get_val(int index).
I want to use the function get_val(), so what should I do?
How should I call the function?
You shouldn't. The static keyword makes the function local to its translation unit (source file, more or less), which means it can not be called from other translation units.
Well of course you can, but it may not be a good idea.
There are two ways of making the function available:
Export it from the module by removing the static keyword and adding it to the API header file. This will of course involve changing the original source.
#include the file into your own source file, thus effectively making it part of your own translation unit (see the sketch below). Depending on what other dependencies this file may have, this may or may not be a viable option. I would be very wary of doing this, but it is an option.
Build a shared library (DLL or .so) from XYZ. Chances are that there is a shared library already available.
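For completeness, this is roughly what the #include-the-source option sketched above looks like. Every name here (xyz_impl.c, get_val, abc_lookup) is hypothetical, and in a real code base you would have to watch for name clashes, avoid duplicate symbols, and check the licence terms.

/* my_abc.c -- part of ABC; gains access to XYZ's static function by pulling
   the whole implementation file into this translation unit.  Do not also
   compile xyz_impl.c separately, or any non-static symbols in it will be
   defined twice. */
#include "xyz_impl.c"            /* hypothetical path to XYZ's source file */

int abc_lookup(int index)
{
    return get_val(index);       /* visible here: same translation unit */
}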
Link your code, i.e. ABC, with XYZ, and you can start using the functions exposed by XYZ in your program.
Most open source software comes with a readme and instructions that will help you use it. Start by checking those guides.
If the XYZ project is open source
add the source files of the XYZ project to your own and compile all together
if you change something in the XYZ project, consider sending a message to the maintainers of the project with your changes: they might like what you did and incorporate into a future version
If the project is proprietary, you don't have the source code. A function defined as static ... is not visible in other translation units, so you don't call it at all.

Make part of a C lib "private"

I'm developing a shared library, and since the code is big, I've decided to split it into many headers and source files, like any normal program :).
The problem is that most of these headers are for internal use, i.e. I don't want them to be accessible from outside of my library. So I'm thinking of moving them all into one big source file and only providing headers for what is going to be visible.
Is that a good idea? Should I worry about symbol visibility?
Thanks
Instead of merging the headers, just keep them alongside your source files and don't "publish" them as a part of your development package. As an example of this, the linux kernel has many headers in the source tree, but only certain headers are exposed to applications (in the include structure).
You should approach it from a "cleanliness" angle; don't ship headers that declare functions you aren't intending people to call. Don't document functions which you aren't shipping headers for.
If someone really wants to call a function in your library, they can, but you should try to make it clear that that's an unsupported use case and it's their problem if it all goes wrong.
Yes, you should worry about symbol visibility. On Windows, export the public functions with __declspec(dllexport), typically wrapped in a DLLEXPORT-style macro. On Linux with GCC, make the same macro expand to __attribute__((visibility("default"))) and compile everything with -fvisibility=hidden, so that only the marked symbols are exported. Ulrich Drepper's article on writing shared libraries covers this in detail.
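A minimal sketch of such a macro, assuming a GCC/Clang or MSVC toolchain (the name MYLIB_API is made up; pick whatever fits your library):

/* mylib_export.h -- marks the public API; with -fvisibility=hidden on
   GCC/Clang, everything that is not marked stays hidden in the .so */
#ifndef MYLIB_EXPORT_H
#define MYLIB_EXPORT_H

#if defined(_WIN32)
#  define MYLIB_API __declspec(dllexport)
#elif defined(__GNUC__)
#  define MYLIB_API __attribute__((visibility("default")))
#else
#  define MYLIB_API
#endif

#endif /* MYLIB_EXPORT_H */

/* in a public (shipped) header: */
MYLIB_API int mylib_do_work(int x);   /* exported */

/* in an internal (unshipped) header: */
int mylib_helper(int x);              /* hidden when built with -fvisibility=hidden */

A complete version would also switch to __declspec(dllimport) on Windows when the header is consumed rather than built, but that detail is beyond this sketch.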
For the include files, you can separate them into directories and/or you can use your packaging system to just copy the public files.
