Cross-platform library - C

Basically, I want to separate some common functionality from existing projects into a separate library project, but also allow a project to remain cross-platform when I include this library.
I should clarify that when I say "cross-platform" I'm primarily concerned with compiling for multiple CPU architectures (x86/x86_64/ARM).
I have a few useful functions which I use across many of my software projects. So I decided that it was bad practice to keep copying these source code files between projects, and that I should create a separate library project from them.
I decided that a static library would suit my needs better than a shared library. However, it occurred to me that the static library would be platform dependent, and including it with my projects would cause those projects to also be platform dependent. This is clearly a disadvantage compared with including the source code itself.
Two possible solutions occur to me:
1. Include a static library compiled for each platform.
2. Continue to include the source code.
I do have reservations about both of the above options. Option 1 seems overly complex/wasteful. Option 2 seems like bad practice, as it's possible for the "library" to be modified per project and become out-of-sync; especially if the library source code is stored in the same directory as all the other project source code.
I'd be really grateful for any suggestions on how to overcome this problem, or for information on how anyone else has overcome it before.

You could adopt the standard approach of open-source projects (even if your project is not open source). There would be one central point where one can obtain the source code, presumably under revision control (Subversion, Git, ...). Anyone who wishes to use the library checks out the source code, compiles it (a Makefile or something similar should be included), and then they are all set. If someone needs to change something in the library, they do so, test their changes, and send you a patch so that you can apply the change to the project (or not, depending on your opinion of the patch).

Related

Synchronize single c header file between two projects

I have a radio chip (connected to an embedded processor) which I have written a library for. I want to develop the protocol to use with the RF chip on a PC (Ubuntu). To do so, I copied the header file of my library into a new folder, created an entirely new implementation in a new C file, and compiled it for the PC with gcc. This approach has worked better than expected, and I'm able to prototype code that calls the RF library on the PC and simply copy it right over to the real project with few or no changes.
I do have one small problem. Any changes I make in the library's header file need to be manually copied between the two project folders. Not a big deal, but since this has worked so well, I can see doing things like this again in the future, and I would like to link the API headers between the real and "emulated" environments when doing so. I have thought about using git submodules, but I'm not fond of lots of folders in my projects, especially if most of them only contain one or two files each. I could use the C preprocessor to swap in the right code at compile time, but that doesn't cover the changes in my Makefile to call the right compiler with the right flags.
I'm wondering if anyone else has ever done something similar, and what their approach was.
Thanks guys!
Maybe you should create an "rflib" and treat it as an external library that you use within your embedded project.
Develop on one side and update to the newest version on the other.
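To make that concrete, here is a minimal sketch of what such a shared header might look like. The names (rflib.h, rf_init, and so on) are mine, not taken from the question; the point is that both the embedded implementation and the PC emulation include this single header, so the API can only drift if the header itself changes.

/* rflib.h - hypothetical shared API header for the radio library */
#ifndef RFLIB_H
#define RFLIB_H

#include <stdint.h>
#include <stddef.h>

/* Initialize the radio: real SPI setup on the target, a socket or
   stub on the PC. Returns 0 on success. */
int rf_init(void);

/* Send and receive raw frames; both implementations must honor this
   exact contract. */
int rf_send(const uint8_t *buf, size_t len);
int rf_recv(uint8_t *buf, size_t maxlen);

#endif /* RFLIB_H */

Each project's Makefile then pairs this one header with a different implementation file (say, rflib_hw.c on the target and rflib_sim.c on the PC), so there is a single authoritative header instead of two copies to keep in sync.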
An obvious (but fairly hacky) solution is to use a symlink.
I think the best solution, since they will share so much code, would be to just merge the two projects and have two different makefile targets for the binaries.

XC8 Library organization and #defines across multiple source files

This is a complicated post so please be patient. I have tried to condense it as much as possible...
I am coming to XC8 from a different tool chain for PIC microcontrollers. With the previous compiler, setting up my own libraries and using defines across those libraries seemed to be much easier.
I want to write my own libraries of functions for re-use. I also want to store them all in a directory structure of my own choosing (this is so that they sync automatically between multiple machines and for various other reasons). Here is a simplified fictional file structure.
\projects\my_project //the current project directory
\some_other_directory\my_library\comms_lib //my communications library
\some_other_directory\my_library\adc_lib //my ADC library
Now let's say, for argument's sake, that each of my libraries needs the __XTAL_FREQ definition. The frequency will likely be different for each project.
Here are my questions:
What is the best/most efficient way to tell the compiler where my library files are located?
Short of adding __XTAL_FREQ to every header file how do I make the define available to all of them?
Likely someone is going to say that it should be in a separate header file (let's call it project_config.h). This file could then be located with each future project and changed accordingly. If the separate header file is the answer, then the question that follows is: how do I get the library headers (which are not in the same directory as the project) to reference the project_config.h file correctly for each new project?
Thanks in advance, Mark
If you are using MPLABX, you could consider making one or more library projects for your libraries, which can then be included from other MPLABX projects.
As for a global definition of __XTAL_FREQ, it should be possible to pass a symbol definition on the compiler command line, though I'm not sure.
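One way to combine the config-header and command-line ideas, sketched below with invented file names and the macro name from the question: each library header refuses to compile unless the macro is defined, and each project supplies it either from its own config header or with a -D option on the compiler command line.

/* project_config.h - lives in the project directory, one per project */
#ifndef PROJECT_CONFIG_H
#define PROJECT_CONFIG_H
#define __XTAL_FREQ 8000000UL  /* this project's oscillator frequency */
#endif /* PROJECT_CONFIG_H */

/* comms_lib.h - a library header under my_library; it never names
   project_config.h, it only insists that the macro exists and fails
   loudly otherwise */
#ifndef COMMS_LIB_H
#define COMMS_LIB_H
#ifndef __XTAL_FREQ
#error "Define __XTAL_FREQ in a project header or with -D__XTAL_FREQ=..."
#endif
void comms_init(void);
#endif /* COMMS_LIB_H */

Because the library headers never hard-code a path, each new project only needs to add its own directory and the library directories to the compiler's include search path, and to include project_config.h before any library header (or pass the -D flag).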

What is the common code reuse strategy in C

Context: C language, 8 bit microprocessor
We have identified components which can be reused between projects (products), but I cannot work out the best infrastructure for handling the reusable components.
Two possibilities I have found so far:
Static libraries
Shared files in Subversion
Both static libraries and shared source files let you share common code among projects. Libraries are the better of the two alternatives, so you should use them if they are available on your platform. This lets you guard the library's source against inadvertent modifications, which could happen if the code from source control is changed locally.
The only problem with sharing code through libraries may be a lack of support for source-level debugging of library code by some of the tools in your embedded tool chain (e.g. debuggers attached to in-circuit emulators). In that case, reusing code through the source may be acceptable. If possible, you should guard the source from modification through file-system access controls.
If you have reusable components, libraries are the way to go.
It's easier to maintain and you have a clear interface. It's also easier to incorporate into new projects.
You can easily do individual unit tests on library code
Less risk of copy-and-pasted code.
Programmers are more aware that this code is shared when they have to use it from a library.
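As a small illustration of the clear-interface and unit-test points, here is a hypothetical reusable component; the names and the CRC-8 variant (polynomial 0x07, zero initial value) are mine, not from the question.

/* crc8.h - the entire public interface of the component */
#ifndef CRC8_H
#define CRC8_H
#include <stdint.h>
#include <stddef.h>
uint8_t crc8(const uint8_t *data, size_t len);
#endif /* CRC8_H */

/* crc8.c - the implementation, compiled once into the library */
#include "crc8.h"

uint8_t crc8(const uint8_t *data, size_t len)
{
    uint8_t crc = 0x00;
    for (size_t i = 0; i < len; i++) {
        crc ^= data[i];
        for (int b = 0; b < 8; b++)
            crc = (crc & 0x80) ? (uint8_t)((crc << 1) ^ 0x07)
                               : (uint8_t)(crc << 1);
    }
    return crc;
}

/* crc8_test.c - a unit test that links only against the library,
   which is exactly what the library boundary buys you */
#include <assert.h>
#include "crc8.h"

int main(void)
{
    const uint8_t msg[] = "123456789";
    assert(crc8(msg, 9) == 0xF4);  /* the standard CRC-8 check value */
    return 0;
}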
Several good arguments have been made for the library approach.
However, there's at least one good argument for re-building (perhaps from the same source repository) each time you build a dependent project: the ability to apply target-, project-, or development-stage-specific compile settings to all of the code, including the shared portion.
At my company, we used both approaches at the same time:
We do two checkouts: one for the project, the other for the library.
When the project needs to be compiled (via Makefile), we compile the library first.
The library is then linked as if it was a binary-only library.
When we release a project, we check whether the other projects still compile against the new library.
When we release a project, we tag the library along with the project.
This way you get the best of both worlds:
common code is shared: all projects benefit from bug fixes and improvements
source code is always fully available for understanding and debugging
source code availability encourages library maintenance (fixings, improvements, and experiments)
the library boundaries impose a more API-like approach: clearer interface and project embedding
you can pass compile-time flags to the library to build different flavors (see the sketch after this list)
you can always go back in time if needed without library-vs-project mismatching hassles
if you are in a hurry, you can put off the library check.
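As a minimal sketch of the flavors point, with a made-up flag name: building the library with -DLIB_VERBOSE produces a chatty build for development, while the default build compiles the logging away entirely.

/* lib_log.c - part of the shared library; LIB_VERBOSE is a
   hypothetical compile-time flag, not an established convention */
#include <stdio.h>

void lib_log(const char *msg)
{
#ifdef LIB_VERBOSE
    fprintf(stderr, "[lib] %s\n", msg);
#else
    (void)msg;  /* quiet flavor: no I/O at all */
#endif
}

Because every project rebuilds the library from its own checkout, each one can pick its own flavor without forking the shared source.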
The only drawback to this approach is that developers have to know what they are doing: if they modify the library, they should know that the change will impact all projects. But you are already using a version control system, and if you use branches and the communication within your team is good, there should be no problem at all.

Including third-party libraries in C applications

I'm a bit naive when it comes to application development in C. I've been writing a lot of code for a programming language I'm working on and I want to include stuff from ICU (for internationalization and unicode support).
The problem is, I'm just not sure if there are any conventions for including a third-party library. For something like readline, where lots of systems are probably going to have it installed already, it's safe to just link to it (I think). But what if I wanted to include a version of the library in my own code? Is this common, or am I thinking about this all wrong?
If your code requires 3rd party libraries, you need to check for them before you build. On Linux, at least with open-source, the canonical way to do this is to use Autotools to write a configure script that looks for both the presence of libraries and how to use them. Thankfully this is pretty automated and there are tons of examples. Basically you write a configure.ac (and/or a Makefile.am) which are the source files for autoconf and automake respectively. They're transformed into configure and Makefile.in, and ./configure conditionally builds the Makefile with any configure-time options you specify.
Note that this is really only for Linux. I guess the canonical way to do it on Windows is with a project file for an IDE...
If it is a .lib and it has no runtime-linked libraries, it gets compiled into your code. If you need to link to dynamic libraries, you will have to ensure they are there: provide an installer, or point the user to where they can obtain them.
If you are talking about shipping your software off to end users and are worried about dependencies, you have to provide correct packages/installers that include the dependencies needed to run your software, or otherwise make sure the user can get them (subject to local laws, export laws, etc., but that's all about licensing).
You could build your software and statically link in ICU and whatever else you use, or you can ship your software and the ICU shared libraries.
It depends on the OS you're targeting. For Linux and Unix system, you will typically see dynamic linking, so the application will use the library that is already installed on the system. If you do this, that means it's up to the user to obtain the library if they don't already have it. Package managers in Linux will do this for you if you package your application in the distro's package format.
On Windows you typically see static linking, which means the application bundles the library and uses that specific version. Many different applications may use the same library but include their own version, so you can have many copies of the library floating around on your system.
The problem with shipping a copy of the library with your code is that you don't get the benefit of the library's maintainers' bug fixes for free. Obscure, small, and unsupported libraries are generally worth linking statically. Otherwise I'd just add the dependency and ensure that whatever packages you ship indicate it appropriately.

Dealing with similar code in multiple C "projects"

I am playing around with some C code, writing a small webserver. The purpose of what I am doing is to write the server using different networking techniques so that I can learn more about them (multithread vs multiprocess vs select vs poll). Much of the code stays the same, but I would like the networking code to be able to be "swapped out" to do some performance testing against the different techniques. I thought about using ifdefs but that seems like it will quickly ugly up the code. Any suggestions?
Dynamic library loading? e.g. dlopen in Linux.
Just craft a common API for the component that requires dynamic loading.
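A minimal sketch of that idea, with invented backend names: each networking technique is built as its own shared object exporting the same symbol, and the server picks one at startup. Link with -ldl on Linux.

#include <stdio.h>
#include <stdlib.h>
#include <dlfcn.h>

typedef int (*serve_fn)(int port);

int main(int argc, char **argv)
{
    /* e.g. "./httpd ./net_poll.so" - the backend is just a file name */
    const char *backend = (argc > 1) ? argv[1] : "./net_select.so";

    void *handle = dlopen(backend, RTLD_NOW);
    if (!handle) {
        fprintf(stderr, "dlopen: %s\n", dlerror());
        return EXIT_FAILURE;
    }

    /* The common API: every backend must export a function named
       "serve" with this exact signature. */
    serve_fn serve = (serve_fn)dlsym(handle, "serve");
    if (!serve) {
        fprintf(stderr, "dlsym: %s\n", dlerror());
        return EXIT_FAILURE;
    }

    int rc = serve(8080);
    dlclose(handle);
    return rc;
}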
I prefer pushing "conditional compilation" from the C/C++ source into makefiles, i.e. having the same symbols produced from multiple .c/.cpp files but only linking in the objects selected by the build option.
Also take a look at nginx if you haven't already; it might give you some ideas about web server implementation.
Compile the networking part into its own lib with a flexible interface. Compile that lib as needed into the various wrappers. You may even be able to find a preexisting lib that meets your requirements.
Put the different implementations of the networking-related functions into different .c files sharing a common header, and then link with the one you want to use. Starting from this, you can make your makefile create a different executable for each of the implementations, so you can just say "make httpd_select" or "make httpd_poll" etc.
Especially for benchmarking to find the best approach it will probably give you more reliable results to do it at the compiler/linker level than via shared libraries or function pointers as that might introduce extra overhead at runtime.
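For completeness, here is a sketch of that link-time approach; the file and function names are invented.

/* net_backend.h - the common header every implementation shares */
#ifndef NET_BACKEND_H
#define NET_BACKEND_H
int net_serve(int port);  /* each backend defines this exactly once */
#endif /* NET_BACKEND_H */

/* net_select.c - the select() flavor; net_poll.c would define the
   same net_serve() using poll(), and the Makefile links exactly one
   of the two objects into each executable */
#include "net_backend.h"

int net_serve(int port)
{
    /* ... open a listening socket on `port`, then loop on select() ... */
    (void)port;
    return 0;
}

Since the choice is made at link time, "make httpd_select" and "make httpd_poll" produce binaries with no runtime indirection left in them, which helps keep the benchmark numbers comparable.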
