I have a radio chip (connected to an embedded processor) for which I have written a library. I want to develop the protocol to use with the RF chip on a PC (Ubuntu). To do so, I copied my library's header file into a new folder, wrote an entirely new implementation in a new C file, and compile for the PC with gcc. This approach has worked better than expected: I'm able to prototype code that calls the RF lib on the PC and simply copy it right over to the real project with little or no changes.
I do have one small problem. Any changes I make in the library's header file need to be manually copied between the two project folders. Not a big deal, but since this has worked so well, I can see doing things like this again in the future, and would like to link the API headers between the real and "emulated" environments when doing so. I have thought about using git submodules, but I'm not fond of lots of folders in my projects, especially if most of them only contain one or two files each. I could use the C preprocessor to swap in the right code at compile time, but that doesn't cover the changes in my Makefile to call the right compiler with the right flags.
I'm wondering if anyone else has ever done something similar, and if so, what their approach was.
Thanks guys!
Maybe you should create an "rflib" and treat it as an external library that you use within your embedded project.
Develop on one side and update to the newest version on the other.
An obvious (but fairly hacky) solution is to use a symlink.
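For example, assuming the two project folders sit side by side (the paths here are made up):

    # From inside the PC project's folder, link to the real project's header:
    ln -s ../rf_project/rflib.h rflib.h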
I think the best solution, since they will share so much code, would be to just merge the two projects and have two different makefile targets for the binaries.
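A minimal sketch of what that could look like, assuming a hypothetical layout where rflib.h is the shared header, rflib_hw.c is the real driver, rflib_pc.c is the PC emulation, and the cross-compiler is arm-none-eabi-gcc:

    # One tree, two targets: the header is shared; only the
    # implementation file and the compiler differ per target.
    CROSS_CC = arm-none-eabi-gcc

    proto_pc: protocol.c rflib_pc.c rflib.h
	gcc -o $@ protocol.c rflib_pc.c

    proto_embedded: protocol.c rflib_hw.c rflib.h
	$(CROSS_CC) -o $@ protocol.c rflib_hw.c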
Related
I have recently tried to make a few basic projects in C (using CMake), but one aspect I find very difficult is getting all the different things I've been making to link together nicely. For example, I started off by making a data-structure library that provides some of the basic data structures, along with functions to traverse them, etc., and a testing library that handles unit testing. In most new projects I make, I find that I need to include these two libraries, but I can't find an easy way to do it. I tried doing this with git submodules, and while that did work for the most part, whenever I updated any of the dependencies, updating the dependent project seemed to be a nightmare. I've also had a look at the CMake package system (find_package and related functions), but I can't seem to get that to work, at least when I want to install to a custom directory.
I was wondering if there is some sort of "standard" way that C programmers go about dealing with this, and what that may be. Are submodules the way to go? If so, is there a way I could do it cleanly, making sure that everything is always at the right version?
Thanks in advance.
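For reference, the submodule arrangement described above usually looks something like the following, with made-up project names, and assuming each submodule ships its own CMakeLists.txt defining a library target:

    # Vendor the libraries as submodules:
    #   git submodule add https://example.com/me/datastructs external/datastructs
    #   git submodule add https://example.com/me/unittest    external/unittest

    # CMakeLists.txt of the dependent project:
    add_subdirectory(external/datastructs)
    add_subdirectory(external/unittest)

    add_executable(myapp main.c)
    target_link_libraries(myapp datastructs unittest)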
Basically, I want to separate some common functionality from existing projects into a separate library project, but also allow a project to remain cross-platform when I include this library.
I should clarify that when I say "cross-platform" I'm primarily concerned with compiling for multiple CPU architectures (x86/x86_64/ARM).
I have a few useful functions which I use across many of my software projects. So I decided that it was bad practice to keep copying these source code files between projects, and that I should create a separate library project from them.
I decided that a static library would suit my needs better than a shared library. However, it occurred to me that the static library would be platform dependent, and that including it with my projects would cause those projects to also be platform dependent. This is clearly a disadvantage over including the source code itself.
Two possible solutions occur to me:
1. Include a static library compiled for each platform.
2. Continue to include the source code.
I do have reservations about both of the above options. Option 1 seems overly complex/wasteful. Option 2 seems like bad practice, as it's possible for the "library" to be modified per project and become out-of-sync; especially if the library source code is stored in the same directory as all the other project source code.
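For concreteness, option 1 could look like this hypothetical Makefile, building the same source into one archive per target architecture (the compiler names are just examples):

    CC_X86 = gcc
    CC_ARM = arm-linux-gnueabihf-gcc

    all: x86_64/libutils.a arm/libutils.a

    x86_64/libutils.a: utils.c
	mkdir -p x86_64
	$(CC_X86) -c utils.c -o x86_64/utils.o
	ar rcs $@ x86_64/utils.o

    arm/libutils.a: utils.c
	mkdir -p arm
	$(CC_ARM) -c utils.c -o arm/utils.o
	arm-linux-gnueabihf-ar rcs $@ arm/utils.o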
I'd be really grateful for any suggestions on how to overcome this problem, or information on how anyone else has handled it before.
You could adopt the standard approach of open source projects (even if your project is not open source). There would be one central point where the source code can be obtained, presumably under revision control (Subversion, Git...). Anyone who wishes to use the library checks out the source code, compiles it (a Makefile or something similar should be included), and then they are all set. If someone needs to change something in the library, they do so, test their changes, and send you a patch so that you can apply the change to the project (or not, depending on your opinion of the patch).
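In practice the workflow is just something like this (the URL is hypothetical):

    # Consumer: fetch and build the library
    git clone https://example.com/yourname/utils.git
    cd utils && make

    # Contributor: send a change back upstream
    git commit -am "fix edge case in list traversal"
    git format-patch origin/master   # produces patch files to mail to the maintainer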
Apologies if I explain this badly or am asking something bleeding obvious but I'm new to the Linux kernel and kinda in at the deep end...
We have an embedded-Linux system which arrives with a (very badly documented) SDK containing hundreds of folders of stuff, most folders containing a rules.make, make, make.config or some variation thereof... and the root folder containing a "master" makefile & rules.make, which mean that you can, from the root folder, type "make sysall" and it builds the entire package.
So far so good, but trying to debug it is a bit of an issue as the documentation will say something like:
"To get the kernel to output debug messages, just define #outputdebugmessagesplz"
OK, but some of these things are defined in the "master" make/rules file, some of these are defined in the child make/rules/config files, some are in .h files... and of course it's far nicer to turn these things on/off from the "top" make.config rather than modifying individual .h files and then having to remember to turn them off again.
So I thought it would be a useful thing to recursively build a tree, starting from the master "make" file and following everything it does, everything that gets defined or re-defined, etc... but there doesn't seem to be a simple way of doing that?
I assume I am missing a "make" option here that spits this info out, or a usage of the makefile/config that will just work?
Your situation is not uncommon. When developing for embedded systems, you often encounter custom systems that solve a problem in a specific way. As people already commented on your question, there's no easy way to generate a dependency graph for your makefile structure/framework. But there are some things you can try, and I'll try to tailor my suggestions to your situation. Since you've said:
I'm new to the Linux kernel and kinda in at the deep end...
and
We have an embedded-Linux system which arrives with a (very badly documented) SDK containing hundreds of folders of stuff
You could try the following things:
If your SDK is provided by a third-party vendor, try contacting them to get some support.
SDKs usually provide an abstraction that lets you work with several components without a deep understanding of how each one of them really works. Try to pinpoint your problem: if, for example, you only want to customize the kernel configuration, you could find the Linux kernel folder in your SDK (assuming your SDK is composed of a set of folders with things like libraries, application source code and such, one of which is the kernel) and run make menuconfig there. This will open an ncurses-based configuration interface that you can navigate to choose kernel options.
As people already pointed out, you can try running make -n and checking the output. You could also try running make -p | less and inspecting the output, but I don't recommend this, since it only prints the database (rules and variable values) that results from reading the makefiles; you would have to parse this output to find what you want in it.
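For example, from the SDK's root folder:

    # Dry run: print the commands "make sysall" would execute, without running them
    make -n sysall | less

    # Dump make's internal database (every rule and variable value after all
    # makefiles have been read); it is large, so redirect it to a file
    make -p -n sysall > make-db.txt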
Basically, you should try to pinpoint what you want to customize and see how it interacts with your SDK. If it's the kernel, then working only with it will give you a starting point. The Linux kernel has its own makefile-based build system, named kbuild. You can find more information about it in the kernel's Documentation folder.
Besides that, trying to understand how makefiles work will help you if you have a complex makefile structure controlling several components. The following are good resources to learn about makefiles:
GNU Make official documentation
O'Reilly's Open Book "Managing Projects with GNU Make"
Also, before trying to build your own tool, check whether there's an open source project that does what you want. A quick search on Google gave me this:
makegrapher
Also, check this question and this one. You might find useful information from people that had the same problems as you did.
Hope it helps!
I'm trying to port my project to another platform and I've found a few differences between this new platform and the one I started on. I've seen the autotools package and configure scripts which are supposed to help with that, but I was wondering how feasible it would be to just have a separate branch for each new platform.
The only problem I see is how to do development on the target platform and then merge in changes to other branches without getting the platform-dependent changes. If there is a way to do that, it seems to me it'd be much cleaner.
Has anyone done this who can recommend/discourage this approach?
I would definitely discourage that approach.
You're just asking for trouble if you keep the same code in branches that can't be merged. It's going to be incredibly confusing to keep track of what changes have been applied to what branches and a nightmare should you forget to apply a change to one of your platform branches.
You didn't mention the language, but whatever it is, use the features it provides to separate platform-specific code while keeping one branch. For example, in C++, you would first use file-based separation: if you have sound code for the Mac, Linux and Windows platforms, create a sound_mac.cpp, a sound_windows.cpp and a sound_linux.cpp file, each containing the same classes and methods but with very different platform-specific implementations. Obviously, you only add the appropriate file to the IDE on the particular platform: your Xcode project gets the sound_mac.cpp file, while your Visual Studio project uses the sound_windows.cpp file. The files which reference those classes and methods use #ifdefs to determine which headers to include.
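In C the same idea looks roughly like this (the names are borrowed from the example above and purely illustrative):

    /* sound.h -- one interface, shared by every platform */
    #ifndef SOUND_H
    #define SOUND_H
    void sound_init(void);
    void sound_play(const char *path);
    #endif

    /* sound_mac.c, sound_windows.c and sound_linux.c each implement these
     * functions; each platform's build compiles only its own file.  Where a
     * platform-specific header must be chosen, an #ifdef does the switching: */
    #if defined(_WIN32)
    #  include "sound_windows.h"
    #elif defined(__APPLE__)
    #  include "sound_mac.h"
    #else
    #  include "sound_linux.h"
    #endif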
You'll use a similar approach for things like installer scripts. You may have a different installer on the Mac than on Windows, but the files for both will be in the branch. Your build script on the Mac will simply utilize the Mac-specific installer files and ignore the Windows-specific files.
Keeping things in one branch and just ignoring what doesn't apply to the current platform allows you merge back and forth between topic branches and the master, making your life much more sane.
Branching to work out compatibility for a target platform is doable. Just be sure to separate out changes that don't have to do with the target platform specifically into another branch.
I am playing around with some C code, writing a small webserver. The purpose of what I am doing is to write the server using different networking techniques so that I can learn more about them (multithread vs multiprocess vs select vs poll). Much of the code stays the same, but I would like the networking code to be able to be "swapped out" to do some performance testing of the different techniques. I thought about using #ifdefs, but that seems like it would quickly ugly up the code. Any suggestions?
Dynamic library loading? e.g. dlopen in Linux.
Just craft a common API for the component that requires dynamic loading.
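A minimal sketch of that approach, assuming each networking technique is built as a shared object exporting a hypothetical net_serve() entry point:

    #include <dlfcn.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        if (argc < 2) {
            fprintf(stderr, "usage: %s <impl.so>\n", argv[0]);
            return 1;
        }

        /* e.g. ./server ./net_select.so -- the .so name picks the technique */
        void *handle = dlopen(argv[1], RTLD_NOW);
        if (!handle) {
            fprintf(stderr, "dlopen: %s\n", dlerror());
            return 1;
        }

        /* net_serve is a hypothetical function each implementation exports */
        int (*net_serve)(int) = (int (*)(int))dlsym(handle, "net_serve");
        if (!net_serve) {
            fprintf(stderr, "dlsym: %s\n", dlerror());
            dlclose(handle);
            return 1;
        }

        int rc = net_serve(8080);
        dlclose(handle);
        return rc;
    }

Each technique then lives in its own shared object, built with something like gcc -shared -fPIC net_select.c -o net_select.so, and the main binary links with -ldl.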
I prefer pushing "conditional compilation" from the C/C++ source into makefiles, i.e. having the same symbols produced from multiple .c/.cpp files but only linking in the objects selected by the build option.
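For instance, a single make variable can select which object file provides the networking symbols (the variable and file names are made up):

    # Build with "make NET=poll" (or select, thread); defaults to select
    NET ?= select

    httpd: httpd.o net_$(NET).o
	$(CC) -o $@ $^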
Also take a look at nginx if you haven't already - might give you some ideas about web server implementation.
Compile the networking part into its own lib with a flexible interface. Compile that lib as needed into the various wrappers. You may even be able to find a preexisting lib that meets your requirements.
Put the different implementations of the networking-related functions into different .c files sharing a common header, and then link with the one you want to use. Starting from this, your makefile can create a different executable for each of the implementations, so you can just say "make httpd_select" or "make httpd_poll", etc.
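That could look something like the following, where net.h is the shared header and the file names are hypothetical:

    IMPLS = select poll thread

    all: $(IMPLS:%=httpd_%)

    # The threaded build additionally needs pthreads
    httpd_thread: LDLIBS += -lpthread

    # Each executable pairs the common code with one implementation of net.h
    httpd_%: httpd.o net_%.o
	$(CC) -o $@ $^ $(LDLIBS)

    httpd.o net_select.o net_poll.o net_thread.o: net.h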
Especially for benchmarking to find the best approach, doing it at the compiler/linker level will probably give you more reliable results than shared libraries or function pointers, which might introduce extra overhead at runtime.