Hey guys,
I want to make a self-contained C project so that it is machine-independent.
An example? I want to "make all" my project on a machine where the external libraries are not installed (but are included in my project), and I want everything to keep working :)
The library I'm talking about is the GSL, you can find it in the libgsl0-dev ubuntu package.
Now I want to include all the header and .c files in my project, uninstall the package, and have the project build and run as before :)
Ideas?
Thanks!
Bye!
Don't forget about dependencies.
There are reasons why libraries like GSL are distributed as independent entities:
Users can upgrade the library independently of the software that uses it, saving you from having to constantly update your project when the GSL version changes.
Licensing issues.
Dependencies. If GSL has dependencies and you want to build GSL as part of your project, then you will also need to include ALL the source code for ALL of its dependencies... and their dependencies... and their dependencies... and so on. If you are going to make it a requirement that some sub-dependency is already installed, then you may as well make it a requirement that GSL is already installed.
Other reasons I can't be bothered to think up because I have other things to do.
Just copy the library's source code somewhere into your project's hierarchy, and start either creating or modifying Makefiles (or whatever GSL uses) to get it to build.
For instance, you could have it in a directory external/libgsl, and then set up a Makefile target for your project that does the building. Then you make your project's code dependent on the library's, so that the library is always built first.
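To make that concrete, a top-level Makefile could build the bundled copy into a project-local prefix and make your program depend on the result. This is only a sketch: the directory names, prefix and program name below are placeholders, though GSL itself really does ship a configure script and installs libgsl.a plus libgslcblas.a:

    # Bundled GSL lives in external/libgsl; it gets installed into a local prefix.
    GSL_DIR    := external/libgsl
    GSL_PREFIX := $(CURDIR)/external/install

    myprog: main.o $(GSL_PREFIX)/lib/libgsl.a
            $(CC) -o $@ main.o -L$(GSL_PREFIX)/lib -lgsl -lgslcblas -lm

    main.o: main.c $(GSL_PREFIX)/lib/libgsl.a
            $(CC) -I$(GSL_PREFIX)/include -c main.c

    # Build the bundled library first, so it always exists before your own code compiles.
    $(GSL_PREFIX)/lib/libgsl.a:
            cd $(GSL_DIR) && ./configure --prefix=$(GSL_PREFIX) && $(MAKE) && $(MAKE) install

(Recipe lines must be indented with real tabs.)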
Of course, you also need to think about any license issues that might arise if/when you distribute your project.
Coming from programming environments that have package managers, I experience a lot of discomfort installing and using libraries that are not included by default.
For example, #include <threads.h> triggers an error: 'threads.h' file not found. I found that the compiler looks for header files in /Library/Developer/CommandLineTools/usr/include/c++/v1 by issuing gcc -print-prog-name=cpp -v. I am not sure if this is a complete folder list? How do I find the ones that it doesn't find by default? I am on OSX, but a Windows solution is also desired.
The question doesn't really say whether you are building your own project, or someone else's, and whether you use an IDE or some build system. I'll try to give a generic answer suitable for most scenarios.
But first, it's header files, not libraries (which are a different kind of pain, by the way). You need to explicitly make them available to the compiler, unless they reside on a standard search path. Alas, it's a lot of manual work sometimes, especially when you need to build a third-party project with a ton of dependencies.
I am not sure if this is a complete folder list?
Figuring out the standard include paths of your compiler can be tricky. Here's one question that has some hints: What are the GCC default include directories?
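You can also just ask the compiler itself; the directories are printed between "#include <...> search starts here:" and "End of search list.":

    # Works for gcc and clang; use -xc++ instead of -xc for the C++ search path.
    echo | gcc -xc -E -v -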
How do I find the ones that it doesn't find by default?
They may or may not be present on your machine. If they are, you'll have to find out where they are located. Otherwise you have to figure out what library they belong to, then download and unpack (and probably build) it. Either way, you will have to specify the path to that library's header files in your IDE (or Makefile, or whatever you use). Oh, and you need to make sure that the library version matches the version required by the project. Fun!
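For instance, in a plain Makefile it usually boils down to a couple of flags; the foo library name and prefix below are made up purely for illustration:

    # Headers and libs unpacked or installed under a non-standard prefix.
    FOO_PREFIX := $(HOME)/opt/foo

    CFLAGS  += -I$(FOO_PREFIX)/include
    LDFLAGS += -L$(FOO_PREFIX)/lib
    LDLIBS  += -lfoo

    myprog: main.c
            $(CC) $(CFLAGS) -o $@ main.c $(LDFLAGS) $(LDLIBS)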
On macOS you can use third-party package managers (e.g. brew) to handle library installation for you.
pkg-config is not available on macOS, unless you install it from a third-party source.
If you are building your own project, a somewhat better solution is to use CMake and its find_package command. However, only libraries supported by CMake can be discovered this way. Fortunately, the collection of bundled find modules is quite extensive, and you can write your own find_package scripts. Moreover, CMake is cross-platform, and it can handle versioning for you.
I'm very new to C programming and I'm trying to understand the "idiomatic way" to install a 3rd-party library that I'm planning to use in my project.
In the JVM world I come from, we have public repositories, and the build system does all the dependency downloading for us. Is that the way to go when it comes to developing native applications in C?
In my particular case it is libcurl and I want to make sure it is installed correctly. As a build system I use Make (not CMake).
Would it be correct to add a specific target (e.g. bootstrap, which would set up all the necessary dependencies) for that?
Strictly speaking, I'm not sure whether such dependency installation is a Makefile's responsibility.
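To make the idea concrete, I imagine something roughly like this (the version number, URL, prefix and configure flags are only placeholders):

    # Hypothetical bootstrap target: fetch and build libcurl into a project-local prefix.
    CURL_VERSION := 7.88.1
    DEPS_PREFIX  := $(CURDIR)/deps

    .PHONY: bootstrap
    bootstrap:
            mkdir -p $(DEPS_PREFIX)/src
            curl -L -o $(DEPS_PREFIX)/src/curl.tar.gz \
                https://curl.se/download/curl-$(CURL_VERSION).tar.gz
            tar xzf $(DEPS_PREFIX)/src/curl.tar.gz -C $(DEPS_PREFIX)/src
            cd $(DEPS_PREFIX)/src/curl-$(CURL_VERSION) && \
                ./configure --prefix=$(DEPS_PREFIX) --with-openssl && \
                $(MAKE) && $(MAKE) install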
When you build on Linux using the autotools, the configure script will check whether the given library is present on the system. If it is missing, configure will stop and notify the user. The user then has the chance to install the library from the system's repositories.
The same goes for CMake, where you can declare the dependency; when you try to build with the library missing, CMake will notify you.
This is somewhat different from e.g. Maven in the Java world, which automatically downloads the dependencies. That is not the case with make or CMake.
If you are on Linux, then this might be helpful. There is a canonical way: "autotools". It gives you the possibility to write a script that checks that a library exists and then uses it. I'm not very familiar with this process, but it's pretty configurable and you can find dozens of examples and tutorials on "autotools". So, if this is your case, I suggest you check it out.
In my experience, I always used CMake.
I have a tool in C that needs the libpng, zlib and lcms libraries. On unix I get these dependencies via pkg-config, but on Windows I can't rely on it, so for users building the library it's a massive hassle to obtain and build these dependencies manually.
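For reference, the unix side is roughly the following (these are the pkg-config module names as my systems ship them; Little CMS 2 registers itself as lcms2, and the target name is just an example):

    CFLAGS += $(shell pkg-config --cflags libpng zlib lcms2)
    LDLIBS += $(shell pkg-config --libs   libpng zlib lcms2)

    mytool: main.c
            $(CC) $(CFLAGS) -o $@ main.c $(LDLIBS)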
How can I automate obtaining these dependencies in the most Windows-native way? I know MinGW helps, but that's a bit of a cop-out. I'd like to learn how to do it with the Microsoft toolchain.
Is there any point in searching for shared non-Microsoft libraries on Windows, or should I go straight for statically linking my own?
If I were to download and build the libraries as part of my build script, what should I use? (nuget? curl? ftp.exe?)
It seems like Microsoft is discouraging use of NuGet for C? https://github.com/Microsoft/vcpkg/blob/master/docs/about/faq.md#why-not-nuget
I recommend looking into NuGet. It's fast becoming the Microsoft standard for this sort of thing. A lot of people think NuGet is just for .NET, but it works great for VC++ too. I have had a great deal of success setting up NuGet servers in my company to serve headers and compiled libs, and I've gotten all of our automated build systems to create these things automatically. I'm not going to spell out all the details here, but it basically comes down to performing your build, creating an MSBuild XML snippet that sets up the compiler and linker options automatically, and then packaging all those bits (headers, libs, XML) in the correct way. When done correctly, using an existing package in a new project is just a two-step process -- establish the NuGet reference and then add the #includes to your code. You don't even need a server -- you can source packages directly from a directory under your own control.
https://learn.microsoft.com/en-us/nuget/create-packages/native-packages
We have a project that is going to require linking against libcurl and libxml2, among other libraries. We seem to have essentially two strategies for managing these dependencies:
Ask each developer to install those libraries under the "usual" locations, e.g. /usr/lib, or
Include the sources to these libraries under a dedicated folder in the project's source tree.
Approach 1 requires everyone to make sure those libraries are installed on their system, but appears to be the approach used by many open source projects. On such projects, the build will detect that those libraries are missing and will fail.
Approach 2 might make the project tree unmanageably large in some instances and make the compilation time much longer. In addition, this approach can obviously be taken too far. I wouldn't put the compiler under the project tree, for instance (right?).
What are the best practices with respect to external dependencies? Can/should one require every developer to have certain libraries installed in order to build the project? Or is it considered better to include all the dependencies in the project tree?
Don't worry about their exact location in your code. If they're common, locating them should be handled by the compiler/linker (or by the user setting variables). For very uncommon dependencies (or ones with customized/modified files) you might want to include them in your source (if possible due to licensing etc.).
If you'd like it to be more convenient, you should use some script (e.g. configure or CMake) to set up/create the build files being used. CMake, for example, allows you to mark different packages (libcurl and libxml2 in your example) as optional or required. When building the project it will try to locate them; if that fails it will ask the user. This IS an additional step and might make building a bit more cumbersome, but it also makes downloading faster (smaller source) and updating easier (as all you have to do is rebuild your program).
So in general I'd follow approach 1, if there's special/rare/customized stuff being used, approach 2.
The normal way is to declare the respective dependencies and have the developer install them. Later, if the project is packaged into a .deb or .rpm, these packages will require the respective libraries to be installed, and the source packages will have the -devel packages as dependencies.
Best practice is not to include the external libraries in your source tree - instead, include a text file called INSTALL in your project root, which gives instructions on building the project and includes a list of the library dependencies, including minimum versions.
I know, it must be a silly question.
Assume I have a library using an autotools build system.
I have all that configure, configure.ac, Makefile.am, config.h and many other files in my project root folder. Some of them were written by a developer, others are generated by autotools.
The question is: if I use a version control system (in my case - hg) - which of all those autotools files should be tracked by the VCS and which shouldn't (hgignore'd)?
Thanks,
Serge
I think the best procedure is to only put files under version control that are not generated - people working with the VCS are developers and should have the autotools installed on their machines; checking in generated files will only cause trouble for them.
On the other hand you have to make sure that source-level distribution is done with all generated files in place, so that non-developers are able to build the software without the autotools installed.
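For a typical autotools project that translates into an .hgignore along these lines (adjust it to whatever your tree actually generates):

    syntax: glob

    # generated by autoreconf / aclocal / autoconf / automake
    aclocal.m4
    autom4te.cache/
    configure
    config.h.in
    Makefile.in

    # generated by running ./configure and by the build itself
    config.h
    config.log
    config.status
    Makefile
    stamp-h1
    .deps/
    .libs/
    *.o
    *.lo
    *.la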
There are two schools of thought on this:
"I want to see the project exactly as it was at time/version X"
"I can always re-generate anything which was automatically generated later"
I generally fall into the latter group personally, but the former can be nice if there are/were problems with the build system in some specific version that you probably don't have installed any more.
In your example configure and config.h are both (probably) autogenerated, so if you're going to include them in version control I'd be inclined to include the Makefile.ins too.
In my projects this usually means having no more autotools related files than configure.ac, Makefile.am, the documentation if it's GNU and a directory called m4 which includes any custom/non-standard macros my configure.ac requires.