I know, it must be a silly question.
Assume I have a library using an autotools build system.
I have all those configure, configure.ac, Makefile.am, config.h and many other files in my project root folder. Some of them were written by a developer, others are generated by autotools.
The question is: if I use a version control system (in my case, hg), which of all these autotools files should be tracked by the VCS and which shouldn't be (i.e. hgignore'd)?
Thanks,
Serge
I think the best procedure is to only put files under version control that are not generated. People working with the VCS are developers and should have the autotools installed on their machines; checking in generated files will only cause trouble for them.
On the other hand you have to make sure that source-level distribution is done with all generated files in place, so that non-developers are able to build the software without the autotools installed.
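To make that concrete, a .hgignore for a typical autoconf/automake layout might look something like this (only a sketch; the exact set of generated files varies from project to project):

syntax: glob

# generated by autoreconf / aclocal / autoheader / automake
autom4te.cache
aclocal.m4
configure
config.h.in
Makefile.in
compile
depcomp
install-sh
missing

# generated by ./configure and make
config.log
config.status
config.h
stamp-h1
Makefile
.deps
*.o
*.lo
*.la

Everything not listed there (configure.ac, Makefile.am, the sources) is what actually gets committed.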
There are two schools of thought on this:
"I want to see the project exactly as it was at time/version X"
"I can always re-generate anything which was automatically generated later"
I generally fall into the latter group personally, but the former can be nice if there are/were problems with the build system in some specific version that you probably don't have installed any more.
In your example configure and config.h are both (probably) autogenerated, so if you're going to include them in version control I'd be inclined to include the Makefile.ins too.
In my projects this usually means having no more autotools related files than configure.ac, Makefile.am, the documentation if it's GNU and a directory called m4 which includes any custom/non-standard macros my configure.ac requires.
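With that layout, a fresh checkout is bootstrapped by regenerating everything before the usual build, typically something along these lines (the exact command or wrapper script varies between projects):

$ autoreconf --install
$ ./configure
$ make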
Coming from programming environments that support package managers, I experience a lot of discomfort installing and using libraries not included in the default project.
For example, #include <threads.h> triggers an error: threads.h file not found. I found that the compiler looks for header files in /Library/Developer/CommandLineTools/usr/include/c++/v1 by issuing gcc -print-prog-name=cpp -v. I am not sure if this is a complete folder list. How do I find the ones that it doesn't find by default? I am on OSX, but a Windows solution is also desired.
The question doesn't really say whether you are building your own project, or someone else's, and whether you use an IDE or some build system. I'll try to give a generic answer suitable for most scenarios.
But first, it's header files, not libraries (which are a different kind of pain, by the way). You need to explicitly make them available to the compiler, unless they reside on a standard search path. Alas, it's a lot of manual work sometimes, especially when you need to build a third-party project with a ton of dependencies.
I am not sure if this is a complete folder list?
Figuring out the standard include paths of your compiler can be tricky. Here's one question that has some hints: What are the GCC default include directories?
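For what it's worth, one common trick that works for both gcc and clang is to run the preprocessor in verbose mode on empty input:

$ echo | gcc -x c -E -v -

The built-in include directories are then listed between "#include <...> search starts here:" and "End of search list." (use -x c++ instead of -x c to see the C++ paths).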
How do I find the ones that it doesn't find by default?
They may or may not be present on your machine. If they are, you'll have to find out where they are located. Otherwise you have to figure out what library they belong to, then download and unpack (and probably build) it. Either way, you will have to specify the path to that library's header files in your IDE (or Makefile, or whatever you use). Oh, and you need to make sure that the library version matches the version required by the project. Fun!
On macOS you can use third-party package managers (e.g. brew) to handle library installation for you.
pkg-config is not available on macOS, unless you install it from a third-party source.
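For example, something like this (assuming Homebrew is installed; GSL is just an illustration of a library that ships a .pc file):

$ brew install pkg-config gsl
$ pkg-config --cflags --libs gsl

The second command prints the -I/-L/-l flags to pass to the compiler, so nothing has to be hard-coded.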
If you are building your own project, a somewhat better solution is to use CMake and its find_package command. However, only libraries supported by CMake can be discovered this way. Fortunately, their collection of supported libraries is quite extensive, and you can make your own find_package scripts. Moreover, CMake is cross-platform, and it can handle versioning for you.
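A minimal sketch of what that looks like, using the Threads module that ships with CMake (the project and file names here are made up):

cmake_minimum_required(VERSION 3.10)
project(demo C)

# Threads is one of the Find modules bundled with CMake itself
find_package(Threads REQUIRED)

add_executable(demo main.c)
target_link_libraries(demo PRIVATE Threads::Threads)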
I want to use a C library which was built with the autotools.
To be honest I have little to no idea how they work :/
(The library which I want to use is https://github.com/p4lang/PI)
I have executed the ./configure etc. scripts and successfully installed it.
When I search my usr I find the library under /usr/local/lib/libpi.a
and analogously the header files under /usr/local/include/PI.
I build my project with cmake and would like to have a cross platform solution with it.
However I would be satisfied to use the pkg-config command.
Does anybody know the "correct" / "recommended" way to get the cflags, or at least a variant in which I do not have to hard-code the paths?
The involvement of the Autotools ends at the point where the built artifacts are installed on the system. Using those does not go through the Autotools.* This applies just as much when the installed artifacts are libraries and headers as when they are executables. There's nothing special or different about using Autotools-built programs or libraries.
I build my project with cmake and would like to have a cross platform solution with it. However I would be satisfied to use the pkg-config command.
Just like projects served by any other build system, Autotools projects can build and install pkg-config configuration files, or CMake macros, or whatever other bits and pieces they might think appropriate to assist users, but this is project-specific. The Autotools do not create such additional pieces of their own accord, but some Autotools-based projects do add them. And some don't, just like some CMake projects don't, and some projects with hand-rolled build systems don't, etc..
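If the library does install a .pc file, the flags can be queried instead of hard-coded. In a plain Makefile that is just (the package name "libpi" is a guess; it has to match whatever .pc file the project actually installs):

CFLAGS += $(shell pkg-config --cflags libpi)
LDLIBS += $(shell pkg-config --libs libpi)

GNU make's built-in compile and link rules will then pick up both variables.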
Does anybody know the "correct" / "recommended" way to get the cflags, or at least a variant in which I do not have to hard-code the paths?
Note that typically, for a library whose name you know, the only flags you might need are those specifying the location of the library headers and / or one specifying the location of the libraries themselves. Even these are unnecessary if the relevant pieces are installed in places that the compiler looks by default. Also these are generally not considered CFLAGS, per se. Terminology varies a bit, but the former is a preprocessor flag, and the latter is a link flag.
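With the paths from the question, and assuming /usr/local is not already on your compiler's default search path, a manual invocation would look roughly like this (the sources would then include the headers as <PI/...>):

cc -I/usr/local/include -c myprog.c
cc myprog.o -L/usr/local/lib -lpi -o myprog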
Since you're using CMake, you could consider writing CMake code to search likely directories for the wanted libraries and headers, and to set the results in suitable variables for other code to use. That's more of an Autotools-style approach, though. Alternatively, you could define a user-set variable by which the wanted location(s) can be specified to CMake. This assumes that the third-party project is not already providing something useful for the purpose. Or, licensing permitting, you could package the third-party library together with your own, so that you are in control of where it gets installed.
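A rough sketch of the search-based approach in CMake (the variable names and the header file name are hypothetical; adjust them to whatever the PI project actually installs):

# can be overridden from the command line: cmake -DPI_ROOT=/some/prefix ..
set(PI_ROOT "/usr/local" CACHE PATH "Install prefix of the PI library")

find_path(PI_INCLUDE_DIR NAMES PI/pi.h HINTS "${PI_ROOT}/include")
find_library(PI_LIBRARY NAMES pi HINTS "${PI_ROOT}/lib")

if(NOT PI_INCLUDE_DIR OR NOT PI_LIBRARY)
  message(FATAL_ERROR "PI not found; pass -DPI_ROOT=<prefix> to cmake")
endif()

add_executable(myprog main.c)
target_include_directories(myprog PRIVATE "${PI_INCLUDE_DIR}")
target_link_libraries(myprog PRIVATE "${PI_LIBRARY}")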
In the general case, however, this is simply something that people have to deal with themselves when they build software. Make life easier for them by providing good documentation of what your project's dependencies are, and of how to inform the build system of their locations, and make useful provisions for feeding that information into the build system.
*An exception could be asserted for use of libtool archives, which an Autotools project might install alongside regular libraries -- if one wanted to use those, they would directly or indirectly go through libtool. But in practice, that's only going to happen in another Autotools project.
I have been reading up on make and looking at the Makefiles for popular C projects on GitHub to cement my understanding.
One thing I am struggling to understand is why none of the examples I've looked at (e.g. lz4, linux and FFmpeg) seem to account for header file dependencies.
For my own project, I have header files that contain:
Numeric and string constants
Macros
Short, inline functions
It would seem essential, therefore, to take any changes to these into account when determining whether to recompile.
I have discovered that gcc can automatically generate Makefile fragments from dependencies as in this SO answer but I haven't seen this used in any of the projects I've looked at.
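As far as I can tell, using those fragments amounts to something like this (my own sketch based on that answer; recipe lines must be indented with a tab):

SRCS := $(wildcard *.c)
OBJS := $(SRCS:.c=.o)
CFLAGS += -MMD -MP        # emit a .d dependency fragment per object file

prog: $(OBJS)
	$(CC) $(OBJS) -o $@

-include $(OBJS:.o=.d)    # pull the generated fragments in on subsequent runs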
Can you help me understand why these projects apparently ignore header file dependencies?
I'll attempt to answer.
The source distros of some projects include a configure script which creates a makefile from a template/whatever.
So the end user who needs to recompile the package for their target just has to do:
$ configure --try-options-until-it-works
$ make
Things can go wrong during the configure phase, but this has nothing to do with the makefile itself. The user has to download stuff, adjust paths or configure switches and run again until the makefile is successfully generated.
But once the makefile is generated, things should go pretty smoothly from there for the user, who only needs to build the product once to be able to use it.
A small portion of users will need to change some source code. In that case, they'll have to clean everything, because the makefile provided isn't the way the actual developers manage their builds. They may use other systems (Code::Blocks, Ant, gprbuild...), and just provide the makefile to automate production from scratch and avoid depending on a complex production system. make is fairly standard, even on Windows/MinGW.
Note that there are some filesystems which provide build auditing (ClearCase) where the dependencies are automatically managed (clearmake).
If you see the makefile as a batch script to build all the sources, you don't need to bother adding a dependency system using
a template makefile
a gcc -MM command to append dependencies to it (which takes time)
Note that you can build it yourself with some extra work (adding a depend target to your makefile)
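For example, roughly (recipe lines indented with a tab):

SRCS := $(wildcard *.c)

depend: $(SRCS)
	$(CC) $(CFLAGS) -MM $(SRCS) > .depend

-include .depend

Run make depend once (or whenever the header layout changes) and the generated rules take care of recompiling objects whose headers changed.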
There are a few files with .c and .h extensions (cmdline.c, cmdline.h, core.c, core.h and so on) in the src directory, and also one file "MakeFile" without an extension. Is there any possibility to build these source files into some executable file on Windows 7 (64-bit)? I think I need to download a compiler for C or some SDKs, right?
Yes.
You need to:
download and install a C/C++ compiler (I recommend TDragon's distribution of MinGW),
add the compiler to your PATH (the installer can do it for you most of the cases); verify it's done by opening cmd.exe and typing gcc -v and mingw32-make -v, both should give you half a screenful of version information if your path is set correctly,
via cmd.exe, navigate to the folder in which the Makefile resides and call mingw32-make.
From now on everything should compile automatically. If it doesn't, post the errors.
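A typical session would look something like this (the project path is of course a placeholder):

C:\> gcc -v
C:\> mingw32-make -v
C:\> cd C:\path\to\project
C:\path\to\project> mingw32-make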
Update:
First of all, it'd be useful for you to get the MSys package. Install it and you'll have a more recent version of make (use it instead of mingw32-make from now on).
About the CreateProcess bug, it has to do with the system PATH variable being too long. You'd need to do something like this:
open cmd
execute set PATH=c:/mingw32/bin;c:/msys/1.0/bin (change the paths here to reflect your own installation if it's different)
then as before: navigate to your project's directory, run make. Everything should be smooth now if you're not missing any external libraries.
BTW- remember not to install MinGW or MSys in directories with spaces.
I am not a Windows developer, but as far as I know, Visual Studio (2008, I guess) has the ability to read the Makefile.
Please have a look at it, and if needed change this makefile to their format.
There are many open-source products which are platform independent, and they get compiled on both OSes with just the Makefile they provide.
Or else use Cygwin.
Developer C++ works on Windows, but it is actually GCC code brought to Windows. Is anyone familiar with the procedure they used to convert the Linux (.sh) scripts into executables?
I think i need to download compilers for C or some sdks right?
A compiler certainly, but what additional libraries you may need will depend entirely on the code itself. A successful build may also depend on the intended target of the original code and makefile. The makefile may be a GNU makefile, but there are other similar but incompatible make utilities such as Borland Make and MS NMake. If it is a simple build, you may be able to avoid the makefile issue altogether and use the project management provided by an IDE such as Visual C++ 2010 Express.
If you do not know what this code is or what it does and what it needs to build, you are taking a risk building it at all. Maybe you should post a link to the original source so that you can get more specific advice on how to build it.
[EDIT]
Ok, now looking at the code you are attempting to build, it is a very simple build, so if you wanted to avoid using GNU make, then you could just add all the *.c files in the src folder to a project in your IDE and build it.
However there is one serious gotcha: it uses the BSD sockets API and Linux system headers. You will need to first port the code to Windows APIs such as WinSock (very similar to BSD sockets), or build it under Cygwin (somewhat of a sledgehammer to crack a nut). There may be other Linux dependencies that need sorting; I have not looked in detail, but it looks fairly simple. That said, if you did not have the first clue regarding compiling this stuff, then perhaps this is not a task you could do?
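To give a feel for what that port involves, the usual pattern is a small platform shim like the one below. This is purely illustrative; the real work depends on which socket calls the code actually uses.

/* illustrative shim for porting BSD-socket code to WinSock */
#ifdef _WIN32
#include <winsock2.h>      /* link with ws2_32 (-lws2_32 under MinGW) */
#else
#include <sys/socket.h>
#include <netinet/in.h>
#include <unistd.h>
#endif

int init_sockets(void)
{
#ifdef _WIN32
    WSADATA wsa;
    return WSAStartup(MAKEWORD(2, 2), &wsa);   /* WinSock must be started explicitly */
#else
    return 0;                                  /* nothing to do on POSIX systems */
#endif
}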
Of course, compiling the code may only be half the problem; if it was designed to run on Linux, there may be run-time dependencies that prevent it running on Windows. Again, I have not looked in detail.
Also, looking at the code, I would suggest some caution; this may not be the best quality code. That may be unfair, but one obvious flaw and an indication of inexperience is the lack of include guards in the headers.
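For reference, an include guard is nothing more than this pattern wrapped around each header (using cmdline.h from the project as the example):

#ifndef CMDLINE_H
#define CMDLINE_H

/* ... declarations ... */

#endif /* CMDLINE_H */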
Hey guys,
I want to create a self-contained C project to be machine-independent.
An example? I want to "make all" my project on a machine where external libraries are not installed (but are included in my project) and I want it all to keep working :)
The library I'm talking about is the GSL, you can find it in the libgsl0-dev ubuntu package.
Now, I want to include all the header and .c files in my project, uninstall the packages, and have the project build and run as before :)
Ideas?
Thanks!
Bye!
Don't forget about dependencies.
There are reasons why libraries like GSL are distributed as independent entities:
Users can upgrade the library independently of the software that uses it, saving you from having to constantly update your project when the GSL version changes.
Licensing issues.
Dependencies. If GSL has dependencies and you want to build GSL as part of your project then you will also need to include ALL the source code for ALL its dependencies...and their dependencies...and their dependencies...and so on. If you are going to make it a requirement that some sub-dependency needs to already be installed then you may as well make it a requirement that GSL is already installed.
Other reasons I can't be bothered to think up because I have other things to do.
Just copy the library's source code somewhere into your project's hierarchy, and start either creating or modifying Makefiles (or whatever GSL uses) to get it to build.
For instance, you could have it in a directory external/libgsl, and then set up a Makefile target for your project that does the building. Then you make your project's code dependent on the library's, so that the library is always built first.
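A sketch of what that could look like in a plain Makefile, assuming the GSL sources were copied to external/libgsl and keep their normal configure/make build (the paths, flags and exact libraries to link are illustrative; recipe lines need tabs):

GSL_DIR    := external/libgsl
GSL_PREFIX := $(abspath $(GSL_DIR)/install)
GSL_LIB    := $(GSL_PREFIX)/lib/libgsl.a

$(GSL_LIB):
	cd $(GSL_DIR) && ./configure --prefix=$(GSL_PREFIX) && $(MAKE) && $(MAKE) install

myprog: main.c $(GSL_LIB)
	$(CC) -I$(GSL_PREFIX)/include main.c -o $@ -L$(GSL_PREFIX)/lib -lgsl -lgslcblas -lm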
Of course, you also need to think about any license issues that might arise if/when you distribute your project.