How to speed up the build time of a C++ Eclipse project - linker

I have an Eclipse CDT application which uses a number of external libraries. These libraries take a lot of time in the linking phase and the total build time shoots up. Is there any way to improve the build time?

The fact that you're running on VMware might slow you down a bit.
Some ideas that come to mind:
make sure you have VMware Tools installed
increase the amount of RAM allocated to your VM (the more, the better)
use a ramdisk for your libraries (though it might not do much good, since you're running on VMware)
use the visibility features of the compiler as much as possible, so that the libraries you're linking against have fewer exported symbols, making the linking stage less complicated (see the sketch after this list). If they're libraries from the OS, there's not much you can do about it, and they're probably already optimized that way.
(not very helpful) use a real machine for building. Virtualization takes its toll on a disk/CPU-intensive operation such as linking a lot of libraries.
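
A minimal sketch of the visibility idea, assuming GCC or Clang and a hypothetical library you control (all names here are illustrative):

    /* mylib.h - hypothetical library header.
     * Build the library with -fvisibility=hidden so symbols are hidden by
     * default, and mark only the public entry points for export, e.g.:
     *   gcc -fvisibility=hidden -fPIC -shared mylib.c -o libmylib.so
     */
    #if defined(__GNUC__)
    #  define MYLIB_API __attribute__((visibility("default")))
    #else
    #  define MYLIB_API
    #endif

    MYLIB_API int mylib_do_work(int value);  /* exported from the library */
    int mylib_internal_helper(int value);    /* hidden: never enters the
                                                dynamic symbol table      */

Fewer exported symbols means smaller symbol tables for the linker to search, which is where the savings in the link step come from.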

Related

Will it be feasible to use gcc's function multi-versioning without code changes?

According to most benchmarks, Intel's Clear Linux is way faster than other distributions, mostly thanks to a GCC feature called Function Multi-Versioning (FMV). Right now the method they use is to compile the code, analyze which functions contain vectorized loops, then patch the code with FMV attributes and compile it again.
How feasible will it be for GCC to do it automatically? For example, by passing -mmultiarch=sandybridge,skylake (or a similar -m option listing CPU extensions like AVX and AVX2).
Right now I'm interested in two usage scenarios:
Use this option for our large math-heavy program for delivering releases to our customers. I don't want to pollute the code with non-standard attributes and I don't want to modify the third-party libraries we use.
The other Linux distributions will be able to do this easily, without patching the code as Intel does. This should give all Linux users massive performance gains.
No, but it doesn't matter. There's very, very little code that will actually benefit from this; for the most part, doing it globally will just make your system much more memory-constrained and slower due to the huge increase in code size (unless special effort is made to sort matching versions into the same pages). Most actual workloads aren't even CPU-bound; they're syscall-overhead-bound, GPU-bound, IO-bound, etc. And many of the modern ones that are CPU-bound aren't running precompiled code but JIT'd code (i.e. everything running in a browser, whether that's your real browser or the outdated and unpatched fork of Chrome in every Electron app).
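
For reference, the source-level FMV attribute the question hopes to avoid adding by hand looks roughly like this (a minimal sketch, assuming GCC 6 or later on x86-64; function and file names are illustrative):

    /* scale.c - hypothetical example of GCC function multi-versioning.
     * GCC emits one clone of the function per listed target plus a resolver
     * that picks the best clone at load time based on the CPU's features.
     * Build with e.g.:  gcc -O2 -c scale.c
     */
    __attribute__((target_clones("avx2", "default")))
    void scale(float *dst, const float *src, float factor, int n)
    {
        for (int i = 0; i < n; ++i)      /* loop the compiler can vectorize */
            dst[i] = src[i] * factor;
    }

The question is effectively asking for the compiler to apply something like this automatically to every candidate function; the answer argues that, done globally, the code-size cost would usually outweigh the benefit.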

Options for distributing a C program with library dependencies under Linux

I developed a C program requiring some dynamic libraries, most notably libmysqlclient.so, which I intend to run on some remote hosts. It seems like I have the following options for distribution:
Compile the program statically.
Install the required dependencies on the remote host.
Distribute the dependencies with the program.
The first option is problematic as I need a specific glibc version at runtime anyway (since I use glibc and libnss for now).
I'm not sure about the second option: is there a mechanism which checks whether an installed library version is sufficient for a program to run (besides libxyz.so.VERSION)? Can I somehow check ABI compatibility at startup?
Regarding the last option: would I distribute ALL shared libraries with the binary, or just the ones which are presumably not installed (e.g. libmysqlclient, but not libm)?
Apart from this, am I likely to encounter ABI compatibility problems if I use a different compiler for the binary than the one the dependencies were built with (e.g. binary built with clang, libraries with gcc)?
Version checking is distribution-specific. Usually, you would package your application in a .deb or .rpm file using the target distribution's packaging tools, and ship that to users. This means that you have to build your application once for each supported distribution, but there really is no way around that anyway because different distributions have slightly different versions of libmysqlclient. These distribution build tools generate some dependency version information automatically, and in other cases, some manual help is needed.
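If you also want the coarse startup check the question asks about, a minimal sketch (assuming glibc and libmysqlclient; this reports versions but cannot prove ABI compatibility):

    /* version_check.c - hypothetical startup self-check.
     * Prints the glibc and MySQL client library versions the process
     * actually loaded.  Build with e.g.:  gcc version_check.c -lmysqlclient
     */
    #include <stdio.h>
    #include <gnu/libc-version.h>   /* gnu_get_libc_version() */
    #include <mysql/mysql.h>        /* mysql_get_client_info() */

    int main(void)
    {
        printf("glibc runtime version:  %s\n", gnu_get_libc_version());
        printf("libmysqlclient version: %s\n", mysql_get_client_info());
        return 0;
    }

This only catches gross version skew; the packaging-level dependency metadata described above remains the real safeguard.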
As a starting point, it's a good idea to look at the distribution packaging for something that relies on the MySQL/MariaDB client library and copy that. Maybe inspircd in Debian is a good example.
You can reduce the number of builds you need to create and test somewhat by building on the oldest distribution versions you want to support. But some caveats apply; distributions vary in the degree of backwards compatibility they provide.
Distributing dependencies with the program is very problematic because popular libraries such as libmysqlclient are also provided by the base operating system, and if you use LD_LIBRARY_PATH to inject your own version, this could unintentionally extend to other programs as well (e.g., those you launch from your own program). The latter risk is still present even if you use DT_RUNPATH (via the -rpath linker option), although it is somewhat reduced.
A different option is to link just application-specific support libraries statically, and link base operating system libraries dynamically. (This is what some software collections do.) This does not seem to be such a great choice for libmysqlclient, though, because there might be an expectation that its feature set is identical to the distribution (regarding the TLS library and available configuration options), and with static linking, this is difficult to achieve.

How can I compile C on Linux that will work on any distro?

If I compile a self-contained project with gcc and statically link the C run-time library, will this work on any Linux distro? I don't care about any static link disadvantages (such as big file size) so long as it works. It is a closed-source project and this is beyond my control. The architecture of the PCs is the same.
If you can guarantee that all of the code it will execute is self-contained in the binary you ship, then theoretically (emphasis on theoretically) it should work on any Linux distribution. There are an enormous number of pitfalls in this process. My personal opinion is that it is pretty much impossible to make this work, due to interfaces changing across versions; interfacing with other libraries is a fragile nightmare.
Most companies that I'm familiar with (including my own) produce builds made for different distributions. There are some complications that result from having to make a build for SLES, Red Hat, etc., but I'm confident that providing a few different builds is ultimately simpler and causes fewer problems than attempting to statically link everything.
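
If you still want to experiment with the fully static approach from the question, a minimal sketch for producing and inspecting such a binary (assuming gcc and binutils; the pitfalls described above still apply):

    /* hello_static.c - trivial program for testing a fully static build.
     * Build and verify:
     *   gcc -static -o hello_static hello_static.c
     *   file hello_static      # should report "statically linked"
     *   ldd hello_static       # should report "not a dynamic executable"
     */
    #include <stdio.h>

    int main(void)
    {
        puts("hello from a statically linked binary");
        return 0;
    }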

Best way to optimize the build/compilation task of a C library on various machines

We need to build a C library in various environments (SUSE x86, SUSE Itanium, Solaris, HP-UX, IBM AIX) with different compiler options (compiler flags, 32-bit or 64-bit, static or dynamic, endianness, etc.). Currently we are building it on around 200 build machines. We have an efficient makefile, but we still need to log in to the various machines, FTP the code over, run make, and then transfer the libraries back. Then we need to keep all 200 platform libs on one release machine.
Our intention is to reduce the effort of manually logging in to the different machines to trigger builds; what is the best way to do this?
One way to automate this is by writing an expect script on one Linux machine which logs in to all 200 machines, triggers the builds, and copies the libraries back into place.
Is there any other way which takes less effort than writing expect scripts for 200 builds? For example, we need to build for around 50 VxWorks platforms; for that we have Tornado packages (cross compilers) on a single Windows machine which handles all 50 platforms, and we have written one-click automation scripts for it (small scripts which don't require logging in to 50 machines).
Similarly, if cross compilers were available for all the *nix platforms (SUSE, Solaris, HP-UX, IBM AIX, etc.), we could install them all on one machine (either Linux or Windows). Then we could write a script to automate all 200 builds on one machine without writing scripts for remote login or FTP.
Or is there any other easy way to handle builds on multiple *nix platforms?
Would Jenkins be a possibility for you?
It's becoming increasingly popular beyond Java projects (C/C++/C#). From what I understand, Jenkins would be a fit.
Take a look at the GNU Autotools. Everybody knows how to use their output (the typical ./configure; make; make install dance), there are no strange requirements on the target, and they have extensive machinery to handle operating system vagaries. Their learning curve is quite steep, though.

Optimising cross platform build system

I'm looking for a cross-platform build system for C which helps to find good compiler flags on a specific machine. It would need some notion of testing for correctness, benchmarking for performance, and multiple versioning of the target, and perhaps even recognising the machine it is running on. For example, in a typical build I'd want to compare 64-bit versus 32-bit executables, with and without OpenMP and fast-math, with different optimisation levels, and builds by entirely different compilers. The ATLAS BLAS libraries are an impressive example here, but are a bit of a pain on Windows due to the shell scripting. Is this something that can be hacked onto systems like SCons or Waf? Any other suggestions?
Other than the one I'm thinking about writing when I'm done procrastinating, Boost Jam (bjam) would probably match your description closest.
There is also CMake, but I think it would require a scripting layer to automate multi-target building and testing.
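
To illustrate the kind of multi-configuration comparison being asked about, here is a minimal, hypothetical benchmark kernel that a build system could compile several times under different flag sets (e.g. -O2 vs -O3 -ffast-math, with and without -fopenmp, -m32 vs -m64) and then time; names and sizes are illustrative:

    /* bench_kernel.c - hypothetical kernel for comparing build configurations.
     * Build the same file under each flag set and compare the reported time,
     * for example:
     *   cc -O2 bench_kernel.c -o bench_o2 -lm
     *   cc -O3 -ffast-math -fopenmp bench_kernel.c -o bench_fast -lm
     */
    #define _POSIX_C_SOURCE 199309L
    #include <math.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define N 10000000L

    static double now(void)
    {
        /* POSIX monotonic wall clock, so OpenMP speedups stay visible. */
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec + ts.tv_nsec / 1e9;
    }

    int main(void)
    {
        double *data = malloc(N * sizeof *data);
        if (!data)
            return 1;

        double start = now();

    #ifdef _OPENMP
        #pragma omp parallel for        /* only active when built with -fopenmp */
    #endif
        for (long i = 0; i < N; ++i)
            data[i] = sin(i * 0.001) * cos(i * 0.002);

        double sum = 0.0;
        for (long i = 0; i < N; ++i)
            sum += data[i];

        printf("checksum=%f elapsed=%.3fs\n", sum, now() - start);
        free(data);
        return 0;
    }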

Resources