Using cmake to detect what system the program is being compiled on - c

I have a project written in C that I am porting to an older system, CentOS release 5.10 (Final).
For our newer system (Fedora 20) we are using apr-1.5.0, but this won't work on CentOS, as I get link problems there:
tools/apr/libs/libapr-1.so: undefined reference to `memcpy@GLIBC_2.14'
tools/apr/libs/libapr-1.so: undefined reference to `epoll_create1@GLIBC_2.9'
tools/apr/libs/libapr-1.so: undefined reference to `dup3@GLIBC_2.9'
tools/apr/libs/libapr-1.so: undefined reference to `accept4@GLIBC_2.10'
So I downloaded the older apr-1.2.7 libraries and headers, and when I compile and link with them everything works OK.
However, I am using CMake and I have to adjust the path every time I switch between operating systems.
For CentOS I have to use this:
link_directories(${PROJECT_SOURCE_DIR}/tools/apr-1_2_7/libs)
And for a newer system I have to modify and use this:
link_directories(${PROJECT_SOURCE_DIR}/tools/apr/libs)
I am just wondering if there is any way CMake can detect the system and then use the appropriate libraries, something like:
if(CENTOS_5_10)
    link_directories(${PROJECT_SOURCE_DIR}/tools/apr-1_2_7/libs)
else()
    link_directories(${PROJECT_SOURCE_DIR}/tools/apr/libs)
endif()
I was thinking of creating a toolchain file, but I think that would be overkill for such a small thing.
I cannot use the apr packages installed via yum, as there is no guarantee that the libraries and headers will be installed.
Many thanks for any suggestions.

You're doing it wrong(tm).
See the docs:
http://www.cmake.org/cmake/help/v2.8.12/cmake.html#command:link_directories
You should be using find_library instead, with hints of where to look for the library.
You can then put such a thing in a Find-module.
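A rough, untested sketch of what that could look like with the paths from the question (the /etc/redhat-release check is just one possible heuristic, and APR_LIBRARY and myprog are placeholder names, not anything from the original project):
# Pick a hint directory depending on the distro, then let find_library locate
# the bundled libapr-1 there instead of hard-coding link_directories().
set(_release "")
if(EXISTS "/etc/redhat-release")
    file(READ "/etc/redhat-release" _release)
endif()

if(_release MATCHES "CentOS release 5")
    set(_apr_hint ${PROJECT_SOURCE_DIR}/tools/apr-1_2_7/libs)
else()
    set(_apr_hint ${PROJECT_SOURCE_DIR}/tools/apr/libs)
endif()

# find the bundled libapr-1.so and link against it directly
find_library(APR_LIBRARY NAMES apr-1 HINTS ${_apr_hint})
target_link_libraries(myprog ${APR_LIBRARY})
Moving the find_library() call (plus a matching find_path() for the headers) into a FindAPR.cmake module keeps the top-level CMakeLists.txt clean.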

Related

Haskell: Missing C library on Arch Linux works on Ubuntu

I recently switched my PC at work from Ubuntu to Arch Linux.
And I am now getting the following error (I am using stack to build my project):
setup-Simple-Cabal-1.22.4.0-ghc-7.10.2: Missing dependency on a foreign library:
* Missing C library: HSrts-ghc7.10.2
This problem can usually be solved by installing the system package that
provides this library (you may need the "-dev" version). If the library is
already installed but in a non-standard location then you can use the flags
--extra-include-dirs= and --extra-lib-dirs= to specify where it is.
As far as I understand it, the difference in Linux distribution should not cause any issue.
Things I have tried:
- add the path where the library is with --extra-lib-dirs
- make sure that the versions of stack/ghc are the same across both systems
- tried unsuccessfully to find a relevant difference between the two systems (the gcc version was different, but that didn't change anything)
I have a docker container based on Ubuntu where it builds without an issue.
The only thing I can think of is that this library gets handled differently from some random C library, since it contains the Haskell runtime. But I have no idea what this difference would be, or how a different handling would cause an issue on my Arch system.
Here my .cabal file (the folder also contains the whole project):
https://github.com/opencog/atomspace/blob/master/tests/haskell/libExecutionOutputTest/opencoglib.cabal
Okay, I figured out a workaround: instead of specifying the library in the .cabal file:
...
extra-libraries: HSrts-ghc7.10.2
...
you add it to your stack.yaml file:
...
ghc-options:
  package-name: -lHSrts-ghc7.10.2
...
If you also have an executable defined in your .cabal file, this will break that executable, since the runtime library is now linked not only into the library but also into the executable, and including the runtime library in an executable results in an instant segmentation fault.

How to work with external libraries when cross compiling?

I am writing some code for a Raspberry Pi ARM target on an x86 Ubuntu machine. I am using the gcc-linaro-armhf toolchain. I am able to cross compile and run some independent programs on the Pi. Now, I want to link my code with an external library such as ncurses. How can I achieve this?
Should I just link my program with the existing ncurses lib on host machine and then run on ARM? (I don't think this will work)
Do I need to get source or prebuilt version of lib for arm, put it in my lib path and then compile?
What is the best practice in this kind of situation?
I also want to know how it works for the C stdlib. In my program I used the stdio functions and they worked after cross compiling without doing anything special. I just provided the path to my ARM gcc in the makefile. So I want to know how it got the correct std headers and libs.
Regarding your general questions:
Why the C library works:
The C library is part of your cross toolchain. That's why the headers are found and the program correctly links and runs. This is also true for some other very basic system libraries like libm and libstdc++ (not in every case, depends on the toolchain configuration).
In general when dealing with cross-development you need some way to get your desired libraries cross-compiled. Using prebuilt binaries in this case is very rare, especially with ARM hardware, because there are so many different configurations and everything is often stripped down in different ways. That's why binaries are rarely compatible between different devices and Linux configurations.
If you're running Ubuntu on the Raspberry Pi then there is a chance that you may find a suitable ncurses library on the internet or even in some Ubuntu apt repository. The typical way, however, will be to cross compile the library with the specific toolchain you have got.
In cases when a lot and complex libraries need to be cross-compiled there are solutions that make life a bit easier like buildroot or ptxdist. These programs build complete Linux kernels and root file systems for embedded devices.
In your case, however, as long as you only want ncurses, you can compile the source code yourself. You just need to download the sources and run configure while specifying your toolchain using the --host option. The --prefix option will choose the installation directory. After running make and make install, assuming everything went fine, you will have a set of headers and the ARM-compiled library for your application to link against.
Regarding cross compilation you will surely find loads of information on the internet and maybe ncurses has got some pointers in its shipped documentation, too.
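A rough sketch of those steps (the toolchain triplet matches the arm-linux-gnueabihf-gcc used elsewhere in this thread; the ncurses tarball name, the $HOME/arm/ncurses prefix, and myprog.c are only placeholders):
# cross-compile ncurses with the ARM toolchain and install it into a prefix
tar xzf ncurses-<version>.tar.gz && cd ncurses-<version>
./configure --host=arm-linux-gnueabihf --prefix=$HOME/arm/ncurses
make && make install

# then build your own program against the ARM-compiled headers and library
arm-linux-gnueabihf-gcc -I$HOME/arm/ncurses/include -L$HOME/arm/ncurses/lib \
    myprog.c -o myprog -lncurses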
Regarding the query of how the C library works with cross-tools:
When compiling and building a cross-toolchain, a sysroot is provided during configuration,
like --with-sysroot=${CLFS_CROSS_TOOLS}
--with-sysroot
--with-sysroot=dir
Tells GCC to consider dir as the root of a tree that contains (a subset of) the root filesystem of the target operating system. Target system headers, libraries and run-time object files will be searched for in there. More specifically, this acts as if --sysroot=dir was added to the default options of the built compiler. The specified directory is not copied into the install tree, unlike the options --with-headers and --with-libs that this option obsoletes. The default value, in case --with-sysroot is not given an argument, is ${gcc_tooldir}/sys-root. If the specified directory is a subdirectory of ${exec_prefix}, then it will be found relative to the GCC binaries if the installation tree is moved.
So instead of looking in /lib and /usr/include, it will look in the toolchain's libc and include directories when compiling.
You can check with
arm-linux-gnueabihf-gcc -print-sysroot
which shows where it looks for libc. Also,
arm-linux-gnueabihf-gcc -print-search-dirs
gives you a clearer picture.
Clearly, you will need an ncurses compiled for the ARM that you are targeting - the one on the host will do you absolutely no good at all [unless your host has an ARM processor - but you said x86, so clearly not the case].
There MAY be some prebuilt libraries available, but I suspect it's more work to find one (that works and matches your specific conditions) than to build the library yourself from sources - it shouldn't be that hard, and I expect ncurses doesn't take that many minutes to build.
As to your first question, if you intend to use the ncurses library with your cross-compiler toolchain, you'll need to have its ARM-built binaries prepared.
As to your second question about how it works with the std libs: it's really NOT the system libc/libm that the toolchain uses to compile/link your program. You can see this with the --print-file-name= option of your compiler:
arm-none-linux-gnuabi-gcc --print-file-name=libm.a
...(my working folder)/arm-2011.03(arm-toolchain folder)/bin/../arm-none-linux-gnuabi/libc/usr/lib/libm.a
arm-none-linux-gnuabi-gcc --print-file-name=libpthread.so
...(my working folder)/arm-2011.03(arm-toolchain folder)/bin/../arm-none-linux-gnuabi/libc/usr/lib/libpthread.so
I think your Raspberry toolchain might be the same. You can try this out.
Vinay's answer is pretty solid. Just a correction: when compiling the ncurses library for the Raspberry Pi, the option to set your rootfs is --sysroot=<dir> and not --with-sysroot. That's what I found when I was using the following compiler:
arm-linux-gnueabihf-gcc --version
arm-linux-gnueabihf-gcc (crosstool-NG linaro-1.13.1+bzr2650 - Linaro GCC 2014.03) 4.8.3 20140303 (prerelease)
Copyright (C) 2013 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
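For example, the sysroot can be handed to the compiler while configuring ncurses (all paths below are only placeholders for your own rootfs and install prefix):
# point the cross compiler at the target's root filesystem via --sysroot
SYSROOT=$HOME/raspberrypi/rootfs        # placeholder path to your rootfs
./configure --host=arm-linux-gnueabihf --prefix=$HOME/arm/ncurses \
    CC=arm-linux-gnueabihf-gcc \
    CFLAGS="--sysroot=$SYSROOT" \
    LDFLAGS="--sysroot=$SYSROOT"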

Link error with GetACP under mingw64 (mingw-builds)

I was trying to build gdal-1.10.0 (http://trac.osgeo.org/gdal/wiki/DownloadSource) using mingw64 (from http://sourceforge.net/projects/mingwbuilds/files/host-windows/ x64-4.8.0-release-posix-seh-rev2.7z). I have compiled gdal-1.10.0 under the standard MinGW (32-bit) version without a problem.
The reason I have to switch to mingw64 is that the standard 32-bit MinGW distribution does not support C++11 features like std::thread, and (I suspect) other features as well. But I get a linking error in the end telling me something about
undefined reference to '__imp_GetACP'
(or a different decorated name if I use the 32-bit variant from mingw64/mingw-builds). BTW, I tried different versions of mingw64, including 64-bit, 32-bit, seh, sjlj, but all gave the same error about GetACP().
I did some homework and found some instructions for a similar compilation task:
http://www.gaia-gis.it/spatialite-3.0.0-BETA/mingw64_how_to.html#env
According to the above website, they seem to suggest that the problem has to do with WOW64: the correct version of the Windows DLL files cannot be used because Windows automatically determines it for you depending on whether a 32-bit or 64-bit application is making the call. This is supposedly a problem for mingw64 because the compiler gcc is 64-bit but msys is hopelessly 32-bit.
But since I tried 32-bit versions as well, the above does not seem to explain the error.
Even more, I tried, in a dirty way, to comment out all calls to GetACP(), because I don't really care about code pages and all that for my purposes. Strangely enough, compilation is OK (on a fresh source tree with just the GetACP() calls commented out), but the same link error is still reported. I checked that libkernel32.a and libiconv.a are in the lib folder, and also followed the instructions in the blog above to copy DLLs out of c:\windows\system32 and place them in mingw subfolders with appropriate renaming. The link error remains. This is where I stopped hacking, after spending almost two days on this without success. I can't understand why, when the entire source code does not contain a single call to the function, I am still getting the link error.
Can anyone explain what might have caused this issue between gdal and mingw64, and how to fix it?
Also, a general question about mingw64: is it really able to support POSIX functions? I see package names such as x64-4.8.0-release-posix-seh-rev2.7z, but I remember that the MinGW people said they would never support full POSIX.
P.S.
I am testing this on a Windows Server 2008 R2, 64-bit.
Update:
The complete steps for building gdal-1.10.0 under MinGW64 (mingw-builds) are:
$ ./configure
Then,
Edit GDALmake.opt, find GDAL_ROOT, and replace the cygwin drive format with the dos/mingw format, e.g.
Change:
GDAL_ROOT = /d/temp/build/gdal-1.10.0
to
GDAL_ROOT = d:/temp/build/gdal-1.10.0
Replace
CONFIG_LIBS = $(GDAL_ROOT)/$(LIBGDAL)
with
CONFIG_LIBS = $(GDAL_ROOT)/$(LIBGDAL) -liconv
Finally,
$ make && make install && cp apps/*.exe /usr/local/bin/
I have accidentally encountered the same problem.
Maybe this is a MinGW bug or bad configuration files, but the solution is to add
-liconv to the end of linker flags, for example, replace
CONFIG_LIBS = $(GDAL_ROOT)/$(LIBGDAL)
with
CONFIG_LIBS = $(GDAL_ROOT)/$(LIBGDAL) -liconv
in the GDALmake.opt file (found by searching the MinGW directory for GetACP in files).

Rebuilding/Updating kernel module

Hey there,
following problem:
I'm using a rather weird Linux distro here at work (CentOS 5) which seems to have an older kernel (or at least some differences in the kernel), and you can't simply update it.
The program I need to install needs a function crypto_destroy_tfm (and probably some more, but this is the only error at this point) which is included in the file linux/crypto/api.c - so I assume it's in the kernel module crypto_api. Problem is: on my distro I don't even have a crypto/api.c, and even though I do have a module crypto_api.ko, it seems that this function isn't in there.
My plan is the following: take the crypto_api from a newer Linux distro, compile it, and load the module into my CentOS.
Now I hope that some of you can tell me what I need to do to rebuild and replace that module. Of course I do have all the source files from a newer kernel. (Just to remind you: I can't simply recompile and use a newer kernel, b/c CentOS sucks in this way.)
Thank you
FWIW: Here's the exact error
WARNING: "crypto_destroy_tfm" [/home/Chris/digsig-patched/digsig_verif.ko] undefined!
There is a good chance that backporting an API change into an older kernel will lead to a cascade of problems. Let's suppose you backport the crypto API of version 2.6.Y to your local version, 2.6.X.
Now you have the following situation:
- the crypto API module exports 2.6.Y functions
- your external module might be happy with that situation
- all other modules that depend on version 2.6.X of the crypto API will complain.
But wait, I can backport recent kernel code into all the modules that complain, and here we go... Oops, now we are back at the former situation, except each backported module might trigger a similar situation.
If you can't update the CentOS kernel, because the CentOS kernel has a lot of custom code you are afraid to lose when going with a "vanilla" kernel, then you may find it an easier task to "downgrade" your external module:
- Look at the current crypto API (for example using lxr.linux.no)
- Look at your kernel's version of this API
- Try to see how the new API could be replaced with calls to the old API to provide a similar function.
- Modify your external module to use the old API instead of the new one (see the sketch after this list).
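A very rough sketch of what such a downgrade can look like, using SHA-1 allocation as an illustration only; the old-API names (crypto_alloc_tfm/crypto_free_tfm) come from the 2.6.18-era headers and should be double-checked against the actual CentOS 5 kernel source, and the digsig_* helper names are just placeholders:
#include <linux/version.h>
#include <linux/crypto.h>
#include <linux/err.h>
#include <linux/errno.h>

#if LINUX_VERSION_CODE < KERNEL_VERSION(2, 6, 19)
/* Old single-struct API (CentOS 5 / 2.6.18): everything is a crypto_tfm. */
static struct crypto_tfm *digsig_tfm;

static int digsig_alloc_sha1(void)
{
        digsig_tfm = crypto_alloc_tfm("sha1", 0);
        return digsig_tfm ? 0 : -ENOMEM;
}

static void digsig_free_sha1(void)
{
        crypto_free_tfm(digsig_tfm);
}
#else
/* Newer API: per-type handles; on recent kernels crypto_free_hash()
 * ends up in crypto_destroy_tfm(), the symbol missing on the old kernel. */
static struct crypto_hash *digsig_tfm;

static int digsig_alloc_sha1(void)
{
        digsig_tfm = crypto_alloc_hash("sha1", 0, CRYPTO_ALG_ASYNC);
        return IS_ERR(digsig_tfm) ? PTR_ERR(digsig_tfm) : 0;
}

static void digsig_free_sha1(void)
{
        crypto_free_hash(digsig_tfm);
}
#endif
With a guard like this the module keeps building against newer kernels while only using symbols the CentOS 5 kernel actually exports.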
In any case, you may not be able to replace your kernel with a vanilla one, but you should at least be able to rebuild it, and then to patch it and rebuild it etc... If you can't do this simple task, then I don't think backporting anything will be successful.
Try downloading the SRC RPM from a newer version of CentOS which has the module and recompile the RPM on your CentOS 5:
rpmbuild --rebuild kernel-X.XX-X.src.rpm
I don't have a copy of CentOS to compare with, so you will want to read the man page on rpm/rpmbuild, but I've found recompiling the whole package, which includes the kernel and all its modules, to be safer than trying to port just one module from a newer kernel. I do this occasionally on Debian/Ubuntu when I need a newer package for something.

How to compile a C program?

I haven't done C in a long time. I'd like to compile this program, but I have no idea how to proceed. It seems like the makefile refers to GCC a lot and I've never used GCC.
I just want an executable that will run on windows.
You may need to install either cygwin or mingw, which are UNIX-like environments for Windows.
http://www.mingw.org/
http://www.cygwin.com/
When downloading/installing either cygwin or mingw, you will have the option of downloading and installing some optional features; you will need the following:
gcc (try version 2.x first, not 3.x)
binutils
GNU make (or gmake)
If it requires gcc and you want it to run on Windows, you could download Cygwin.
That's basically an emulator for GNU/Linux type stuff for Windows. It works with an emulation DLL.
http://www.cygwin.com/
In order to compile this program you need a C compiler. It does not have to be gcc, although you are already given a makefile set up to use gcc. The simplest thing for you to do would be the following:
Install cygwin
Open the cygwin command prompt
go into the directory where you have your makefile
type 'make'
That should compile your program
If you are not comfortable with using command line tools then you can download the free version of MS Visual Studio and import the source files into a new Visual Studio project. This way you would not need to install cygwin and use gcc, but you would need to know how to create projects and run programs in Visual Studio.
You almost certainly don't need all of cygwin to compile using gcc. There are plenty of standalone gcc clones for Windows, like gcw.
If it's reasonably portable C code (I haven't looked at it), then you may be able to just ignore the included Makefile and feed the source into whatever compiler you do want to use. What happens when you try that?
Dev-C++ provides a simple but nice IDE which uses the Mingw gcc compiler and provides Makefile support. Here are the steps I used to build the above code using Dev-C++ (i.e. this is a "how-to")
After downloading the source zip from NIST, I
- downloaded and installed the Dev-C++ 5 beta 9 release
- created a new empty project
- added all the .c files from sts-2.0\src
Then under Project Options
- added -lm in the Linker column under Parameters
- added sts-2.0\include to the Include Directories in Directories
- set the Executable and Object directories to the obj directory under the Build Options
and then hit OK to close the dialog. Go to Execute > Compile and let it whirl. A minute later, you can find the executable in the sts-2.0\obj directory.
First, there is little chance that a program with only makefiles will build with Visual Studio, if only because Visual Studio is not a good C compiler from a standards POV (the math functions in particular are very poorly supported on MS compilers). It may be possible, but it won't be easy, especially if you are not familiar with C. You should really stick to the makefiles instead of trying to import the code into your own IDE - this kind of scientific code is clearly meant to be compiled from the command line. It is a test suite, so trying things randomly is NOT a good idea.
You should use mingw + msys to install it: mingw will give you the compilers (gcc, etc.) and msys the shell for the makefile to run correctly. Contrary to one other poster, I would advise you against using gcc 2 - I don't see any point in that. I routinely use gcc 3 (and even 4) on Windows to build scientific code; it works well when the code is Unix-like (which is the standard platform for this kind of code).
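Concretely, once mingw and msys are installed, the build from the MSYS shell would look something like this (the sts-2.0 directory name is taken from the Dev-C++ answer above, and C:\MinGW is only the usual default install location):
# make sure MinGW's gcc is on the PATH, then run the shipped makefile
export PATH=/c/MinGW/bin:$PATH   # adjust if MinGW is installed elsewhere
cd sts-2.0
make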
