Makefile for C code

I inherited some code which has a makefile, but so far I have been unable to run it on a Linux server. The main complaint of the compiler is that it is unable to load libgmp.so.3: "error while loading shared libraries: libgmp.so.3". I know that libgmp.so.10 exists on this server, but I was wondering which part of the makefile needs to be changed so that the compiler looks for libgmp.so.10 rather than libgmp.so.3.
OPTFLAG = -O2 -Wall -fPIC -fexceptions -DNDEBUG
LDFLAGS = -O2 -Wl,-no_compact_unwind -DNDEBUG -lm -pthread
COMPILER = gcc ${OPTFLAG}
LINKER = gcc ${LDFLAGS}
# CPLEX directory
CPLEX_HOME = /opt/ibm/ILOG/CPLEX_Studio1263/cplex
CPLEX_INC = ${CPLEX_HOME}/include/
CPLEX_LIB = ${CPLEX_HOME}/lib/x86-64_linux/static_pic/ -lcplex
# Compile the main file
code: code.c
	${COMPILER} -c code.c -o code.o -I${CPLEX_INC}
	${LINKER} -o code code.o -L${CPLEX_LIB}
clean::
	rm -f *.o
	rm -f ${LIB}/*.o
	rm -f *~
	rm -f ${SRC}/*~ ${INCLUDE}/*~

You need to rebuild whatever program or library uses libgmp.so.3 from source code. Could you provide the exact command run by make and the error message it produces?
EDIT The problem here is that the system has installed a version of the IBM CPLEX software which comes with its own GCC binary, and that GCC binary uses libgmp.so.3. The easiest way to fix this would be to upgrade the CPLEX software to a version which supports the operating system being used, or use the software on the operating system for which it was written (i.e., something really old that actually ships libgmp.so.3).
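To confirm which gcc is actually being picked up and which shared libraries its cc1 helper needs, something along these lines can help (a sketch; -print-prog-name is a standard gcc option):
$ which gcc
$ ldd "$(gcc -print-prog-name=cc1)"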

The easiest way is to install the libgmp-dev package from your Linux distribution. GMP is a library for multiple-precision arithmetic on large integers, which is probably needed by your program. As you said in the comments, adding -L/usr/lib64/libgmp.so.10 is an error, as the -L option adds a directory to search for libraries, not a specific library.
If only the library is needed and no header file is missing from your compilation (this is unusual, but it sometimes happens), then you can still link against just libgmp.so.10, but you have to do it in a somewhat hacky way: add /usr/lib64/libgmp.so.10 to your link command as an object file, not as a library with the -l option.
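For example, the link line in the Makefile above could be extended like this (a sketch only; the exact library path depends on your system):
${LINKER} -o code code.o -L${CPLEX_LIB} /usr/lib64/libgmp.so.10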
EDIT
From looking more closely at your Makefile I see no reference to the libgmp.so.3 library, so I can only assume this is an indirect reference from some other already-compiled library that comes from outside with your package. Just use
ldd lib<nameOfLibrary>.so.x.x
with all the libraries needed by your final executable, to see which shared object is the one that requests the libgmp.so.3 soname, and then recompile it, reinstall it, or use your system's libraries ONLY and stop mixing in libraries coming from another system. For example you can try (this is an expensive command, but it will get the answer)
find / -name "lib*.so.*" -print | xargs ldd > all_libs.lddout
and then search all_libs.lddout to see which library uses libgmp.so.3 (this will be the outdated library). You'll need to uninstall or upgrade it to be able to continue.
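For example, something like this should print the offending file names (a sketch; it relies on ldd printing each file name followed by a colon when given several files):
awk '/:$/ { file = $1 } /libgmp\.so\.3/ { print file }' all_libs.lddout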
Linux systems have a library versioning scheme that allows an executable to load different versions of the same library and lets them coexist on the same system. One of two things: either you locate the sources of version 3 of the shared libgmp library and install it on your system, or you update the libraries your program uses so they link against the libgmp.so.10 already installed on your system.
2ND EDIT
As I see in the comments, you have replaced the default compiler on your system with another one, possibly coming from a different Linux distribution (your installed library is libgmp.so.10, while the one cc1 requests is libgmp.so.3, which is not installed on your system).
Installing a different compiler alongside the one you already have, without deinstalling the other compiler first, can lead to exactly this kind of problem.
The most reliable thing you can do is to reinstall the compiler from your distribution or, better, reinstall the whole Linux system, as you have probably broken many things that will keep emerging as you use it. There is very little information about what you have actually done, which makes it hard to take your problem further. In any case, my recommendation is not to use the comments to add new information about your problem; just edit your question and add all that new information there.

Related

Questions about how libraries work in C

I am a newbie student learning C and wish to use the GLib library functions for a project: http://www.linuxfromscratch.org/blfs/view/svn/general/glib2.html
(I am using Ubuntu)
I have a couple questions about how libraries work in C and what happens when you install one or want to use one:
When I install this (run ./configure && make && make install inside the folder), what exactly is it doing? From what I learned there are shared libraries in C and static libraries in C. Is it true that it is installing library and include files to /usr/lib/ or somewhere?
When using gcc with external libraries, you have to specify -L and -I flags to specify where to look for library and header files. When I install glib, will I need to specify these flags?
If I want to package my executable for another machine, what would happen if the other machine doesn't have glib? I think if I had static libraries I would be able to include it in the binary, but how would it work for glib?
I am familiar with developing with GTK+ and GLib. As I'm aware, library files reside in /usr/lib and include files are found in /usr/include. Some libraries might be in places such as /usr/local/lib. I will attempt to answer your questions as best I can.
When installing a library from a source package, yes, it installs files to the various folders /usr/share, /usr/lib, /usr/include and so on. It's highly recommended that you use your distribution's package manager to install library packages and development headers. Installing from source is prone to failure when the necessary dependencies are not already in place.
This is where tools such as autogen and makefiles come in handy. You don't necessarily need to concern yourself with specifying all of that; tools such as pkg-config handle that work. Most libraries will install a package configuration file into the /usr/lib/pkgconfig and /usr/share/pkgconfig directories. This helps anyone developing an application easily link their code against the libraries.
Using package config to get the config:
$ pkg-config --cflags --libs glib-2.0
-I/usr/include/glib-2.0 -I/usr/lib/x86_64-linux-gnu/glib-2.0/include -lglib-2.0
Linking using GCC & package config:
$ gcc example.c `pkg-config --cflags --libs gtk+-2.0 glib-2.0` -o example
The above command would link my program with gtk & glib.
Using a Makefile so you never have to type those long lines again:
Makefile:
OBJS = main.o callbacks.o
CFLAGS = `pkg-config --cflags --libs gtk+-2.0`
program: $(OBJS)
	gcc -o program $(OBJS) $(CFLAGS)
main.o: main.c
	gcc -c main.c $(CFLAGS)
callbacks.o: callbacks.c callbacks.h
	gcc -c callbacks.c $(CFLAGS)
.PHONY : clean
clean:
	rm -f *.o
	rm -f program
.PHONY : install
install:
	cp program /usr/bin
.PHONY : uninstall
uninstall:
	rm -f /usr/bin/program
The above Makefile is for a simple GTK+ 2.0 application, as you can tell by what pkg-config is including in CFLAGS. To build the program, all you have to enter in your source directory is make. pkg-config will only work if you have installed the development packages for the library you are trying to work with. On Ubuntu, to install the GTK+ 3.0 and GLib development files you would enter:
$ apt-get install libgtk-3-dev
I think this is a good concern for portability. No single static library is going to be cross-platform; it would have to be compiled for each platform manually. To get rid of all the headache, I reckon you could use the Anjuta IDE developed by the GNOME project. It makes developing GLib & GTK+ apps a breeze, supporting both C & C++. It will create the Makefile, configure and the other files needed to make developing code for multiple platforms and deploying it easy. I could link you to some resources, but my reputation on Stack Overflow is less than 10, so I will just mention the names of some resources below.
Further Reading
Makefile Tutorial
Anjuta IDE (C/C++): http://anjuta.org/
GTK+-3.0 Hello World with Compiling and linking using pkg-config:
When I install this (run ./configure && make && make install inside
the folder), what exactly is it doing? From what I learned there are
shared libraries in C and static libraries in C. Is it true that it is
installing library and include files to /usr/lib/ or somewhere?
Well, it first runs ./configure, then if that succeeds it runs make, and if that succeeds it runs make install. configure is a script that takes care of a lot of compatibility issues between systems. It is usually a shell script, as this is the common denominator across systems, so the configure script will work on various systems. One of the things configure does is create a Makefile. The second command, make, uses the newly created Makefile to build the library or executable. Since you did not specify a target (like you will in make install), make builds the default target, which is typically the all target; this is just a convention. Makefiles are basically a list of things to build (targets), along with what they depend on (dependencies) and how to build each target (rules). Finally, make install actually installs the necessary components: for libraries this is the library and the necessary header files, for executables it is just the program. Man pages might also be installed. Where the libraries are installed depends on where you specify to install them. Typically configure takes a --prefix argument that lets you control where they go; if you do not use --prefix, you will most likely install into the default location for your system.
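For instance, a typical build into a home directory would look something like this (the prefix path here is just an example):
$ ./configure --prefix=$HOME/local
$ make
$ make install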
When using gcc with external libraries, you have to specify -L and -I
flags to specify where to look for library and header files. When I
install glib, will I need to specify these flags?
Your question is a little unclear, so let me first make sure I understand: are you asking whether, after you install glib, you will need to use -L and -I to tell gcc where to look for it? If so, it depends on where you install it. Typically, when you build and install a library, you either put the library and header files in the default location or you don't. If you did, then assuming your gcc is configured correctly, no, you will not. If you did not, then you will most likely have to use -L and -I.
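For example, if a library libfoo (a hypothetical name) had been installed under a non-default prefix such as $HOME/local, the flags would look roughly like this:
$ gcc example.c -I$HOME/local/include -L$HOME/local/lib -lfoo -o example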
If I want to package my executable for another machine, what would
happen if the other machine doesn't have glib? I think if I had static
libraries I would be able to include it in the binary, but how would
it work for glib?
If it doesn't have glib and you used the shared libraries, your application will not work. You will need to either have the same version of the glib libraries on the other machine or build the libraries statically. How to build them statically depends on the library. This SO question might help.
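A quick way to see which shared libraries your binary will expect on the target machine is to run ldd on the built executable (shown here for the example program from above):
$ ldd ./example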
Regarding configure, make and make install: configure is a shell script that is used to discover (and configure) your development environment. make and make install are convenient ways of building your software. make normally involves compiling and linking, whereas make install normally involves copying executables and libraries to standard paths and setting things up (including any header files, usually in /usr/include), so that you don't have to give explicit paths before running the executable. What make does can be done by hand, but it's very cumbersome.
For glib - yes, you have to specify those flags. Normally all libraries come in two flavors on most platforms. The binary form is used for dynamic linking when programs are actually loaded. Also, most distributions have -dev or -devel versions of those libraries; those are required for building software that makes use of the libraries (the configure step above can help find out whether the devel libraries are installed). Typically, when you see a library installed but not its devel package, you are likely to see configure errors. In short, you require the devel versions if you want to link with those libraries. This step is not needed if you are also building the libraries from source using make and make install.
If you want to package your executable for another machine and you are not sure whether glib will be there, or you want to be sure that one specific version of glib is used, you should link statically while building (compiling/linking). This gcc man page has several details about link options. I believe there should be a way to statically link glib (or glib2), though normally that may not be required if you have enough applications that are using it already.
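A rough sketch of what a static link attempt looks like with pkg-config (whether it actually succeeds depends on your distribution shipping the static libglib-2.0.a and its dependencies):
$ gcc example.c `pkg-config --cflags glib-2.0` -static `pkg-config --libs --static glib-2.0` -o example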

How to configure a non-standard linker for an autotooled build?

I wanted to configure an autotooled project to invoke a non-standard
linker (the gold linker),
using the stock autotools of Linux Mint 16/Ubuntu 13.10
I believed I would achieve this by:
libtoolize-ing the project
Running ./configure LD=/path/to/my/linker ... etc.
However this has been ineffective. libtoolize has been successful. After
a standard ./configure; make I now see that libtool is doing the
linking:
/bin/bash ./libtool --tag=CXX --mode=link g++ -g -O2 -o helloworld helloworld.o
But passing LD=/path/to/my/linker to configure makes no difference. Experimentally,
I even ran:
./configure LD=/does/not/exist
expecting to provoke an error, but I didn't. The output contains:
checking if the linker (/does/not/exist -m elf_x86_64) is GNU ld... no
checking whether the g++ linker (/does/not/exist -m elf_x86_64) supports shared libraries... yes
And thereafter a make continues to link, successfully, invoking g++ exactly as before.
What is the right way to configure a non-standard linker?
But passing LD=/path/to/my/linker to configure makes no difference
This is because LD almost never is, and should almost never be, used to link any user-space program. Correct links are performed by using the appropriate compiler driver (gcc, g++, etc.) instead.
What is the right way to configure a non-standard linker?
If you have /some/path/ld and you want gcc to use that ld, pass the -B/some/path flag to gcc.
It then follows that you likely want:
./configure CC='gcc -B/some/path' CXX='g++ -B/some/path' ...
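For the gold linker specifically, one way to apply this (a sketch; /usr/bin/ld.gold is the usual location on Debian/Ubuntu, but check your system) is to create a directory whose ld is a symlink to gold and point -B at it:
$ mkdir -p $HOME/gold-bin
$ ln -s /usr/bin/ld.gold $HOME/gold-bin/ld
$ ./configure CC="gcc -B$HOME/gold-bin" CXX="g++ -B$HOME/gold-bin"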
I landed on this via a Google search, though my scenario is a bit different from yours; there was no libtool involved. An old open source program's Makefile was hard-coding ld to create an object file with a symbol from binary data.
This is what I ended up doing to work around the lack of $(LD) being recognized when passed to configure:
https://github.com/turboencabulator/tuxnes/commit/bab2747b175ee7f2fc3d9afb28d69d82db054b5e
Basically I added to configure.ac:
AC_CHECK_TOOL([LD], [ld])
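With LD substituted by configure, a rule like the one that was previously hard-coded can then use it (a minimal sketch; the data file name is made up, and ld -r -b binary is the usual GNU ld idiom for embedding binary data in an object file):
data.o: data.bin
	$(LD) -r -b binary -o $@ $<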
Leaving this answer here in case someone else lands here via a Google search.

How to create a shared library (dylib) using automake that JNI/JNA can use?

How do I convince libtool to generate a library identical to what gcc does automatically?
This works if I do things explicitly:
gcc -o libclique.dylib -shared disc.c phylip.c Slist.c clique.c
cp libclique.dylib [JavaTestDir]/libclique.dylib
But if I do:
Makefile libclique.la (which is what automake generates)
cp .libs/libclique.1.dylib [JavaTestDir]/libclique.dylib
Java finds the library but can't find the entry point.
I read the "How to create a shared library (.so) in an automake script?" thread and it helped a lot. I got the dylib created with a -shared flag (according to the generated Makefile). But when I try to use it from Java Native Access I get a "symbol not found" error.
Looking at the libclique.la that is generated by the Makefile, it doesn't seem to have any critical information in it; it just looks to be link overloads and moving things around for the convenience of subsequent C/C++ compiler steps (which I don't have), so I would expect libclique.1.dylib to be a functioning dynamic library.
I'm guessing that is where I'm going wrong, but, given that JNA links directly to a dylib and is not compiled with it (per the example in the discussion cited above), it seems all the subsequent compilation steps described in the libtool manual are moot.
Note: I'm testing on a Mac, but I'm going to have to do this on Windows and Linux machines also, which is why I'm trying to put this into Automake.
Note2: I'm using Eclipse for my Java development and, yes, I did import the dylib.
Thanks
You should be building a plugin and in particular pass
libclique_la_LDFLAGS = -avoid-version -module -shared -export-dynamic
This way you tell libtool you want a dynamically loadable module rather than a shared library (which for ELF are the same thing, but for Mach-O are not).
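In Makefile.am this would look roughly as follows (a sketch using the source file names from the question; adjust to your project layout):
lib_LTLIBRARIES = libclique.la
libclique_la_SOURCES = disc.c phylip.c Slist.c clique.c
libclique_la_LDFLAGS = -avoid-version -module -shared -export-dynamic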

Adding a header file to Xcode

I'm trying to add a C library to Xcode. I downloaded the library from an online C class, and the zipped file contains two files: cs50.c and cs50.h.
I installed these files using the following commands:
gcc -c -ggdb -std=c99 cs50.c -o cs50.o
ar rcs libcs50.a cs50.o
rm -f cs50.o
chmod 0644 cs50.h libcs50.a
sudo mv cs50.h /usr/include
sudo cp libcs50.a /usr/lib
When building the project, I get the following error message:
Undefined symbols for architecture x86_64:
"_GetString", referenced from:
_main in main.o
ld: symbol(s) not found for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
This is how I'm referencing the header file in my program:
#include </usr/include/cs50.h>
If I don't include the path, I get a can't find file message.
My version of Xcode is: Version 4.5.2 (4G2008a) and I'm running OS X 10.7.5.
Thanks for any suggestions.
Update:
As a general rule, you should never install or modify anything in /usr unless you have a very good reason to do so. This directory is reserved for the operating system, and you can quickly run into a lot of problems; for instance, you might accidentally overwrite a system library or header file, and even installing new things in there may cause problems when updating your operating system.
If you really need to install this system-wide, then put it into /usr/local.
However, since you compiled the library with debugging information, I assume that you also want to test and play around with it in your project.
To do that, it's much easier to add the sources as a new "C/C++ Library" target to your project. Then Xcode will take care of all the ugly details such as compiling and choosing the right processor architecture (32 or 64 bit), and you'll get source-level debugging support in Xcode. If you ever want to install your app or create a package for it, Xcode will also automatically bundle the dependencies for you.
If you are trying to use gcc to compile the cs50.h library, I have found that to be unsuccessful on most modern 64-bit Macs. Xcode 4.x generally wants a 64-bit compatible library format, and that gcc has not been updated to produce 64-bit object files. Clang/LLVM is a rising alternative to gcc, and is used by Apple as the preferred compiler engine for Xcode. I have not personally tried it yet, but will be exploring it to produce a compatible library for Xcode. I am taking the HarvardX CS50x course at edX, and it is a great course, even for experienced programmers. The cs50.h library is interesting because it provides relatively robust I/O routines for various variable types (e.g. string, int, float) for the C programming language, including good protection against buffer overflow attacks.
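A quick sketch of building the same static library with clang instead of gcc (the file names and flags simply mirror the commands from the question; untested here):
$ clang -c -ggdb -std=c99 cs50.c -o cs50.o
$ ar rcs libcs50.a cs50.o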
You are actually building a custom dynamic library to be added to Xcode, which is also known as a framework. If you have an Apple developer account, check out the Framework Programming Guide; it should prove useful.

Two methods for linking an object using GCC?

I've known that I should use the -l option for linking objects using GCC,
that is gcc -o test test.c -L./ -lmy
But I found that "gcc -o test2 test.c libmy.so" works, too.
When I use readelf on those two executables I can't find any difference.
Then why do people use the -l option for linking? Does it have any advantage?
Because you may have either a static or a shared version of the library in your library directory, e.g. libmy.a and libmy.so, or both of them. This is more relevant for system libraries: if you link against libraries in your local build tree, you know which version you built, static or shared, but you may not know another system's configuration and mix of libraries.
In addition to that, some platforms may have different suffixes. So it's better to specify it in a canonical way.
The main reason is that -lname will search for libname.a (or libname.so, etc.) on the library search path. You can add directories to the library search path with the -L option. It's a convenience built into the compiler driver program, making it easier to find libraries that have been installed in standard places on the system.
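For example, when both flavors are present, -l lets the linker pick (or lets you force a choice), whereas naming the file picks it explicitly; a sketch reusing the names from the question:
$ gcc -o test test.c -L. -lmy                                    # picks libmy.so if present, otherwise libmy.a
$ gcc -o test_static test.c -L. -Wl,-Bstatic -lmy -Wl,-Bdynamic  # forces the static libmy.a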
