Hey there,
I have the following problem:
I'm using a rather weird Linux distro here at work (CentOS 5) which seems to have an older kernel (or at least some differences in the kernel), and you can't simply update it.
The program I need to install needs a function crypto_destroy_tfm (and probably some more, but this is the only error at this point) which is included in the file linux/crypto/api.c, so I assume it's in the kernel module crypto_api. Problem is: on my distro I don't even have a crypto/api.c, and even though I do have a module crypto_api.ko, it seems that this function isn't in there.
My plan is the following: take the crypto_api from a newer Linux distro, compile it, and load the module into my CentOS.
Now I hope that some of you can tell me what I need to do to rebuild and replace that module. Of course I do have all the source files from a newer kernel. (Just to remind you: I can't simply recompile and use a newer kernel, b/c CentOS sucks in this way.)
Thank you
FWIW: Here's the exact error
WARNING: "crypto_destroy_tfm" [/home/Chris/digsig-patched/digsig_verif.ko] undefined!
There is a good chance that backporting an API change into an older kernel will lead to a cascade of problems. Let's suppose you backport the crypto API of version 2.6.Y into your local version, 2.6.X.
Now you have the following situation:
the crypto API module exports 2.6.Y functions
your external module might be happy with that situation
all other modules that depend on version 2.6.X of the crypto API will complain.
But wait, you can backport recent kernel code into all the modules that complain, and here we go... Oops: you're back in the former situation, except that now each backported module may trigger the same problem.
If you can't update the CentOS kernel, because the CentOS kernel has a lot of custom code you are afraid to lose when going with a "vanilla" kernel, then you may find it an easier task to "downgrade" your external module:
Look at the current crypto API (for example using lxr.linux.no)
Look at your kernel version of this API
Try to see how the new API calls could be replaced with calls to the old API that provide similar functionality.
Modify your external module to use the old API instead of the new one.
In any case, you may not be able to replace your kernel with a vanilla one, but you should at least be able to rebuild it, then patch it and rebuild it again, etc. If you can't do this simple task, then I don't think backporting anything will be successful.
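As a rough illustration of the "downgrade" approach, a version-guarded wrapper can hide the API difference from the rest of the external module. This is only a hedged sketch: compat_free_cipher is a made-up helper name, and the exact calls available on a CentOS 5 (2.6.18-era) kernel should be checked against its actual headers.

```
/* Hypothetical compatibility wrapper, assuming the pre-2.6.19 crypto API
 * (crypto_free_tfm) on the old kernel and the newer cipher API elsewhere. */
#include <linux/version.h>
#include <linux/crypto.h>

#if LINUX_VERSION_CODE < KERNEL_VERSION(2, 6, 19)
/* Old API: transform objects are freed with crypto_free_tfm(). */
static inline void compat_free_cipher(struct crypto_tfm *tfm)
{
	crypto_free_tfm(tfm);
}
#else
/* New API: crypto_free_cipher() ends up in crypto_destroy_tfm(). */
static inline void compat_free_cipher(struct crypto_cipher *tfm)
{
	crypto_free_cipher(tfm);
}
#endif
```

The module then calls compat_free_cipher() everywhere instead of the version-specific function, so only this one header needs to know which kernel it is built against.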
Try downloading the SRC RPM from a newer version of CentOS which has the module and recompile the RPM on your CentOS 5:
rpmbuild --rebuild kernel-X.XX-X.src.rpm
I don't have a copy of CentOS to compare with, so you will want to read the man pages for rpm/rpmbuild, but I've found recompiling the whole package, which includes the kernel and all its modules, to be safer than trying to port just one module from a newer kernel. I do this occasionally on Debian/Ubuntu when I need a newer package for something.
I recently switched my PC at work from Ubuntu to Arch Linux.
And I am now getting the following error (I am using stack to build my project):
setup-Simple-Cabal-1.22.4.0-ghc-7.10.2: Missing dependency on a
foreign
library:
* Missing C library: HSrts-ghc7.10.2
This problem can usually be solved by installing the system package that
provides this library (you may need the "-dev" version). If the library is
already installed but in a non-standard location then you can use the flags
--extra-include-dirs= and --extra-lib-dirs= to specify where it is.
As far as I understand it, the difference in Linux distribution should not cause any issue.
Things I have tried:
- added the path where the library is with --extra-lib-dirs
- made sure that the versions of stack/ghc are the same across both systems
- tried unsuccessfully to find a relevant difference between the 2 systems
(the gcc version was different, but that didn't change anything)
I have a Docker container based on Ubuntu where it builds without an issue.
The only thing I can think of is that this library gets handled differently from some random C library since it contains the Haskell runtime. But I have no idea what this difference would be, or how different handling would cause an issue on my Arch system.
Here my .cabal file (the folder also contains the whole project):
https://github.com/opencog/atomspace/blob/master/tests/haskell/libExecutionOutputTest/opencoglib.cabal
Okay, I figured out a workaround: instead of specifying the library in the .cabal file:
...
extra-libraries: HSrts-ghc7.10.2
...
you add it to your stack.yaml file:
...
ghc-options:
  package-name: -lHSrts-ghc7.10.2
...
If you also have an executable defined in your .cabal file, this will break it, since the flag is applied not only to the library but to the executable as well. And including the runtime library in an executable results in an instant segmentation fault.
I am new to Yocto and to developing drivers. I got the source code (altera_driver.c and a Makefile) for a driver, but I have no idea how to compile it to get an altera_driver.ko file so that I can load the driver and use it.
The version of yocto kernel is 3.0.32-yocto-standard which I got from terminal command uname -r.
Please help me in compiling the drivers. Thank you.
I suggest you read the Yocto Kernel Development Manual (the link is to the current version: you should use the one for your Yocto release). If the only thing you have is an out-of-tree module, see section 2.5.2, which explains how to create a recipe for your driver.
The short version is: make a copy of the example recipe, add your sources in the files/-directory, modify the Makefile to build your sources... but read the manual, it's pretty good.
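For illustration, a minimal recipe along the lines of Yocto's hello-mod example might look like this; the file names, licence, and checksum below are assumptions you would replace with your own:

```
# altera-driver.bb -- hypothetical out-of-tree module recipe
SUMMARY = "Altera out-of-tree kernel module"
LICENSE = "GPLv2"
LIC_FILES_CHKSUM = "file://COPYING;md5=<checksum of your licence file>"

inherit module

SRC_URI = "file://Makefile \
           file://altera_driver.c \
           file://COPYING \
          "

S = "${WORKDIR}"
```

The `inherit module` line is what pulls in the kernel-module build logic; the Makefile in files/ just needs to follow the usual obj-m pattern so it builds against the kernel that Yocto provides.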
Also, the Kernel Lab may be useful: it mostly talks about working with a full kernel but also covers an out-of-tree module example (lab 4).
I have a project written in C that I am porting to an older system, CentOS release 5.10 (Final).
For our newer system, Fedora 20, we are using apr-1.5.0; this won't work on CentOS, as I get link errors there:
tools/apr/libs/libapr-1.so: undefined reference to `memcpy@GLIBC_2.14'
tools/apr/libs/libapr-1.so: undefined reference to `epoll_create1@GLIBC_2.9'
tools/apr/libs/libapr-1.so: undefined reference to `dup3@GLIBC_2.9'
tools/apr/libs/libapr-1.so: undefined reference to `accept4@GLIBC_2.10'
So I downloaded the older apr-1.2.7 libraries and headers and I compile and link with them and everything works OK.
However, I am using CMake, and I have to adjust the path every time I switch between operating systems.
For CentOS I have to use this:
link_directories(${PROJECT_SOURCE_DIR}/tools/apr-1_2_7/libs)
And for a newer system I have to modify and use this:
link_directories(${PROJECT_SOURCE_DIR}/tools/apr/libs)
I am just wondering if there is any way CMake can detect the system and then use the appropriate libraries, along these lines:
if(CENTOS_5_10)
  link_directories(${PROJECT_SOURCE_DIR}/tools/apr-1_2_7/libs)
else()
  link_directories(${PROJECT_SOURCE_DIR}/tools/apr/libs)
endif()
I was thinking of creating a toolchain file, but I think that would be overkill for such a small thing.
I cannot use the apr packages installed via yum, as there is no guarantee that the libraries and headers have been installed.
Many thanks for any suggestions.
You're doing it wrong(tm).
See the docs:
http://www.cmake.org/cmake/help/v2.8.12/cmake.html#command:link_directories
You should be using find_library instead, with hints of where to look for the library.
You can then put such a thing in a Find-module.
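A hedged sketch of that approach (the target name and hint paths are made up, and you would still decide per platform which hint should win, e.g. by ordering the HINTS or testing the found candidate):

```
# Look for libapr-1 in the bundled per-platform directories;
# find_library caches the first match in APR_LIBRARY.
find_library(APR_LIBRARY
    NAMES apr-1
    HINTS "${PROJECT_SOURCE_DIR}/tools/apr/libs"
          "${PROJECT_SOURCE_DIR}/tools/apr-1_2_7/libs")

if(NOT APR_LIBRARY)
    message(FATAL_ERROR "libapr-1 not found")
endif()

# Link against whatever copy was found instead of hard-coding a path.
target_link_libraries(myapp "${APR_LIBRARY}")
```

Moving this into a FindAPR.cmake module keeps the per-system logic out of your main CMakeLists.txt.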
I am trying to get memcached running on Windows. I have downloaded the latest stable memcached and compiled it using MinGW under Windows 7. configure failed with the error:
checking for libevent directory... configure: error: libevent is
required. You can get it from http://www.monkey.org/~provos/libevent/
If it's already installed, specify its path using --with-libevent=/dir/
Then I downloaded libevent and compiled it. This produced 3 DLLs, libeventcore, libevent-extra and libevent-2.0.5.
I ran configure on memcached again with the option --with-libevent. But for some reason it fails again with the same error. I have no clue why it is failing. Can anyone help me resolve this issue? Or is there a better way to get memcached running on Windows? I have seen lots of pre-built binaries for Windows, but all of them use old versions of memcached. And AFAIK, Windows is officially supported by memcached in the newer versions.
I am using the Windows 7 64-bit version with MinGW.
After you run make in the libevent dir, the files are ready, but to make full use of them they must be installed, so a make install step is needed. If you configured with a prefix, they will land in the directory of your choice; otherwise it is /usr/local.
So maybe it's enough to run make install in the libevent dir and then run memcached's configure without parameters.
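A hedged sketch of that sequence (directory names and versions are assumptions, and on MinGW you would run this from the MSYS shell):

```
# Build and install libevent, then point memcached's configure at it.
cd libevent-2.0.5
./configure --prefix=/usr/local
make
make install

cd ../memcached
./configure --with-libevent=/usr/local
make
```

The --with-libevent argument should name the install prefix (the directory containing include/ and lib/), not the libevent source tree.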
If you still have problems passing the configure stage, look at config.log. It shows the source file and the gcc command on which it failed.
Unfortunately, a successful configure is not everything. Later it fails on the inclusion of sys/socket.h, netinet/in.h and netdb.h, and perhaps also on the -pthread gcc parameter. I'm afraid it won't compile on MinGW, at least not without a serious porting effort.
As far as I know, there has never been an official memcached port for Windows (yes, there have been a few individual efforts; the last known porting effort I can find is for version 1.2.6, here). The best-known implementation of memcached for Windows is Couchbase with a Memcached bucket.
Late to the party, I realize, but the answer is to use:
$ export LIBS=-lws2_32
which will place $LIBS at the end of the compile calls so that they are linked against libws2_32.a (Winsock2). But this probably means that you did not configure your build correctly, and you will hit subsequent errors such as #include <sys/socket.h> header not found, etc.
see mingw-linker-error-winsock
gcc 4.4.4 c89 Fedora 13
I am wondering which is better. To give you a couple of examples: the Apache Portable Runtime and log4c.
The apr version in my fedora repository is 1.3.9. The latest stable version on the apr website is 1.4.2.
Questions
Would it be better to download from the website and install, or to install using yum?
When you install from yum it can sometimes scatter things across many directories; when installing from the tarball you can put the includes and libraries where you want.
For log4c the versions are the same, as this is an old project.
I downloaded log4c using yum. I copied all the includes and libraries to my development project directory.
i.e.
project_name/tools/log4c/inc
project_name/tools/log4c/libs
However, I noticed that I had to look for some headers in the /usr/include directory.
Many thanks for any suggestions,
If the version in your distribution's package repository is recent enough, just use that.
Advantages are automatic updates via your distribution, easy and fast installs (including the automatic fetching and installing of dependencies!) and easy removal of packages.
If you install stuff from .tar.gz by yourself, you have to play distributor on your own: keep track of security issues and bugs.
Using distribution packages, you still have to keep an eye on security problems, but the distributor does a lot of the work for you (developing patches, repackaging, testing, and catching serious issues). Of course each distributor has a policy for how to deal with different classes of issues in different package repositories. But with your own .tar.gz installs you have none of this.
It's an age-old question I think. And it's the same on all Linux distributions.
The package is created by someone who has an opinion as to where stuff should go. You may not agree, but by using a package you are spared chasing down all the dependencies needed to compile and install the software.
So for full control: roll your own, but be prepared for the extra work;
otherwise, use the package.
My view:
Use packages until it's impossible to do so (conflicts, compile parameters needed, ...). I'd much rather spend time getting the software to work for me than spend time compiling.
I usually use the packages provided by my distribution if they are of a new enough version. There are two reasons for that:
1) Someone will make sure that I get new packages if security vulnerabilities in the old ones are uncovered.
2) It saves me time.
When I set up a development project, I never create my own include/lib directories unless the project itself is the authoritative source for the relevant files I put there.
I use pkg-config to provide the locations of the necessary libraries and include files to my compiler. pkg-config uses .pc files as its source of information about where things are supposed to be, and these are maintained by the same people who create the packages for your distribution. Some libraries do not provide this file but instead ship an alternative '-config' script. I'll provide two examples:
I'm not running Fedora 13, but an example on Ubuntu 10.04 would be;
*) Install liblog4c-dev
*) The command "log4c-config --libs" returns "-L/usr/lib -llog4c" ...
*) The command "log4c-config --cflags" returns "-I/usr/include"
And for an example using pkg-config (I'll use SDL for the example):
*) Install libsdl1.2-dev
*) The command "pkg-config sdl --libs" returns "-lSDL"
*) The command "pkg-config sdl --cflags" returns "-D_GNU_SOURCE=1 -D_REENTRANT -I/usr/include/SDL"
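The output of these scripts is typically spliced straight into the compile command; a hedged one-liner (the source and output file names here are made up):

```
gcc main.c $(pkg-config sdl --cflags --libs) -o sdl_app
```

The same pattern works with log4c-config in place of pkg-config sdl.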
... So even if another distribution decides to put things in different paths, there are scripts that are supposed to give you a reliable answer as to where things are, so things can be built on most distributions. Autotools (automake, autoconf, and the like) and CMake are quite helpful for making sure that you don't have to deal with these problems.
If you want to build something that has to work with the Apache that's included with Fedora, then it's probably best to use the apr version in Fedora. That way you get automatic security updates etc. If you want to develop something new yourself, it might be useful to track upstream instead.
Also, the headers that your distro provides should normally be found by gcc & co. without you needing to copy them, so it doesn't matter where they are stored by yum/rpm.