I have an application that uses libnl. It can use either version (1 or 3); during configure it first tries libnl3 and falls back to libnl-1 if libnl3 is not found.
My app uses another library that also uses libnl.
The problem is that I only have libnl1-dev on my machine, so my app must use libnl-1.
But the library that I use uses libnl3 (it was installed with yum; I guess it's statically linked),
so I end up with both versions and my application crashes!
Here is some output:
ldd myapp.so | grep libnl
libnl.so.1 => /lib64/libnl.so.1 (0x00007fda33eb5000)
libnl-route-3.so.200 => /lib64/libnl-route-3.so.200 (0x00007fda32a3d000)
libnl-3.so.200 => /lib64/libnl-3.so.200 (0x00007fda3281b000)
yum list | grep libnl
libnl.x86_64 1.1.4-3.el7
libnl-devel.x86_64 1.1.4-3.el7
libnl3.x86_64 3.2.28-2.el7
libnl3-cli.x86_64 3.2.28-2.el7
libnl.i686 1.1.4-3.el7
libnl-devel.i686 1.1.4-3.el7
libnl3.i686 3.2.28-2.el7
libnl3-cli.i686 3.2.28-2.el7
If I install libnl3-devel, it fixes the issue.
Is there another solution?
There are other solutions, but the bottom line is that you can only have libnl.so.1 or libnl-3.so.200, but not both.
Fixing this by "going all in on libnl-3" is the simplest solution.
The alternative is to "go all in on libnl-1", which means rebuilding anything that requires libnl-3 from source (against libnl-1). This assumes that your other dependencies can be built against libnl-1 at all (which is by no means guaranteed).
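As a sanity check either way, a minimal shell sketch (assuming the same myapp.so as in the ldd output above) can catch a mixed link before the crash does:

# Hypothetical guard: fail fast if both libnl generations are pulled in
if ldd myapp.so | grep -q 'libnl\.so\.1' && ldd myapp.so | grep -q 'libnl-3\.so'; then
    echo "ERROR: myapp.so links against both libnl-1 and libnl-3" >&2
    exit 1
fi

Running this after the build (or in CI) makes the conflict visible immediately.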
I'm attempting to make a Lisp package with uiop/package:define-package. I'm using SBCL, and have confirmed that package-local nicknaming ought to be supported:
* *features*
(:QUICKLISP :ASDF3.3 :ASDF3.2 :ASDF3.1 :ASDF3 :ASDF2 :ASDF :OS-UNIX
:NON-BASE-CHARS-EXIST-P :ASDF-UNICODE :X86-64 :GENCGC :64-BIT :ANSI-CL
:COMMON-LISP :ELF :IEEE-FLOATING-POINT :LINUX :LITTLE-ENDIAN
:PACKAGE-LOCAL-NICKNAMES :SB-CORE-COMPRESSION :SB-LDB :SB-PACKAGE-LOCKS
:SB-THREAD :SB-UNICODE :SBCL :UNIX)
* (uiop:featurep :package-local-nicknames)
T
Nevertheless, when I try to define a package that has local nicknames, it doesn't work:
(uiop/package:define-package #:foo
  (:use #:cl)
  (:local-nicknames (#:b #:binparse)))
debugger invoked on a SIMPLE-ERROR in thread
#<THREAD "main thread" RUNNING {1001878103}>:
unrecognized define-package keyword :LOCAL-NICKNAMES
Type HELP for debugger help, or (SB-EXT:EXIT) to exit from SBCL.
restarts (invokable by number or by possibly-abbreviated name):
0: [ABORT] Exit debugger, returning to top level.
(UIOP/PACKAGE:PARSE-DEFINE-PACKAGE-FORM #:FOO ((:USE #:CL) (:LOCAL-NICKNAMES (#:B #:BINPARSE))))
source: (ERROR "unrecognized define-package keyword ~S" KW)
0] 0
(binparse is another package I've made; it worked fine, but it did not happen to use local nicknaming.)
What I've found of the uiop/package source seems to indicate that this shouldn't happen. Going by that, it should either work, or fail with a specific error message indicating that local nicknames are not supported (if uiop:featurep were somehow inaccurate or changing), but it shouldn't give a generic unknown-keyword error. At this point I'm not sure what I could be getting wrong.
The version of ASDF included in releases of SBCL is based on ASDF 3.3.1 (November 2017), except bundled into only two (larger) Lisp files (one for asdf and one for uiop) rather than broken up by purpose as in official releases of ASDF. ASDF added #+sbcl support for package-local nicknames in 3.3.3.2 (August 2019), and switched to the more general #+package-local-nicknames in 3.3.4.1 (April 2020); the latest release version is 3.3.4, though, so that wouldn't be in yet anyway. So it's "just" a delay in pulling from upstream. Following the instructions on upgrading ASDF did the trick: extract the latest release tarball into ~/common-lisp/asdf, run (load (compile-file #P"~/common-lisp/asdf/build/asdf.lisp")) once, and future shells will use the updated version.
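In REPL form, the one-time upgrade and a quick check look like this (the version string depends on which release you extracted; asdf:asdf-version is standard ASDF):

;; One-time, after extracting the release tarball into ~/common-lisp/asdf
(load (compile-file #P"~/common-lisp/asdf/build/asdf.lisp"))

;; In a fresh image afterwards, verify the upgrade took:
(asdf:asdf-version)  ; => "3.3.4" or later, and :local-nicknames now parses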
gitolite info didn't work; adding keys turned them into no-access keys and did NOT create corresponding entries in the authorized_keys file.
To fix this, run gitolite setup on the gitolite server.
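Concretely, a quick sketch of the fix and its verification (run as the hosting user on the server; sshkeys-lint is the same diagnostic mentioned below):

gitolite setup          # recompiles the conf, rewrites authorized_keys, fixes hooks
gitolite sshkeys-lint   # check which keys now map to which users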
Question: what could have landed me in that mess?
And what does gitolite setup do when invoked for the n-th time (it's no longer setting things up, according to the docs it fixes hooks, but I wonder what the use case would be and which was mine)?
More details on gitolite info
The gitolite info command is invoked like so:
> ssh git-user@ser-git
PTY allocation request failed on channel 0
hello git-admin, this is ...@... running gitolite3 3.6.7-2 (Debian) on git 2.17.1
R W some-repository
R W gitolite-admin
R W testing
Connection to ser-git closed.
The bad output was: FATAL: unknown git/gitolite command: 'info'
More details: keys without access.
gitolite sshkeys-lint was showing keys with (no access); now (meaning after gitolite setup) those keys have the access I set for them.
ssh-keygen -lf /home/repo/.ssh/authorized_keys | wc -l (or without the piped part, regardless) showed by the number of keys and their names that the newest one had not been added.
Similar question that did not work for me: keydir entries not propagating to authorized_keys
The docs pretty much had the answer once I dug deeper, I guess. Which is fairly nice of @sitaramc.
Without options, 'gitolite setup' is a general "fix up everything" command
(for example, if you brought in repos from outside, or someone messed
around with the hooks, or you made an rc file change that affects access
rules, etc.)
Symptoms: keys stopped propagating, and ssh git-user@ser-git failed with FATAL: unknown git/gitolite command: 'info'. The fix was to run gitolite setup. So, on to the first question, the title one:
what does gitolite setup fix?
gitolite setup is implemented here
My Perl is rather weak, but there's a setup function at line 56. It calls args (which parses options; here there was nothing to parse). Since h_only (the hooks-only arg for setup) wasn't used, the unless block runs: it sets up the rc file and the admin repo, runs gitolite compile and the POST_COMPILE trigger, and then goes for the hooks in any case.
sub setup {
    my ( $admin, $pubkey, $h_only, $message ) = args();    # parse command-line options
    unless ($h_only) {                                     # skipped only for hooks-only runs
        setup_glrc();                                      # (re)write the rc file
        setup_gladmin( $admin, $pubkey, $message );        # (re)create the admin setup
        _system("gitolite compile");
        _system("gitolite trigger POST_COMPILE");
    }
    hook_repos();    # all of them, just to be sure
}
Package Gitolite::Conf::Store has hook_repos(), line 228: we change to the repo base dir (as per the config file), and for each phy_repo we do hook_1(phy_repo). What is a phy_repo? A physical one.
Same package, different sub and line: hook_1($repo) at line 354.
Method hook_1($repo)
It's quite literally about fixing all the hooks.
Recreates dirs for common and admin hooks.
Rewrites update_hook (common) and post_update_hook (admin).
Sets 755 permissions for both common and admin hooks.
Then, using ln_sf, it symlinks the folders for common/admin hooks.
ln_sf is in the common module, at line 162.
Running chef-solo (Chef Omnibus 12.3) on CentOS 6.6.
My recipe has the following simple code:
package 'cloud-init' do
  action :install
end

log 'rpm-qi' do
  message `rpm -qi cloud-init`
  level :warn
end

log 'yum list' do
  message `yum list cloud-init`
  level :warn
end
But it outputs the following:
- install version 0.7.5-10.el6.centos.2 of package cloud-init
* log[rpm-qi] action write[2015-07-16T16:46:35+00:00] WARN: package cloud-init is not installed
[2015-07-16T16:46:35+00:00] WARN: Loaded plugins: fastestmirror, presto
Available Packages
cloud-init.x86_64 0.7.5-10.el6.centos.2 extras
I am at a loss as to why rpm/yum (and, in fact, rpmquery) don't see the package as installed.
EDIT: To clarify, I am specifically looking for the following file after the package install, so I can then apply a change to it (I understand this is not a very Chef way to do things; I am happy to accept suggestions):
rpmquery -l cloud-init | grep 'distros/__init__.py$'
I have found that by using the following:
install_report = shell_out('yum install -y cloud-init').stdout
cloudinit_source = shell_out("rpmquery -l cloud-init | grep 'distros/__init__.py$'").stdout
I can then get the file I am looking for and perform:
Chef::Util::FileEdit.new(cloudinit_source.chomp(''))
The file's location varies by distribution, but I need to edit that specific file with in-place changes.
Untested code, just to give the idea:
package 'cloud-init' do
  action :install
  notifies :run, 'ruby_block[update_cloud_init]'
end

ruby_block 'update_cloud_init' do
  block do
    # locate the file after install; chomp strips the trailing newline(s)
    cloudinit_source = shell_out("rpmquery -l cloud-init | grep 'distros/__init__.py$'").stdout
    rc = Chef::Util::FileEdit.new(cloudinit_source.chomp(''))
    rc.search_file_replace_line(/^what to find$/,
                                'replacement data for the line')
    rc.write_file
  end
end
The ruby_block example is taken and adapted from here.
I would rather use a template to manage the whole file; what I don't understand is why you don't know where it will be in the first place...
Previous answer
I assume it's a compile vs converge problem: at the time the message is stored (and so your command is executed), the package is not yet installed.
Chef runs in two phases: compile, then converge.
At compile time it builds a collection of resources, and at converge time it executes code for each resource to get it into the described state.
When your log resource is compiled, the ugly backticks are evaluated; at that time there's a package resource in the collection, but that resource has not been executed yet, so the output is correct.
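If you really want the command output in a log resource, one way (a sketch, using Chef's lazy attribute evaluation, available since Chef 11) is to defer the backticks to converge time:

log 'rpm-qi' do
  # lazy {} wraps the value in a DelayedEvaluator, so the command runs
  # at converge time, after package['cloud-init'] has been installed
  message lazy { `rpm -qi cloud-init` }
  level :warn
end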
That said, I don't understand what you want to achieve with those log resources at all.
If you want to test your node's state after the chef run, use a handler, perhaps calling Serverspec as in Test-Kitchen.
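For example, a minimal Serverspec check (assuming a standard Serverspec/Test-Kitchen setup) would be:

describe package('cloud-init') do
  it { should be_installed }
end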
The project I'm working on uses Fabric for many build steps and requires an offline build as a fallback.
I'm currently stuck at installing Python packages provided as tarballs.
The thing is, I have trouble getting into the newly extracted directory and running setup.py install in there.
import os
import tempfile

from fabric.api import cd, env, lcd, put, run, sudo, task
from fabric.contrib import files


@task
def deploy_artifacts():
    """Installs dependencies from local path, useful for offline builds"""
    # TODO: Handle downloading files and do something like this below
    tmpdir = tempfile.mkdtemp()
    artifacts_path = ''
    if 'http' not in env.artifacts_path:
        artifacts_path = env.artifacts_path
    with lcd(artifacts_path):
        for f in os.listdir(artifacts_path):
            if 'gz' in f:
                put(f, tmpdir)
                tar = os.path.join(tmpdir, f)
                target_dir = os.path.join(tempfile.gettempdir(), normalize(f))  # normalize() is a project-local helper
                if not files.exists(target_dir):
                    run('mkdir %s' % target_dir)
                else:
                    run('rm -rf %s' % target_dir)
                    run('mkdir %s' % target_dir)
                run('tar xf %s -C %s' % (tar, target_dir))
                run('rm %s' % tar)
                with cd(target_dir):
                    sudo('python setup.py install')
I've just read the tar man page for the bazillionth time, and I got nowhere near what I want.
Have some of you faced a situation like this? Is there some other (read: better) approach to this scenario?
There's nothing wrong (in principle) with what you're trying to do. Maybe just take smaller steps getting there. Rather than using temporary directories, it might make debugging easier if everything were put in a systematic location with known permissions that nothing else writes to by convention. At least that would let you use some combination of Fabric and manual intervention to check what is going wrong.
In the longer term, there are a few alternatives that I see. For simplicity you want the online and offline versions to work the same way, and that means fetching packages using easy_install / pip for both cases.
One way to do this is to build a mirror of PyPI. The right way to do that, if you've got plenty of storage space (30 GB), is to use software that implements PEP 381 (Mirroring Infrastructure for PyPI); there is already a client that does this (pep381client). A number of other projects do similar things (basketweaver, djangopypi2, chishop).
An alternative is to consider a lighter-weight proxying scheme. I've been looking at pip2pi and pipli. I'm unsure whether they will work directly with easy_install, but it would be worth a try.
It's also worth noting that if you were using pip, you could have installed directly from the tarballs.
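For instance (a sketch; the paths stand in for wherever your artifacts directory is), pip can install a tarball directly, or treat the directory as an offline index:

pip install /path/to/artifacts/somepackage-1.0.tar.gz
pip install --no-index --find-links=/path/to/artifacts somepackage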
Which one is better, the systeminfo or the wmic command, for getting the OS name and OS version on Windows, in these respects:
time taken to get the information
compatibility across Windows versions (XP, Vista and higher)
I wanted to avoid using OSVersionInfoEx in C, as one has to hardcode the marketing-name detection, which adds maintenance work whenever a new flavour of Windows is introduced. Please share your opinion.
GetVersionEx - You can't get any faster than this for getting a basic OS version number. But you are right: you won't be able to map newer versions of the OS to the correct string. Have you considered just doing this?
#include <windows.h>
#include <stdio.h>

#define MAX_OS_LENGTH 256   /* pick a size that fits your longest name */

OSVERSIONINFOEX version = { 0 };
char szOS[MAX_OS_LENGTH];
version.dwOSVersionInfoSize = sizeof(version);
GetVersionEx((OSVERSIONINFO*)&version);
/* MyFunctionToMapVersionToString is your own lookup from version numbers
   to marketing names; fall back to the plain numbers when it has no match */
if (!MyFunctionToMapVersionToString(&version, szOS))
{
    sprintf(szOS, "Microsoft Windows %lu.%lu",
            version.dwMajorVersion, version.dwMinorVersion);
}
WMI - A bit more code to write. But you could likely just do this at app startup (or when it's needed) and cache the result in case the information is needed again. It's not as if the operating system's product name will change after the app has queried it once. :) As for backwards compatibility, I'm sure it works fine on older operating systems... but you are going to test it before shipping to the customer, right?
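If you go the command-line route instead, the equivalent wmic query is a one-liner (Caption and Version are the usual fields; see wmic os get /? for the rest):

wmic os get Caption,Version /value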
If you want an undocumented way, there is a registry key that has exactly what you want in it:
HKEY_LOCAL_MACHINE\Software\Microsoft\Windows NT\CurrentVersion ("ProductName")
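A minimal C sketch reading that value (RegOpenKeyEx/RegQueryValueEx rather than RegGetValue, so it also works on XP; buffer size and error handling kept deliberately simple):

#include <windows.h>
#include <stdio.h>

int main(void)
{
    HKEY key;
    char name[256];
    DWORD size = sizeof(name);
    if (RegOpenKeyExA(HKEY_LOCAL_MACHINE,
                      "SOFTWARE\\Microsoft\\Windows NT\\CurrentVersion",
                      0, KEY_QUERY_VALUE, &key) == ERROR_SUCCESS)
    {
        /* ProductName holds the marketing name you would otherwise hardcode */
        if (RegQueryValueExA(key, "ProductName", NULL, NULL,
                             (LPBYTE)name, &size) == ERROR_SUCCESS)
            printf("%s\n", name);
        RegCloseKey(key);
    }
    return 0;
}

Being undocumented, the key's contents could change between Windows releases, so treat it as a convenience rather than a contract.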