installing hackage package with stack (not in LTS or nightly) - cabal

I'm getting started with stack, and I'm not entirely sure how to pull in a package that appears in hackage but not in the curated builds.
In particular, I'd like to pull in thrift-0.10.0. It seems I can't specify it in my project.cabal file, nor does the extra-deps section work since there is no resolver that contains this package.
When I run stack install thrift-0.10.0, I receive the following error:
While constructing the build plan, the following exceptions were encountered:
In the dependencies for thrift-0.10.0:
vector-0.11.0.0 must match ==0.10.12.2 (latest applicable is 0.10.12.2)
I'm not really sure (a) what stack install does, and (b) how to resolve the build plan, since the thrift package specifies an equality constraint (==) on the vector-0.10.12.2 package. If I try to include the relevant vector == 0.10.12.2 in my package.cabal, that also fails. Do I need to specify an earlier resolver?
I realize I have much to learn about this build tool, but in this case, my primary question, for which no documentation seems readily available is:
how do I include a hackage package in my stack build?

nor does the extra-deps section work since there is no resolver that contains this package.
extra-deps can contain any hackage package.
(a) what stack install does
It just does a build of the package plus copying of its executables to ~/.local/bin.
stack install shouldn't be used for dependencies; it should be used for your local project or for applications from Hackage (packages with executables). There is no benefit to installing the dependencies of your project; instead, they should be specified in stack.yaml.
(b) how to resolve the build plan since the thrift package specifies an equality (==) on the vector-0.10.12.2 package.
It seems really ugly for the thrift package to have an (==) constraint like that. To get around it, either set "allow-newer: true" in your stack.yaml (which causes constraints to be ignored), or, probably better, add `vector-0.10.12.2` to your extra-deps.
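For example, a stack.yaml along these lines should work (the resolver shown is only illustrative; keep whichever one you are already using, and list thrift in your .cabal build-depends as usual):
resolver: lts-7.14
extra-deps:
- thrift-0.10.0
- vector-0.10.12.2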
for which no documentation seems readily available is:
how do I include a hackage package in my stack build?
See this section of docs: https://docs.haskellstack.org/en/stable/GUIDE/#external-dependencies

Related

C app deployment and managing dependencies in C

I'm new to C development, but I have some experience in other modern languages, so the first thing I found hard is dependencies and deployment; we have Gradle, Maven, NuGet, pip and so on, but in C I find it a bit difficult to manage this process.
For example, I have an app that should use mongo-c-library, log4c and libarchive. So basically, in my development environment, I downloaded and unzipped the tar files of the above libraries, then followed their instructions (usually some make steps) and installed them in order to include them in the code and make it work.
I have studied CMake a bit, but I couldn't get a clear picture of how it could actually solve the problem.
At this moment my best solution is to create an install bash script, zip all of the unpacked dependency folders together with that install script, and then send the archive to the production server to deploy it.
1. The first question is: is it possible to just copy and paste all of the .so, .h, etc. files from /path/of/installed/dependencies/include and /path/of/installed/dependencies/lib into the destination server's library path?
2. If not, what is a faster way?
While I was browsing the CMake source tree I found that its developers just use these packages' source code directly:
cmxxx contains the xxx sources and header files.
3. How can apt-get and the Linux package managers help in the deployment process?
The first two questions were more about dependencies. Imagine we have a simple C app and we want to install it (build it and produce a usable executable) quickly; how does that relate to .deb packages?
1. The first question is: is it possible to just copy and paste all of the .so, .h, etc. files from /path/of/installed/dependencies/include and /path/of/installed/dependencies/lib into the destination server's library path?
Yes, technically it's possible. That's essentially what package managers do under the hood. However, doing that is a colossal mistake and screams bad practice. If that's what you want, then at the very least you should look into package managers to build your own installer, which already handles this sort of thing for you.
2. If not, what is a faster way?
You're actually asking an entirely different question, which is: how should I distribute my code, and how do I expect users to use/deploy it?
If you want users to access your source code and build it locally, then, since you've mentioned CMake, you just need to set up your project properly, as CMake already supports that use case.
If instead you just want to distribute binaries for a platform, then you'll need to build and package that code. Again, CMake can help you there, as CMake's CPack supports generating several package types, including the DEB packages used by Debian and Ubuntu and handled by apt.
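As a rough sketch (the target and package names here are illustrative, not from the question), enabling CPack's DEB generator from CMakeLists.txt looks roughly like this:
install(TARGETS myapp RUNTIME DESTINATION bin)                # stage what should ship in the package
set(CPACK_GENERATOR "DEB")                                    # produce a Debian package
set(CPACK_DEBIAN_PACKAGE_MAINTAINER "You <you@example.com>")  # required by the DEB generator
include(CPack)
Running cpack (or cpack -G DEB) from the build directory should then emit a .deb you can hand to dpkg/apt.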
3. How can apt-get and the Linux package managers help in the deployment process?
apt is designed to download and install packages from a repository.
Under the hood, apt uses DEB packages, which can be installed with dpkg.
If you're targeting a system that uses apt/deb, you can build DEB packages whenever you release a version, to allow people to install your software.
You can also go a step beyond and release your DEB packages in a Personal Package Archive.
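For example (the file name is illustrative), a locally built package can be installed on the target machine with dpkg, letting apt pull in anything missing:
sudo dpkg -i myapp_1.0.0_amd64.deb   # install the local .deb
sudo apt-get install -f              # resolve and install any missing dependencies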
You would typically NOT download and install source packages. Instead you should generally rely on the libraries and development packages of the distribution. When building your own package you would typically just reference the packages or files that your package is dependent on. Then you build your own package and you're done. Upon installation of your package, all dependencies will automatically be resolved in an appropriate order.
What exactly needs to be done is dependent on the package management system, but generally the above statements apply. Be advised, package management apparently is pretty hard, because so many 3rd party developers screw it up.
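To make that concrete, a minimal binary-package control file (a sketch; the package and dependency names are placeholders) declares the run-time dependencies so dpkg/apt can resolve them on installation:
# DEBIAN/control inside the staged package directory
Package: myapp
Version: 1.0.0
Architecture: amd64
Maintainer: You <you@example.com>
Depends: liblog4c3, libarchive13
Description: Example app that relies on distribution-provided libraries
Building it is then a matter of dpkg-deb --build on the staging directory.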

Conan cannot find a certain package for the specified settings, options and dependencies

I am working on a small C executable project using Jetbrains CLion 2019.3, MinGW 8.1, and also the Conan C/C++ Package Manager 1.21.1. I am refreshing my knowledge about C and want to learn about new tools like Conan. My main development environment is Windows, but this project is intended to be cross-platform; I would like to be able to build and run the application on Linux/Unix as well.
Since my application needs to compute signatures using HMACSHA1, I want to use the OpenSSL library, so I added the OpenSSL/1.1.1a@conan/stable package to the requires section of my conanfile.txt file, and I also created a Conan profile for MinGW that has the following options:
toolchain=$MINGW64_PATH
target_host=x86_64-w64-mingw32
cc_compiler=gcc
cxx_compiler=g++
[env]
CONAN_CMAKE_FIND_ROOT_PATH=$toolchain
CHOST=$target_host
AR=$target_host-ar
AS=$target_host-as
RANLIB=$target_host-ranlib
CC=$target_host-$cc_compiler
CXX=$target_host-$cxx_compiler
STRIP=$target_host-strip
RC=$target_host-windres
[settings]
os_build=Windows
arch_build=x86_64
# We are cross-building to Windows
os=Windows
arch=x86_64
compiler=gcc
# Adjust to the gcc version of your MinGW package
compiler.version=8.1
compiler.libcxx=libstdc++11
build_type=Release
The MINGW64_PATH points to my MinGW installation folder.
When running conan install it complains about a missing package (obviously a dependency package of OpenSSL) that does not exist:
zlib/1.2.11@conan/stable: WARN: Can't find a 'zlib/1.2.11@conan/stable' package for the specified settings, options and dependencies:
- Settings: arch=x86_64, build_type=Release, compiler=gcc, compiler.version=8.1, os=Windows
- Options: minizip=False, shared=False
- Dependencies:
- Package ID: eb34f13b437ddfd63abb1f884c4b8886c48b74cd
ERROR: Missing prebuilt package for 'zlib/1.2.11@conan/stable'
Try to build it from sources with "--build zlib"
Or read "http://docs.conan.io/en/latest/faq/troubleshooting.html#error-missing-prebuilt-package"
Since I am a noob using Conan, I have no clue how I can fix this problem. What needs to be done to fix this issue, and also can I fix this on my own, or do I need help from the package author?
I found a description of the Missing prebuilt package error at https://docs.conan.io/en/latest/faq/troubleshooting.html#error-missing-prebuilt-package, but it does not help much.
so I added the OpenSSL/1.1.1a@conan/stable package to the requires
That package is obsolete; you can check it on the Conan Community repository. You should try openssl/1.1.1a@ instead, which is maintained by the new Conan Center Index.
conan install openssl/1.1.1d@
Where is the namespace (the @user/channel part)? It has been removed; take a look at the Conan documentation for more information about recipes.
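In practice your conanfile.txt entry then becomes just name/version, roughly like this (a sketch; the generator and the profile name "mingw" are assumptions):
[requires]
openssl/1.1.1d

[generators]
cmake
Then, from your build directory, something like conan install .. --profile mingw --build missing should fetch or build everything.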
Since I am a noob using Conan, I have no clue how I can fix this problem. What needs to be done to fix this issue, and also can I fix this on my own, or do I need help from the package author?
As the FAQ recommends, you should build it yourself, running the command proposed by the error message:
conan install openssl/1.1.1a@ --build zlib
But I'm sure that won't be enough; you will need to build OpenSSL too. So the best approach in your situation is:
conan install openssl/1.1.1a@ --build missing
Now, Conan will build from sources anything that is not pre-built on the server side.
To summarize, this is not an error in the sense that something is broken.
When you asked for OpenSSL 1.1.1a, Conan found the recipe on Conan Center, which explains how to build OpenSSL; however, it didn't find a pre-built package matching your settings and options.
Well, MinGW is not used in Conan Center Index because there is not enough demand; all supported platforms and configurations are listed in the Wiki. But this specific recipe should support MinGW, since when it was part of Conan Community, MinGW was present in the package lists for building.
I would say you can use 1.1.1d instead, which is newer and safer than 1.1.1a.

Is it possible to install external dependencies for an R package on download

I have an R package with certain dependencies to C libraries on github, currently without the dependency C libraries inside the repo.
Normally, I would install R packages from GitHub with the following:
install.packages("devtools")
library(devtools)
install_github("github_repo/package_name")
All of the C code used by the R package is naturally located inside the subfolder package_name/src. But I'm confused how to release the C library dependencies necessary for the R package to work.
Based on the documentation in "Writing R Extensions" (https://cran.r-project.org/doc/manuals/r-release/R-exts.html), these dependencies should be listed:
"Dependencies external to the R system should be listed in the ‘SystemRequirements’ field, possibly amplified in a separate README file."
That makes sense. And I could put in the README how to install these C library dependencies, or even put the libraries in the github repo (if they are not too large).
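For illustration, the relevant DESCRIPTION fields might look something like this (reusing the placeholder library names from the Dockerfile below; SystemRequirements is free-form text):
Package: package_name
Version: 0.1.0
SystemRequirements: require_c_library1, require_c_library2,
    require_c_library3 (>= 1.0)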
However, this could easily become a mess for people to download, which is why I like Docker files, i.e. within Dockerfile, I would add the following:
RUN apt-get update && apt-get install -y \
make \
clang \
require_c_library1 \
require_c_library2 \
require_c_library3
Is it possible to install these C library dependencies in such a manner before the R package installs (i.e. R CMD INSTALL calls R CMD SHLIB, which compiles the C code and links the C dependencies referenced in Makevars)?
Or is the only option to (1) put each and every C dependency in the R package to be downloaded and compiled at devtools::install_github("github_repo/package_name") or (2) ask users to install all of these dependencies in the README, and hope they do it correctly (and don't e-mail me endlessly)?
There could be something I'm not understanding here, so please correct me
tl;dr: I wish!
There were earlier, related questions. In essence you desire to have the (well-honed) CRAN dependencies (which work in a well-defined universe) become more universal across the host operating systems---of which there are too many, and too many variants.
Take, for example, libraries for XML, PostgreSQL, PNG or JPEG. Their names (and versions) will differ across operating systems, so this, sadly, is really really hard.
[ I do have a package, RcppAPT, which allows you to query apt's cache from R, but that only addresses the flow 'the other way' -- and it is of course only useful for a subset of users, as it is of no use to folks on Windows, macOS, RH/CentOS and so on. ]

Quicklisp Libraries

I am currently running SBCL with quicklisp. I found an old project that I was trying to load with
(ql:quickload "project")
and I get the dependency error SYSTEM FILE-IO NOT FOUND. The dependencies in my project.asd file are:
:depends-on (#:file-io #:cl-ppcre #:logv #:cl-mustache #:local-time
#:rutils #:alexandria)
None of the other dependencies give me any trouble; logv seems to be a discontinued log viewer, but I can't find anything concerning "file-io" in https://www.quicklisp.org/beta/releases.html. Is it just another discontinued library? Any ideas/advice would be appreciated.
The code provided by file-io only deals with slurping and spitting files. You can safely download the system from GitHub and install it in Quicklisp's "local-projects" directory. Alternatively, you can use the equivalent UIOP functions, which are well supported and available in most distributions.
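For example (the repository URL is a placeholder for wherever file-io actually lives):
git clone https://github.com/<user>/file-io ~/quicklisp/local-projects/file-io
and then, back at the REPL:
(ql:register-local-projects)   ; refresh the local-projects index if the new system isn't picked up automatically
(ql:quickload "project")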

What is better: downloading libraries from repositories or installing from *.tar.gz

gcc 4.4.4 c89 Fedora 13
I am wondering which is better. To give you a couple of examples: the Apache Portable Runtime (apr) and log4c.
The apr version in my Fedora repository is 1.3.9. The latest stable version on the apr website is 1.4.2.
Questions
Would it be better to download from the website and install, or install using yum?
When you install from yum sometimes it can put things in many directories. When installing from the tarball you can put the includes and libraries where you want.
For log4c the versions are the same, as it is an old project.
I downloaded log4c using yum. I copied all the includes and libraries to my development project directory.
i.e.
project_name/tools/log4c/inc
project_name/tools/log4c/libs
However, I noticed that I had to look for some headers in the /usr/include directory.
Many thanks for any suggestions,
If the version in your distribution's package repository is recent enough, just use that.
Advantages are automatic updates via your distribution, easy and fast installs (including the automatic fetching and installing of dependencies!) and easy removals of packages.
If you install stuff from .tar.gz by yourself, you have to play distributor yourself - keep track of security issues and bugs.
Using distribution packages, you still have an eye on security problems, but the distributor does a lot of the work for you (like developing patches, repackaging, testing and catching serious issues). Of course each distributor has a policy for how to deal with different classes of issues in different package repositories. But with your own .tar.gz installs you get none of this.
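On Fedora that typically means installing the -devel packages, for example (exact package names may vary):
yum install apr-devel log4c-devel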
It's an age-old question I think. And it's the same on all Linux distributions.
The package is created by someone - that person has an opinion as to where stuff should go. You may not agree - but by using a package you are spared chasing down all the dependencies needed to compile and install the software.
So for full control: roll your own - but be prepared for the extra work.
Otherwise, use the package.
My view:
Use packages until it's impossible to do so (conflicts, compile parameters needed, ...). I'd much rather spend time getting the software to work for me than spend time compiling.
I usually use the packages provided by my distribution, if they are of a new enough version. There are two reasons for that:
1) Someone will make sure that I get new packages if security vulnerabilities in the old ones are uncovered.
2) It saves me time.
When I set up a development project, I never create my own include/lib directories unless the project itself is the authoritative source for the relevant files I put there.
I use pkg-config to provide the location of necessary libraries and include files to my compiler. pkg-config uses .pc files as a source of information about where things are supposed to be, and these are maintained by the same people who create the packages for your distribution. Some libraries do not provide this file, but an alternative '-config' script instead. I'll provide two examples:
I'm not running Fedora 13, but an example on Ubuntu 10.04 would be;
*) Install liblog4c-dev
*) The command "log4c-config --libs" returns "-L/usr/lib -llog4c" ...
*) The command "log4c-config --cflags" returns "-I/usr/include"
And for an example using pkg-config (I'll use SDL for the example):
*) Install libsdl1.2-dev
*) The command "pkg-config sdl --libs" returns "-lSDL"
*) The command "pkg-config sdl --cflags" returns "-D_GNU_SOURCE=1 -D_REENTRANT -I/usr/include/SDL"
... So even if another distribution decides to put things in different paths, there are scripts that are supposed to give you a reliable answer to where things are - so things can be built on most distributions. Autotools (automake, autoconf, and the like) and CMake are quite helpful for making sure that you don't have to deal with these problems.
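Putting that together, a compile line then stays distribution-agnostic, for example (a sketch, reusing the SDL example above with a placeholder source file):
gcc -o myprog main.c $(pkg-config sdl --cflags --libs)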
If you want to build something that has to work with the Apache that's included with Fedora, then it's probably best to use the apr version in Fedora. That way you get automatic security updates etc. If you want to develop something new yourself, it might be useful to track upstream instead.
Also, normally the headers that your distro provides should be found by gcc & co. without you needing to copy them, so it doesn't matter where they are stored by yum/rpm.
