OPAM: Can I work on different switches at the same time in different shells? - opam

This question is addressed in the OPAM FAQ [1], but neither solution provided works with OPAM v1.2.2: the env and exec commands are both unknown. How can I run a single command from a different switch?
[1] https://opam.ocaml.org/doc/2.0/FAQ.html#Can-I-work-on-different-switches-at-the-same-time-in-different-shells

You linked to the FAQ for opam 2.0. Here it is for opam 1.2: https://opam.ocaml.org/doc/FAQ.html#Can-I-work-on-different-switches-at-the-same-time-in-different-shells
🐫 Can I work on different switches at the same time in different shells?
Yes. Use one of:
eval $(opam config env --switch <switch>)        # for the current shell
opam config exec --switch <switch> -- <command>  # for one command
This only affects the environment.
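Since switch selection is carried entirely in environment variables, each shell keeps its own copy; a minimal demonstration of that isolation (the switch names are placeholders, and OPAMSWITCH is used purely as an illustrative variable name):

```shell
# Each subshell gets its own copy of the environment, so two terminals
# can point at different switches without interfering with each other.
(export OPAMSWITCH=4.02.3; echo "subshell A: $OPAMSWITCH")
(export OPAMSWITCH=system; echo "subshell B: $OPAMSWITCH")
# The parent shell never saw either assignment:
echo "parent: ${OPAMSWITCH:-unset}"
```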

Related

CRA Advanced Configurations from Terminal?

How can I set Create React App advanced configuration options from the terminal?
For example: TSC_COMPILE_ON_ERROR
https://create-react-app.dev/docs/advanced-configuration/
On Unix-like shells, such as those on macOS or Linux, you can set an environment variable for a single command by prefixing the command with the assignment:
$ TSC_COMPILE_ON_ERROR=true yarn start
or
$ TSC_COMPILE_ON_ERROR=true npm start
On Windows you can use the Windows Subsystem for Linux to get the same syntax (or so I'm told).
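The VAR=value command form sets the variable only for that one process; the parent shell is untouched. A quick sketch of the mechanism (TSC_COMPILE_ON_ERROR is used here purely for illustration):

```shell
# The variable is visible to the command being run...
TSC_COMPILE_ON_ERROR=true sh -c 'echo "child sees: $TSC_COMPILE_ON_ERROR"'
# ...but not to the shell afterwards:
echo "parent sees: ${TSC_COMPILE_ON_ERROR:-unset}"
```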

How to fake macports into using my build script for ffmpeg?

I have my own bash script to configure ffmpeg; it successfully updates my git repo, builds, tests, and installs everything correctly.
Basically I want to bypass the MacPorts env and configuration but have it recognize my build, and only for ffmpeg. I don't want MacPorts to look in /usr/local or another location; I want it installed in /opt/local.
So it probably boils down to: how do I completely disable all the MacPorts env and just launch a subshell with my script?
I created the tarball, checksums, etc.:
PortSystem 1.0
PortGroup muniversal 1.0
name ffmpeg
epoch 1
version 9.9.9
revision 9
license LGPL-2.1+
categories multimedia
maintainers nomaintainer
platforms darwin
homepage http://www.ffmpeg.org/
master_sites file:///Volumes/Apps_Media/my_repo
use_zip yes
checksums {they work}
depends_build port:pkgconfig \
port:gmake \
port:texinfo
use_configure no
build.cmd $HOME/bin/configFFMPEG
MacPorts chokes on these lines in ffmpeg's configure:
FFmpeg/configure: line 3596: ffbuild/config.log: Operation not permitted
FFmpeg/configure: line 3597: ffbuild/config.log: Operation not permitted
echo "# $0 $FFMPEG_CONFIGURATION" > $logfile
set >> $logfile
MacPorts runs builds as the macports user, not your normal user or root. Make sure that user can both read your script and write to the locations where your script is trying to write.
Even though you invoke MacPorts with superuser privileges, it will not use these privileges while building software.
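A sketch of the kind of permission fix this implies, assuming the build script sits at $HOME/bin/configFFMPEG (demonstrated on a temp file rather than the real path):

```shell
# Demonstrated on a temp file; the real target would be $HOME/bin/configFFMPEG.
f=$(mktemp)        # stand-in for the build script
chmod 0755 "$f"    # owner rwx, group/other rx: readable by the macports user
stat -c '%a' "$f"  # shows the resulting mode on Linux
rm -f "$f"
```

Note that every directory on the path down to the script must also be traversable (at least o+x) by the macports user.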

PostgreSQL error: initdb: command not found

I was installing PostgreSQL on Ubuntu using Linuxbrew:
brew install postgresql
It seemed to work fine. Since I was installing PostgreSQL for the first time, I then tried creating a database:
initdb /usr/local/var/postgres -E utf8
but it returned:
initdb: command not found
I tried running the command with sudo, but that didn't help.
Run locate initdb; it should give you a list to choose from, something like:
MacBook-Air:~ vao$ locate initdb
/usr/local/Cellar/postgresql/9.5.3/bin/initdb
/usr/local/Cellar/postgresql/9.5.3/share/doc/postgresql/html/app-initdb.html
/usr/local/Cellar/postgresql/9.5.3/share/man/man1/initdb.1
/usr/local/Cellar/postgresql/9.6.1/bin/initdb
/usr/local/Cellar/postgresql/9.6.1/share/doc/postgresql/html/app-initdb.html
/usr/local/Cellar/postgresql/9.6.1/share/man/man1/initdb.1
/usr/local/bin/initdb
/usr/local/share/man/man1/initdb.1
So in my case I want to run
/usr/local/Cellar/postgresql/9.6.1/bin/initdb
If you don't have mlocate installed, either install it or use
sudo find / -name initdb
There's a good answer to a similar question on SuperUser.
In short:
Postgres groups databases into "clusters", each of which is a named collection of databases sharing a configuration and data location, and running on a single server instance with its own TCP port.
If you only want a single instance of Postgres, the installation includes a cluster named "main", so you don't need to run initdb to create one.
If you do need multiple clusters, then the Postgres packages for Debian and Ubuntu provide a different command pg_createcluster to be used instead of initdb, with the latter not included in PATH so as to discourage end users from using it directly.
And if you're just trying to create a database, not a database cluster, use the createdb command instead.
I had the same problem and found the answer here.
The Ubuntu path is
/usr/lib/postgresql/9.6/bin/initdb
Edit: Sorry, Ahmed asked about Linuxbrew; I'm talking about Ubuntu.
I hope this answer helps somebody.
I had a similar issue caused by brew install postgresql not properly linking postgres. The fix for me was to run:
brew link --overwrite postgresql
You can add the directory to PATH so you can run initdb from any location:
sudo nano ~/.profile
At the end of the file, add the following:
# set PATH so it includes user's private bin if it exists
if [ -d "/usr/lib/postgresql/14/bin/" ] ; then
    PATH="/usr/lib/postgresql/14/bin/:$PATH"
fi
and configure the alternative
sudo update-alternatives --install /usr/bin/initdb initdb /usr/lib/postgresql/14/bin/initdb 1
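A small demonstration of why prepending a directory to PATH cures "command not found": the shell resolves a command to the first match in PATH order. (The throwaway directory and stub script below are purely illustrative.)

```shell
# Create a throwaway directory containing a stub "initdb"
dir=$(mktemp -d)
printf '#!/bin/sh\necho stub-initdb\n' > "$dir/initdb"
chmod +x "$dir/initdb"
# Prepending the directory makes the shell find our stub first
PATH="$dir:$PATH" initdb
rm -rf "$dir"
```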

How to disable Linux address space randomization via Dockerfile?

I'm trying to disable randomization via Dockerfile:
RUN sudo echo 0 | sudo tee /proc/sys/kernel/randomize_va_space
but I get
Step 9 : RUN sudo echo 0 | sudo tee /proc/sys/kernel/randomize_va_space
---> Running in 0f69e9ac1b6e
[91mtee: /proc/sys/kernel/randomize_va_space: Read-only file system
any way to work around this? If it's something the kernel does, that means it's outside my container's scope; in that case, how am I supposed to work with gdb inside my container? Please note that my goal is to work with gdb in a container: I'm experimenting with it, so I wanted a container that encapsulates gcc and gdb, which I'll use for my experiments.
Run
sudo echo 0 | sudo tee /proc/sys/kernel/randomize_va_space
on the host, not in Docker.
Docker has syntax for modifying some of the sysctls (not via dockerfile though) and kernel.randomize_va_space does not seem to be one of them.
Since you've said you're interested in running gcc/gdb you could disable ASLR only for these binaries with:
setarch `uname -m` -R /path/to/gcc/gdb
Also see other answers in this question.
Sounds like you are building a container for development on your own computer. Unlike a production environment, you can (and probably should) opt for a privileged container. In a privileged container, sysfs is mounted read-write, so you can control kernel parameters as you would on the host. Here is an example with the Amazon Linux container I use for development on my Debian desktop, which shows the difference:
$ docker run --rm -it amazonlinux
bash-4.2# grep ^sysfs /etc/mtab
sysfs /sys sysfs ro,nosuid,nodev,noexec,relatime 0 0
bash-4.2# exit
$ docker run --rm -it --privileged amazonlinux
bash-4.2# grep ^sysfs /etc/mtab
sysfs /sys sysfs rw,nosuid,nodev,noexec,relatime 0 0
bash-4.2# exit
$
Notice the ro mount in the unprivileged case and rw in the privileged case.
Note that the Dockerfile command
RUN sudo echo 0 | sudo tee /proc/sys/kernel/randomize_va_space
makes no sense. It will be executed (a) during container build time and (b) on the machine where you build the image. You want (a) to happen at the container's run time and (b) on the machine where you run the container. If you need to change sysctls at image start, write a script which does all the setup and then drops you into the interactive shell, e.g. place a script in /root and set it as the ENTRYPOINT:
#!/bin/sh
sudo sysctl kernel.randomize_va_space=0
exec /bin/bash -l
(Assuming you mount the host working directory into /home/jas; that's a good practice, as bash will read your startup files etc.)
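For completeness, the Dockerfile side of this could look roughly like the following; the base image and the entrypoint.sh filename are assumptions, and the script body is the three-line example above:

```dockerfile
# Sketch: bake the setup script in as the ENTRYPOINT
FROM amazonlinux
COPY entrypoint.sh /root/entrypoint.sh
RUN chmod +x /root/entrypoint.sh
ENTRYPOINT ["/root/entrypoint.sh"]
```

Run the image with --privileged so the sysctl call in the script can actually succeed.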
You need to make sure you have the same UID and GID inside the container, and that you can do sudo. How you enable sudo depends on the distro. In Debian, members of the sudo group have unrestricted sudo access, while on Amazon Linux (and, IIRC, other RedHat-like systems) the wheel group has it. Usually this boils down to an unwieldy run command that you'd rather script than type, like
docker run -it -v $HOME:$HOME -w $HOME -u $(id -u):$(id -g) --group-add wheel amazonlinux-devenv
Since your primary UID and GID match the host, files in mounted host directories won't end up owned by root. An alternative is to create a bona fide user for yourself during image build (i.e., in the Dockerfile), but I find this more error-prone, because I could end up running this devenv image somewhere my username has a different UID, and that would cause problems. The use of id(1) in the startup command guarantees a UID match.

Why is my debian postinst script not being run?

I have made a .deb of my app using fpm:
fpm -s dir -t deb -n myapp -v 9 -a all -x "*.git" -x "*.bak" -x "*.orig" \
--after-remove debian/postrm --after-install debian/postinst \
--description "Automated build." -d mysql-client -d python-virtualenv home
Among other things, the postinst script is supposed to create a user for the app:
#!/bin/sh
set -e

APP_NAME=myapp

case "$1" in
    configure)
        virtualenv /home/$APP_NAME/local
        #supervisorctl start $APP_NAME
        ;;
    # http://www.debian.org/doc/manuals/securing-debian-howto/ch9.en.html#s-bpp-lower-privs
    install|upgrade)
        # If the package has a default file it could be sourced, so that
        # the local admin can overwrite the defaults
        [ -f "/etc/default/$APP_NAME" ] && . /etc/default/$APP_NAME

        # Sane defaults:
        [ -z "$SERVER_HOME" ] && SERVER_HOME=/home/$APP_NAME
        [ -z "$SERVER_USER" ] && SERVER_USER=$APP_NAME
        [ -z "$SERVER_NAME" ] && SERVER_NAME=""
        [ -z "$SERVER_GROUP" ] && SERVER_GROUP=$APP_NAME

        # Groups that the user will be added to; if undefined, then none.
        ADDGROUP=""

        # Create user to avoid running server as root
        # 1. create group if not existing
        if ! getent group | grep -q "^$SERVER_GROUP:" ; then
            echo -n "Adding group $SERVER_GROUP.."
            addgroup --quiet --system $SERVER_GROUP 2>/dev/null || true
            echo "..done"
        fi
        # 2. create homedir if not existing
        test -d $SERVER_HOME || mkdir $SERVER_HOME
        # 3. create user if not existing
        if ! getent passwd | grep -q "^$SERVER_USER:"; then
            echo -n "Adding system user $SERVER_USER.."
            adduser --quiet \
                --system \
                --ingroup $SERVER_GROUP \
                --no-create-home \
                --disabled-password \
                $SERVER_USER 2>/dev/null || true
            echo "..done"
        fi
        # … and a bunch of other stuff.
It seems like the postinst script is being called with configure, but not with install, and I am trying to understand why. In /var/log/dpkg.log, I see the lines I would expect:
2012-06-30 13:28:36 configure myapp 9 9
2012-06-30 13:28:36 status unpacked myapp 9
2012-06-30 13:28:36 status half-configured myapp 9
2012-06-30 13:28:43 status installed myapp 9
I checked that /etc/default/myapp does not exist. The file /var/lib/dpkg/info/myapp.postinst exists, and if I run it manually with install as the first parameter, it works as expected.
Why is the postinst script not being run with install? What can I do to debug this further?
I think the example script you copied is simply wrong. postinst is not supposed to be called with any install or upgrade argument, ever.
The authoritative definition of the dpkg format is the Debian Policy Manual. The current version describes postinst in chapter 6 and only lists configure, abort-upgrade, abort-remove, and abort-deconfigure as possible first arguments.
I don't have complete confidence in my answer, because your bad example is still up on debian.org and it's hard to believe such a bug could slip through.
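The argument handling described here can be sketched as a stripped-down dispatcher: only the Policy-defined arguments do real work, so an install branch like the one in the question never fires. (This is an illustrative sketch, not a real maintainer script.)

```shell
# dpkg only ever passes postinst the arguments Debian Policy defines.
handle_postinst() {
    case "$1" in
        configure)
            echo "configure: create users, dirs, etc." ;;
        abort-upgrade|abort-remove|abort-deconfigure)
            echo "rolling back after $1" ;;
        *)
            echo "unexpected argument: $1" ;;
    esac
}
handle_postinst configure   # the normal path during installation
handle_postinst install     # dpkg never passes this; it falls through
```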
I believe the answer provided by Alan Curry is incorrect, at least as of 2015 and beyond.
There must be some fault with the way your package is built, or an error in the postinst file, which is causing your problem.
You can debug your install by adding the -D (debug) option to your command line i.e.:
sudo dpkg -D2 -i yourpackage_name_1.0.0_all.deb
-D2 should sort out this type of issue
For the record, the debug levels are as follows:
Number  Description
1       Generally helpful progress information
2       Invocation and status of maintainer scripts
10      Output for each file processed
100     Lots of output for each file processed
20      Output for each configuration file
200     Lots of output for each configuration file
40      Dependencies and conflicts
400     Lots of dependencies/conflicts output
1000    Lots of drivel about e.g. the dpkg/info dir
2000    Insane amounts of drivel
10000   Trigger activation and processing
20000   Lots of output regarding triggers
40000   Silly amounts of output regarding triggers
The install command calls the configure step, and in my experience the postinst script will always be run. One thing that may trip you up is that the postrm script of the "old" version, if you are upgrading a package, will be run after your current package's preinst script; this can cause havoc if you don't realise what is going on.
From the dpkg man page:
Installation consists of the following steps:
1. Extract the control files of the new package.
2. If another version of the same package was installed before the new installation, execute the prerm script of the old package.
3. Run the preinst script, if provided by the package.
4. Unpack the new files, and at the same time back up the old files, so that if something goes wrong, they can be restored.
5. If another version of the same package was installed before the new installation, execute the postrm script of the old package. Note that this script is executed after the preinst script of the new package, because new files are written at the same time old files are removed.
6. Configure the package.
Configuring consists of the following steps:
1. Unpack the conffiles, and at the same time back up the old conffiles, so that they can be restored if something goes wrong.
2. Run the postinst script, if provided by the package.
This is an old issue that has been resolved, but it seems to me that the accepted solution is not totally correct, and I believe it is necessary to provide information for those who, like me, are having this same problem.
Chapter 6.5 of the Debian Policy Manual details all the parameters with which the preinst and postinst files are called.
At https://wiki.debian.org/MaintainerScripts the installation and uninstallation flow is detailed.
Watch what happens in the following cases:
apt-get install package
- runs preinst install and then postinst configure
apt-get remove package
- runs postrm remove, and the package is left in the "Config Files" state
For the package to actually reach the "not installed" state, you must use:
apt-get purge package
That's the only way preinst install and postinst configure will run again the next time the package is installed.
