Using nix in a continuous delivery workflow

Can nix be used in a continuous-delivery workflow?
We're using Semaphore as our continuous integration service, and now I'm looking into building packages upon a successful build. For this I'm considering using Nix.
I don't know what would be the right way of setting up a continuous delivery pipeline with this package manager. It seems that such an automated process would involve:
Making a branch of the nixpkgs repository (in the CI server).
Updating the rev field of fetchFromGitHub.
(automatically) submitting a pull-request.
But I don't know if this makes sense, and I'm also concerned that the continuous-delivery process would involve a manual step (having a human approve the pull request).

Can nix be used in a continuous-delivery workflow?
Yes. It's typically done with Hydra, a CI system built with Nix, but it should also be possible with Semaphore.
Semaphore CI provides language-specific build environments, but... it's running Ubuntu, so theoretically you can do something like this:
Install Nix as if it were a dependency. See this article.
Add your Nix package, which I suppose you can do with Git. You don't really need to clone Nixpkgs.
Use nix-build to build your package. This will create a result symbolic link to the build output.
Deploy using git-deploy.
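Put together, a minimal Semaphore build script might look like the sketch below (a single-user Nix install and a default.nix at the repository root are assumed):

curl -L https://nixos.org/nix/install | sh      # install Nix as if it were a dependency
. "$HOME/.nix-profile/etc/profile.d/nix.sh"     # load Nix into the current shell
nix-build                                       # builds default.nix and creates ./result
ls -l result/                                   # the build output you would hand to git-deploy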
If you structure your package expression like this, you can call it directly from nix-build, because you won't have to provide the package dependencies as arguments:
{ pkgs ? import <nixpkgs> {} }:
let
  stdenv = pkgs.stdenv;
in
stdenv.mkDerivation {
  # pname, version, and src are placeholders for your package
  pname = "myapp";
  version = "0.1.0";
  src = ./.;
  # install the build output; adjust to your project's actual build steps
  installPhase = ''
    mkdir -p $out
    cp -r . $out
  '';
}
Optimization
Installing Nix for every build is wasteful, but perhaps you can cache the Nix store. See this article.
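On Semaphore 2.x, jobs ship with a cache CLI (cache store / cache restore). Assuming it can hold the Nix store (ownership and permissions under /nix may complicate this), a hedged sketch; the cache key and the install helper are made up for illustration:

cache restore nix-store                # bring back /nix from a previous run, if present
sh ./ci/install-nix.sh                 # hypothetical helper: installs Nix only when missing
nix-build
cache store nix-store /nix             # save the populated store for the next build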

Related

Is there an OCaml equivalent of npx?

When using node.js we have npx - a tool that makes it easy for users to install and manage applications (especially CLI tools) rather than libraries. For Python there is pipx. Is there a tool for OCaml that offers the same functionality?
pipx run "$APP" --example foo (install & update & run, all in one)
Apps are installed into isolated environments so that they do not step on each other's toes. (e.g. incompatible dependency requirements)
I know there is one general-purpose option - Docker, but is there anything tailored to OCaml?
Edit: the main reason I dislike Docker is that it alters my view of the filesystem. A CLI tool might accept paths via the arguments, the environment and config files and then open them. Reading relative to CWD is also common. If a CLI tool is distributed via Docker, the user has to be aware of the altered filesystem view, which adds cognitive overhead.
If you browse the list of OPAM packages, you will see not only libraries, but also applications. A few examples from a quick perusal:
dune
dumpast
utop
xe
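For isolation comparable to pipx, each opam switch is its own sandboxed environment, so CLI tools can live in a dedicated switch. A minimal sketch (the switch name and compiler version are arbitrary):

opam switch create tools 4.14.0              # a fresh switch, isolated from your projects
eval $(opam env --switch=tools --set-switch) # point the current shell at that switch
opam install utop                            # install the application plus its dependencies
utop                                         # run it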

Is using nohup in production a bad practice? (how to run a server forever)

I have a Linux Droplet on DigitalOcean, and I want to run some services on it – like a Spring Boot web app and React.js.
Clearly I need the servers to run all the time, without depending on the terminal being open (I'm using PuTTY), and I am planning to do that by using nohup.
I saw other methods, like the ones for Spring Boot (see "3. Installing Spring Boot Applications") and for npm, but for now I prefer nohup since it's easier and simpler.
Is there a problem with that approach? Is it considered "bad practice" for production?
(And if so, what is considered good practice?)
Edit
Now, seeing that nohup does not keep React running after the PuTTY console is closed, I also found this idea for deploying React on nginx (the DigitalOcean droplet runs nginx).
There's nothing wrong with it, but you would still need to create some sort of init script to start your app on boot and stop it on shutdown.
So on a Linux system you would typically want to use systemd unit files for this, and have the init system handle the lifecycle of your server application. The reference guide mentions it here, or refer to this as a more complete example.
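As a sketch (the service name, user, and paths are placeholders), a minimal unit file for a Spring Boot jar can be written and enabled straight from the shell:

sudo tee /etc/systemd/system/myapp.service > /dev/null <<'EOF'
[Unit]
Description=My Spring Boot application
After=network.target

[Service]
User=myapp
ExecStart=/usr/bin/java -jar /opt/myapp/app.jar
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
sudo systemctl daemon-reload
sudo systemctl enable --now myapp      # starts now and on every boot, survives logout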

Julia package add (zip master file from github)

I am trying to manually install/add the *.jl-master.zip files. I am doing this because I have a computer without internet access, but I don't know how to do an offline installation.
The version of Julia is 1.3.0 and the OS is Windows 10.
I tried this:
Instruction I followed
but it does not work for me.
Installing Julia packages offline is very difficult because of the way binary dependencies work.
You have basically two options:
buying JuliaTeam/JuliaPro from Julia Computing (maybe someone who is using it will see this thread and can share their experience?)
hacking
Regarding the second option, the best bet is to install all required packages on a different machine that has internet access, and then copy the .julia folder to your offline machine (or, depending on your configuration, the folder referenced by the JULIA_DEPOT_PATH environment variable).
However, in most cases you will need to rebuild several packages. The problem is that Julia packages have several binary dependencies that come from different sources. This problem has been noted by the Julia community and is being addressed by the Julia Artifacts mechanism.
Today the most common approach is to manually edit the deps/build.jl file of each Julia package that downloads binary resources in its build process, and make the build code point to files in your local repository. Once done, you can rebuild the package offline by running using Pkg; Pkg.build("PackageName").
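A sketch of that workflow from the shell ("PackageName" stands for whatever you need; on Windows, set JULIA_DEPOT_PATH via the system environment settings instead of export):

# on the machine with internet access:
julia -e 'using Pkg; Pkg.add("PackageName")'
# copy ~/.julia to the offline machine (USB drive, etc.), then there:
export JULIA_DEPOT_PATH="$HOME/.julia"          # or wherever you copied the depot
julia -e 'using Pkg; Pkg.build("PackageName")'  # rebuild binary deps against local files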

Simple Tensorflow with Custom Packages on Google Cloud

The task: Run a tensorflow train.py script I wrote in the cloud with at least 32GB of memory.
Requirements: The script has some dependencies like numpy, scipy, and mkt. I need to be able to install these. I just want a no-nonsense, SSH-shell-like experience. I want to put all my files, including the training data, in a directory, pip install the packages if necessary, then just hit python train.py and let it run. I'm not looking to run a web app or to have Google's machine learning platform do it for me.
All the tutorials around seem needlessly complicated, like they're meant for scaled deployments with http requests and all that. I'm looking for a simple way to run code on a server since my computer is too weak for machine learning.
Don't use AppEngine -- use Compute Engine instead. Almost the same thing, but very simple and you are completely in control of what you run, what you install etc.
Simple steps that should work for you:
-Create a Compute Engine instance
-Choose an operating system (Ubuntu xx, but you can choose others instead)
-Choose how many CPUs and how much memory you want (select Customize in order to set the CPU/memory ratio yourself rather than getting the default options)
-Enable HTTP/HTTPS in order to be able to use TensorBoard later
-Once created, SSH into the machine. Python is already pre-installed (2.7 by default, but 3.x is also available via the python3 alias)
-Install TensorFlow, NumPy, Pandas, and whatever else you want with plain pip
-You can also install Bazel if you want to build TensorFlow from source and speed up the CPU operations
-Install gcsfuse if you want to copy stuff quickly from Cloud Storage buckets
-Use tmux if you want to run several TensorFlow sessions in parallel (i.e. to try different hyperparameters, etc.)
This is all very clean and simple and works really well. Don't forget to shut the instance down when you're finished. You can also create a preemptible instance to make it super-cheap (it can be shut down at any time without warning, but that happens rarely).
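If you prefer the command line to the web console, a rough equivalent with the gcloud CLI (the instance name, sizes, and image are placeholders; adjust to your project):

gcloud compute instances create ml-box \
    --custom-cpu=8 --custom-memory=32GB \
    --image-family=ubuntu-2004-lts --image-project=ubuntu-os-cloud
gcloud compute ssh ml-box              # drops you into a plain SSH shell
# then, on the instance:
pip install tensorflow numpy scipy
tmux new -s train                      # keeps the session alive if SSH drops
python train.py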

Passing JVM opts to maven plugin

I know that plugins like surefire have an "argLine" configuration parameter which they pass along to the JVM where the specific goals for the plugin are run. As I understand it, by default, Maven plugins are forked and run in a separate JVM (please correct me if I am wrong).
I am trying to figure out how to pass VM arguments to the jibx Maven plugin, but I don't know if there is an easier/declarative way (or a Util class) through which I can configure it to do this. In surefire, there are utility classes in booterclient that seem to handle this, but such functionality seems like it should be universal across plugins, right? Hence I am wondering if there might be some support from Mojo/Plexus for adding this easily without writing a bunch of plumbing code. Again, please correct me if my understanding is incorrect.
The actual Maven plugin classes are run in the same JVM as Maven.
Where a Maven plugin is wrapping a separate tool, it depends on how the Maven plugin was written whether it forks a process or not.
For example, the Cobertura Maven Plugin forks a process to do the Cobertura work. There is no technical reason for this fork; in the case of Cobertura, the forking of the JVM is to work around the GPL licensing of the Cobertura tool itself.
As you noticed, the Surefire plugin usually forks a process for running the unit tests. It does not technically need to fork (see forkMode=never); however, there are good reasons that require forking, due to the poor isolation of some parts of the JVM, e.g. system properties being global.
Looking at the plugin in your question, you can see that it just invokes JiBX's main method directly. In other words, it is not forking a JVM at all. If there are JVM options that you want, you will need to use the MAVEN_OPTS environment variable to specify them (with the side effect that they are global for Maven, and if you forget to specify them, things will not work as you expect).
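For example, from the shell (the property name is made up for illustration; jibx:bind is assumed to be the goal you run, so check the plugin's documentation if yours differs):

MAVEN_OPTS="-Xmx1024m -Dsome.jibx.property=value" mvn jibx:bind   # applies to the whole Maven JVM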
I think that to fix this properly, you should suggest a patch to the plugin so that it forks and accepts JVM options for the forked JVM.
