How to register a local Julia package in a local registry?

I have a Julia package in my home directory under the name Foo. Foo is a directory with a Project.toml file that describes the package dependencies, name, etc. I want to be able to use this package from another folder in the following manner:
Build the package Foo
Register Foo in a local Julia registry
Run Pkg.add("Foo") from anywhere on the system, e.g. from a script.jl that contains the following:
using Pkg
Pkg.add("Foo")
using Foo
# Now use Foo
Foo.bar()
Here is what I've tried so far.
Navigate to the Baz directory, where I want to run Baz/script.jl
Open the REPL, hit ] and run dev --local PATH_TO_FOO
From the REPL, run using Foo
Foo is now accessible in the current REPL session (or script)
Summary: I have managed to import the package Foo from another directory, Baz.
Basically, I want to be able to write a script.jl that can make use of the local registry instead of this dev --local . method.

I assume that this is for personal use and you are not a system administrator for a large group.
Maintaining a registry is hard work. I would avoid creating a registry if at all possible. If your goal is to make the use of scripts less painful, a lighter solution exists: shared environments.
A shared environment is a regular environment, created in a canonical location. This means that Pkg will be able to locate it by name: you do not have to specify an explicit path.
To set up a shared environment:
give it a name: Pkg.activate("ScriptEnvironment"; shared=true)
populate it: Pkg.add(Pkg.PackageSpec(; path="/path/to/Foo")) (depending on your use case, you can also use Pkg.develop, add registered packages, etc)
that's all!
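Put together, the one-time setup might look like this (the path /path/to/Foo is a placeholder for wherever your package lives):

using Pkg

# Create or activate a shared environment named "ScriptEnvironment".
# Shared environments live under ~/.julia/environments/, which is why
# Pkg can find them by name from anywhere on the system.
Pkg.activate("ScriptEnvironment"; shared=true)

# Add the local package by its path (placeholder shown here).
Pkg.add(Pkg.PackageSpec(; path="/path/to/Foo"))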
Now all your scripts can use the shared environment:
using Pkg
Pkg.activate("ScriptEnvironment"; shared=true)
using Foo
If other sets of scripts require different sets of packages, repeat the procedure with a different shared name. Pkg environments are really cheap so feel free to make liberal use of this approach.
Note: I would discourage beginning scripts with Pkg.add("Foo") because this carries the risk of inadvertently polluting the active environment.

There are some instructions on registry creation and maintenance at https://github.com/HolyLab/HolyLabRegistry.

Related

Overriding chocolateyInstall.ps1 script with Ansible

I would like to override the default PowerShell scripts that come with the win_chocolatey module of Ansible. How do I do that?
In my case, I am trying to override the ChocolateyInstall.ps1 that comes with MsSqlServerManagementStudio2014Express. I would like to pass a few more parameters such as system administrator password and instance names during the silent installation of MsSQLServer.
I have tried giving these additional parameters with the "install_args" and "params" options in the win_chocolatey module call in my Ansible playbook. But there are no handlers written in MsSqlServerManagementStudio2014Express's PowerShell scripts to include them during silent installation.
Package Parameters vs Install Arguments
Install Arguments (--install-arguments option for choco.exe) are completely invisible to the packaging, and they are appended to the current set of silent arguments in the package. One can also override them completely with --override-arguments. In the commercial editions of Chocolatey, you can also pass --install-arguments-sensitive to keep secrets out of logs.
Package Parameters (--package-parameters|--params) are different, can be used with anything related to packaging (not just for the installer), but must also be present in the packaging itself. For commercial editions and secrets, you also have --package-parameters-sensitive.
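For example, both kinds of arguments can be passed from the choco command line. A hedged sketch follows; the package id and flag values are placeholders, not the actual parameters of MsSqlServerManagementStudio2014Express:

# Extra flags handed straight to the underlying installer:
choco install mypackage --install-arguments="'/SAPWD=secret /INSTANCENAME=MYINSTANCE'"

# Parameters that the package's own scripts must parse and handle:
choco install mypackage --package-parameters="'/Password:secret /Instance:MYINSTANCE'"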
Option 1: Community Repository Packages
You would need to ensure that the package is using Install-ChocolateyPackage, Install-ChocolateyInstallPackage, or other built-in methods to know that install arguments can be used with the package. If you need parameters, you will need to work with the maintainers of the package to get those implemented.
Option 2: Use Your Own Packages
If you are using Chocolatey in an organization, you will want to use your own package you store somewhere internally. That guarantees much more reliability and repeatability, something that is instrumental to organizational use of anything.
Plus you can bake installers directly into the package as you are not subject to distribution rights internally, providing an even more reliable experience.
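If you do repackage internally, your own chocolateyInstall.ps1 can be written to honor package parameters. A minimal sketch, assuming Chocolatey's built-in Get-PackageParameters helper; the parameter names, file type, and URL are illustrative:

# chocolateyInstall.ps1 (sketch)
$pp = Get-PackageParameters    # parses --package-parameters into a hashtable

$silentArgs = '/qn'
if ($pp['Instance']) { $silentArgs += " /INSTANCENAME=$($pp['Instance'])" }
if ($pp['Password']) { $silentArgs += " /SAPWD=$($pp['Password'])" }

Install-ChocolateyPackage -PackageName 'mypackage' -FileType 'exe' `
    -SilentArgs $silentArgs -Url 'https://example.com/installer.exe'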
It is especially helpful to read over and understand this when planning for Chocolatey in an organization: https://chocolatey.org/docs/community-packages-disclaimer

Local stack packages

I've got a number of local stack packages, in the usual form of a directory with a stack.yaml file, and PackageName.cabal, and a src directory.
I'd like to have one local stack package depend on another local stack package, preferably without this option being carried with the package if I do ever upload it to Hackage (at which point the local dependency will make no sense).
A directory with symbolic links to all my local stack packages and a global override to search that directory first would be fine, but specifying the required local packages on a per-package basis is also fine, as long as these configuration options don't get carried along if I package it up for Hackage.
How can I achieve this? A link to the appropriate section in the stack docs is fine; perhaps the answer is in there and I just can't find it.
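One common approach, sketched here as a suggestion rather than a confirmed answer: list the dependency's path under packages: in the depending package's stack.yaml. That file is purely local project configuration, so nothing about the path travels with the package when it is uploaded to Hackage (the resolver and paths below are placeholders):

# stack.yaml of the depending package
resolver: lts-18.28
packages:
- .                  # this package
- ../OtherLocalPkg   # the other local stack package it depends on

The dependency still has to appear by name in build-depends of the .cabal file, as any dependency would.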

ReactJS: Storing very simple settings/constants

I am very new to ReactJS and I might be thinking completely wrong. In our react app I make a lot of AJAX calls to the backend. For example, in dev I make calls to http://localhost:3000, in production I (naturally) use another host, which changes depending on the installation.
The hosts are static, set once and never change.
How do I make the host-information manageable in React?
I read about redux/flux etc. to store global variables, but this is overkill for us. We just need to have one string (URL/host-info) that we can replace with another. We can store the info in a file, as a command-line param or whatever. We just need it to be simple.
UPDATE: Turns out that I did not fully understand the build system. As Dan pointed out, we use webpack to package the solution. Using a loader we could swap out our configuration settings in the code. We ended up using a simple string-replacement loader (string-replace-webpack-plugin), since env variables were not suitable for this solution.
What you're describing are usually known as environment variables. You generally maintain a specific set of environment variables for each context your application is developed or run in.
For instance you might have an APP_HOST environment variable which would be set differently at your local machine, than it would at your server.
Most programs that run on the server can read the environment variables directly, but because React apps run in the client's browser, you'll have to make them aware of the appropriate environment variables before they are served.
The easiest way to do this is with envify.
Envify will allow you to reference environment variables from your frontend code.
// app.js
const host = process.env.APP_HOST;
fetch(host);
Envify is a Browserify transform, meaning you'd need to run your code through a command like this.
# define the environment variable for the build
export APP_HOST="localhost:3000"
# build the code
browserify app.js -t envify -o bundle.js
What comes out the other side would be:
// bundle.js
var host = "localhost:3000";
fetch(host);
If you use Webpack, there's a similar alternative in the form of envify-loader.
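Another common Webpack idiom, offered here as an assumption rather than what the asker ultimately used, is the built-in DefinePlugin, which replaces an expression with a literal at build time:

// webpack.config.js (sketch)
const webpack = require('webpack');

module.exports = {
  entry: './app.js',
  output: { filename: 'bundle.js' },
  plugins: [
    // Substitute process.env.APP_HOST with a literal string at build time;
    // the fallback host here is a placeholder.
    new webpack.DefinePlugin({
      'process.env.APP_HOST': JSON.stringify(process.env.APP_HOST || 'localhost:3000'),
    }),
  ],
};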

What is the purpose of the package "provide" and "ifneeded" commands in Tcl?

I want to know: what is the purpose of the package provide and package ifneeded commands in Tcl? Can anybody clarify this?
Thanks in advance.
The package provide command goes in the definition code of a package to declare that a particular package with a particular version has been defined.
Here's a trivial example:
namespace eval ::foo {
    proc bar {} {
        puts "this is the bar procedure in the foo namespace"
    }
}
package provide foobar 1.1
The aim is that a package is a higher-level concept than just “oh, here's a bunch of files that you source”, even though that's quite possibly how it is implemented.
Some people prefer to put the package provide at the top of the script.
The package ifneeded command is used to supply metadata about a package to Tcl before loading the package itself. It goes inside a script named pkgIndex.tcl, usually placed in the same directory as your package implementation, and it says "if you want to use the package with this name and this version, run this script".
The final bit you need to know is that the package discovery code (run when you ask for a package Tcl doesn't yet know about) executes each pkgIndex.tcl it finds in a context where the variable dir is set to the directory containing that pkgIndex.tcl. If you use that convention, you don't need to do anything special to relocate the package, which is really convenient.
An example that might match up with the above code:
package ifneeded foobar 1.1 [list source [file join $dir foobar.tcl]]
There is a library command to generate the pkgIndex.tcl scripts for you — pkg_mkIndex — but I don't use it as it's not exactly difficult to write it by hand. (Note also that just because we've implemented the package with just a single Tcl script this time doesn't mean that it has to be done that way. It could also be many scripts, or shared libraries, or a mix of scripts and shared libraries. The user of the package shouldn't have to care about this.)
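On the consuming side, loading the package then looks like this; a sketch assuming the directory holding pkgIndex.tcl is /path/to/lib (a placeholder):

# Make the package's directory visible to the discovery code.
lappend auto_path /path/to/lib

# This runs the pkgIndex.tcl script, which sources foobar.tcl on demand.
package require foobar 1.1
::foo::bar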
Maybe you just need an example to see how these commands work.
Building reusable libraries is a good chapter in the Tcl tutorial, which explains how package ifneeded, provide, and require fit together:
https://www.tcl.tk/man/tcl8.5/tutorial/Tcl31.html

Rely on PATH or provide an explicit path when using system()

I'm writing a 'C' program that makes several calls to system() to execute other programs. When constructing the command string is it better to explicitly give the full path to the program being called, or should I just give the executable name and let the shell resolve its location using the PATH environment variable?
The programs I'm calling are all part of a single package and I have the path to the installation directory from a preprocessor definition. Giving the explicit path would seem to avoid errors that might occur if multiple installed programs share the same name. However it makes building the command strings a little more complicated, and everything will break if the user moves the programs around after installation.
Is there a widely accepted best practice covering this?
[Clarification]
I'm using autoconf/automake to generate the distribution. The preprocessor definition providing the installation directory is created by the makefile. It reflects the user's choice of the installation directory as specified either on the configure command line or the make command line. I do take the point about using environment variables to specify the location of the binaries, though. It seems like an unneeded pain in the butt to make users rebuild just to change the location of the binaries.
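For reference, the kind of makefile-supplied definition being described usually looks something like this in automake (a sketch; INSTALL_DIR is an illustrative macro name and bindir is the standard autoconf variable):

# Makefile.am
AM_CPPFLAGS = -DINSTALL_DIR='"$(bindir)"'

This captures the directory chosen at configure/make time, which is exactly why a runtime override remains worth having.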
Best practice is never to assume that you know your install directory at build time. Let your users decide where to install and work anyway.
This means that you will need to find out where your programs are located using some other mechanism. Consider using environment variables or command line parameters to allow the user to specify the actual path, if your platform does not provide you with the means to find out where the executables are located. You can use your knowledge of where you are normally installed as a fallback option.
For your actual question, in case you can build the absolute path to your program (using another mechanism than preprocessor directives) - use that. Otherwise, fall back to having the system find out for you.
The best practice is to not presume anything about the system you're installing onto. You can have the best of both worlds if you just let the user choose. Make the command you call an application preference or require paths to be defined in the environment:
PATH_TO_TOOL1=foo
PATH_TO_TOOL2=/usr/bin/bar
You can, of course, just fall back to a default of some kind if the variables aren't defined or the preference isn't set. Writing your application to be more flexible is always the best choice!
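A sketch of that fallback logic in C, using the illustrative variable names above:

#include <stdlib.h>

/* Resolve the command for tool1: prefer the environment override,
 * then fall back to a compiled-in default. */
static const char *tool1_command(void)
{
    const char *p = getenv("PATH_TO_TOOL1");
    return (p != NULL && p[0] != '\0') ? p : "/usr/local/bin/foo";
}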
You should definitely let the user specify the path with an environment variable to the installed binaries. Not all systems are the same and many people will want to put their execs in different places.
The best example I can think of is people doing a local install vs. a system install. If your program is installed in a home directory, that user will have to set an env variable to say where the binaries are copied to.
If you're absolutely sure of the path names, and if they are not "well-known" commands (for example, POSIX shell utilities on Unix are "well-known"), you should specify the pathname, otherwise don't specify the full path, or let the user control it by using an environment variable.
In fact, you may be able to write something like a function such as int my_system(const char *);, which does the prefixing of the path for you. If later you determine that it was a bad idea, it's just a matter of making my_system() identical to system().
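A minimal sketch of such a wrapper, assuming the install directory arrives as a preprocessor definition named INSTALL_DIR (the macro name and its fallback are illustrative):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#ifndef INSTALL_DIR
#define INSTALL_DIR "/usr/local/bin"  /* fallback; normally set by the build */
#endif

/* Run a command line after prefixing the program name with the
 * install directory.  If prefixing turns out to be a bad idea,
 * the body can shrink to a plain system(command) call. */
static int my_system(const char *command)
{
    size_t len = strlen(INSTALL_DIR) + 1 + strlen(command) + 1;
    char *full = malloc(len);
    if (full == NULL)
        return -1;
    snprintf(full, len, "%s/%s", INSTALL_DIR, command);
    int status = system(full);
    free(full);
    return status;
}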
I'm not sure if it's a best practice, but what I do in these cases is I write my C code to extend the PATH environment variable to include the installation directory at the end. Then I just use the PATH. That way, if the user's PATH wants to override where I believe the stuff was installed, it can—but if the software was installed in an out-of-the-way place, I can call it without forcing my users to put the directory on $PATH themselves.
Please note that the extended PATH lasts only as long as the C program runs; I'm not proposing changing the persistent PATH.
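A hedged sketch of that approach on POSIX systems (the install directory is supplied by the caller; error handling kept minimal):

#include <stdio.h>
#include <stdlib.h>

/* Append the install directory to PATH for this process only; children
 * started via system() inherit the modified value.  The user's
 * persistent PATH is untouched. */
static void extend_path(const char *install_dir)
{
    const char *old = getenv("PATH");
    char buf[4096];
    if (old != NULL)
        snprintf(buf, sizeof buf, "%s:%s", old, install_dir);
    else
        snprintf(buf, sizeof buf, "%s", install_dir);
    setenv("PATH", buf, 1);  /* 1 = overwrite the existing value */
}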
