Can I make a long fish array easier to read and maintain?

I am writing a simple script in fish. I need to pass in an array as follows:
set PACKAGES nginx supervisor rabbitmq-server
apt install $PACKAGES
But as the array gets longer it gets harder to read and maintain...
set PACKAGES nginx supervisor rabbitmq-server libsasl2-dev libldap2-dev libssl-dev python3-dev virtualenv
Is there another way to define an array that is easier to read? For example, vertically with comments:
set PACKAGES
nginx
supervisor
rabbitmq-server
# LDAP packages
libsasl2-dev
libldap2-dev
libssl-dev
# Python packages
python3-dev
virtualenv
end

You can escape the newline to continue the current command on the next line (and lines with comments are ignored).
You can also use multiple set invocations, e.g.:
set PACKAGES \
    nginx supervisor rabbitmq-server \
    # Python packages
    python3-dev virtualenv
# LDAP
set PACKAGES $PACKAGES libsasl2-dev libldap2-dev libssl-dev
In current fish git, set has gained "--append"/"-a" and "--prepend"/"-p" options so you don't need to repeat the variable name (the "$PACKAGES" above).
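Once --append is available, the same grouping looks like this (a minimal sketch):
set PACKAGES nginx supervisor rabbitmq-server
# LDAP packages
set -a PACKAGES libsasl2-dev libldap2-dev libssl-dev
# Python packages
set -a PACKAGES python3-dev virtualenv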

Is there a way to replace an element in an array with Bash?

I'm trying to create an array and use Homebrew to install apps. But before I install the app I want to check to see if it's installed. I know it's already there in Brew, but I was looking at something like this:
declare -a applications=(Spotify Discord Franz Rectangle visual-studio-code VLC microsoft-excel)
for i in "${applications[@]}"
do
    # check whether the app is already installed
    if [ -d "/Applications/$i.app" ]; then
        echo " $i is installed"
        appstatus="Installed"
    else
        echo "/Applications/$i.app"
        appstatus=" $i, not installed - installing now"
        brew install --cask "$i"
    fi
    echo "$appstatus"
done
However, what's happening is that the check always fails on VS Code and Excel because the dashes aren't in the app's name in the Applications folder.
Either I was going to create another array with the correct names underneath, or I was wondering if I can parse the array and remove the dashes when checking whether the app is installed.
Hope this makes sense.
To modify your array, replacing dashes with spaces all at once:
applications=( "${applications[@]//-/ }" )
To do it one-at-a-time:
for idx in "${!applications[@]}"; do   # iterate over array indices
    application=${applications[$idx]}  # look up item at index
    application=${application//-/ }    # transform to new value
    applications[$idx]=$application    # store new value
done
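As a quick check of what that substitution produces (using a hypothetical two-element array):
applications=(visual-studio-code microsoft-excel)
applications=( "${applications[@]//-/ }" )
printf '%s\n' "${applications[@]}"
# visual studio code
# microsoft excel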

How do you assign an Array inside a Dockerfile?

I have tried a number of different ways to assign an array inside a RUN command within a Dockerfile. None of them seem to work. I am running on Ubuntu-Slim, with bash as my default shell.
I've tried this (second line below)
RUN addgroup --gid 1000 node \
&& NODE_BUILD_PACKAGES=("binutils-gold" "g++" "gcc" "gnupg" "libgcc-7-dev" "linux-headers-generic" "make" "python3" ) \
...
But it fails with /bin/sh: 1: Syntax error: "(" unexpected.
I also tried assigning it as an ENV variable, as in:
ENV NODE_BUILD_PACKAGES=("binutils-gold" "g++" "gcc" "gnupg" "libgcc-7-dev" "linux-headers-generic" "make" "python3" )
but that fails as well.
Assigning and using arrays in Bash is completely supported. Yet, it appears that I can't use that feature of Bash when running in a Dockerfile. Can someone confirm/deny that you can assign array variables inside of shell commands in Dockerfile RUN syntax (or ENV variable syntax)?
The POSIX shell specification does not have arrays. Even if you're using non-standard shells like GNU bash, environment variables are always simple strings and never hold arrays either.
The default shell in Docker is usually /bin/sh, which should conform to the POSIX spec, not bash. Alpine-based images don't have bash at all unless you go out of your way to install it. I'd generally recommend trying to stick to the POSIX syntax whenever possible.
A typical Dockerfile is fairly straightforward; it doesn't have a lot of parts that get reused multiple times and most of the things you can specify in a Dockerfile you don't need to be user-configurable. So for a list of OS packages, for example, I'd just list them out in a RUN command and not bother trying to package them into a variable.
RUN apt-get update \
 && DEBIAN_FRONTEND=noninteractive \
    apt-get install --no-install-recommends --assume-yes \
      binutils-gold \
      g++ \
      gcc \
      ...
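If you really want the list in a variable, remember that under /bin/sh it can only be a plain string; a space-separated string with unquoted expansion is a minimal sketch of that (variable name taken from the question, package list shortened):
RUN NODE_BUILD_PACKAGES="binutils-gold g++ gcc gnupg make python3" \
 && apt-get update \
 && apt-get install --no-install-recommends --assume-yes $NODE_BUILD_PACKAGES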
Other things I see in Stack Overflow questions that do not need to be parameterized include the container path (set it once as the WORKDIR and refer to . thereafter), the process's port (needs to be a fixed number for the second docker run -p part), and user IDs (can be overridden with docker run -u, and you don't usually want to build an image that can only run on one system).
# not an ENV or ARG
WORKDIR /app
# copies into the WORKDIR; no need to repeat the path
COPY . .
# no specific uid
RUN adduser node
# a fixed port number
EXPOSE 3000
# also use a fixed path for potential mount points
RUN mkdir /data
You can use an array like NODE_BUILD_PACKAGES in RUN if you define SHELL:
SHELL ["/bin/bash", "-c"]
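With that in place, a RUN instruction along these lines should work (a sketch with the question's package list shortened; bash must exist in the base image):
RUN NODE_BUILD_PACKAGES=("binutils-gold" "g++" "gcc" "gnupg" "make" "python3") \
 && apt-get update \
 && apt-get install --no-install-recommends --assume-yes "${NODE_BUILD_PACKAGES[@]}"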

Optimal usage of codecov in a monorepo context with separate flags for each package

I was just wondering what's the best way to configure Codecov for a monorepo setting. For example, let's say I have packages A and B under my monorepo. The way I'm currently using Codecov is with the GitHub action codecov/codecov-action@v1, using multiple uses statements in my GitHub workflow YAML file like the following:
- uses: codecov/codecov-action@v1
  with:
    files: ./packages/A/coverage/lcov.info
    flags: flag_a
    name: A
- uses: codecov/codecov-action@v1
  with:
    files: ./packages/B/coverage/lcov.info
    flags: flag_b
    name: B
I know it's possible to use a comma-separated value to upload multiple files, but I have to set a separate flag for each package, and doing it that way doesn't seem to work.
Thank you.
If anyone wants to know my solution, here's what I came up with.
I ended up replacing the GitHub action with my own bash script.
Final code:
#!/usr/bin/env bash
codecov_file="${GITHUB_WORKSPACE}/scripts/codecov.sh"
curl -s https://codecov.io/bash > "$codecov_file"
chmod +x "$codecov_file"
cd "${GITHUB_WORKSPACE}/packages"
for dir in */
do
    package="${dir/\//}"        # strip the trailing slash
    if [ -d "$package/coverage" ]
    then
        file="$PWD/$package/coverage/lcov.info"
        flag="${package//-/_}"  # codecov flag names cannot contain hyphens
        "$codecov_file" -f "$file" -F "$flag" -v -t "$CODECOV_TOKEN"
    fi
done
This is what the above bash script does:
Download the bash uploader script from Codecov.
Move to the packages directory where all the packages are located, and go through all the first-level directories.
Derive the package name by stripping the trailing slash.
Only enter a package if it contains a coverage directory, since only those packages have been tested.
Build the file and flag variables (replacing hyphens with underscores, since Codecov doesn't support hyphens in flag names).
Execute the downloaded Codecov script, passing the file and flag variables as arguments.
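For reference, the workflow step that replaced the codecov action could look something like this (the script path and secret name here are assumptions on my part):
- name: Upload coverage to Codecov
  run: bash ./scripts/upload-coverage.sh
  env:
    CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}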

tried to add a new update.secondary hook to my repos in gitolite and now git push fails

remote: Undefined subroutine &main::repo_rights called at hooks/update line 41.
remote: error: hook declined to update
I have removed the update hook from all of my repos in order to get around this, but I know that they are now wide open.
I ran gl-setup, and I may have mixed versions of gitolite on my machine. I am afraid that I ran the gl-setup from a version that is different than the one I am running currently. I am not sure how to tell. Please help. :-(
Update: for a more recent version of Gitolite (namely V3.x or later), the official documentation is "adding your own update hooks", and it uses VREFs (virtual refs).
add this line in the rc file, within the %RC block, if it's not already present, or uncomment it if it's already present and commented out:
LOCAL_CODE => "$ENV{HOME}/local",
copy your update hook to a subdirectory called VREF under this directory, giving it a suitable name (let's say "crlf"):
# log on to gitolite hosting user on the server, then:
cd $HOME
mkdir -p local/VREF
cp your-crlf-update-hook local/VREF/crlf
chmod +x local/VREF/crlf
in your gitolite-admin clone, edit conf/gitolite.conf and add lines like this:
- VREF/crlf = @all
to each repo that should have that "update" hook.
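For example, a single-repo stanza might then look like this (repo name and access rule are placeholders):
repo foo
    RW+             =   alice
    -   VREF/crlf   =   @all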
Alternatively, you can simply add this at the end of the gitolite.conf file:
repo @all
    - VREF/crlf = @all
Either way, add/commit/push the change to the gitolite-admin repo.

Mercurial, stop versioning cache directory but keep directory

I have a CakePHP project under Mercurial version control. Right now all the files in the app/tmp directory are being versioned, which are always changing.
I do not want to version control these files.
I know I can stop by running hg forget app/tmp/*
But this will also forget the file structure, which I want to keep.
Now I know that Mercurial doesn't version directories, just files, but the CakePHP folks were also smart enough to put an empty file called empty in every empty directory (I am guessing for this reason).
So what I want to do is tell Mercurial to forget every file under app/tmp except files whose name is exactly empty.
What would the command be for this?
Well, if nothing else works, you can always just ask Mercurial to forget everything, and then revert empty before committing.
Here's how I reproduced it. First, create the initial repo:
hg init
md app
md app\tmp
echo a>app\empty
echo a>app\tmp\empty
hg commit -m "initial" -A
Then add some files we later want to get rid of:
echo a >app\tmp\test1.txt
echo a >app\tmp\test2.txt
hg commit -m "adding" -A
Then forget the files we don't want:
hg forget app\tmp\*
hg status <-- will show all 3 files
hg revert app\tmp\empty
hg status <-- now empty is gone
echo glob:app/tmp>.hgignore
hg commit -m "ignored" -A
Note that all .hgignore does is prevent Mercurial from discovering new files during addremove or commit -A; if you have explicitly tracked files that match your ignore filter, Mercurial will still track changes to those files.
In other words, even though I asked Mercurial to ignore app/tmp above, the file empty inside it will not be ignored or removed, since I have explicitly asked Mercurial to track it.
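You can see that by modifying the tracked file after the ignore rule is in place (a hypothetical extra step, continuing the session above):
echo b>app\tmp\empty
hg status <-- still shows M app\tmp\empty, despite the ignore rule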
At least theoretically (I don't have time to try it right now), pattern matching should work with the hg forget command. So, you could do something like hg forget -X empty while in the directory (-X means "exclude").
You may want to consider using .hgignore, of course.
Since you only need to do it once I'd just do this:
find app/tmp -type f | grep -v empty | xargs hg forget
hg commit
from then on, just put this in your .hgignore:
^app/tmp
Mercurial has built-in support for globbing and regexes, as explained in the relevant chapter of the Mercurial book. The Python regex implementation is used.
This should work for you:
hg forget "re:app/tmp/.*(?<!/empty)$"
