Unable to run custom pre-commit githook in WSL2

Firstly, thanks for any and all responses, and I'm sorry if I have not included all the relevant information.
I am trying to configure a pre-commit githook (lint, prettify and test) without adding husky and lint-staged as dependencies, but I am not having much luck.
As per the instructions included in the resources below, I have added a post-install script which reconfigures the hooksPath (see below).
"postinstall": "git config core.hooksPath ./git-hooks"
The problem is that STAGED_FILES holds the relative path of each staged file as its path reference. Because I am running Ubuntu on WSL2, this does not work (for reasons I am hoping someone can explain to me). Running git-hooks/pre-commit from the CLI results in [error] No files matching the pattern were found: "LIST_OF_STAGED_FILES", where LIST_OF_STAGED_FILES is a single string, e.g. "src/index.js src/anotherFile.js ..."
./git-hooks/pre-commit
#!/bin/sh
STAGED_FILES=$(git diff --cached --name-only --diff-filter=ACMR | sed 's| |\\ |g')

# run linter on staged files
echo "Running Linter..⚒️⚒️⚒️"
./node_modules/.bin/eslint $STAGED_FILES --quiet --fix
LINTER_EXIT_CODE=$?

# run Prettier on staged files
echo "Running Prettier..✨✨✨"
./node_modules/.bin/prettier $STAGED_FILES --ignore-unknown --write

# add files auto-fixed by the linter and prettier
git add -f $STAGED_FILES

# check linter exit code
if [ $LINTER_EXIT_CODE -ne 0 ]; then
    echo "No, no, no! fix those lint errors first..😠"
    exit 1
else
    echo "lint all good..👍"
fi

# run tests related to staged files
echo "Running Tests"
./node_modules/.bin/jest --bail --findRelatedTests $STAGED_FILES --passWithNoTests
JEST_EXIT_CODE=$?

# check jest exit code
if [ $JEST_EXIT_CODE -ne 0 ]; then
    echo "Please you can do better than this..🙏🙏🙏"
    exit 1
else
    echo "test all good..👍"
fi

# return 0-exit code
echo "🎉 you are a rockstar..🔥🔥🔥"
exit 0
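For what it's worth, a hedged sketch of one way to sidestep the pattern error: feed the staged files to each tool NUL-delimited via xargs, so the tools receive one real argument per file instead of a single escaped string (this assumes the hook runs from the repository root, which git does by default):

# sketch: NUL-delimited staged-file handling, one argument per file
git diff --cached --name-only --diff-filter=ACMR -z \
    | xargs -0 -r ./node_modules/.bin/prettier --ignore-unknown --write

The same pipe works for the eslint and jest calls; whether it also cures the WSL2-specific behavior depends on where the repository actually lives.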
related information
the project files are located on a Linux distro network drive/partition
related resources
https://www.antstack.io/blog/adding-git-hooks-to-your-project/
https://dev.to/krzysztofkaczy9/do-you-really-need-husky-247b

Related

Optimal usage of codecov in a monorepo context with separate flags for each package

I was just wondering what's the best way to configure codecov for a monorepo setting. For example, let's say I have packages A and B under my monorepo. The way I'm currently using codecov is via the GitHub action codecov/codecov-action@v1, with multiple uses statements in my GitHub workflow YAML file, like the following:
- uses: codecov/codecov-action@v1
  with:
    files: ./packages/A/coverage/lcov.info
    flags: flag_a
    name: A
- uses: codecov/codecov-action@v1
  with:
    files: ./packages/B/coverage/lcov.info
    flags: flag_b
    name: B
I know it's possible to use a comma-separated value to upload multiple files, but I have to set a separate flag for each package, and doing it that way doesn't seem to work.
Thank you.
If anyone wants to know my solution, here's what I came up with.
I ended up replacing the GitHub action with my own bash script.
final code
#!/usr/bin/env bash
codecov_file="${GITHUB_WORKSPACE}/scripts/codecov.sh"
curl -s https://codecov.io/bash > "$codecov_file"
chmod +x "$codecov_file"
cd "${GITHUB_WORKSPACE}/packages"
for dir in */
do
    package="${dir/\//}"
    if [ -d "$package/coverage" ]
    then
        file="$PWD/$package/coverage/lcov.info"
        # use // so every hyphen is replaced, not just the first
        flag="${package//-/_}"
        "$codecov_file" -f "$file" -F "$flag" -v -t "$CODECOV_TOKEN"
    fi
done
this is what the above bash script does
Downloads the bash uploader script from Codecov.
Moves into the packages directory, where all the packages are located, and iterates over the first-level directories.
Derives the package name by stripping the trailing slash.
Enters a package only if it contains a coverage directory, since only those packages have been tested.
Builds the file and flag variables (replacing hyphens with underscores, as Codecov doesn't support hyphens in flag names).
Executes the downloaded Codecov script, passing the file and flag as arguments.
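To sanity-check the script outside CI, a hedged local dry run might look like this (the wrapper's file name scripts/upload-coverage.sh is an assumption, and GITHUB_WORKSPACE is normally set by the Actions runner):

# simulate the variables the Actions runner would provide
export GITHUB_WORKSPACE="$PWD"
export CODECOV_TOKEN="<your-upload-token>"
bash scripts/upload-coverage.sh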

Use environment variables in dockerized react app

The goal I am trying to achieve is to build a docker image (with a react app within) that is using environment variables from the host.
Planned workflow:
Build the docker image locally
Upload the docker image
Call command docker-compose up
I want the environment variable REACT_APP_SOME_ENV_VARIABLE of the system (where the image is hosted) to be usable by the react app.
Current solution:
// App.js
function App() {
  return (
    <p>SOME_ENV_VARIABLE = {process.env.REACT_APP_SOME_ENV_VARIABLE}</p>
  );
}
# Dockerfile
FROM node:13.12.0-alpine as build-step
# Install the app
RUN mkdir /app
WORKDIR /app
COPY package.json /app
RUN npm install --silent
# Build the app
COPY . /app
RUN npm run-script build
# Create nginx server and copy build there
FROM nginx:1.19-alpine
COPY --from=build-step /app/build /usr/share/nginx/html
# docker-compose.yml
version: '3.5'
services:
  react-env:
    image: react-env
    ports:
      - 80:80/tcp
    environment:
      - REACT_APP_SOME_ENV_VARIABLE=FOO
What am I doing wrong and how do I fix it?
This was solved by using an NGINX docker image: the compiled React production code is injected into the NGINX html folder, and the docker-entrypoint.sh file is modified.
FROM nginx:1.19-alpine
COPY --from=build-step /app/build /usr/share/nginx/html
COPY ./docker/docker-entrypoint.sh /docker-entrypoint.sh
Then, in that file, add the following code at the end of the original script:
#!/bin/sh
# vim:sw=4:ts=4:et

set -e

if [ -z "${NGINX_ENTRYPOINT_QUIET_LOGS:-}" ]; then
    exec 3>&1
else
    exec 3>/dev/null
fi

if [ "$1" = "nginx" -o "$1" = "nginx-debug" ]; then
    if /usr/bin/find "/docker-entrypoint.d/" -mindepth 1 -maxdepth 1 -type f -print -quit 2>/dev/null | read v; then
        echo >&3 "$0: /docker-entrypoint.d/ is not empty, will attempt to perform configuration"
        echo >&3 "$0: Looking for shell scripts in /docker-entrypoint.d/"
        find "/docker-entrypoint.d/" -follow -type f -print | sort -n | while read -r f; do
            case "$f" in
                *.sh)
                    if [ -x "$f" ]; then
                        echo >&3 "$0: Launching $f";
                        "$f"
                    else
                        # warn on shell scripts without exec bit
                        echo >&3 "$0: Ignoring $f, not executable";
                    fi
                    ;;
                *) echo >&3 "$0: Ignoring $f";;
            esac
        done
        echo >&3 "$0: Configuration complete; ready for start up"
    else
        echo >&3 "$0: No files found in /docker-entrypoint.d/, skipping configuration"
    fi
fi

# Set up endpoint for env retrieval
echo "window._env_ = {" > /usr/share/nginx/html/env_config.js
# Collect environment variables for react (plain assignment, no eval needed)
environment_variables="$(env | grep 'REACT_APP.*=')"
# Loop over variables
env | grep 'REACT_APP.*=' | while read -r line;
do
    printf "%s',\n" "$line" | sed "s/=/:'/" >> /usr/share/nginx/html/env_config.js
    # Notify the user
    printf "Env variable %s' was injected into React App. \n" "$line" | sed "0,/=/{s//:'/}"
done
# End the object creation
echo "}" >> /usr/share/nginx/html/env_config.js
echo "Environment Variable Injection Complete."
exec "$@"
Functionality:
This finds all environment variables passed to the docker container running the frontend, extracts the ones starting with REACT_APP, and adds them to a file named env_config.js.
All you need to do in the react app is to load that script file, then access the environment variables using window._env_.<property>.
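A quick hedged way to verify the injection, assuming the compose setup from the question (the react-env service name comes from docker-compose.yml; adjust if yours differs):

docker-compose up -d
docker-compose exec react-env cat /usr/share/nginx/html/env_config.js
# expected shape, given REACT_APP_SOME_ENV_VARIABLE=FOO:
# window._env_ = {
# REACT_APP_SOME_ENV_VARIABLE:'FOO',
# }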
DISCLAIMER
Environment variables injected with this method are fully readable by anyone using the site. This is not a secure method for sensitive information. Only use it for things such as "where is the backend api endpoint" or other non-sensitive information that can be extracted just as easily.
In your approach, the environment variables are injected when the container starts; by that time your app is already built and the docker image created, and you also cannot access process.env on the client side. Therefore, to access them on the client side, we have to do the steps below.
You must be using webpack in your React app for bundling and other stuff.
So in your webpack.config.js, declare your environment variable REACT_APP_SOME_ENV_VARIABLE using the DefinePlugin, which declares it as a global variable for the app.
Your webpack config should look something like this:
const path = require("path");
const webpack = require("webpack");

module.exports = {
  target: "web",
  performance: {
    hints: false,
  },
  node: {
    fs: "empty"
  },
  entry: "./src/index.js",
  output: {
    path: path.join(__dirname, "/build"),
    filename: "[name].[contenthash].js"
  },
  module: {
    rules: [
      // your rules
    ]
  },
  plugins: [
    new webpack.DefinePlugin({
      "envVariable": JSON.stringify(process.env.REACT_APP_SOME_ENV_VARIABLE),
    }),
  ],
};
And in your App, you can use the variable like this
// App.js
function App() {
  return (
    <p>SOME_ENV_VARIABLE = {envVariable}</p>
  );
}
NOTE: Make sure your environment variables are injected into the docker build before the RUN npm run-script build command runs.
For that, declare your environment variables in the Dockerfile using ENV before the RUN npm run-script build step.
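A minimal sketch of that wiring, assuming the value is passed at build time (the ARG/ENV names mirror the variable used above):

# In the Dockerfile, before the `RUN npm run-script build` step:
#   ARG REACT_APP_SOME_ENV_VARIABLE
#   ENV REACT_APP_SOME_ENV_VARIABLE=$REACT_APP_SOME_ENV_VARIABLE
docker build --build-arg REACT_APP_SOME_ENV_VARIABLE=FOO -t react-env .

Keep in mind the value is baked into the bundle at build time, so changing it requires a rebuild.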

How to merge duplicate entries produced by a for loop

Following my previous question, which got closed: basically I have a script that checks the availability of packages on target servers; the target servers and the packages are stored in arrays.
declare -a prog=("gdebi" "firefox" "chromium-browser" "thunar")
declare -a snap=("beer2" "beer3")
# checkvar=$(
for f in "${prog[@]}"; do
    for connect in "${snap[@]}"; do
        ssh lg@"$connect" /bin/bash <<- EOF
if dpkg --get-selections | grep -qE "(^|\s)$f(\$|\s)"; then
    status="[INSTALLED] [$connect]"
else
    status=""
fi
printf '%s %s\n' "$f" "\$status"
EOF
    done
done
With the help of fellow members here, I've made several fixes to the original script, and it ran pretty well, except for one problem: the output contains duplicate entries.
gdebi [INSTALLED] [beer2]
gdebi
firefox [INSTALLED] [beer2]
firefox [INSTALLED] [beer3]
chromium-browser [INSTALLED] [beer2]
chromium-browser [INSTALLED] [beer3]
thunar
thunar
I know this is normal behavior: the snap array holds multiple servers, so ssh visits each of the two servers in turn.
Considering that the script checks the same packages on both servers, I want the output to be merged.
If beer2 has the firefox package but beer3 doesn't:
firefox [INSTALLED] [beer2]
If beer3 has the firefox package but beer2 doesn't:
firefox [INSTALLED] [beer3]
If both beer2 and beer3 have the package:
firefox [INSTALLED] [beer2, beer3]
or
firefox [INSTALLED] [beer2] [beer3]
If neither beer2 nor beer3 has the package, the line is printed without extra information:
firefox
It sounds like an easy task, but for the love of god I can't figure out how to achieve it. Here's a list of things I have tried:
Manipulating the for loops.
Putting a return value (exit code) after one successful loop.
Nested ifs.
None of the above seems to work, and I haven't tried changing/manipulating the returned string, as I'm not really experienced with text-processing tools such as awk, sed, or tr.
Can anyone show how it's done? It would really mean the world to me.
Pure Bash 4+ solution using an associative array to store the hosts each program is installed on:
#!/usr/bin/env bash

declare -A hosts_with_package=(["gdebi"]="" ["firefox"]="" ["chromium-browser"]="" ["thunar"]="")
declare -a hosts=("beer2" "beer3")

# Collect installed status
# Iterate all hosts
for host in "${hosts[@]}"; do
    # Read the output of dpkg --get-selections for the searched packages
    while IFS=$' \t' read -r package status; do
        # Test whether package is installed on host
        if [ "$status" = "install" ]; then
            # If no host listed for package, create first entry
            if [ -z "${hosts_with_package[$package]}" ]; then
                # Record the first host having the package installed
                hosts_with_package["$package"]="$host"
            else
                # Additional hosts are concatenated as CSV
                hosts_with_package["$package"]="${hosts_with_package[$package]}, $host"
            fi
        fi
    # Feed the whole loop with the output of dpkg --get-selections for the package names
    # Package names are the keys of the hosts_with_package array
    done < <(ssh "lg@$host" dpkg --get-selections "${!hosts_with_package[@]}")
done

# Output results
# Iterate the package name keys
for package in "${!hosts_with_package[@]}"; do
    # Print package name without newline
    printf '%s' "$package"
    # If package is installed on some hosts
    if [ -n "${hosts_with_package[$package]}" ]; then
        # Continue the line with installed hosts
        printf ' [INSTALLED] [%s]' "${hosts_with_package[$package]}"
    fi
    # End with a newline
    echo
done
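For the sample data in the question (gdebi on beer2 only, firefox and chromium-browser on both, thunar on neither), the merged output should look like the following; note that key order may vary, since Bash associative arrays are unordered:

chromium-browser [INSTALLED] [beer2, beer3]
firefox [INSTALLED] [beer2, beer3]
gdebi [INSTALLED] [beer2]
thunar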
Instead of making several ssh connections in nested loops, consider this change:
prog=( mysql-server apache2 php ufw )
snap=( localhost )

for connect in "${snap[@]}"; do
    ssh "$connect" "
        progs=( ${prog[@]} )
        for prog in \${progs[@]}; do
            dpkg -l | grep -q \$prog && echo \"\$prog [INSTALLED]\" || echo \"\$prog\"
        done
    "
done
Based on @Ivan's answer:
#!/bin/bash

prog=( "gdebi" "firefox" "chromium-browser" "thunar" )
snap=( "beer2" "beer3" )

# First, retrieve the list of installed programs for each host
for connect in "${snap[@]}"; do
    ssh lg@"$connect" /bin/bash >/tmp/installed.${connect} <<- EOF
progs=( "${prog[@]}" )
for prog in \${progs[@]}; do
    dpkg --get-selections | awk -v pkg=\$prog '\$1 == pkg && \$NF ~ /install/ {print \$1}'
done
EOF
done

# Filter the previous results to format the output as you need
awk '{
    f = FILENAME;
    gsub(/.*\./, "", f);
    a[$1] = a[$1] "," f
}
END {
    for (i in a)
        print i ":[" substr(a[i], 2) "]"
}' /tmp/installed.*

rm /tmp/installed.*
Example of output:
# With prog=( bash cat sed tail something firefox-esr )
firefox-esr:[localhost]
bash:[localhost,localhost2]
sed:[localhost,localhost2]

SOLR POST files with no extension

I am using Solr 5 and I want to index documents that have no file extensions. Unfortunately, renaming the files to add extensions is not an option in my case.
The command I am using is simply:
bin/post -c mycore ../foldertobescaned -type application/pdf
The command works fine for documents that do have extensions, but otherwise I am getting:
Entering auto mode. File endings considered are xml,json,csv,pdf,doc,docx,ppt,pptx,xls,xlsx,odt,odp,ods,ott,otp,ots,rtf,htm,html,txt,log
If renaming the files is not an option, you can use the following script as a workaround until Solr improves its post method. It is a simple bash for loop that submits each file individually and works regardless of the file extension. Note that this script will be slower than using post on the whole folder, because each individual file transfer needs to be initialized.
Save the script below as postFolderToSolr.sh inside your Solr folder (so that Solr's bin/ folder is a subdirectory), make it executable with chmod +x postFolderToSolr.sh, and then use it as follows: ./postFolderToSolr.sh mycore /home/user1/foldertobescaned/ application/pdf
Using no arguments or the wrong number of arguments prints a short usage message as help.
#!/bin/bash
set -o nounset

if [ "$#" -ne 3 ]
then
    echo "Post contents of a folder to Solr."
    echo
    echo "Usage: postFolderToSolr.sh <collectionName> </path/to/folder> <MIME>"
    echo
    exit 1
fi

collection=$1
inputPath=${2%/}  # remove trailing / if it exists
mime=$3

for element in "$inputPath"/*; do
    bin/post -c "$collection" -type "$mime" "$element"
done
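If you would rather not save a script, a hedged equivalent one-liner with find (run from the Solr folder; -maxdepth 1 mirrors the script's non-recursive glob, and the core name and path are taken from the question):

find /home/user1/foldertobescaned -maxdepth 1 -type f -exec bin/post -c mycore -type application/pdf {} \;

This still posts one file per bin/post invocation, so it has the same per-file startup overhead as the loop above.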

Debian package drop database on purge

I am creating a deb package for Ubuntu.
In my postinst script I use
# Configure database
dbc_mysql_createdb_encoding="UTF8"
if ! dbc_go portal3 "$@" ; then
    echo 'Automatic configuration using dbconfig-common failed!'
fi
to create the database, which works fine.
In the postrm file I have:
echo "Remove database"
if [ -f /usr/share/debconf/confmodule ]; then
. /usr/share/debconf/confmodule
fi
if [ -f /usr/share/dbconfig-common/dpkg/postrm ]; then
. /usr/share/dbconfig-common/dpkg/postrm
if ! dbc_go portal3 $# ; then
echo 'Automatic configuration using dbconfig-common failed!'
fi
fi
but this doesn't drop the created user or database.
There is no output on the console or anything else that helps me debug the issue.
Does anyone have an idea how to drop the database and user created during installation?
It also requires a prerm script like:
#!/bin/sh
set -e

. /usr/share/debconf/confmodule
. /usr/share/dbconfig-common/dpkg/prerm.mysql

if ! dbc_go portal3 "$@" ; then
    echo 'Automatic configuration using dbconfig-common failed!'
fi

# dh_installdeb will replace this with shell code automatically
# generated by other debhelper scripts.

exit 0
Otherwise dbconfig-common doesn't know which database needs to be dropped.
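To exercise the removal path end to end, a hedged test sequence (the package name portal3 is taken from the snippets above):

# install the locally built package, then purge it;
# purge (not plain remove) is what triggers dbconfig-common's drop handling
sudo dpkg -i portal3_*.deb
sudo dpkg --purge portal3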
