I made a React app that I want to build for different servers / customers.
I created a folder configurations with different .env files. E.g.: .env.foo, .env.bar, .env.tree, .env.mirror, .env.backup.
To pick out three values as an example, I have:
REACT_OWNER_NAME=Malcom X
REACT_OWNER_MAIL=mx@mail.com
REACT_FTP_FOLDER_PATH=foo.domain.com
In my App.js I have
const App = () => {
  return (
    <>
      {process.env.REACT_OWNER_NAME}<br />
      {process.env.REACT_OWNER_MAIL}
    </>
  );
}
Now I want to place each configurations/.env* file in the root, build the app and deploy it to the different servers.
To automate this, I built the following builder.sh script:
mv .env env
for CURR in foo bar tree mirror backup
do
    cp configurations/.env.$CURR .env
    export $(cat .env | grep -v '#' | awk '/=/ {print $1}')
    if [ -z ${REACT_FTP_FOLDER_NAME+x} ]; then foldername=$CURR; else foldername=$REACT_FTP_FOLDER_NAME; fi
    yarn build
    mkdir apps/${foldername}
    mv build/* apps/${foldername}/.
    rm -rf build
    mv .env apps/${foldername}/.env
done
mv env .env
Now my issue: when I run the script with for CURR in foo, then for CURR in bar and so on (each file one by one), I have no issue; I get all the apps built into apps/.
But when I run it with for CURR in foo bar tree mirror backup, the output looks the same, but when I log process.env, all variables containing a space are cut off at the space.
When I just yarn build the apps, there is no issue. When I add "" around the strings, I get REACT_OWNER_NAME: "\Malcom".
In every case, when I compare configurations/.env.foo with apps/foo.domain.com/.env, there is no difference between the files.
And the funniest thing... the first app (in this example foo) is correct.
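For what it's worth, the prime suspect here is the unquoted export $(cat .env | grep -v '#' | awk '/=/ {print $1}'): awk's {print $1} keeps only the first whitespace-separated field, so REACT_OWNER_NAME=Malcom X is exported as REACT_OWNER_NAME=Malcom, and once a variable is exported it stays set for every later iteration, which would also explain why only the first build comes out right. A minimal sketch of a safer loader, assuming values containing spaces are quoted in the .env files (e.g. REACT_OWNER_NAME="Malcom X"):

# Load .env without word-splitting problems (sketch; requires quoted values)
set -a      # auto-export every variable assigned from here on
. ./.env    # let the shell parse the assignments, quotes and all
set +a      # stop auto-exporting

Running each iteration in a subshell, ( ... ), would additionally stop one configuration's variables from leaking into the next build.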
Related
I have the following GitHub action which allows me to download an image.
I have to make sure that if the file already exists, the "Commit files" and the "Push changes" steps are skipped.
How can I check whether the file already exists, so that nothing is done if it does?
on:
  workflow_dispatch:
name: Scrape File
jobs:
  build:
    name: Build
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
        name: Check out current commit
      - name: Url
        run: |
          URL=$(node ./action.js)
          echo $URL
          echo "URL=$URL" >> $GITHUB_ENV
      - uses: suisei-cn/actions-download-file@v1
        id: downloadfile
        name: Download the file
        with:
          url: ${{ env.URL }}
          target: assets/
      - run: ls -l 'assets/'
      - name: Commit files
        run: |
          git config --local user.email "41898282+github-actions[bot]@users.noreply.github.com"
          git config --local user.name "github-actions[bot]"
          git add .
          git commit -m "Add changes" -a
      - name: Push changes
        uses: ad-m/github-push-action@master
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
          branch: ${{ github.ref }}
There are a few options here - you can go directly with bash and do something like this:
if test -f "$FILE"; then
    echo "$FILE exists"    # file exists; handle it here
fi
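To feed that result into the workflow, a minimal sketch (the step id check_file and the filename are placeholders) is to write a step output from the run: block:

# body of a run: step with id: check_file
if test -f "assets/myfile.png"; then
    echo "exists=true" >> "$GITHUB_OUTPUT"
else
    echo "exists=false" >> "$GITHUB_OUTPUT"
fi

and then gate the later steps with if: steps.check_file.outputs.exists != 'true'.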
or use one of the existing actions like this:
- name: Check file existence
  id: check_files
  uses: andstor/file-existence-action@v1
  with:
    files: "assets/${{ env.URL }}"
- name: File exists
  if: steps.check_files.outputs.files_exists == 'true'
  run: echo "It exists !"
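Either way, the key is to gate the existing steps: adding if: steps.check_files.outputs.files_exists != 'true' to both the "Commit files" and the "Push changes" steps will skip them when the file is already present.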
WARNING: Linux (and maybe macOS) only solution ahead!
I was dealing with a very similar situation some time earlier and developed a method that not only checks for added files, but is also useful if you want to check for modified or deleted files or directories.
Warning:
This solution works only if the file is added/modified/deleted in the git repository.
Introduction:
The command git status --short will return a list of untracked, deleted, and modified files. For example:
 D deleted_foo
 M modified_foo
?? untracked_dir_foo/
?? untracked_file_foo
A  tracked_n_added_foo
Note that git status -s is the short form of the same command, and is what we run from here on.
Understanding `git status -s` output:
When you read the output, you will see some lines in this form:
** filename
** dirname/
Note that here ** represents the first word of each line (the ones like D, ??, etc.).
Here is a summary of all ** in the lines:
| `**` | Meaning |
| --- | --- |
| ` D` | File/dir has been deleted. |
| ` M` | File/dir has been modified. |
| `??` | File/dir has been added but not tracked using `git add [FILENAME]`. |
| `A ` | File/dir has been added and also tracked using `git add [FILENAME]`. |
NOTE: Take care of the spaces! Using, for example, `M` instead of ` M` (with its leading space) in the following solution will not work as expected!
Solution:
Shell part of solution:
We can grep the output of git status -s to check whether a file/dir was added/modified/deleted.
The shell part of the solution goes like this:
if git status -s | grep -x "** [FILENAME]"; then
    # Do whatever you wanna do on match
else
    # Do whatever you wanna do on no-match
fi
Note: Pick the desired ** from the table above and replace [FILENAME] with the filename.
For example, to check whether a file named foo was modified, use:
git status -s | grep -x " M foo"
Explanation: We use git status -s to get the output and pipe the output to grep. We also use the command-line option -x with grep so as to match the whole line.
Workflow part of solution:
A very simple solution will go like this:
...
- name: Check for file
  id: file_check
  run: |
    if git status -s | grep -x "** [FILENAME]"; then
      echo "check_result=true" >> $GITHUB_OUTPUT
    else
      echo "check_result=false" >> $GITHUB_OUTPUT
    fi
...
- name: Run dependent step
  if: steps.file_check.outputs.check_result == 'true'
  run: |
    # Do whatever you wanna do on file found to be
    # added/modified/deleted, based on what you set '**' to
...
EDIT
I want to pass the last git short hash into my React app build with the following command:
git log -1 --pretty=%h
Thus in my Dockerfile I want something such as:
ARG REACT_APP_GIT_SHORTHASH
RUN git clone https://token@github.com/paulywill/repo.git && cd repo && export REACT_APP_GIT_SHORTHASH=`git log -1 --pretty=%h`
ENV REACT_APP_GIT_SHORTHASH $REACT_APP_GIT_SHORTHASH
In my Github Actions build I'm getting the following:
Step 6/14 : ARG REACT_APP_GIT_SHORTHASH
---> Running in f45f530d5c76
Removing intermediate container f45f530d5c76
---> 87a91c010aaf
Step 7/14 : RUN git clone https://***@github.com/paulywill/repo.git && cd repo && export REACT_APP_GIT_SHORTHASH=$(git log -1 --pretty=%h)
---> Running in b8a8fa3cd703
Cloning into 'repo'...
Removing intermediate container b8a8fa3cd703
---> 5bbf3a76b928
Step 8/14 : ENV REACT_APP_GIT_SHORTHASH $REACT_APP_GIT_SHORTHASH
---> Running in f624f2e59dc6
Removing intermediate container f624f2e59dc6
---> d15c3c276062
Are these commands even visible, or able to pass values, if they run in different intermediate containers?
FROM ubuntu:20.04
ARG REACT_APP_GIT_SHORTHASH
RUN apt-get update -y
RUN apt-get install git -y
RUN git clone https://github.com/pooya-mohammadi/deep_utils.git
WORKDIR deep_utils
RUN REACT_APP_GIT_SHORTHASH=`git log -1 --pretty=%h`
ENV REACT_APP_GIT_SHORTHASH $REACT_APP_GIT_SHORTHASH
Replace deep_utils with your repo name. I found the cd <directory> approach to be problematic.
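One caveat worth adding here: each RUN executes in its own shell, so a variable assigned or exported in one RUN is gone by the next instruction, and ENV REACT_APP_GIT_SHORTHASH $REACT_APP_GIT_SHORTHASH can only expand the build arg, never a value set inside a RUN. If the value must be computed inside the image, a hedged sketch is to persist it to a file and consume it in the same shell (the file path and build command are hypothetical):

# Inside one RUN instruction: compute, persist, and use the value together
git log -1 --pretty=%h > /git_shorthash
REACT_APP_GIT_SHORTHASH=$(cat /git_shorthash) npm run build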
As per a previous answer, and thanks in part to @david-maze for pointing me in the right direction, I can easily grab the git short hash before the docker build.
.github/deploy.yml
...
- name: Set outputs
  id: vars
  run: echo "::set-output name=sha_short::$(git rev-parse --short HEAD)"
- name: Check outputs
  run: echo ${{ steps.vars.outputs.sha_short }}
...
docker build --build-arg REACT_APP_GIT_SHORTHASH=${{ steps.vars.outputs.sha_short }} -t $ECR_REPOSITORY_CLIENT .
Dockerfile
FROM node:16-alpine
ARG VERSION
ENV VERSION $VERSION
ARG REACT_APP_GIT_SHORTHASH
ENV REACT_APP_GIT_SHORTHASH $REACT_APP_GIT_SHORTHASH
...
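For a local build, the same build arg can be passed directly on the command line; a quick sketch (the image tag is a placeholder):

docker build \
    --build-arg REACT_APP_GIT_SHORTHASH=$(git rev-parse --short HEAD) \
    -t my-react-app .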
I was just wondering what's the best way to configure codecov for a monorepo setting. For example, let's say I have packages A and B under my monorepo. The way I'm currently using codecov is through the github action codecov/codecov-action@v1, with multiple uses statements in my GitHub workflow YAML file like the following:
- uses: codecov/codecov-action@v1
  with:
    files: ./packages/A/coverage/lcov.info
    flags: flag_a
    name: A
- uses: codecov/codecov-action@v1
  with:
    files: ./packages/B/coverage/lcov.info
    flags: flag_b
    name: B
I know it's possible to use a comma-separated value to upload multiple files, but I have to set a separate flag for each package, and doing it that way doesn't seem to work.
Thank you.
If anyone wants to know my solution, here's what I came up with.
I ended up replacing the github action with my own bash script.
Final code:
#!/usr/bin/env bash

codecov_file="${GITHUB_WORKSPACE}/scripts/codecov.sh"
curl -s https://codecov.io/bash > $codecov_file
chmod +x $codecov_file

cd "${GITHUB_WORKSPACE}/packages";
for dir in */
do
    package="${dir/\//}"
    if [ -d "$package/coverage" ]
    then
        file="$PWD/$package/coverage/lcov.info"
        flag="${package/-/_}"
        $codecov_file -f $file -F $flag -v -t $CODECOV_TOKEN
    fi
done
This is what the above bash script does:
Download the bash uploader script from codecov.
Move to the packages directory where all the packages are located, and go through all the 1st-level directories.
Derive the package name by removing the trailing slash.
Enter a package directory only if it contains a coverage directory, since only those packages have been tested.
Create the file and flag variables (replacing the hyphen with an underscore, as codecov doesn't support hyphens in flag names).
Execute the downloaded codecov script, passing the file and flag variables as arguments.
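To make the loop concrete: for a hypothetical package directory packages/pkg-a/ that contains a coverage/ folder, the iteration boils down to a call like this:

# Hypothetical expansion of one iteration (names are placeholders)
"$GITHUB_WORKSPACE/scripts/codecov.sh" -f "$PWD/pkg-a/coverage/lcov.info" -F pkg_a -v -t "$CODECOV_TOKEN"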
The goal I am trying to achieve is to build a docker image (with a react app within) that uses environment variables from the host.
Planned workflow:
Build the docker image locally
Upload the docker image
Call command docker-compose up
I want the environment variable REACT_APP_SOME_ENV_VARIABLE of the system (where the image is hosted) to be usable by the react app.
Current solution:
// App.js
function App() {
  return (
    <p>SOME_ENV_VARIABLE = {process.env.REACT_APP_SOME_ENV_VARIABLE}</p>
  );
}
# Dockerfile
FROM node:13.12.0-alpine as build-step
# Install the app
RUN mkdir /app
WORKDIR /app
COPY package.json /app
RUN npm install --silent
# Build the app
COPY . /app
RUN npm run-script build
# Create nginx server and copy build there
FROM nginx:1.19-alpine
COPY --from=build-step /app/build /usr/share/nginx/html
# docker-compose.yml
version: '3.5'
services:
  react-env:
    image: react-env
    ports:
      - 80:80/tcp
    environment:
      - REACT_APP_SOME_ENV_VARIABLE=FOO
What am I doing wrong and how do I fix it?
This was solved by using an NGINX docker image, injecting the compiled React production code into the NGINX html folder, and then modifying the docker-entrypoint.sh file.
FROM nginx:1.19-alpine
COPY --from=build-step /app/build /usr/share/nginx/html
COPY ./docker/docker-entrypoint.sh /docker-entrypoint.sh
Then, in that file, add the following code at the end of the old script:
#!/bin/sh
# vim:sw=4:ts=4:et
set -e

if [ -z "${NGINX_ENTRYPOINT_QUIET_LOGS:-}" ]; then
    exec 3>&1
else
    exec 3>/dev/null
fi

if [ "$1" = "nginx" -o "$1" = "nginx-debug" ]; then
    if /usr/bin/find "/docker-entrypoint.d/" -mindepth 1 -maxdepth 1 -type f -print -quit 2>/dev/null | read v; then
        echo >&3 "$0: /docker-entrypoint.d/ is not empty, will attempt to perform configuration"
        echo >&3 "$0: Looking for shell scripts in /docker-entrypoint.d/"
        find "/docker-entrypoint.d/" -follow -type f -print | sort -n | while read -r f; do
            case "$f" in
                *.sh)
                    if [ -x "$f" ]; then
                        echo >&3 "$0: Launching $f";
                        "$f"
                    else
                        # warn on shell scripts without exec bit
                        echo >&3 "$0: Ignoring $f, not executable";
                    fi
                    ;;
                *) echo >&3 "$0: Ignoring $f";;
            esac
        done
        echo >&3 "$0: Configuration complete; ready for start up"
    else
        echo >&3 "$0: No files found in /docker-entrypoint.d/, skipping configuration"
    fi
fi
# Set up endpoint for env retrieval
echo "window._env_ = {" > /usr/share/nginx/html/env_config.js
# Collect environment variables for react
environment_variables="$(env | grep 'REACT_APP.*=')"
# Loop over variables
env | grep 'REACT_APP.*=' | while read -r line;
do
    printf "%s',\n" "$line" | sed "s/=/:'/" >> /usr/share/nginx/html/env_config.js
    # Notify the user
    printf "Env variable %s' was injected into React App. \n" "$line" | sed "0,/=/{s//:'/}"
done
# End the object creation
echo "}" >> /usr/share/nginx/html/env_config.js
echo "Environment Variable Injection Complete."
exec "$@"
Functionality:
This will find all environment variables passed to the docker container running the frontend, extract all variables starting with REACT_APP, and add them to a file named env_config.js.
All you need to do in the react app is to load that script file, then access the environment variables using window._env_.<property>.
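As a quick sanity check, you can run the container and inspect the generated file. A sketch, where the image name comes from the compose file above and the variable value is a placeholder:

# Run the image and print the file that the entrypoint generated
docker run --rm -e REACT_APP_SOME_ENV_VARIABLE=FOO react-env \
    cat /usr/share/nginx/html/env_config.js
# Expected shape of the output:
# window._env_ = {
# REACT_APP_SOME_ENV_VARIABLE:'FOO',
# }

In the app itself the file is typically loaded with a <script src="/env_config.js"></script> tag in index.html, before the bundle.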
DISCLAIMER
Environment variables injected with this method are fully readable by anyone using the site. This is not a secure method for sensitive information. Only use this for things such as "where is the backend api endpoint" or other non-sensitive information that could be extracted just as easily.
In your approach, the environment variables are injected when the container starts, but by that time your app is already built and the docker image is created; you also cannot access process.env on the client side. Therefore, to access them on the client side, we have to do the steps below.
You must be using webpack in your React App for bundling and other stuff.
So in your webpack.config.js, declare your environment variable REACT_APP_SOME_ENV_VARIABLE using the DefinePlugin, which will declare the variable as a global variable for the app.
Your webpack config should look something like this:
const path = require("path");
const webpack = require("webpack");

module.exports = {
  target: "web",
  performance: {
    hints: false,
  },
  node: {
    fs: "empty"
  },
  entry: "./src/index.js",
  output: {
    path: path.join(__dirname, "/build"),
    filename: "[name].[contenthash].js"
  },
  module: {
    rules: [
      // your rules
    ]
  },
  plugins: [
    new webpack.DefinePlugin({
      "envVariable": JSON.stringify(process.env.REACT_APP_SOME_ENV_VARIABLE),
    }),
  ],
};
And in your App, you can use the variable like this:
// App.js
function App() {
  return (
    <p>SOME_ENV_VARIABLE = {envVariable}</p>
  );
}
NOTE: Make sure your environment variables are injected into the docker container before the RUN npm run-script build command runs.
For that, you should declare your environment variables in the Dockerfile using ENV before the RUN npm run-script build step.
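Since DefinePlugin reads the variable at build time, the same mechanism can be exercised locally, outside docker; for example:

# The variable must be present in the environment of the build process itself
REACT_APP_SOME_ENV_VARIABLE=FOO npm run build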
Is it possible to mass rename objects on Google Cloud Storage using gsutil (or some other tool)? I am trying to figure out a way to rename a bunch of images from *.JPG to *.jpg.
Here is a native way to do this in bash, with a line-by-line explanation of the code below:
gsutil ls gs://bucket_name/*.JPG > src-rename-list.txt
sed 's/\.JPG/\.jpg/g' src-rename-list.txt > dest-rename-list.txt
paste -d ' ' src-rename-list.txt dest-rename-list.txt | sed -e 's/^/gsutil\ mv\ /' | while read line; do bash -c "$line"; done
rm src-rename-list.txt; rm dest-rename-list.txt
The solution builds two lists, one for the source and one for the destination files (to be used in the "gsutil mv" command):
gsutil ls gs://bucket_name/*.JPG > src-rename-list.txt
sed 's/\.JPG/\.jpg/g' src-rename-list.txt > dest-rename-list.txt
The line "gsutil mv " and the two files are concatenated line by line using the below code:
paste -d ' ' src-rename-list.txt dest-rename-list.txt | sed -e 's/^/gsutil\ mv\ /'
This then runs each line in a while loop:
while read line; do bash -c "$line"; done
Lastly, clean up and delete the files created:
rm src-rename-list.txt; rm dest-rename-list.txt
The above has been tested against a working Google Storage bucket.
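A more quote-safe variant of the same idea, assuming object names contain no newlines, avoids the temporary files entirely:

# Rename every .JPG object to .jpg (sketch; the bucket name is a placeholder)
gsutil ls 'gs://bucket_name/*.JPG' | while read -r src; do
    gsutil mv "$src" "${src%.JPG}.jpg"
done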
https://cloud.google.com/storage/docs/gsutil/addlhelp/WildcardNames
gsutil supports URI wildcards
EDIT
gsutil 3.0 release note
As part of the bucket sub-directory support we changed the * wildcard to match only up to directory boundaries, and introduced the new ** wildcard...
Do you have directories under the bucket? If so, maybe you need to go down into each directory, or use **.
gsutil -m mv gs://my_bucket/**.JPG gs://my_bucket/**.jpg
or
gsutil -m mv gs://my_bucket/mydir/*.JPG gs://my_bucket/mydir/*.jpg
EDIT
gsutil doesn't support wildcards for the destination so far (as of 4/12/'14), and neither does the API.
So at this moment you need to retrieve the list of all JPG files
and rename each file.
python example:
import subprocess
files = subprocess.check_output("gsutil ls gs://my_bucket/*.JPG", shell=True)
files = files.split("\n")[:-1]
for f in files:
    subprocess.call("gsutil mv %s %s" % (f, f[:-3] + "jpg"), shell=True)
Please note that this could take hours.
gsutil does not support parallelized mass copy/rename.
You have two options:
use a dataflow process to do the operation
or
use GNU parallel to launch it using several processes
If you use GNU Parallel, it is better to deploy a new instance to do the mass copy/rename operation:
First: Make a list of files you want to copy/rename (a file with source and destination separated by a space or tab), like this:
gs://origin_bucket/path/file gs://dest_bucket/new_path/new_filename
Second: Launch a new compute instance
Third: Log in to that instance and install GNU parallel:
sudo apt install parallel
Fourth: Authorize yourself with Google (gcloud auth login), because the service account for compute might not have permissions to move/rename the files:
gcloud auth login
Fifth: Make the copy (gsutil cp) or move (gsutil mv) operation with parallel:
parallel -j 20 --colsep ' ' gsutil mv {1} {2} :::: file_with_source_destination_uris.txt
This will make 20 parallel runs of the gsutil command.
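If it helps, the source/destination list itself can be generated rather than written by hand; a sketch for the .JPG-to-.jpg case (bucket name and file name are placeholders):

# Build "src dst" pairs for every .JPG object
gsutil ls 'gs://origin_bucket/**.JPG' \
    | awk '{ dst = $0; sub(/\.JPG$/, ".jpg", dst); print $0, dst }' \
    > file_with_source_destination_uris.txt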
Yes, it is possible:
Move/rename objects and/or subdirectories
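For example (plain gsutil usage; the bucket and names are placeholders):

# Rename a single object
gsutil mv gs://my_bucket/old_name.jpg gs://my_bucket/new_name.jpg
# Rename ("move") a whole subdirectory
gsutil mv gs://my_bucket/olddir gs://my_bucket/newdir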