EDIT
I want to pass the last git short hash into my React app build with the following command:
git log -1 --pretty=%h
So in my Dockerfile I want something like:
ARG REACT_APP_GIT_SHORTHASH
RUN git clone https://token@github.com/paulywill/repo.git && cd repo && export REACT_APP_GIT_SHORTHASH=`git log -1 --pretty=%h`
ENV REACT_APP_GIT_SHORTHASH $REACT_APP_GIT_SHORTHASH
In my GitHub Actions build I'm getting the following:
Step 6/14 : ARG REACT_APP_GIT_SHORTHASH
---> Running in f45f530d5c76
Removing intermediate container f45f530d5c76
---> 87a91c010aaf
Step 7/14 : RUN git clone https://***@github.com/paulywill/repo.git && cd repo && export REACT_APP_GIT_SHORTHASH=$(git log -1 --pretty=%h)
---> Running in b8a8fa3cd703
Cloning into 'repo'...
Removing intermediate container b8a8fa3cd703
---> 5bbf3a76b928
Step 8/14 : ENV REACT_APP_GIT_SHORTHASH $REACT_APP_GIT_SHORTHASH
---> Running in f624f2e59dc6
Removing intermediate container f624f2e59dc6
---> d15c3c276062
Are these commands even visible to each other, or able to pass values, if they run in different intermediate containers?
FROM ubuntu:20.04
ARG REACT_APP_GIT_SHORTHASH
RUN apt-get update -y
RUN apt-get install git -y
RUN git clone https://github.com/pooya-mohammadi/deep_utils.git
WORKDIR deep_utils
RUN REACT_APP_GIT_SHORTHASH=`git log -1 --pretty=%h`
ENV REACT_APP_GIT_SHORTHASH $REACT_APP_GIT_SHORTHASH
Replace deep_utils with your repo name. I found the cd <directory> to be problematic.
As per a previous answer, and thanks in part to @david-maze for pointing me in the right direction, I can easily grab the git short hash before the docker build.
.github/deploy.yml
...
- name: Set outputs
id: vars
run: echo "::set-output name=sha_short::$(git rev-parse --short HEAD)"
- name: Check outputs
run: echo ${{ steps.vars.outputs.sha_short }}
...
docker build --build-arg REACT_APP_GIT_SHORTHASH=${{ steps.vars.outputs.sha_short }} -t $ECR_REPOSITORY_CLIENT .
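Note that newer runner versions deprecate the ::set-output command; a hedged equivalent of the "Set outputs" step above writes to the GITHUB_OUTPUT file instead, and the rest of the workflow stays unchanged:

echo "sha_short=$(git rev-parse --short HEAD)" >> "$GITHUB_OUTPUT"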
Dockerfile
FROM node:16-alpine
ARG VERSION
ENV VERSION $VERSION
ARG REACT_APP_GIT_SHORTHASH
ENV REACT_APP_GIT_SHORTHASH $REACT_APP_GIT_SHORTHASH
...
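To sanity-check that the value actually lands in the image, a hedged example (the myapp tag is just a placeholder) of building with the arg and reading it back:

docker build --build-arg REACT_APP_GIT_SHORTHASH=$(git rev-parse --short HEAD) -t myapp .
docker run --rm myapp sh -c 'echo $REACT_APP_GIT_SHORTHASH'   # should print the short hash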
Related
I have the following GitHub Actions workflow which allows me to download an image.
I have to make sure that if the file already exists, the "Commit files" and "Push changes" steps are skipped.
How can I check whether the file already exists, so that nothing is done if it does?
on:
  workflow_dispatch:
name: Scrape File
jobs:
  build:
    name: Build
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
        name: Check out current commit
      - name: Url
        run: |
          URL=$(node ./action.js)
          echo $URL
          echo "URL=$URL" >> $GITHUB_ENV
      - uses: suisei-cn/actions-download-file@v1
        id: downloadfile
        name: Download the file
        with:
          url: ${{ env.URL }}
          target: assets/
      - run: ls -l 'assets/'
      - name: Commit files
        run: |
          git config --local user.email "41898282+github-actions[bot]@users.noreply.github.com"
          git config --local user.name "github-actions[bot]"
          git add .
          git commit -m "Add changes" -a
      - name: Push changes
        uses: ad-m/github-push-action@master
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
          branch: ${{ github.ref }}
There are a few options here - you can go directly with bash and do something like this:
if test -f "$FILE"; then
  # file exists
fi
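For instance, a hedged sketch of that check inside a run: step, writing a step output that later steps can test (assets/downloaded_file is a hypothetical path):

FILE="assets/downloaded_file"              # hypothetical path of the downloaded file
if test -f "$FILE"; then
  echo "exists=true" >> "$GITHUB_OUTPUT"   # later steps can skip commit/push
else
  echo "exists=false" >> "$GITHUB_OUTPUT"
fi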
or use one of the existing actions like this:
- name: Check file existence
  id: check_files
  uses: andstor/file-existence-action@v1
  with:
    files: "assets/${{ env.URL }}"
- name: File exists
  if: steps.check_files.outputs.files_exists == 'true'
  run: echo "It exists!"
WARNING: Linux (and maybe MacOS) only solution ahead!
I was dealing with a very similar situation some time ago and developed a method that not only checks for added files, but is also useful if you want to check for modified or deleted files or directories.
Warning:
This solution works only if the file is added/modified/deleted in the git repository.
Introduction:
The command git status --short will return a list of untracked, deleted and modified files. For example:
D deleted_foo
M modified_foo
?? untracked_dir_foo/
?? untracked_file_foo
A tracked_n_added_foo
Note that this is the same as running git status -s.
Understanding `git status -s` output:
When you read the output, you will see some lines in this form:
** filename
** dirname/
Note that here ** represents the first word of the line (codes like D, ?? etc.).
Here is a summary of all possible ** values:
**    Meaning
 D    File/dir has been deleted.
 M    File/dir has been modified.
??    File/dir has been added but not tracked using git add [FILENAME].
A     File/dir has been added and also tracked using git add [FILENAME].
NOTE: Take care of the spaces! Using, for example, "M" instead of " M" (with the leading space) in the following solution will not work as expected!
Solution:
Shell part of solution:
We can grep the output of git status -s to check whether a file/dir was added/modified/deleted.
The shell part of the solution goes like this:
if git status -s | grep -x "** [FILENAME]"; then
  # Do whatever you wanna on match
else
  # Do whatever you wanna on no-match
fi
Note: Take the desired ** from the table above and replace [FILENAME] with the filename.
For example, to check whether a file named foo was modified, use:
git status -s | grep -x " M foo"
Explanation: We use git status -s to get the output and pipe it to grep. We also use the command-line option -x with grep so as to match the whole line.
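For instance, a hedged check for an untracked file (hypothetical path) added by an earlier download step:

git status -s | grep -x "?? assets/downloaded_file.png"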
Workflow part of solution:
A very simple solution will go like this:
...
- name: Check for file
  id: file_check
  run: |
    if git status -s | grep -x "** [FILENAME]"; then
      echo "check_result=true" >> $GITHUB_OUTPUT
    else
      echo "check_result=false" >> $GITHUB_OUTPUT
    fi
...
- name: Run dependent step
  if: steps.file_check.outputs.check_result == 'true'
  run: |
    # Do whatever you wanna do on file found to be
    # added/modified/deleted, based on what you set '**' to
...
I made a React app that I want to build for different servers / customers.
I created a folder configurations with different .env files. E.g.: .env.foo, .env.bar, .env.tree, .env.mirror, .env.backup.
To point three values out as an example I have:
REACT_OWNER_NAME=Malcom X
REACT_OWNER_MAIL=mx@mail.com
REACT_FTP_FOLDER_PATH=foo.domain.com
In my App.js I have
const App = () => {
  return (
    <>
      {process.env.REACT_OWNER_NAME}<br />
      {process.env.REACT_OWNER_MAIL}
    </>
  );
}
Now I want to place each configurations/.env* file in the root, build the app and deploy it to the different servers.
To automate this, I build the following builder.sh script:
mv .env env
for CURR in foo bar tree mirror backup
do
  cp configurations/.env.$CURR .env
  export $(cat .env | grep -v '#' | awk '/=/ {print $1}')
  if [ -z ${REACT_FTP_FOLDER_NAME+x} ]; then foldername=$CURR; else foldername=$REACT_FTP_FOLDER_NAME; fi
  yarn build
  mkdir apps/${foldername}
  mv build/* apps/${foldername}/.
  rm -rf build
  mv .env apps/${foldername}/.env
done
mv env .env
Now my issue: When I run the script with for CURR in foo, then for CURR in bar and so on (each file one by one), I have no issue; I get all the apps built into apps/.
But when I run it with for CURR in foo bar tree mirror backup, the output looks the same, yet when I log process.env, all variables containing a space get cut off at the space.
When I just yarn build the apps, no issue. When I add "" around the strings I get REACT_OWNER_NAME: "\Malcom".
In every case when I compare configurations/.env.foo with apps/foo.domain.com/.env there is no difference between the files.
And the funniest thing: the first app (in this example foo) is correct.
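For what it's worth, a hedged sketch of where the space may be getting lost: awk's {print $1} keeps only the first whitespace-separated field of each line, so a value containing a space is truncated before export ever sees it. Sourcing the file instead keeps the full value, assuming values with spaces are quoted in the .env file:

# the awk stage turns "REACT_OWNER_NAME=Malcom X" into "REACT_OWNER_NAME=Malcom",
# so the exported value is already cut off at the space
export $(cat .env | grep -v '#' | awk '/=/ {print $1}')

# a possible alternative: let the shell parse the .env file itself
set -a        # auto-export every variable assigned from here on
. ./.env
set +a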
The goal I am trying to achieve is to build a docker image (with a react app within) that is using environment variables from the host.
Planned workflow:
Build the docker image locally
Upload the docker image
Call command docker-compose up
I want the environment variable REACT_APP_SOME_ENV_VARIABLE of the system (where the image is hosted) to be usable by the react app.
Current solution:
// App.js
function App() {
  return (
    <p>SOME_ENV_VARIABLE = {process.env.REACT_APP_SOME_ENV_VARIABLE}</p>
  );
}
# Dockerfile
FROM node:13.12.0-alpine as build-step
# Install the app
RUN mkdir /app
WORKDIR /app
COPY package.json /app
RUN npm install --silent
# Build the app
COPY . /app
RUN npm run-script build
# Create nginx server and copy build there
FROM nginx:1.19-alpine
COPY --from=build-step /app/build /usr/share/nginx/html
# docker-compose.yml
version: '3.5'
services:
  react-env:
    image: react-env
    ports:
      - 80:80/tcp
    environment:
      - REACT_APP_SOME_ENV_VARIABLE=FOO
What am I doing wrong and how do I fix it?
This was solved by using the NGINX Docker image, injecting the compiled React production code into the NGINX html folder, and then modifying the docker-entrypoint.sh file.
FROM nginx:1.19-alpine
COPY --from=build-step /app/build /usr/share/nginx/html
COPY ./docker/docker-entrypoint.sh /docker-entrypoint.sh
Then, in that file, add the following code at the end of the old script:
#!/bin/sh
# vim:sw=4:ts=4:et
set -e
if [ -z "${NGINX_ENTRYPOINT_QUIET_LOGS:-}" ]; then
    exec 3>&1
else
    exec 3>/dev/null
fi
if [ "$1" = "nginx" -o "$1" = "nginx-debug" ]; then
    if /usr/bin/find "/docker-entrypoint.d/" -mindepth 1 -maxdepth 1 -type f -print -quit 2>/dev/null | read v; then
        echo >&3 "$0: /docker-entrypoint.d/ is not empty, will attempt to perform configuration"
        echo >&3 "$0: Looking for shell scripts in /docker-entrypoint.d/"
        find "/docker-entrypoint.d/" -follow -type f -print | sort -n | while read -r f; do
            case "$f" in
                *.sh)
                    if [ -x "$f" ]; then
                        echo >&3 "$0: Launching $f";
                        "$f"
                    else
                        # warn on shell scripts without exec bit
                        echo >&3 "$0: Ignoring $f, not executable";
                    fi
                    ;;
                *) echo >&3 "$0: Ignoring $f";;
            esac
        done
        echo >&3 "$0: Configuration complete; ready for start up"
    else
        echo >&3 "$0: No files found in /docker-entrypoint.d/, skipping configuration"
    fi
fi
# Set up endpoint for env retrieval
echo "window._env_ = {" > /usr/share/nginx/html/env_config.js
# Collect environment variables for react
eval environment_variables="$(env | grep REACT_APP.*=)"
# Loop over variables
env | grep REACT_APP.*= | while read -r line;
do
  printf "%s',\n" $line | sed "s/=/:'/" >> /usr/share/nginx/html/env_config.js
  # Notify the user
  printf "Env variable %s' was injected into React App. \n" $line | sed "0,/=/{s//:'/}"
done
# End the object creation
echo "}" >> /usr/share/nginx/html/env_config.js
echo "Environment Variable Injection Complete."
exec "$@"
Functionality:
This will look through all environment variables passed to the Docker container running the frontend, extract the variables starting with REACT_APP, and add them to a file named env_config.js.
All you need to do in the react app is to load that script file, then access the environment variables using window._env_.<property>.
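As a hedged illustration (assuming the compose example above with REACT_APP_SOME_ENV_VARIABLE=FOO), inspecting the running container should show something like:

docker-compose up -d
docker-compose exec react-env cat /usr/share/nginx/html/env_config.js
# window._env_ = {
# REACT_APP_SOME_ENV_VARIABLE:'FOO',
# }

The React code can then read window._env_.REACT_APP_SOME_ENV_VARIABLE once index.html loads env_config.js via a script tag.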
DISCLAIMER
Environment variables injected with this method are fully readable by anyone using the site. This is not a secure method for sensitive information. Only use this for things such as "where is the backend api endpoint" or other non-sensitive information that can be extracted just as easily.
In your approach, the environment variables are injected when the container starts, but by that time your app is already built and the Docker image is created; also, you cannot access process.env on the client side. Therefore, to access the variables on the client side, we have to do the steps below.
You must be using webpack in your React App for bundling and other stuff.
So in your webpack.config.js, declare your environment variable REACT_APP_SOME_ENV_VARIABLE using the DefinePlugin, which will declare it as a global variable for the app.
Your webpack config should look something like this:
const path = require("path");
const webpack = require("webpack");

module.exports = {
  target: "web",
  performance: {
    hints: false,
  },
  node: {
    fs: "empty"
  },
  entry: "./src/index.js",
  output: {
    path: path.join(__dirname, "/build"),
    filename: "[name].[contenthash].js"
  },
  module: {
    rules: [
      // your rules
    ]
  },
  plugins: [
    new webpack.DefinePlugin({
      "envVariable": JSON.stringify(process.env.REACT_APP_SOME_ENV_VARIABLE),
    }),
  ],
};
And in your App, you can use the variable like this
// App.js
function App() {
  return (
    <p>SOME_ENV_VARIABLE = {envVariable}</p>
  );
}
NOTE: Make sure that your environment variables are injected into the Docker container before the RUN npm run-script build command runs.
For that, you should declare your environment variables in the Dockerfile using ENV before the RUN npm run-script build step.
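For instance, a hedged sketch of that wiring (the react-env tag and variable name are taken from the examples above; the Dockerfile would need ARG REACT_APP_SOME_ENV_VARIABLE and ENV REACT_APP_SOME_ENV_VARIABLE=$REACT_APP_SOME_ENV_VARIABLE before the build step):

# pass the value at image build time so DefinePlugin can bake it into the bundle
docker build --build-arg REACT_APP_SOME_ENV_VARIABLE=FOO -t react-env .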
I have a database for my app and I want to create it at run time in Docker.
I have a file CreateDB.sh and it creates all the tables and stored procedures that I want.
I tried this :
FROM mcr.microsoft.com/mssql/server
ENV ACCEPT_EULA=Y \
SA_PASSWORD=qwe123QWE
USER root
RUN mkdir /home/db
COPY ./db /home/db
RUN chmod +x /home/db/DbScriptLinux.sh
WORKDIR /home/db/
CMD ["/bin/bash", "/home/db/DbScriptLinux.sh"]
but it returns an error:
LoginTimeout
Is there any way to run my script after all services (SQL Server) start?
You can use an if statement, for example, RUN if [[ -z "$arg" ]] ; then echo Argument not provided ; else echo Argument is $arg ; fi
Another way would be to use command1 && command2, so that if command 1 is successful, command 2 runs afterwards.
As for your last line, CMD ["/bin/bash", "/home/db/DbScriptLinux.sh"]: if this is meant to start the database every time you start the container, as your default command, then it should be alright; otherwise it would be better to use the RUN instruction.
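If the LoginTimeout comes from the script running before SQL Server is ready, one common pattern (a hedged sketch, not the image's built-in mechanism; the sqlcmd path is an assumption and differs between image versions) is to start sqlservr in the background and retry until it accepts connections before running the init script:

#!/bin/bash
# start SQL Server in the background
/opt/mssql/bin/sqlservr &

# wait until the server accepts logins (sqlcmd path may vary, e.g. /opt/mssql-tools18/bin/sqlcmd)
until /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P "$SA_PASSWORD" -Q "SELECT 1" > /dev/null 2>&1; do
  echo "Waiting for SQL Server to start..."
  sleep 2
done

# run the database initialisation script, then keep the server in the foreground
/home/db/DbScriptLinux.sh
wait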
I'm trying to deploy code with Capistrano 3 to an Ubuntu server from a Git repository, but I'm getting this error.
==========================================================================
Here is my Gemfile.
gem 'capistrano', '~> 3.1.0'
#//Use unicorn as the app server
gem 'unicorn'
#// Use Capistrano for deployment
group :development do
gem 'capistrano-rails'
gem 'capistrano-bundler'
gem 'capistrano-rbenv', "~> 2.0"
end
source 'https://rubygems.org'
==========================================================================
#deploy.rb
lock '3.1.0'
#// Define the name of the application
set :application, 'my_app'
#// Define where can Capistrano access the source repository
#// set :repo_url, 'https://github.com/[user name]/[application name].git'
set :scm, :git
set :repo_url, 'git@github.com:jaipratik/rw.git'
set :use_sudo, true
set :log_level, :debug
#// Define where to put your application code
set :deploy_to, "/var/www/my_app"
set :pty, true
set :format, :pretty
==========================================================================
#// production.rb
role :app, %w{ubuntu@{IP/Host}}
server '{IP/Host}', user: 'ubuntu', roles: %w{web app}
set :ssh_options, {
  keys: %w(/Users/jay/.ssh/id_rsa),
  forward_agent: false,
  user: 'user'
  # auth_methods: %w(password)
}
==========================================================================
result for $ bundle exec cap production deploy --trace
** Invoke production (first_time)
** Execute production
** Invoke load:defaults (first_time)
** Execute load:defaults
** Invoke deploy (first_time)
** Execute deploy
** Invoke deploy:starting (first_time)
** Execute deploy:starting
** Invoke deploy:check (first_time)
** Execute deploy:check
** Invoke git:check (first_time)
** Invoke git:wrapper (first_time)
** Execute git:wrapper
INFO[f8299d4f] Running /usr/bin/env mkdir -p /tmp/my_app/ on {IP/Host}
DEBUG[f8299d4f] Command: /usr/bin/env mkdir -p /tmp/my_app/
INFO[f8299d4f] Finished in 0.723 seconds with exit status 0 (successful).
DEBUGUploading /tmp/my_app/git-ssh.sh 0.0%
INFOUploading /tmp/my_app/git-ssh.sh 100.0%
INFO[b509dfb7] Running /usr/bin/env chmod +x /tmp/my_app/git-ssh.sh on {IP/Host}
DEBUG[b509dfb7] Command: /usr/bin/env chmod +x /tmp/my_app/git-ssh.sh
INFO[b509dfb7] Finished in 0.084 seconds with exit status 0 (successful).
** Execute git:check
DEBUG[9646aea0] Running /usr/bin/env git ls-remote git@github.com:jaipratik/rw.git on {IP/Host}
DEBUG[9646aea0] Command: ( GIT_ASKPASS=/bin/echo GIT_SSH=/tmp/my_app/git-ssh.sh /usr/bin/env git ls-remote git@github.com:jaipratik/rw.git )
DEBUG[9646aea0] c452c845bb80f72d3023557d2ea8f776950c659f
DEBUG[9646aea0]
DEBUG[9646aea0] HEAD
DEBUG[9646aea0]
DEBUG[9646aea0] c452c845bb80f72d3023557d2ea8f776950c659f
DEBUG[9646aea0]
DEBUG[9646aea0] refs/heads/master
DEBUG[9646aea0]
DEBUG[9646aea0] Finished in 0.940 seconds with exit status 0 (successful).
** Invoke deploy:check:directories (first_time)
** Execute deploy:check:directories
INFO[fa4b1f56] Running /usr/bin/env mkdir -pv /var/www/my_app/shared /var/www/my_app/releases on {IP/Host}
DEBUG[fa4b1f56] Command: /usr/bin/env mkdir -pv /var/www/my_app/shared /var/www/my_app/releases
INFO[fa4b1f56] Finished in 0.086 seconds with exit status 0 (successful).
** Invoke deploy:check:linked_dirs (first_time)
** Execute deploy:check:linked_dirs
** Invoke deploy:check:make_linked_dirs (first_time)
** Execute deploy:check:make_linked_dirs
** Invoke deploy:check:linked_files (first_time)
** Execute deploy:check:linked_files
** Invoke deploy:started (first_time)
** Execute deploy:started
** Invoke deploy:updating (first_time)
** Invoke deploy:new_release_path (first_time)
** Execute deploy:new_release_path
** Execute deploy:updating
** Invoke git:create_release (first_time)
** Invoke git:update (first_time)
** Invoke git:clone (first_time)
** Invoke git:wrapper
** Execute git:clone
DEBUG[fa77f295] Running /usr/bin/env [ -f /var/www/my_app/repo/HEAD ] on {IP/Host}
DEBUG[fa77f295] Command: [ -f /var/www/my_app/repo/HEAD ]
DEBUG[fa77f295] Finished in 0.081 seconds with exit status 1 (failed).
DEBUG[0cc407cc] Running /usr/bin/env if test ! -d /var/www/my_app; then echo "Directory does not exist '/var/www/my_app'" 1>&2; false; fi on {IP/Host}
DEBUG[0cc407cc] Command: if test ! -d /var/www/my_app; then echo "Directory does not exist '/var/www/my_app'" 1>&2; false; fi
DEBUG[0cc407cc] Finished in 0.075 seconds with exit status 0 (successful).
INFO[de52ca0e] Running /usr/bin/env git clone --mirror git@github.com:jaipratik/rw.git /var/www/my_app/repo on {IP/Host}
DEBUG[de52ca0e] Command: cd /var/www/my_app && ( GIT_ASKPASS=/bin/echo GIT_SSH=/tmp/my_app/git-ssh.sh /usr/bin/env git clone --mirror git@github.com:jaipratik/rw.git /var/www/my_app/repo )
cap aborted!
SSHKit::Runner::ExecuteError: Exception while executing on host {IP/Host}: git exit status: 1
git stdout: Nothing written
git stderr: Nothing written
/Users/jay/.rvm/gems/ruby-2.1.0/gems/sshkit-1.5.1/
Just write the command in console:
ssh-add
If that doesn't work, run the command below in the console:
ssh-add ~/.ssh/id_rsa
and then re-run the command (i.e. cap production deploy); it will work for sure.
Capistrano wasn't able to create the folder on EC2. Once I created the folder, it worked like a charm.
So if you also have similar issues, try creating the folder on EC2 and then execute cap production deploy.
If all the above solutions did not work for you, try this. It worked for me. Cheers.
ssh-add ~/.ssh/your_private_id_rsa
and run
eval `ssh-agent`
Ref: http://mjacobus.github.io/2015/08/20/solving-weird-capistrano-problems-with-ssh-authentication.html
So what are the permissions for /var/www/my_app on your remote machine? Make sure it is owned by the same user that you specified in this config option:
set :user, "mydeployuser"
If not, I believe Capistrano defaults to the user you are using to run the cap command on the client machine. Make sure that user has permission to create/modify /var/www/my_app.
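For example, a hedged way to set that up on the server (mydeployuser is a placeholder for whatever user you deploy as):

sudo mkdir -p /var/www/my_app
sudo chown -R mydeployuser:mydeployuser /var/www/my_app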
Since you have:
set :use_sudo, true
it's possible that sudo may not be set up as passwordless sudo, as suggested in the Cap 3 docs.
I had this issue when trying to deploy a Rails application to a server using Capistrano.
When I run the command cap deploy, I get the error:
SSHKit::Runner::ExecuteError: Exception while executing as deploy
Here's how I fixed it:
The issue was that the server I was trying to deploy to did not have my SSH key authorized. I had to do the following:
Generate a new SSH key on my local machine:
ssh-keygen
Display the contents of the public key file. In my case it was id_rsa.pub:
cat ~/.ssh/your-public-file
Login to the server which you want to deploy the app to:
ssh your-username@server-ip-address
Paste the contents of the public key file into the ~/.ssh/authorized_keys file on a new line.
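Equivalently, if ssh-copy-id is available on your local machine, the last three steps can be done in one hedged command:

ssh-copy-id your-username@server-ip-address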
That's all.
I hope this helps
I had the same problem. Turns out I forgot to add my GitLab instance's SSH key (the runner's) to my server's ~/.ssh/authorized_keys file.
If you are setting up a new Ubuntu server, public key authentication may be switched off by default, which you will have to switch on. See this answer.
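If that turns out to be the case, a hedged sketch of switching it on (run on the server; the config line may already exist commented out):

# enable public key authentication in the SSH daemon config, then restart sshd
sudo sed -i 's/^#\?PubkeyAuthentication.*/PubkeyAuthentication yes/' /etc/ssh/sshd_config
sudo systemctl restart ssh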