I'm currently trying to build a React app within a GitHub Actions workflow and put it up on an EC2 instance.
The problem is that I can't seem to grab the /build folder that gets created during the action.
This is what I have currently:
# This is a basic workflow to help you get started with Actions
name: Deploy to staging

# Controls when the workflow will run
on:
  # Triggers the workflow on push or pull request events but only for the master branch
  push:
    branches: [master]
  # Allows you to run this workflow manually from the Actions tab
  # workflow_dispatch:

# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
  # This workflow contains a single job called "build"
  build:
    # The type of runner that the job will run on
    runs-on: ubuntu-latest
    # Steps represent a sequence of tasks that will be executed as part of the job
    steps:
      # Step 1
      # Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
      - uses: actions/checkout@v2
      # Step 2: npm install
      - name: npm install for Server
        run: |
          npm ci
      - name: npm install in client DIR and build
        run: |
          cd client
          npm ci
          npm run build
          echo 'CLIENT DONE'
          ls
        env:
          CI: false

  # Deploy
  deploy:
    needs: build
    runs-on: ubuntu-latest
    if: github.ref == 'refs/heads/master'
    steps:
      # Step 1
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-1
      # Step 2
      - name: Create CodeDeploy Deployment
        id: deploy
        run: |
          aws deploy create-deployment \
            --application-name "app-name" \
            --deployment-group-name "deply-app-ec2" \
            --deployment-config-name CodeDeployDefault.OneAtATime \
            --github-location repository=${{ github.repository }},commitId=${{ github.sha }}
This is somewhat expected, since --github-location repository=${{ github.repository }},commitId=${{ github.sha }} tells CodeDeploy to fetch the repository at the SHA of the latest commit pushed to master, which doesn't contain the build output.
Is there a way to grab the workspace the GitHub Action is working in (where the node_modules and build folders exist)?
What I tried:
Run npm run build on EC2 using a CodeDeploy script -> this was killing the server, since our resources are very limited.
Make a separate commit to master during CI and grab its commit SHA, e.g.:
- name: Create commit
  run: |
    git config --global user.name 'name'
    git config --global user.email 'email'
    git add .
    git commit -m "Build client for deploying"
- name: Set latest commit id as var
  id: vars
  shell: bash
  run: |
    echo "::set-output name=sha_short::$(git rev-parse HEAD)"
    echo "${{ steps.vars.outputs.sha_short }}"
then use commitId=${{ steps.vars.outputs.sha_short }} - but the CodeDeploy run results in a 404 saying it couldn't find the commit (also expected, since the commit was never pushed to master).
Am I missing something, or is building through a GitHub Action and then deploying to EC2 using CodeDeploy impossible to do at the moment?
Resolved by manually uploading the build folder onto EC2 using SCP during the workflow run.
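For reference, a minimal sketch of that SCP step; the secret names, user, and paths here are placeholders, not the exact values from the real workflow:

- name: Copy build folder to EC2 over SCP
  run: |
    # Write the SSH private key from a repository secret (placeholder name)
    echo "${{ secrets.EC2_SSH_KEY }}" > key.pem
    chmod 600 key.pem
    # Recursively copy the client build output onto the instance (placeholder host/path)
    scp -o StrictHostKeyChecking=no -i key.pem -r client/build \
      ec2-user@${{ secrets.EC2_HOST }}:/var/www/app/build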
I am trying to serve a React application inside CircleCI so that it can be reached by Cypress e2e tests. First I tried running the app with the webpack dev server, but the issue is the same.
When Cypress runs the tests, it cannot reach the instance the React app is being served from, so I am trying to use http-server instead.
This is what I have at the moment:
version: 2.1
jobs:
  cypress:
    working_directory: ~/project
    docker:
      - image: cypress/base:18.12.1
        environment:
          CYPRESS_baseUrl: http://127.0.0.1:3000
    steps:
      - checkout:
          path: ~/project
      - run:
          name: Install dependencies
          command: npm ci --ignore-scripts
      - run:
          name: Build
          command: npm run build
      - run:
          name: Install http-server
          command: npm install -g http-server
      - run:
          name: Serve React app
          command: |
            http-server ./dist/ -a 127.0.0.1 -p 3000 &
      # - run:
      #     name: Wait for server to start
      #     command: npx wait-on http-get:http://127.0.0.1:3000
      - run:
          name: cypress install
          command: npx cypress install
      - run:
          name: Run cypress tests
          command: npx cypress run

# Invoke jobs via workflows
# See: https://circleci.com/docs/2.0/configuration-reference/#workflows
workflows:
  e2e_test:
    jobs:
      - cypress
When I run http-server ./dist/ -a 127.0.0.1 -p 3000 in the foreground, I can see the server is starting:
Starting up http-server, serving ./dist/
http-server version: 14.1.1
http-server settings:
CORS: disabled
Cache: 3600 seconds
Connection Timeout: 120 seconds
Directory Listings: visible
AutoIndex: visible
Serve GZIP Files: false
Serve Brotli Files: false
Default File Extension: none
Available on:
http://127.0.0.1:3000
Hit CTRL-C to stop the server
When I run it in the background and the script reaches the tests, I get this:
[667:0213/085129.291037:ERROR:gpu_memory_buffer_support_x11.cc(44)] dri3 extension not supported.
Cypress could not verify that this server is running:
> http://127.0.0.1:3000
We are verifying this server because it has been configured as your baseUrl.
Cypress automatically waits until your server is accessible before running tests.
We will try connecting to it 3 more times...
We will try connecting to it 2 more times...
We will try connecting to it 1 more time...
Cypress failed to verify that your server is running.
Please start this server and then run Cypress again.
Exited with code exit status 1
CircleCI received exit code 1
I tried waiting for the server with npx wait-on http-get:http://127.0.0.1:3000, but it just stays waiting forever.
I had the same issue when I ran the React app with the webpack dev server. The app and the tests run without issues in my dev environment.
Any help would be greatly appreciated.
For anyone else facing this issue: I solved it by starting the server and running the tests in the same step. I did not know that every command runs in a separate process, as described in this post, so a background server started in one step is gone by the time the next step runs. The config changed to the following, thanks to ChatGPT suggesting the nohup command, which runs the specified command immune to hang-up signals so that it keeps running even after the shell session that started it ends:
jobs:
  cypress:
    working_directory: ~/project
    docker:
      - image: cypress/base:18.12.1
        environment:
          CYPRESS_baseUrl: http://127.0.0.1:3000
    steps:
      - checkout:
          path: ~/project
      - run:
          name: Install dependencies
          command: npm ci --ignore-scripts
      - run:
          name: Build
          command: npm run build
      - run:
          name: Install http-server
          command: npm install -g http-server
      - run:
          name: cypress install
          command: npx cypress install
      - run:
          name: Serve React app and run test
          command: |
            nohup http-server ./dist/ -a 127.0.0.1 -p 3000 > /dev/null 2>&1 &
            npx wait-on http://127.0.0.1:3000 && npx cypress run
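An alternative worth noting: CircleCI's run step also supports a background attribute, which keeps the process alive for the remaining steps of the same job, so the server and the tests could stay in separate steps. A sketch of that variant:

      - run:
          name: Serve React app
          # background: true keeps this process running while later steps execute
          background: true
          command: http-server ./dist/ -a 127.0.0.1 -p 3000
      - run:
          name: Run cypress tests
          command: npx wait-on http://127.0.0.1:3000 && npx cypress run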
# This is a basic workflow to help you get started with Actions
name: CI

# Controls when the action will run. Triggers the workflow on push or pull request
# events but only for the master branch
on:
  push:
    branches: [ master ]

# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
  # This workflow contains a single job called "build"
  build:
    # The type of runner that the job will run on
    runs-on: ubuntu-latest
    # Steps represent a sequence of tasks that will be executed as part of the job
    steps:
      # Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
      - uses: actions/checkout@v2
      - name: Setup Node.js environment
        uses: actions/setup-node@v2-beta
        with:
          node-version: 12
      - name: GitHub Action For Yarn
        uses: Borales/actions-yarn@v2.1.0
        with:
          cmd: install
      - name: Creating build
        run: npm run build
      - name: installing surge
        run: npm install -g surge
      - name: deploying on surge
        run: surge ./build http://boot_camppractice.surge.sh/ --token ${{ secrets.SURGE_TOKEN }}
I'm using a Bitbucket pipeline to deploy my React application.
Right now my pipeline looks like this:
image: node:10.15.3

pipelines:
  default:
    - step:
        name: Build
        script:
          - npm cache clean --force
          - rm -rf node_modules
          - npm install
          - CI=false npm run deploy-app
        artifacts: # defining build-artifact/ as an artifact
          - 'build-artifact/**'
    - step:
        name: Deploy
        script:
          - apt-get update
          - apt-get install ncftp
          - ncftpput -v -u "$USERNAME" -p "$PASSWORD" -R $SERVER build 'build-artifact/*'
          - echo Finished uploading build
It works really well like this, but the FTP upload takes about 8 minutes, which is way too long, because on the free plan of Bitbucket I can only use the Pipelines feature for 50 minutes per month.
It seems like the upload of every small file takes forever. That's why I thought that maybe uploading a single zip file would be much more performant.
So my question is: is it really faster? And how is it possible to zip the artifact, upload the zip to the server, and unzip it there?
Thanks for your help
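For what it's worth, a rough sketch of the zip idea, assuming the Build step above already produced build-artifact/ and that the server offers some way to extract the archive afterwards (plain FTP alone cannot unzip remotely, so an SSH session or a server-side hook would still be needed; the remote directory is a placeholder):

    - step:
        name: Deploy zipped build
        script:
          - apt-get update
          - apt-get install -y zip ncftp
          # Pack the whole artifact into one file to avoid per-file FTP overhead
          - zip -r build.zip build-artifact
          - ncftpput -v -u "$USERNAME" -p "$PASSWORD" $SERVER build build.zip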
In fact, you should consider using another tool to upload files, for example rsync, which has a couple of useful features, such as compression of the data. It also only uploads files that were changed since the previous upload, which will speed up the uploads as well. You can use the rsync-deploy pipe, for example:

script:
  - pipe: atlassian/rsync-deploy:0.3.2
    variables:
      USER: 'ec2-user'
      SERVER: '127.0.0.1'
      REMOTE_PATH: '/var/www/build/'
      LOCAL_PATH: 'build'
      EXTRA_ARGS: '-z'

Note the -z option passed via EXTRA_ARGS. This will enable data compression when transferring files.
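Under the hood this amounts to something like the following plain rsync invocation (user, host, and paths mirror the pipe variables above and are illustrative only):

# -a preserves permissions and timestamps, -z compresses data in transit
rsync -az build/ ec2-user@127.0.0.1:/var/www/build/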
I set up these settings in Bitbucket Pipelines. Everything works well, but it doesn't feel right that I commit the build on every run. When I skip that, it tells me that I need to commit first. Does anyone have a best practice or experience with this?
bitbucket-pipelines.yml
# Check our guides at https://confluence.atlassian.com/x/e8YWN for more examples.
# Only use spaces to indent your .yml configuration.
# -----
# You can specify a custom docker image from Docker Hub as your build environment.
pipelines:
  branches:
    production:
      - step:
          name: Build and deploy to FTP
          image: node:11.9.0
          caches:
            - node
          script:
            - npm install
            - npm run build
            - apt-get update
            - apt-get -qq install git-ftp
            - git add /build
            - git commit -m "Build"
            - git push
            - git ftp push --user $FTP_USERNAME --passwd $FTP_PASSWORD ftp://someurl.com/
            - git rm /build
            - git commit -m "Remove build"
            - git push
If I understand properly what you are asking, you are on the page that shows the example templates and you are pressing the "Commit file" button.
It is indeed a bit confusing what you should do here, but what you should actually do is have a file called bitbucket-pipelines.yml, containing your desired behavior, in the root of your repository; Pipelines will then do the job automatically based on the instructions in this file.
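As a starting point, a minimal bitbucket-pipelines.yml might look like this (the image and commands are placeholders to adapt to your project):

image: node:11.9.0

pipelines:
  default:
    - step:
        name: Build
        script:
          - npm install
          - npm run build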
I am trying to set up a bitbucket-pipelines.yml file to do the build and then deploy a React project. Here is my code:
image: node:10.15.1

pipelines:
  default: # Pipelines that are triggered manually via the Bitbucket GUI
    - step:
        name: Build
        script:
          - yarn
          - yarn build
    - step:
        name: Deploy
        script:
          - apt-get update
          - apt-get install ncftp
          - ncftpput -v -u "$FTP_USERNAME" -p "$FTP_PASSWORD" -R $FTP_HOST $FTP_SITE_ROOT_DEV build/*
          - echo Finished uploading /build files to $FTP_HOST$FTP_SITE_ROOT
I am getting the result:
+ ncftpput -v -u "$FTP_USERNAME" -p "$FTP_PASSWORD" -R $FTP_HOST $FTP_SITE_ROOT_DEV build/*
could not stat build/*: No such file or directory.
ncftpput build/*: no valid files were specified.
It says that there is no build file or directory, but yarn build does actually create the build folder (it runs react-scripts build).
From the Atlassian documentation:
Key concepts
A pipeline is made up of a set of steps.
Each step in your pipeline runs a separate Docker container. If you want, you can use different types of container for each step, by selecting different images.
So, when you try to upload the build in the Deploy step, it's not there, because you built it in another container.
To pass files between steps you have to use artifacts:
image: node:10.15.1

pipelines:
  default: # Pipelines that are triggered manually via the Bitbucket GUI
    - step:
        name: Build
        script:
          - yarn
          - yarn build
        artifacts: # defining build/ as an artifact
          - build/**
    - step:
        name: Deploy
        script:
          - apt-get update
          - apt-get install ncftp
          - ncftpput -v -u "$FTP_USERNAME" -p "$FTP_PASSWORD" -R $FTP_HOST $FTP_SITE_ROOT_DEV build/*
          - echo Finished uploading /build files to $FTP_HOST$FTP_SITE_ROOT