I set up these settings in Bitbucket Pipelines. Everything works, but it doesn't look good that every run adds a "Build" commit. If I don't commit the build, it tells me I need to commit it first. Does anyone have a best practice or experience with this?
bitbucket-pipelines.yml
# Check our guides at https://confluence.atlassian.com/x/e8YWN for more examples.
# Only use spaces to indent your .yml configuration.
# -----
# You can specify a custom docker image from Docker Hub as your build environment.
pipelines:
  branches:
    production:
      - step:
          name: Build and deploy to FTP
          image: node:11.9.0
          caches:
            - node
          script:
            - npm install
            - npm run build
            - apt-get update
            - apt-get -qq install git-ftp
            - git add /build
            - git commit -m "Build"
            - git push
            - git ftp push --user $FTP_USERNAME --passwd $FTP_PASSWORD ftp://someurl.com/
            - git rm /build
            - git commit -m "Remove build"
            - git push
If I understand properly what you are asking, you are on the page that shows the examples of templates and you are pressing the button "Commit file".
It is admittedly confusing, but what you should do is keep a file called bitbucket-pipelines.yml describing your desired behaviour in the root of your repository; Pipelines will then run automatically based on the instructions in that file.
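For example, a minimal bitbucket-pipelines.yml could look like the sketch below (the image and the script commands are placeholders; adjust them to your project):
image: node:11.9.0
pipelines:
  default:
    - step:
        name: Build
        script:
          # Install dependencies and build the app (placeholder commands)
          - npm install
          - npm run build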
I'm currently trying to build a React app within a GitHub Actions workflow and deploy it to an EC2 instance.
The problem is that I can't seem to grab the /build folder that gets created during the action.
This is what I have currently:
# This is a basic workflow to help you get started with Actions
name: Deploy to staging
# Controls when the workflow will run
on:
  # Triggers the workflow on push or pull request events but only for the master branch
  push:
    branches: [master]
  # Allows you to run this workflow manually from the Actions tab
  # workflow_dispatch:
# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
  # This workflow contains a single job called "build"
  build:
    # The type of runner that the job will run on
    runs-on: ubuntu-latest
    # Steps represent a sequence of tasks that will be executed as part of the job
    steps:
      # Step 1
      # Checks out your repository under $GITHUB_WORKSPACE, so your job can access it
      - uses: actions/checkout@v2
      # Step 2: npm install and build
      - name: npm install for Server
        run: |
          npm ci
      - name: npm install in client DIR and build
        run: |
          cd client
          npm ci
          npm run build
          echo 'CLIENT DONE'
          ls
        env:
          CI: false
  # Deploy
  deploy:
    needs: build
    runs-on: ubuntu-latest
    if: github.ref == 'refs/heads/master'
    steps:
      # Step 1
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-1
      # Step 2
      - name: Create CodeDeploy Deployment
        id: deploy
        run: |
          aws deploy create-deployment \
            --application-name "app-name" \
            --deployment-group-name "deply-app-ec2" \
            --deployment-config-name CodeDeployDefault.OneAtATime \
            --github-location repository=${{ github.repository }},commitId=${{ github.sha }}
This is somewhat expected, since the --github-location repository=${{ github.repository }},commitId=${{ github.sha }} argument tells CodeDeploy to look at the SHA of the latest commit pushed to master.
Is there a way to deploy the working copy the GitHub Action has built (where the node_modules and build folders exist)?
What I tried:
Run npm run build on EC2 using a CodeDeploy script -> this was killing the server since our resources are very limited.
Make a separate commit to master during CI and grab that commit's SHA, e.g.:
- name: Create commit
  run: |
    git config --global user.name 'name'
    git config --global user.email 'email'
    git add .
    git commit -m "Build client for deploying"
- name: Set latest commit id as var
  id: vars
  shell: bash
  run: |
    echo "::set-output name=sha_short::$(git rev-parse HEAD)"
    echo "${{ steps.vars.outputs.sha_short }}"
then use commitId=${{ steps.vars.outputs.sha_short }}. But the CodeDeploy run results in a 404 saying it couldn't find the commit (also expected, since the commit was never pushed to master).
Am I missing something, or is building through GitHub Actions and then deploying to EC2 with CodeDeploy currently not possible?
Resolved by uploading the build folder onto EC2 using SCP during the workflow run.
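For reference, a minimal sketch of such a workflow step is shown below. The secret names (EC2_SSH_KEY, EC2_USER, EC2_HOST) and the paths are assumptions for illustration, not taken from the original workflow:
# Hypothetical deploy step: copy the client build output to the EC2 instance over SCP
- name: Upload build folder via SCP
  run: |
    # Write the private deploy key from repository secrets and lock down its permissions
    echo "${{ secrets.EC2_SSH_KEY }}" > deploy_key
    chmod 600 deploy_key
    # Recursively copy the freshly built client (source and destination paths are placeholders)
    scp -i deploy_key -o StrictHostKeyChecking=no -r client/build "${{ secrets.EC2_USER }}@${{ secrets.EC2_HOST }}:/var/www/app"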
I have been trying to deploy a create-react-app project, but got an error:
fatal: repository 'https://github.com/charyyev2000/Portfolio-React.git/' not found
Then I created a token and tried to deploy with that:
git remote add origin https://<TOKEN>@github.com/charyyev2000/Portfolio-React.git
but again got an error.
I deleted the repository, created it again, and tried to deploy, but again got the same error:
fatal: repository 'https://github.com/charyyev2000/Portfolio-React.git/' not found
What did I do wrong? Or is there something wrong with the homepage field in my package.json?
"homepage": "https://charyyev2000.github.io/Portfolio-React",
I can't deploy from any of my other repositories either.
What should I do now?
You are a newcomer to Git/GitHub, but I hope you can get this task done.
Your remote repository URL is https://github.com/charyyev2000/Portfolio-React
Solution 1:
Create a new repository on the command line
echo "# Portfolio-React" >> README.md
git init
git add README.md
git commit -m "first commit"
git branch -M main
git remote add origin https://github.com/charyyev2000/Portfolio-React.git
git push -u origin main
Solution 2 (I think you will prefer this one, because your source code already exists):
Push an existing repository from the command line
git remote add origin https://github.com/charyyev2000/Portfolio-React.git
git branch -M main
git push -u origin main
Follow the guide GitHub shows you; it looks like the steps above.
P.S.: Sometimes you need:
git add .
git commit -m"foo"
git push -v
or, if you see any guidance printed on your console screen, follow what it says.
Your repository is probably private.
Try this: git remote set-url origin https://YOUR_GITHUB_USER@github.com/charyyev2000/Portfolio-React.git
Or use an ssh key.
https://docs.github.com/en/github/authenticating-to-github/connecting-to-github-with-ssh/generating-a-new-ssh-key-and-adding-it-to-the-ssh-agent
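If you go the SSH route, the usual steps look roughly like this sketch (the key type, file path, and email are placeholders):
# Generate a new SSH key pair (press Enter to accept the default file path)
ssh-keygen -t ed25519 -C "your_email@example.com"
# Start the ssh-agent and load the key
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_ed25519
# Add the contents of ~/.ssh/id_ed25519.pub to GitHub (Settings > SSH and GPG keys),
# then point the remote at the SSH URL instead of HTTPS
git remote set-url origin git@github.com:charyyev2000/Portfolio-React.git
git push -u origin main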
I have a Docker Compose environment that has been behaving very inconsistently from run to run.
Here is the setup:
docker-compose.prod.yaml
front_end:
  image: front-end-build
  build:
    context: ./front_end
    dockerfile: front_end.build.dockerfile
nginx:
  build:
    context: ./front_end
    dockerfile: front_end.prod.dockerfile
  ports:
    - 80:80
    - 5000:5000
  environment:
    - CHOKIDAR_USEPOLLING=true
  stdin_open: true
  tty: true
  depends_on:
    - front_end
front_end.build.dockerfile
FROM node:13.12.0-alpine
COPY package.json ./
WORKDIR /srv
RUN yarn install
RUN yarn global add react-scripts
COPY . /srv
RUN yarn build
front_end.prod.dockerfile
FROM nginx
EXPOSE 80
COPY --from=front-end-build /app/build /usr/share/nginx/html
COPY nginx.conf /etc/nginx/conf.d
command:
docker-compose down && docker-compose -f docker-compose.prod.yml up --build --remove-orphans nginx
It doesn't work, for various reasons on various runs.
After various errors, I'm starting with a docker system prune, which at least "resets" the problems to some starting state.
Various problems include:
yarn install says info There appears to be trouble with your network connection. Retrying... but then proceeds to continue, spitting out various deprecation/incompatibility warnings, and finally getting to "Done".
Following this, it usually takes maybe 60+ seconds to even show "Removing intermediate container" and move on to the next step in the dockerfile.
Sometimes the network error will be all I get, and then yarn install will fail which halts the whole process.
yarn install might not show that network error, but show its various warnings between "Resolving packages" and "Fetching packages", which doesn't seem to make sense although this might be normal.
yarn install might, at any point in this process (including after install is done, during install, or even during yarn build), report that we're out of space: error An unexpected error occurred: "ENOSPC: no space left on device, mkdir '/node_modules/fast-glob/package/out/providers/filters'". or something similar.
The farthest we might get is, in yarn build:
There might be a problem with the project dependency tree.
It is likely not a bug in Create React App, but something you need to fix locally.
The react-scripts package provided by Create React App requires a dependency:
"webpack-dev-server": "3.11.0"
Don't try to install it manually: your package manager does it automatically.
However, a different version of webpack-dev-server was detected higher up in the tree:
/node_modules/webpack-dev-server (version: 3.10.3)
Manually installing incompatible versions is known to cause hard-to-debug issues.
If you would prefer to ignore this check, add SKIP_PREFLIGHT_CHECK=true to an .env file in your project.
That will permanently disable this message but you might encounter other issues.
To fix the dependency tree, try following the steps below in the exact order:
1. Delete package-lock.json (not package.json!) and/or yarn.lock in your project folder.
2. Delete node_modules in your project folder.
3. Remove "webpack-dev-server" from dependencies and/or devDependencies in the package.json file in your project folder.
4. Run npm install or yarn, depending on the package manager you use.
In most cases, this should be enough to fix the problem.
If this has not helped, there are a few other things you can try:
5. If you used npm, install yarn (http://yarnpkg.com/) and repeat the above steps with it instead.
This may help because npm has known issues with package hoisting which may get resolved in future versions.
6. Check if /node_modules/webpack-dev-server is outside your project directory.
For example, you might have accidentally installed something in your home folder.
7. Try running npm ls webpack-dev-server in your project folder.
This will tell you which other package (apart from the expected react-scripts) installed webpack-dev-server.
If nothing else helps, add SKIP_PREFLIGHT_CHECK=true to an .env file in your project.
That would permanently disable this preflight check in case you want to proceed anyway.
P.S. We know this message is long but please read the steps above :-) We hope you find them helpful!
error Command failed with exit code 1.
webpack-dev-server does not actually appear anywhere in my package.json file so there's nothing for me to change there, but otherwise I've tried those 4 steps. And then the next time I run I get the "no space left" error.
I'll also say, almost separately from this, that there have been times when, for some reason, it goes through all the steps but produces no output whatsoever for yarn build, not even "Using cache". And this, of course, makes the nginx container fail as it tries to copy the build files. Or something like that; honestly it's been a while. What does happen when we move on to nginx is that it says "Building nginx" for an absurd amount of time, several minutes, before it even gets to the first step in the nginx dockerfile.
But the front-end build problem is big enough that the nginx delay is basically a separate issue.
Has anyone experienced (and solved!) anything similar to what I'm experiencing?
What you're attempting here is a multi-stage build in the old style used before Docker 17.05.
The prod Dockerfile depends on the front-end-build image; that's why you see "Building nginx" until that image is ready.
You can condense both dockerfiles into one now.
Dockerfile
FROM node:13.12.0-alpine AS front-end-build
WORKDIR /srv
COPY package.json ./
RUN yarn install
RUN yarn global add react-scripts
COPY . /srv
RUN yarn build
FROM nginx
EXPOSE 80
# The first stage builds under WORKDIR /srv, so the output lands in /srv/build
COPY --from=front-end-build /srv/build /usr/share/nginx/html
COPY nginx.conf /etc/nginx/conf.d
docker-compose.yml
nginx:
  build: front_end/
  ports:
    - 80:80
    - 5000:5000
  environment:
    - CHOKIDAR_USEPOLLING=true
  stdin_open: true
  tty: true
Regarding the weird behaviour: in your original build Dockerfile you copy package.json to /package.json, and only afterwards switch the WORKDIR to /srv (which is empty) before running yarn install.
Try moving the COPY order after the WORKDIR:
WORKDIR /srv
COPY package.json ./
https://docs.docker.com/develop/develop-images/multistage-build/
I'm using a bitbucket pipeline to deploy my react application.
Right now my pipeline looks like this:
image: node:10.15.3
pipelines:
  default:
    - step:
        name: Build
        script:
          - npm cache clean --force
          - rm -rf node_modules
          - npm install
          - CI=false npm run deploy-app
        artifacts: # defining build/ as an artifact
          - 'build-artifact/**'
    - step:
        name: Deploy
        script:
          - apt-get update
          - apt-get install ncftp
          - ncftpput -v -u "$USERNAME" -p "$PASSWORD" -R $SERVER build 'build-artifact/*'
          - echo Finished uploading build
It works really well like this, but the FTP upload takes about 8 minutes, which is way too long because with the free plan of Bitbucket I can only use the Pipelines feature for 50 minutes per month.
It seems like the upload of every small file takes forever. That's why I thought that uploading a single zip file might be much faster.
So my question is: would it really be faster? And how is it possible to zip the artifact, upload the zip to the server, and unzip it there?
Thanks for your help
Consider using another tool to upload your files, for example rsync, which has a couple of useful features such as data compression. It also only uploads files that have changed since the previous upload, which speeds things up as well. You can use the rsync-deploy pipe, for example:
script:
  - pipe: atlassian/rsync-deploy:0.3.2
    variables:
      USER: 'ec2-user'
      SERVER: '127.0.0.1'
      REMOTE_PATH: '/var/www/build/'
      LOCAL_PATH: 'build'
      EXTRA_ARGS: '-z'
Note the -z option passed via EXTRA_ARGS. This enables data compression when transferring files.
I am trying to set up a bitbucket-pipelines.yml file to build and then deploy a React project. Here is my code:
image: node:10.15.1
pipelines:
  default: # Pipelines that are triggered manually via the Bitbucket GUI
    - step:
        name: Build
        script:
          - yarn
          - yarn build
    - step:
        name: Deploy
        script:
          - apt-get update
          - apt-get install ncftp
          - ncftpput -v -u "$FTP_USERNAME" -p "$FTP_PASSWORD" -R $FTP_HOST $FTP_SITE_ROOT_DEV build/*
          - echo Finished uploading /build files to $FTP_HOST$FTP_SITE_ROOT
I am getting the result:
+ ncftpput -v -u "$FTP_USERNAME" -p "$FTP_PASSWORD" -R $FTP_HOST $FTP_SITE_ROOT_DEV build/*
could not stat build/*: No such file or directory.
ncftpput build/*: no valid files were specified.
It says there is no build file or directory, but yarn build does actually create the build folder (it runs react-scripts build).
From the Atlassian documentation:
Key concepts
A pipeline is made up of a set of steps.
Each step in your pipeline runs a separate Docker container. If you want, you can use different types of container for each step, by selecting different images.
So when you try to upload it in the Deploy step, it's not there, because you built it in another container.
To pass files between steps you have to use artifacts:
image: node:10.15.1
pipelines:
  default: # Pipelines that are triggered manually via the Bitbucket GUI
    - step:
        name: Build
        script:
          - yarn
          - yarn build
        artifacts: # defining build/ as an artifact
          - build/**
    - step:
        name: Deploy
        script:
          - apt-get update
          - apt-get install ncftp
          - ncftpput -v -u "$FTP_USERNAME" -p "$FTP_PASSWORD" -R $FTP_HOST $FTP_SITE_ROOT_DEV build/*
          - echo Finished uploading /build files to $FTP_HOST$FTP_SITE_ROOT