How to deploy a React project to FTP using Bitbucket Pipelines?

I am trying to set up a bitbucket-pipelines.yml file to build and then deploy a React project. Here is my code:
image: node:10.15.1

pipelines:
  default: # Pipelines that are triggered manually via the Bitbucket GUI
    - step:
        name: Build
        script:
          - yarn
          - yarn build
    - step:
        name: Deploy
        script:
          - apt-get update
          - apt-get install ncftp
          - ncftpput -v -u "$FTP_USERNAME" -p "$FTP_PASSWORD" -R $FTP_HOST $FTP_SITE_ROOT_DEV build/*
          - echo Finished uploading /build files to $FTP_HOST$FTP_SITE_ROOT
I am getting the result:
+ ncftpput -v -u "$FTP_USERNAME" -p "$FTP_PASSWORD" -R $FTP_HOST $FTP_SITE_ROOT_DEV build/*
could not stat build/*: No such file or directory.
ncftpput build/*: no valid files were specified.
It says that there is no build file or directory, but yarn build does create a build folder (it runs react-scripts build).

From the Atlassian documentation:
Key concepts
A pipeline is made up of a set of steps.
Each step in your pipeline runs a separate Docker container. If you want, you can use different types of container for each step, by selecting different images.
So when you try to upload the build in the Deploy step, it's not there, because it was built in another container.
To pass files between steps, you have to use artifacts:
image: node:10.15.1

pipelines:
  default: # Pipelines that are triggered manually via the Bitbucket GUI
    - step:
        name: Build
        script:
          - yarn
          - yarn build
        artifacts: # defining build/ as an artifact
          - build/**
    - step:
        name: Deploy
        script:
          - apt-get update
          - apt-get install ncftp
          - ncftpput -v -u "$FTP_USERNAME" -p "$FTP_PASSWORD" -R $FTP_HOST $FTP_SITE_ROOT_DEV build/*
          - echo Finished uploading /build files to $FTP_HOST$FTP_SITE_ROOT

Related

Deploy Docker image of React app to Elastic Beanstalk

I am trying to deploy my create-react-app to Elastic Beanstalk with Docker. I have set up CodePipeline with CodeBuild and Elastic Beanstalk, and I am getting this error:
Stop running the command. Error: Dockerfile and Dockerrun.aws.json are both missing, abort deployment
My Dockerfile looks like this:
FROM tiangolo/node-frontend:10 as build-stage
# Create app directory
# RUN mkdir -p /usr/src/app
# WORKDIR /usr/src/app
WORKDIR /app
# # fix npm private module
# ARG NPM_TOKEN
# COPY .npmrc /app/
#COPY package.json package.json
COPY package*.json /app/
COPY Dockerrun.aws.json /app/
RUN npm install
COPY ./ /app/
# RUN CI=true npm test
RUN npm run build
# FROM nginx:1.15
FROM nginx:1.13.3-alpine
# Install app dependencies
# Stage 1, based on Nginx, to have only the compiled app, ready for production with Nginx
COPY --from=build-stage /app/build/ /usr/share/nginx/html
# Copy the default nginx.conf provided by tiangolo/node-frontend
COPY --from=build-stage /nginx.conf /etc/nginx/conf.d/default.conf
RUN ls
EXPOSE 80
I also have a Dockerrun.aws.json:
{
  "AWSEBDockerrunVersion": "3",
  "Image": {
    "Name": "something.dkr.ecr.us-east-2.amazonaws.com/subscribili:latest",
    "Update": "true"
  },
  "Ports": [
    {
      "ContainerPort": "5000"
    }
  ],
  "Logging": "/var/log/nginx"
}
My buildspec.yml file looks like this:
version: 0.2

phases:
  pre_build:
    commands:
      - $(aws ecr get-login --region $AWS_DEFAULT_REGION --no-include-email)
      - REPOSITORY_URI=something.dkr.ecr.us-east-2.amazonaws.com/subscribili
      - COMMIT_HASH=$(echo $CODEBUILD_RESOLVED_SOURCE_VERSION | cut -c 1-7)
      - IMAGE_TAG=${COMMIT_HASH:=latest}
  build:
    commands:
      - docker build -t $REPOSITORY_URI:latest .
      - docker tag $REPOSITORY_URI:latest $REPOSITORY_URI:$IMAGE_TAG
  post_build:
    commands:
      - docker push $REPOSITORY_URI:latest
      - docker push $REPOSITORY_URI:$IMAGE_TAG
      - printf '[{"name":"nginx","imageUri":"%s"}]' $REPOSITORY_URI:$IMAGE_TAG > imagedefinitions.json
artifacts:
  files: imagedefinitions.json
I am sure there is some issue with the buildspec file, but I am just not sure what. I have read all the documentation and still couldn't figure out how to write the buildspec file for Docker. Is there anything I am missing?
Dockerfile and Dockerrun.aws.json both need to be in the directory where the COPY Dockerrun.aws.json /app/ command runs, i.e. the build context. Make sure these files exist in that directory and this error should disappear.
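In other words, both files should sit at the top of the build context; a hypothetical listing of the project root would look something like this:
$ ls
Dockerfile  Dockerrun.aws.json  buildspec.yml  package.json  src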
"eb deploy" command creates a zip file from your code. However, to make it as small as possible, it only takes the file that are commited to git. So, if you did not commit Dockerfile and Dockerrun file, these two files won't be included in the zip.
If you do not want it to behave like this, you can add .ebignore file to your projects root directory. This files commands are the same as the gitignore file; you can copy everything from gitignore to ebignore. If there is a .ebignore, cli will not check if the project is commited to a source control.
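For example, a minimal sketch of seeding a .ebignore from an existing .gitignore (the grep check is just an illustration):
# start from the same rules git uses
cp .gitignore .ebignore
# Dockerfile and Dockerrun.aws.json must end up in the zip,
# so confirm no rule in .ebignore excludes them
grep -n 'Docker' .ebignore || echo 'no Docker-related rules; the files will be included'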
Now, to check what is included in the zip file, watch the .elasticbeanstalk folder after the eb deploy command. When the zip is prepared, copy it immediately and paste it into another folder. Note: the original zip file is removed after the CLI uploads it.
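A quick way to inspect that archive, assuming it lands as a .zip under .elasticbeanstalk/ as described above (the app-*.zip glob is an assumption; match whatever zip actually appears there):
# copy the zip out before the CLI deletes it, then list its contents
mkdir -p /tmp/deploy-check
cp .elasticbeanstalk/app-*.zip /tmp/deploy-check/
unzip -l /tmp/deploy-check/*.zip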

AppVeyor not running the build script

Hello, my build script isn't creating a build, for a reason I do not know. The package.json has the correct script, which is:
"build": "npm run silentrenew && react-scripts --max_old_space_size=8192 build",
I have double-checked my YML file and all the tags:
version: '1.0.{build}'
image: Ubuntu
init:
  - cmd: set NODE_OPTIONS=--max-old-space-size=8192
environment:
  REACT_APP_VSA_URL: >-
    https://xzc-e-n-vsa0000-d-api-02.xzc-e-n-snt-06-ut-ase-01.p.azurewebsites.net
  REACT_APP_NOTIFICATIONS_API_SECRET: d8015bf6cab64573b2d7c17bac94bed4
  REACT_APP_EVENT_LOG_SECRET: 3431cec7ecbb42bba1957934c751f02d
install:
  - cmd: npm ci --ignore-scripts
build_script:
  - cmd: |-
      npm --no-git-tag-version version "%APPVEYOR_BUILD_VERSION%"
      npm run build
test_script:
  - cmd: 'npm run test:ci'
artifacts:
  - path: ./build
    name: dpe
deploy:
  - provider: Environment
    name: dpe-dev
    'on':
      branch:
        - internal
        - tablet
on_finish:
  - pwsh: >-
      # upload results to AppVeyor
      $wc = New-Object 'System.Net.WebClient'
      $wc.UploadFile("https://ci.appveyor.com/api/testresults/junit/$($env:APPVEYOR_JOB_ID)", (Resolve-Path .\coverage\junit\junit.xml))
      # upload coverage results to CodeCov
      $env:PATH = 'C:\msys64\usr\bin;' + $env:PATH
      Invoke-WebRequest -Uri 'https://codecov.io/bash' -OutFile codecov.sh
      bash codecov.sh -s "./coverage/jest/"
This is the exact behavior I'm getting in AppVeyor: since the build isn't being created, the tests aren't run, and yet it reports the run as successful.
On Linux images AppVeyor ignores cmd:-prefixed commands, so for Linux builds the prefix must be sh: or no prefix at all:
build_script:
  - sh: |-
      npm --no-git-tag-version version "$APPVEYOR_BUILD_VERSION"
      npm run build
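Note that the install and test entries in the question's YAML use the cmd: prefix too, so on the Ubuntu image they would be skipped for the same reason. A sketch of those entries adjusted, with the rest of the file unchanged:
install:
  - sh: npm ci --ignore-scripts
test_script:
  - sh: 'npm run test:ci'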

How to deploy a React app to an Ubuntu server with Bitbucket Pipelines

I want to build and deploy my React app from my master branch. I have managed to automate the build, but I am unable to transfer it to my server. Find my pipeline code below; I receive the error shown after it.
pipelines:
  default:
    - step:
        name: Build Title
        script:
          - npm install
          - npm run build
          - mkdir packaged
          - tar -czvf packaged/package-${BITBUCKET_BUILD_NUMBER}.tar.gz -C build .
        artifacts:
          - packaged/**
    - step:
        name: Deploy to Web
        image: alpine
        trigger: manual
        deployment: production
        script:
          - mkdir upload
          - tar -xf packaged/package-${BITBUCKET_BUILD_NUMBER}.tar.gz -C upload
          - apk update && apk add openssh rsync
          - rsync -a -e "ssh -o StrictHostKeyChecking=no" --delete upload/ $USERNAME@$SERVER:html/temp/react-${BITBUCKET_BUILD_NUMBER}
          - ssh -o StrictHostKeyChecking=no $USERNAME@$SERVER "rm -r html/www"
          - ssh -o StrictHostKeyChecking=no $USERNAME@$SERVER "mv 'html/temp/react-${BITBUCKET_BUILD_NUMBER}' 'var/www/html/deploy'"
          - ssh -o StrictHostKeyChecking=no $USERNAME@$SERVER "chmod -R u+rwX,go+rX,go-w html/www"
Error Log
+ rsync -a -e "ssh -o StrictHostKeyChecking=no" --delete upload/ $USERNAME@$SERVER:html/temp/react-${BITBUCKET_BUILD_NUMBER}
load pubkey "/opt/atlassian/pipelines/agent/ssh/id_rsa": invalid format
rsync: mkdir "/$USERNAME/html/temp/react-15" failed: No such file or directory (2)
rsync error: error in file IO (code 11) at main.c(675) [Receiver=3.1.2]
I noticed this happens only on Alpine-based images; Debian images, for example, work fine. It also happens on Buddy, not just on Bitbucket. I expect this is an upstream Alpine bug/issue.
I was using that same script as well. Below is what ended up working for me after a lot of banging my head against the screen; updating the image and adding the upload artifacts seemed to be the kicker.
default:
  - step:
      name: Build React Project
      script:
        - npm install
        - npm run-script build
        - mkdir packaged
        - tar -czvf packaged/package-${BITBUCKET_BUILD_NUMBER}.tar.gz -C build .
      artifacts:
        - packaged/**
  - step:
      name: Deploy to Web
      image: atlassian/default-image:latest
      trigger: manual
      deployment: production
      script:
        - mkdir upload
        - tar -xf packaged/package-${BITBUCKET_BUILD_NUMBER}.tar.gz -C upload
        - rsync -a --delete upload/ $USERNAME@$SERVER:/home/temp/react-${BITBUCKET_BUILD_NUMBER}
        - ssh $USERNAME@$SERVER "rm -r /home/www"
        - ssh $USERNAME@$SERVER "mv '/home/temp/react-${BITBUCKET_BUILD_NUMBER}' '/home/www'"
        - ssh $USERNAME@$SERVER "chmod -R u+rwX,go+rX,go-w /home/www"
      artifacts:
        - upload/**

Deploying React App via FTP from Bitbucket to my server

I set these settings in Pipelines in Bitbucket. Everything works well, but it doesn't look good that I commit the build every time. But when I don't, it tells me that I need to commit for the first time. Does anyone have a best practice or experience with this?
bitbucket-pipelines.yml
# Check our guides at https://confluence.atlassian.com/x/e8YWN for more examples.
# Only use spaces to indent your .yml configuration.
# -----
# You can specify a custom docker image from Docker Hub as your build environment.
pipelines:
  branches:
    production:
      - step:
          name: Build and deploy to FTP
          image: node:11.9.0
          caches:
            - node
          script:
            - npm install
            - npm run build
            - apt-get update
            - apt-get -qq install git-ftp
            - git add /build
            - git commit -m "Build"
            - git push
            - git ftp push --user $FTP_USERNAME --passwd $FTP_PASSWORD ftp://someurl.com/
            - git rm /build
            - git commit -m "Remove build"
            - git push
If I understand properly what you are asking: you are on the page that shows the example templates, and you are pressing the "Commit file" button.
It is indeed kind of confusing what you should do here, but what you should actually do is have a file called bitbucket-pipelines.yml, containing your desired behavior, in the root of your repository; Pipelines will then do the job automatically based on the instructions in this file.
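A minimal sketch of adding that file from a local clone instead of through the GUI (the branch name is taken from the question's config):
git add bitbucket-pipelines.yml
git commit -m "Add Pipelines configuration"
git push origin production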

Reusable docker image for AngularJS

We have an AngularJS application. We wrote a Dockerfile for it so it's reusable on every system. The Dockerfile isn't best practice, and its setup may look weird to some (build and hosting in the same file), but it was created just to run our AngularJS app locally on each developer's PC.
Dockerfile:
FROM nginx:1.10
# ... steps to install nodejs-legacy + npm
RUN npm install -g gulp
RUN npm install
RUN gulp build
# ... steps to move dist folder
We build our image with docker build -t myapp:latest .
Every developer is able to run our app with docker run -d -p 80:80 myapp:latest
But now we're developing other backends, so we have a backend in DEV, a backend in UAT, and so on. So there are different URLs which we need to use in /config/xx.json:
{
  ...
  "service_base": "https://backend.test.xxx/",
  ...
}
We don't want to change that URL, rebuild the image and start it every time. We also don't want to declare a fixed set of URLs (dev, uat, prod, ...) to pick from there. We want to perform our gulp build with an environment variable instead of a hardcoded URL, so we can start our container like this:
docker run -d -p 80:80 --env URL=https://mybackendurl.com app:latest
Does anyone have experience with this kind of issue? We would need an env variable in our JSON, build with it, and add the URL later on, if that's possible.
EDIT: The better option is to use build args. Instead of passing the URL in the docker run command, you can use Docker build args; it is better for build-related commands to be executed during docker build than during docker run.
In your Dockerfile:
ARG URL
And then run:
docker build --build-arg URL=<my-url> .
See this Stack Overflow question for details.
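As a sketch of how the argument could reach the gulp build in this Dockerfile (the ENV line, and the assumption that the build reads URL from the environment, are illustrative rather than part of the answer):
ARG URL
# persist the build arg as an environment variable so the build tooling can read it
ENV URL=$URL
RUN gulp build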
This was my 'solution'. I know it isn't the best Docker approach, but it was a big help just for our developers.
My Dockerfile looks like this:
FROM nginx:1.10
RUN apt-get update && \
apt-get install -y curl
RUN sed -i "s/httpredir.debian.org/`curl -s -D - http://httpredir.debian.org/demo/debian/ | awk '/^Link:/ { print $2 }' | sed -e 's#<http://\(.*\)/debian/>;#\1#g'`/" /etc/apt/sources.list
RUN \
apt-get clean && \
apt-get update && \
apt-get install -y nodejs-legacy && \
apt-get install -y npm
WORKDIR /home/app
COPY . /home/app
RUN npm install -g gulp
RUN npm install
COPY start.sh /
CMD ["./start.sh"]
So after copying in the whole app and installing npm inside my nginx image, I start my container with the start.sh script.
The content of start.sh:
#!/bin/bash
sed -i 's#my-url#'"$DATA_ACCESS_URL"'#' configs/config.json
gulp build
rm -r /usr/share/nginx/html/
# cp the right folders which are created by gulp build to /usr/share/nginx/html
...
# start the nginx process in the foreground
/usr/sbin/nginx -g "daemon off;"
So the build happens when my container starts. Not the best way, of course, but it's all for the needs of the developers: an easy local frontend.
The sed command performs a replace on the config file, which contains something like:
{
  "service_base": "my-url",
}
So my-url is replaced by the content of the environment variable that I define in my docker run command. Then I'm able to run:
docker run -d -p 80:80 -e DATA_ACCESS_URL=https://mybackendurl.com app:latest
And every developer can use the frontend locally and connect with their own backend URL.
