I want to build and deploy my React app from my master branch. I have managed to automate the build, but I am unable to transfer it to my server. My pipeline code is below, followed by the error I receive.
pipelines:
  default:
    - step:
        name: Build Title
        script:
          - npm install
          - npm run build
          - mkdir packaged
          - tar -czvf packaged/package-${BITBUCKET_BUILD_NUMBER}.tar.gz -C build .
        artifacts:
          - packaged/**
    - step:
        name: Deploy to Web
        image: alpine
        trigger: manual
        deployment: production
        script:
          - mkdir upload
          - tar -xf packaged/package-${BITBUCKET_BUILD_NUMBER}.tar.gz -C upload
          - apk update && apk add openssh rsync
          - rsync -a -e "ssh -o StrictHostKeyChecking=no" --delete upload/ $USERNAME@$SERVER:html/temp/react-${BITBUCKET_BUILD_NUMBER}
          - ssh -o StrictHostKeyChecking=no $USERNAME@$SERVER "rm -r html/www"
          - ssh -o StrictHostKeyChecking=no $USERNAME@$SERVER "mv 'html/temp/react-${BITBUCKET_BUILD_NUMBER}' 'var/www/html/deploy'"
          - ssh -o StrictHostKeyChecking=no $USERNAME@$SERVER "chmod -R u+rwX,go+rX,go-w html/www"
Error log:
+ rsync -a -e "ssh -o StrictHostKeyChecking=no" --delete upload/ $USERNAME@$SERVER:html/temp/react-${BITBUCKET_BUILD_NUMBER}
load pubkey "/opt/atlassian/pipelines/agent/ssh/id_rsa": invalid format
rsync: mkdir "/$USERNAME/html/temp/react-15" failed: No such file or directory (2)
rsync error: error in file IO (code 11) at main.c(675) [Receiver=3.1.2]
I noticed this happens only on Alpine-based images. Debian images, for example, work fine. It also happens on Buddy, not just on Bitbucket, so I expect this is an upstream Alpine bug/issue.
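Separately, the `rsync: mkdir ... failed: No such file or directory` line usually means the remote parent path (`html/temp` here) does not exist yet: rsync creates the final destination directory, but not missing parent directories. A hedged fix, assuming that layout, is to create the parents over ssh before syncing:

```yaml
# Hypothetical extra line before the rsync in the deploy step:
- ssh -o StrictHostKeyChecking=no $USERNAME@$SERVER "mkdir -p html/temp"
```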
I was using that same script as well; below is what ended up working for me after a lot of banging my head against the screen. Updating the image and adding upload artifacts seemed to be the kicker.
default:
  - step:
      name: Build React Project
      script:
        - npm install
        - npm run-script build
        - mkdir packaged
        - tar -czvf packaged/package-${BITBUCKET_BUILD_NUMBER}.tar.gz -C build .
      artifacts:
        - packaged/**
  - step:
      name: Deploy to Web
      image: atlassian/default-image:latest
      trigger: manual
      deployment: production
      script:
        - mkdir upload
        - tar -xf packaged/package-${BITBUCKET_BUILD_NUMBER}.tar.gz -C upload
        - rsync -a --delete upload/ $USERNAME@$SERVER:/home/temp/react-${BITBUCKET_BUILD_NUMBER}
        - ssh $USERNAME@$SERVER "rm -r /home/www"
        - ssh $USERNAME@$SERVER "mv '/home/temp/react-${BITBUCKET_BUILD_NUMBER}' '/home/www'"
        - ssh $USERNAME@$SERVER "chmod -R u+rwX,go+rX,go-w /home/www"
      artifacts:
        - upload/**
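The package/unpack pair in those two steps can be exercised locally to confirm the tar flags round-trip the build output (the file contents here are stand-ins for a real CRA build):

```shell
set -e
cd "$(mktemp -d)"
mkdir build packaged upload
echo '<!doctype html>' > build/index.html   # stand-in for the compiled app
# Same flags as the Build step: archive the contents of build/ ...
tar -czvf packaged/package-1.tar.gz -C build .
# ... and, as in the Deploy step, unpack into the staging directory
tar -xf packaged/package-1.tar.gz -C upload
cat upload/index.html
```

The `-C build .` is what keeps the archive free of a leading `build/` prefix, so the extracted tree lands directly in `upload/`.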
Starting from my last commit yesterday, I have been encountering a strange problem where GitLab CI continuously fails with the error shown below:
$ ssh -i $SSH_PRIVATE_KEY -o StrictHostKeyChecking=no $SERVER_USER@$SERVER_IP "docker run -d -p 8010:80 --name my_project $TAG_LATEST"
docker: Error response from daemon: No command specified.
See 'docker run --help'.
Cleaning up project directory and file based variables
ERROR: Job failed: exit code 125
This is a React app which should be built and deployed to my server.
This is my Dockerfile:
FROM node:16.14.0-alpine as build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
FROM nginx:1.19-alpine
COPY --from=build /app/build /usr/share/nginx/html
RUN rm /etc/nginx/conf.d/default.conf
COPY nginx/nginx.conf /etc/nginx/conf.d
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
In my .gitlab-ci.yml file I have 2 stages, build and deploy. For clarity, I will share it as well:
stages:
  - build
  - deploy

variables:
  TAG_LATEST: $CI_REGISTRY_IMAGE/$CI_COMMIT_REF_NAME:latest
  DOCKER_DRIVER: overlay2
  DOCKER_TLS_CERTDIR: "/certs"

build_test:
  image: docker:latest
  stage: build
  services:
    - docker:19.03.0-dind
  script:
    - docker build -t $TAG_LATEST .
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN $CI_REGISTRY
    - docker push $TAG_LATEST
  environment:
    name: test
  rules:
    - if: $CI_COMMIT_BRANCH == "develop"
      when: on_success

deploy_test:
  image: alpine:latest
  stage: deploy
  only:
    - develop
  tags:
    - frontend
  script:
    - chmod og= $SSH_PRIVATE_KEY
    - apk update && apk add openssh-client
    - ssh -i $SSH_PRIVATE_KEY -o StrictHostKeyChecking=no $SERVER_USER@$SERVER_IP "docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN $CI_REGISTRY"
    - ssh -i $SSH_PRIVATE_KEY -o StrictHostKeyChecking=no $SERVER_USER@$SERVER_IP "docker pull $TAG_LATEST"
    - ssh -i $SSH_PRIVATE_KEY -o StrictHostKeyChecking=no $SERVER_USER@$SERVER_IP "docker container rm -f my_project || true"
    - ssh -i $SSH_PRIVATE_KEY -o StrictHostKeyChecking=no $SERVER_USER@$SERVER_IP "docker run -d -p 8010:80 --name my_project $TAG_LATEST"
  environment:
    name: test
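The `chmod og= $SSH_PRIVATE_KEY` line locks the key file down so ssh will accept it (ssh refuses a private key that is group- or other-readable). A local illustration of what that mode change does (the file name is a stand-in):

```shell
set -e
cd "$(mktemp -d)"
umask 022
echo "dummy key material" > id_test   # stand-in for the $SSH_PRIVATE_KEY file
chmod og= id_test                     # clear all group/other permission bits
ls -l id_test | cut -c1-10            # owner keeps rw, everyone else loses access
```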
I did the following steps:
1st, I tried to run the image manually with a docker run command on my server. The result was the same!
2nd, I pulled the project from GitLab into a brand new folder, first ran docker build -t my_project ., and then ran it as I did on the server. It worked on my local machine.
3rd, I re-verified my code base with ChatGPT and it found no error.
Currently there's an issue with the latest Docker release.
Use:
image: docker:20.10.22
instead of:
image: docker:latest
For more details check this link: https://gitlab.com/gitlab-org/gitlab-runner/-/issues/29593#note_1263383415
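If the job also runs a Docker-in-Docker service, it may help to pin it to the same version rather than mixing a pinned client with an unpinned daemon; a sketch (the matching dind tag is an assumption, not from the linked issue):

```yaml
build_test:
  image: docker:20.10.22
  services:
    - docker:20.10.22-dind
```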
I have been getting the same error for 3 days.
I also use GitLab CI.
I solved the issue by adding this line in my docker-compose.yml file (in the involved service section):
command: ["nginx", "-g", "daemon off;"]
I have not yet found the root cause of the issue.
Hope that will help.
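For context, a compose fragment showing where that command override sits (the service and image names here are hypothetical):

```yaml
services:
  web:
    image: registry.example.com/my_project:latest
    ports:
      - "8010:80"
    command: ["nginx", "-g", "daemon off;"]
```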
I started working on GitHub Actions to build some sample containers instead of doing my builds locally: less reliance on my machines, etc. It seems, though, that building and pushing from my Mac works, but creating an Action to do it fails.
When testing locally, I did make sure I have an updated Dockerfile to ensure that everything is built correctly. Part of me thinks it is related to the OS the GitHub Action builds on, but I am trying to understand it more.
The error I get is:
Error: failed to solve: executor failed running [/bin/sh -c ( /opt/mssql/bin/sqlservr & ) | grep -q "Service Broker manager has started" && /opt/sqlpackage/sqlpackage /a:Import /tsn:. /tdn:${DBNAME} /tu:sa /tp:$SA_PASSWORD /sf:/tmp/db.bacpac && rm /tmp/db.bacpac && pkill sqlservr]: exit code: 1
My workflow action is:
name: Docker Image CI MSSQL

on:
  schedule:
    - cron: '0 6 * * *'
  push:
    branches: [ master ]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v3
      - name: Prepare Variables
        id: prepare
        run: |
          DOCKER_IMAGE=fallenreaper/eve-mssql
          VERSION=$(date -u +'%Y%m%d')
          if [ "${{ github.event_name }}" = "schedule" ]; then
            VERSION=nightly
          fi
          TAGS="${DOCKER_IMAGE}:${VERSION}"
          if [[ $VERSION =~ ^[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}$ ]]; then
            TAGS="$TAGS --tag ${DOCKER_IMAGE}:latest"
          fi
          echo ::set-output name=docker_image::${DOCKER_IMAGE}
          echo ::set-output name=version::${VERSION}
          echo ::set-output name=tags::${TAGS}
      - name: Login to DockerHub
        if: success() && github.event_name != 'pull_request'
        uses: docker/login-action@v1
        with:
          username: ${{ secrets.DOCKER_USERNAME }}
          password: ${{ secrets.DOCKER_PASSWORD }}
      - name: Docker Build & Push
        if: success() && github.event_name != 'pull_request'
        uses: docker/build-push-action@v2
        with:
          push: true
          tags: ${{ steps.prepare.outputs.tags }}
          context: mssql/.
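One thing worth noting in the Prepare Variables step: on push builds, VERSION is a date stamp like 20240101, which never matches the x.y.z pattern, so the `:latest` tag is only ever added if VERSION looks like semver. A POSIX rendering of that check (`grep -E` stands in for bash's `[[ =~ ]]`):

```shell
DOCKER_IMAGE=fallenreaper/eve-mssql

# Reproduce the tag list the "Prepare Variables" step builds for a
# given VERSION; only x.y.z-style versions also get the :latest tag.
tags_for() {
  TAGS="${DOCKER_IMAGE}:$1"
  if echo "$1" | grep -Eq '^[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}$'; then
    TAGS="$TAGS --tag ${DOCKER_IMAGE}:latest"
  fi
  echo "$TAGS"
}

tags_for "$(date -u +'%Y%m%d')"  # date stamp: no :latest
tags_for 1.2.3                   # semver: also tagged :latest
tags_for nightly                 # scheduled build: no :latest
```

(Note also that `::set-output` has since been deprecated by GitHub in favor of writing to `$GITHUB_OUTPUT`.)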
So I was thinking this would work. The Dockerfile I am building uses mcr.microsoft.com/mssql/server:2017-latest, which I figured would work.
Dockerfile:
FROM mcr.microsoft.com/mssql/server:2017-latest
ENV ACCEPT_EULA=Y
ENV SA_PASSWORD=Password123!
ENV MSSQL_PID=Developer
ARG DBNAME=evesde
EXPOSE 1433
RUN apt-get update \
&& apt-get install unzip -y
RUN wget --progress=bar:force -q -O sqlpackage.zip https://go.microsoft.com/fwlink/?linkid=2165213 \
&& unzip -qq sqlpackage.zip -d /opt/sqlpackage \
&& chmod +x /opt/sqlpackage/sqlpackage
RUN wget -o /tmp/db.bacpac https://www.fuzzwork.co.uk/dump/mssql-latest.bacpac
RUN ( /opt/mssql/bin/sqlservr & ) | grep -q "Service Broker manager has started" \
&& /opt/sqlpackage/sqlpackage /a:Import /tsn:. /tdn:${DBNAME} /tu:sa /tp:$SA_PASSWORD /sf:/tmp/db.bacpac \
&& rm /tmp/db.bacpac \
&& pkill sqlservr
EDIT: As I keep reading various documents, I am trying to understand and test various methods to see if I can spawn a build. I was thinking that simulating a Mac might be useful, so I also attempted to use runs-on: macos-latest to see if that would solve it, but I haven't seen gains, as the docker/login-action@v1 step will fail.
Looking through each line item, I ended up with the following Dockerfile.
FROM mcr.microsoft.com/mssql/server:2017-latest
ENV ACCEPT_EULA=Y
ENV SA_PASSWORD=Password123!
ENV MSSQL_PID=Developer
ENV DBNAME=evesde
EXPOSE 1433
RUN apt-get update \
&& apt-get install unzip -y
RUN wget --progress=bar:force -q -O sqlpackage.zip https://go.microsoft.com/fwlink/?linkid=2165213 \
&& unzip -qq sqlpackage.zip -d /opt/sqlpackage \
&& chmod +x /opt/sqlpackage/sqlpackage
RUN wget --progress=bar:force -q -O /mssql-latest.bacpac https://www.fuzzwork.co.uk/dump/mssql-latest.bacpac
RUN ( /opt/mssql/bin/sqlservr & ) | grep -q "Service Broker manager has started" \
&& /opt/sqlpackage/sqlpackage /a:Import /tsn:. /tdn:$DBNAME /tu:sa /tp:$SA_PASSWORD /sf:/mssql-latest.bacpac \
&& pkill sqlservr
#Cleanup of created bulk files no longer needed.
RUN rm /mssql-latest.bacpac /sqlpackage.zip
The main difference is where the bacpac file is stored. There seemed to be hiccups when creating that file; after adjusting its location and breaking apart the import list, it worked.
Notes: When the file was created in /tmp, it seemed to be only partially created, so the build was picking up an existing but corrupt file. I am not sure if there were size limits, but that was the observation. Putting it in the / directory of the build gave me access to a complete file, so I needed to adjust the /sf reference.
Lastly, because there were leftover files which were no longer needed, I found it best to do a little cleanup by deleting both the sqlpackage and bacpac files.
Suggesting to investigate grep -q "Service Broker manager has started" &&
Maybe this fails because you start /opt/mssql/bin/sqlservr and then immediately check that it has started. In most cases it takes a few seconds to start.
To test whether my thesis is correct, suggesting to insert a few sleep 10 or timeout commands in strategic places.
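That idea can be taken a step further: instead of a fixed sleep, poll for the readiness line with a timeout, so a slow start still succeeds and a broken start still fails the build. A sketch of the shell pattern (a temp file stands in for sqlservr's log output):

```shell
set -e
log=$(mktemp)
# Stand-in for sqlservr's output: the readiness line appears after a delay
( sleep 1; echo "Service Broker manager has started" >> "$log" ) &

# Poll for the message instead of a single immediate grep; give up
# after ~20 seconds so a startup failure doesn't hang the build.
ready=no
for i in $(seq 1 20); do
  if grep -q "Service Broker manager has started" "$log"; then
    ready=yes
    break
  fi
  sleep 1
done
wait
echo "$ready"
```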
I am setting up a CI/CD with gitlab.
What I want to do:
create a dev tagged image
upload the image to my registry
deploy the dev image on the dev server
create the prod tagged image
upload the image to my registry
deploy the prod image
The application is developed in React and is backed by an API.
The only difference between the image tagged dev and the prod one is the URL of the API.
(The dev React application must point to the dev API and the prod one to the prod API.)
For the moment, this is what I have:
stages:
  - docker-build
  - deploy
  - deploy-prod

docker-build:
  stage: docker-build
  image: docker:latest
  services:
    - name: docker:19.03.8-dind
  script:
    - sudo docker login -u registry_user -p 'MYPASWORD' https://XXXXX.com
    - sudo docker build -t XXXXX.com/myapp .
    - sudo docker push XXXXX.com/myapp
    - sudo docker rmi XXXXX.com/myapp
    - sudo docker image prune -f
    - sudo docker logout
  tags:
    - build

deploy:
  stage: deploy
  image: kroniak/ssh-client
  before_script:
    - echo "deploying app"
  script:
    - sudo ln -s docker-compose.yml.dev docker-compose.yml
    - sudo docker login -u registry_user -p 'MYPASSWORD' https://XXXXX.com
    - sudo docker-compose pull
    - sudo docker logout
    - sudo docker-compose up -d
  tags:
    - dev

deploy-prd1:
  stage: deploy-prod
  image: kroniak/ssh-client
  before_script:
    - echo "deploying app"
  script:
    - sudo ln -s docker-compose.yml.prd docker-compose.yml
    - sudo docker login -u registry_user -p 'MYPASSWORD' https://XXXX.com
    - sudo docker-compose pull
    - sudo docker logout
    - sudo docker-compose up -d
  tags:
    - prodg
  when: manual

deploy-prd2:
  stage: deploy-prod
  image: kroniak/ssh-client
  before_script:
    - echo "deploying app"
  script:
    - sudo ln -s docker-compose.yml.prd docker-compose.yml
    - sudo docker login -u registry_user -p 'MYPASSWORD' https://XXXX.com
    - sudo docker-compose pull
    - sudo docker logout
    - sudo docker-compose up -d
  tags:
    - prods
  when: manual
In my docker-compose.yml.prd and docker-compose.yml.dev, I have:
image: XXXXX.com/myapp:prod or image: XXXXX.com/myapp:dev
The URL of the API is defined in a .js file as a constant:
const url="api-XXXX.com"
What is the best way to build 2 different images with 2 different URLs?
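One common pattern (an assumption here, not something taken from the pipeline above) is to keep a single Dockerfile, pass the API URL as a build argument consumed by an `ARG` in the Dockerfile, and build and push the two tags from it; the dev/prod hostnames below are hypothetical:

```yaml
docker-build:
  stage: docker-build
  script:
    - sudo docker login -u registry_user -p 'MYPASSWORD' https://XXXXX.com
    # hypothetical: API_URL is baked into the bundle at build time
    - sudo docker build --build-arg API_URL=api-dev.XXXX.com -t XXXXX.com/myapp:dev .
    - sudo docker build --build-arg API_URL=api.XXXX.com -t XXXXX.com/myapp:prod .
    - sudo docker push XXXXX.com/myapp:dev
    - sudo docker push XXXXX.com/myapp:prod
```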
I'm able to expose 2 services locally on my computer with the following command: docker run -d --name #containerName -e "PORT=8080" -p 8080:8080 -p 5000:5000 #imageName.
On port 5000 I expose my backend, a flask-restful API, and on port 8080 I expose my frontend, with nginx serving my React.js application.
When I deploy that to the Heroku platform I have 2 problems:
It seems Heroku tries to bind Nginx on port 80 but the PORT env var is different; log output:
Starting process with command /bin/sh -c gunicorn\ --bind\ 0.0.0.0:5000\ app:app\ --daemon\ \&\&\ sed\ -i\ -e\ \'s/\34352/\'\"\34352\"\'/g\'\ /etc/nginx/conf.d/default.conf\ \&\&\ nginx\ -g\ \'daemon\ off\;\'
nginx: [warn] the "user" directive makes sense only if the master process runs with super-user privileges, ignored in /etc/nginx/nginx.conf:1
[emerg] bind() to 0.0.0.0:80 failed (13: Permission denied)
How can I write the -p 8080:8080 -p 5000:5000 part inside the Dockerfile, or hack around it, since I can't specify the docker run [...] command on Heroku?
I'm new to Docker and Nginx, so I would be very grateful if you know a better way to achieve my goal. Thanks in advance.
# ------------------------------------------------------------------------------
# Temporary image for react.js app using a multi-stage build
# ref : https://docs.docker.com/develop/develop-images/multistage-build/
# ------------------------------------------------------------------------------
FROM node:latest as build-react
# create a shared folder and define it as current dir
WORKDIR /usr/src/app
ENV PATH /usr/src/app/node_modules/.bin:$PATH
# copy the files required for node packages installation
COPY ./react-client/package.json ./
COPY ./react-client/yarn.lock ./
# install dependencies, copy code and build bundles
RUN yarn install
COPY ./react-client .
RUN yarn build
# ------------------------------------------------------------------------------
# Production image based on ubuntu:latest with nginx & python3
# ------------------------------------------------------------------------------
FROM ubuntu:latest as prod-react
WORKDIR /usr/src/app
# update, upgrade and install packages
RUN apt-get update && apt-get install -y --no-install-recommends apt-utils
RUN apt-get upgrade -y
RUN apt-get install -y nginx curl python3 python3-distutils python3-apt
# install pip
RUN curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
RUN python3 get-pip.py
# copy flask-api requirements file and install modules
COPY ./flask-api/requirements.txt ./
RUN pip install -r requirements.txt
RUN pip install gunicorn
# copy flask code
COPY ./flask-api/app.py .
# copy built image and config onto nginx directory
COPY --from=build-react /usr/src/app/build /usr/share/nginx/html
COPY ./conf.nginx /etc/nginx/conf.d/default.conf
# ------------------------------------------------------------------------------
# Serve flask-api with gunicorn and react-client with nginx
# Ports :
# - 5000 is used for flask-api
# - 8080 is used by nginx to serve react-client
# You can change them but you'll have to change :
# - for flask-api : conf.nginx, axios calls (5000 -> #newApiPort)
# - for react-client : CORS origins (8080 -> #newClientPort)
#
# To build and run this :
# docker build -t #imageName .
# docker run -d --name #containerName -e "PORT=8080" -p 8080:8080 -p 5000:5000 #imageName
# ------------------------------------------------------------------------------
CMD gunicorn --bind 0.0.0.0:5000 app:app --daemon && \
sed -i -e 's/$PORT/'"$PORT"'/g' /etc/nginx/conf.d/default.conf && \
nginx -g 'daemon off;'
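The `sed -i -e 's/$PORT/...'` line in the CMD is what rewires nginx to Heroku's assigned port: the shipped config contains a literal `$PORT` placeholder, and `$` is treated literally by sed because it is not at the end of the pattern. That substitution can be reproduced in isolation (the one-line config is a stand-in for conf.nginx):

```shell
set -e
cd "$(mktemp -d)"
# Stand-in config containing the literal placeholder $PORT
printf 'server { listen $PORT; }\n' > default.conf
PORT=8080   # Heroku sets this at dyno start-up
# Same substitution the CMD performs at container start
sed -i -e 's/$PORT/'"$PORT"'/g' default.conf
cat default.conf
```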
I am trying to set up a bitbucket-pipelines.yml file to build and then deploy a React project. My code is below.
image: node:10.15.1

pipelines:
  default: # Pipelines that are triggered manually via the Bitbucket GUI
    - step:
        name: Build
        script:
          - yarn
          - yarn build
    - step:
        name: Deploy
        script:
          - apt-get update
          - apt-get install ncftp
          - ncftpput -v -u "$FTP_USERNAME" -p "$FTP_PASSWORD" -R $FTP_HOST $FTP_SITE_ROOT_DEV build/*
          - echo Finished uploading /build files to $FTP_HOST$FTP_SITE_ROOT
I am getting the result:
+ ncftpput -v -u "$FTP_USERNAME" -p "$FTP_PASSWORD" -R $FTP_HOST $FTP_SITE_ROOT_DEV build/*
could not stat build/*: No such file or directory.
ncftpput build/*: no valid files were specified.
It says that there is no build file or directory, but yarn build does actually create a build folder (it runs react-scripts build).
From Atlassian documentation
Key concepts
A pipeline is made up of a set of steps.
Each step in your pipeline runs a separate Docker container. If you
want, you can use different types of container for each step, by
selecting different images
So, when you try to send it in the Deploy step, it's not there, because you built it in another container.
To pass files between steps you have to use artifacts:
image: node:10.15.1

pipelines:
  default: # Pipelines that are triggered manually via the Bitbucket GUI
    - step:
        name: Build
        script:
          - yarn
          - yarn build
        artifacts: # defining build/ as an artifact
          - build/**
    - step:
        name: Deploy
        script:
          - apt-get update
          - apt-get install ncftp
          - ncftpput -v -u "$FTP_USERNAME" -p "$FTP_PASSWORD" -R $FTP_HOST $FTP_SITE_ROOT_DEV build/*
          - echo Finished uploading /build files to $FTP_HOST$FTP_SITE_ROOT