GitLab CI fails at copying files - reactjs

I'm using GitLab CI and GitLab Runner to deploy my React app to my server.
Here is my code:
image: node:alpine
variables:
  PUBLIC_URL: https://example.com
stages:
  - build
  - deploy
build:
  stage: build
  tags:
    - some-tag
    - another-tag
  script:
    - echo "Building deploy package"
    - pwd
    - npm install
    - mv .env.example .env
    - echo ".env file changed!"
    - CI='' npm run build
    - echo "Build successful"
    - ls
  artifacts:
    expire_in: 1 hour
    paths:
      - build
  only:
    - master
deploy_production:
  stage: deploy
  tags:
    - some-tag
    - another-tag
  script:
    - echo "Current Directory:"
    - pwd
    - ls
    - echo "Deploying to server"
    - cp -rv ./build/* /dir/path-in-my-server/
    - echo "Deployed"
  artifacts:
    expire_in: 1 hour
    paths:
      - build
  environment:
    name: production
    url: https://example.com
  only:
    - master
Every step works, but cp -rv ./build/* /dir/path-in-my-server/ fails with this error:
cp: can't create '/dir/path-in-my-server/asset-manifest.json': No such file or directory
cp: can't create '/dir/path-in-my-server/favicon.ico': No such file or directory
cp: can't create '/dir/path-in-my-server/index.html': No such file or directory
cp: can't create '/dir/path-in-my-server/manifest.json': No such file or directory
cp: can't create directory '/dir/path-in-my-server/static': No such file or directory
Cleaning up file based variables
00:03
ERROR: Job failed: exit code 1
What am I missing?

I fixed this issue by using the $PWD environment variable instead of relative paths like ./. GitLab seems to have a problem with relative paths in cp and mv commands.
script:
  - cp -rv ${PWD}/some-directory/* /your-dest-dir/
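Applied to the deploy job above, that would look roughly like this (a sketch; the destination path is the same placeholder as in the question):

deploy_production:
  stage: deploy
  script:
    - echo "Deploying to server"
    - cp -rv ${PWD}/build/* /dir/path-in-my-server/
    - echo "Deployed"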

Related

env values are not getting added in docker image and app runner

I have a buildspec.yml file. I have to get an env value from AWS Secrets Manager and add it during CodeBuild. Later the image will be hosted in App Runner.
version: 0.2
env:
  secrets-manager:
    REACT_APP_NAME: "AWS_SECRET:AWS_SECRET_VALUE_KEY"
phases:
  pre_build:
    commands:
      - echo Logging in to Amazon ECR...
      - aws --version
      - $(aws ecr get-login --region $AWS_DEFAULT_REGION --no-include-email)
      - REPOSITORY_URI=ecr_image_url
      - COMMIT_HASH=$(echo $CODEBUILD_RESOLVED_SOURCE_VERSION | cut -c 1-7)
      - IMAGE_TAG=build-$(echo $CODEBUILD_BUILD_ID | awk -F":" '{print $2}')
  build:
    commands:
      - echo Build started on `date`
      - echo Building the Docker image...
      - docker build -t $REPOSITORY_URI:latest .
      - docker tag $REPOSITORY_URI:latest $REPOSITORY_URI:$IMAGE_TAG
  post_build:
    commands:
      - echo Build completed on `date`
      - echo Pushing the Docker images...
      - docker push $REPOSITORY_URI:latest
      - docker push $REPOSITORY_URI:$IMAGE_TAG
      - echo Writing image definitions file...
      - printf '[{"name":"nodeapp","imageUri":"%s"}]' $REPOSITORY_URI:$IMAGE_TAG > imagedefinitions.json
      - cat imagedefinitions.json
artifacts:
  files: imagedefinitions.json
I have followed the AWS documentation to add the env value from Secrets Manager and given the necessary permissions. The hosted application is working fine without any errors, except that the env value is not getting loaded.
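One thing worth checking, though not confirmed by the post: create-react-app bakes REACT_APP_* variables into the bundle at build time, and variables that CodeBuild exports are not automatically visible inside docker build. A common workaround is to pass the value through as a build argument; a minimal sketch, assuming the image's build stage is what runs npm run build:

      - docker build --build-arg REACT_APP_NAME=$REACT_APP_NAME -t $REPOSITORY_URI:latest .

with the matching declarations in the Dockerfile:

# declare the build arg and expose it as an env var for npm run build
ARG REACT_APP_NAME
ENV REACT_APP_NAME=$REACT_APP_NAME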

Deploy docker image of react app to Elastic beanstalk

I am trying to deploy my create-react-app to Elastic Beanstalk with Docker.
I have set up CodePipeline with CodeBuild and Elastic Beanstalk.
I am getting this error:
Stop running the command. Error: Dockerfile and Dockerrun.aws.json are both missing, abort deployment
My Dockerfile looks like this
FROM tiangolo/node-frontend:10 as build-stage
# Create app directory
# RUN mkdir -p /usr/src/app
# WORKDIR /usr/src/app
WORKDIR /app
# # fix npm private module
# ARG NPM_TOKEN
# COPY .npmrc /app/
#COPY package.json package.json
COPY package*.json /app/
COPY Dockerrun.aws.json /app/
RUN npm install
COPY ./ /app/
# RUN CI=true npm test
RUN npm run build
# FROM nginx:1.15
FROM nginx:1.13.3-alpine
# Install app dependencies
# Stage 1, based on Nginx, to have only the compiled app, ready for production with Nginx
COPY --from=build-stage /app/build/ /usr/share/nginx/html
# Copy the default nginx.conf provided by tiangolo/node-frontend
COPY --from=build-stage /nginx.conf /etc/nginx/conf.d/default.conf
RUN ls
EXPOSE 80
I also have a Dockerrun.aws.json
{
  "AWSEBDockerrunVersion": "3",
  "Image": {
    "Name": "something.dkr.ecr.us-east-2.amazonaws.com/subscribili:latest",
    "Update": "true"
  },
  "Ports": [
    {
      "ContainerPort": "5000"
    }
  ],
  "Logging": "/var/log/nginx"
}
My buildspec.yml file looks like this:
version: 0.2
phases:
  pre_build:
    commands:
      - $(aws ecr get-login --region $AWS_DEFAULT_REGION --no-include-email)
      - REPOSITORY_URI=something.dkr.ecr.us-east-2.amazonaws.com/subscribili
      - COMMIT_HASH=$(echo $CODEBUILD_RESOLVED_SOURCE_VERSION | cut -c 1-7)
      - IMAGE_TAG=${COMMIT_HASH:=latest}
  build:
    commands:
      - docker build -t $REPOSITORY_URI:latest .
      - docker tag $REPOSITORY_URI:latest $REPOSITORY_URI:$IMAGE_TAG
  post_build:
    commands:
      - docker push $REPOSITORY_URI:latest
      - docker push $REPOSITORY_URI:$IMAGE_TAG
      - printf '[{"name":"nginx","imageUri":"%s"}]' $REPOSITORY_URI:$IMAGE_TAG > imagedefinitions.json
artifacts:
  files: imagedefinitions.json
I am sure there is some issue with the buildspec file, but I am just not sure what.
I have read all the documentation and still couldn't figure out how to write the buildspec file for Docker.
Is there anything I am missing?
The Dockerfile and Dockerrun.aws.json both need to be in the directory where the command "COPY Dockerrun.aws.json /app/" is run. Make sure these files exist in that directory and this error should disappear.
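In other words, the expected layout is roughly this (a sketch; only the two file names come from the question):

project-root/            <- run the build/deploy from here
├── Dockerfile
├── Dockerrun.aws.json
├── package.json
└── src/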
"eb deploy" command creates a zip file from your code. However, to make it as small as possible, it only takes the file that are commited to git. So, if you did not commit Dockerfile and Dockerrun file, these two files won't be included in the zip.
If you do not want it to behave like this, you can add .ebignore file to your projects root directory. This files commands are the same as the gitignore file; you can copy everything from gitignore to ebignore. If there is a .ebignore, cli will not check if the project is commited to a source control.
Now to check what is included in zip file, watch the .elasticbeanstalk folder after "eb deploy" command. When the zip is prepared, copy it immediately and paste to another folder. Note: the original zip file will be removed after the cli upload that.
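A minimal .ebignore sketch (the entries are illustrative; start from your own .gitignore):

# .ebignore - same syntax as .gitignore
node_modules/
.env
# do NOT list Dockerfile or Dockerrun.aws.json here,
# so that they end up in the deployment zip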

Appveyor not running the build script

Hello, my build script isn't producing a build, for reasons I do not know. The package.json has the correct script, which is:
"build": "npm run silentrenew && react-scripts --max_old_space_size=8192 build",
I have double-checked my YML file and all the tags:
version: '1.0.{build}'
image: Ubuntu
init:
  - cmd: set NODE_OPTIONS=--max-old-space-size=8192
environment:
  REACT_APP_VSA_URL: >-
    https://xzc-e-n-vsa0000-d-api-02.xzc-e-n-snt-06-ut-ase-01.p.azurewebsites.net
  REACT_APP_NOTIFICATIONS_API_SECRET: d8015bf6cab64573b2d7c17bac94bed4
  REACT_APP_EVENT_LOG_SECRET: 3431cec7ecbb42bba1957934c751f02d
install:
  - cmd: npm ci --ignore-scripts
build_script:
  - cmd: |-
      npm --no-git-tag-version version "%APPVEYOR_BUILD_VERSION%"
      npm run build
test_script:
  - cmd: 'npm run test:ci'
artifacts:
  - path: ./build
    name: dpe
deploy:
  - provider: Environment
    name: dpe-dev
    'on':
      branch:
        - internal
        - tablet
on_finish:
  - pwsh: >-
      # upload results to AppVeyor
      $wc = New-Object 'System.Net.WebClient'
      $wc.UploadFile("https://ci.appveyor.com/api/testresults/junit/$($env:APPVEYOR_JOB_ID)",
      (Resolve-Path .\coverage\junit\junit.xml))
      # upload coverage results to CodeCov
      $env:PATH = 'C:\msys64\usr\bin;' + $env:PATH
      Invoke-WebRequest -Uri 'https://codecov.io/bash' -OutFile codecov.sh
      bash codecov.sh -s "./coverage/jest/"
This is the exact message I'm getting in AppVeyor. Since the build isn't being created, it isn't running the tests, yet it reports the build as successful.
For Linux builds the prefix must be sh: or no prefix at all:
build_script:
  - sh: |-
      npm --no-git-tag-version version "$APPVEYOR_BUILD_VERSION"
      npm run build
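Applying the same rule to the remaining cmd: sections would look roughly like this (a sketch; note that set is cmd-only, so the Linux equivalent is export):

init:
  - sh: export NODE_OPTIONS=--max-old-space-size=8192
install:
  - sh: npm ci --ignore-scripts
test_script:
  - sh: 'npm run test:ci'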

How to deploy react app in ubuntu server with bitbucket pipeline

I want to build and deploy my React app from my master branch. I have managed to automate the build, but I am unable to transfer it to my server. Find my pipeline code below; I receive the error shown after it.
pipelines:
  default:
    - step:
        name: Build Title
        script:
          - npm install
          - npm run build
          - mkdir packaged
          - tar -czvf packaged/package-${BITBUCKET_BUILD_NUMBER}.tar.gz -C build .
        artifacts:
          - packaged/**
    - step:
        name: Deploy to Web
        image: alpine
        trigger: manual
        deployment: production
        script:
          - mkdir upload
          - tar -xf packaged/package-${BITBUCKET_BUILD_NUMBER}.tar.gz -C upload
          - apk update && apk add openssh rsync
          - rsync -a -e "ssh -o StrictHostKeyChecking=no" --delete upload/ $USERNAME@$SERVER:html/temp/react-${BITBUCKET_BUILD_NUMBER}
          - ssh -o StrictHostKeyChecking=no $USERNAME@$SERVER "rm -r html/www"
          - ssh -o StrictHostKeyChecking=no $USERNAME@$SERVER "mv 'html/temp/react-${BITBUCKET_BUILD_NUMBER}' 'var/www/html/deploy'"
          - ssh -o StrictHostKeyChecking=no $USERNAME@$SERVER "chmod -R u+rwX,go+rX,go-w html/www"
Error Log
+ rsync -a -e "ssh -o StrictHostKeyChecking=no" --delete upload/ $USERNAME@$SERVER:html/temp/react-${BITBUCKET_BUILD_NUMBER}
load pubkey "/opt/atlassian/pipelines/agent/ssh/id_rsa": invalid format
rsync: mkdir "/$USERNAME/html/temp/react-15" failed: No such file or directory (2)
rsync error: error in file IO (code 11) at main.c(675) [Receiver=3.1.2]
I noticed this happens only on Alpine-based images; Debian images, for example, work fine. It also happens on Buddy, not just on Bitbucket, so I expect this is an upstream Alpine bug/issue.
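For what it's worth, the "load pubkey ... invalid format" line is normally just an OpenSSH warning that it could not derive a public key from the private key file; the hard failure in the log is rsync being unable to mkdir because the parent path does not exist on the server (rsync creates only the final directory component). A sketch of both workarounds, assuming the key path from the log and an unencrypted key:

# ensure the parent path exists on the server before rsync
ssh $USERNAME@$SERVER "mkdir -p html/temp"
# rewrite the private key in PEM format in place (silences the pubkey warning)
ssh-keygen -p -f /opt/atlassian/pipelines/agent/ssh/id_rsa -m pem -N "" -P ""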
I was using that same script as well. Below is what ended up working for me after a lot of banging my head against the screen; updating the image and adding upload artifacts seemed to be the kicker.
default:
  - step:
      name: Build React Project
      script:
        - npm install
        - npm run-script build
        - mkdir packaged
        - tar -czvf packaged/package-${BITBUCKET_BUILD_NUMBER}.tar.gz -C build .
      artifacts:
        - packaged/**
  - step:
      name: Deploy to Web
      image: atlassian/default-image:latest
      trigger: manual
      deployment: production
      script:
        - mkdir upload
        - tar -xf packaged/package-${BITBUCKET_BUILD_NUMBER}.tar.gz -C upload
        - rsync -a --delete upload/ $USERNAME@$SERVER:/home/temp/react-${BITBUCKET_BUILD_NUMBER}
        - ssh $USERNAME@$SERVER "rm -r /home/www"
        - ssh $USERNAME@$SERVER "mv '/home/temp/react-${BITBUCKET_BUILD_NUMBER}' '/home/www'"
        - ssh $USERNAME@$SERVER "chmod -R u+rwX,go+rX,go-w /home/www"
      artifacts:
        - upload/**

How to deploy react project to ftp using Bitbucket Pipelines?

I am trying to set up a bitbucket-pipelines.yml file to build and then deploy a React project. Here is my code below.
image: node:10.15.1
pipelines:
  default: # Pipelines that are triggered manually via the Bitbucket GUI
    - step:
        name: Build
        script:
          - yarn
          - yarn build
    - step:
        name: Deploy
        script:
          - apt-get update
          - apt-get install ncftp
          - ncftpput -v -u "$FTP_USERNAME" -p "$FTP_PASSWORD" -R $FTP_HOST $FTP_SITE_ROOT_DEV build/*
          - echo Finished uploading /build files to $FTP_HOST$FTP_SITE_ROOT
I am getting the result:
+ ncftpput -v -u "$FTP_USERNAME" -p "$FTP_PASSWORD" -R $FTP_HOST $FTP_SITE_ROOT_DEV build/*
could not stat build/*: No such file or directory.
ncftpput build/*: no valid files were specified.
It says that there is no build file or directory, but yarn build (which runs react-scripts build) does actually create the build folder.
From the Atlassian documentation:
Key concepts
A pipeline is made up of a set of steps.
Each step in your pipeline runs a separate Docker container. If you want, you can use different types of container for each step, by selecting different images.
So when you try to use the build in the Deploy step, it's not there, because you built it in a different container.
To pass files between steps, you have to use artifacts:
image: node:10.15.1
pipelines:
  default: # Pipelines that are triggered manually via the Bitbucket GUI
    - step:
        name: Build
        script:
          - yarn
          - yarn build
        artifacts: # defining build/ as an artifact
          - build/**
    - step:
        name: Deploy
        script:
          - apt-get update
          - apt-get install ncftp
          - ncftpput -v -u "$FTP_USERNAME" -p "$FTP_PASSWORD" -R $FTP_HOST $FTP_SITE_ROOT_DEV build/*
          - echo Finished uploading /build files to $FTP_HOST$FTP_SITE_ROOT
