Env values are not getting added to the Docker image and App Runner - ReactJS

I have a buildspec.yml file. I need to fetch an env value from AWS Secrets Manager and inject it during the CodeBuild build; the image is later hosted on App Runner.
```yaml
version: 0.2
env:
  secrets-manager:
    REACT_APP_NAME: "AWS_SECRET:AWS_SECRET_VALUE_KEY"
phases:
  pre_build:
    commands:
      - echo Logging in to Amazon ECR...
      - aws --version
      - $(aws ecr get-login --region $AWS_DEFAULT_REGION --no-include-email)
      - REPOSITORY_URI=ecr_image_url
      - COMMIT_HASH=$(echo $CODEBUILD_RESOLVED_SOURCE_VERSION | cut -c 1-7)
      - IMAGE_TAG=build-$(echo $CODEBUILD_BUILD_ID | awk -F":" '{print $2}')
  build:
    commands:
      - echo Build started on `date`
      - echo Building the Docker image...
      - docker build -t $REPOSITORY_URI:latest .
      - docker tag $REPOSITORY_URI:latest $REPOSITORY_URI:$IMAGE_TAG
  post_build:
    commands:
      - echo Build completed on `date`
      - echo Pushing the Docker images...
      - docker push $REPOSITORY_URI:latest
      - docker push $REPOSITORY_URI:$IMAGE_TAG
      - echo Writing image definitions file...
      - printf '[{"name":"nodeapp","imageUri":"%s"}]' $REPOSITORY_URI:$IMAGE_TAG > imagedefinitions.json
      - cat imagedefinitions.json
artifacts:
  files:
    - imagedefinitions.json
```
I have followed the AWS documentation to add the env value from Secrets Manager and granted the necessary permissions. The hosted application works fine without any error, except that the env value never gets loaded.
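
Note that a variable exported by CodeBuild (including one resolved from secrets-manager) is visible to the buildspec shell, but not inside `docker build` unless it is forwarded explicitly, and CRA bakes `REACT_APP_*` values in at `npm run build` time inside the container. A minimal sketch of the likely fix, assuming the Dockerfile runs the CRA build (the variable name mirrors the buildspec above):

```yaml
# buildspec.yml, build phase: forward the secret into the image build
- docker build --build-arg REACT_APP_NAME=$REACT_APP_NAME -t $REPOSITORY_URI:latest .
```

```dockerfile
# Dockerfile: accept the build arg and expose it to CRA before building
ARG REACT_APP_NAME
ENV REACT_APP_NAME=$REACT_APP_NAME
RUN npm run build
```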

Related

Setting React environment variables the right way with Docker and CodeBuild

I have a CodeBuild project with an environment variable named "GATEWAY_URI" set in the AWS console as plaintext.
I am using the following buildspec file:
```yaml
version: 0.2
env:
  variables:
    AWS_REGION: "eu-central-1"
phases:
  pre_build:
    commands:
      - echo logging in to ecr...
      - >
        aws ecr get-login-password --region $AWS_REGION \
        | docker login --username AWS --password-stdin $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com
      - |
        if expr "$CODEBUILD_WEBHOOK_TRIGGER" == "branch/main" >/dev/null && expr "$CODEBUILD_WEBHOOK_HEAD_REF" == "refs/heads/main" >/dev/null; then
          DOCKER_TAG=prod
        else
          DOCKER_TAG=${CODEBUILD_RESOLVED_SOURCE_VERSION}
        fi
      - echo "Docker tag:" $DOCKER_TAG
      - git_diff="$(git diff --name-only HEAD HEAD~1)"
      - echo "$git_diff"
      - buildWeb=false
      - |
        case $git_diff in
          *Web*)
            buildWeb=true
            docker pull $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/web:builder || true
            docker pull $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/web:$DOCKER_TAG || true
            ;;
        esac
  build:
    commands:
      - echo building images....
      - |
        if [ "$buildWeb" = true ]; then
          docker build \
            --target builder \
            --cache-from $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/web:builder \
            -f Web/Dockerfile.prod \
            -t $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/web:builder \
            --build-arg NODE_ENV=production \
            --build-arg REACT_APP_GATEWAY_URI=$GATEWAY_URI \
            ./Web
          docker build \
            --cache-from $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/web:$DOCKER_TAG \
            -f Web/Dockerfile.prod \
            -t $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/web:$DOCKER_TAG \
            ./Web
        fi
  post_build:
    commands:
      - |
        if [ "$buildWeb" = true ]; then
          docker push $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/web:builder
          docker push $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/web:$DOCKER_TAG
        fi
      - chmod +x ./deploy.sh
      - bash deploy.sh
```
The "GATEWAY_URI" is reachable in buildspec I echo it before. I am passing it to docker build command as --build-arg and in docker file, I am setting it as env before "npm run build" so CRA will be able to catch it. I using the following multistage docker file
```dockerfile
###########
# BUILDER #
###########
FROM public.ecr.aws/docker/library/node:17-alpine as builder

# set working directory
WORKDIR /usr/src/app

# add `/usr/src/app/node_modules/.bin` to $PATH
ENV PATH /usr/src/app/node_modules/.bin:$PATH

# install and cache app dependencies
COPY package.json .
COPY package-lock.json .
RUN npm ci
RUN npm install react-scripts@5.0.0 --silent

# set environment variables
ARG REACT_APP_GATEWAY_URI
ENV REACT_APP_GATEWAY_URI $REACT_APP_GATEWAY_URI
ARG NODE_ENV
ENV NODE_ENV $NODE_ENV

# create build
COPY . .
RUN npm run build

#########
# FINAL #
#########
FROM public.ecr.aws/docker/library/nginx:stable-alpine

# update nginx conf
RUN rm -rf /etc/nginx/conf.d
COPY conf /etc/nginx

# copy static files
COPY --from=builder /usr/src/app/build /usr/share/nginx/html

# expose port
EXPOSE 80

# run nginx
CMD ["nginx", "-g", "daemon off;"]
```
At the end of the day, when the build passes, the "REACT_APP_GATEWAY_URI" variable under process.env ends up as an empty string.
Note: when I use the same Dockerfile with docker-compose on my local system, it works fine and I can see "REACT_APP_GATEWAY_URI" in process.env. So my suspicion is whether I am using the "- |" syntax in the buildspec correctly. I mean, can I execute multiple docker build commands under it consecutively like that?
Edit: After adding the same --build-arg for "GATEWAY_URI" to the docker build command for the FINAL stage as well, it now works as expected. So I think the value was replaced with an empty string between the builder stage and the final stage. But I thought I needed that variable only in the builder stage.
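
A plausible explanation for the Edit above (an added note, not from the original post): an `ARG` declared in a stage defaults to an empty string when no `--build-arg` is supplied, and a build arg's value is part of Docker's layer cache key. The second `docker build` re-evaluates the builder stage too, and since `REACT_APP_GATEWAY_URI` now differs (empty vs. set), the `RUN npm run build` layer misses the cache and reruns with an empty value. A sketch of the safer variant, passing the same build args to both invocations (paths and names taken from the buildspec above):

```yaml
# build phase: pass the same build args to BOTH invocations so the builder
# stage is not rebuilt with an empty ARG during the second build
- |
  docker build --target builder \
    --build-arg NODE_ENV=production \
    --build-arg REACT_APP_GATEWAY_URI=$GATEWAY_URI \
    -f Web/Dockerfile.prod \
    -t $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/web:builder ./Web
  docker build \
    --build-arg NODE_ENV=production \
    --build-arg REACT_APP_GATEWAY_URI=$GATEWAY_URI \
    -f Web/Dockerfile.prod \
    -t $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/web:$DOCKER_TAG ./Web
```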

How to correctly tag a Docker image built with gitlab-ci?

I am setting up CI/CD with GitLab. What I want to do:

- create a dev-tagged image
- upload the image to my registry
- deploy the dev image on the dev server
- create the prod-tagged image
- upload the image to my registry
- deploy the prod image

The application is developed in React and sits on top of an API. The only difference between the dev-tagged image and the prod one is the URL of the API: the dev React application must point to the dev API, and the prod one to the prod API.
For the moment, I have this:
```yaml
stages:
  - docker-build
  - deploy
  - deploy-prod

docker-build:
  stage: docker-build
  image: docker:latest
  services:
    - name: docker:19.03.8-dind
  script:
    - sudo docker login -u registry_user -p 'MYPASSWORD' https://XXXXX.com
    - sudo docker build -t XXXXX.com/myapp .
    - sudo docker push XXXXX.com/myapp
    - sudo docker rmi XXXXX.com/myapp
    - sudo docker image prune -f
    - sudo docker logout
  tags:
    - build

deploy:
  stage: deploy
  image: kroniak/ssh-client
  before_script:
    - echo "deploying app"
  script:
    - sudo ln -s docker-compose.yml.dev docker-compose.yml
    - sudo docker login -u registry_user -p 'MYPASSWORD' https://XXXXX.com
    - sudo docker-compose pull
    - sudo docker logout
    - sudo docker-compose up -d
  tags:
    - dev

deploy-prd1:
  stage: deploy-prod
  image: kroniak/ssh-client
  before_script:
    - echo "deploying app"
  script:
    - sudo ln -s docker-compose.yml.prd docker-compose.yml
    - sudo docker login -u registry_user -p 'MYPASSWORD' https://XXXX.com
    - sudo docker-compose pull
    - sudo docker logout
    - sudo docker-compose up -d
  tags:
    - prodg
  when: manual

deploy-prd2:
  stage: deploy-prod
  image: kroniak/ssh-client
  before_script:
    - echo "deploying app"
  script:
    - sudo ln -s docker-compose.yml.prd docker-compose.yml
    - sudo docker login -u registry_user -p 'MYPASSWORD' https://XXXX.com
    - sudo docker-compose pull
    - sudo docker logout
    - sudo docker-compose up -d
  tags:
    - prods
  when: manual
```
In my docker-compose.yml.prd and docker-compose.yml.dev I have `image: XXXXX.com/myapp:prod` or `image: XXXXX.com/myapp:dev` respectively.
The URL of the API is defined in a .js file as a constant:

```js
const url = "api-XXXX.com";
```

What is the best way to build two different images with two different URLs?
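
One common approach (a sketch, not from the original post) is to stop hard-coding the URL, read it from a build-time variable instead, and run the build job once per tag with a different `--build-arg`. The `API_URL` name and the dev/prod URLs below are illustrative:

```yaml
# sketch: one build job per tag, differing only in the build arg and tag
docker-build-dev:
  stage: docker-build
  image: docker:latest
  services:
    - name: docker:19.03.8-dind
  script:
    - docker build --build-arg API_URL=https://api-dev.XXXX.com -t XXXXX.com/myapp:dev .
    - docker push XXXXX.com/myapp:dev

docker-build-prod:
  stage: docker-build
  image: docker:latest
  services:
    - name: docker:19.03.8-dind
  script:
    - docker build --build-arg API_URL=https://api.XXXX.com -t XXXXX.com/myapp:prod .
    - docker push XXXXX.com/myapp:prod
```

The Dockerfile would then declare `ARG API_URL` and expose it to the React build (e.g. as a `REACT_APP_*` env variable), replacing the `const url` constant with a `process.env` lookup.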

Gitlab-ci fails at copying files

I'm using GitLab CI and GitLab Runner to deploy my React app to my server.
Here is my code:
```yaml
image: node:alpine

variables:
  PUBLIC_URL: https://example.com

stages:
  - build
  - deploy

build:
  stage: build
  tags:
    - some-tag
    - another-tag
  script:
    - echo "Building deploy package"
    - pwd
    - npm install
    - mv .env.example .env
    - echo ".env file changed!"
    - CI='' npm run build
    - echo "Build successful"
    - ls
  artifacts:
    expire_in: 1 hour
    paths:
      - build
  only:
    - master

deploy_production:
  stage: deploy
  tags:
    - some-tag
    - another-tag
  script:
    - echo "Current Directory:"
    - pwd
    - ls
    - echo "Deploying to server"
    - cp -rv ./build/* /dir/path-in-my-server/
    - echo "Deployed"
  artifacts:
    expire_in: 1 hour
    paths:
      - build
  environment:
    name: production
    url: https://example.com
  only:
    - master
```
Every step works well, but `cp -rv ./build/* /dir/path-in-my-server/` fails with this error:
```
cp: can't create '/dir/path-in-my-server/asset-manifest.json': No such file or directory
cp: can't create '/dir/path-in-my-server/favicon.ico': No such file or directory
cp: can't create '/dir/path-in-my-server/index.html': No such file or directory
cp: can't create '/dir/path-in-my-server/manifest.json': No such file or directory
cp: can't create directory '/dir/path-in-my-server/static': No such file or directory
Cleaning up file based variables
00:03
ERROR: Job failed: exit code 1
```
What am I missing?
I fixed this issue by using the $PWD env variable instead of relative paths like ./. GitLab seems to have a problem with relative paths in cp and mv commands:
```yaml
script:
  - cp -rv ${PWD}/some-directory/* /your-dest-dir/
```
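
Applied to the deploy job above, the copy command becomes (a sketch; the destination is the placeholder path from the question):

```yaml
script:
  - echo "Deploying to server"
  - cp -rv ${PWD}/build/* /dir/path-in-my-server/
```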

How to deploy a React project to FTP using Bitbucket Pipelines?

I am trying to set up a bitbucket-pipelines.yml file to build and then deploy a React project. Here is my code:
```yaml
image: node:10.15.1
pipelines:
  default: # Pipelines that are triggered manually via the Bitbucket GUI
    - step:
        name: Build
        script:
          - yarn
          - yarn build
    - step:
        name: Deploy
        script:
          - apt-get update
          - apt-get install ncftp
          - ncftpput -v -u "$FTP_USERNAME" -p "$FTP_PASSWORD" -R $FTP_HOST $FTP_SITE_ROOT_DEV build/*
          - echo Finished uploading /build files to $FTP_HOST$FTP_SITE_ROOT
```
I am getting the result:
```
+ ncftpput -v -u "$FTP_USERNAME" -p "$FTP_PASSWORD" -R $FTP_HOST $FTP_SITE_ROOT_DEV build/*
could not stat build/*: No such file or directory.
ncftpput build/*: no valid files were specified.
```
It says there is no build file or directory, but yarn build does create the build folder (it runs react-scripts build).
From the Atlassian documentation:

> Key concepts
> A pipeline is made up of a set of steps.
> Each step in your pipeline runs a separate Docker container. If you want, you can use different types of container for each step, by selecting different images.

So when you try to use the build in the Deploy step, it's not there, because it was built in another container. To pass files between steps, you have to use artifacts:
```yaml
image: node:10.15.1
pipelines:
  default: # Pipelines that are triggered manually via the Bitbucket GUI
    - step:
        name: Build
        script:
          - yarn
          - yarn build
        artifacts: # defining build/ as an artifact
          - build/**
    - step:
        name: Deploy
        script:
          - apt-get update
          - apt-get install ncftp
          - ncftpput -v -u "$FTP_USERNAME" -p "$FTP_PASSWORD" -R $FTP_HOST $FTP_SITE_ROOT_DEV build/*
          - echo Finished uploading /build files to $FTP_HOST$FTP_SITE_ROOT
```

Bitbucket Pipeline Deploy issue to Google App Engine

I'm trying to deploy a Golang app to App Engine. I'm able to do it via the gcloud CLI on my Mac, and it works fine (running gcloud app deploy app.yaml). However, I'm getting the following error on Bitbucket Pipelines:
```
+ gcloud --quiet --verbosity=error app deploy app.yaml --promote
You are about to deploy the following services:
 - some-project/default/20171128t070345 (from [/go/src/bitbucket.org/acme/some-app/app.yaml])
Deploying to URL: [https://project-url.appspot.com]
Beginning deployment of service [default]...
ERROR: (gcloud.app.deploy) Staging command [/tmp/google-cloud-sdk/platform/google_appengine/goroot/bin/go-app-stager /go/src/bitbucket.org/acme/some-app/app.yaml /tmp/tmpLbUCA5] failed with return code [1].
------------------------------------ STDOUT ------------------------------------
------------------------------------ STDERR ------------------------------------
2017/11/28 07:03:45 failed analyzing /go/src/bitbucket.org/acme/some-app: cannot find package "github.com/gorilla/context" in any of:
    ($GOROOT not set)
    /go/src/github.com/gorilla/context (from $GOPATH)
GOPATH: /go
--------------------------------------------------------------------------------
```
Here's my bitbucket-pipelines.yml content:
```yaml
image: golang:onbuild
pipelines:
  branches:
    develop:
      - step:
          script: # Modify the commands below to build your repository.
            # Downloading the Google Cloud SDK
            - curl -o /tmp/google-cloud-sdk.tar.gz https://dl.google.com/dl/cloudsdk/channels/rapid/downloads/google-cloud-sdk-155.0.0-linux-x86_64.tar.gz
            - tar -xvf /tmp/google-cloud-sdk.tar.gz -C /tmp/
            - /tmp/google-cloud-sdk/install.sh -q
            - source /tmp/google-cloud-sdk/path.bash.inc
            - PACKAGE_PATH="${GOPATH}/src/bitbucket.org/${BITBUCKET_REPO_OWNER}/${BITBUCKET_REPO_SLUG}"
            - mkdir -pv "${PACKAGE_PATH}"
            - tar -cO --exclude-vcs --exclude=bitbucket-pipelines.yml . | tar -xv -C "${PACKAGE_PATH}"
            - cd "${PACKAGE_PATH}"
            - go get -v
            - go get -u github.com/golang/dep/cmd/dep
            - go build -v
            - go install
            - go test -v
            - echo $GOOGLE_CLIENT_SECRET | base64 --decode --ignore-garbage > ./gcloud-api-key.json
            - gcloud auth activate-service-account --key-file gcloud-api-key.json
            - gcloud components install app-engine-go
            #- GOROOT="/tmp/go"
            # Linking to the Google Cloud project
            - gcloud config set project $CLOUDSDK_CORE_PROJECT
            # Deploying the application
            - gcloud --quiet --verbosity=error app deploy app.yaml --promote
            - echo $GCLOUD_API_KEYFILE | base64 --decode --ignore-garbage > ./gcloud-api-key.json
            #- gcloud auth activate-service-account --key-file gcloud-api-key.json
```
And, though it shouldn't be an issue since deploying from my machine works fine, here is my app.yaml file as well:
```yaml
runtime: go
api_version: go1
handlers:
- url: /.*
  script: _go_app
nobuild_files:
- vendor
skip_files:
- |
  ^(.*/)?(
  (#.*#)|
  (.*\.mapping)|
  (.*\.po)|
  (.*\.pot)|
  (.*\.py[co])|
  (.*\.sw?)|
  (.*\.yaml)|
  (.*_test\.go)|
  (.*~)|
  (LICENSE)|
  (Makefile.*)|
  (\..*)|
  (vendor/.*)|
  )$
```
I'm fairly certain the issue is with my Bitbucket YAML file or the Docker image I'm starting from, but I'm stuck. Any thoughts?
Is github.com/gorilla/context only used within your test files?
go get will not, by default, fetch test dependencies.
You can explicitly add `go get github.com/gorilla/context` to your pipeline script.
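
For example, in the pipeline script above (the placement is illustrative):

```yaml
- go get -v
# go get does not pull test-only dependencies; fetch this one explicitly
- go get github.com/gorilla/context
- go build -v
```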
