Only execute script if branch is master in TravisCI - reactjs

I'm new to TravisCI and this may be a very silly question, but I'm trying to write the travis config in a way that it only deploys to Firebase when the current branch is master.
That is, the firebase deploy command should execute only when code is pushed to master or when a PR is merged into master. It should not run when other branches are pushed to, or when PRs are opened.
Here's what I have so far:
language: node_js
node_js: 12.16.1
script: echo "Running travis-ci"
install:
  - npm install -g firebase-tools
  - npm i react-scripts
script:
  - yarn add react
  - yarn test
  - if [ "$TRAVIS_BRANCH" = "master" ]; then yarn build; fi
  - if [ "$TRAVIS_BRANCH" = "master" ]; then firebase deploy --project testproj8876 --token $FIREBASE_TOKEN; fi
branches:
  only:
    - master
Since I'm not too familiar with the conventions yet, any improvements/suggestions would also be greatly appreciated.

Google Firebase is supported directly by Travis. See here.
Therefore, I recommend using the solution described in the link above.
deploy:
  provider: firebase
  token:
    secure: "YOUR ENCRYPTED token"
As for your condition, you can check one of my .travis.yml files here and the documentation there (Conditional Deployments).
The following part is what you need:
deploy:
  cleanup: false
  on:
    branch:
      - master
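Put together, a deploy section for the original question could look roughly like the sketch below. This is untested and simply combines the two snippets above; the project id is the one from the question, and the token has to be the encrypted value described in the Travis docs.
deploy:
  provider: firebase
  project: testproj8876            # project id taken from the question; replace with yours
  token:
    secure: "YOUR ENCRYPTED token" # encrypted FIREBASE_TOKEN
  cleanup: false                   # keep the build output produced by yarn build
  on:
    branch: master                 # deploy only on builds of the master branch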
If you still have questions, feel free to ask.

Related

How to run Eslint only on changed files when a pull request is raised, in bitbucket pipelines?

We are building a Bitbucket pipeline for a React project. We have pre-commit hooks set up which lint only staged files.
How can I achieve the same in Bitbucket Pipelines: when a pull request is raised, get the diff files and run linting on those files only?
As described in this thread, as long as you are using a Docker node image, in which you can install and run ESLint, you should be able to add it to your BitBucket pipeline.
Make sure your pipeline is set to run only on PR:
pipelines:
  pull-requests:
    '**': # this runs as default for any branch not elsewhere defined
      - step:
Example: bitbucket-pipelines.yml (to be tweaked to your specific project)
pipelines:
  pull-requests:
    '**':
      - step:
          name: Eslint for pull-request
          image: node:12.14
          condition:
            changesets:
              includePaths:
                - resources/assets/js/**
          script:
            - printenv
            - CAN_RUN_PIPELINE=$(test -f code_review/eslint/index.js && echo true || echo false)
            - if [ "$CAN_RUN_PIPELINE" != true ]; then exit; fi
            - git checkout origin/$BITBUCKET_PR_DESTINATION_BRANCH
            - git merge origin/$BITBUCKET_BRANCH --no-edit
            - git reset $BITBUCKET_PR_DESTINATION_COMMIT
            - git add .
            - git status
            - npm i -g eslint
            - npm install --only=dev
            - npm install axios dotenv --save-dev
            - eslint --format json -c code_review/eslint/rules.js -o ./code_review/eslint/output/output.json --ext .js,.jsx,.vue ./resources || true
            - node ./code_review/eslint/index.js $REVIEW_AUTH_STRING
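If the goal is to lint only the files that actually changed in the pull request (the example above lints the whole ./resources tree), one possible variant, shown here only as a sketch, is to feed the git diff against the destination branch into ESLint from the script section:
            # sketch: collect the .js/.jsx files changed between the PR destination branch and the source branch
            - CHANGED_FILES=$(git diff --name-only --diff-filter=ACMR origin/$BITBUCKET_PR_DESTINATION_BRANCH...HEAD -- '*.js' '*.jsx')
            # lint only those files; assumes eslint was installed globally as in the step above
            - if [ -n "$CHANGED_FILES" ]; then eslint $CHANGED_FILES; fi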

Failure in build using Travis, AWS Elasticbeanstalk and Docker

I am facing an issue while building my React project, using GitHub as the repository and Travis as CI, with AWS Elastic Beanstalk running my app via Docker. I am able to run my test suite, but after that it does not deploy my app to AWS, and I am not getting any error in the Travis console except below:
Below is my Travis .yml file configuration:
language: generic
services:
  - docker
before_install:
  - docker build -t heet1996/my-profile -f Dockerfile.dev .
script:
  - docker run heet1996/my-profile npm run test -- --coverage
deploy:
  provider: elasticbeanstalk
  region: "us-east-1"
  app: "My-profile"
  env: "MyProfile-env"
  bucket_name: "elasticbeanstalk-us-east-1-413920612934"
  bucket_path: "My-profile"
  on:
    branch: master
  access_key_id: $AWS_ACCESS_KEY
  secret_access_key: "$AWS_SECRET_KEY"
Let me know if you need more information
A couple of things you could try:
Your script command needs to set the environment variable CI=true.
So
script:
  - docker run heet1996/my-profile npm run test -- --coverage
Becomes
script:
  - docker run -e CI=true heet1996/my-profile npm run test -- --coverage
Also AWS needs the access variables to be named differently.
Change
access_key_id: $AWS_ACCESS_KEY
secret_access_key: "$AWS_SECRET_KEY"
To
access_key_id: "$AWS_ACCESS_KEY_ID"
secret_access_key: "$AWS_SECRET_ACCESS_KEY"
Using the --coverage option, your test run hangs waiting for input, hence the message: "...no output has been received in the last 10m0s...".
At some point, --coverage was apparently able to stop the test run (some used it for that purpose), but it was not meant for that, and subsequent versions removed that behavior.
Your tests must conclude, and conclude successfully, for the deployment by Travis to begin.
Instead, use the option --watchAll=false. So you should have:
...
script:
  - docker run heet1996/my-profile npm run test -- --watchAll=false
...
That would take care of the obvious issue of your test never concluding (that could be the only issue). Afterward, make sure that your tests are successful. Then, you can worry about other issues such as authentication on AWS, etc...
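For reference, the relevant parts of the .travis.yml with both suggestions applied would look roughly like this (a sketch based on the config above; the remaining deploy keys stay as they were):
script:
  # CI=true disables watch mode in react-scripts; --watchAll=false makes that explicit
  - docker run -e CI=true heet1996/my-profile npm run test -- --coverage --watchAll=false
deploy:
  provider: elasticbeanstalk
  # region, app, env, bucket_name, bucket_path and on: unchanged from the original config
  access_key_id: "$AWS_ACCESS_KEY_ID"         # names must match the variables defined in the Travis repository settings
  secret_access_key: "$AWS_SECRET_ACCESS_KEY"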

Can I pass environment variables from Gitlab .gitlab-ci.yml to a React app?

I'm trying to set environment variables dynamically using the GitLab CI pipeline.
What I am trying to achieve is to inject the right API keys and URLs depending on the stage I am deploying to (stage, prod).
In my React app I access the variables using process.env.REACT_APP_APPSYNC_URL, as described in the React documentation.
So far I have tried setting the variables in the GitLab UI and referencing them in my .gitlab-ci.yml file (see code below).
Unfortunately I cannot access the variables this way, so I would be very thankful for any help.
I'm just getting started with CI/CD and different environments, so if I am generally using a bad approach here please let me know!
Here's the .gitlab-ci.yml:
image: nikolaik/python-nodejs:latest
stages:
  - install
  - test
  - deploy
install:
  stage: install
  script:
    - npm install
    - npm run build
  artifacts:
    untracked: true
  only:
    - stage
    - master
test:
  stage: test
  dependencies:
    - install
  script:
    - npm run test
  artifacts:
    untracked: true
  only:
    - stage
    - master
deployDev:
  stage: deploy
  only:
    - stage
  dependencies:
    - install
    - test
  script:
    - pip3 install awscli
    - aws configure set aws_access_key_id "$DEV_AWS_KEY"
    - aws configure set aws_secret_access_key "$DEV_AWS_SECRET"
    - aws s3 sync ./build/ s3://example.dev
  variables:
    REACT_APP_COGNITO_REGION: $DEV_COGNITO_REGION
    REACT_APP_COGNITO_USER_POOL_ID: $DEV_COGNITO_USER_POOL_ID
    REACT_APP_COGNITO_APP_CLIENT_ID: $DEV_COGNITO_APP_CLIENT_ID
    REACT_APP_COGNITO_IDENTITY_POOL_ID: $DEV_COGNITO_IDENTITY_POOL_ID
    REACT_APP_APPSYNC_URL: $DEV_APPSYNC_URL
    REACT_APP_APPSYNC_REGION: $DEV_APPSYNC_REGION
    REACT_APP_APPSYNC_AUTHENTIACTION_TYPE: $DEV_APPSYNC_AUTHENTIACTION_TYPE
deployProd:
  stage: deploy
  only:
    - master
  dependencies:
    - install
    - test
  script:
    - pip3 install awscli
    - aws configure set aws_access_key_id "$PROD_AWS_KEY"
    - aws configure set aws_secret_access_key "$PROD_AWS_SECRET"
    - aws s3 sync ./build/ s3://example.com
Cheers!
This line from the CRA docs is important: "The environment variables are embedded during the build time." So set the variables before running the build command.
image: node:10.16.0-alpine
stages:
  - build
  - deploy
build_app:
  stage: build
  script:
    - export REACT_APP_SECRET_API_KEY=$API_KEY # set REACT_APP variables before build command
    - yarn install
    - yarn build
  artifacts:
    name: "$CI_PIPELINE_ID"
    paths:
      - build
    when: on_success
deploy_app:
  stage: deploy
  dependencies:
    - build_app
  script:
    - echo "Set deployment variables"
    - echo "Deployment scripts"

How to have dynamic version name at run time when deploying google app engine in Travis CI?

I am trying to automate the build and deployment of my Google App Engine application in Travis; so far it only lets me use a static or predefined version name during deployment in .travis.yml.
Is there any way to have it generated dynamically at runtime? For example, in my .travis.yml file below I have deployments for the production and staging versions of the application, labeled production and qa-staging, and I would like to suffix the version names with a timestamp or anything else that would be unique for every successful build and deployment.
language: node_js
node_js:
  - "10"
before_install:
  - openssl aes-256-cbc -K $encrypted_c423808ed406_key -iv $encrypted_c423808ed406_iv
    -in gae-creds.json.enc -out gae-creds.json -d
  - chmod +x test.sh
  - cat gae-creds.json
install:
  - npm install
script:
  - "./test.sh"
deploy:
  - provider: gae
    skip_cleanup: true
    keyfile: gae-creds.json
    project: traviscicd
    no_promote: true
    version: qa-staging
    on:
      branch: staging
  - provider: gae
    skip_cleanup: true
    keyfile: gae-creds.json
    project: traviscicd
    version: production
    on:
      branch: master
Have you tried https://yaml.org/type/timestamp.html?
I'm not sure if the context is correct, but it seems like a good and elegant option for your YAML file.
Perhaps you can use go generate to generate a version string that can be included? You need to run go generate as part of the build process for it to work, though.
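Another option sometimes used with Travis, shown here only as an untested sketch, is to export a unique label in before_deploy and reference it in the deploy section. The deploy options are expanded by the shell when the build script runs, so a build-number or timestamp suffix can be injected; verify on a test branch that your dpl version actually expands it.
before_deploy:
  - export DEPLOY_VERSION="qa-staging-$TRAVIS_BUILD_NUMBER-$(date +%Y%m%d%H%M%S)"  # unique per build
deploy:
  - provider: gae
    skip_cleanup: true
    keyfile: gae-creds.json
    project: traviscicd
    no_promote: true
    version: $DEPLOY_VERSION   # expanded when the deploy step runs
    on:
      branch: staging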

circleci (v2.0) using npm when yarn is the run command

There's documentation on setting up yarn for CircleCI v1 but not v2, because it appears they've got yarn baked into the v2 API. However, in my config.yml I explicitly run yarn to install my deps, yet when I review the build logs it shows that npm is used for all my yarn commands... I obviously need to override this / install yarn? Unfortunately the v2 docs don't seem to touch on this and my google-fu isn't being fruitful...
What's more interesting is that another one of my projects IS using yarn with almost the exact same config... what gives?
Here's my current config.yml:
# Javascript Node CircleCI 2.0 configuration file
#
# Check https://circleci.com/docs/2.0/language-javascript/ for more details
#
version: 2
jobs:
  build:
    docker:
      # specify the version you desire here
      - image: circleci/node:7.10
      # Specify service dependencies here if necessary
      # CircleCI maintains a library of pre-built images
      # documented at https://circleci.com/docs/2.0/circleci-images/
      # - image: circleci/mongo:3.4.4
    working_directory: ~/repo
    steps:
      - checkout
      # Download and cache dependencies
      - restore_cache:
          keys:
            - v1-dependencies-{{ checksum "package.json" }}
            # fallback to using the latest cache if no exact match is found
            - v1-dependencies-
      - run: yarn
      - save_cache:
          paths:
            - node_modules
          key: v1-dependencies-{{ checksum "package.json" }}
      # run tests!
      - run: yarn test
      - run: echo "ALL GOOD IN THE HOOD"
      - deploy:
          name: Deploy on deploy branch
          command: |
            if [ "${CIRCLE_BRANCH}" == "deploy" ]; then
              ./node_modules/.bin/firebase ...
            fi
I figured out the problem: my .circleci folder was misspelled. I omitted the leading . so it was using a default config... sigh...
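For anyone hitting the same thing: CircleCI 2.0 only picks up the config from the exact path .circleci/config.yml at the repository root (note the leading dot); with any other spelling it falls back to a default/inferred build, which is why npm was being used here. A quick sanity check from the repo root:
ls -la .circleci/config.yml   # should exist at exactly this path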
