This is the template of my database pipeline:
#MySql
stages:
  - build
  - deploy
  - reset-data

build:
  stage: build
  script:
    - docker build

deploy:
  stage: deploy
  script:
    - docker push

reset-data:
  stage: reset-data
  when: manual
  script:
    - kubectl delete
    - kubectl apply
This is the template of my end-to-end test pipeline.
#E2E
stages:
  - build
  - deploy
  - reset-data
  - test

build:
  stage: build
  script:
    - docker build

deploy:
  stage: deploy
  script:
    - docker push

reset-data:
  stage: reset-data
  # Two things I want to achieve here
  # 1) Call the reset-data job from the #MySql pipeline
  trigger:
    project: /compass/environment/mysql-data/
  # 2) Change the parameter `when: manual` to `always`

test:
  stage: test
  script:
    - npx cypress run
I am trying to call a specific job in one GitLab project from another project's pipeline. Can anyone suggest how to achieve this? I also want to change a parameter of the parent job. Please look at the comments on the reset-data job in the #E2E pipeline.
Change `when: manual` to `only: triggers` (or `only: pipelines` for downstream pipelines started with the `trigger:` keyword), then set whichever conditions you would like on the triggered job. You can pass variables into the downstream job (and use them to define further rules), and control which ones are forwarded with the `inherit: variables` keyword.
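A minimal sketch of how the two sides could look, using the project path from the question; the RESET_MODE variable is a hypothetical example of passing data downstream:

# In the #MySql project: let reset-data run when another pipeline triggers it
reset-data:
  stage: reset-data
  only:
    - pipelines   # replaces `when: manual`; matches pipelines started via the trigger keyword
  script:
    - kubectl delete
    - kubectl apply

# In the #E2E project: trigger the downstream #MySql pipeline
reset-data:
  stage: reset-data
  variables:
    RESET_MODE: full   # hypothetical; job-level variables are forwarded to the downstream pipeline
  trigger:
    project: compass/environment/mysql-data

Note that `trigger:` starts the whole downstream pipeline rather than a single job, so the other jobs in the #MySql pipeline need their own only/except conditions if reset-data is the only job that should run.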
See e.g. this question for another example: whitelist some inherited variables (but not all) in a GitLab multi-project pipeline
Related
I'm new to GitHub Actions and was wondering if the following is possible. I'm running a JUnit test that creates a Salesforce record and returns the record Id. I would like to take the Id from the test log output and send it to Slack. Is it possible to set the record Id as a variable and create another job that sends it to Slack?
This is my workflow for creating the record:
test:
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v1
    - name: Set up JDK 14
      uses: actions/setup-java@v1
      with:
        java-version: 14
        cache: maven
    - name: Build project with Maven
      run: mvn -B package --file pom.xml -Dtest="StageSFDCTestData#insertNewAccountTwo"
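One way this could work, sketched below: write the Id to the job's outputs and read it from a second job via needs. The log-extraction step (including the assumed log file and Id format), the output names, and the SLACK_WEBHOOK_URL secret are assumptions, not part of the original workflow:

test:
  runs-on: ubuntu-latest
  outputs:
    record-id: ${{ steps.extract.outputs.record_id }}
  steps:
    # ... checkout, JDK setup, and Maven test steps as above ...
    - name: Extract record Id from the test log
      id: extract
      # hypothetical: assumes the test writes a line like "Record Id: 001xxxxxxxxxxxx" to target/test.log
      run: |
        record_id=$(grep -o 'Record Id: [A-Za-z0-9]*' target/test.log | head -n1 | cut -d' ' -f3)
        echo "record_id=$record_id" >> "$GITHUB_OUTPUT"

notify:
  needs: test
  runs-on: ubuntu-latest
  steps:
    - name: Send the record Id to Slack
      env:
        SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}   # assumed incoming-webhook secret
      run: |
        curl -X POST -H 'Content-type: application/json' \
          --data "{\"text\": \"Salesforce record Id: ${{ needs.test.outputs.record-id }}\"}" \
          "$SLACK_WEBHOOK_URL"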
I want to configure CI/CD from Cloud Source Repositories so that my CMS (Directus) is built when I push to the main repository. At build time, the project needs to access Cloud SQL, but the build fails with an error.
I tried this database configuration with gcloud app deploy, and there it connects to Cloud SQL and runs fine. Here is my cloudbuild.yaml (it crashes at the second step, so I didn't add the other steps, for simplicity):
steps:
  - name: node:16
    entrypoint: npm
    args: ['install']
    dir: cms
  - name: node:16
    entrypoint: npm
    args: ['run', 'start']
    dir: cms
    env:
      - 'NODE_ENV=PRODUCTION'
      - 'EXTENSIONS_PATH="./extensions"'
      - 'DB_CLIENT=pg'
      - 'DB_HOST=/cloudsql/XXX:europe-west1:XXX'
      - 'DB_PORT="5432"'
      - 'DB_DATABASE="XXXXX"'
      - 'DB_USER="postgres"'
      - 'DB_PASSWORD="XXXXXX"'
      - 'KEY="XXXXXXXX"'
      - 'SECRET="XXXXXXXXXXXX"'
node-pg (the Node Postgres library) automatically appends /.s.PGSQL.5432 to the host, which is why it is not written in DB_HOST.
How can I solve this error? I have read many answers on Stack Overflow, but none of them helped. I also found this guide, but I didn't fully understand how to implement it in my case: https://cloud.google.com/sql/docs/postgres/connect-build
Without your full Cloud Build yaml it's hard to say for sure, but it looks like you aren't following the steps in the documentation correctly.
Roughly, what you should be doing is:
1. Download the cloud_sql_proxy into your container space.
2. In a follow-up step, start the cloud_sql_proxy and then (in the same step) run your script, connecting to the proxy via either TCP or a unix socket.
I don't see your yaml describing the proxy at all.
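Based on the guide linked in the question, the missing steps could look roughly like the sketch below; the proxy version, download URL layout, and the XXX instance connection name are placeholders to verify against the current docs:

steps:
  # Download the Cloud SQL Auth proxy into the shared /workspace volume
  - name: gcr.io/cloud-builders/wget
    args:
      - '-O'
      - '/workspace/cloud_sql_proxy'
      - 'https://storage.googleapis.com/cloudsql-proxy/v1.33.2/cloud_sql_proxy.linux.amd64'

  # Start the proxy, then run the app in the same step so both share the unix socket
  - name: node:16
    entrypoint: bash
    dir: cms
    args:
      - '-c'
      - |
        chmod +x /workspace/cloud_sql_proxy
        mkdir -p /cloudsql
        /workspace/cloud_sql_proxy -dir=/cloudsql -instances=XXX:europe-west1:XXX &
        sleep 5   # crude wait for the proxy to come up
        npm install
        npm run start
    env:
      - 'DB_CLIENT=pg'
      - 'DB_HOST=/cloudsql/XXX:europe-west1:XXX'
      # ... plus the remaining DB_* variables from the question ...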
I am facing an issue building my React project, using GitHub as the repository, Travis as CI, and AWS Elastic Beanstalk to run my app with Docker. I am able to run my test suite, but after that the app is not deployed to AWS, and I get no error in the Travis console other than a timeout ("...no output has been received in the last 10m0s...").
Below is my .travis.yml configuration:
language: generic
services:
  - docker
before_install:
  - docker build -t heet1996/my-profile -f Dockerfile.dev .
script:
  - docker run heet1996/my-profile npm run test -- --coverage
deploy:
  provider: elasticbeanstalk
  region: "us-east-1"
  app: "My-profile"
  env: "MyProfile-env"
  bucket_name: "elasticbeanstalk-us-east-1-413920612934"
  bucket_path: "My-profile"
  on:
    branch: master
  access_key_id: $AWS_ACCESS_KEY
  secret_access_key: "$AWS_SECRET_KEY"
Let me know if you need more information.
A couple of things you could try:
Your script command needs to set the environment variable CI=true. So
script:
  - docker run heet1996/my-profile npm run test -- --coverage
becomes
script:
  - docker run -e CI=true heet1996/my-profile npm run test -- --coverage
Also, AWS needs the access variables to be named differently. Change
access_key_id: $AWS_ACCESS_KEY
secret_access_key: "$AWS_SECRET_KEY"
to
access_key_id: "$AWS_ACCESS_KEY_ID"
secret_access_key: "$AWS_SECRET_ACCESS_KEY"
Using the option --coverage, your test will hang, waiting for input. Hence the message: "...no output has been received in the last 10m0s...".
At some point --coverage was probably able to stop the tests (some used it for that purpose), but I guess it was not meant for that, and subsequent versions of react-scripts removed that behavior.
Your tests must conclude, and conclude successfully, before the deployment by Travis begins.
Use the option --watchAll=false instead. So you should have:
...
script:
  - docker run heet1996/my-profile npm run test -- --watchAll=false
...
That takes care of the obvious issue of your tests never concluding (which could well be the only issue). Afterward, make sure that your tests are successful. Then you can worry about other issues, such as authentication on AWS.
I'm new to Travis CI and this may be a very silly question, but I'm trying to write the Travis config so that it only deploys to Firebase when the current branch is master.
That is, the firebase deploy command should execute only when code is pushed to master or a PR is merged into master. It should not be executed when other branches are pushed to, or when PRs are opened.
Here's what I have so far:
language: node_js
node_js: 12.16.1
script: echo "Running travis-ci"
install:
  - npm install -g firebase-tools
  - npm i react-scripts
script:
  - yarn add react
  - yarn test
  - if [ "$TRAVIS_BRANCH" = "master" ]; then yarn build; fi
  - if [ "$TRAVIS_BRANCH" = "master" ]; then firebase deploy --project testproj8876 --token $FIREBASE_TOKEN; fi
branches:
  only:
    - master
Since I'm not too familiar with the conventions yet, any improvements/suggestions would also be greatly appreciated.
Google Firebase is supported directly by Travis; see the Travis CI deployment documentation for Firebase. I therefore recommend using the solution described there.
deploy:
  provider: firebase
  token:
    secure: "YOUR ENCRYPTED token"
As for your condition, you can check one of my .travis.yml files as an example, and the Travis documentation on Conditional Deployments.
The following part is what you need:
deploy:
  cleanup: false
  on:
    branch:
      - master
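Putting the two snippets together, with the project from the question's deploy command, the whole deploy section could look like:

deploy:
  provider: firebase
  project: testproj8876   # from the question's `firebase deploy --project` flag
  token:
    secure: "YOUR ENCRYPTED token"
  cleanup: false
  on:
    branch:
      - master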
If you still have questions, feel free to ask.
I'm trying to set environment variables dynamically using the GitLab CI pipeline.
What I am trying to achieve is to inject the right API keys and URLs depending on the stage I am deploying to (stage, prod).
In my React app I access the variables using process.env.REACT_APP_APPSYNC_URL, as described in the React documentation.
So far I have tried setting the variables in the GitLab UI and referencing them in my .gitlab-ci.yml file (see code below).
Unfortunately I cannot access the variables this way, so I would be very thankful for any help.
I'm just getting started on CI/CD and different environments, so if I am generally using a bad approach here please let me know!
Here's the .gitlab-ci.yml:
image: nikolaik/python-nodejs:latest

stages:
  - install
  - test
  - deploy

install:
  stage: install
  script:
    - npm install
    - npm run build
  artifacts:
    untracked: true
  only:
    - stage
    - master

test:
  stage: test
  dependencies:
    - install
  script:
    - npm run test
  artifacts:
    untracked: true
  only:
    - stage
    - master

deployDev:
  stage: deploy
  only:
    - stage
  dependencies:
    - install
    - test
  script:
    - pip3 install awscli
    - aws configure set aws_access_key_id "$DEV_AWS_KEY"
    - aws configure set aws_secret_access_key "$DEV_AWS_SECRET"
    - aws s3 sync ./build/ s3://example.dev
  variables:
    REACT_APP_COGNITO_REGION: $DEV_COGNITO_REGION
    REACT_APP_COGNITO_USER_POOL_ID: $DEV_COGNITO_USER_POOL_ID
    REACT_APP_COGNITO_APP_CLIENT_ID: $DEV_COGNITO_APP_CLIENT_ID
    REACT_APP_COGNITO_IDENTITY_POOL_ID: $DEV_COGNITO_IDENTITY_POOL_ID
    REACT_APP_APPSYNC_URL: $DEV_APPSYNC_URL
    REACT_APP_APPSYNC_REGION: $DEV_APPSYNC_REGION
    REACT_APP_APPSYNC_AUTHENTIACTION_TYPE: $DEV_APPSYNC_AUTHENTIACTION_TYPE

deployProd:
  stage: deploy
  only:
    - master
  dependencies:
    - install
    - test
  script:
    - pip3 install awscli
    - aws configure set aws_access_key_id "$PROD_AWS_KEY"
    - aws configure set aws_secret_access_key "$PROD_AWS_SECRET"
    - aws s3 sync ./build/ s3://example.com
Cheers!
This line from the CRA docs is important: "The environment variables are embedded during the build time." So set the variables before running the build command.
image: node:10.16.0-alpine

stages:
  - build
  - deploy

build_app:
  stage: build
  script:
    - export REACT_APP_SECRET_API_KEY=$API_KEY # set REACT_APP variables before the build command
    - yarn install
    - yarn build
  artifacts:
    name: "$CI_PIPELINE_ID"
    paths:
      - build
    when: on_success

deploy_app:
  stage: deploy
  dependencies:
    - build_app
  script:
    - echo "Set deployment variables"
    - echo "Deployment scripts"