How to pass date arg to my cloudbuild yaml - google-app-engine

My cloudbuild.yaml consists of the following:
- name: 'gcr.io/cloud-builders/gsutil'
  args: ['-m', 'cp', '-r', '/workspace/api-testing/target/cucumber-html-reports', 'gs://testing-reports/$BUILD_ID']
- name: 'gcr.io/cloud-builders/gsutil'
  args: ['-m', 'rm', '-r', 'gs://studio360-testing-reports/latest']
- name: 'gcr.io/cloud-builders/gsutil'
  args: ['-m', 'cp', '-r', '/workspace/api-testing/target/cucumber-html-reports', 'gs://testing-reports/latest']
This way I always have my latest report separated from the older ones. But can I pass a {date} argument or something similar into my first step, so I have a visual ordering of all the older reports?
(Because there is no way to sort the files by last modified in the GCP Storage browser.)
Thanks

Change the first block to this:
- name: 'gcr.io/cloud-builders/gsutil'
  args: ['-m', 'cp', '-r', '/workspace/api-testing/target/cucumber-html-reports', 'gs://testing-reports/${_DATE}_$BUILD_ID']
Then run this:
gcloud builds submit . --substitutions _DATE=$(date +%F_%H:%M:%S)
Then you would have something like this in the bucket:
gs://testing-reports/2020-02-13_14:01:40_8a6a7ed0-62e0-43ed-8f97-aa6eca9c2834
Explanation here and here.
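To see what that substitution produces, the same object name can be assembled locally (a sketch; the BUILD_ID value is just the example ID from above, Cloud Build injects the real one):

```shell
# Assemble the same object path the build would use.
# BUILD_ID is a stand-in here; Cloud Build supplies the real one.
BUILD_ID="8a6a7ed0-62e0-43ed-8f97-aa6eca9c2834"
DATE=$(date +%F_%H:%M:%S)   # e.g. 2020-02-13_14:01:40
echo "gs://testing-reports/${DATE}_${BUILD_ID}"
```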
EDIT:
For automatic builds started by Cloud Build triggers, use this cloudbuild.yaml:
steps:
- name: 'gcr.io/cloud-builders/gsutil'
  entrypoint: 'bash'
  args:
  - '-c'
  - |
    gsutil -m cp -r $FILENAME gs://$BUCKET/$FILENAME-$(date +%F_%H:%M:%S)-$BUILD_ID
This allows the builder to use bash to execute gsutil, so the bash command "date" can be used inside the command.
Good explanation of the syntax by Googler here, and info about entrypoint here.

Pretty sure you should be able to bash out and do something like this:
- name: 'gcr.io/cloud-builders/gsutil'
  entrypoint: 'bash'
  args:
  - -c
  - |
    gsutil -m cp -r /workspace/api-testing/target/cucumber-html-reports gs://testing-reports/$BUILD_ID-$(date +%m-%d-%Y)
To my knowledge, you can't run system commands in substitution variables or environment variables (or at least I haven't been able to figure out how).

Related

Github action check if a file already exists

I have the following GitHub Action workflow which allows me to download an image.
If the file already exists, I have to skip the "Commit files" and "Push changes" steps.
How can I check whether the file already exists, so that nothing is done if it does?
on:
  workflow_dispatch:
name: Scrape File
jobs:
  build:
    name: Build
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
        name: Check out current commit
      - name: Url
        run: |
          URL=$(node ./action.js)
          echo $URL
          echo "URL=$URL" >> $GITHUB_ENV
      - uses: suisei-cn/actions-download-file@v1
        id: downloadfile
        name: Download the file
        with:
          url: ${{ env.URL }}
          target: assets/
      - run: ls -l 'assets/'
      - name: Commit files
        run: |
          git config --local user.email "41898282+github-actions[bot]@users.noreply.github.com"
          git config --local user.name "github-actions[bot]"
          git add .
          git commit -m "Add changes" -a
      - name: Push changes
        uses: ad-m/github-push-action@master
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
          branch: ${{ github.ref }}
There are a few options here - you can go directly with bash and do something like this:
if test -f "$FILE"; then
  # file exists
fi
or use one of the existing actions like this:
- name: Check file existence
  id: check_files
  uses: andstor/file-existence-action@v1
  with:
    files: "assets/${{ env.URL }}"
- name: File exists
  if: steps.check_files.outputs.files_exists == 'true'
  run: echo "It exists !"
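If you prefer the plain-bash route over a marketplace action, the same check can set a step output directly (a sketch; `assets/myfile.png` is a placeholder path, and `GITHUB_OUTPUT` is simulated with a temp file when run outside of Actions):

```shell
# Emulate a workflow 'run:' step that records whether a file exists.
# On a real runner, GITHUB_OUTPUT is provided for you.
GITHUB_OUTPUT="${GITHUB_OUTPUT:-$(mktemp)}"
FILE="assets/myfile.png"   # placeholder for the downloaded file's path
if test -f "$FILE"; then
  echo "files_exists=true" >> "$GITHUB_OUTPUT"
else
  echo "files_exists=false" >> "$GITHUB_OUTPUT"
fi
cat "$GITHUB_OUTPUT"
```

A later step can then gate on `steps.<id>.outputs.files_exists`, exactly like with the action above.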
WARNING: Linux (and maybe macOS) only solution ahead!
I was dealing with a very similar situation some time ago and developed a method that not only checks for added files, but is also useful if you want to check for modified or deleted files or directories as well.
Warning:
This solution works only if the file is added/modified/deleted in git repository.
Introduction:
The command `git status --short` (the same as running `git status -s`) will return a list of untracked, added, deleted and modified files. For example:
 D deleted_foo
 M modified_foo
?? untracked_dir_foo/
?? untracked_file_foo
A  tracked_n_added_foo
Understanding `git status -s` output:
When you read the output, you will see some lines in this form:
** filename
** dirname/
Note that here ** represents the status prefix of the line (codes like ` D`, `??` etc.).
Here is a summary of all ** codes:

**   Meaning
 D   File/dir has been deleted.
 M   File/dir has been modified.
??   File/dir has been added but not tracked using `git add [FILENAME]`.
A    File/dir has been added and also tracked using `git add [FILENAME]`.
NOTE: Take care of the spaces! Using, for example, "M" instead of " M" (with the leading space) in the following solution will not work as expected!
Solution:
Shell part of solution:
We can grep the output of git status -s to check whether a file/dir was added/modified/deleted.
The shell part of the solution goes like this:
if git status -s | grep -x "** [FILENAME]"; then
  # Do whatever you wanna do on match
else
  # Do whatever you wanna do on no-match
fi
Note: Get desired ** from the table above and replace [FILENAME] with filename.
For example, to check whether a file named foo was modified, use:
git status -s | grep -x " M foo"
Explanation: We use git status -s to get the output and pipe the output to grep. We also use command line option -x with grep so as to match whole line.
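The effect of `-x` can be seen in isolation with made-up status lines; the leading space is what makes or breaks the match:

```shell
# grep -x matches WHOLE lines only, so " M foo" and "M foo" differ.
printf ' M foo\n?? bar\n' | grep -xc ' M foo'         # whole line matches -> 1
printf ' M foo\n?? bar\n' | grep -xc 'M foo' || true  # no whole-line match -> 0
```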
Workflow part of solution:
A very simple solution will go like this:
...
- name: Check for file
  id: file_check
  run: |
    if git status -s | grep -x "** [FILENAME]"; then
      echo "check_result=true" >> $GITHUB_OUTPUT
    else
      echo "check_result=false" >> $GITHUB_OUTPUT
    fi
...
- name: Run dependent step
  if: steps.file_check.outputs.check_result == 'true'
  run: |
    # Do whatever you wanna do on file found to be
    # added/modified/deleted, based on what you set '**' to
...

Google Cloud Build: Moving files

I want to move the file index.js from the root of the project to the dist/project_name. This is the step from cloudbuild.yaml:
- name: 'gcr.io/cloud-builders/docker'
  entrypoint: /bin/bash
  args: ['-c', 'mv', 'index.js', 'dist/project_name']
But the step is failing with the next error:
Already have image (with digest): gcr.io/cloud-builders/docker
mv: missing file operand
Try 'mv --help' for more information.
How I can fix this issue?
Because you're using bash -c, I think you need to encapsulate the entire "script" in a string:
args: ['-c', 'mv index.js dist/project_name']
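The reason the original step fails: with `bash -c`, only the first argument after `-c` is the script; any later arguments become positional parameters (`$0`, `$1`, …) rather than part of the command, so `mv` runs with no operands. A sketch of both forms (the `demo/` paths are illustrative):

```shell
# Broken form: 'index.js' and 'dist/project_name' become $0 and $1,
# so mv sees no operands and complains ("mv: missing file operand").
bash -c 'mv' 'index.js' 'dist/project_name' 2>&1 || true

# Working form: the whole command is one string.
mkdir -p demo/dist && touch demo/index.js
bash -c 'mv demo/index.js demo/dist/'
ls demo/dist
```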
My personal preference (and it's just that), is to not embed JSON ([...]) in YAML. This makes the result in this case slightly clearer and makes it easier to embed a multiline script:
args:
- bash
- -c
- |
  mv index.js dist/project_name
NOTE: tools like YAMLlint will do this for you too.

How can I pass environment variables to mongo docker-entrypoint-initdb.d?

I am trying to do the following tutorial:
https://itnext.io/docker-mongodb-authentication-kubernetes-node-js-75ff995151b6
However, in there, they use raw values for the mongo init.js file that is placed within docker-entrypoint-initdb.d folder.
I would like to use environment variables that come from my CI/CD system (Gitlab). Does anyone know how to pass environment variables to the init.js file? I have tried several things like for example use init.sh instead for the shell but without any success.
If I run the shell version manually, I can get it working because I call mongo with --eval and pass the values. However, the docker-entrypoint-initdb.d scripts are called automatically, so I have no control over how they are invoked and I don't know what I could do to achieve what I want.
Thank you in advance and regards.
You can make use of a shell script to retrieve env variables and create the user.
initdb.d/init-mongo.sh
set -e

mongo <<EOF
use $MONGO_INITDB_DATABASE
db.createUser({
  user: '$MONGO_INITDB_USER',
  pwd: '$MONGO_INITDB_PWD',
  roles: [{
    role: 'readWrite',
    db: '$MONGO_INITDB_DATABASE'
  }]
})
EOF
docker-compose.yml
version: "3.7"
services:
  mongodb:
    container_name: "mongodb"
    image: mongo:4.4
    hostname: mongodb
    restart: always
    volumes:
      - ./data/mongodb/mongod.conf:/etc/mongod.conf
      - ./data/mongodb/initdb.d/:/docker-entrypoint-initdb.d/
      - ./data/mongodb/data/db/:/data/db/
    environment:
      - MONGO_INITDB_ROOT_USERNAME=root
      - MONGO_INITDB_ROOT_PASSWORD=root
      - MONGO_INITDB_DATABASE=development
      - MONGO_INITDB_USER=mongodb
      - MONGO_INITDB_PWD=mongodb
    ports:
      - 27017:27017
    command: [ "-f", "/etc/mongod.conf" ]
Now you can connect to development database using mongodb as user and password credentials.
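This works because the heredoc delimiter is unquoted (`<<EOF`, not `<<'EOF'`), so the shell expands the variables before the text is piped to mongo. A minimal sketch using `cat` as a stand-in for `mongo`:

```shell
# Unquoted heredocs expand environment variables before the
# receiving command (mongo in the real script) sees the text.
export MONGO_INITDB_DATABASE=development
export MONGO_INITDB_USER=mongodb
cat <<EOF
use $MONGO_INITDB_DATABASE
db.createUser({ user: '$MONGO_INITDB_USER' })
EOF
```

The output contains the literal values `development` and `mongodb`, which is exactly what mongo receives in the init script above.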
Use a shell script (e.g. mongo-init.sh) to access the variables. You can still run JavaScript code inside it, as below.
set -e

mongo <<EOF
use admin
db.createUser({
  user: '$MONGO_ADMIN_USER',
  pwd: '$MONGO_ADMIN_PASSWORD',
  roles: [{
    role: 'readWrite',
    db: 'dummydb'
  }]
})
EOF
Shebang line is not necessary at the beginning as this file will be sourced.
Until recently, I simply used a .sh shell script in the docker-entrypoint-initdb.d directory to access ENV variables, much like @Lazaro's answer.
It is now possible to access environment variables from javascript files using process.env, provided the file is run with the newer mongosh instead of mongo, which is now deprecated.
However, according to the Docs (see 'Initializing a fresh instance'), mongosh is only used for .js files in docker-entrypoint-initdb.d if using version 6 or greater. I can confirm this is working using the mongo:6 image tag.
You can use envsubst.
If the command is not found, install it on your runner's host if you use shell runners; otherwise, install it within the docker image used by the runner, or directly in your script.
(NB: Your link isn't free, so I can't adapt to your situation :p )
Example:
init.js.template:
console.log('$GREET $PEOPLE $PUNCTUATION')
console.log('Pipeline from $CI_COMMIT_BRANCH')
gitlab_ci.yml:
variables:
  GREET: "hello"
  PEOPLE: "world"
  PUNCTUATION: "!"

# ...

script:
  - (envsubst < path/to/init.js.template) > path/to/init.js
  - cat path/to/init.js
Output:
$ (envsubst < init.js.template) > init.js
$ cat init.js
console.log('hello world !')
console.log('Pipeline from master')
In the end, the answer is that you can use a .sh file instead of a .js file within the docker-entrypoint-initdb.d folder. Within the sh script, you can use environment variables directly. However, I could not do that at first because I had a typo and the environment variables were not created properly.
I prefer this method because it allows you to keep a normal .js file which you lint instead of embedding the .js file into a string.
Create a dockerfile like so:
FROM mongo:5.0.9
USER mongodb
WORKDIR /docker-entrypoint-initdb.d
COPY env_init_mongo.sh env_init_mongo.sh
WORKDIR /writing
COPY mongo_init.js mongo_init.js
WORKDIR /db/data
At the top of your mongo_init.js file, you can just define the variables you need:
db_name = DB_NAME
schema_version = SCHEMA_VERSION
and then in your env_init_mongo.sh file, you can just replace the strings you need with environment variables or add lines to the top of the file:
mongo_init="/writing/mongo_init.js"
sed "s/SCHEMA_VERSION/$SCHEMA_VERSION/g" -i $mongo_init
sed "s/DB_NAME/${MONGO_INITDB_DATABASE}/g" -i $mongo_init
sed "1s/^/use ${MONGO_INITDB_DATABASE}\n/" -i $mongo_init # add to top of file
mongo < $mongo_init
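The sed replacements can be tried in isolation (a sketch; the two-line `mongo_init.js` stands in for the real file, and the bare `-i` plus `\n` in a replacement are GNU sed features):

```shell
# Rewrite placeholder tokens in the JS file using environment values,
# then prepend the 'use <db>' line, as env_init_mongo.sh does.
export MONGO_INITDB_DATABASE=development
export SCHEMA_VERSION=3
printf 'db_name = DB_NAME\nschema_version = SCHEMA_VERSION\n' > mongo_init.js
sed -i "s/SCHEMA_VERSION/$SCHEMA_VERSION/g" mongo_init.js
sed -i "s/DB_NAME/${MONGO_INITDB_DATABASE}/g" mongo_init.js
sed -i "1s/^/use ${MONGO_INITDB_DATABASE}\n/" mongo_init.js   # add to top of file
cat mongo_init.js
```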

Running conditional builds on gcloud app engine

I have the following in my cloudbuild.yml file
steps:
- name: gcr.io/cloud-builders/npm
  args: ['install', 'app']
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['app', 'deploy', 'app/${_GAE_APP_YAML}.yaml']
# Following will deploy only if the branch is develop to avoid having two testnet environments
- name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: 'bash'
  args:
  - '-c'
  - |
    [[ "$BRANCH_NAME" == "develop" ]] && gcloud app deploy app/${_GAE_APP_TESTNET_YAML}.yaml
timeout: 1800s
Basically, I want the first and second steps to execute every time. However, I want the third step to execute only if BRANCH_NAME=develop.
All the steps run successfully if BRANCH_NAME=develop. However, when I commit to master (BRANCH_NAME is not develop), I get the following error:
Finished Step #1
Starting Step #2
Step #2: Already have image (with digest): gcr.io/cloud-builders/gcloud
Finished Step #2
ERROR
ERROR: build step 2 "gcr.io/cloud-builders/gcloud" failed: exit status 1
I tried to login to the container on my local and test it like this
$ docker run --rm -it --entrypoint bash gcr.io/cloud-builders/gcloud
root@ac7edd78bea4:/# export BRANCH_NAME=develop
root@ac7edd78bea4:/# echo $BRANCH_NAME
develop
root@ac7edd78bea4:/# [[ "$BRANCH_NAME" == "develop" ]] && echo "kousgubh"
kousgubh
root@ac7edd78bea4:/# [[ "$BRANCH_NAME" == "ddfevelop" ]] && echo "kousgubh"   # Doesn't print anything
So, the condition seems fine. What am I missing?
I feel like there's a better way to do this, though I can't think of it at the moment.
A quick-n-dirty answer to your question is to invert the logic a bit:
[[ "$BRANCH_NAME" != "develop" ]] || gcloud app deploy app/${_GAE_APP_TESTNET_YAML}.yaml
This works because when $BRANCH_NAME == "develop", the first expression evaluates to true and the second expression is not run (|| is a short-circuiting OR). When $BRANCH_NAME != "develop", the first expression is false, so the second expression is evaluated.
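This also explains the original error: with `&&`, the exit status of the failed `[[ ]]` test becomes the step's exit status when the branch doesn't match, so Cloud Build marks the step failed. The inverted `||` form exits 0 either way. A sketch with `echo` standing in for the deploy command:

```shell
# && form: when the branch doesn't match, the test's exit status (1)
# becomes the step's exit status, and Cloud Build fails the step.
BRANCH_NAME=master bash -c '[[ "$BRANCH_NAME" == "develop" ]] && echo deploying'
echo "&& form exit: $?"    # -> 1

# || form: the true test short-circuits the ||, and the step exits 0.
BRANCH_NAME=master bash -c '[[ "$BRANCH_NAME" != "develop" ]] || echo deploying'
echo "|| form exit: $?"    # -> 0
```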

Ansible playbook copy failed - msg: could not find src

I am new to Ansible and I am trying to copy a file from one directory to another directory on a remote RH machine using Ansible.
---
- hosts: all
  user: root
  sudo: yes
  tasks:
    - name: touch
      file: path=/home/user/test1.txt state=touch
    - name: file
      file: path=/home/user/test1.txt mode=777
    - name: copy
      copy: src=/home/user/test1.txt dest=/home/user/Desktop/test1.txt
But it throws error as below
[root@nwb-ansible ansible]# ansible-playbook a.yml -i hosts
SSH password:
PLAY [all] ********************************************************************
GATHERING FACTS ***************************************************************
ok: [auto-0000000190]
TASK: [touch] *****************************************************************
changed: [auto-0000000190]
TASK: [file] ******************************************************************
ok: [auto-0000000190]
TASK: [copy] ******************************************************************
failed: [auto-0000000190] => {"failed": true}
msg: could not find src=/home/user/test1.txt
FATAL: all hosts have already failed -- aborting
PLAY RECAP ********************************************************************
to retry, use: --limit @/root/a.retry
auto-0000000190 : ok=3 changed=1 unreachable=0 failed=1
[root@nwb-ansible ansible]#
The file has been created in the directory, and both the file and the directory have got permissions 777.
I am getting the same error message if I try to just copy already existing file using ansible.
I have tried as non-root user as well but no success.
Thanks a lot in advance,
Angel
Luckily this is a simple fix: all you need to do is add the following to the copy task:
remote_src: yes
If you have ansible >=2.0 you could use remote_src, like this:
---
- hosts: all
  user: root
  sudo: yes
  tasks:
    - name: touch
      file: path=/home/user/test1.txt state=touch
    - name: file
      file: path=/home/user/test1.txt mode=777
    - name: copy
      copy: src=/home/user/test1.txt dest=/home/user/Desktop/test1.txt remote_src=yes
This doesn't support recursive copy.
What is your Ansible version? Newer versions of Ansible support what you want. If you cannot upgrade Ansible, try the cp command for a simple file copy; cp -r copies recursively.
- name: copy
  shell: cp /home/user/test1.txt /home/user/Desktop/test1.txt
