How to get a Rundeck job PID and echo it

How do I get a Rundeck job's PID and then echo it? It's a simple task: echoing a Rundeck job's PID.

ls &
echo $!
When I tried to execute these commands in the Local Command option, the result is an error: 'ls: cannot access '&': No such file'

Rundeck encapsulates jobs in Java threads; a good way to track Rundeck threads is to use VisualVM. That said, you can obtain step PIDs in the usual UNIX/Linux way. The best approach is to do it in script steps, for example:
ls &
echo $!
Job Definition example:
- defaultTab: nodes
  description: ''
  executionEnabled: true
  id: 0aeaa0f4-d090-4083-b0a5-2878c5f558d1
  loglevel: INFO
  name: HelloWorld
  nodeFilterEditable: false
  plugins:
    ExecutionLifecycle: null
  scheduleEnabled: true
  sequence:
    commands:
    - fileExtension: .sh
      interpreterArgsQuoted: false
      script: |-
        ls &
        echo $!
      scriptInterpreter: /bin/bash
    keepgoing: false
    strategy: node-first
  uuid: 0aeaa0f4-d090-4083-b0a5-2878c5f558d1
Result: the step output prints the PID of the backgrounded process.
Update: the same job, but using a Command step:
- defaultTab: nodes
  description: ''
  executionEnabled: true
  id: 8c59758e-32f3-4166-92b0-50d818074368
  loglevel: INFO
  name: HelloWorld
  nodeFilterEditable: false
  plugins:
    ExecutionLifecycle: null
  scheduleEnabled: true
  sequence:
    commands:
    - exec: ls & echo $!
    keepgoing: false
    strategy: node-first
  uuid: 8c59758e-32f3-4166-92b0-50d818074368
Result: the step output prints the PID of the backgrounded process.
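Outside Rundeck, the mechanics are plain shell: $! expands to the PID of the most recently backgrounded process. A minimal local sketch (the sleep is just a stand-in for a long-running command):

```shell
#!/bin/bash
# "&" starts the command in the background; the shell keeps running.
sleep 2 &
# $! holds the PID of the most recent background job.
pid=$!
echo "background PID: $pid"
# You can signal or wait on that PID like any other process.
wait "$pid"
echo "done"
```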

Related

GitHub Actions: check if a file already exists

I have the following GitHub Action, which allows me to download an image.
I have to make sure that if the file already exists, the "Commit file" and "Push changes" steps are skipped.
How can I check whether the file already exists? If it already exists, nothing should be done.
on:
  workflow_dispatch:
name: Scrape File
jobs:
  build:
    name: Build
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
        name: Check out current commit
      - name: Url
        run: |
          URL=$(node ./action.js)
          echo $URL
          echo "URL=$URL" >> $GITHUB_ENV
      - uses: suisei-cn/actions-download-file@v1
        id: downloadfile
        name: Download the file
        with:
          url: ${{ env.URL }}
          target: assets/
      - run: ls -l 'assets/'
      - name: Commit files
        run: |
          git config --local user.email "41898282+github-actions[bot]@users.noreply.github.com"
          git config --local user.name "github-actions[bot]"
          git add .
          git commit -m "Add changes" -a
      - name: Push changes
        uses: ad-m/github-push-action@master
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
          branch: ${{ github.ref }}
There are a few options here - you can go directly with bash and do something like this:
if test -f "$FILE"; then
  # file exists
  echo "file exists, skipping commit and push"
fi
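A runnable version of that pattern, with a hypothetical assets path standing in for the downloaded file:

```shell
#!/bin/bash
# Hypothetical path -- replace with the actual downloaded asset.
FILE="assets/example.png"
mkdir -p assets && touch "$FILE"   # simulate a previous download
if test -f "$FILE"; then
  echo "exists: skip commit and push"
else
  echo "missing: commit and push"
fi
```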
or use one of the existing actions like this:
- name: Check file existence
  id: check_files
  uses: andstor/file-existence-action@v1
  with:
    files: "assets/${{ env.URL }}"
- name: File exists
  if: steps.check_files.outputs.files_exists == 'true'
  run: echo "It exists!"
WARNING: Linux (and maybe MacOS) only solution ahead!
I was dealing with a very similar situation some time ago and developed a method that not only checks for added files, but is also useful if you want to check for modified or deleted files or directories.
Warning:
This solution works only if the file is added/modified/deleted in git repository.
Introduction:
The command git status --short returns a list of untracked, deleted, and modified files. For example:
 D deleted_foo
 M modified_foo
?? untracked_dir_foo/
?? untracked_file_foo
A  tracked_n_added_foo
Note that git status -s is the short form of the same command.
Understanding `git status -s` output:
When you read the output, you will see some lines in this form:
** filename
** dirname/
Note that here ** represents the status code at the start of the line (the likes of " D", "??", etc.).
Here is a summary of the possible ** values:
 **   Meaning
 D    File/dir has been deleted.
 M    File/dir has been modified.
 ??   File/dir has been added but not tracked using git add [FILENAME].
 A    File/dir has been added and also tracked using git add [FILENAME].
NOTE: Take care of the spaces! Using, for example, "M " instead of " M" in the following solution will not work as expected!
Solution:
Shell part of solution:
We can grep the output of git status -s to check whether a file/dir was added/modified/deleted.
The shell part of the solution goes like this:
if git status -s | grep -x "** [FILENAME]"; then
  # Do whatever you want on match
else
  # Do whatever you want on no-match
fi
Note: Take the desired ** from the table above and replace [FILENAME] with the filename.
For example, to check whether a file named foo was modified, use:
git status -s | grep -x " M foo"
Explanation: We use git status -s to get the output and pipe it to grep. We also use the command-line option -x with grep so that it matches the whole line.
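The leading space is easy to get wrong, so here is a self-contained check against a fake git status -s line (no repository needed):

```shell
#!/bin/bash
# A file modified in the worktree appears as " M foo" -- note the leading space.
status_line=" M foo"
# grep -x matches the whole line; -q suppresses output.
if printf '%s\n' "$status_line" | grep -qx " M foo"; then
  echo "foo was modified"
fi
# Without the leading space the pattern does NOT match:
printf '%s\n' "$status_line" | grep -qx "M foo" || echo "no match without the space"
```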
Workflow part of solution:
A very simple solution will go like this:
...
- name: Check for file
  id: file_check
  run: |
    if git status -s | grep -x "** [FILENAME]"; then
      echo "check_result=true" >> $GITHUB_OUTPUT
    else
      echo "check_result=false" >> $GITHUB_OUTPUT
    fi
...
- name: Run dependent step
  if: steps.file_check.outputs.check_result == 'true'
  run: |
    # Do whatever you want on the file being found to be
    # added/modified/deleted, based on what you set '**' to
...

Combine Ansible async with retries

OK, now I have a really tricky case with Ansible.
I need to run my task asynchronously with retries (i.e. with an until loop) and then fail the task if the timeout is exceeded. So two parameters must control my play: the retry count (the play fails if it is exceeded) and the timeout (the play fails if it is exceeded).
I can implement each strategy separately:
- name: My shell task with retries
  shell: set -o pipefail && ./myscript.sh 2>&1 | tee -a "{{mylogfile}}"
  args:
    chdir: "{{myscript_dir}}/"
    executable: /bin/bash
  register: my_job
  until: my_job is succeeded
  retries: "{{test_retries}}"
  delay: 0
or with async:
- name: My async shell task
  shell: set -o pipefail && ./myscript.sh 2>&1 | tee -a "{{mylogfile}}"
  args:
    chdir: "{{myscript_dir}}/"
    executable: /bin/bash
  register: my_job
  async: "{{test_timeout}}"
  poll: 0
- name: Tracking for async shell task
  wait_for:
    path: "{{mylogfile}}"
    search_regex: '^.*Done in \S+'
    timeout: "{{test_timeout}}"
  ignore_errors: yes
  register: result
The second task parses the previous task's log until the job is finished, i.e. it searches for the "Done in x seconds" string. Maybe it's not the best practice and I should use async_status, but I can't find how to set a timeout with it (it only has retries for checking the job status, which seems pretty silly to me).
Sooo... Can I combine both strategies to control my task both with retries count and timeout?
UPD: I tried to use both until and async on the shell module, and surprisingly my play doesn't fail, but the retries don't work. The task was just started as a fire-and-forget task and executed only once, without retries. So this is not an option.
- name: My shell task with retries and async
  shell: set -o pipefail && ./myscript.sh 2>&1 | tee -a "{{mylogfile}}"
  args:
    chdir: "{{myscript_dir}}/"
    executable: /bin/bash
  register: my_job
  until: my_job is succeeded
  retries: "{{test_retries}}"
  delay: 0
  async: "{{test_timeout}}"
  poll: 0
- name: My async shell task
  wait_for:
    path: "{{mylogfile}}"
    search_regex: '^.*Done in \S+'
    timeout: "{{test_timeout}}"
  ignore_errors: yes
  register: result
Have you tried using retries and delay together?
- name: Checking the build status 1
  async_status:
    jid: "{{ job1.ansible_job_id }}"
  register: job_result
  until: job_result.finished
  retries: 10
  delay: 1
  when: job1.ansible_job_id is defined
This will check the async status 10 times at a 1-second interval, and the task fails if the "until" condition is not satisfied within those 10 seconds.
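Combining the two, one sketch (untested; task and variable names taken from the question) is to launch the shell task asynchronously and let async_status supply both controls: retries bounds the number of checks, and retries multiplied by delay acts as the effective timeout:

```yaml
- name: My async shell task
  shell: set -o pipefail && ./myscript.sh 2>&1 | tee -a "{{mylogfile}}"
  args:
    chdir: "{{myscript_dir}}/"
    executable: /bin/bash
  async: "{{test_timeout}}"
  poll: 0
  register: my_job

- name: Wait for completion (retries * delay ~ timeout in seconds)
  async_status:
    jid: "{{ my_job.ansible_job_id }}"
  register: job_result
  until: job_result.finished
  retries: "{{test_timeout}}"
  delay: 1
```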

Running conditional builds on gcloud app engine

I have the following in my cloudbuild.yml file
steps:
- name: gcr.io/cloud-builders/npm
  args: ['install', 'app']
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['app', 'deploy', 'app/${_GAE_APP_YAML}.yaml']
# The following will deploy only if the branch is develop, to avoid having two testnet environments
- name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: 'bash'
  args:
  - '-c'
  - |
    [[ "$BRANCH_NAME" == "develop" ]] && gcloud app deploy app/${_GAE_APP_TESTNET_YAML}.yaml
timeout: 1800s
Basically, I want the first and second steps to execute every time. However, I want the third step to execute only if BRANCH_NAME=develop.
All the steps run successfully when BRANCH_NAME=develop. However, when I commit to master (BRANCH_NAME is not develop), I get the following error:
Finished Step #1
Starting Step #2
Step #2: Already have image (with digest): gcr.io/cloud-builders/gcloud
Finished Step #2
ERROR
ERROR: build step 2 "gcr.io/cloud-builders/gcloud" failed: exit status 1
I tried to login to the container on my local and test it like this
$ docker run --rm -it --entrypoint bash gcr.io/cloud-builders/gcloud
root@ac7edd78bea4:/# export BRANCH_NAME=develop
root@ac7edd78bea4:/# echo $BRANCH_NAME
develop
root@ac7edd78bea4:/# [[ "$BRANCH_NAME" == "develop" ]] && echo "kousgubh"
kousgubh
root@ac7edd78bea4:/# [[ "$BRANCH_NAME" == "ddfevelop" ]] && echo "kousgubh"   # Doesn't print anything
So, the condition seems fine. What am I missing?
I feel like there's a better way to do this, though I can't think of it at the moment.
A quick-n-dirty answer to your question is to invert the logic a bit:
[[ "$BRANCH_NAME" != "develop" ]] || gcloud app deploy app/${_GAE_APP_TESTNET_YAML}.yaml
This works because when $BRANCH_NAME == "develop", the first expression evaluates to true and the second expression is not run (|| is a short-circuiting OR). When $BRANCH_NAME != "develop", the first expression is false, so the second expression is evaluated. The original && version fails because when the test is false, the compound command exits non-zero, and Cloud Build treats the step as failed; the || version exits zero in that case.
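The behavior is easy to verify in plain bash; here a shell function stands in for the gcloud deploy command:

```shell
#!/bin/bash
deploy() { echo "deploying testnet"; }   # stand-in for "gcloud app deploy ..."

BRANCH_NAME="develop"
[[ "$BRANCH_NAME" != "develop" ]] || deploy   # runs deploy

BRANCH_NAME="master"
[[ "$BRANCH_NAME" != "develop" ]] || deploy   # skips deploy, exit status 0
echo "step exit status: $?"
```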

Ansible playbook copy failed - msg: could not find src

I am new to Ansible and I am trying to copy a file from one directory to another directory on a remote RH machine using Ansible.
---
- hosts: all
  user: root
  sudo: yes
  tasks:
  - name: touch
    file: path=/home/user/test1.txt state=touch
  - name: file
    file: path=/home/user/test1.txt mode=777
  - name: copy
    copy: src=/home/user/test1.txt dest=/home/user/Desktop/test1.txt
But it throws an error, as below:
[root@nwb-ansible ansible]# ansible-playbook a.yml -i hosts
SSH password:
PLAY [all] ********************************************************************
GATHERING FACTS ***************************************************************
ok: [auto-0000000190]
TASK: [touch] *****************************************************************
changed: [auto-0000000190]
TASK: [file] ******************************************************************
ok: [auto-0000000190]
TASK: [copy] ******************************************************************
failed: [auto-0000000190] => {"failed": true}
msg: could not find src=/home/user/test1.txt
FATAL: all hosts have already failed -- aborting
PLAY RECAP ********************************************************************
to retry, use: --limit @/root/a.retry
auto-0000000190 : ok=3 changed=1 unreachable=0 failed=1
[root@nwb-ansible ansible]#
The file is created in the directory, and both the file and the directory get permissions 777.
I get the same error message if I try to copy an already existing file using Ansible.
I have tried as a non-root user as well, but with no success.
Thanks a lot in advance,
Angel
Luckily this is a simple fix; all you need to do in the copy task is add
remote_src: yes
If you have Ansible >= 2.0, you can use remote_src, like this:
---
- hosts: all
  user: root
  sudo: yes
  tasks:
  - name: touch
    file: path=/home/user/test1.txt state=touch
  - name: file
    file: path=/home/user/test1.txt mode=777
  - name: copy
    copy: src=/home/user/test1.txt dest=/home/user/Desktop/test1.txt remote_src=yes
Note that remote_src does not support recursive copying.
What is your Ansible version? Newer versions of Ansible support what you want. If you cannot upgrade Ansible, try the cp command for a simple file copy; cp -r copies recursively.
- name: copy
  shell: cp /home/user/test1.txt /home/user/Desktop/test1.txt

Ansible - Loops with_fileglob - become_user not working -- running action on source machine

Environment: Ansible 1.9.4 or 1.9.2, Linux CentOS 6.5.
I have a role named build, where:
$ cat roles/build/defaults/main.yml:
---
build_user: confman
build_group: confman
tools_dir: ~/tools
$ cat roles/build/tasks/main.yml
- debug: msg="User is = {{ build_user }} -- {{ tools_dir }}"
  tags:
  - koba

- name: Set directory ownership
  file: path="{{ tools_dir }}" owner={{ build_user }} group={{ build_group }} mode=0755 state=directory recurse=yes
  become_user: "{{ build_user }}"
  tags:
  - koba

- name: Set private key file access
  file: path="{{ item }}" owner={{ build_user }} group={{ build_group }} mode=0600 state=touch
  with_fileglob:
  - "{{ tools_dir }}/vmwaretools-lib-*/lib/insecure_private_key"
  # with_items:
  # - ~/tools/vmwaretools/lib/insecure_private_key
  become_user: "{{ build_user }}"
  tags:
  - koba
In my workspace: hosts file (inventory) contains:
[ansible_servers]
server01.project.jenkins
site.yml (playbook) contains:
---
- hosts: ansible_servers
  sudo: yes
  roles:
  - build
I'm running the following command:
$ ansible-playbook site.yml -i hosts -u confman --private-key ${DEPLOYER_KEY_FILE} -t koba
I'm getting the following error. For some reason, become_user combined with the with_fileglob loop is NOT using the ~ (home directory) of the confman user (set in the {{ build_user }} variable); instead, it's picking up my own user ID (c123456).
In the console output for the debug action, it's clear that the user (due to become_user) is confman and the value of the tools_dir variable is ~/tools.
PLAY [ansible_servers] ********************************************************
GATHERING FACTS ***************************************************************
ok: [server01.project.jenkins]
TASK: [build | debug msg="User is = {{ build_user }} -- {{ tools_dir }}"] *****
ok: [server01.project.jenkins] => {
"msg": "User is = confman -- ~/tools"
}
TASK: [build | Set directory ownership] ***************************************
changed: [server01.project.jenkins]
TASK: [build | Set private key file access] ***********************************
failed: [server01.project.jenkins] => (item=/user/home/c123456/tools/vmwaretools-lib-1.0.8-SNAPSHOT/lib/insecure_private_key) => {"failed": true, "item": "/user/home/c123456/tools/vmwaretools-lib-1.0.8-SNAPSHOT/lib/insecure_private_key", "parsed": false}
BECOME-SUCCESS-ajtxlfymjcquzuolgfrrxbssfolqgrsg
Traceback (most recent call last):
  File "/tmp/ansible-tmp-1449615824.69-82085663620220/file", line 1994, in <module>
    main()
  File "/tmp/ansible-tmp-1449615824.69-82085663620220/file", line 372, in main
    open(path, 'w').close()
IOError: [Errno 2] No such file or directory: '/user/home/c123456/tools/vmwaretools-lib-1.0.8-SNAPSHOT/lib/insecure_private_key'
OpenSSH_5.3p1, OpenSSL 1.0.1e-fips 11 Feb 2013
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: Applying options for *
debug1: auto-mux: Trying existing master
debug1: mux_client_request_session: master session id: 2
debug1: mux_client_request_session: master session id: 2
Shared connection to server01.project.jenkins closed.
As per the error above, the file it tries for the item variable is /user/home/c123456/tools/vmwaretools-lib-1.0.8-SNAPSHOT/lib/insecure_private_key, but there's no such file in my own home directory. This file does exist in confman's home directory.
i.e. the following files exist:
/user/home/confman/tools/vmwaretools-lib-1.0.7-SNAPSHOT/lib/insecure_private_key
/user/home/confman/tools/vmwaretools-lib-1.0.7/lib/insecure_private_key
/user/home/confman/tools/vmwaretools-lib-1.0.8-SNAPSHOT/lib/insecure_private_key
All I want is to iterate over these files in the ~confman/tools/vmwaretools-lib-*/.. location containing the private key file and change their permissions, but using with_fileglob with become_user to set the user during the action is NOT working.
If I comment out the with_fileglob section and uncomment the with_items section in tasks/main.yml, then become_user works fine and picks ~confman (instead of ~c123456), giving the following output:
TASK: [build | Set private key file access] ***********************************
changed: [server01.project.jenkins] => (item=~/tools/vmwaretools/lib/insecure_private_key)
One strange thing I found: there is no user c123456 on the target machine (server01.project.jenkins). That tells me with_fileglob is using the source/local/master Ansible machine (where I'm running the ansible-playbook command) to evaluate the glob pattern, instead of evaluating it over SSH on server01.project.jenkins. It's true that on the local/source Ansible machine I'm logged in as c123456. Strangely, the output still shows the target machine, even though the pattern path comes from the source machine, as seen above.
failed: [server01.project.jenkins]
Any idea! what I'm missing here? Thanks.
PS:
- I don't want to set tools_dir: "~{{ build_user }}/tools" or hardcode it, as a user can pass the tools_dir variable on the command line (while running the ansible-playbook command, using -e / --extra-vars "tools_dir=/production/slave/tools").
Researching further, I found that with_fileglob is for a "list of local files to iterate over, described using shell fileglob notation (e.g., /playbooks/files/fooapp/*)". So what should I use to iterate with a glob pattern on the target/remote server (server01.project.jenkins in my case)?
Using with_fileglob, the glob is always expanded on the local/source/master machine where you run ansible-playbook/ansible. The Ansible docs for loops don't make this clear (http://docs.ansible.com/ansible/playbooks_loops.html#id4), but I found clarification here: https://github.com/lorin/ansible-quickref
Thus, while expanding the pattern, it picks up the ~ of user c123456.
The console output still shows [server01.project.jenkins] because reading the inventory/hosts file is a separate processing step.
I tried with_lines as well, per this post: ansible: Is there something like with_fileglobs for files on remote machine?
But when I tried the following, it still read the pattern on the local machine instead of the target machine (lookups like with_items also run on the controlling machine, not the target):
file: path="{{ item }}" ....
with_items: ls -1 {{ tools_dir }}/vmwaretools-lib-*/lib/insecure_private_key
become_user: "{{ build_user }}"
Finally, to solve the issue, I fell back to a plain OS command using the shell module (not a great solution if the target is not a Linux-type OS), but for now I'm good:
- name: Set private key file access
  shell: "chmod 0400 {{ tools_dir }}/vmtools-lib-*/lib/insecure_private_key"
  become_user: "{{ build_user }}"
  tags:
  - koba
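An alternative that stays with the file module (a sketch, untested; it reuses the question's variables) is to expand the glob on the remote host in a separate step and loop over the registered stdout_lines, so the pattern is evaluated on the target rather than on the controlling machine:

```yaml
- name: Expand the glob on the remote host
  shell: "ls -1 {{ tools_dir }}/vmwaretools-lib-*/lib/insecure_private_key"
  register: key_files
  become_user: "{{ build_user }}"
  changed_when: false

- name: Set private key file access
  file: path="{{ item }}" owner={{ build_user }} group={{ build_group }} mode=0600
  with_items: "{{ key_files.stdout_lines }}"
  become_user: "{{ build_user }}"
```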
