Below is my playbook to print a file's contents.
I have tried a couple of approaches, but the file is not printed as-is, i.e. the newline formatting is lost when Ansible prints the file contents.
- name: List startup files
  shell: cat /tmp/test.txt
  register: readws

- debug:
    msg: "/tmp/test.txt on {{ inventory_hostname }} is: {{ readws.stdout_lines }}"

- debug:
    msg: "/tmp/test.txt on {{ inventory_hostname }} is: {{ lookup('file', '/tmp/test.txt') }}"
cat /tmp/test.txt
i
m
good
Expected Ansible output:
TASK [debug] *****************************************************************************************
ok: [localhost] => {
"msg": "/tmp/test.txt on localhost is:
i
m
good
"
}
Actual Ansible output:
TASK [List startup files] ******************************************************************
changed: [localhost]
TASK [debug] *****************************************************************************************
ok: [localhost] => {
"msg": "/tmp/test.txt on localhost is: [u'i', u'm ', u'good']"
}
TASK [debug] *****************************************************************************************
ok: [localhost] => {
"msg": "/tmp/test.txt on localhost is: i\nm \ngood"
}
Can you please suggest a way?
You cannot really get what you require (unless, maybe, you change the output callback plugin...).
The closest you can get is by displaying a list of lines, as in the following example:
- name: Show file content
  vars:
    my_file: /tmp/test.txt
    msg_content: |-
      {{ my_file }} on localhost is:
      {{ lookup('file', my_file) }}
  debug:
    msg: "{{ msg_content.split('\n') }}"
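For reference, the trailing split('\n') is what turns the multi-line string into a list, so the callback prints one list element per line. The same transformation in plain Python, just to illustrate what the Jinja2 expression does:

```python
# Mirror of what msg_content.split('\n') does in the Jinja2 template:
# a multi-line string becomes a list with one element per line.
msg_content = "/tmp/test.txt on localhost is:\ni\nm\ngood"

lines = msg_content.split("\n")
print(lines)  # ['/tmp/test.txt on localhost is:', 'i', 'm', 'good']
```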
This might help anyone who comes looking for a simpler way to display a file in Ansible.
stdout_lines prints the file with reasonable formatting.
- name: display the file needed
  shell: cat /tmp/test.txt
  register: test_file

- debug:
    var: test_file.stdout_lines
I'm trying to look for a pattern in Postgres configuration files, and only want to find changed files that match the search pattern.
---
- name: Query postgres pattern matched files
  hosts: pghosts
  gather_facts: false
  tasks:
    - name: "Find postgres conf files"
      find:
        paths: "/srv/postgresql/config/conf.d"
        patterns: "*.conf"
        file_type: "file"
      register: search_files

    - name: "grep postgres conf files with searchpattern"
      shell: "grep -i 'pg_show_plans' {{ item.path | basename }}"
      args:
        chdir: "/srv/postgresql/config/conf.d"
      loop: "{{ search_files.files }}"
      loop_control:
        label: "{{ item.path | basename }}"
      ignore_errors: true

    - name: "find changed files"
      debug:
        msg: "{{ search_files.files | map(attribute='path') }}"
These are the changed files:
changed: [pgsql14.techlab.local] => (item=00_global_default.conf)
...ignoring
changed: [pgsql13.techlab.local] => (item=00_global_default.conf)
...ignoring
How do I get only those filenames which really have changed and passed the pattern test?
Thanks a lot for your help.
I have now simplified the code, and this works for me:
- name: read the postgres conf file
  shell: cat /srv/postgresql/config/conf.d/00_global_default.conf
  register: user_accts1

- name: a task that only happens if the string exists
  when: user_accts1.stdout.find('pg_show_plans') != -1
  ansible.builtin.copy:
    content: "{{ user_accts1.stdout }}"
    dest: 'fileresults/{{ inventory_hostname }}.00_global_default.out'
  delegate_to: localhost

- name: read the postgres conf file
  shell: cat /srv/postgresql/config/conf.d/01_sizing_specific.conf
  register: user_accts2

- name: a task that only happens if the string exists
  when: user_accts2.stdout.find('pg_show_plans') != -1
  ansible.builtin.copy:
    content: "{{ user_accts2.stdout }}"
    dest: 'fileresults/{{ inventory_hostname }}.01_sizing_specific.out'
  delegate_to: localhost

- name: read the postgres conf file
  shell: cat /srv/postgresql/config/conf.d/02_local_overrides.conf
  register: user_accts3

- name: a task that only happens if the string exists
  when: user_accts3.stdout.find('pg_show_plans') != -1
  ansible.builtin.copy:
    content: "{{ user_accts3.stdout }}"
    dest: 'fileresults/{{ inventory_hostname }}.02_local_overrides.out'
  delegate_to: localhost
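For what it's worth, the three near-identical read-and-copy pairs above could be collapsed into a single pair of looped tasks. A sketch (untested; it assumes the same paths and pattern, and uses the splitext filter plus the search test to keep only the files that contain the pattern):

```yaml
- name: read the postgres conf files
  shell: "cat /srv/postgresql/config/conf.d/{{ item }}"
  register: conf_contents
  loop:
    - 00_global_default.conf
    - 01_sizing_specific.conf
    - 02_local_overrides.conf

- name: copy back only the files that contain the pattern
  ansible.builtin.copy:
    content: "{{ item.stdout }}"
    dest: "fileresults/{{ inventory_hostname }}.{{ item.item | splitext | first }}.out"
  delegate_to: localhost
  loop: "{{ conf_contents.results | selectattr('stdout', 'search', 'pg_show_plans') | list }}"
  loop_control:
    label: "{{ item.item }}"
```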
I'm attempting to get an until loop working for an import_tasks, and to break the loop when a condition within the imported tasks is met. I'm not sure if this is even possible, or if there's a better way to achieve this. Making it slightly more tricky, one of the imported tasks is a PowerShell script which returns a status message that is used to satisfy the until requirement.
So yeah, the goal is to run check_output.yml until there is no longer any RUNNING status reported by the script.
main.yml:
- import_tasks: check_output.yml
  until: "'RUNNING' not in {{ outputStatus }}"
  retries: 100
  delay: 120
check_output.yml
---
- name: "Output Check"
  shell: ./get_output.ps1  # this will return `RUNNING` in stdout if it's still running
  args:
    executable: /usr/bin/pwsh
  register: output

- name: "Debug output"
  debug: outputStatus=output.stdout_lines
For the record, this works just fine if I don't use import_tasks and just use an until loop on the "Output Check" task. The problem with that approach is you have to run Ansible with -vvv to get the status message for each of the loops which causes a ton of extra, unwanted debug messages. I'm trying to get the same status for each loop without having to add verbosity.
Ansible version is 2.11
Q: "(Wait) until there is no longer any RUNNING status reported by the script."
A: An option might be to use the module wait_for.
For example, create the script below on the remote host. The script takes two parameters: PID, the process to be monitored, and DELAY, the monitoring interval in seconds. The script writes the status to /tmp/$PID.status.
# cat /root/bin/get_status.sh
#!/bin/sh
PID=$1
DELAY=$2
ps -p $PID > /dev/null 2>&1
STATUS=$?
while [ "$STATUS" -eq "0" ]
do
    echo "$PID is RUNNING" > /tmp/$PID.status
    sleep $DELAY
    ps -p $PID > /dev/null 2>&1
    STATUS=$?
done
echo "$PID is NONEXIST" > /tmp/$PID.status
exit 0
Start the script asynchronously
- command: "/root/bin/get_status.sh {{ _pid }} {{ _delay }}"
  async: "{{ _timeout }}"
  poll: 0
  register: get_status
In the next step wait for the process to stop running
- wait_for:
    path: "/tmp/{{ _pid }}.status"
    search_regex: NONEXIST
  retries: "{{ _retries }}"
  delay: "{{ _delay }}"
After the condition has passed, or the module wait_for has reached its timeout, run async_status to make sure the script terminated:
- async_status:
    jid: "{{ get_status.ansible_job_id }}"
  register: job_result
  until: job_result.finished
  retries: "{{ _retries }}"
  delay: "{{ _delay }}"
Example of a complete playbook
- hosts: test_11
  vars:
    _timeout: 60
    _retries: "{{ (_timeout/_delay|int)|int }}"
  tasks:
    - debug:
        msg: |-
          time: {{ '%H:%M:%S'|strftime }}
          _pid: {{ _pid }}
          _delay: {{ _delay }}
          _timeout: {{ _timeout }}
          _retries: {{ _retries }}
      when: debug|d(false)|bool

    - command: "/root/bin/get_status.sh {{ _pid }} {{ _delay }}"
      async: "{{ _timeout }}"
      poll: 0
      register: get_status

    - debug:
        var: get_status
      when: debug|d(false)|bool

    - wait_for:
        path: "/tmp/{{ _pid }}.status"
        search_regex: NONEXIST
      retries: "{{ _retries }}"
      delay: "{{ _delay }}"

    - debug:
        msg: "time: {{ '%H:%M:%S'|strftime }}"
      when: debug|d(false)|bool

    - async_status:
        jid: "{{ get_status.ansible_job_id }}"
      register: job_result
      until: job_result.finished
      retries: "{{ _retries }}"
      delay: "{{ _delay }}"

    - debug:
        msg: "time: {{ '%H:%M:%S'|strftime }}"
      when: debug|d(false)|bool

    - file:
        path: "/tmp/{{ _pid }}.status"
        state: absent
      when: _cleanup|d(true)|bool
On the remote, start a process to be monitored. For example,
root#test_11:/ # sleep 60 &
[1] 28704
Run the playbook. Fit _timeout and _delay to your needs
shell> ansible-playbook pb.yml -e debug=true -e _delay=3 -e _pid=28704
PLAY [test_11] *******************************************************************************
TASK [debug] *********************************************************************************
ok: [test_11] =>
msg: |-
time: 09:06:34
_pid: 28704
_delay: 3
_timeout: 60
_retries: 20
TASK [command] *******************************************************************************
changed: [test_11]
TASK [debug] *********************************************************************************
ok: [test_11] =>
get_status:
ansible_job_id: '331975762819.28719'
changed: true
failed: 0
finished: 0
results_file: /root/.ansible_async/331975762819.28719
started: 1
TASK [wait_for] ******************************************************************************
ok: [test_11]
TASK [debug] *********************************************************************************
ok: [test_11] =>
msg: 'time: 09:07:27'
TASK [async_status] **************************************************************************
changed: [test_11]
TASK [debug] *********************************************************************************
ok: [test_11] =>
msg: 'time: 09:07:28'
TASK [file] **********************************************************************************
changed: [test_11]
PLAY RECAP ***********************************************************************************
test_11: ok=8 changed=3 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
I'm able to get the timestamp of a file using Ansible stat module.
- stat:
    path: "/var/test.log"
  register: filedets

- debug:
    msg: "{{ filedets.stat.mtime }}"
The above prints mtime as 1594477594.631616, which is difficult to interpret.
How can I put a when condition on a task to check whether the file is less than 20 hours old?
You can also achieve this kind of task, without the burden of doing any computation yourself, via find and its age parameter.
In your case, you will need a negative value for the age:
Select files whose age is equal to or greater than the specified time.
Use a negative age to find files equal to or less than the specified time.
You can choose seconds, minutes, hours, days, or weeks by specifying the first letter of any of those words (e.g., "1w").
Source: https://docs.ansible.com/ansible/latest/modules/find_module.html#parameter-age
Given the playbook:
- hosts: all
  gather_facts: no
  tasks:
    - file:
        path: /var/test.log
        state: touch

    - find:
        paths: /var
        pattern: 'test.log'
        age: -20h
      register: test_log

    - debug:
        msg: "The file is exactly 20 hours old or less"
      when: test_log.files | length > 0

    - file:
        path: /var/test.log
        state: touch
        modification_time: '202007102230.00'

    - find:
        paths: /var
        pattern: 'test.log'
        age: -20h
      register: test_log

    - debug:
        msg: "The file is exactly 20 hours old or less"
      when: test_log.files | length > 0
This gives the recap:
PLAY [all] **********************************************************************************************************
TASK [file] *********************************************************************************************************
changed: [localhost]
TASK [find] *********************************************************************************************************
ok: [localhost]
TASK [debug] ********************************************************************************************************
ok: [localhost] => {
"msg": "The file is exactly 20 hours old or less"
}
TASK [file] *********************************************************************************************************
changed: [localhost]
TASK [find] *********************************************************************************************************
ok: [localhost]
TASK [debug] ********************************************************************************************************
skipping: [localhost]
PLAY RECAP **********************************************************************************************************
localhost : ok=5 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
- stat:
    path: "/var/test.log"
  register: filedets

- debug:
    msg: "{{ (ansible_date_time.epoch|float - filedets.stat.mtime) > (20 * 3600) }}"
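For completeness, here is the arithmetic behind the stat-based check, sketched in plain Python (20 hours is 72000 seconds; the fixed mtime value below is the one from the question):

```python
import time

TWENTY_HOURS = 20 * 3600  # 72000 seconds

def is_younger_than_20h(mtime, now=None):
    """True when the file is less than 20 hours old -- the same
    comparison the stat/debug tasks express in Jinja2."""
    now = time.time() if now is None else now
    return (now - mtime) < TWENTY_HOURS

# Deterministic examples with a fixed "now":
now = 1594477594.631616  # the mtime shown in the question
print(is_younger_than_20h(now - 3600, now=now))       # 1 hour old  -> True
print(is_younger_than_20h(now - 25 * 3600, now=now))  # 25 hours old -> False
```

Note that the debug expression above prints True when the file is older than 20 hours; negate it (or flip the comparison) for a "less than 20 hours old" condition.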
Does anyone know how to get the list of remote files under a particular directory, iterate over them, and list the contents of each file via Ansible?
For example, I have a location /var/spool/cron with many files, and I need to cat each of them by iterating over the list.
fileglob and lookup work only locally (on the control node).
Below is the play, but it is not working as expected.
---
- name: Playbook to quick check the cron jobs for user
  hosts: all
  remote_user: root
  gather_facts: False
  tasks:
    - name: cron state
      shell: |
        ls /var/spool/cron/
      register: cron_status

    - debug: var=item
      with_items:
        - "{{ cron_status.stdout_lines }}"
Try this as an example
---
- name: Playbook
  hosts: all
  become: true
  gather_facts: False
  tasks:
    - name: run ls
      shell: |
        ls -d1 /etc/cron.d/*
      register: cron_status

    - name: cat files
      shell: cat {{ item }}
      register: files_cat
      with_items:
        - "{{ cron_status.stdout_lines }}"

    - debug: var=item
      with_items:
        - "{{ files_cat.results | map(attribute='stdout_lines') | list }}"
Just for the sake of someone's interest: the play below will give you a cleaner stdout.
---
- name: hostname
  hosts: all
  become: true
  gather_facts: False
  tasks:
    - name: checking cron entries
      shell: more /var/spool/cron/*
      register: cron_status

    - debug: var=cron_status.stdout_lines
What should I do if I want to skip the whole loop in Ansible?
According to the guidelines:
"While combining when with with_items (see Loops), ... when statement is processed separately for each item."
So, when running the playbook like this:
---
- hosts: all
  vars:
    skip_the_loop: true
  tasks:
    - command: echo "{{ item }}"
      with_items: [1, 2, 3]
      when: not skip_the_loop
I get
skipping: [localhost] => (item=1)
skipping: [localhost] => (item=2)
skipping: [localhost] => (item=3)
But I don't want the condition to be checked every time. So I came up with the idea of using an inline condition:
- hosts: all
  vars:
    skip_the_loop: true
  tasks:
    - command: echo "{{ item }}"
      with_items: "{{ [1, 2, 3] if not skip_the_loop else [] }}"
It seems to solve my problem, but then I get nothing as output. And I want only one line saying:
skipping: Loop has been skipped
You should be able to make Ansible evaluate the condition just once with Ansible 2's blocks.
---
- hosts: all
  vars:
    skip_the_loop: true
  tasks:
    - block:
        - command: echo "{{ item }}"
          with_items: [1, 2, 3]
      when: not skip_the_loop
This will still show skipped for every item and every host but, as udondan pointed out, if you want to suppress the output you can add:
display_skipped_hosts=False
to your ansible.cfg file.
This can be done easily using include along with a condition:
- hosts: all
  vars:
    skip_the_loop: true
  tasks:
    - include: loop.yml
      when: not skip_the_loop
Whereas somewhere in tasks/ there is a file called loop.yml:
- command: echo "{{ item }}"
  with_items: [1, 2, 3]
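Note that the bare include action used above was deprecated in Ansible 2.4 and has since been removed from ansible-core; on current versions the equivalent would be include_tasks (a sketch, assuming loop.yml sits next to the playbook or in tasks/):

```yaml
- include_tasks: loop.yml
  when: not skip_the_loop
```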