Ansible: loop on environment variables

I want to run a shell task in Ansible with multiple environment variables,
looping over a list defined in my vars. The playbook looks like this:
vars:
  abc_environment_variable:
    - { abc_variable: "MSG", abc_value: "HelloWorld" }
    - { abc_variable: "REP_USER", abc_value: "/home/user" }
tasks:
  - name: "Test command with environment variables registered"
    shell: "echo $MSG >> $REP_USER/test_env.log"
    environment:
      "{{ item.abc_variable }}": "{{ item.abc_value }}"
    loop: "{{ abc_environment_variable }}"
    become: yes
    become_user: user
I can't make it work; only this works:
tasks:
  - name: "Test command with environment variables registered"
    shell: "echo $MSG >> $REP_USER/test_env.log"
    environment:
      REP_USER: /home/user
      MSG: "HelloWorld"
    become: yes
    become_user: user
But I want to loop over an Ansible variable.
Thanks for your help.

Use items2dict to transform the list to a dictionary, e.g.
- hosts: localhost
  vars:
    abc_environment_variable:
      - {abc_variable: "MSG", abc_value: "HelloWorld"}
      - {abc_variable: "REP_USER", abc_value: "/tmp"}
  tasks:
    - name: "Test command with environment variables registered"
      shell: "echo $MSG >> $REP_USER/test_env.log"
      environment: "{{ abc_environment_variable|
                       items2dict(key_name='abc_variable',
                                  value_name='abc_value') }}"
gives
shell> cat /tmp/test_env.log
HelloWorld
See "Setting the remote environment". Fit the parameters and escalation to your needs.
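For reference, a hypothetical debug task (not part of the original answer) that prints the dictionary items2dict builds from the list above:

```yaml
- name: Show the environment dictionary built by items2dict
  debug:
    msg: "{{ abc_environment_variable |
             items2dict(key_name='abc_variable',
                        value_name='abc_value') }}"
# With the vars above, this prints {"MSG": "HelloWorld", "REP_USER": "/tmp"},
# which is exactly the mapping the environment keyword expects.
```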


Ansible find only files which have changed

I'm trying to look for a pattern in Postgres configuration files, and
only want to find changed files that match the search pattern.
---
- name: Query postgres pattern matched files
  hosts: pghosts
  gather_facts: false
  tasks:
    - name: "Find postgres conf files"
      find:
        paths: "/srv/postgresql/config/conf.d"
        patterns: "*.conf"
        file_type: "file"
      register: search_files

    - name: "grep postgres conf files with searchpattern"
      shell: "grep -i 'pg_show_plans' {{ item.path | basename }}"
      args:
        chdir: "/srv/postgresql/config/conf.d"
      loop: "{{ search_files.files }}"
      loop_control:
        label: "{{ item.path | basename }}"
      ignore_errors: true

    - name: "find changed files"
      debug:
        msg: "{{ search_files.files | map(attribute='path') }}"
These are the changed files:
changed: [pgsql14.techlab.local] => (item=00_global_default.conf)
...ignoring
changed: [pgsql13.techlab.local] => (item=00_global_default.conf)
...ignoring
How do I get only those filenames which really have changed and passed the pattern test?
Thanks a lot for your help.
I have now simplified the code, which works for me:
- name: read the postgres conf file
  shell: cat /srv/postgresql/config/conf.d/00_global_default.conf
  register: user_accts1

- name: a task that only happens if the string exists
  when: user_accts1.stdout.find('pg_show_plans') != -1
  ansible.builtin.copy:
    content: "{{ user_accts1.stdout }}"
    dest: 'fileresults/{{ inventory_hostname }}.00_global_default.out'
  delegate_to: localhost

- name: read the postgres conf file
  shell: cat /srv/postgresql/config/conf.d/01_sizing_specific.conf
  register: user_accts2

- name: a task that only happens if the string exists
  when: user_accts2.stdout.find('pg_show_plans') != -1
  ansible.builtin.copy:
    content: "{{ user_accts2.stdout }}"
    dest: 'fileresults/{{ inventory_hostname }}.01_sizing_specific.out'
  delegate_to: localhost

- name: read the postgres conf file
  shell: cat /srv/postgresql/config/conf.d/02_local_overrides.conf
  register: user_accts3

- name: a task that only happens if the string exists
  when: user_accts3.stdout.find('pg_show_plans') != -1
  ansible.builtin.copy:
    content: "{{ user_accts3.stdout }}"
    dest: 'fileresults/{{ inventory_hostname }}.02_local_overrides.out'
  delegate_to: localhost
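The three near-identical task pairs could also be collapsed into a single loop. A minimal sketch of that idea, assuming the same three file names; `conf_contents` is a hypothetical register name:

```yaml
- name: read the postgres conf files
  shell: "cat /srv/postgresql/config/conf.d/{{ item }}.conf"
  register: conf_contents
  loop:
    - 00_global_default
    - 01_sizing_specific
    - 02_local_overrides

# Each element of conf_contents.results carries the file's stdout
# plus the original loop item in item.item.
- name: save files that contain the search string
  ansible.builtin.copy:
    content: "{{ item.stdout }}"
    dest: "fileresults/{{ inventory_hostname }}.{{ item.item }}.out"
  when: "'pg_show_plans' in item.stdout"
  loop: "{{ conf_contents.results }}"
  loop_control:
    label: "{{ item.item }}"
  delegate_to: localhost
```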

How do I use Ansible to add an IP to a Linux hosts file on 18 VMs, iterating from 10.x.x.66 to 10.x.x.83

So, I have a playbook using a hosts file template to update or revert hosts files on 18 specific Linux VMs. The entry which goes at the end of the file looks like:
10.x.x.66 fooconnect
The example above would be on the 1st of the 18 VMs; the 18th VM would look like:
10.x.x.83 fooconnect
Normally, that hostname resolves to a VIP. However, we found during some load testing that it may be beneficial to point each front-end VM to a back-end VM directly. So, my goal is to have a playbook that can update what the hostname resolves to with the above mentioned range, or revert it back to the VIP (reverting back is done using a template only--this part works fine).
What I am unsure about is how to implement this in Ansible. Is there a way to loop through the IPs using a Jinja2 template for loop? Or maybe using lineinfile with some loop magic?
Here is my Ansible role example. For the moment I am using a dirty shell command to create my IP list; I'm open to suggestions for a better way to implement this.
- name: Add a line to a hosts file using a template
  template:
    src: "{{ srcfile }}"
    dest: "{{ destfile }}"
    owner: "{{ own_var }}"
    group: "{{ grp_var }}"
    mode: "{{ mode_var }}"
    backup: yes

- name: Get the IPs
  shell: "COUNTER=66;for i in {66..83};do echo 10.x.x.$i;((COUNTER++));done"
  register: pobs_ip

- name: Add a line
  lineinfile:
    path: /etc/hosts
    line: "{{ item }} fooconnect"  # Ideally "item" would be a single IP, not the entire list as it is here
    insertafter: EOF
  loop: "{{ pobsips }}"
VARs file:
pobsips:
  - "{{ pobs_ip.stdout }}"
Instead of using a shell task, we can create the range of IP addresses with set_fact and range. Once we have the range of IP addresses in a list, we can loop lineinfile over it.
Example:
- name: create a range of IP addresses in a variable my_range
  set_fact:
    my_range: "{{ my_range|default([]) + ['10.1.1.' ~ item] }}"
  loop: "{{ range(66, 84)|list }}"

- name: Add a line to /etc/hosts
  lineinfile:
    path: /etc/hosts
    line: "{{ item }} fooconnect"
    insertafter: EOF
  loop: "{{ my_range }}"
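The intermediate set_fact can also be skipped by looping over the range directly and building the line inline; a minimal sketch of the same idea (using the 10.1.1.x prefix from the example above):

```yaml
- name: Add a line to /etc/hosts for each IP in the range
  lineinfile:
    path: /etc/hosts
    line: "10.1.1.{{ item }} fooconnect"
    insertafter: EOF
  loop: "{{ range(66, 84) | list }}"   # 66..83 inclusive
```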
Updated answer:
There is another approach if we want to append only one line, with an incrementing IP address, to the /etc/hosts file of each host.
For this we can use the ipmath filter (from the ipaddr filter set) to get the next IP address for a given IP address.
Use ansible_play_hosts to get the list of hosts the play is running on.
Set an index variable with index_var, and a when condition so the file is updated only when the item matches inventory_hostname.
Run the playbook serially, and only once per host per run, using serial and run_once.
Let's consider an example inventory file like:
[group_1]
host1
host2
host3
host4
...
Then in playbook:
- hosts: group_1
serial: 1
vars:
start_ip: 10.1.1.66
tasks:
- name: Add a line to /etc/hosts
lineinfile:
path: "/tmp/hosts"
line: "{{ start_ip|ipmath(my_idx) }} fooserver"
insertafter: EOF
loop: "{{ ansible_play_hosts }}"
loop_control:
index_var: my_idx
run_once: true
when: item == inventory_hostname

How to pass a register value into a include_role loop

I have an Ansible playbook that reads in a list of files and registers those values. I then want to pass the list of files into an include_role task. Below is my current code.
- name: Get list of files
  command: "sh -c 'find playbooks/vars/files/*.yml'"
  register: find_files

- include_vars:
    file: "{{ item }}"
  loop: "{{ find_files.stdout_lines }}"
  register: result

- name: call role
  include_role:
    name: myRole
  loop: "{{ result.results }}"
When the playbook runs, it finds two files in the directory: file1.yml and file2.yml. But when it runs through the include_role loop, it passes file1.yml twice and never passes file2.yml. I'm trying to determine how I can ensure file2.yml gets passed to the role as well.
I was able to correct my issue by constructing an array and feeding it into include_role, and by using the find module instead of shell.
- name: Recursively find yml files
  find:
    paths: ~/vars
    recurse: yes
  register: find_files

- name: Construct file array
  set_fact:
    file_arr: "{{ file_arr|default([]) + [file.path] }}"
  with_items: "{{ find_files.files }}"
  loop_control:
    loop_var: file

- name: Topic Management
  include_role:
    name: kafkaTopicManagement
  vars:
    kafkaFiles: "{{ item }}"
  with_items: "{{ file_arr }}"
This now feeds the files into include_role.
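The set_fact loop that builds file_arr could likewise be replaced by a map filter over the find results; a minimal sketch of the same idea:

```yaml
- name: Topic Management
  include_role:
    name: kafkaTopicManagement
  vars:
    kafkaFiles: "{{ item }}"
  # map(attribute='path') extracts the path of each found file,
  # giving the same list that the set_fact loop accumulated
  loop: "{{ find_files.files | map(attribute='path') | list }}"
```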

Error with Ansible include_tasks while adding dynamic hosts in nested loop

Below is the playbook with include_tasks: gethosts.yml for building dynamic hosts in a nested loop. However, I get a syntax error running the playbook.
{{ item.split('\t')[0] }} will hold IP addresses separated by commas, followed by a string separated by a tab (\t).
---
- name: "Play 1"
  hosts: localhost
  tasks:
    - name: "Search database"
      command: >
        mysql --user=root --password=p#ssword deployment
        --host=localhost -Ns -e "SELECT dest_ip,file_dets FROM deploy_dets"
      register: command_result

    - name: Add hosts
      include_tasks: "{{ playbook_dir }}/gethosts.yml"
      dest_ip: "{{ item.split('\t')[0] }}"
      groups: dest_nodes
      file_dets: "{{ item.split('\t')[1] }}"
      ansible_host: localhost
      ansible_connection: local
      with_items: "{{ command_result.stdout_lines }}"
And below is my gethosts.yml file:
- add_host:
    name: "{{ item }}"
  with_items: "{{ dest_ip.split(',') }}"
Output:
$ ansible-playbook testinclude.yml
ERROR! Syntax Error while loading YAML. did not find expected key
The error appears to be in '/app/deployment/testinclude.yml': line 23, column 8, but may be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
include_tasks: "{{ playbook_dir }}/gethosts.yml"
dest_ip: "{{ item.split('\t')[0] }}"
^ here We could be wrong, but this one looks like it might be an issue with missing quotes. Always quote template expression brackets when they start a value. For instance:
with_items:
  - {{ foo }}
Should be written as:
with_items:
  - "{{ foo }}"
Can you please suggest a fix?
Perhaps you forgot the vars keyword, so:
include_tasks: "{{ playbook_dir }}/gethosts.yml"
vars: # <------------------------------------------- HERE
  dest_ip: "{{ item.split('\t')[0] }}"
  groups: dest_nodes
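Put together, a sketch of the corrected task and the included file. This assumes the host-specific keys move into gethosts.yml, since groups, ansible_host, and ansible_connection are really parameters of add_host; inner_item is a hypothetical loop variable chosen to avoid clashing with the outer loop's item:

```yaml
# Playbook task (outer loop over database rows)
- name: Add hosts
  include_tasks: "{{ playbook_dir }}/gethosts.yml"
  vars:
    dest_ip: "{{ item.split('\t')[0] }}"
    file_dets: "{{ item.split('\t')[1] }}"
  with_items: "{{ command_result.stdout_lines }}"

# gethosts.yml (inner loop over the comma-separated IPs)
- add_host:
    name: "{{ inner_item }}"
    groups: dest_nodes
    ansible_host: localhost
    ansible_connection: local
  loop: "{{ dest_ip.split(',') }}"
  loop_control:
    loop_var: inner_item   # avoids shadowing the outer "item"
```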

How to get the list of remote files under a Dir and iterate over to list out the contents via ansible

Does anyone know how to get the list of remote files under a particular directory, then iterate over them and list out the contents of each file via Ansible?
For example, I have a location /var/spool/cron with many files, and I need to cat <file> for each of them by iterating over the list.
fileglob and lookup only work locally.
Below is the play, but it is not working as expected.
---
- name: Playbook to quick check the cron jobs for user
  hosts: all
  remote_user: root
  gather_facts: False
  tasks:
    - name: cron state
      shell: |
        ls /var/spool/cron/
      register: cron_status

    - debug: var=item
      with_items:
        - "{{ cron_status.stdout_lines }}"
Try this as an example
---
- name: Playbook
  hosts: all
  become: yes
  gather_facts: False
  tasks:
    - name: run ls
      shell: |
        ls -d1 /etc/cron.d/*
      register: cron_status

    - name: cat files
      shell: cat {{ item }}
      register: files_cat
      with_items:
        - "{{ cron_status.stdout_lines }}"

    - debug: var=item
      with_items:
        - "{{ files_cat.results | map(attribute='stdout_lines') | list }}"
Just for the sake of interest, the play below will give you cleaner stdout:
---
- name: hostname
  hosts: all
  become: yes
  gather_facts: False
  tasks:
    - name: checking cron entries
      shell: more /var/spool/cron/*
      register: cron_status

    - debug: var=cron_status.stdout_lines
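An alternative that avoids parsing ls output is the find module, which (unlike fileglob and lookup) runs on the remote host. A minimal sketch; cron_files and cron_contents are hypothetical register names:

```yaml
- name: list cron files on the remote host
  find:
    paths: /var/spool/cron
    file_type: file
  register: cron_files

- name: read the contents of each file
  command: "cat {{ item.path }}"
  register: cron_contents
  loop: "{{ cron_files.files }}"
  loop_control:
    label: "{{ item.path }}"

- debug:
    msg: "{{ cron_contents.results | map(attribute='stdout_lines') | list }}"
```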
