How do I get a directory structure like the one shown below?
My code doesn't work.
user:
  - username: test1
    home: /home/test1
    outbox: Outbox
    inbox: Inbox
    subfolders:
      - _test1
      - _test2
      - _test3
test1
├── test1_test1
│   ├── Inbox
│   └── Outbox
├── test1_test2
│   ├── Inbox
│   └── Outbox
└── test1_test3
    ├── Inbox
    └── Outbox
- name: Creating sub-folders
  file:
    path: "{{ item.0.home }}/{{ item.0.username }}{{ item.0.subfolders }}/{{ item.0.inbox }}"
    mode: 0775
    owner: "{{ item.0.username }}"
    group: "{{ web_user }}"
    state: directory
  with_subelements:
    - "{{ user }}"
    - subfolders
  when: subfolders is defined
Maybe this will come in handy for someone :)
- file:
    path: "{{ item.0.home }}/{{ item.0.username }}{{ item.1 }}/{{ item.0.inbox }}"
    mode: 0775
    owner: "{{ item.0.username }}"
    group: "{{ web_user }}"
    state: directory
  with_subelements:
    - "{{ user }}"
    - subfolders
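The same pattern with item.0.outbox in place of item.0.inbox covers the Outbox directories from the target tree; a minimal sketch based on the task above:

- name: Creating Outbox sub-folders
  file:
    path: "{{ item.0.home }}/{{ item.0.username }}{{ item.1 }}/{{ item.0.outbox }}"
    mode: 0775
    owner: "{{ item.0.username }}"
    group: "{{ web_user }}"
    state: directory
  with_subelements:
    - "{{ user }}"
    - subfolders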
I am trying to loop over the geerlingguy.nginx role to create nginx vhosts, but I can't get it to work.
Playbook.yml

- hosts: some.server
  become: true
  roles:
    - geerlingguy.nginx
  tasks:
    - name: looping vhosts
      include_tasks: vhosts.yml
      loop:
        - { name: 'vhost1.bla.com', state: 'present' }
        - { name: 'vhost2.bla.com', state: 'present' }
For this server I created a host_vars file:
host_vars.yml

nginx_worker_processes: "auto"
nginx_worker_connections: 768
nginx_extra_http_options: |
  gzip on;
  types_hash_max_size 2048;
  include /etc/nginx/mime.types;
  ssl_prefer_server_ciphers on;
  access_log /var/log/nginx/access.log;
  error_log /var/log/nginx/error.log;
nginx_vhosts:
  - listen: "443 ssl http2"
    server_name: '{{ item.name }}'
    server_name_redirect: " {{ item.name }} "
    root: "/var/www/{{ item.name }}"
    index: "index.php index.html index.htm"
    access_log: "/var/www/{{ item.name }}/logs/access_{{ item.name }}.log"
    error_log: "/var/www/{{ item.name }}/logs/erro_{{ item.name }}.log"
    state: "{{ item.state }}"
    template: "{{ nginx_vhost_template }}"
    filename: "{{ item.name }}"
    extra_parameters: |
      ssl_certificate /etc/ssl/certs/ssl-cert-snakeoil.pem;
      ssl_certificate_key /etc/ssl/private/ssl-cert-snakeoil.key;
      ssl_protocols TLSv1.1 TLSv1.2;
      ssl_ciphers HIGH:!aNULL:!MD5;
This is the vhosts.yml from the geerlingguy.nginx role:
- name: Remove default nginx vhost config file (if configured).
  file:
    path: "{{ nginx_default_vhost_path }}"
    state: absent
  when: nginx_remove_default_vhost | bool
  notify: restart nginx

- name: Ensure nginx_vhost_path exists.
  file:
    path: "{{ nginx_vhost_path }}"
    state: directory
    mode: 0755
  notify: reload nginx

- name: Add managed vhost config files.
  template:
    src: "{{ item.template|default(nginx_vhost_template) }}"
    dest: "{{ nginx_vhost_path }}/{{ item.filename|default(item.server_name.split(' ')[0] ~ '.conf') }}"
    force: true
    owner: root
    group: "{{ root_group }}"
    mode: 0644
  when: item.state|default('present') != 'absent'
  with_items: "{{ nginx_vhosts }}"
  notify: reload nginx
  tags:
    - skip_ansible_lint

- name: Remove managed vhost config files.
  file:
    path: "{{ nginx_vhost_path }}/{{ item.filename|default(item.server_name.split(' ')[0] ~ '.conf') }}"
    state: absent
  when: item.state|default('present') == 'absent'
  with_items: "{{ nginx_vhosts }}"
  notify: reload nginx
  tags:
    - skip_ansible_lint

- name: Remove legacy vhosts.conf file.
  file:
    path: "{{ nginx_vhost_path }}/vhosts.conf"
    state: absent
  notify: reload nginx
So, when I run the playbook I get:
fatal: [some.server]: FAILED! => {
"msg": "[{'listen': '443 ssl http2', 'server_name': '{{ item.name }}'... HIGH:!aNULL:!MD5;\\n'}]: 'item' is undefined
I have tried it in different ways but always get the same error. It would be great if someone could help me.
Your approach doesn't work; a loop won't get you anywhere at this point. Furthermore, it is not possible to define a variable or data structure that references item and have the Jinja logic evaluate it later.
The geerlingguy implementation expects the variable nginx_vhosts to be defined. This variable must be a list of dicts, and that list is then processed automatically.
You have two main options:
Option 1
You create nginx_vhosts as a list of dicts for all your virtual hosts.
nginx_vhosts:
  - listen: "443 ssl http2"
    server_name: "vhost1.bla.com"
    server_name_redirect: "www.vhost1.bla.com"
    root: "/var/www/vhost1.bla.com"
    index: "index.php index.html index.htm"
    error_page: ""
    access_log: "/var/www/vhost1.bla.com/logs/access_vhost1.bla.com.log"
    error_log: "/var/www/vhost1.bla.com/logs/error_vhost1.bla.com.log"
    state: "present"
    template: "{{ nginx_vhost_template }}"
    filename: "vhost1.bla.com.conf"
    extra_parameters: |
      ssl_certificate /etc/ssl/certs/ssl-cert-snakeoil.pem;
      ssl_certificate_key /etc/ssl/private/ssl-cert-snakeoil.key;
      ssl_protocols TLSv1.1 TLSv1.2;
      ssl_ciphers HIGH:!aNULL:!MD5;

  - listen: "443 ssl http2"
    server_name: "vhost2.bla.com"
    server_name_redirect: "www.vhost2.bla.com"
    root: "/var/www/vhost2.bla.com"
    index: "index.php index.html index.htm"
    error_page: ""
    access_log: "/var/www/vhost2.bla.com/logs/access_vhost2.bla.com.log"
    error_log: "/var/www/vhost2.bla.com/logs/error_vhost2.bla.com.log"
    state: "present"
    template: "{{ nginx_vhost_template }}"
    filename: "vhost2.bla.com.conf"
    extra_parameters: |
      ssl_certificate /etc/ssl/certs/ssl-cert-snakeoil.pem;
      ssl_certificate_key /etc/ssl/private/ssl-cert-snakeoil.key;
      ssl_protocols TLSv1.1 TLSv1.2;
      ssl_ciphers HIGH:!aNULL:!MD5;
Option 2
A bit more complicated, but I think this is what you wanted: using the loop.
Create a separate task file myvhost.yml with the following content:
---
- name: create directories
  file:
    path: "{{ item }}"
    state: directory
  with_items:
    - "/var/www/{{ vhost.name }}"
    - "/var/www/{{ vhost.name }}/logs"

- name: define nginx_vhosts variable
  set_fact:
    nginx_vhosts:
      - listen: "443 ssl http2"
        server_name: '{{ vhost.name }}'
        # server_name_redirect: " {{ vhost.name }} "
        root: "/var/www/{{ vhost.name }}"
        index: "index.php index.html index.htm"
        access_log: "/var/www/{{ vhost.name }}/logs/access_{{ vhost.name }}.log"
        error_log: "/var/www/{{ vhost.name }}/logs/erro_{{ vhost.name }}.log"
        state: "{{ vhost.state }}"
        # template: "{{ nginx_vhost_template }}"
        filename: "{{ vhost.name }}"
        extra_parameters: |
          ssl_certificate /etc/ssl/certs/ssl-cert-snakeoil.pem;
          ssl_certificate_key /etc/ssl/private/ssl-cert-snakeoil.key;
          ssl_protocols TLSv1.1 TLSv1.2;
          ssl_ciphers HIGH:!aNULL:!MD5;

- name: include vhosts.yml from geerlingguy
  include_role:
    name: geerlingguy.nginx
    tasks_from: vhosts
Here you set the variable nginx_vhosts to new values: a list with a single dict. Then you include the vhosts tasks of the geerlingguy role.
In your playbook, on the other hand, you include your new myvhost.yml with the loop.
- name: looping vhosts
  include_tasks: myvhost.yml
  loop:
    - { name: 'vhost1.bla.com', state: 'present' }
    - { name: 'vhost2.bla.com', state: 'present' }
  loop_control:
    loop_var: vhost
Explanation of the changes
For your loop you have to rename the loop variable, otherwise there will be conflicts with the loops in geerlingguy's vhosts.yml (I had overlooked this at the beginning); see loop_var: vhost. After renaming the loop variable, you must of course also change the name in myvhost.yml from item to vhost.
Before running your looping vhosts task, the geerlingguy.nginx role should have been run once, e.g. by listing it in your playbook under roles:.
I made another change in myvhost.yml: instead of include_tasks, it is better to use include_role with tasks_from: vhosts.
I commented out the server_name_redirect: setting for now, because it produces nginx config files that crash nginx. If you really need this setting, you will have to analyze it in more detail.
Furthermore, the certificate files (ssl-cert-snakeoil) must exist before creating the vhosts.
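On a Debian/Ubuntu target (an assumption; the question does not name the distribution), one way to provide those files is to install the ssl-cert package, which generates the snakeoil pair; a minimal sketch:

- name: Ensure the snakeoil certificate and key exist
  become: true
  apt:
    name: ssl-cert   # installing this package runs make-ssl-cert and creates the ssl-cert-snakeoil.pem/.key pair
    state: present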
A complete playbook might look like this:
---
- hosts: nginx
  become: true

  roles:
    - geerlingguy.nginx

  tasks:
    - name: looping vhosts
      include_tasks: myvhost.yml
      loop:
        - { name: 'vhost1.bla.com', state: 'present' }
        - { name: 'vhost2.bla.com', state: 'present' }
      loop_control:
        loop_var: vhost
I have got the following YAML files:
---
U01:
  ip: 1.1.1.1
U02:
  ip: 2.2.2.2

---
U01:
  as_bgp: as1
U02:
  as_bgp: as2
I am using the following playbook to generate one output file per key from the above YAML files:
---
- hosts: localhost
  gather_facts: no
  tasks:
    - name: iterate over up nodes
      include_vars:
        dir: "vars"
        name: U
    - name: print nodes name
      template:
        src: test.j2
        dest: "outputs/{{ item.key }}test.txt"
      loop: "{{ lookup('dict', U) }}"
Now, I am using the following simple Jinja2 template:
{{item.value.ip}}
{{item.value.as_bgp}}
How can I modify my playbook to get the following outputs (two separate files)?
1.1.1.1
as1
2.2.2.2
as2
The only thing that works is using either {{item.value.ip}} or {{item.value.as_bgp}} in the Jinja template; it doesn't work with both!
If you do happen to have a really recent version of Ansible (>= 2.12), you could use the hash_behaviour parameter of the include_vars module, along with a loop and the fileglob lookup:
- include_vars:
    name: U
    hash_behaviour: merge
    file: "{{ item }}"
  loop: "{{ lookup('fileglob', 'vars/*', wantlist=True) }}"
  vars:
    U: {}
Another option, for older versions, would be to combine the two dictionaries (with the recursive=True option of combine) out of the registered output of the include_vars task:
- include_vars:
    file: "{{ item }}"
  loop: "{{ lookup('fileglob', 'vars/*', wantlist=True) }}"
  register: include

- template:
    src: test.j2
    dest: "outputs/{{ item.key }}test.txt"
  loop: >-
    {{
      include.results
      | map(attribute='ansible_facts')
      | combine(recursive=True)
      | dict2items
    }}
  loop_control:
    label: "{{ item.key }}"
Given this pair of tasks:
- include_vars:
    name: U
    hash_behaviour: merge
    file: "{{ item }}"
  loop: "{{ lookup('fileglob', 'vars/*', wantlist=True) }}"
  vars:
    U: {}

- debug:
    msg: >-
      For `{{ item.key }}`,
      the IP is `{{ item.value.ip }}`
      and the BGP is `{{ item.value.as_bgp }}`
  loop: "{{ U | dict2items }}"
  loop_control:
    label: "{{ item.key }}"
Or this other pair of tasks:
- include_vars:
    file: "{{ item }}"
  loop: "{{ lookup('fileglob', 'vars/*', wantlist=True) }}"
  register: include

- debug:
    msg: >-
      For `{{ item.key }}`,
      the IP is `{{ item.value.ip }}`
      and the BGP is `{{ item.value.as_bgp }}`
  loop: >-
    {{
      include.results
      | map(attribute='ansible_facts')
      | combine(recursive=True)
      | dict2items
    }}
  loop_control:
    label: "{{ item.key }}"
Both pairs would yield:
ok: [localhost] => (item=U01) =>
msg: For `U01`, the IP is `1.1.1.1` and the BGP is `as1`
ok: [localhost] => (item=U02) =>
msg: For `U02`, the IP is `2.2.2.2` and the BGP is `as2`
Given the tree
shell> tree vars/
vars/
├── as_bgp.yml
└── ip.yml
0 directories, 2 files
shell> cat vars/as_bgp.yml
---
U01:
  as_bgp: as1
U02:
  as_bgp: as2

shell> cat vars/ip.yml
---
U01:
  ip: 1.1.1.1
U02:
  ip: 2.2.2.2
The option hash_behaviour: merge works as expected (Ansible 2.12)
- name: iterate over files
  include_vars:
    file: "{{ item }}"
    hash_behaviour: merge
    name: my_vars
  loop: "{{ query('fileglob', 'vars/*') }}"
  vars:
    my_vars: {}
gives
my_vars:
  U01:
    as_bgp: as1
    ip: 1.1.1.1
  U02:
    as_bgp: as2
    ip: 2.2.2.2
If the parameter hash_behaviour is not available, create the names of the variables from the names of the files, e.g.
- name: iterate over files
  include_vars:
    file: "{{ item }}"
    name: "u_{{ item|basename|splitext|first }}"
  loop: "{{ query('fileglob', 'vars/*') }}"
will create the variables
query('varnames', '^u_*'):
  - u_as_bgp
  - u_ip
In the playbook below, extract the variables and combine them into the dictionary my_vars:
shell> cat test.yml
- hosts: localhost
  vars:
    my_vars: "{{ query('varnames', '^u_*')|
                 map('extract', hostvars[inventory_hostname])|
                 combine(recursive=True) }}"
  tasks:
    - name: iterate over files
      include_vars:
        file: "{{ item }}"
        name: "u_{{ item|basename|splitext|first }}"
      loop: "{{ query('fileglob', 'vars/*') }}"
gives
my_vars:
  U01:
    as_bgp: as1
    ip: 1.1.1.1
  U02:
    as_bgp: as2
    ip: 2.2.2.2
The template is trivial
shell> cat test.txt.j2
{% for k,v in my_vars.items() %}
{{ v.ip }}
{{ v.as_bgp }}
{% endfor %}
The task below
- template:
    src: test.txt.j2
    dest: test.txt
will create the file
shell> cat test.txt
1.1.1.1
as1
2.2.2.2
as2
If the recursive=True option of the combine filter is not available, merge the dictionaries on your own; e.g., the variables below give the same result:
my_vars: "{{ dict(_keys|zip(_vals)) }}"
_groups: "{{ query('varnames', '^u_*')|
             map('extract', hostvars[inventory_hostname])|
             map('dict2items')|flatten|
             groupby('key') }}"
_keys: "{{ _groups|map('first')|list }}"
_vals: "{{ _groups|map('last')|
           map('map', attribute='value')|
           map('combine')|list }}"
I am running an Ansible playbook inside a Terraform local-exec provisioner with an inline inventory of the remote instance IP.
- name: Install git
  apt:
    name: git
    state: present
    update_cache: yes

- name: Clone the git repository
  become_user: "{{ SSH_USER }}"
  git:
    repo: "{{ REPO_URL }}"
    dest: "{{ SRC_DIR }}"

- name: Find files with .pub extension
  become_user: "{{ SSH_USER }}"
  find:
    paths: "{{ SRC_DIR }}"
    patterns: '*.pub'
  register: pub_files

- name: Append the content of all public key files to authorized_keys file.
  become_user: "{{ SSH_USER }}"
  lineinfile:
    path: "{{ DEST_FILE }}"
    line: "{{ lookup('file', '{{ item.path }}') }}"
    insertafter: EOF
    create: "yes"
    state: present
  # loop: "{{ lookup('fileglob', "{{ SRC_DIR }}/*.pub", wantlist=True) }}"
  # with_fileglob: "{{ SRC_DIR }}/*.pub"
  with_items: "{{ pub_files.files }}"

- name: Display destinationFile contents
  become_user: "{{ SSH_USER }}"
  command: cat "{{ DEST_FILE }}"
  register: command_output

- name: Print to console
  become_user: "{{ SSH_USER }}"
  debug:
    msg: "{{ command_output.stdout }}"
The Ansible playbook should clone a git repo and copy the content of its files to another file.
But when using Ansible lookups to read the content of the files (which are cloned on the remote host), it always looks for the files on localhost:
Like all templating, lookups execute and are evaluated on the Ansible
control machine.
Thus the playbook above fails with the error:
No such file or directory found
A similar issue occurred when using with_fileglob and loop with the fileglob lookup to iterate over the files, as they also do a lookup internally. I replaced that with the find module to list the file names, registered the result in a variable, and then iterated over it in the next step using with_items.
Is there any alternative to read the content of remote files?
Fetching them back to the Ansible control node first works. And note that Ansible has an authorized_key module that simplifies the task of adding the keys.
tasks:
  - name: find all the .pub files
    find:
      paths: "/path/remote"
      recurse: no
      patterns: "*.pub"
    register: files_to_fetch

  - debug:
      var: files_to_fetch.files

  - name: "fetch .pub files from remote host"
    fetch:
      flat: yes
      src: "{{ item.path }}"
      dest: ./local/
    with_items: "{{ files_to_fetch.files }}"

  - name: update SSH keys
    authorized_key:
      user: user1
      key: "{{ lookup('file', item) }}"
      state: present
      #exclusive: yes
    with_fileglob:
      - local/*.pub
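A sketch of another remote-side option, in case fetching the files locally is not desirable: the slurp module reads a file from the remote host and returns its content base64-encoded. This assumes the pub_files result registered by the find task from the question:

- name: Read each remote .pub file
  slurp:
    src: "{{ item.path }}"
  register: pub_contents
  with_items: "{{ pub_files.files }}"

- name: Append each public key to the authorized_keys file
  lineinfile:
    path: "{{ DEST_FILE }}"
    line: "{{ item.content | b64decode | trim }}"
    create: yes
    state: present
  with_items: "{{ pub_contents.results }}"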
It worked when I did it using cat.
- name: Install git
  become_user: root
  apt:
    name: git
    state: present
    update_cache: yes

- name: Clone the git repository
  git:
    repo: "{{ REPO_URL }}"
    dest: "{{ SRC_DIR }}"

- name: Find file names with .pub extension
  find:
    paths: "{{ SRC_DIR }}"
    patterns: '*.pub'
  register: pub_files

- name: Get contents of all those .pub files
  shell: cat {{ item.path }}
  register: file_content
  with_items: "{{ pub_files.files }}"

- name: Print file_content to console
  debug:
    var: item.stdout
  with_items:
    - "{{ file_content.results }}"

- name: Append the content of all public key files to authorized_keys file.
  lineinfile:
    path: "{{ DEST_FILE }}"
    line: "{{ item.stdout }}"
    insertafter: EOF
    create: "yes"
    state: present
  with_items:
    - "{{ file_content.results }}"

- name: Display destinationFile contents
  command: cat "{{ DEST_FILE }}"
  register: command_output

- name: Print to console
  debug:
    msg: "{{ command_output.stdout }}"
In old-style Ansible, I used to use with_items together with dictionaries. To give two potential examples:
- name: deploy files
  template:
    src: "files/{{ item.src }}"
    dest: "{{ item.dest }}"
  with_items:
    - {src: 'foo', dest: "/path/to/somewhere"}
    - {src: 'bar', dest: "/somewhere/else"}
    - {src: 'baz', dest: "/different/path/"}

- name: Install packages
  npm:
    name: '{{ item.name }}'
    version: '{{ item.version }}'
  with_items:
    - {name: 'foo', version: '1.0'}
    - {name: 'bar', version: '1.5'}
    - {name: 'baz', version: '1.2'}
These days we are supposed to use loop. If trying it as a drop-in replacement, this would look like:
- name: deploy files
  template:
    src: "files/{{ item.src }}"
    dest: "{{ item.dest }}"
  loop:
    - {src: 'foo', dest: "/path/to/somewhere"}
    - {src: 'bar', dest: "/somewhere/else"}
    - {src: 'baz', dest: "/different/path/"}
which fails with
TASK [deploy files] ************
fatal: [host]: FAILED! => {
"msg": "'src_path' is undefined"
}
Indeed, the migration guide (and StackOverflow answers such as Ansible: iterate over a list of dictionaries - loop vs. with_items) boil down to "use flatten" for dictionaries. But this assumes that the list of dicts is stored in a variable. What if it is not, because it is defined in-line? Do I just have to move the data to a named variable?
EDIT: added the attempt with the drop-in replacement above.
As it turns out, loop is a drop-in replacement for with_items when using an in-line-defined list of dictionaries.
(I had an undefined variable somewhere else which just "happened" to be named in a way (src_path) that made me confuse it with the src dict variables… >_>)
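For completeness, a minimal sketch of the named-variable form the question asks about, using task-level vars (the variable name my_templates is made up):

- name: deploy files
  template:
    src: "files/{{ item.src }}"
    dest: "{{ item.dest }}"
  loop: "{{ my_templates }}"
  vars:
    my_templates:
      - {src: 'foo', dest: "/path/to/somewhere"}
      - {src: 'bar', dest: "/somewhere/else"}
      - {src: 'baz', dest: "/different/path/"}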
I get the checksum and mode (permissions) of a file on a server IP using Ansible's stat module.
I have generated a variable file which stores the previously collected file information, like below:
cat gc.yaml
---
10.9.9.112:
  name:
    - /tmp/conf/httpd.conf
    - /tmp/conf/extra/httpd-ssl.conf
  hash:
    - 8g8gf8d8d8ds8s8s7
    - 7t7t7t7t7t7t7t7t
  mode:
    - 0754
    - 0755
10.9.9.114:
  name:
    - /was/conf/httpd.conf
    - /was/conf/extra/httpd-ssl.conf
  hash:
    - 5r5r5r5r5r5r5r5r
    - 2o2o2o2o2o2o2o2
  mode:
    - 0754
    - 0750
Is this the correct way to design the variable file? Under each IP I have parallel name, hash, and mode lists, with one entry for httpd.conf and another for httpd-ssl.conf. I'm not sure if this is the correct structure. Kindly propose an alternative in case this will not work, and I can design gc.yaml accordingly.
My requirement is to check whether the current play's stat for a particular file matches the one in gc.yaml.
My playbook is invoked like this, and its tasks are shown below:
ansible-playbook /app/test.yml -e files_list="/tmp/conf/httpd.conf,/tmp/conf/extra/httpd-ssl.conf"
tasks:
  - name: Get stat of the files from `{{ inventory_hostname }}`
    stat:
      path: "{{ item }}"
    register: files_det
    with_items: "{{ files_list.split(',') }}"

  - debug:
      msg: "HERE IS CKSUM_{{ item.stat.checksum }}.HERE IS MODE_{{ item.stat.mode }}"
    with_items: "{{ files_det.results }}"
Below is where I wish to read the corresponding data from gc.yaml and compare it with the files_det variable; however, I'm not sure how to read the gc.yaml data:
- include_vars:
    file: "{{ playbook_dir }}/gc.yaml"
    name: user1

- debug: var=user1

- debug:
    msg: "HERE IS THE NAME:{{ item }}"
  with_dict: 10.9.9.112

- debug:
    msg: "HERE IS THE NAME:{{ item }}.name HERE is the VALUE:{{ item }}.hash"
  with_dict: "{{ user1 }}"
Given an IP address, how can we get the mode and checksum for each file?
Kindly suggest.
It's not entirely clear what you're trying to accomplish, so I'm making a few assumptions here. I think you'll find things easiest if you restructure your gc.yaml file so that it looks like this:
---
hosts:
  - host: 10.9.9.112
    files:
      - name: /tmp/conf/httpd.conf
        hash: 8g8gf8d8d8ds8s8s
        mode: 0754
      - name: /tmp/conf/extra/httpd-ssl.conf
        hash: 7t7t7t7t7t7t7t7t
        mode: 0755
  - host: 10.9.9.114
    files:
      - name: /was/conf/httpd.conf
        hash: 5r5r5r5r5r5r5r5r
        mode: 0754
      - name: /was/conf/extra/httpd-ssl.conf
        hash: 2o2o2o2o2o2o2o2
        mode: 0750
We have a top level hosts key whose value is a list. Each list item is a dictionary with a host key that has a hostname, and a files key that has a list of files.
This structure makes the data useful with Ansible's subelements filter.
For example, given the following playbook:
---
- name: Enable Site
  hosts: localhost
  gather_facts: false

  tasks:
    - include_vars:
        file: "{{ playbook_dir }}/gc.yaml"
        name: user1

    - debug:
        msg: "file {{ item.1.name }} on host {{ item.0.host }} has hash {{ item.1.hash }}"
      loop: "{{ user1.hosts|subelements('files') }}"
      loop_control:
        label: "{{ item.0.host }}:{{ item.1.name }}"
We get the following output:
PLAY [Enable Site] *******************************************************************
TASK [include_vars] ******************************************************************
ok: [localhost]
TASK [debug] *************************************************************************
ok: [localhost] => (item=10.9.9.112:/tmp/conf/httpd.conf) => {
"msg": "file /tmp/conf/httpd.conf on host 10.9.9.112 has hash 8g8gf8d8d8ds8s8s"
}
ok: [localhost] => (item=10.9.9.112:/tmp/conf/extra/httpd-ssl.conf) => {
"msg": "file /tmp/conf/extra/httpd-ssl.conf on host 10.9.9.112 has hash 7t7t7t7t7t7t7t7t"
}
ok: [localhost] => (item=10.9.9.114:/was/conf/httpd.conf) => {
"msg": "file /was/conf/httpd.conf on host 10.9.9.114 has hash 5r5r5r5r5r5r5r5r"
}
ok: [localhost] => (item=10.9.9.114:/was/conf/extra/httpd-ssl.conf) => {
"msg": "file /was/conf/extra/httpd-ssl.conf on host 10.9.9.114 has hash 2o2o2o2o2o2o2o2"
}
PLAY RECAP ***************************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
The subelements filter provides our loop with a list of tuples such that the first item of the tuple iterates through the items of the hosts list, and the second item iterates through the items in the files list for the current host.
That is, the first time we iterate through the loop, item contains:
- host: 10.9.9.112
  files:
    - name: /tmp/conf/httpd.conf
      hash: 8g8gf8d8d8ds8s8s
      mode: 0754
    - name: /tmp/conf/extra/httpd-ssl.conf
      hash: 7t7t7t7t7t7t7t7t
      mode: 0755
- name: /tmp/conf/httpd.conf
  hash: 8g8gf8d8d8ds8s8s
  mode: 0754
And the second time:
- host: 10.9.9.112
  files:
    - name: /tmp/conf/httpd.conf
      hash: 8g8gf8d8d8ds8s8s
      mode: 0754
    - name: /tmp/conf/extra/httpd-ssl.conf
      hash: 7t7t7t7t7t7t7t7t
      mode: 0755
- name: /tmp/conf/extra/httpd-ssl.conf
  hash: 7t7t7t7t7t7t7t7t
  mode: 0755
And so on.
If you're not going to be looping over the data, but instead want to be able to get the hash of a file given a filename, then structure your data like this instead:
---
10.9.9.112:
  /tmp/conf/httpd.conf:
    hash: 8g8gf8d8d8ds8s8s
    mode: 0754
  /tmp/conf/extra/httpd-ssl.conf:
    hash: 7t7t7t7t7t7t7t7t
    mode: 0755
10.9.9.114:
  /was/conf/httpd.conf:
    hash: 5r5r5r5r5r5r5r5r
    mode: 0754
  /was/conf/extra/httpd-ssl.conf:
    hash: 2o2o2o2o2o2o2o2
    mode: 0750
Now your filenames are dictionary keys, so you can ask for user1[<host>][<filename>], like this:
---
- name: Enable Site
  hosts: localhost
  gather_facts: false

  tasks:
    - include_vars:
        file: "{{ playbook_dir }}/gc.yaml"
        name: user1

    - debug:
        msg: "file {{ item.file }} on host {{ item.host }} has hash {{ user1[item.host][item.file].hash }}"
      loop:
        - host: 10.9.9.112
          file: /tmp/conf/extra/httpd-ssl.conf
The above results in:
TASK [debug] *****************************************************************************************************************************************************************
ok: [localhost] => (item={'host': '10.9.9.112', 'file': '/tmp/conf/extra/httpd-ssl.conf'}) => {
"msg": "file /tmp/conf/extra/httpd-ssl.conf on host 10.9.9.112 has hash 7t7t7t7t7t7t7t7t"
}
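To tie this back to the requirement of comparing the current stat results with gc.yaml, here is a sketch, assuming the play targets the IPs as inventory hosts, that files_det is registered as in the question, and that gc.yaml uses the filename-keyed layout, stores the same checksum algorithm that stat reports (sha1 by default), and quotes the modes as strings:

- name: Compare current stat results with the values recorded in gc.yaml
  assert:
    that:
      - item.stat.checksum == user1[inventory_hostname][item.stat.path].hash
      - item.stat.mode == user1[inventory_hostname][item.stat.path].mode
    fail_msg: "{{ item.stat.path }} differs from the recorded checksum or mode"
  loop: "{{ files_det.results }}"
  loop_control:
    label: "{{ item.stat.path }}"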