Ansible: Iterate and include multiple variable files - loops

I have hundreds of files (generated by some application) which I am trying to iterate over and include as variable files.
See the files below as an example. There are many more variables in each file; I have trimmed them down to keep the example clear.
# cat /tmp/abc/dev1.yml
---
app_name: dev1
instance: dev
port: 1234
server: test1
#
# cat /tmp/abc/dev2.yml
---
app_name: dev2
instance: dev
port: 4567
server: test2
#
# cat /tmp/abc/dev3.yml
---
app_name: dev3
instance: dev
port: 2223
server: test3
#
Now, when I use these tasks in my playbook, I can see the variables (app_name, instance, port, etc) defined in the files (/tmp/abc/*.yml) in the output as ansible_facts.
- action: shell echo "{{ item }}"
  with_fileglob: /tmp/abc/*
  register: res

- include_vars: "{{ item.item }}"
  with_items: res.results
  when: item.changed == True
  register: task1
This is my output when I run the playbook.
root#vikas027:~# ansible-playbook -v configs.yml
PLAY [dev] **************************************************************
GATHERING FACTS ***************************************************************
ok: [vikas027.test.com]
TASK: [shell echo "{{ item }}"] ***********************************************
changed: [vikas027.test.com] => (item=/tmp/abc/dev3.yml) => {"changed": true, "cmd": "echo \"/tmp/abc/dev3.yml\"", "delta": "0:00:00.004915", "end": "2015-08-31 20:11:57.702623", "item": "/tmp/abc/dev3.yml", "rc": 0, "start": "2015-08-31 20:11:57.697708", "stderr": "", "stdout": "/tmp/abc/dev3.yml", "warnings": []}
changed: [vikas027.test.com] => (item=/tmp/abc/dev2.yml) => {"changed": true, "cmd": "echo \"/tmp/abc/dev2.yml\"", "delta": "0:00:00.004945", "end": "2015-08-31 20:11:58.130295", "item": "/tmp/abc/dev2.yml", "rc": 0, "start": "2015-08-31 20:11:58.125350", "stderr": "", "stdout": "/tmp/abc/dev2.yml", "warnings": []}
changed: [vikas027.test.com] => (item=/tmp/abc/dev1.yml) => {"changed": true, "cmd": "echo \"/tmp/abc/dev1.yml\"", "delta": "0:00:00.004864", "end": "2015-08-31 20:11:58.440205", "item": "/tmp/abc/dev1.yml", "rc": 0, "start": "2015-08-31 20:11:58.435341", "stderr": "", "stdout": "/tmp/abc/dev1.yml", "warnings": []}
TASK: [include_vars {{ item.item }}] ******************************************
ok: [vikas027.test.com] => (item={u'cmd': u'echo "/tmp/abc/dev3.yml"', u'end': u'2015-08-31 20:11:57.702623', u'stderr': u'', u'stdout': u'/tmp/abc/dev3.yml', u'changed': True, u'rc': 0, 'item': '/tmp/abc/dev3.yml', u'warnings': [], u'delta': u'0:00:00.004915', 'invocation': {'module_name': u'shell', 'module_args': u'echo "/tmp/abc/dev3.yml"'}, 'stdout_lines': [u'/tmp/abc/dev3.yml'], u'start': u'2015-08-31 20:11:57.697708'}) => {"ansible_facts": {"app_name": "dev3", "instance": "dev", "port": 2223, "server": "test3"}, "item": {"changed": true, "cmd": "echo \"/tmp/abc/dev3.yml\"", "delta": "0:00:00.004915", "end": "2015-08-31 20:11:57.702623", "invocation": {"module_args": "echo \"/tmp/abc/dev3.yml\"", "module_name": "shell"}, "item": "/tmp/abc/dev3.yml", "rc": 0, "start": "2015-08-31 20:11:57.697708", "stderr": "", "stdout": "/tmp/abc/dev3.yml", "stdout_lines": ["/tmp/abc/dev3.yml"], "warnings": []}}
ok: [vikas027.test.com] => (item={u'cmd': u'echo "/tmp/abc/dev2.yml"', u'end': u'2015-08-31 20:11:58.130295', u'stderr': u'', u'stdout': u'/tmp/abc/dev2.yml', u'changed': True, u'rc': 0, 'item': '/tmp/abc/dev2.yml', u'warnings': [], u'delta': u'0:00:00.004945', 'invocation': {'module_name': u'shell', 'module_args': u'echo "/tmp/abc/dev2.yml"'}, 'stdout_lines': [u'/tmp/abc/dev2.yml'], u'start': u'2015-08-31 20:11:58.125350'}) => {"ansible_facts": {"app_name": "dev2", "instance": "dev", "port": 4567, "server": "test2"}, "item": {"changed": true, "cmd": "echo \"/tmp/abc/dev2.yml\"", "delta": "0:00:00.004945", "end": "2015-08-31 20:11:58.130295", "invocation": {"module_args": "echo \"/tmp/abc/dev2.yml\"", "module_name": "shell"}, "item": "/tmp/abc/dev2.yml", "rc": 0, "start": "2015-08-31 20:11:58.125350", "stderr": "", "stdout": "/tmp/abc/dev2.yml", "stdout_lines": ["/tmp/abc/dev2.yml"], "warnings": []}}
ok: [vikas027.test.com] => (item={u'cmd': u'echo "/tmp/abc/dev1.yml"', u'end': u'2015-08-31 20:11:58.440205', u'stderr': u'', u'stdout': u'/tmp/abc/dev1.yml', u'changed': True, u'rc': 0, 'item': '/tmp/abc/dev1.yml', u'warnings': [], u'delta': u'0:00:00.004864', 'invocation': {'module_name': u'shell', 'module_args': u'echo "/tmp/abc/dev1.yml"'}, 'stdout_lines': [u'/tmp/abc/dev1.yml'], u'start': u'2015-08-31 20:11:58.435341'}) => {"ansible_facts": {"app_name": "dev1", "instance": "dev", "port": 1234, "server": "test1"}, "item": {"changed": true, "cmd": "echo \"/tmp/abc/dev1.yml\"", "delta": "0:00:00.004864", "end": "2015-08-31 20:11:58.440205", "invocation": {"module_args": "echo \"/tmp/abc/dev1.yml\"", "module_name": "shell"}, "item": "/tmp/abc/dev1.yml", "rc": 0, "start": "2015-08-31 20:11:58.435341", "stderr": "", "stdout": "/tmp/abc/dev1.yml", "stdout_lines": ["/tmp/abc/dev1.yml"], "warnings": []}}
PLAY RECAP ********************************************************************
vikas027.test.com : ok=3 changed=1 unreachable=0 failed=0
root#vikas027:~#
How can I reference variables like app_name, instance, port, etc. in other tasks? I tried the code below and a few other combinations, in vain.
- debug: msg="{{ task1.app_name }}"
  with_items: task1.results

Your variable files (dev1.yml, dev2.yml, etc.) all define the same variable names. Is this deliberate, or just an artifact of your example? I ask because, as shown, each include_vars overwrites the previous one, so only the last set of variables survives. As far as Ansible is concerned, the end result is the same as if you had written:
vars:
  app_name: dev3
  instance: dev
  port: 2223
  server: test3
You would just reference the variables by their given names:
- debug: var=app_name
- debug: var=instance
etc.
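As a side note, later Ansible versions let you keep the files separate without overwriting each other, because include_vars accepts a name option that stores the file's variables under a single key. A sketch (the cfg_ prefix is an arbitrary choice of mine):

```yaml
# Each file's vars end up under their own top-level key,
# e.g. cfg_dev1.port, cfg_dev2.port, ...
- include_vars:
    file: "{{ item }}"
    name: "cfg_{{ item | basename | splitext | first }}"
  with_fileglob: /tmp/abc/*
```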
What I'm guessing you actually want to be doing is having those variable files look something like this:
---
app:
  dev1:
    instance: "dev"
    port: "1234"
    server: "host1"
and
---
app:
  dev2:
    instance: "dev"
    port: "4321"
    server: "host2"
You would then reference your objects something like this:
# should list "dev1", "dev2", "dev3"...
- debug: msg={{ item.key }}
  with_dict: app
# should list the server name for each app
- debug: msg={{ item.value.server }}
  with_dict: app
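To actually build that merged app dict from many files, note that with the default hash behaviour each include_vars would replace the whole app key. A sketch, assuming hash_behaviour = merge is set in ansible.cfg so the sub-dicts accumulate instead:

```yaml
# Assumes hash_behaviour = merge in ansible.cfg, so each file's
# top-level "app" dict is merged rather than replaced:
- include_vars: "{{ item }}"
  with_fileglob: /tmp/abc/*

# afterwards app.dev1, app.dev2, ... are all defined:
- debug:
    msg: "{{ item.key }} -> {{ item.value.server }}"
  with_dict: "{{ app }}"
```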

I was working on this the whole day and tried umpteen configuration changes, in vain. Finally, it is working the way I wanted it to.
This is what one needs to do in a similar situation. Hope this helps someone.
First, register your facts locally. I chose the default /etc/ansible/facts.d/ directory for this.
Key things to remember:-
Extension should be .fact
File should be executable (I gave 0755)
Format is JSON (I've used yaml-to-json to convert my yaml files to json. You can use ruby or perl one-liners too.)
Then, to iterate over the facts registered in the previous step, we need to load/reload the facts in the playbook in order to use them in tasks:
- local_action: setup filter=ansible_local

- template: src=nginx_lb.conf.j2 dest=/etc/nginx/conf.d/{{ item.key }}.conf
  with_dict: "{{ ansible_local }}"
All variables can now be used in the jinja2 template. For example, port can be referenced as item.value.port.
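For illustration, here is how a fact file's values surface after the setup task above. The file name dev1.fact and its keys are assumptions based on the question's example files:

```yaml
# Contents of /etc/ansible/facts.d/dev1.fact (JSON, per the rules above):
#   { "app_name": "dev1", "instance": "dev", "port": 1234, "server": "test1" }
#
# After "setup filter=ansible_local" reloads local facts, each file shows
# up as a key under ansible_local, so the loop below sees
# item.key == "dev1" and item.value.port == 1234:
- debug:
    msg: "{{ item.key }} listens on {{ item.value.port }}"
  with_dict: "{{ ansible_local }}"
```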

Related

Assigning IP to multiple hosts

I am trying to assign a range of IPs to different hosts, but first I am checking whether each IP is already assigned. If it is, the task errors out, which I would like to avoid (I can still assign the IP, but I can't get past the error; I am using ignore_errors: yes for now, but I would like a better way). Here is part of my script:
---
- hosts: all
  gather_facts: yes
  become: yes
  vars_files:
    - client2
  tasks:
    - set_fact:
        intip: "{{ intip | default([]) + [item] }}"
      loop: "{{ ansible_interfaces | map('extract', ansible_facts, 'ipv4') | select('defined') | map(attribute='address') | list }}"
      ignore_errors: yes

    - name: Check IP address
      shell:
        "ip a a '{{ start_ip | ipmath(my_idx) }}' dev spanbr"
      loop_control:
        index_var: my_idx
      when: (item == inventory_hostname)
      loop: "{{ ansible_play_hosts }}"
      ignore_errors: yes
I am using this vars file, but maybe there is a better way? (I am trying to use something like the first example, but I can't get my head around applying it to every host.)
First vars file (didn't try yet):
client:
  - interface:
      - local_ip: 10.10.10.10
      - name: eth1
  - interface:
      - local_ip: 10.10.10.11
      - name: eth1
Second file:
interface:
  - config:
      - name: eth0
  - config:
      - name: eth1
start_ip: 10.10.10.10
I can get only one interface to pick up the IP while ignoring the error, but since the when conditional is not itself a loop, it only checks one interface:
Output:
TASK [Check IP address] ************************************************************************************************************************
skipping: [host2] => (item=host1)
failed: [host1] (item=host1) => {"ansible_index_var": "my_idx", "ansible_loop_var": "item", "changed": true, "cmd": "ip a a '10.10.10.10' dev spanbr", "delta": "0:00:00.005429", "end": "2022-10-20 08:49:43.800954", "item": "host1", "msg": "non-zero return code", "my_idx": 0, "rc": 2, "start": "2022-10-20 08:49:43.795525", "stderr": "RTNETLINK answers: File exists", "stderr_lines": ["RTNETLINK answers: File exists"], "stdout": "", "stdout_lines": []}
skipping: [host1] => (item=host2)
...ignoring
failed: [host2] (item=host2) => {"ansible_index_var": "my_idx", "ansible_loop_var": "item", "changed": true, "cmd": "ip a a '10.10.10.11' dev spanbr", "delta": "0:00:00.002691", "end": "2022-10-20 07:49:43.815422", "item": "host2", "msg": "non-zero return code", "my_idx": 1, "rc": 2, "start": "2022-10-20 07:49:43.812731", "stderr": "RTNETLINK answers: File exists", "stderr_lines": ["RTNETLINK answers: File exists"], "stdout": "", "stdout_lines": []}
...ignoring
I would like to use a notify, but I need the loop on the task itself, so that may be an issue...
Any ideas, please?
Here is the output of intip (set_fact) if that helps:
TASK [set_fact] ********************************************************************************************************************************
ok: [host1] => (item=192.168.1.100)
ok: [host1] => (item=127.0.0.1)
ok: [host1] => (item=10.10.10.10)
ok: [host1] => (item=169.254.0.1)
ok: [host2] => (item=127.0.0.1)
ok: [host2] => (item=10.10.10.11)
ok: [host2] => (item=192.168.1.101)
Alright, I found a way to do it by using host_vars and group_vars, which makes my script even easier to write:
---
- hosts: all
  gather_facts: yes
  become: yes
  tasks:
    - set_fact:
        intip: "{{ hostvars[inventory_hostname].ansible_all_ipv4_addresses }}"

    - name: Check IP address
      shell:
        "ip a a '{{ local_ip }}' dev spanbr"
      when: local_ip not in intip
Just by creating 2 files in host_vars.
host1.yaml:
ansible_host: "some_ip"
local_ip: 10.10.10.10
and the same for host2.yaml with different values. All is working great and it's making the script much easier to read too.
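One more alternative worth noting: ip address replace is idempotent for this purpose, so the membership check (and ignore_errors) could potentially be dropped altogether. A sketch, assuming the same spanbr device as above:

```yaml
# 'ip address replace' succeeds whether or not the address already
# exists on the device, unlike 'ip address add':
- name: Ensure IP address
  command: "ip address replace {{ local_ip }} dev spanbr"
```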

React is running on Docker but isn't accessible via the public IP

Basically, I'm running a webapp with this stack:
Backend: FastAPI (python) which depends on database
Database: MySQL (being connected with via python-connector)
Frontend: React functional components (right now its dependency is set to backend, but technically it has no dependencies)
The backend connects to the database via localhost and is served on the public IP; I can access it just fine at mydomain.com:8000/api
However, when I try to access mydomain.com, where Docker is set up to forward port 3000 to port 80, it can't reach React and I get an ERR_CONNECTION_REFUSED error.
In a lot of similar issues people forget to forward port 3000 to 80; however, I'm doing this...
Also, when I install npm and the React project directly on my Ubuntu server, the frontend is accessible via mydomain.com:3000
In docker-compose.yml, when I change the ports from "${REACT_PORT}:80" to "3000:3000", it is accessible via our public IP.
Should I just port forward 3000 to 80 on the main Linux server? How would I do that?
Docker Files
Frontend docker file
# pull official base image for node
FROM node:16-buster-slim
# set working directory
WORKDIR /app
# add `/app/node_modules/.bin` to PATH
ENV PATH /app/node_modules/.bin:$PATH
# install dependencies
COPY package.json ./
COPY package-lock.json ./
RUN npm i -g npm@latest
RUN npm install
# copy all src files to the container
COPY . .
# Expose port
EXPOSE 3000
EXPOSE 80
# start the web app
CMD ["npm", "run", "start"]
docker-compose.yml
version: "3.9"
services:
  db:
    image: mysql:${MYSQL_VERSION}
    restart: always
    environment:
      - MYSQL_DATABASE=${MYSQL_DB}
      - MYSQL_USER=${MYSQL_USERNAME}
      - MYSQL_PASSWORD=${MYSQL_PASSWORD}
      - MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD}
    ports:
      - "${MYSQL_PORT}:${MYSQL_PORT}"
    expose:
      - "${MYSQL_PORT}"
    volumes:
      - db:/var/lib/mysql
    networks:
      - mysql_network
  backend:
    container_name: fastapi-backend
    build: ./backend/app
    volumes:
      - ./backend:/code
    ports:
      - "${FASTAPI_PORT}:${FASTAPI_PORT}"
    env_file:
      - .env
    depends_on:
      - db
    networks:
      - mysql_network
      - backend
    restart: always
  frontend:
    container_name: react-frontend
    build: ./frontend/client
    ports:
      # This doesn't work for some reason?
      - "${REACT_PORT}:80"
      # When I do this, I can access react via public ip just fine..:
      - "3000:3000"
    depends_on:
      - backend
    networks:
      - backend
    restart: always
volumes:
  db:
    driver: local
networks:
  backend:
    driver: bridge
  mysql_network:
    driver: bridge
React/NPM Files
package.json
{
  "name": "client",
  "version": "0.1.0",
  "private": true,
  "dependencies": {
    "@reduxjs/toolkit": "^1.8.2",
    "@testing-library/jest-dom": "^5.16.4",
    "@testing-library/react": "^13.1.1",
    "@testing-library/user-event": "^13.5.0",
    "bootstrap": "^5.1.3",
    "react": "^18.0.0",
    "react-dom": "^18.0.0",
    "react-redux": "^8.0.2",
    "react-router-dom": "^6.3.0",
    "react-scripts": "5.0.1",
    "universal-cookie": "^4.0.4",
    "web-vitals": "^2.1.4"
  },
  "scripts": {
    "start": "react-scripts start",
    "build": "react-scripts build",
    "test": "react-scripts test",
    "eject": "react-scripts eject"
  },
  "eslintConfig": {
    "extends": [
      "react-app",
      "react-app/jest"
    ]
  },
  "browserslist": {
    "production": [
      ">0.2%",
      "not dead",
      "not op_mini all"
    ],
    "development": [
      "last 1 chrome version",
      "last 1 firefox version",
      "last 1 safari version"
    ]
  }
}
Ubuntu UFW status
root#localhost:~/director# ufw status verbose
Status: active
Logging: on (low)
Default: deny (incoming), allow (outgoing), deny (routed)
New profiles: skip
To Action From
-- ------ ----
3306/tcp ALLOW IN Anywhere
22/tcp ALLOW IN Anywhere
80/tcp ALLOW IN Anywhere
443 ALLOW IN Anywhere
3000 ALLOW IN Anywhere
3306/tcp (v6) ALLOW IN Anywhere (v6)
22/tcp (v6) ALLOW IN Anywhere (v6)
80/tcp (v6) ALLOW IN Anywhere (v6)
443 (v6) ALLOW IN Anywhere (v6)
3000 (v6) ALLOW IN Anywhere (v6)
doing curl -l mydomain.com:8000
curl -l domain.com:8000
{"detail":"Not authenticated"}
Our backend is working and public
docker inspect react_frontend
[
{
"Id": "533f96d538bcf28827d5ad5ead69dc97b97c79bfef1d1e31e3847f15ceb0621f",
"Created": "2022-06-27T04:00:18.459838372Z",
"Path": "docker-entrypoint.sh",
"Args": [
"npm",
"start"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 57578,
"ExitCode": 0,
"Error": "",
"StartedAt": "2022-06-27T04:00:20.253120387Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:698f3ce60c6ba12f39dcf371bd4be556c1cb4aac9c7f95fd4589c079ec4e337b",
"ResolvConfPath": "/var/lib/docker/containers/533f96d538bcf28827d5ad5ead69dc97b97c79bfef1d1e31e3847f15ceb0621f/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/533f96d538bcf28827d5ad5ead69dc97b97c79bfef1d1e31e3847f15ceb0621f/hostname",
"HostsPath": "/var/lib/docker/containers/533f96d538bcf28827d5ad5ead69dc97b97c79bfef1d1e31e3847f15ceb0621f/hosts",
"LogPath": "/var/lib/docker/containers/533f96d538bcf28827d5ad5ead69dc97b97c79bfef1d1e31e3847f15ceb0621f/533f96d538bcf28827d5ad5ead69dc97b97c79bfef1d1e31e3847f15ceb0621f-json.log",
"Name": "/react-frontend",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "docker-default",
"ExecIDs": null,
"HostConfig": {
"Binds": [],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "scheduleplatform_backend",
"PortBindings": {
"80/tcp": [
{
"HostIp": "",
"HostPort": "3000"
}
]
},
"RestartPolicy": {
"Name": "always",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": [],
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "private",
"Dns": null,
"DnsOptions": null,
"DnsSearch": null,
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": false,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": null,
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"ConsoleSize": [
0,
0
],
"Isolation": "",
"CpuShares": 0,
"Memory": 0,
"NanoCpus": 0,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": null,
"BlkioDeviceReadBps": null,
"BlkioDeviceWriteBps": null,
"BlkioDeviceReadIOps": null,
"BlkioDeviceWriteIOps": null,
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": null,
"DeviceCgroupRules": null,
"DeviceRequests": null,
"KernelMemory": 0,
"KernelMemoryTCP": 0,
"MemoryReservation": 0,
"MemorySwap": 0,
"MemorySwappiness": null,
"OomKillDisable": null,
"PidsLimit": null,
"Ulimits": null,
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": [
"/proc/asound",
"/proc/acpi",
"/proc/kcore",
"/proc/keys",
"/proc/latency_stats",
"/proc/timer_list",
"/proc/timer_stats",
"/proc/sched_debug",
"/proc/scsi",
"/sys/firmware"
],
"ReadonlyPaths": [
"/proc/bus",
"/proc/fs",
"/proc/irq",
"/proc/sys",
"/proc/sysrq-trigger"
]
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/f9449860ca28f3641e74ef010fa6bb54383a33a975dce28519f5cfe609ff61d2-init/diff:/var/lib/docker/overlay2/7d8ad12b25cf6e4e0093e811370194cc6c1f72aa443d7d8f0ab69e97715f4c82/diff:/var/lib/docker/overlay2/b530ee218be07eb3a58df7a2f134b4e23cd22ad04b35dc7cbb75f3f07a206280/diff:/var/lib/docker/overlay2/94b5584c91af3dda4376092abdb9b7ff80937311741b96deca22e4b3aaf3a7c7/diff:/var/lib/docker/overlay2/4d8194094b484dfd94ca7473e3e13bfa5962879fd81942710f0aa096199a87c3/diff:/var/lib/docker/overlay2/aeb4c3cd54338efa1d44e1fc24c8a8f91bf672e47771fd0e308b3f65ab753f07/diff:/var/lib/docker/overlay2/9ddac11aa66b2d843b2391c3ad0798f8e372adfffeaec87882322de663b398e5/diff:/var/lib/docker/overlay2/e874192d15482d34ed8a80ef205b55005c84fcad0a7726822de258963191ec10/diff:/var/lib/docker/overlay2/8b162bdb3415e166a91518df47e1bed0bfff2d89568a2653f74e66cb3931773a/diff:/var/lib/docker/overlay2/a18946af0ee112612099819a9db0a3cbef5d1e2264904e6a8405868ab111634a/diff:/var/lib/docker/overlay2/b94b4e7d1af7280ad586d6f3366a161ec0d5cf9f858f0f20139b2ab0a72acfb7/diff:/var/lib/docker/overlay2/052c70a96ef76a7c32d609b98ec679ad7c6a74a28623b870bd729447bbc6086c/diff",
"MergedDir": "/var/lib/docker/overlay2/f9449860ca28f3641e74ef010fa6bb54383a33a975dce28519f5cfe609ff61d2/merged",
"UpperDir": "/var/lib/docker/overlay2/f9449860ca28f3641e74ef010fa6bb54383a33a975dce28519f5cfe609ff61d2/diff",
"WorkDir": "/var/lib/docker/overlay2/f9449860ca28f3641e74ef010fa6bb54383a33a975dce28519f5cfe609ff61d2/work"
},
"Name": "overlay2"
},
"Mounts": [],
"Config": {
"Hostname": "533f96d538bc",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"3000/tcp": {},
"80/tcp": {}
},
"Tty": false,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"PATH=/app/node_modules/.bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"NODE_VERSION=16.15.1",
"YARN_VERSION=1.22.19"
],
"Cmd": [
"npm",
"start"
],
"Image": "scheduleplatform_frontend",
"Volumes": null,
"WorkingDir": "/app",
"Entrypoint": [
"docker-entrypoint.sh"
],
"OnBuild": null,
"Labels": {
"com.docker.compose.config-hash": "a02c886422905cf77d66a479bb9967d4a85bdd8c3ed0369665dae6afc9b34099",
"com.docker.compose.container-number": "1",
"com.docker.compose.oneoff": "False",
"com.docker.compose.project": "scheduleplatform",
"com.docker.compose.project.config_files": "docker-compose.yml",
"com.docker.compose.project.working_dir": "/root/SchedulePlatform",
"com.docker.compose.service": "frontend",
"com.docker.compose.version": "1.29.2"
}
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "014cba17fd5ce070c42b6257a05149d7c78b7556d941d5f790c8c0f27e70a8b1",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {
"3000/tcp": null,
"80/tcp": [
{
"HostIp": "0.0.0.0",
"HostPort": "3000"
},
{
"HostIp": "::",
"HostPort": "3000"
}
]
},
"SandboxKey": "/var/run/docker/netns/014cba17fd5c",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"scheduleplatform_backend": {
"IPAMConfig": null,
"Links": null,
"Aliases": [
"533f96d538bc",
"frontend"
],
"NetworkID": "9a6ff7cb0c0d725de46e21e309cd0e5708995faffb00f099c8980127f8a9c68c",
"EndpointID": "c6c22efd535ee4beaa732730143268a4d372d2cecb2537f00013a7b057416852",
"Gateway": "172.20.0.1",
"IPAddress": "172.20.0.3",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:ac:14:00:03",
"DriverOpts": null
}
}
}
}
]
I gave up with docker and now use NGINX to point port 80 to 3000...
Basically, in my docker-compose.yml I point the react port to itself like:
frontend:
  container_name: react-frontend
  build: ./frontend/client
  ports:
    - "${REACT_PORT}:${REACT_PORT}"
  depends_on:
    - backend
  networks:
    - backend
  restart: always
And with NGINX, I edit the config file doing:
nano /etc/nginx/nginx.conf
then in the http {} section of this config, I removed include /etc/nginx/sites-enabled/*;
and added
server {
    listen 80;
    location / {
        proxy_pass http://localhost:3000/;
    }
}
and then reloaded NGINX. Now Docker publishes port 3000 and NGINX points 80 to that port.
This method works, but I still don't understand why Docker wasn't able to forward that port. I even checked netstat and nothing was running on port 80...
Weird...
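For what it's worth, the symptom is consistent with the container only listening on port 3000 (react-scripts' default), so "${REACT_PORT}:80" published host 80 to container port 80, where nothing listens. Publishing host port 80 straight to container port 3000 should achieve the same thing without NGINX. A sketch, untested against this exact setup:

```yaml
frontend:
  ports:
    # host port 80 -> container port 3000, where the dev server listens
    - "80:3000"
```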

Get specific parts of the output in Ansible

I need to get only specific parts of the output from Ansible, but it gives me all the information. I've tried filtering, unsuccessfully.
This is the playbook I am testing with (it has a loop, which is probably what is throwing me off):
- name: PLAYBOOK -> Testing
  hosts: esxi
  gather_facts: no
  vars_files:
    - vars.yml
  vars:
    vmnic:
      - vmnic0
      - vmnic1
  tasks:
    - name: Get NIC driver/firmware details - shell
      shell: esxcli network nic get -n {{ item }} | grep -e Driver -e Firmware -e Version -e Name
      loop: "{{ vmnic }}"
      register: nic_details

    - name: Output NIC driver/firmware details
      debug: var=item.stdout_lines
      loop: "{{ nic_details['results'] }}"
This is the output I get for one host. I only need the last bit, i.e. the Driver Info, Driver, Firmware Version, Version and Name of each vmnic.
ok: [srv-pocte02.test.local] => (item={'changed': True, 'end': '2022-05-19 15:50:50.326514', 'stdout': ' Driver Info: \n Driver: igbn\n Firmware Version: 1.61.0:0x8000090e\n Version: 1.4.1\n Name: vmnic0', 'cmd': 'esxcli network nic get -n vmnic0 | grep -e Driver -e Firmware -e Version -e Name', 'stderr': '', 'start': '2022-05-19 15:50:49.515808', 'invocation': {'module_args': {'stdin_add_newline': True, 'argv': None, 'stdin': None, 'removes': None, 'creates': None, 'warn': False, '_uses_shell': True, 'executable': None, 'chdir': None, 'strip_empty_ends': True, '_raw_params': 'esxcli network nic get -n vmnic0 | grep -e Driver -e Firmware -e Version -e Name'}}, 'rc': 0, 'msg': '', 'delta': '0:00:00.810706', 'stdout_lines': [' Driver Info: ', ' Driver: igbn', ' Firmware Version: 1.61.0:0x8000090e', ' Version: 1.4.1', ' Name: vmnic0'], 'stderr_lines': [], 'failed': False, 'item': 'vmnic0', 'ansible_loop_var': 'item'}) => {
"ansible_loop_var": "item",
"item": {
"ansible_loop_var": "item",
"changed": true,
"cmd": "esxcli network nic get -n vmnic0 | grep -e Driver -e Firmware -e Version -e Name",
"delta": "0:00:00.810706",
"end": "2022-05-19 15:50:50.326514",
"failed": false,
"invocation": {
"module_args": {
"_raw_params": "esxcli network nic get -n vmnic0 | grep -e Driver -e Firmware -e Version -e Name",
"_uses_shell": true,
"argv": null,
"chdir": null,
"creates": null,
"executable": null,
"removes": null,
"stdin": null,
"stdin_add_newline": true,
"strip_empty_ends": true,
"warn": false
}
},
"item": "vmnic0",
"msg": "",
"rc": 0,
"start": "2022-05-19 15:50:49.515808",
"stderr": "",
"stderr_lines": [],
"stdout": " Driver Info: \n Driver: igbn\n Firmware Version: 1.61.0:0x8000090e\n Version: 1.4.1\n Name: vmnic0",
"stdout_lines": [
" Driver Info: ",
" Driver: igbn",
" Firmware Version: 1.61.0:0x8000090e",
" Version: 1.4.1",
" Name: vmnic0"
]
},
"item.stdout_lines": [
" Driver Info: ",
" Driver: igbn",
" Firmware Version: 1.61.0:0x8000090e",
" Version: 1.4.1",
" Name: vmnic0"
]
}
ok: [srv-pocte02.test.local] => (item={'start': '2022-05-19 15:50:50.867894', 'msg': '', 'cmd': 'esxcli network nic get -n vmnic1 | grep -e Driver -e Firmware -e Version -e Name', 'rc': 0, 'invocation': {'module_args': {'stdin_add_newline': True, 'stdin': None, 'removes': None, 'strip_empty_ends': True, '_uses_shell': True, 'creates': None, 'warn': False, 'chdir': None, 'executable': None, '_raw_params': 'esxcli network nic get -n vmnic1 | grep -e Driver -e Firmware -e Version -e Name', 'argv': None}}, 'changed': True, 'stderr': '', 'end': '2022-05-19 15:50:51.706813', 'stdout': ' Driver Info: \n Driver: igbn\n Firmware Version: 1.61.0:0x8000090e\n Version: 1.4.1\n Name: vmnic1', 'delta': '0:00:00.838919', 'stdout_lines': [' Driver Info: ', ' Driver: igbn', ' Firmware Version: 1.61.0:0x8000090e', ' Version: 1.4.1', ' Name: vmnic1'], 'stderr_lines': [], 'failed': False, 'item': 'vmnic1', 'ansible_loop_var': 'item'}) => {
"ansible_loop_var": "item",
"item": {
"ansible_loop_var": "item",
"changed": true,
"cmd": "esxcli network nic get -n vmnic1 | grep -e Driver -e Firmware -e Version -e Name",
"delta": "0:00:00.838919",
"end": "2022-05-19 15:50:51.706813",
"failed": false,
"invocation": {
"module_args": {
"_raw_params": "esxcli network nic get -n vmnic1 | grep -e Driver -e Firmware -e Version -e Name",
"_uses_shell": true,
"argv": null,
"chdir": null,
"creates": null,
"executable": null,
"removes": null,
"stdin": null,
"stdin_add_newline": true,
"strip_empty_ends": true,
"warn": false
}
},
"item": "vmnic1",
"msg": "",
"rc": 0,
"start": "2022-05-19 15:50:50.867894",
"stderr": "",
"stderr_lines": [],
"stdout": " Driver Info: \n Driver: igbn\n Firmware Version: 1.61.0:0x8000090e\n Version: 1.4.1\n Name: vmnic1",
"stdout_lines": [
" Driver Info: ",
" Driver: igbn",
" Firmware Version: 1.61.0:0x8000090e",
" Version: 1.4.1",
" Name: vmnic1"
]
},
"item.stdout_lines": [
" Driver Info: ",
" Driver: igbn",
" Firmware Version: 1.61.0:0x8000090e",
" Version: 1.4.1",
" Name: vmnic1"
]
}
You can use the map filter to extract one field from a list of dictionaries. It can also be used to apply a filter to each item of the list, for example from_yaml, which turns the string you get in stdout into a dictionary.
Given the task:
- debug:
    var: nic_details.results | map(attribute="stdout") | map('from_yaml')
This would result in:
nic_details.results | map(attribute="stdout") | map('from_yaml'):
- Driver Info:
  Driver: igbn
  Firmware Version: 1.61.0:0x8000090e
  Version: 1.4.1
  Name: vmnic0
- Driver Info:
  Driver: igbn
  Firmware Version: 1.61.0:0x8000090e
  Version: 1.4.1
  Name: vmnic1
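Building on that, the parsed list can also be stored with set_fact for use in later tasks. A sketch; the variable name nic_info is an arbitrary choice:

```yaml
- set_fact:
    nic_info: "{{ nic_details.results | map(attribute='stdout') | map('from_yaml') | list }}"

# each entry is now a dictionary parsed from one vmnic's output:
- debug:
    var: nic_info[0]
```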

Looping over ansible_host is stuck on the first item

I am using the csv-source-of-truth module (https://github.com/joelwking/csv-source-of-truth) to get the IP and OS information from a CSV file. I was able to register this info into a vsheet and, using debug, I can see that I can loop through the contents of the vsheet.
However, when I use ios_command and try to loop through the vsheet, it seems to get stuck at the first entry of the vsheet.
These are the contents of the Inventory.csv file:
192.168.68.201,ios
192.168.68.202,ios
Code:
---
- hosts: localhost
  gather_facts: false
  tasks:
    - name: Block
      block:
        - name: Use CSV
          csv_to_facts:
            src: '{{ playbook_dir }}/NEW/Inventory.csv'
            vsheets:
              - INFO:
                  - IP
                  - OS

        - debug:
            msg: '{{ item.IP }}'
          loop: '{{ INFO }}'

        - name: Show Version
          vars:
            ansible_host: '{{ item.IP }}'
            ansible_network_os: '{{ item.OS }}'
            ansible_user: cisco
            ansible_ssh_pass: cisco
            ansible_connection: network_cli
            ansible_become: yes
            ansible_become_method: enable
          ios_command:
            commands: show version
          register: output
          loop: '{{ INFO }}'

        - name: Show the output of looped Show Version
          debug:
            var: output

        - name: Show just the stdout_lines
          debug:
            var: output.results.{{ item }}.stdout_lines
          with_sequence: "0-{{ output | length - 2 }}"
You will notice in the output that there are only results for R1 when you look at the uptime information, i.e. "R1 uptime is such and such" appears for both items.
PLAY [localhost] **********************************************************************************************************************************************
TASK [Use CSV] ************************************************************************************************************************************************
ok: [localhost]
TASK [debug] **************************************************************************************************************************************************
ok: [localhost] => (item={u'IP': u'192.168.68.201', u'OS': u'ios'}) => {
"msg": "192.168.68.201"
}
ok: [localhost] => (item={u'IP': u'192.168.68.202', u'OS': u'ios'}) => {
"msg": "192.168.68.202"
}
TASK [Show Version] *******************************************************************************************************************************************
ok: [localhost] => (item={u'IP': u'192.168.68.201', u'OS': u'ios'})
ok: [localhost] => (item={u'IP': u'192.168.68.202', u'OS': u'ios'})
TASK [Show the output of looped Show Version] *****************************************************************************************************************
ok: [localhost] => {
"output": {
"changed": false,
"msg": "All items completed",
"results": [
{
"ansible_loop_var": "item",
"changed": false,
"failed": false,
"invocation": {
"module_args": {
"auth_pass": null,
"authorize": null,
"commands": [
"show version"
],
"host": null,
"interval": 1,
"match": "all",
"password": null,
"port": null,
"provider": null,
"retries": 10,
"ssh_keyfile": null,
"timeout": null,
"username": null,
"wait_for": null
}
},
"item": {
"IP": "192.168.68.201",
"OS": "ios"
},
"stdout": [
-- Output removed for brevity
],
"stdout_lines": [
[
"-- Output removed for brevity
"R1 uptime is 1 hour, 34 minutes",
]
]
},
{
"ansible_loop_var": "item",
"changed": false,
"failed": false,
"invocation": {
"module_args": {
"auth_pass": null,
"authorize": null,
"commands": [
"show version"
],
"host": null,
"interval": 1,
"match": "all",
"password": null,
"port": null,
"provider": null,
"retries": 10,
"ssh_keyfile": null,
"timeout": null,
"username": null,
"wait_for": null
}
},
"item": {
"IP": "192.168.68.202",
"OS": "ios"
},
"stdout": [
-- Output removed for brevity
],
"stdout_lines": [
[
-- Output removed for brevity
"R1 uptime is 1 hour, 34 minutes",
]
]
}
]
}
}
TASK [Show just the stdout_lines] *****************************************************************************************************************************
ok: [localhost] => (item=0) => {
"ansible_loop_var": "item",
"item": "0",
"output.results.0.stdout_lines": [
[
-- Output removed for brevity
"R1 uptime is 1 hour, 34 minutes",
]
]
}
ok: [localhost] => (item=1) => {
"ansible_loop_var": "item",
"item": "1",
"output.results.1.stdout_lines": [
[
-- Output removed for brevity
"R1 uptime is 1 hour, 34 minutes",
]
]
}
PLAY RECAP ****************************************************************************************************************************************************
localhost : ok=5 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
Try creating an inventory:
- name: Create inventory
  add_host:
    hostname: '{{ item.IP }}'
    groups: temp_group_01
    ansible_network_os: '{{ item.OS }}'
    ansible_user: cisco
    ansible_ssh_pass: cisco
    ansible_connection: network_cli
    ansible_become: yes
    ansible_become_method: enable
  loop: '{{ INFO }}'
and delegate to the hosts
- name: Show Version
  ios_command:
    commands: show version
  register: output
  delegate_to: '{{ item }}'
  loop: "{{ groups['temp_group_01'] }}"
Explanation
From the play below it can be seen that the connection does not honour the changed ansible_host and keeps using the first item in the loop.
- hosts: test_01
  tasks:
    - command: hostname
      register: result
      vars:
        ansible_host: "{{ item }}"
      loop:
        - test_02
        - test_03

    - debug:
        msg: "{{ result.results | map(attribute='stdout') | list }}"
gives
TASK [command] ******************************************************************************
changed: [test_01] => (item=test_02)
changed: [test_01] => (item=test_03)
TASK [debug] ********************************************************************************
ok: [test_01] => {
"msg": [
"test_02",
"test_02"
]
}
This behavior is most probably caused by the connection plugin, because vars otherwise works as expected. The play below
- hosts: test_01
  tasks:
    - command: echo "{{ ansible_host }}"
      register: result
      vars:
        ansible_host: "{{ item }}"
      loop:
        - test_02
        - test_03

    - debug:
        msg: "{{ result.results | map(attribute='stdout') | list }}"
gives
TASK [command] ******************************************************************************
changed: [test_01] => (item=test_02)
changed: [test_01] => (item=test_03)
TASK [debug] ********************************************************************************
ok: [test_01] => {
"msg": [
"test_02",
"test_03"
]
}
As a result, it's not possible to loop over ansible_host. Instead, delegate_to should be used.

Loop through a registered variable with with_dict in Ansible

How do I refer to elements of a dictionary in a registered value?
My Ansible playbook looks like this:
- command: echo {{ item }}
  with_dict:
    - foo
    - bar
    - baz
  register: echos
The registered variable "echos" will be a dictionary:
{
"changed": true,
"msg": "All items completed",
"results": [
{
"changed": true,
"cmd": [
"echo",
"foo"
],
"delta": "0:00:00.002780",
"end": "2014-06-08 16:57:52.843478",
"invocation": {
"module_args": "echo foo",
"module_name": "command"
},
"item": "foo",
"rc": 0,
"start": "2014-06-08 16:57:52.840698",
"stderr": "",
"stdout": "foo"
},
{
"changed": true,
"cmd": [
"echo",
"bar"
],
"delta": "0:00:00.002736",
"end": "2014-06-08 16:57:52.911243",
"invocation": {
"module_args": "echo bar",
"module_name": "command"
},
"item": "bar",
"rc": 0,
"start": "2014-06-08 16:57:52.908507",
"stderr": "",
"stdout": "bar"
},
{
"changed": true,
"cmd": [
"echo",
"baz"
],
"delta": "0:00:00.003050",
"end": "2014-06-08 16:57:52.979928",
"invocation": {
"module_args": "echo baz",
"module_name": "command"
},
"item": "baz",
"rc": 0,
"start": "2014-06-08 16:57:52.976878",
"stderr": "",
"stdout": "baz"
}
]
}
Now, if I want to refer to the "changed" field of the "foo" element of the echos dictionary, how do I do that?

First of all, your example is flawed: with_dict can't iterate over a list.
But the general approach is as follows:
---
- hosts: localhost
  gather_facts: no
  tasks:
    - command: echo {{ item }}
      with_items:
        - foo
        - bar
        - baz
      register: echos

    # Iterate all results
    - debug: msg='name {{ item.item }}, changed {{ item.changed }}'
      with_items: '{{ echos.results }}'

    # Select 'changed' attribute from 'foo' element
    - debug: msg='foo changed? {{ echos.results | selectattr("item","equalto","foo") | map(attribute="changed") | first }}'
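The same selection can also be written without filters, as a loop with a condition, if that reads clearer:

```yaml
# Prints the 'changed' field only for the result whose loop item was 'foo':
- debug:
    msg: "foo changed? {{ item.changed }}"
  with_items: "{{ echos.results }}"
  when: item.item == 'foo'
```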
