How to use with_sequence in Ansible - loops

I want to run an Ansible role by iterating it a number of times based on a count value I provide.
For example: below is the role's main.yml, which includes a YAML file (create_db.yml) that should execute a number of times driven by a loop.
Here is my small piece of code, where I used the with_sequence lookup. If you have any other suggestion for running the create_db.yml file multiple times, please share it, or show how to loop with the code I currently have.
Could someone help me with this?
---
# tasks file for create db
- hosts: localhost
  become: yes
  tasks:
    - include: create_db.yml
      with_sequence: count = 2
I am getting the below error while executing the playbook:
fatal: [localhost]: FAILED! => {"msg": "unrecognized arguments to with_sequence: [u'_raw_params']"}

As mentioned in the comments, the use case for running the same task file twice without any change in parameters is not clear. But the error...
unrecognized arguments to with_sequence
... indicates that the syntax for specifying count is incorrect. Please note that there should be no spaces around =, i.e. count=2.
tasks:
  - include: create_db.yml
    with_sequence: count=2
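As a side note (my own suggestion, not part of the accepted fix): on Ansible 2.4+ the deprecated include can be replaced with include_tasks, and with_sequence also exposes the current iteration number as item, which the included file can use:

```yaml
tasks:
  # include_tasks replaces the deprecated include; inside create_db.yml
  # the variable 'item' holds the current sequence value (1, then 2)
  - include_tasks: create_db.yml
    with_sequence: count=2
```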

Related

How to find the contents of a file in Windows using Ansible

I need to copy the contents of a file on a Windows host into a variable.
I tried the below, but I am getting an error.
- name: test
  set_fact:
    new_var: "{{ lookup('file', 'C:\\temp\\test.csv') }}"
Error is:
"An unhandled exception occurred while running the lookup plugin 'file'. Error was a , original message: could not locate file in lookup: C:\temp\test.csv"
The file is present in the remote windows server. Please let me know what is wrong here or please suggest an alternative way.
I had the same problem and didn't get it to work with the file lookup plugin.
As an alternative I did:
- name: get content
  win_shell: 'type C:\Temp\ansible.readme'
  register: content

- name: write content
  debug:
    msg: "{{ content.stdout_lines }}"
The reason the OP's solution doesn't work is that the lookup runs on the localhost (the Ansible control node, where the playbook is stored). The lookup plugin can read file content either as a "file" or as a "template": template replaces {{ variables }} with their values, while file just reads the file into a variable.
C:\temp\test.csv does not exist on the control node, hence it fails.
amutter's solution works by running a Windows command and then passing the output into a variable. The tasks he ran are:
# This runs the Windows 'type' command to read the contents of the file and
# print them to the console. The console output is registered in 'content'.
- name: get content
  win_shell: 'type C:\Temp\ansible.readme'
  register: content

# content.stdout returns the whole console output; content.stdout_lines is
# an array of lines, so a specific line can be picked by index.
- name: write content
  debug:
    msg: "{{ content.stdout }}"
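Another option worth mentioning (my own suggestion, not from the answers above) is the slurp module, which also supports Windows targets and returns the file content base64-encoded, avoiding a shell command entirely:

```yaml
# Sketch: fetch a remote file's content without running a shell command.
# slurp returns the bytes base64-encoded in the 'content' key.
- name: read remote file with slurp
  slurp:
    src: C:\Temp\ansible.readme
  register: raw_content

- name: show the decoded content
  debug:
    msg: "{{ raw_content.content | b64decode }}"
```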

Ansible: switch users in Ansible 2.4.2

I am using Ansible 2.4.2.0 and want to connect with ansible_ssh_user as user1 and then run the commands on the remote box as user2. How can we achieve this? I have tried using:
become: yes
become_user: user2
But this is not working; it says user1 does not have privileges to execute commands on the remote machine as user2.
Can someone please help?
I have continuing problems with this, and it always ends up being a struggle due to nonsense with permissions.
1) First of all refer to this specific bit:
https://docs.ansible.com/ansible/2.5/user_guide/become.html#becoming-an-unprivileged-user
And usually this line in config:
allow_world_readable_tmpfiles = true
should get you around your problem
2) The next option is a bit of a hack and is to simply run it directly as a user:
"sudo -u USER bash -c 'COMMAND TO RUN HERE'"
3) Finally, the last option: do you actually need to run that command as that user? Can it simply be done using sudo: true together with module parameters like owner: otheruser?
Hope this helps
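Putting option 1 together, a minimal sketch (the inventory group and user names are placeholders) that connects as user1 and runs tasks as user2 would look like:

```yaml
- hosts: remote_box          # placeholder inventory group
  remote_user: user1         # connect over SSH as user1
  become: yes
  become_user: user2         # escalate and run tasks as user2
  tasks:
    - name: verify which user the tasks actually run as
      command: whoami
      register: who

    - debug:
        var: who.stdout
```

When user2 is an unprivileged user, the allow_world_readable_tmpfiles = true line mentioned above goes in the [defaults] section of ansible.cfg on the control node.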

Ansible loop related issues

I have a playbook which has multiple roles and a serial setup, so that first it runs on one machine and then on the rest of them. In one of the roles I have the following tasks:
- name: getting dbnodes IP addresses
  local_action: shell echo "{% for host in groups['dbnodes'] %}{{ hostvars[host]['ansible_eth0']['ipv4']['address'] }},{% endfor %}"
  run_once: true
  register: IPS
Basically what I want to do is to gather the IP addresses of all the hosts and register it with IPS for further usage. But the task is failing because of the serial (I think) with the following error.
TASK [dbcluster : getting dbnodes IP addresses] ********************************
fatal: [162.220.52.190]: FAILED! => {"failed": true, "msg": "the field 'action' has an invalid value, which appears to include a variable that is undefined. The error was: 'dict object' has no attribute 'ansible_eth0'\n\nThe error appears to have been in '/root/tenon-delivery/ansible/roles/dbcluster/tasks/main.yml': line 52, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: getting dbnodes IP addresses\n ^ here\n"}
While running ansible dbnodes -m setup I can see that ansible_eth0 has a proper value, so I don't understand why it is reported as undefined.
Any idea how to gather the facts on all machines at the same time, while still having several tasks/handlers run serialized?
The ansible_eth0 fact may be unknown at the time your task runs.
You may want to add a fact-gathering play at the very top of your playbook:
- hosts: dbnodes
  gather_facts: yes
  tasks:
    - debug: msg="facts gathering"

- hosts: othernodes
  tasks:
    - name: getting dbnodes IP addresses
      ...
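Once the facts are gathered, the shell/echo loop can also be avoided entirely; a sketch using the map('extract', ...) filter (the variable name ips is my choice):

```yaml
# Build a plain list of dbnodes addresses from hostvars, with no shell call.
# 'extract' walks hostvars[host]['ansible_eth0']['ipv4']['address'] per host.
- name: collect dbnodes IP addresses without shelling out
  set_fact:
    ips: "{{ groups['dbnodes']
             | map('extract', hostvars, ['ansible_eth0', 'ipv4', 'address'])
             | list }}"
  run_once: true
```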

How to store command output into array in Ansible?

Essentially, I want to be able to handle "wildcard filenames" in Linux using Ansible. In essence, this means using the ls command with part of a filename followed by a "*" so that it will list ONLY certain files.
However, I cannot store the output properly in a variable, as there will likely be more than one filename returned. I want to store these results, however many there might be, in an array during one task, and then retrieve all of them from the array in a later task. Furthermore, since I don't know how many files might be returned, I cannot write a task per filename, so an array makes more sense.
The reason behind this is that there are files in a random storage location that change often, but they always have the same first half; it's the second half of their names that is random, and I don't want to hard-code that into Ansible at all.
I'm not certain how to properly implement/manipulate an array in Ansible, so the following code is an example of what I'm "trying" to accomplish. Obviously it won't function as intended if more than one filename is returned, which is why I'm asking for assistance on this topic:
- hosts: <randomservername>
  remote_user: remoteguy
  become: yes
  become_method: sudo
  vars:
    aaaa: b
  tasks:
    - name: Copy over all random file contents from directory on control node to target clients. This is to show how to manipulate wildcard filenames.
      copy:
        src: /opt/home/remoteguy/copyable-files/testdir/
        dest: /tmp/
        owner: remoteguy
        mode: u=rwx,g=r,o=r
      ignore_errors: yes

    - name: Determine the current filenames and store in variable for later use, obviously for this exercise we know part of the filenames.
      shell: "ls {{ item }}"
      changed_when: false
      register: annoying
      with_items: [/tmp/this-name-is-annoying*, /tmp/this-name-is-also*]

    - name: Run command to cat each file and then capture that output.
      shell: cat {{ annoying }}
      register: annoying_words

    - debug: msg="Here is the output of the two files. {{ annoying_words.stdout_lines }}"

    - name: Now, remove the wildcard files from each server to clean up.
      file:
        path: '{{ item }}'
        state: absent
      with_items:
        - "{{ annoying.stdout }}"
I understand the YAML format got a little mussed up, but if it's fixed, this "would" run normally, it just won't give me the output I'm looking for. Thus if there were 50 files, I'd want ansible to be able to manipulate them all, and/or be able to delete them all.. etc etc etc.
If anyone here could let me know how to properly utilize an array in the above test code fragment that would be fantastic!
Ansible stores the output of shell and command action modules in stdout and stdout_lines variables. The latter contains separate lines of the standard output in a form of a list.
To iterate over the elements, use:
with_items:
  - "{{ annoying.stdout_lines }}"
You should remember that parsing ls output might cause problems in some cases.
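To avoid parsing ls at all, the find module returns the matched files as structured data; a sketch (paths and patterns taken from the question):

```yaml
# find lists matching files into 'found.files', a list of dicts with a
# 'path' key, so no shell output needs to be parsed.
- name: find the wildcard files as structured results
  find:
    paths: /tmp
    patterns: 'this-name-is-annoying*,this-name-is-also*'
  register: found

- name: remove every matched file
  file:
    path: "{{ item.path }}"
    state: absent
  with_items: "{{ found.files }}"
```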
Can you try as below?
- name: Run command to cat each file and then capture that output.
  shell: cat {{ item.stdout_lines }}
  register: annoying_words
  with_items:
    - "{{ annoying.results }}"
annoying.stdout_lines is already a list.
From the documentation of stdout_lines:
When stdout is returned, Ansible always provides a list of strings, each containing one item per line from the original output.
To assign the list to another variable do:
..
register: annoying

- set_fact:
    varName: "{{ annoying.stdout_lines }}"

# print the first element of the list
- debug: msg="{{ varName | first }}"

Ansible read after write file operations in playbooks

I am working on a project using Ansible which requires me to write some data to a file using one playbook and then read the data from the same file using another playbook.
The playbook will be something like this
test1.yml
---
- hosts: localhost
  connection: local
  gather_facts: no
  tasks:
    - name: Writing data to test file
      local_action: shell echo "data:" {{ 100 | random(step=10) }} > test.txt

- include: test2.yml
and would need to read it using test2.yml
---
- hosts: localhost
  connection: local
  gather_facts: no
  vars_files:
    - test.txt
  tasks:
    - name: Writing data to test file
      local_action: shell echo "{{ data }}" > result.txt
However, the second playbook is not able to read the latest data written by the first playbook.
If I view the data written to test.txt and result.txt, they are different. Is there a way to achieve consistency between the results of the playbook calls?
Are those two playbooks called separately? If they are included inside a master playbook, then this would explain it. All includes in the master playbook are resolved before execution, so Ansible would already have read both playbooks and the vars_file before any of them gets executed. You should be able to solve this by dynamically including the vars file during play with the include_vars module.
If I was wrong with my assumption and you're not including the playbooks in a parent playbook: What exactly do you mean by "different"? Is it completely different data or is it a formatting issue? I'm puzzled how data in general could not be consistent between calls. There is no magic in writing to and reading from a file. That should theoretically work.
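Assuming the master-playbook scenario, the include_vars fix would look like this in test2.yml (file names taken from the question):

```yaml
---
- hosts: localhost
  connection: local
  gather_facts: no
  tasks:
    # include_vars reads the file at execution time, i.e. after test1.yml
    # has already written it, unlike vars_files which is resolved up front
    - name: read test.txt at run time
      include_vars:
        file: test.txt

    - name: write the freshly read value to result.txt
      local_action: shell echo "{{ data }}" > result.txt
```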
