I need to copy the contents of a file on a Windows host into a variable.
I tried the task below, but I am getting an error.
- name: test
  set_fact:
    new_var: "{{ lookup('file', 'C:\\temp\\test.csv') }}"
Error is:
"An unhandled exception occurred while running the lookup plugin 'file'. Error was a , original message: could not locate file in lookup: C:\temp\test.csv"
The file is present on the remote Windows server. Please let me know what is wrong here, or suggest an alternative way.
I had the same problem and didn't get it to work with the file lookup plugin.
As an alternative I did:
- name: get content
  win_shell: 'type C:\Temp\ansible.readme'
  register: content

- name: write content
  debug:
    msg: "{{ content.stdout_lines }}"
The reason the OP's solution doesn't work is that the lookup is run on the localhost (the Ansible control machine, where the playbook is stored). The lookup plugin can read file content as either a "file" or as a "template": template replaces {{ variables }} with their values, while file just reads the file into a variable.
C:\temp\test.csv does not exist in the playbook folder on the control machine, hence it fails.
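For illustration, a minimal sketch of the two lookup styles; both read from the control machine, and files/motd.txt is a hypothetical path relative to the playbook:

# Both tasks run entirely on the control machine; the path is resolved
# relative to the playbook, never on the managed host.
- name: read a local file verbatim (file lookup)
  set_fact:
    raw_content: "{{ lookup('file', 'files/motd.txt') }}"

- name: read the same file, rendering any Jinja2 inside it (template lookup)
  set_fact:
    rendered_content: "{{ lookup('template', 'files/motd.txt') }}"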
amutter's solution works by running a Windows command and then passing the output into a variable. The command he ran is:
# This runs the Windows command 'type' to read the contents of the file and
# print them on the console. The console output is registered in the
# variable 'content'.
- name: get content
  win_shell: 'type C:\Temp\ansible.readme'
  register: content

# content.stdout returns the whole console output as one string; stdout_lines
# is an array of lines and can be used to access a specific line.
- name: write content
  debug:
    msg: "{{ content.stdout }}"
I want to use the content of a .txt file, and I tried to debug the content using the following code.
- name: Find .txt files
  find:
    paths: "{{ output_path }}"
    patterns: '*.txt,'
  register: file_path

- name: Show content
  debug:
    msg: "{{ lookup('file', item.path) }}"
  with_items: "{{ file_path.files }}"
But I got this error.
[WARNING]: Unable to find '/path/file.txt' in expected paths (use -vvvvv to see paths)
TASK [Show content] ******************************************************
fatal: [10.0.2.40]: FAILED! => {"msg": "An unhandled exception occurred
while running the lookup plugin 'file'. Error was a <class 'ansible.errors.AnsibleError'>,
original message: could not locate file in lookup: /path/file.txt"}
How do I fix this error?
There's nothing inherently wrong with your playbook if you are running it locally, except that you have a comma inside the find pattern (this might be just a typo, but you should check it out). If you are running this playbook against a remote server, you should use a different module such as slurp or fetch.
slurp works great if you need to keep the contents of the txt file in memory to use in another task. Bear in mind that Ansible will encode the slurp module's output in base64, so you have to decode it before you use it. From the module's examples page:
- name: Find out what the remote machine's mounts are
  ansible.builtin.slurp:
    src: /proc/mounts
  register: mounts

- name: Print returned information
  ansible.builtin.debug:
    msg: "{{ mounts['content'] | b64decode }}"
You can verify what I am saying with the following example:
I tried replicating your situation locally. In a temporary folder, I ran the following command to populate it with many .txt files:

for i in {001..099}; do echo "$i" > "file_$i.txt"; done
Then I wrote the same playbook that you provided:
#
# show_contents.yml
#
- hosts: localhost
  gather_facts: no
  tasks:
    - name: Find .txt files
      find:
        paths: "{{ output_files }}"
        patterns: "*.txt"
      register: file_path

    - name: Debug the file_path variable
      debug:
        var: file_path

    - name: Get the contents of the files using debug
      debug:
        msg: "{{ lookup('file', item.path) }}"
      loop: "{{ file_path.files }}"
If you run this playbook, passing the appropriate output_files variable with --extra-vars, the playbook works fine.
ansible-playbook show_contents.yml --extra-vars "output_files=/tmp/ansible"
You'll see that the playbook runs without an issue. Try using this example to figure out what you are trying to achieve, and then modify the playbook to use some of the previously mentioned modules when working with remote servers, as sketched below.
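For instance, a sketch of the remote variant combining find (which runs on the managed host) with slurp, under the same output_files assumption:

- name: Find .txt files on the remote host
  find:
    paths: "{{ output_files }}"
    patterns: "*.txt"
  register: file_path

# slurp returns each file base64-encoded; registering inside the
# loop collects one result per file under remote_files.results.
- name: Read each remote file
  ansible.builtin.slurp:
    src: "{{ item.path }}"
  loop: "{{ file_path.files }}"
  register: remote_files

- name: Show the decoded contents
  debug:
    msg: "{{ item.content | b64decode }}"
  loop: "{{ remote_files.results }}"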
Essentially, I want to be able to handle wildcard filenames in Linux using Ansible: run the ls command with part of a filename followed by an "*" so that it lists ONLY certain files.
However, I cannot store the output properly in a variable, as there will likely be more than one filename returned. I want to store these results, however many there are, in an array during one task, and then retrieve all of them from the array in a later task. Since I don't know how many files might be returned, I cannot write a task per filename; an array makes more sense.
The reason behind this is that there are files in a random storage location that are changed often, but they always have the same first half. It's their second half of their names that are random, and I don't want to have to hard code that into ansible at all.
I'm not at all certain how to properly implement/manipulate an array in Ansible, so the following code is an example of what I'm "trying" to accomplish. Obviously it won't function as intended if more than one filename is returned, which is why I'm asking for assistance on this topic:
- hosts: <randomservername>
  remote_user: remoteguy
  become: yes
  become_method: sudo
  vars:
    aaaa: b
  tasks:
    - name: Copy over all random file contents from directory on control node to target clients. This is to show how to manipulate wildcard filenames.
      copy:
        src: /opt/home/remoteguy/copyable-files/testdir/
        dest: /tmp/
        owner: remoteguy
        mode: u=rwx,g=r,o=r
      ignore_errors: yes

    - name: Determine the current filenames and store in variable for later use, obviously for this exercise we know part of the filenames.
      shell: "ls {{ item }}"
      changed_when: false
      register: annoying
      with_items: [/tmp/this-name-is-annoying*, /tmp/this-name-is-also*]

    - name: Run command to cat each file and then capture that output.
      shell: cat {{ annoying }}
      register: annoying_words

    - debug: msg="Here is the output of the two files. {{ annoying_words.stdout_lines }}"

    - name: Now, remove the wildcard files from each server to clean up.
      file:
        path: "{{ item }}"
        state: absent
      with_items:
        - "{{ annoying.stdout }}"
I understand the YAML format above got a little mussed up, but once that's fixed, this would run; it just won't give me the output I'm looking for. If there were 50 files, I'd want Ansible to be able to manipulate all of them, and/or delete them all, etc.
If anyone could show me how to properly utilize an array in the above test code fragment, that would be fantastic!
Ansible stores the output of the shell and command action modules in the stdout and stdout_lines variables. The latter contains the separate lines of the standard output in the form of a list.
To iterate over the elements, use:
with_items:
  - "{{ annoying.stdout_lines }}"
You should remember that parsing ls output might cause problems in some cases.
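One way to avoid parsing ls altogether is to glob with the find module, which runs on the remote host. A sketch, assuming the /tmp patterns from the question:

- name: Find the wildcard files on the remote host
  find:
    paths: /tmp
    patterns: "this-name-is-annoying*,this-name-is-also*"
  register: annoying_files

# find returns structured results, so each item carries its full path.
- name: Remove them to clean up
  file:
    path: "{{ item.path }}"
    state: absent
  with_items: "{{ annoying_files.files }}"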
Can you try as below?
- name: Run command to cat each file and then capture that output.
  shell: cat {{ item.stdout_lines | join(' ') }}
  register: annoying_words
  with_items:
    - "{{ annoying.results }}"

(joining the lines, since item.stdout_lines is a list and would otherwise render as a Python-style list inside the command)
annoying.stdout_lines is already a list.
From the documentation of stdout_lines:
When stdout is returned, Ansible always provides a list of strings, each containing one item per line from the original output.
To assign the list to another variable, do:

..
register: annoying

- set_fact:
    varName: "{{ annoying.stdout_lines }}"

# print the first element of the list
- debug: msg="{{ varName | first }}"
I have several files on a server that I need to download from an ansible playbook, but because the connection has good chances of interruption I would like to check their integrity after download.
I'm considering two approaches:
Store the md5 of those files in ansible as vars
Store the md5 of those files on the server as files with the extension .md5. Such a pair would look like: file.extension and file.extension.md5.
The first approach introduces overhead in maintaining the md5s in Ansible: every time someone adds a new file, they need to make sure they add the md5 in the right place.
But as an advantage, there is built-in support for this via the checksum argument of the get_url action. E.g.:

get_url: url=http://example.com/path/file.conf dest=/etc/foo.conf checksum=md5:66dffb5228a211e61d6d7ef4a86f5758
The second approach is more elegant and narrows the responsibility: when someone adds a new file on the server, they make sure to add the .md5 alongside it, and won't even need to touch the Ansible playbooks.
Is there a way to use the checksum approach to match the md5 from a file?
If you wish to go with your method of storing the checksum in files on the server, you can definitely use the get_url checksum arg to validate it.
Download the .md5 file and read it into a var:
- set_fact:
    md5_value: "{{ lookup('file', '/etc/myfile.md5') }}"
And then when you download the file, pass the contents of md5_value to get_url:
- get_url:
    url: http://example.com
    dest: /my/dest/file
    checksum: "md5:{{ md5_value }}"
    force: true
Note that it is vital to specify a path to a file in dest; if you set this to a directory (and have a filename in url), the behavior changes significantly.
Note also that you probably need the force: true. This will cause a new file to download every time you run it. The checksum is only triggered when files are downloaded. If the file already exists on your host it won't bother to validate the sum of the existing file, which might not be desirable.
To avoid the download every time you could stat to see if the file already exists, see what its sum is, and set the force param conditionally.
- stat:
    path: /my/dest/file
  register: existing_file

- set_fact:
    force_new_download: "{{ existing_file.stat.md5 != md5_value }}"
  when: existing_file.stat.exists

- get_url:
    url: http://example.com
    dest: /my/dest/file
    checksum: "md5:{{ md5_value }}"
    force: "{{ force_new_download | default(false) }}"
Also, if you are pulling the sums/artifacts from some sort of web server, you can get the value of the sum right from the URL without having to download the file to the host first. Here is an example using a Nexus server that hosts the artifacts and their sums:
- set_fact:
    md5_value: "{{ item }}"
  with_url: http://my_nexus_server.com:8081/nexus/service/local/artifact/maven/content?g=log4j&a=log4j&v=1.2.9&r=central&e=jar.md5
This could be used in place of using get_url to download the md5 file and then using lookup to read from it.
With the stat module:
- stat:
    path: path/to/your/file
  register: your_file_info

- debug:
    var: your_file_info.stat.md5
An elegant solution is to combine the three modules below, all provided by Ansible itself:

http://docs.ansible.com/ansible/stat_module.html
Use the stat module to extract the md5 value and register it in a variable.

http://docs.ansible.com/ansible/copy_module.html
While using the copy module to copy the file to the server, register the returned md5 value in another variable.

http://docs.ansible.com/ansible/playbooks_conditionals.html
Use conditionals to compare the two variables above and report whether the file was copied properly or not.
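A minimal sketch of that flow, assuming the Ansible versions these doc links refer to (where stat accepts get_md5 and the copy result includes an md5sum field); the paths are placeholders:

# md5 of the source file, computed on the control machine
- stat:
    path: /path/to/source/file
    get_md5: yes
  delegate_to: localhost
  register: src_info

# copy the file to the server and register the module's return value
- copy:
    src: /path/to/source/file
    dest: /path/to/dest/file
  register: copy_result

# compare the two checksums and report whether the copy is intact
- debug:
    msg: "{{ 'File copied properly' if copy_result.md5sum == src_info.stat.md5 else 'Checksum mismatch' }}"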
Another solution is to use url lookup (tested on ansible-2.3.1.0):
- name: Download
  get_url:
    url: "http://localhost/file"
    dest: "/tmp/file"
    checksum: "md5:{{ lookup('url', 'http://localhost/file.md5') }}"
I wrote an Ansible module with the help of https://pypi.org/project/checksumdir.
The module can be found here
Example:
- get_checksum:
    path: path/to/directory
    checksum_type: sha1/md5/sha256/sha512
  register: checksum
Environment: Ansible 1.9.4 or 1.9.2, Linux CentOS 6.5.
I have a role build where:
$ cat roles/build/defaults/main.yml
---
build_user: confman
build_group: confman
tools_dir: ~/tools

$ cat roles/build/tasks/main.yml
- debug: msg="User is = {{ build_user }} -- {{ tools_dir }}"
  tags:
    - koba

- name: Set directory ownership
  file: path="{{ tools_dir }}" owner={{ build_user }} group={{ build_group }} mode=0755 state=directory recurse=yes
  become_user: "{{ build_user }}"
  tags:
    - koba

- name: Set private key file access
  file: path="{{ item }}" owner={{ build_user }} group={{ build_group }} mode=0600 state=touch
  with_fileglob:
    - "{{ tools_dir }}/vmwaretools-lib-*/lib/insecure_private_key"
  # with_items:
  #   - ~/tools/vmwaretools/lib/insecure_private_key
  become_user: "{{ build_user }}"
  tags:
    - koba
In my workspace, the hosts file (inventory) contains:
[ansible_servers]
server01.project.jenkins
site.yml (playbook) contains:
---
- hosts: ansible_servers
  sudo: yes
  roles:
    - build
I'm running the following command:
$ ansible-playbook site.yml -i hosts -u confman --private-key ${DEPLOYER_KEY_FILE} -t koba
I'm getting the following error. For some reason, with the with_fileglob loop, become_user is NOT using the home directory (~) of the confman user (which is set in the variable {{ build_user }}); instead it's picking up my own user ID (c123456).
In the console output of the debug action, it's clear that the user (due to become_user) is confman and that the value of the tools_dir variable is ~/tools.
PLAY [ansible_servers] ********************************************************
GATHERING FACTS ***************************************************************
ok: [server01.project.jenkins]
TASK: [build | debug msg="User is = {{ build_user }} -- {{ tools_dir }}"] *****
ok: [server01.project.jenkins] => {
"msg": "User is = confman -- ~/tools"
}
TASK: [build | Set directory ownership] ***************************************
changed: [server01.project.jenkins]
TASK: [build | Set private key file access] ***********************************
failed: [server01.project.jenkins] => (item=/user/home/c123456/tools/vmwaretools-lib-1.0.8-SNAPSHOT/lib/insecure_private_key) => {"failed": true, "item": "/user/home/c123456/tools/vmwaretools-lib-1.0.8-SNAPSHOT/lib/insecure_private_key", "parsed": false}
BECOME-SUCCESS-ajtxlfymjcquzuolgfrrxbssfolqgrsg
Traceback (most recent call last):
File "/tmp/ansible-tmp-1449615824.69-82085663620220/file", line 1994, in <module>
main()
File "/tmp/ansible-tmp-1449615824.69-82085663620220/file", line 372, in main
open(path, 'w').close()
IOError: [Errno 2] No such file or directory: '/user/home/c123456/tools/vmwaretools-lib-1.0.8-SNAPSHOT/lib/insecure_private_key'
OpenSSH_5.3p1, OpenSSL 1.0.1e-fips 11 Feb 2013
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: Applying options for *
debug1: auto-mux: Trying existing master
debug1: mux_client_request_session: master session id: 2
debug1: mux_client_request_session: master session id: 2
Shared connection to server01.project.jenkins closed.
As per the error above, the file it's trying for the variable item is /user/home/c123456/tools/vmwaretools-lib-1.0.8-SNAPSHOT/lib/insecure_private_key, but there's no such file inside my user ID's home directory. This file does exist in user confman's home directory,
i.e. the following files exist:
/user/home/confman/tools/vmwaretools-lib-1.0.7-SNAPSHOT/lib/insecure_private_key
/user/home/confman/tools/vmwaretools-lib-1.0.7/lib/insecure_private_key
/user/home/confman/tools/vmwaretools-lib-1.0.8-SNAPSHOT/lib/insecure_private_key
All I want is to iterate over these files in the ~confman/tools/vmwaretools-lib-*/.. location containing the private key file and change their permissions, but using with_fileglob with become_user to set the user during the action is NOT working.
If I comment out the with_fileglob section and uncomment the with_items section in tasks/main.yml, then become_user works fine and picks ~confman (instead of ~c123456), giving the following output:
TASK: [build | Set private key file access] ***********************************
changed: [server01.project.jenkins] => (item=~/tools/vmwaretools/lib/insecure_private_key)
One strange thing I found is that there is no user c123456 on the target machine (server01.project.jenkins). That tells me with_fileglob is evaluating the glob pattern on the source/local/control Ansible machine (where I run the ansible-playbook command), instead of running it over SSH on server01.project.jenkins. It's true that on the local/source Ansible machine I'm logged in as c123456. The strange part is that the output still names the target machine, while the pattern path comes from the source machine, as seen above.
failed: [server01.project.jenkins]
Any idea what I'm missing here? Thanks.
PS:
- I don't want to set tools_dir: "~{{ build_user }}/tools" or hardcode it, as a user can pass the tools_dir variable on the command line (while running the ansible-playbook command, using -e / --extra-vars "tools_dir=/production/slave/tools").
Researching further, I found that with_fileglob is for a "list of local files to iterate over, described using shell fileglob notation (e.g., /playbooks/files/fooapp/*)". So what should I use to iterate, with a pattern match (fileglob), over files on the target/remote server (server01.project.jenkins in my case)?
With with_fileglob, the glob always runs on the local/source/control machine where you run ansible-playbook/ansible. The Ansible docs on loops don't clarify this (http://docs.ansible.com/ansible/playbooks_loops.html#id4), but I found clarification here: https://github.com/lorin/ansible-quickref
Thus, while resolving the pattern, it picks up the ~ of user c123456.
The console output still shows [server01.project.jenkins] because reading the inventory/hosts file is a separate processing step.
I tried to use with_lines as well as per this post: ansible: Is there something like with_fileglobs for files on remote machine?
But when I tried the following, it still didn't work, i.e. it read the pattern on the local machine instead of the target machine (the with_* lookups run on the controlling machine, not the target):

file: path="{{ item }}" ....
with_items: ls -1 {{ tools_dir }}/vmwaretools-lib-*/lib/insecure_private_key
become_user: "{{ build_user }}"
Finally, to solve the issue, I just went the plain OS command route using shell (again, this might not be a very good solution if the target environment is not a Linux-type OS, but for now I'm good):
- name: Set private key file access
  shell: "chmod 0400 {{ tools_dir }}/vmwaretools-lib-*/lib/insecure_private_key"
  become_user: "{{ build_user }}"
  tags:
    - koba
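On Ansible 2.0+, an alternative that avoids the shell (and its OS portability issues) would be the find module, which, unlike with_fileglob, globs on the target host. A sketch under the same tools_dir layout:

- name: Find the private key files on the remote host
  find:
    paths: "{{ tools_dir }}"
    patterns: insecure_private_key
    recurse: yes
  become_user: "{{ build_user }}"
  register: key_files

# file operates on each remote path that find returned.
- name: Set private key file access
  file:
    path: "{{ item.path }}"
    owner: "{{ build_user }}"
    group: "{{ build_group }}"
    mode: "0600"
  with_items: "{{ key_files.files }}"
  become_user: "{{ build_user }}"
  tags:
    - koba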
I am working on a project using Ansible which requires me to write some data to a file using one playbook and then read the data from the same file using another playbook.
The playbooks will be something like this:
test1.yml
---
- hosts: localhost
  connection: local
  gather_facts: no
  tasks:
    - name: Writing data to test file
      local_action: shell echo "data:" {{ 100 | random(step=10) }} > test.txt

    - include: test2.yml
and I would need to read it using test2.yml:
---
- hosts: localhost
  connection: local
  gather_facts: no
  vars_files:
    - test.txt
  tasks:
    - name: Writing data to result file
      local_action: shell echo "{{ data }}" > result.txt
However, the second playbook is not able to read the latest data written by the first playbook. If I view the data written to test.txt and result.txt, the two differ. Is there a way to achieve consistency between the results of playbook calls?
Are those two playbooks called separately? If they are included inside a master playbook, then this would explain it: all includes in the master playbook are resolved before execution, so Ansible would already have read both playbooks and the vars_files before any of them gets executed. You should be able to solve this by dynamically including the vars file during the play with the include_vars module, as sketched below.
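A minimal sketch of test2.yml with that change, loading test.txt at execution time instead of at parse time:

---
- hosts: localhost
  connection: local
  gather_facts: no
  tasks:
    # Runs when the play executes, i.e. after test1.yml has written the
    # file, rather than when the playbook is parsed.
    - name: Load the freshly written data
      include_vars: test.txt

    - name: Writing data to result file
      local_action: shell echo "{{ data }}" > result.txt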
If I was wrong with my assumption and you're not including the playbooks in a parent playbook: what exactly do you mean by "different"? Is it completely different data, or is it a formatting issue? I'm puzzled how the data could fail to be consistent between calls. There is no magic in writing to and reading from a file; that should theoretically work.