Write variable to a file in Ansible - file

I am pulling JSON via the URI module and want to write the received content out to a file. I am able to get the content and output it to the debugger so I know the content has been received, but I do not know the best practice for writing files.

An important comment from tmoschou:
As of Ansible 2.10, the documentation for ansible.builtin.copy says:
"If you need variable interpolation in copied files, use the ansible.builtin.template module. Using a variable in the content field will result in unpredictable output."
Original answer:
You could use the copy module, with the content parameter:
- copy: content="{{ your_json_feed }}" dest=/path/to/destination/file
The docs here: copy module
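If the JSON comes from a uri call, a minimal sketch of the full round trip could look like the following (the URL, the json_response variable, and the destination path are placeholders, and it assumes the endpoint actually returns JSON):

- name: Fetch JSON from an API (placeholder URL)
  uri:
    url: https://example.com/api/data
    return_content: yes
  register: json_response

- name: Write the parsed response to a file, pretty-printed
  copy:
    content: "{{ json_response.json | to_nice_json }}"
    dest: /path/to/destination/file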

Unless you are writing very small files, you should probably use templates.
Example:
- name: copy upstart script
  template:
    src: myCompany-service.conf.j2
    dest: "/etc/init/myCompany-service.conf"

Based on Ramon's answer I ran into an error. The problem was spaces in the JSON I was trying to write. I fixed it by changing the task in the playbook to look like this:
- copy:
    content: "{{ your_json_feed }}"
    dest: "/path/to/destination/file"
As of now I am not sure why this was needed. My best guess is that it has something to do with how variables are replaced in Ansible and how the resulting file is parsed.

We can directly specify the destination file with the dest option now. In the example below, the output JSON is stored in /tmp/repo_version_file.
- name: Get repository file repo_version model to set ambari_managed_repositories=false
  uri:
    url: 'http://<server IP>:8080/api/v1/stacks/HDP/versions/3.1/repository_versions/1?fields=operating_systems/*'
    method: GET
    force_basic_auth: yes
    user: xxxxx
    password: xxxxx
    headers:
      "X-Requested-By": "ambari"
      "Content-type": "Application/json"
    status_code: 200
    dest: /tmp/repo_version_file

Related

ansible apt-key module with loop

I'm provisioning a system that requires multiple GPG keys to be added. I'm attempting to streamline the process and follow DRY principles.
I have apt packages installing from a vars list like so:
- name: Install packages
  apt: name={{ apt_packages }}
Where my vars.yml looks like this:
apt_packages:
- tilix
- terraform
- ansible
- opera
This works because the apt module accepts comma-separated inputs and parses them accordingly.
So I'm trying to achieve a similar process when using the apt_key module but I can't seem to get it to work. Here are a couple of attempts I've made:
- name Add keys
  apt_key:
    url: url="{{ items }}"
    loop: "{{ gpg_keys }}"
    state: present
and
- name: Add GPG Keys
  apt_key:
    url: url="{{ gpg_keys }}"
    state: present
Both throw different errors.
Is it possible to do something like this using the apt-key module? Obviously I'm trying to avoid having a separate caller for each key I want to add as there will be many keys and I'd like to be able to add additional keys later on by simply appending the list in vars.yml.
You have a few small mistakes in your task.
The right way is this:
- name: Add keys
  apt_key:
    url: "{{ item }}"
    state: present
  loop: "{{ gpg_keys }}"
You already have the key URL, so prepending url= is incorrect.
loop is an argument to the task and not to the apt_key module, so it needs to be indented to the level of apt_key (unlike url, which is an argument to the module).
Sidenotes:
You also need to make sure that gpg_keys contains a list, similar to apt_packages (see the sketch after these notes).
The name parameter of apt accepts a list, as you define correctly in your vars.yml, not a comma-separated string. (You are already doing this right.)
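To illustrate the first sidenote, vars.yml could define the keys as a plain list of URLs next to the package list (the key URLs below are placeholders):

apt_packages:
  - tilix
  - terraform
  - ansible
  - opera

gpg_keys:
  - https://example.com/keys/first.asc
  - https://example.com/keys/second.asc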
Documentation:
apt
apt_key

Ansible - Versionfile check

I want to be able to read a versionfile if it exists and check its contents, then return True if the version changed or the file does not exist, and False if the versionfile exists and the version matches its content.
Basically this:
# setup test data
- set_fact:
    version_expected: "0001"
    version_path: "/path/to/version"
    version_owner: "root"
    version_group: "root"

# this block is used to check for version changes
- name: check version change
  block:
    - name: check version file
      stat:
        path: "{{version_path}}"
      register: version_file
    - set_fact:
        version_remote: "{{ lookup('file', version_path) | default('') }}"
      when: version_file.stat.exists
    - set_fact:
        version_changed: not version_file.stat.exists or version_remote != version_expected

# test writing new version
- name: write file
  copy:
    dest: "{{version_path}}"
    content: "{{version_expected}}"
    owner: "{{version_owner}}"
    group: "{{version_group}}"
  when: version_changed
My problem is: This is somewhat ugly and becoming quite redundant in my roles.
Is there a more elegant way to do this?
Is there maybe a module for this? (though I found none)
Or should I just write a module for this?
Best regards,
2d4r
EDIT:
I mean only the "check version change" block; the surrounding code is for debugging only.
To be more specific, I want to download a server binary, but only if my expected version differs from the content of the versionfile.
I want to write the new version to the file if (and only if) the download was successful, but that is not part of my question.
EDIT2:
Here is what I have come up with by now:
# roles/_helper/tasks/version_check.yml
- name: check if file exists
  stat:
    path: "{{version_path}}"
  register: version_file

- name: get remote version
  slurp:
    src: "{{version_path}}"
  register: version_changed
  when: version_file.stat.exists

# (False if versionfile exists and version is expected; True else)
- name: set return value
  set_fact:
    version_changed: "{{ not version_file.stat.exists or ((version_changed.content | b64decode) is version_compare(version_expected, 'ne')) }}"
used like this:
# /roles/example/tasks/main.yml
- include_role:
    name: _helper
    tasks_from: version_check
  vars:
    version_path: "{{file_version_path}}"
    version_expected: "{{file_version_expected}}"

- name: doing awesome things
  when: version_changed
  block:
    - name: download server
      [...]
    - name: write version
      copy:
        dest: "{{file_version_path}}"
        content: "{{file_version_expected}}"
It kills the redundancy, but it is still not what I want.
Sadly, I cannot register a return value from a role.
Delete everything except the write file task and remove the condition.
Ansible does this for you automatically: the copy module only reports a change (and rewrites the file) when the content actually differs.
- name: write file
  copy:
    dest: "{{version_path}}"
    content: "{{version_expected}}"
    owner: "{{version_owner}}"
    group: "{{version_group}}"
After you changed the question, given the information provided, the only thing I can point to is to use the slurp module instead of the lookup, as lookup plugins run locally on the control machine.
Then compare versions using your own logic or the built-in version_compare filter/test.
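For illustration, a condensed sketch of that suggestion (the path and expected version are placeholders, it assumes the version file exists, and it mirrors the EDIT2 snippet above):

- name: Read the remote version file
  slurp:
    src: /path/to/version
  register: version_raw

- name: Flag whether the deployed version differs from the expected one
  set_fact:
    version_changed: "{{ (version_raw.content | b64decode | trim) is version_compare('0001', 'ne') }}"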

Ansible: how do I avoid code repetitions with file src and with_first_found?

I have several dozen copy actions that do the same dance to find an appropriate file based on the following config file storage hierarchy (can't change it):
{{role_path}}/file.name.{hostname}
{{role_path}}/file.name
/config/current/file.name.{hostname}
/config/current/file.name
/config/legacy/file.name.{hostname}
/config/legacy/file.name
Is there a way to avoid repeating the whole with_first_found clause for every config file as in the following?
- name: Copy /etc/file.name
  copy:
    src: "{{item}}"
    dest: "/etc/file.name"
    owner: root
    group: root
    mode: 0644
  with_first_found:
    - files:
        - files/etc/file.name.{{inventory_hostname}}
        - files/etc/file.name
      paths:
        - "{{ role_path }}"
        - /config/current
        - /config/legacy
Extract the task into a separate file and loop over an include, using loop_control in the outer loop to avoid a conflict with the item variable.
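A minimal sketch of what that could look like (the copy_config.yml file name and the config_files list are assumptions made for this example):

# tasks/copy_config.yml -- expects config_file from the outer loop
- name: Copy /etc/{{ config_file }}
  copy:
    src: "{{ item }}"
    dest: "/etc/{{ config_file }}"
    owner: root
    group: root
    mode: 0644
  with_first_found:
    - files:
        - files/etc/{{ config_file }}.{{ inventory_hostname }}
        - files/etc/{{ config_file }}
      paths:
        - "{{ role_path }}"
        - /config/current
        - /config/legacy

# tasks/main.yml
- include_tasks: copy_config.yml
  loop: "{{ config_files }}"
  loop_control:
    loop_var: config_file

Because of loop_var, the outer loop uses config_file, so it does not collide with the item variable that with_first_found sets inside the included file.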

Parsing application.yml in angularjs

I have an application.yml file in my application:
spring:
  profiles:
    active: default,dev

app:
  properties:
    lucene:
      indexInfoFile: ${spring.jpa.properties.hibernate.search.default.indexBase}/index.properties
      reindex: false
    storage:
      home: ${user.home}/xxx
      basePath: ${app.properties.storage.home}/uploads/
    staticFilesPrefix: /files/
    appUrl: /app/

---
spring:
  profiles: dev
  http:
    multipart:
      max-file-size: 3MB
      max-request-Size: 3MB
Now in my controller I am trying to get the data from the yml file, and the code for that is:
$http.get('/resources/application.yml').then(function (response) {
  console.log('entire data is ', response.data);
  console.log('basePath is ', response.data.basePath);
});
The entire data prints perfectly (the whole yml file is printed), but whenever I try to print a particular property like basePath, max-file-size, etc., I get undefined.
My question is how to get a particular property printed on the console.
I would not recommend accessing the yml file directly in Angular.
The format is difficult to parse (hence your question: $http does not parse YAML, so response.data is just a plain string and properties such as response.data.basePath are undefined), and sooner or later you may not want to expose all your configuration details.
Instead, create a REST controller in Spring mapped to something like /config.
Let Spring inject all the configuration values you need using @Value and return a Map or a simple POJO with exactly the attributes you need.
Spring will convert this to JSON, which can easily be consumed in Angular.

Ansible create directories from a list

I want to create some directories from a list I have in my vars/main.yml.
- app_root:
    network_prod: "/var/www/prod/network/app"
    database_prod: "/var/www/prod/db/app"
My tasks/main.yml so far has this:
- name: Create application directory structure
  file:
    path: "{{ item }}"
    state: directory
    mode: 755
  with_items:
    - app_root
but it doesn't work. I thought this could be achieved using with_dict, so I also tried:
- name: Create application directory structure
  file:
    path: "{{ item.value }}"
    state: directory
    mode: 755
  with_dict:
    - app_root
but I got: fatal: [vagrant.dev] => with_dict expects a dict.
I've read all about looping-over-hashes, but this doesn't seem to work.
The reason I'm using this notation is because I use these variables elsewhere as well and I need to know how to call them.
I personally find it a bit easier to convert yaml to json to make sure I'm understanding it properly. Take your example:
- app_root:
    network_prod: "/var/www/prod/network/app"
    database_prod: "/var/www/prod/db/app"
What you have here is not a flat list of paths, but a list containing a nested dictionary. If you converted this to JSON it would look like this:
[
  {
    "app_root": {
      "network_prod": "/var/www/prod/network/app",
      "database_prod": "/var/www/prod/db/app"
    }
  }
]
In order to loop through this in Ansible you would need to dereference two levels of a dictionary, the first being app_root and the second being the path elements. Unfortunately I don't think Ansible supports looping through nested dictionaries, only through nested loops.
Your best bet is probably to redo the way you're defining your paths so that you're not creating as complex a data structure. If all you're doing in this case is iterating over a list of paths in order to ensure the directories exist then I'd suggest something like this in your vars/main.yml file:
network_prod: "/var/www/prod/network/app"
database_prod: "/var/www/prod/db/app"

app_root:
  - "{{ network_prod }}"
  - "{{ database_prod }}"
Then you can have a task like this:
- file: path={{ item }} state=directory
  with_items: app_root
In vars/main.yml, try removing the dash in front of app_root.
app_root:
  network_prod: "/var/www/prod/network/app"
  database_prod: "/var/www/prod/db/app"
I think that the approach with with_dict was correct, and I believe that the only issue here is the dash (-) in front of the app_root variable. Instead of:
- name: Create application directory structure
  file:
    path: "{{ item.value }}"
    state: directory
    mode: 755
  with_dict:
    - app_root
It should be:
- name: Create application directory structure
  file:
    path: "{{ item.value }}"
    state: directory
    mode: 755
  with_dict: app_root
See the difference in the way the variable app_root is passed to with_dict.
A dash in YAML starts a list, and list elements are not treated as variables but as literals. Think of it as if you were passing the literal string 'app_root' to with_dict (not exactly true, but it helps me to think of it this way), so with_dict fails because it is given a list instead of the expected dictionary. Without the dash, with_dict is populated with the variable app_root instead and parses it without issues.
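On more recent Ansible versions, the same dictionary loop can also be written with loop and the dict2items filter; a minimal sketch, assuming the dash-free app_root definition shown above:

- name: Create application directory structure
  file:
    path: "{{ item.value }}"
    state: directory
    mode: '0755'
  loop: "{{ app_root | dict2items }}"

Each item then exposes item.key (e.g. network_prod) and item.value (the path).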
