Variable as array/object key in Ansible - arrays

I am writing a playbook that creates a network with a name I assign. Later on in the playbook I need to access the IP that is assigned by this network task, so I get it from the hostvars. So, for example, if I was calling the network 'my_website', the value I would be targeting in hostvars would be
server: "{{hostvars.localhost.rax_nfs.results[0].success[0].rax_addresses.my_website[0].addr}}"
This is fine, but I want to name the network based on the contents of a variable passed in by a var file to make it reusable across multiple setups, and then still be able to get that IP back, so
network_label: "{{ my_website }}"
server: "{{hostvars.localhost.rax_nfs.results[0].success[0].rax_addresses.network_label[0].addr}}"
Obviously this doesn't work, as it just assigns a string. How do I use network_label as the key inside another variable? Like in PHP, something like
$array[$variable], or $object->$variable
Is this possible?

Not sure what the playbook looks like, but does accessing the IP based on the gathered facts not work in your context? Example: {{ ansible_eth0.ipv4.address }}
Alternatively, on the task that creates the network, you could register the result as a variable for later access under that variable name.
You can also access elements of an array in a variable: {{ var_name[0] }}
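To use the variable itself as the key, Jinja2 also supports bracket notation, much like PHP's $array[$variable]. A minimal sketch, assuming the same registered rax_nfs structure as in the question:
network_label: my_website
server: "{{ hostvars.localhost.rax_nfs.results[0].success[0].rax_addresses[network_label][0].addr }}"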

Related

How to iterate over all Postman environment variables?

In one of my pre-request scripts I need my URL to have all environment variables replaced. However, the environment variables are only injected after the pre-request script runs, so I want to iterate over them and replace them manually. Is that possible?
I can get pm.environment.values, but this object is not an array. I can't get any values from it with pm.environment.values[0] or use a for (const element of pm.environment.values) loop on it.
If I could get all environment keys, I could accomplish my aim with pm.environment.get, but I have not found a way to do that.
You can use the .toObject() function. It returns all variables with their values, in the active environment, in a single object:
pm.environment.toObject()
https://learning.postman.com/docs/writing-scripts/script-references/postman-sandbox-api-reference/#using-environment-variables-in-scripts
This will also work for other variable scopes such as Collection, Iteration and Global.
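If you need the keys themselves, you can iterate over the object returned by toObject(). A small sketch for a pre-request script (the URL template here is a hypothetical example, not from the question):
const env = pm.environment.toObject();
let url = "https://{{host}}/api/{{version}}/items"; // hypothetical template with placeholders
for (const [key, value] of Object.entries(env)) {
    url = url.split(`{{${key}}}`).join(value); // replace every occurrence of {{key}}
}
console.log(url);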

When converting parameters to variables, connections are broken at dev time

How do I get my connections to show up in the designer?
In my logicapp.json file I am setting the connection names to be functions of parameters like so:
"variables": {
"servicebus_1_Connection_Name": "[concat('servicebus-',parameters('logicAppName'))]",
"azureblob_1_Connection_Name": "[concat('blob-',parameters('logicAppName'))]"
},
However, the designer doesn't like this.
When deploying, the connections show up with no problem.
Try using parameters instead of variables for this. Variables need to be instantiated, while parameters only need to be declared. Bear in mind that you will need to declare the parameter both in the logic app template and in the ARM deployment template in the logicapp.json file, and also in the logicapp.parameters.json file.
Click anywhere on the logic app, then press F4 to get the properties of the logic app:
Change the resource group to the appropriate one that has your connections!
After completing the two steps above, your connections will display correctly.
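A hedged sketch of the parameter approach (names and values are illustrative, not taken from the original templates): in logicapp.json, declare the connection name as a parameter instead of computing it in variables,
"parameters": {
  "servicebus_1_Connection_Name": {
    "type": "string",
    "defaultValue": "servicebus-mylogicapp"
  }
}
and supply the value from logicapp.parameters.json:
"parameters": {
  "servicebus_1_Connection_Name": {
    "value": "servicebus-mylogicapp"
  }
}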

Using with_items inside vars_files in an Ansible playbook

I'm currently making the transition from Puppet to Ansible and, so far, so good. But I want to automate as much as possible.
I'm trying to use a with_items loop inside vars_files to load variable files based on a given list of items. Ansible complains about the syntax and I can't seem to find an example of a similar solution, only examples that use with_items inside tasks and roles.
For example:
vars_files:
- ["vars/{{ item }}-{{ ansible_fqdn }}.yml", "vars/{{ item }}-{{ system_environment }}.yml", "vars/{{ item }}.yml"]
with_items:
- php
- nginx
The goal here is to loop over the second line for each item in with_items, using the array to fall back to the next file if the given file can't be found (which works on its own).
Not sure if this is at all possible, but I wanted to ask before taking another direction.
with_items, and loops in general, are a feature of tasks. vars_files, though, is not a task, so it won't work the way you tried it. The short answer is: it is not possible.
I don't know of a clean way to solve your exact problem. A custom vars plugin might be an option, but vars plugins work on a global level, while your vars seem to be used in a role.
A custom lookup plugin might be a solution if solving this on the task level is an option for you. The lookup plugin takes your input, checks for the presence of the files, and returns an array of the files which need to be included. This can then be used with the include_vars module.
- include_vars: "{{ item }}"
with_my_custom_plugin:
- php
- nginx
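A minimal sketch of what such a plugin could look like (assuming Ansible 2.x's lookup API; the name my_custom_plugin mirrors the hypothetical usage above and is not a shipped plugin). It would live in lookup_plugins/my_custom_plugin.py next to the playbook:
# lookup_plugins/my_custom_plugin.py -- hedged sketch, untested
import os
from ansible.plugins.lookup import LookupBase

class LookupModule(LookupBase):
    def run(self, terms, variables=None, **kwargs):
        variables = variables or {}
        found = []
        for term in terms:
            # same fallback order as in the question
            candidates = [
                'vars/%s-%s.yml' % (term, variables.get('ansible_fqdn', '')),
                'vars/%s-%s.yml' % (term, variables.get('system_environment', '')),
                'vars/%s.yml' % term,
            ]
            for path in candidates:
                if os.path.exists(path):
                    found.append(path)
                    break
        return found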
An ugly solution would be to combine the with_items loop with a with_first_found loop. Since you cannot directly nest loops, though, you need to work with an include.
- include: include_vars.yml
with_items:
- php
- nginx
And inside include_vars.yml you then can use with_first_found with the include_vars module.
- include_vars: "{{ item }}"
with_first_found:
- vars/{{ item }}-{{ ansible_fqdn }}.yml
- vars/{{ item }}-{{ system_environment }}.yml
- vars/{{ item }}.yml
Putting this in a separate answer to expand on the group and host variables solution I eventually came up with (cc #udondan).
Basically, I group all my hosts in my inventory file under several sub- and parent groups. Then I create group-vars files wherever applicable, so it follows a certain order of precedence (first is highest and overrides all others; last applies to all hosts and can be overridden down the chain):
task vars > playbook vars > host_vars > web/database-local > local > web/database > all
That way I can define variables for all hosts to use (all), just web/database (mostly production values), all local servers (the local group), all local web/database servers, et cetera, or per host (the standard host_vars). Of course, playbook and task vars override these further. All of this follows the Ansible guidelines.
An example of a local inventory (replace default with your hostname or IP, add as many as you like per group, x-local can be omitted if this would be a production inventory):
[web-local]
default
[database-local]
default
[local:children]
web-local
database-local
[web:children]
web-local
[database:children]
database-local
Then there is my group_vars folder, with a directory for each inventory group and the variables split into files to keep things structured (you could, for instance, just have one database-local.yaml file for the database-local group instead of folders and split YAML files):
group_vars/
all/
always_applied_variables.yaml
swap.yaml
web/
database/
database_only_variables.yaml
database-production/
production_database_variables.yaml
production/
random_production_only_variables.yaml
local/
users.yaml
web-local/
database-local/
local_database_variables.yaml
host_vars/
default/
php.yaml
mysql.yaml
other_specific_host_variables.yaml
Hope this is somewhat clear. I'd be happy to answer any questions.
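As a hedged illustration of how the precedence chain plays out (the variable name and values are made up), the same variable can be defined at several levels and the most specific one wins:
# group_vars/all/always_applied_variables.yaml
php_memory_limit: 128M
# group_vars/production/random_production_only_variables.yaml -- overrides "all" for production hosts
php_memory_limit: 512M
# host_vars/default/php.yaml -- overrides both for this single host
php_memory_limit: 1024M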

In Ansible v2, which variable stores the ssh username?

In my playbooks, I SSH in as a non-root user, then use become to become root.
What is the Ansible variable name that stores the user that originally SSH'd into the box?
In Ansible < 2.0, I could use {{ ansible_ssh_user }} to access the username SSH'ing into the box.
Just tried with Ansible 2.0.2 and that returns null. I tried ansible_user as suggested by the FAQ, but that also returns null. I also tried ansible_user_id, but that returns the result of become, not the original user.
You can access this via ansible_env.SUDO_USER.
I tried a number of other variables, and almost all of them changed their values as soon as I used become on the remote node.
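A minimal sketch to verify this in a play that connects as a non-root user and escalates with become (shown for illustration only):
- debug:   # ansible_env comes from gathered facts, so gather_facts must not be disabled
    var: ansible_env.SUDO_USER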
{{ ansible_user }} does actually work fine for me with Ansible 2.5:
with_items:
- root
- "{{ ansible_user }}"

Passing variable in XML topology in ODI

I am trying to parameterize the topology connection in ODI to load multiple XML files of the same structure, one by one, using a variable. But I am getting an unknown token error.
JDBC URL: jdbc:snps:xml?f=U:/SOTI_CLOUD/#B.xml
(#B is an ODI variable holding the file name)
Try using #GLOBAL.B if it is a Global variable or #<PROJECT_NAME>.B if it is a project variable.
Also check what the history setting is for that variable. If it is set to "No History", make sure you are declaring/refreshing the variable within the same session in which you want to access that XML file.
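For instance, if B were a global variable, the URL from the question would follow this pattern (a hedged example, not a tested connection string):
jdbc:snps:xml?f=U:/SOTI_CLOUD/#GLOBAL.B.xml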
Just a hunch…
For the variable to be picked up in the JDBC URL you need to launch a separate scenario. Your problem might be different, but make sure you have an outer loop that declares/refreshes the variable, and whenever you refresh/increment it, launch a separate scenario (not just an interface) that loads the data using the constructed URL.
