I have made a role for installing php5-fpm (alongside other roles: nginx, wordpress, mysql). I want to install a set of php5 packages, but I am having trouble looping over an array of packages. Any tips on how to solve this?
The php5-fpm role includes:
roles/php5-fpm/defaults/main.yml
roles/php5-fpm/tasks/install.yml
defaults/main.yml:
---
# defaults file for php5-fpm
# filename: roles/php5-fpm/defaults/main.yml
#
php5:
  packages:
    - php5-fpm
    - php5-common
    - php5-curl
    - php5-mysql
    - php5-cli
    - php5-gd
    - php5-mcrypt
    - php5-suhosin
    - php5-memcache
  service:
    name: php5-fpm
tasks/install.yml:
# filename: roles/php5-fpm/tasks/install.yml
#
- name: install php5-fpm and family
  apt:
    name: "{{ item }}"
  with_items: php5.packages
  notify:
    - restart php5-fpm service
I want "with_items" in install.yml to look into defaults/main.yml and pick up that array of packages.
Expand the variable.
Wrong:
with_items: php5.packages
Correct:
loop: "{{ php5.packages }}"
Quoting from Loops
We added loop in Ansible 2.5. It is not yet a full replacement for with_, but we recommend it for most use cases.
We have not deprecated the use of with_ - that syntax will still be valid for the foreseeable future.
We are looking to improve loop syntax - watch this page and the changelog for updates.
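Applied to the task from the question, install.yml then becomes (only the loop line changes):
- name: install php5-fpm and family
  apt:
    name: "{{ item }}"
  loop: "{{ php5.packages }}"
  notify:
    - restart php5-fpm service
Alternatively, since the apt module's name parameter accepts a list, you could pass name: "{{ php5.packages }}" directly and drop the loop entirely; apt then installs all the packages in a single transaction.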
Related
I'm provisioning a system that requires multiple GPG keys to be added. I'm attempting to streamline the process and follow DRY principles.
I have apt packages installing from a vars list like so:
- name: Install packages
  apt: name={{ apt_packages }}
Where my vars.yml looks like this:
apt_packages:
  - tilix
  - terraform
  - ansible
  - opera
This works because the apt module accepts comma-separated inputs and parses them accordingly.
So I'm trying to achieve a similar process when using the apt_key module but I can't seem to get it to work. Here are a couple of attempts I've made:
- name: Add keys
  apt_key:
    url: url="{{ items }}"
    loop: "{{ gpg_keys }}"
    state: present
and
- name: Add GPG Keys
  apt_key:
    url: url="{{ gpg_keys }}"
    state: present
Both throw different errors.
Is it possible to do something like this using the apt_key module? Obviously I'm trying to avoid having a separate task for each key I want to add, as there will be many keys and I'd like to be able to add additional keys later on by simply appending to the list in vars.yml.
You have a few small mistakes in your task.
The right way is this:
- name: Add keys
  apt_key:
    url: "{{ item }}"
    state: present
  loop: "{{ gpg_keys }}"
- You already have the key URL, so prepending url= is incorrect.
- loop is an argument to the task, not to the apt_key module, so it needs to be indented to the level of apt_key (unlike url, which is an argument to the module).
Sidenotes:
You also need to make sure that gpg_keys contains a list, similar to apt_packages (see the example after these notes).
The name parameter of apt accepts a list, as you correctly define in your vars.yml, not a comma-separated string. (You are already doing this right.)
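For example, gpg_keys in vars.yml could then look something like this (the URLs below are placeholders, not real keys):
gpg_keys:
  - https://example.com/keys/first.asc
  - https://example.com/keys/second.asc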
Documentation:
apt
apt_key
I'm trying to see if there's a way to apply a kustomize patchTransformer to a specific container in a pod other than using its array index. For example, if I have 3 containers in a pod, (0, 1, 2) and I want to patch container "1" I would normally do something like this:
patch: |-
  - op: add
    path: /spec/containers/1/command
    value: ["sh", "-c", "tail -f /dev/null"]
That is heavily dependent on the container order remaining static. If container "1" is removed for whatever reason, the array is reshuffled and container "2" suddenly becomes container "1", making my patch no longer applicable.
Is there a way to patch by name, or target a label/annotation, or some other mechanism?
path: /spec/containers/${NAME_OF_CONTAINER}/command
Any insight is greatly appreciated.
For future readers: you may have seen JSONPath syntax like this floating around the internet, and hoped that you could select a list item and patch it using Kustomize.
/spec/containers[name=my-app]/command
As @Rico mentioned in his answer: this is a limitation of JSON6902 - it only accepts paths using JSONPointer syntax, defined by JSON6901.
So, no, you cannot currently address a list item using [key=value] syntax when using kustomize's patchesJson6902.
However, a solution to the problem that the original question highlights around potential reordering of list items does exist without moving to Strategic Merge Patch (which can depend on CRD authors correctly annotating how list-item merges should be applied).
Simply add another JSON6902 operation to your patches to test that the item remains at the index you specified.
# First, test that the item is still at the list index you expect
- op: test
  path: /spec/containers/0/name
  value: my-app
# Now that you know your item is still at index 0, it's safe to patch its command
- op: replace
  path: /spec/containers/0/command
  value: ["sh", "-c", "tail -f /dev/null"]
The test operation will fail your patch if the value at the specified path does not match what is provided. This way, you can be sure that your other patch operation's dependency on the item's index is still valid!
I use this trick especially when dealing with custom resources, since I:
A) Don't have to give kustomize a whole new openAPI spec, and
B) Don't have to depend on the CRD authors having added the correct extension annotation (like: "x-kubernetes-patch-merge-key": "name") to make sure my strategic merge patches on list items work the way I need them to.
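For reference, wiring a patch like the one above into a kustomization.yaml might look roughly like this (a sketch; the Deployment name my-app and the patch file name patch.yaml are assumptions):
resources:
  - deployment.yaml
patchesJson6902:
  - target:
      group: apps
      version: v1
      kind: Deployment
      name: my-app
    path: patch.yaml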
This is more of a JSON6902 patch limitation, together with the fact that containers are defined in a K8s Pod as an array and not a hash, where something like this would work:
path: /spec/containers/${NAME_OF_CONTAINER}/command
You could just try a StrategicMergePatch, which is essentially what kubectl apply does.
cat <<EOF > deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  selector:
    matchLabels:
      run: my-app
  replicas: 2
  template:
    metadata:
      labels:
        run: my-app
    spec:
      containers:
      - name: my-container
        image: myimage
        ports:
        - containerPort: 80
EOF
cat <<EOF > set_command.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  template:
    spec:
      containers:
      - name: my-container
        command: ["sh", "-c", "tail -f /dev/null"]
EOF
cat <<EOF > ./kustomization.yaml
resources:
- deployment.yaml
patchesStrategicMerge:
- set_command.yaml
EOF
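Running kustomize build . in that directory (or kubectl apply -k . on recent kubectl versions) should then render the Deployment with the patched command:
kustomize build .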
✌️
I'm importing an Ansible role in a play and running its 'install' task. The role is meant to create VMs on a hypervisor like VirtualBox, and it works fine.
However, I want to use it to create several VMs at the same time, and I must provide two variables for this purpose:
- vm_ip: the IP of the VM to be created
- vm_name: the name of the VM to be created
I have already tried almost everything with loops, with_items and other things. For instance, this code doesn't work :
- name: Create VMs
  hosts: localhost
  tasks:
    - import_role:
        name: vm_creation
        tasks_from: install
      vars:
        vm_ip: "{{ item.ips }}"
        vm_name: "{{ item.names }}"
        loop:
          - { ips: '192.168.20.4', names: 'test4' }
          - { ips: '192.168.20.5', names: 'test5' }
It is supposed to create both the .20.4 and .20.5 VMs, but the play crashes, telling me: "The task includes an option with an undefined variable. The error was: 'item' is undefined"
You appear to have mis-indented the loop directive. In doing so, you have defined a variable named loop rather than actually creating a loop (this is why item is undefined).
You will also need to use include_role rather than import_role, because import_role is processed statically when the playbook is parsed, before any loop items exist, whereas include_role is processed at runtime. You can read about the difference between include_role and import_role in the documentation.
- name: Create VMs
  hosts: localhost
  tasks:
    - include_role:
        name: vm_creation
        tasks_from: install
      vars:
        vm_ip: "{{ item.ips }}"
        vm_name: "{{ item.names }}"
      loop:
        - { ips: '192.168.20.4', names: 'test4' }
        - { ips: '192.168.20.5', names: 'test5' }
I want to be able to read a version file if it exists and check its contents, then return True if the version changed or the file does not exist, and False if the version file exists and the version matches its content.
Basically this:
# setup test data
- set_fact:
    version_expected: "0001"
    version_path: "/path/to/version"
    version_owner: "root"
    version_group: "root"

# this block is used to check for version changes
- name: check version change
  block:
    - name: check version file
      stat:
        path: "{{version_path}}"
      register: version_file
    - set_fact:
        version_remote: "{{ lookup('file', version_path) | default('') }}"
      when: version_file.stat.exists
    - set_fact:
        version_changed: not version_file.stat.exists or version_remote != version_expected

# test writing new version
- name: write file
  copy:
    dest: "{{version_path}}"
    content: "{{version_expected}}"
    owner: "{{version_owner}}"
    group: "{{version_group}}"
  when: version_changed
My problem is: This is somewhat ugly and becoming quite redundant in my roles.
Is there a more elegant way to do this?
Is there maybe a module for this? (though I found none)
Or should I just write a module for this?
Best regards,
2d4r
EDIT:
I only mean the "check version change" block; the surrounding code is for debugging only.
To be more specific, I want to download a server binary, but only if my expected version differs from the content of the version file.
I want to write the new version to the file if (and only if) the download was successful, but that is not part of my question.
EDIT2:
Here is what I have by now:
# roles/_helper/tasks/version_check.yml
- name: check if file exists
  stat:
    path: "{{version_path}}"
  register: version_file

- name: get remote version
  slurp:
    src: "{{version_path}}"
  register: version_changed
  when: version_file.stat.exists

# (False if versionfile exists and version is expected; True else)
- name: set return value
  set_fact:
    version_changed: "{{ not version_file.stat.exists or ((version_changed.content | b64decode) is version_compare(version_expected, 'ne')) }}"
used like this:
# /roles/example/tasks/main.yml
- include_role:
    name: _helper
    tasks_from: version_check
  vars:
    version_path: "{{file_version_path}}"
    version_expected: "{{file_version_expected}}"

- name: doing awesome things
  when: version_changed
  block:
    - name: download server
      [...]
    - name: write version
      copy:
        dest: "{{file_version_path}}"
        content: "{{file_version_expected}}"
It kills the redundancy, but it is still not what I want.
Sadly, I cannot register a return value from a role.
Delete everything except the write file task and remove the condition.
Ansible does this automatically for you: the copy module compares a checksum of the existing file with the desired content and only rewrites the file (and reports changed) when they differ.
- name: write file
  copy:
    dest: "{{version_path}}"
    content: "{{version_expected}}"
    owner: "{{version_owner}}"
    group: "{{version_group}}"
After you changed the question: given the information provided, the only thing I can point to is to use the slurp module instead of lookup, as lookup plugins work locally on the control machine.
Compare versions using your own logic or the built-in version_compare filter/test.
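For instance, a minimal sketch of the version_compare test (the version strings here are purely illustrative):
- name: show whether an update is needed
  debug:
    msg: "update needed"
  when: "'1.9.0' is version_compare('1.10.0', 'lt')"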
I was trying to create a Drupal VM instance running Drupal 7 by changing "core" and "version" as suggested in the README file and then running vagrant up, but after doing so it keeps installing Drupal 8 (the default).
Following are the drupal.make.yml file and the config.yml file that I edited before building the machine.
drupal.make.yml
---
api: 2

# Basic Drush Make file for Drupal. Be sure to update the drupal_major_version
# variable inside config.yml if you change the major version in this file.

# Drupal core (major version, e.g. 6.x, 7.x, 8.x).
core: "7.x"

projects:
  # Core.
  drupal:
    type: "core"
    download:
      # Drupal core branch (e.g. "6.x", "7.x", "8.0.x").
      branch: "7.0.x"
      working-copy: true

  # Other modules.
  devel: "1.x-dev"
config.yml
---
# `vagrant_box` can also be set to geerlingguy/centos6, geerlingguy/centos7,
# geerlingguy/ubuntu1204, parallels/ubuntu-14.04, etc.
vagrant_box: geerlingguy/ubuntu1404
vagrant_user: vagrant
vagrant_synced_folder_default_type: nfs
# If you need to run multiple instances of Drupal VM, set a unique hostname,
# machine name, and IP address for each instance.
vagrant_hostname: drupalvm.dev
vagrant_machine_name: drupalvm
vagrant_ip: 192.168.88.88
# Allow Drupal VM to be accessed via a public network interface on your host.
# Vagrant boxes are insecure by default, so be careful. You've been warned!
# See: https://docs.vagrantup.com/v2/networking/public_network.html
vagrant_public_ip: ""
# A list of synced folders, with the keys 'local_path', 'destination', and
# a 'type' of [nfs|rsync|smb] (leave empty for slow native shares). See
# http://docs.drupalvm.com/en/latest/extras/syncing-folders/ for more info.
vagrant_synced_folders:
  # The first synced folder will be used for the default Drupal installation, if
  # build_makefile: is 'true'.
  - local_path: ~/Documents/projectohri/drupalvm
    destination: /var/www/drupalvm
    type: nfs
    create: true
# Memory and CPU to use for this VM.
vagrant_memory: 1024
vagrant_cpus: 2
# The web server software to use. Can be either 'apache' or 'nginx'.
drupalvm_webserver: apache
# Set this to false if you are using a different site deployment strategy and
# would like to configure 'vagrant_synced_folders' and 'apache_vhosts' manually.
build_makefile: true
drush_makefile_path: /vagrant/drupal.make.yml
# Set this to false if you don't need to install drupal (using the drupal_*
# settings below), but instead copy down a database (e.g. using drush sql-sync).
install_site: true
# Settings for building a Drupal site from a makefile (if 'build_makefile:'
# is 'true').
drupal_major_version: 7
drupal_core_path: "/var/www/drupalvm/drupal"
drupal_domain: "drupalvm.dev"
drupal_site_name: "Drupal"
drupal_install_profile: standard
drupal_enable_modules: [ 'devel' ]
drupal_account_name: admin
drupal_account_pass: admin
drupal_mysql_user: drupal
drupal_mysql_password: drupal
drupal_mysql_database: drupal
# Additional arguments or options to pass to `drush site-install`.
drupal_site_install_extra_args: []
# Cron jobs are added to the root user's crontab. Keys include name (required),
# minute, hour, day, weekday, month, job (required), and state.
drupalvm_cron_jobs: []
# - {
#     name: "Drupal Cron",
#     minute: "*/30",
#     job: "drush -r {{ drupal_core_path }} core-cron"
#   }
# Drupal VM automatically creates a drush alias file in your ~/.drush folder if
# this variable is 'true'.
configure_local_drush_aliases: true
# Apache VirtualHosts. Add one for each site you are running inside the VM. For
# multisite deployments, you can point multiple servernames at one documentroot.
# View the geerlingguy.apache Ansible Role README for more options.
apache_vhosts:
  - servername: "{{ drupal_domain }}"
    documentroot: "{{ drupal_core_path }}"
    extra_parameters: |
      ProxyPassMatch ^/(.*\.php(/.*)?)$ "fcgi://127.0.0.1:9000{{ drupal_core_path }}"
  - servername: "adminer.drupalvm.dev"
    documentroot: "/opt/adminer"
  - servername: "xhprof.drupalvm.dev"
    documentroot: "/usr/share/php/xhprof_html"
  - servername: "pimpmylog.drupalvm.dev"
    documentroot: "/usr/share/php/pimpmylog"
apache_remove_default_vhost: true
apache_mods_enabled:
  - expires.load
  - ssl.load
  - rewrite.load
# Nginx hosts. Each site will get a server entry using the configuration defined
# here. Set the 'is_php' property for document roots that contain PHP apps like
# Drupal.
nginx_hosts:
  - server_name: "{{ drupal_domain }}"
    root: "{{ drupal_core_path }}"
    is_php: true
  - server_name: "adminer.drupalvm.dev"
    root: "/opt/adminer"
    is_php: true
  - server_name: "xhprof.drupalvm.dev"
    root: "/usr/share/php/xhprof_html"
    is_php: true
  - server_name: "pimpmylog.drupalvm.dev"
    root: "/usr/share/php/pimpmylog"
    is_php: true
nginx_remove_default_vhost: true
# MySQL Databases and users. If build_makefile: is true, first database will
# be used for the makefile-built site.
mysql_databases:
  - name: "{{ drupal_mysql_database }}"
    encoding: utf8
    collation: utf8_general_ci

mysql_users:
  - name: "{{ drupal_mysql_user }}"
    host: "%"
    password: "{{ drupal_mysql_password }}"
    priv: "{{ drupal_mysql_database }}.*:ALL"
# Comment out any extra utilities you don't want to install. If you don't want
# to install *any* extras, set this value to an empty set, e.g. `[]`.
installed_extras:
  - adminer
  - drupalconsole
  - mailhog
  - memcached
  # - nodejs
  - pimpmylog
  # - redis
  # - ruby
  # - selenium
  # - solr
  - varnish
  - xdebug
  - xhprof
# Add any extra apt or yum packages you would like installed.
extra_packages:
  - unzip
# `nodejs` must be in installed_extras for this to work.
nodejs_version: "0.12"
nodejs_npm_global_packages: []
# `ruby` must be in installed_extras for this to work.
ruby_install_gems_user: "{{ vagrant_user }}"
ruby_install_gems: []
# You can configure almost anything else on the server in the rest of this file.
extra_security_enabled: false
drush_version: master
drush_keep_updated: true
drush_composer_cli_options: "--prefer-dist --no-interaction"
firewall_allowed_tcp_ports:
  - "22"
  - "25"
  - "80"
  - "81"
  - "443"
  - "4444"
  - "8025"
  - "8080"
  - "8443"
  - "8983"
firewall_log_dropped_packets: false
# PHP Configuration. Currently-supported versions: 5.5, 5.6, 7.0.
php_version: "5.6"
php_memory_limit: "192M"
php_display_errors: "On"
php_display_startup_errors: "On"
php_enable_php_fpm: true
php_realpath_cache_size: "1024K"
php_sendmail_path: "/usr/sbin/ssmtp -t"
php_opcache_enabled_in_ini: true
php_opcache_memory_consumption: "192"
php_opcache_max_accelerated_files: 4096
php_max_input_vars: "4000"
composer_path: /usr/bin/composer
composer_home_path: '/home/vagrant/.composer'
# composer_global_packages:
#   - { name: phpunit/phpunit, release: '#stable' }
# Run specified scripts after VM is provisioned. Path is relative to the
# `provisioning/playbook.yml` file.
post_provision_scripts: []
# - "../examples/scripts/configure-solr.sh"
# MySQL Configuration.
mysql_root_password: root
mysql_slow_query_log_enabled: true
mysql_slow_query_time: 2
mysql_wait_timeout: 300
adminer_install_filename: index.php
# Varnish Configuration.
varnish_listen_port: "81"
varnish_default_vcl_template_path: templates/drupalvm.vcl.j2
varnish_default_backend_host: "127.0.0.1"
varnish_default_backend_port: "80"
# Pimp my Log settings.
pimpmylog_install_dir: /usr/share/php/pimpmylog
pimpmylog_grant_all_privs: true
# XDebug configuration. XDebug is disabled by default for better performance.
php_xdebug_default_enable: 0
php_xdebug_coverage_enable: 0
php_xdebug_cli_enable: 1
php_xdebug_remote_enable: 1
php_xdebug_remote_connect_back: 1
# Use PHPSTORM for PHPStorm, sublime.xdebug for Sublime Text.
php_xdebug_idekey: PHPSTORM
php_xdebug_max_nesting_level: 256
# Solr Configuration (if enabled above).
solr_version: "4.10.4"
solr_xms: "64M"
solr_xmx: "128M"
# Selenium configuration.
selenium_version: 2.46.0
# Other configuration.
known_hosts_path: ~/.ssh/known_hosts
7.0.x is not a valid Drupal core branch. Re-read the docs in the comment above that line in drupal.make.yml and change it to "7.x".
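That is, the download section of drupal.make.yml should read:
download:
  # Drupal core branch (e.g. "6.x", "7.x", "8.0.x").
  branch: "7.x"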
Also, be sure to run vagrant destroy to remove all traces of the old instance. It could be that it isn't downloading a new copy and is just using the Drupal 8 it already downloaded.