Error - io:job could not be initialized: missing field accessing 'heartbeat.monitors.0.hosts.0' (source:'/etc/heartbeat.yml') - elk

The Heartbeat configuration file is below:
# Directory + glob pattern to search for configuration files
path: ${path.config}/monitors.d/*.yml
# If enabled, heartbeat will periodically check the config.monitors path for changes
reload.enabled: true
# How often to check for changes
reload.period: 10s
heartbeat.monitors:
- type: http
  id: my_app
  name: "Check my_app liveness endpoint"
  labels.application.name: my_app
  schedule: '@every 1m'
  service.name: 'my_app' # must be same as in apm
  hosts: ["https://${host}/path/to/destination1", "https://${host}/path/to/destination2"]
  check.request.method: HEAD
  check.response.status: [200]
  fields_under_root: true
  fields:
    service.environment: "${my_env}"
    labels.application.name: my_app
#### Enabling logging to heartbeat ####
logging.level: debug
logging.to_files: true
logging.files.path: /usr/share/heartbeat/logs
logging.files.name: heartbeat-log
logging.files.keepfiles: 30
logging.files.permissions: 0640
output.kafka:
  hosts: ["${KAFKA_URL}"]
  ssl.verification_mode: "none"
  topic: "heartbeat"
  partition.round_robin:
    reachable_only: true
  client_id: ${MY_APPLICATION}-heartbeat-${MY_ENVIRONMENT}
  required_acks: 1
monitoring:
  enabled: false
This configuration is deployed as a ConfigMap inside the Heartbeat pod, but after the deployment we are getting the error shown above in the Kibana Uptime monitor.
I also tried hardcoding the variables referenced in the YAML above; the result is the same.
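For reference, "hardcoding the variables" means replacing the ${...} placeholders with literal values, for example (the hostname and environment shown here are placeholders, not values from the original post):
hosts: ["https://my-app.example.com/path/to/destination1", "https://my-app.example.com/path/to/destination2"]
fields:
  service.environment: "production"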
Can anybody help me?

Related

Rundeck Yaml file formatting issue and issue with logging into Rundeck with AD user

I'm trying to allow domain users to log into my Rundeck instance by following the guide https://vinusumi.wordpress.com/2017/12/28/setup-active-directory-authentication-for-rundeck/. However, I'm running into two issues.
For some reason, I'm unable to log into Rundeck with a user that's added to the "rundeck_admins" group. I confirmed that the credentials are correct, and I believe that the info I added to "jaas-activedirectory.conf" is syntactically correct and accurate based on my AD settings. According to "/var/log/rundeck/service.log", it says the following:
2018-12-13 20:13:29.689 DEBUG --- [tp1465511423-25] ailsUsernamePasswordAuthenticationFilter : Updated SecurityContextHolder to contain null Authentication
2018-12-13 20:13:29.689 DEBUG --- [tp1465511423-25] ailsUsernamePasswordAuthenticationFilter : Delegating to
authentication failure handler grails.plugin.springsecurity.web.authentication.AjaxAwareAuthenticationFailureHandler@51aaa9d4
I'm having trouble figuring out the proper syntax for the YAML file used for my "rundeck_users" group:
description: "Ops Engineers can launch jobs but not edit them"
context:
project: *
for:
resource:
- equals:
kind: 'node'
allow: [read,update,refresh]
- equals:
kind: 'job'
allow: [read,run,kill]
- equals:
kind: 'adhoc'
allow: [read,run,kill]
- equals:
kind: 'event'
allow: [read,create]
job:
- match:
name: '.*'
allow: [read,run,kill]
adhoc:
- match:
name: '.*'
allow: [read,run,kill]
node:
- match:
nodename: '.*'
allow: [read,run,refresh]
by:
group:
- rundeck_users
---
context:
application: rundeck
description: "Ops Engineers can launch jobs but not edit them"
for:
project:
- match:
name: '*'
allow: [read]
system:
- match:
name: '.*'
allow: [read]
by:
group:
- rundeck_users
1.- Make sure the authentication is being read. When Rundeck is starting, you should see something like:
<..>
2018-12-14 01:52:57.186 INFO --- [ main] rundeckapp.BootStrap : RSS feeds disabled
2018-12-14 01:52:57.187 INFO --- [ main] rundeckapp.BootStrap : Using jaas authentication <<<<<<<<<
<..>
2.- Verify the YAML content is valid, for example using http://www.yamllint.com/ (see the sketch after this list).
3.- Use an existing/working aclpolicy with your group for testing purposes, and check whether the ACL policy is causing the issue.
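For instance, a linter would flag the unquoted asterisk in project: * above, since a bare * starts a YAML alias. Quoting the value is one likely fix (Rundeck matches the project name as a regex, so '.*' is the usual catch-all); a minimal sketch of the corrected context block:
context:
  project: '.*'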
Hope it helps
AD authentication - these steps apply to Rundeck version 3 and up on CentOS 7.
For older Rundeck versions, see the procedure I posted here:
https://groups.google.com/forum/#!msg/rundeck-discuss/P2qQHNpDct4/aP0ot7V2BAAJ
Create an AD config file (e.g. jaas-AD.conf) with the following content:
AD {
    com.dtolabs.rundeck.jetty.jaas.JettyCachingLdapLoginModule required
    debug="true"
    contextFactory="com.sun.jndi.ldap.LdapCtxFactory"
    providerUrl="ldap://<ip>:389 ldap://<ip>:389"
    bindDn="CN=authUser,CN=Users,DC=your,DC=domain,DC=com"
    bindPassword="<authUserPassword>"
    authenticationMethod="simple"
    forceBindingLogin="true"
    userBaseDn="CN=Users,DC=your,DC=domain,DC=com"
    userRdnAttribute="sAMAccountName"
    userIdAttribute="sAMAccountName"
    userPasswordAttribute="unicodePwd"
    userObjectClass="person"
    roleBaseDn="CN=Users,DC=your,DC=domain,DC=com"
    roleNameAttribute="sAMAccountName"
    roleMemberAttribute="member"
    roleObjectClass="group"
    cacheDurationMillis="300000"
    reportStatistics="true";
};
Create the file /etc/sysconfig/rundeckd with the following lines.
Note that the LOGIN_MODULE value should match the module name you used in your JAAS file (AD above).
export JAAS_CONF=/path/to/file/jaas-AD.conf
export LOGIN_MODULE=AD
I recommend creating a full-control ACL policy YAML first so you can test AD authentication, and then removing permissions as necessary. Note that the groups in the YAML should match the group names in your AD.
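As a sketch, a full-control policy modeled on the stock admin.aclpolicy that ships with Rundeck might look like the following (rundeck_admins is a placeholder for your AD group name):
description: Full access for AD admins.
context:
  project: '.*'
for:
  resource:
    - allow: '*'
  adhoc:
    - allow: '*'
  job:
    - allow: '*'
  node:
    - allow: '*'
by:
  group: rundeck_admins
---
description: Full access for AD admins.
context:
  application: 'rundeck'
for:
  resource:
    - allow: '*'
  project:
    - allow: '*'
  storage:
    - allow: '*'
by:
  group: rundeck_admins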

After specifying an alternate datasource, grails is throwing a sql exception for embedded h2

I have a small set of grails 3.0.11 applications. We have a domain module which is shared between the applications.
When one of the apps (app1, app2, etc.) starts up, it connects to the datasource and creates a table for every class in the domain module. All of the apps in the suite will attempt to create these tables.
I modified the application.yml so that it would use an MSSQL instance rather than the internal H2 DB, and I set dbCreate to update so that the schema persists through shutdowns of the application.
I am trying to split up the domain so that each application will only manage the schema for its relevant classes, i.e., app1 will handle DDL for classA, classB, and classC on startup, and app2 will handle classX, classY, classZ.
Following this guide, I have defined a second datasource unique to each app (i.e., app1db, app2db, etc.), and I have added mapping = { datasource 'appXdb' } to each class specifying the relevant app.
Now, when I start the app, I am getting a sql exception:
Caused by: java.sql.SQLException: Driver:jTDS 1.3.1 returned null for
URL:jdbc:h2:mem:grailsDB;MVCC=TRUE;LOCK_TIMEOUT=10000
Why is my app still trying to access an H2 DB after redefining the datasource to point to an MSSQL instance and adding a second datasource that also points to MSSQL?
application.yml:
---
server:
    port: 3434
    contextPath: '/app1'
---
grails:
    profile: web
    codegen:
        defaultPackage: cars.app
info:
    app:
        name: '@info.app.name@'
        version: '@info.app.version@'
        grailsVersion: '@info.app.grailsVersion@'
spring:
    groovy:
        template:
            check-template-location: false
---
grails:
    mime:
        disable:
            accept:
                header:
                    userAgents:
                        - Gecko
                        - WebKit
                        - Presto
                        - Trident
        types:
            all: '*/*'
            atom: application/atom+xml
            css: text/css
            csv: text/csv
            form: application/x-www-form-urlencoded
            html:
              - text/html
              - application/xhtml+xml
            js: text/javascript
            json:
              - application/json
              - text/json
            multipartForm: multipart/form-data
            pdf: application/pdf
            rss: application/rss+xml
            text: text/plain
            hal:
              - application/hal+json
              - application/hal+xml
            xml:
              - text/xml
              - application/xml
    urlmapping:
        cache:
            maxsize: 1000
    controllers:
        defaultScope: singleton
    converters:
        encoding: UTF-8
    views:
        default:
            codec: html
        gsp:
            encoding: UTF-8
            htmlcodec: xml
            codecs:
                expression: html
                scriptlets: html
                taglib: none
                staticparts: none
---
hibernate:
    cache:
        queries: false
        use_second_level_cache: true
        use_query_cache: false
        region.factory_class: 'org.hibernate.cache.ehcache.EhCacheRegionFactory'
endpoints:
    jmx:
        unique-names: true
    shutdown:
        enabled: true
dataSources:
    dataSource:
        pooled: true
        jmxExport: true
        driverClassName: net.sourceforge.jtds.jdbc.Driver
        username: grails
        password: password
    app1DataSource:
        pooled: true
        jmxExport: true
        driverClassName: net.sourceforge.jtds.jdbc.Driver
        username: grails
        password: password
environments:
    development:
        dataSource:
            dbCreate: update
            url: jdbc:jtds:sqlserver://127.0.0.1;databaseName=cars_demo
        appDataSource:
            dbCreate: update
            url: jdbc:jtds:sqlserver://1127.0.0.1;databaseName=cars_demo
    test:
        dataSource:
            dbCreate: update
            url: jdbc:h2:mem:testDb;MVCC=TRUE;LOCK_TIMEOUT=10000;DB_CLOSE_ON_EXIT=FALSE;DB_CLOSE_DELAY=-1
    production:
        dataSource:
            dbCreate: update
            url: jdbc:h2:mem:prodDb;MVCC=TRUE;LOCK_TIMEOUT=10000;DB_CLOSE_ON_EXIT=FALSE;DB_CLOSE_DELAY=-1
            properties:
                jmxEnabled: true
                initialSize: 5
                maxActive: 50
                minIdle: 5
                maxIdle: 25
                maxWait: 10000
                maxAge: 600000
                timeBetweenEvictionRunsMillis: 5000
                minEvictableIdleTimeMillis: 60000
                validationQuery: SELECT 1
                validationQueryTimeout: 3
                validationInterval: 15000
                testOnBorrow: true
                testWhileIdle: true
                testOnReturn: false
                jdbcInterceptors: ConnectionState
                defaultTransactionIsolation: 2 # TRANSACTION_READ_COMMITTED
Mapping element: static mapping = { datasource 'app1DataSource' }
Edit 1: added application.yml and the mapping element.
It turns out I had not properly nested my environment overrides in application.yml. I added a parent element dataSources: above the actual datasource definitions in the main section but did not do the same in the environment sections, which meant that my datasources were loading without a URL and then defaulted to the Grails in-memory H2 DB.
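As a sketch, the corrected nesting described above repeats the dataSources: parent inside each environment block (only development shown, datasource names as defined in the main dataSources: section):
environments:
    development:
        dataSources:
            dataSource:
                dbCreate: update
                url: jdbc:jtds:sqlserver://127.0.0.1;databaseName=cars_demo
            app1DataSource:
                dbCreate: update
                url: jdbc:jtds:sqlserver://127.0.0.1;databaseName=cars_demo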
Thanks @quindimildev! I recognized my oversight while trying to figure out the formatting to follow your suggestion.
In my case, I had to clear all old compiled files which had an old configuration in them.
To make sure I had a fresh compile I did the following:
grails clean-all
grails refresh-dependencies
grails compile
grails prod war
After that, the "returned null for URL:jdbc:h2:mem:grailsDB;MVCC=TRUE;LOCK_TIMEOUT=10000" error went away.

Drupal 7 not working with drupalvm

I was trying to create a Drupal VM instance running Drupal 7 by changing the "core" and "version" values as suggested in the README file and then running vagrant up, but after doing so it keeps installing Drupal 8 (the default).
Below are the drupal.make.yml and config.yml files that I edited before building the machine.
drupal.make.yml
---
api: 2
# Basic Drush Make file for Drupal. Be sure to update the drupal_major_version
# variable inside config.yml if you change the major version in this file.
# Drupal core (major version, e.g. 6.x, 7.x, 8.x).
core: "7.x"
projects:
  # Core.
  drupal:
    type: "core"
    download:
      # Drupal core branch (e.g. "6.x", "7.x", "8.0.x").
      branch: "7.0.x"
    working-copy: true
  # Other modules.
  devel: "1.x-dev"
config.yml
---
# `vagrant_box` can also be set to geerlingguy/centos6, geerlingguy/centos7,
# geerlingguy/ubuntu1204, parallels/ubuntu-14.04, etc.
vagrant_box: geerlingguy/ubuntu1404
vagrant_user: vagrant
vagrant_synced_folder_default_type: nfs
# If you need to run multiple instances of Drupal VM, set a unique hostname,
# machine name, and IP address for each instance.
vagrant_hostname: drupalvm.dev
vagrant_machine_name: drupalvm
vagrant_ip: 192.168.88.88
# Allow Drupal VM to be accessed via a public network interface on your host.
# Vagrant boxes are insecure by default, so be careful. You've been warned!
# See: https://docs.vagrantup.com/v2/networking/public_network.html
vagrant_public_ip: ""
# A list of synced folders, with the keys 'local_path', 'destination', and
# a 'type' of [nfs|rsync|smb] (leave empty for slow native shares). See
# http://docs.drupalvm.com/en/latest/extras/syncing-folders/ for more info.
vagrant_synced_folders:
  # The first synced folder will be used for the default Drupal installation, if
  # build_makefile: is 'true'.
  - local_path: ~/Documents/projectohri/drupalvm
    destination: /var/www/drupalvm
    type: nfs
    create: true
# Memory and CPU to use for this VM.
vagrant_memory: 1024
vagrant_cpus: 2
# The web server software to use. Can be either 'apache' or 'nginx'.
drupalvm_webserver: apache
# Set this to false if you are using a different site deployment strategy and
# would like to configure 'vagrant_synced_folders' and 'apache_vhosts' manually.
build_makefile: true
drush_makefile_path: /vagrant/drupal.make.yml
# Set this to false if you don't need to install drupal (using the drupal_*
# settings below), but instead copy down a database (e.g. using drush sql-sync).
install_site: true
# Settings for building a Drupal site from a makefile (if 'build_makefile:'
# is 'true').
drupal_major_version: 7
drupal_core_path: "/var/www/drupalvm/drupal"
drupal_domain: "drupalvm.dev"
drupal_site_name: "Drupal"
drupal_install_profile: standard
drupal_enable_modules: [ 'devel' ]
drupal_account_name: admin
drupal_account_pass: admin
drupal_mysql_user: drupal
drupal_mysql_password: drupal
drupal_mysql_database: drupal
# Additional arguments or options to pass to `drush site-install`.
drupal_site_install_extra_args: []
# Cron jobs are added to the root user's crontab. Keys include name (required),
# minute, hour, day, weekday, month, job (required), and state.
drupalvm_cron_jobs: []
# - {
# name: "Drupal Cron",
# minute: "*/30",
# job: "drush -r {{ drupal_core_path }} core-cron"
# }
# Drupal VM automatically creates a drush alias file in your ~/.drush folder if
# this variable is 'true'.
configure_local_drush_aliases: true
# Apache VirtualHosts. Add one for each site you are running inside the VM. For
# multisite deployments, you can point multiple servernames at one documentroot.
# View the geerlingguy.apache Ansible Role README for more options.
apache_vhosts:
  - servername: "{{ drupal_domain }}"
    documentroot: "{{ drupal_core_path }}"
    extra_parameters: |
      ProxyPassMatch ^/(.*\.php(/.*)?)$ "fcgi://127.0.0.1:9000{{ drupal_core_path }}"
  - servername: "adminer.drupalvm.dev"
    documentroot: "/opt/adminer"
  - servername: "xhprof.drupalvm.dev"
    documentroot: "/usr/share/php/xhprof_html"
  - servername: "pimpmylog.drupalvm.dev"
    documentroot: "/usr/share/php/pimpmylog"
apache_remove_default_vhost: true
apache_mods_enabled:
- expires.load
- ssl.load
- rewrite.load
# Nginx hosts. Each site will get a server entry using the configuration defined
# here. Set the 'is_php' property for document roots that contain PHP apps like
# Drupal.
nginx_hosts:
  - server_name: "{{ drupal_domain }}"
    root: "{{ drupal_core_path }}"
    is_php: true
  - server_name: "adminer.drupalvm.dev"
    root: "/opt/adminer"
    is_php: true
  - server_name: "xhprof.drupalvm.dev"
    root: "/usr/share/php/xhprof_html"
    is_php: true
  - server_name: "pimpmylog.drupalvm.dev"
    root: "/usr/share/php/pimpmylog"
    is_php: true
nginx_remove_default_vhost: true
# MySQL Databases and users. If build_makefile: is true, first database will
# be used for the makefile-built site.
mysql_databases:
  - name: "{{ drupal_mysql_database }}"
    encoding: utf8
    collation: utf8_general_ci
mysql_users:
  - name: "{{ drupal_mysql_user }}"
    host: "%"
    password: "{{ drupal_mysql_password }}"
    priv: "{{ drupal_mysql_database }}.*:ALL"
# Comment out any extra utilities you don't want to install. If you don't want
# to install *any* extras, make set this value to an empty set, e.g. `[]`.
installed_extras:
- adminer
- drupalconsole
- mailhog
- memcached
# - nodejs
- pimpmylog
# - redis
# - ruby
# - selenium
# - solr
- varnish
- xdebug
- xhprof
# Add any extra apt or yum packages you would like installed.
extra_packages:
- unzip
# `nodejs` must be in installed_extras for this to work.
nodejs_version: "0.12"
nodejs_npm_global_packages: []
# `ruby` must be in installed_extras for this to work.
ruby_install_gems_user: "{{ vagrant_user }}"
ruby_install_gems: []
# You can configure almost anything else on the server in the rest of this file.
extra_security_enabled: false
drush_version: master
drush_keep_updated: true
drush_composer_cli_options: "--prefer-dist --no-interaction"
firewall_allowed_tcp_ports:
- "22"
- "25"
- "80"
- "81"
- "443"
- "4444"
- "8025"
- "8080"
- "8443"
- "8983"
firewall_log_dropped_packets: false
# PHP Configuration. Currently-supported versions: 5.5, 5.6, 7.0.
php_version: "5.6"
php_memory_limit: "192M"
php_display_errors: "On"
php_display_startup_errors: "On"
php_enable_php_fpm: true
php_realpath_cache_size: "1024K"
php_sendmail_path: "/usr/sbin/ssmtp -t"
php_opcache_enabled_in_ini: true
php_opcache_memory_consumption: "192"
php_opcache_max_accelerated_files: 4096
php_max_input_vars: "4000"
composer_path: /usr/bin/composer
composer_home_path: '/home/vagrant/.composer'
# composer_global_packages:
# - { name: phpunit/phpunit, release: '@stable' }
# Run specified scripts after VM is provisioned. Path is relative to the
# `provisioning/playbook.yml` file.
post_provision_scripts: []
# - "../examples/scripts/configure-solr.sh"
# MySQL Configuration.
mysql_root_password: root
mysql_slow_query_log_enabled: true
mysql_slow_query_time: 2
mysql_wait_timeout: 300
adminer_install_filename: index.php
# Varnish Configuration.
varnish_listen_port: "81"
varnish_default_vcl_template_path: templates/drupalvm.vcl.j2
varnish_default_backend_host: "127.0.0.1"
varnish_default_backend_port: "80"
# Pimp my Log settings.
pimpmylog_install_dir: /usr/share/php/pimpmylog
pimpmylog_grant_all_privs: true
# XDebug configuration. XDebug is disabled by default for better performance.
php_xdebug_default_enable: 0
php_xdebug_coverage_enable: 0
php_xdebug_cli_enable: 1
php_xdebug_remote_enable: 1
php_xdebug_remote_connect_back: 1
# Use PHPSTORM for PHPStorm, sublime.xdebug for Sublime Text.
php_xdebug_idekey: PHPSTORM
php_xdebug_max_nesting_level: 256
# Solr Configuration (if enabled above).
solr_version: "4.10.4"
solr_xms: "64M"
solr_xmx: "128M"
# Selenium configuration.
selenium_version: 2.46.0
# Other configuration.
known_hosts_path: ~/.ssh/known_hosts
"7.0.x" is not a valid Drupal core branch. Re-read the comment above that line in drupal.make.yml and change it to "7.x".
Also, be sure to run vagrant destroy to remove all traces of the old instance. It could be that it isn't downloading a new copy, just reusing the Drupal 8 it downloaded already.
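As a sketch, the relevant part of drupal.make.yml with the corrected branch would read:
projects:
  # Core.
  drupal:
    type: "core"
    download:
      # Drupal core branch (e.g. "6.x", "7.x", "8.0.x").
      branch: "7.x"
    working-copy: true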

What's wrong with this simple file.managed saltstack configuration?

From a fresh SaltStack installation on server and client, the goal is to serve a file with a number inside:
SERVER
$vim /etc/salt/master
...
file_roots:
  base:
    - /srv/salt
...
$echo 1 > /srv/salt/tmp/salt.config.version
$cat /srv/salt/top.sls
base:
  '*':
    - tmpversion
$cat /srv/salt/tmpversion/init.sls
/tmp/salt.config.version:
  file.managed:
    - source: salt://tmp/salt.config.version
    - user: root
    - group: root
    - mode: 644
CLIENT (minion)
$vim /etc/salt/minion
...
master: <masterhostnamehere>
...
I'm using salt '*' state.sls tmpversion to apply the configuration, but I don't know how to get the changes applied automatically.
Salt doesn't do anything until you tell it to. That means you have to run the salt command on the CLI when you want a state to be applied, or you can use Salt's internal scheduler or your system's cron to run the job regularly.
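As a sketch of the scheduler option (the job name and the 30-minute interval are arbitrary choices, not from the original answer), something like this in the minion configuration would re-apply the state regularly:
schedule:
  apply_tmpversion:
    function: state.sls
    args:
      - tmpversion
    minutes: 30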

Gitlab Active Directory issues - gitlab-7.7.1_omnibus

I am having issues with Active Directory authentication via LDAP on GitLab Omnibus. I have tested the credentials and bind DN using ldapsearch and received a response with no issues, but for some reason I am not seeing any connection attempts when I log in as an AD user on the GitLab frontend. I receive the error "Could not authorize you from Ldapmain because "Invalid credentials"." regardless of whether the credentials are valid.
I also receive the following from sudo gitlab-rake gitlab:check:
** Invoke gitlab:ldap:check (first_time)
** Invoke environment
** Execute gitlab:ldap:check
Checking LDAP ...
LDAP users with access to your GitLab server (only showing the first 100 results)
Server: ldapmain
Checking LDAP ... Finished
Please let me know if my explanation is not clear, or if you think that additional information would be helpful. I tried searching around and am not finding my exact issue.
My configuration is as follows:
gitlab_rails['ldap_enabled'] = true
gitlab_rails['ldap_servers'] = YAML.load <<-EOS # remember to close this block with 'EOS' below
main: # 'main' is the GitLab 'provider ID' of this LDAP server
## label
#
# A human-friendly name for your LDAP server. It is OK to change the label later,
# for instance if you find out it is too large to fit on the web page.
#
# Example: 'Paris' or 'Acme, Ltd.'
label: 'LDAP'
host: 'myadserver.my.domain.net'
port: 389
uid: 'sAMAccountName'
method: 'plain' # "tls" or "ssl" or "plain"
bind_dn: 'CN=Gitlab,OU=Service Accounts,OU=Washington\, D.C.,OU=United States,OU=NA,DC=my,DC=domain,DC=net'
password: 'mypasswrd'
# This setting specifies if LDAP server is Active Directory LDAP server.
# For non AD servers it skips the AD specific queries.
# If your LDAP server is not AD, set this to false.
active_directory: true
# If allow_username_or_email_login is enabled, GitLab will ignore everything
# after the first '@' in the LDAP username submitted by the user on login.
#
# Example:
# - the user enters 'jane.doe@example.com' and 'p@ssw0rd' as LDAP credentials;
# - GitLab queries the LDAP server with 'jane.doe' and 'p@ssw0rd'.
#
# If you are using "uid: 'userPrincipalName'" on ActiveDirectory you need to
# disable this setting, because the userPrincipalName contains an '@'.
allow_username_or_email_login: true
# Base where we can search for users
#
# Ex. ou=People,dc=gitlab,dc=example
#
base: 'OU=Washington\, D.C.,OU=United States,OU=NA,DC=my,DC=domain,DC=net'
# Filter LDAP users
#
# Format: RFC 4515 http://tools.ietf.org/search/rfc4515
# Ex. (employeeType=developer)
#
# Note: GitLab does not support omniauth-ldap's custom filter syntax.
#
#user_filter: ''
EOS
This was, of course, a whitespace issue. See lines below:
gitlab_rails['ldap_enabled'] = true
gitlab_rails['ldap_servers'] = YAML.load <<-EOS # remember to close this block with 'EOS' below
  main: # 'main' is the GitLab 'provider ID' of this LDAP server
    ## label
    #
    # A human-friendly name for your LDAP server. It is OK to change the label later,
    # for instance if you find out it is too large to fit on the web page.
    #
    # Example: 'Paris' or 'Acme, Ltd.'
    label: 'LDAP'
