I'm learning Ansible and I would like to install a Nagios server with several monitored nodes. The Nagios install steps I'm following are from this tutorial on DigitalOcean.
Step 5 of the tutorial confuses me, as this is my first time using Ansible. This step involves creating a configuration file on the master server for each monitored node, which I achieved using templates like this:
- name: Configure Nagios server
  hosts: master
  sudo: true
  vars:
    nagios_slaves_config_dir: /etc/nagios/servers
    nagios_config_file: /etc/nagios/nagios.cfg
  tasks:
    # shortened for brevity
    - name: copy slaves config
      template: src=../templates/guest.cfg.j2 dest=/etc/nagios/servers/{{ item }}.cfg owner=root mode=0644
      with_items: groups['slaves']
The template looks like this:
define host {
    use                 linux-server
    host_name           {{ inventory_hostname }}
    alias               {{ inventory_hostname }}
    address             {{ hostvars['slave'].ansible_eth1.ipv4.address }}
}

define service {
    use                 generic-service
    host_name           {{ inventory_hostname }}
    service_description PING
    check_command       check_ping!100.0,20%!500.0,60%
}
The configuration file gets created, but the {{ inventory_hostname }} variable is wrong - instead of node_1 it contains master.
How can I template the configuration file for every monitored node so that it is created with the proper values?
EDIT:
One idea is to generate the config files on the monitored nodes and copy them to the master node. I will try that tomorrow.
Your play is specifically only targeting your master server:
- name: Configure Nagios server
  hosts: master
  ...
so the task will only run against this node (or multiple nodes in an inventory group called master).
You then seem to have got in a bit of a muddle with how you get the variables from the other servers that you wish to monitor (everything in the slaves inventory group in your case).
inventory_hostname is going to do pretty much what it says on the tin - it's going to give you the hostname of the server that the task is running against. Which in this case is only ever going to be master.
You are, however, on the right track with this line:
address {{ hostvars['slave'].ansible_eth1.ipv4.address }}
but you should have instead used the item that is being passed to the template in the task loop (you use with_items: groups['slaves'] to loop through all the hosts in slaves).
So your template wants to look something like:
define host {
    use                 linux-server
    host_name           {{ hostvars[item].ansible_hostname }}
    alias               {{ hostvars[item].ansible_hostname }}
    address             {{ hostvars[item].ansible_eth0.ipv4.address }}
}

define service {
    use                 generic-service
    host_name           {{ hostvars[item].ansible_hostname }}
    service_description PING
    check_command       check_ping!100.0,20%!500.0,60%
}
This will generate, for each server in the slaves group, a Nagios config file on the master named after that server's entry in the inventory file (this could be anything, but by default would be an IP address, short name, or fully qualified domain name), with the expected values templated in.
Alternatively you might want to rethink your whole strategy so that running a task against a monitored node creates the config file on the Nagios server, allowing you to register servers to be monitored with a central Nagios server.
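A minimal sketch of that approach, assuming an inventory group called master for the Nagios server and the same template path as above: the play targets the slaves but delegates the templating task to the Nagios server, so inventory_hostname and the gathered facts (ansible_eth1 and friends) refer to the monitored node and the original template works unchanged.

# Sketch only - the group name "master", paths and mode are assumptions.
# "become" is the modern spelling of the "sudo" directive used above.
- name: Register monitored nodes with the Nagios server
  hosts: slaves
  become: true
  tasks:
    - name: Template this node's config onto the Nagios server
      template:
        src: ../templates/guest.cfg.j2
        dest: "/etc/nagios/servers/{{ inventory_hostname }}.cfg"
        owner: root
        mode: "0644"
      delegate_to: "{{ groups['master'][0] }}"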
It's unclear from your explanation where you expect Ansible to get the node_1 value from. If it is not the hostname, where else is the information stored? If it's stored in a variable, you could access it that way, but from the looks of it you are using your inventory in a backwards fashion: you should not be using internal implementation details of the system as an inventory name. How are you even able to connect to master - via an entry in /etc/hosts?
Instead of naming your host master, I would create a variable that tracks whether the host is a master or a slave, for instance cluster_type: master or cluster_type: slave. These variables could be applied as host variables or group variables (group variables are probably what you want if you have multiple slaves), as sketched below. The host name in your inventory should ideally be something that you can actually connect to and reference.
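For example, a hypothetical inventory and group_vars layout along those lines (all host and group names here are invented for illustration):

# inventory
[nagios_servers]
nagios1.example.com

[monitored_nodes]
node1.example.com
node2.example.com

# group_vars/nagios_servers.yml
cluster_type: master

# group_vars/monitored_nodes.yml
cluster_type: slave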
I'm facing a problem with gcloud and their support can't seem to help me.
To put my app into production I need a Redis instance to host some data. I'm using Memorystore because I like to have everything on gcloud.
My app runs in the App Engine standard environment, so per the docs (https://cloud.google.com/memorystore/docs/redis/connect-redis-instance-standard) I have to configure a VPC connector. I think the CIDR I keep entering is wrong - can someone help me find the right CIDR?
connectMode: DIRECT_PEERING
createTime: '2020-03-13T17:20:51.590448560Z'
currentLocationId: europe-west1-d
displayName: APP-EPG
host: 10.240.224.179
locationId: europe-west1-d
memorySizeGb: 1
name: projects/*************/locations/europe-west1/instances/app-epg
persistenceIamIdentity: *************
port: 6379
redisVersion: REDIS_4_0
reservedIpRange: 10.240.224.176/29
state: READY
tier: BASIC
Thank you all!
First, in order for the VPC connector to work, your App Engine instances have to be in the same VPC and region as your Redis instance. If not, there will be no connectivity between the two.
Also make sure your Redis instance and your app use one of the supported locations; by now there are a lot of them.
Your Redis instance is in the europe-west1 region, so to create your VPC connector you have to set the name of the VPC network your Redis instance is in (for example "default").
The IP range you were asking about can be any range that is not already reserved by the network the Redis instance is in.
So, for example, if your "default" network already uses 10.13.0.0/28, you have to specify something else like 10.140.0.0/28, etc. It has to be a /28 - otherwise you won't be able to create the connector.
Why 10.13.0.0 or any other address? It is going to be assigned as the source range from which your apps connect to Redis (or any other VMs) in the specified network.
I've tested it using the command:
gcloud compute networks vpc-access connectors create conn2 --network default \
    --range 10.13.0.0/28 --region=europe-west1
Or you can do it in the console under Serverless VPC Access by clicking "Add new connector".
You can also read the documentation on how to create a connector.
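Once the connector exists, the App Engine standard app still has to reference it. A minimal app.yaml sketch, assuming the connector created above; the project ID placeholder and the runtime line are illustrative only:

# app.yaml (sketch - substitute your own project ID; runtime is just an example)
runtime: python39
vpc_access_connector:
  name: projects/YOUR_PROJECT_ID/locations/europe-west1/connectors/conn2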
I have a React app, and I set it up to run on a custom URL using the HOST variable when starting the app, something like:
"scripts": {
"start": "HOST=my-local-website.com ..."
}
I need to access this URL from a Windows virtual machine to test it on IE11. Before setting the HOST variable I was able to access the app simply via my IP address (192.168.X.XX:3000); having changed the HOST variable, this doesn't work anymore.
Does anyone know how I can access it from a virtual machine?
Thank you in advance
I suggest setting environment variables in a separate .env file as described in the documentation.
In the .env file, set HOST=my-local-website.com to change the host (note that the REACT_APP_ prefix the docs mention is only required for custom variables you want exposed to your code; built-in ones such as HOST and PORT don't need it).
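For example, a minimal .env in the project root (the domain is the one from your question; PORT is shown only for completeness):

# .env - read by the Create React App dev server at startup
HOST=my-local-website.com
# PORT is optional; 3000 is already the default
PORT=3000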
Mapping the web site name my-local-website.com to the IP address of the server (192.168.X.XX in your case) is done using DNS. This is a networking concern, not something related to the frameworks you use, so to be able to access your site by name you have to establish a mapping between the name of the site and the IP address.
In terms of DNS this mapping would look like:
my-local-website.com A 192.168.X.XX
But for testing purposes you can use a simplified approach (I don't think you have a DNS server in place). On Windows you can use the hosts file, which is located in the C:\Windows\System32\drivers\etc folder and is named hosts. Open it with any text editor (like Notepad) and add the line:
192.168.X.XX my-local-website.com
The IP address goes first, the name last. Don't include the port number (:3000) as it is not related to DNS. The hosts file should be changed on your test (client) PC, not on the PC where your app runs.
You may also modify hosts on the PC where your app runs to check whether the host has been configured correctly.
To check that everything is correct, you can use ping like this:
ping my-local-website.com
The IP address should be printed if you configured everything correctly.
If you run your app on a Windows host there may also be a problem with the firewall configuration. If your app opens on the same PC where it is started but not on another PC, most probably a firewall is blocking the traffic. It can be Windows Firewall or antivirus software, if you have any.
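If Windows Firewall turns out to be the blocker, a hedged PowerShell example for opening the dev-server port (run as administrator; port 3000 is an assumption - use whatever port your app actually listens on):

# Allow inbound TCP traffic to the dev server (port assumed to be 3000)
New-NetFirewallRule -DisplayName "React dev server" -Direction Inbound -Protocol TCP -LocalPort 3000 -Action Allow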
This is probably going to be a stretch, but it seems worth asking. Let's say I have a static Angular web app on nginx-host (linux) along with the following /etc/hosts file, automatically generated for the host by some stringy configuration management tool:
127.0.0.1 localhost
10.0.0.1 internalhost
Next, I have a stock Nginx configuration for nginx-host, nothing fancy is happening with the server blocks - with this problematic location block:
location ^~ /app {
    return 301 http://internalhost/end/point;
}
The problem is that this block returns precisely the url listed in the location block without any sort of translation from internalhost to the appropriate IP, resulting in client-side errors - and I need it to resolve that IP before handing it back.
Please note, we can't use maps or upstreams here. The Immovable Wall here is that our configuration system handles all service discovery and host-dependency resolution by doing lookups and generating hostfiles, and it's entirely separated from our internal nginx configuration, so we can't connect the lookups to the nginx setup to allow for dynamic maps or upstreams. This is also happening across several isolated segments of network for varied testing environments, so it's a hard requirement that we reference the nginx host's /etc/hosts file to resolve the host name before passing back the redirect path, as internalhost can be anything from dev-internalhost to production-backup-internalhost, all of which have distinct IPs.
Note: proxy_pass is not a solution here; we need the redirect for SSO purposes, and when the request is made to the internalhost location, the request params need to carry through to the internalhost machine from the nginx-host, and the client needs to see the redirect to know that it's now on speaking terms with the internalhost server.
Edit for clarity: the client has no way to resolve internalhost on its own: nginx-host has a static UI, a link on that UI hits the /app endpoint, and nginx needs to pass back an IP-based link derived from the generated local hosts file. internalhost has no DNS records at all aside from the local hosts file - but the IP address in the hosts file would resolve to something like dev-internalhost.example.com that could actually be reached.
The hostname in the 301 redirect is just text that's sent to the web browser.
The vast majority of clients out there will use DNS to find the IP address from the hostname in the URL.
If you have browsers "out there" on the Internet, they will have to be given a fully qualified domain name (e.g. www.example.com), as internalhost or something similar will never point back to you. In this case you will have to change the message sent back by nginx; that's trivial if you have control over the configuration.
If you only have internal hosts, your internal DNS should be more than able to let internal hosts properly resolve "internalhost" to its internal IP address. In essence, make sure a search-domain option is sent along with the DHCP responses.
How to set up DHCP and DNS inside a company is relatively easy if they have an internal network/IT team that knows what they are doing.
If they have a mess - that happens every so often - nothing will work properly, no matter what you do.
A lot depends on how they set things up, but in general, to make names like "internalhost" resolve internally to an IP address, I'd use:
1. Pick an internal domain name (if they have not already), e.g. .local is an option, or historically in.example.com (where example.com is their external name). It does not have to be known externally, it just must not be used externally. Having it known externally makes it slightly harder, so avoid that.
2. DHCP: I'd set DHCP to emit the optional "search domain". How to do that depends on what DHCP server they use, but e.g. https://serverfault.com/questions/481453/set-search-domain-on-dhcp-server-without-changing-domain-name shows an example. I typically make sure it emits in.example.com, example.com as that makes it easier when typing domain names.
3. Internal DNS: now just add, on the internal DNS server(s), an A record for internalhost.in.example.com. and point it to your RFC1918 address (see the example record after this list).
4. Optional: I'd make sure the firewalls disallow internal clients from using external DNS servers - or (better?) redirect them transparently to the internal DNS server(s). That way you avoid users setting e.g. 8.8.8.8 and 8.8.4.4 as DNS servers, overruling what they get from the DHCP server, and hence not seeing your internal names.
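For illustration, a BIND-style record for step 3 might look like this (the internal zone name and the RFC1918 address are assumptions, the address being taken from the example hosts file in the question):

; entry in the in.example.com zone file (hypothetical zone and address)
internalhost.in.example.com.    IN    A    10.0.0.1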
That's it. http://internalhost/whatever will now go to the machine with the IP address given in step 3 above and browsers will send a Host: header (if you have virtual hosts!) of internalhost.
I am working on an angular app using the angular cli to set things up. Running the ng serve command spawns a server at this address <my_ec2_host_name>:4200. When I try to access the page on the browser it doesn't work (connection timed out error). I believe this is because of security reasons so I added the following rule to my security groups for the ec2 instance:
Port 4200 should now be accessible but I still can't get the page to load. Can someone think of how to get this to work?
Start Angular with the command below:
ng serve --host=0.0.0.0 --disable-host-check
It will disable the host check and allow access via the IP.
You can set up the host option like this:
ng serve --host 0.0.0.0
The steps you are doing are correct for opening a port via Security Groups in the EC2 console. Make sure you are modifying the correct security group, and make sure that your changes have been saved.
Your instance may have additional firewalls in place, so you will want to check the OS documentation. For example, RHEL uses iptables as a further security measure: https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Security_Guide/sect-Security_Guide-IPTables.html.
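For instance, on a host that uses iptables directly, a rule like the following (sketch only; port 4200 assumed, and the rule is not persistent across reboots) would allow the traffic:

# Allow inbound TCP connections to the Angular dev server on port 4200
sudo iptables -I INPUT -p tcp --dport 4200 -j ACCEPT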
That looks correct. Are you sure that your server is running and listening for connections?
You should ssh to that server and verify that the page can be loaded locally. Eg:
curl http://<YOUR HOST IP ADDRESS>:4200
eg: curl http://54.164.10.123:4200
You should be careful to use the public IP address (e.g. IPv4 Public IP when you're in the EC2 console). I've run into problems in the past where I've had a server listening on one IP address (often localhost) and not the public IP address.
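A quick way to confirm which address the dev server is actually bound to (a generic Linux sketch; older distributions may only have netstat):

# List listening TCP sockets and filter for the Angular dev-server port
sudo ss -tlnp | grep 4200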
Also maybe a problem: Is your host inside a VPC of some sort?
Is it possible to enable notifications for services in Nagios but disable host notifications? I have a lot of local printers which don't have an impact when they are down, but I want to have a service notification, e.g. for "no paper" or "low toner cartridge".
Any experiences? Thank you
There are a couple of options. You can create a new host template to use for printers that inherits from your generic-host template, but turns off host notifications with:
notifications_enabled 0
E.g.
define host{
    name                    generic-printer
    use                     generic-host
    notifications_enabled   0
    register                0
}
Then each printer's host definition could include the line
use generic-printer
in its definition, as in the example below.
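A hypothetical printer definition using that template (the host name, alias and address are made up):

define host{
    use         generic-printer
    host_name   printer1
    alias       Office printer 1
    address     192.168.1.50
}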
Alternately, you could create a brand new printer template similar to the one for generic-host, with notifications_enabled disabled and also without any entry for check_command (which is where the command used to determine whether a host is up is chosen).
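A rough sketch of such a standalone template (values are illustrative; any other required host directives such as check_period and contacts would still need to be supplied here or in the individual host definitions):

define host{
    name                    printer-template
    notifications_enabled   0
    register                0
    ; no check_command here, so the host itself is never actively checked
}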
Default checks like ping or SSH can be disabled by deleting the service definition in the corresponding machine's .cfg file on the monitoring host. NRPE or NRDP services can be used for checks like low toner cartridge or no paper.
Steps to remove the ping check for printer1:
1- Open the printer1.cfg file on the Nagios server for editing (usually under .../nagios/configurations/objects/).
2- Find the service definition in printer1.cfg where the service_description value is PING and delete that definition.
3- Restart Nagios. After this, the ping check should no longer be visible in the Nagios web interface either.
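For reference, on a systemd-based install the restart in step 3 would look something like this (the paths and the service name are assumptions - the service may be called nagios4 or the files may live elsewhere depending on how Nagios was installed):

# Validate the configuration first, then restart the service
sudo /usr/local/nagios/bin/nagios -v /usr/local/nagios/etc/nagios.cfg
sudo systemctl restart nagios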