Nagios Supervisor status check

I want to check my supervisord status with Nagios. I have 2 servers: one Nagios server and one client server. Supervisor is running on the client server.
I have put my check_supervisord.py file in my /usr/local/nagios/libexec path, and in my services.cfg file I have:
define service {
use generic-service
host_name ubuntuserver
service_description supervisord
check_command check_supervisord!80!hduser!password
}
But it is showing me a plugin missing error.

Since your other plugins are running successfully, I would guess this is a permission issue.
cd /usr/local/nagios/libexec
chmod 755 check_supervisord.py
chown root:nagios check_supervisord.py
Try that and see if the plugin works. If it doesn't, check what permissions supervisord needs when run from a script, or compare the script's permissions to those of the other plugins that are working on your system.
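If that doesn't settle it, running the plugin by hand as the nagios user can separate permission problems from plugin problems (a sketch; the positional arguments are hypothetical, mirroring the check_command above):
cd /usr/local/nagios/libexec
sudo -u nagios ./check_supervisord.py 80 hduser password
echo $?    # 0/1/2 = OK/WARNING/CRITICAL; "Permission denied" or 126/127 points back at permissions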


Ubuntu integration to Windows domain

Hello
I am integrating an Ubuntu Bionic 18.04 Linux server into a Windows domain.
I followed the steps below:
1- Update packages first.
2- Install the required packages.
sudo apt -y install realmd sssd sssd-tools sssd-ad libnss-sss libpam-sss adcli samba-common-bin oddjob oddjob-mkhomedir packagekit
sudo apt-get install -y krb5-user sssd-krb5
pam ????
3- Server Network config
Create the file 99_config.yaml (/etc/netplan/99_config.yaml)
Configure the IP, DNS server and domain (a sketch follows this step)
Change server hostname to a fully qualified domain name
sudo hostnamectl set-hostname serverName.mydomain
Edit /etc/hosts
Add or update the line: 127.0.0.1 serverName.mydomain
Apply the changes: sudo netplan apply
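A sketch of what that /etc/netplan/99_config.yaml might contain (the interface name and addresses are placeholders, not values from my setup):
network:
  version: 2
  ethernets:
    ens160:                          # placeholder interface name
      addresses: [192.168.1.50/24]   # the server's static IP
      gateway4: 192.168.1.1
      nameservers:
        search: [mydomain]
        addresses: [192.168.1.10]    # the AD/DNS server IP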
4- Discover the domain
realm discover mydomain (works fine)
5- Kerberos config
REALM (in uppercase) = MYDOMAIN
kdc = my Active Directory server IP
admin_server = my Active Directory server name
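In /etc/krb5.conf that translates to something like this (a sketch; MYDOMAIN, the IP, and the server name are placeholders):
[libdefaults]
    default_realm = MYDOMAIN

[realms]
    MYDOMAIN = {
        kdc = 192.168.1.10
        admin_server = ad-server.mydomain
    }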
6- Join the Ubuntu server to the domain
realm join MyNameServerIP -U mamadi.fofana (works fine)
7- Modify pam to automatically create a home directory for AD users
pam-auth-update
Check “activate mkhomedir”.
8- Test to see if the integration is working correctly
id myuserName@myDomain
getent passwd myuserName@myDomain
groups myuserName@myDomain
All three of the above commands work fine.
9- Admin config
Update sudoers file to include your domain administrators security group with full sudo access:
sudo nano /etc/sudoers.d/admins
Add the necessary lines to it. For example:
user ALL=(ALL) ALL
%Domain\ Admins ALL=(ALL) ALL
To avoid adding the domain name to the username every time, configure this.
sudo nano /etc/sssd/sssd.conf
Change the ‘use_fully_qualified_names’ value to False.
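For reference, that line lives in the domain section of /etc/sssd/sssd.conf (a sketch, assuming the domain is 'mydomain'):
[domain/mydomain]
use_fully_qualified_names = False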
Restart and check:
sudo systemctl restart sssd
Allow login for specific AD users or groups:
sudo realm permit myUserName@myDomain someUserName@myDomain
sudo realm permit -g 'Domain Admins'
Login using SSH via another terminal:
ssh -l myuserName@myDomain MyUbuntuServerIP
At first it worked; several domain users managed to connect via SSH, FileZilla, and directly on the server with their domain credentials.
The only concern was that name resolution didn't work for the Ubuntu server name, so we used the IP address.
To fix the name resolution problem, I had to install and configure Samba and nmbd.
Then, after a few days, I could no longer connect to the server with domain accounts.
With SSH I get the message:
Connection closed by ServerIP port 22
Directly on the server I get the message:
Sorry that didn't work, please try again
I am sure of the password, however, and other users have also failed to connect.
Do you have an idea of the origin of the problem, or a way to debug it and identify the source?
The migration worked at first, then stopped recognizing domain user passwords.
Note that although domain users cannot connect,
the following commands still work and show correct output:
realm discover mydomain (works fine)
id myuserName@myDomain
getent passwd myuserName@myDomain
groups myuserName@myDomain
Please assist

Setup Nagios check_clamd

I'm getting "(No output returned from plugin)" for a host and cannot understand why:
Service on monitor server:
# Check Clamd availability
define service {
hostgroup_name clamd-servers
service_description ClamAV Daemon
check_command check_nrpe!check_clamd
use generic-service
notification_interval 0 ; set > 0 if you want to be renotified
}
Hosts on monitor:
# Clamd Servers
define hostgroup {
hostgroup_name clamd-servers
alias ClamAV servers
members fsmvps
}
nrpe_local.cfg on host fsmvps:
command[check_clamd]=/usr/lib/nagios/plugins/check_clamd -H /var/run/clamav/clamd.ctl
Running the command /usr/lib/nagios/plugins/check_clamd -H /var/run/clamav/clamd.ctl on the host produces the following output, as ClamAV is up and running:
CLAMD OK - 0.000 second response time on socket /var/run/clamav/clamd.ctl [PONG]|time=0.000219s;;;0.000000;10.000000
I'm clueless at the moment as to why no output is returned, as I'm a beginner with Nagios.
Perhaps your NRPE service wasn't set up right (sometimes it complains about SSL).
Running something like this (as the nagios user) on your monitor server:
/usr/lib/nagios/plugins/check_nrpe -H fsmvps -c check_clamd
might help diagnose things.
It might be:
Permissions (can the nagios user on fsmvps read /var/run/clamav/clamd.ctl?)
check_nrpe needing the -n flag or a different port.
You've not restarted NRPE on the fsmvps server after editing its config.
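To check the first and last of those on fsmvps, something like this might help (a sketch; the NRPE service name may differ by distribution):
sudo -u nagios /usr/lib/nagios/plugins/check_clamd -H /var/run/clamav/clamd.ctl
ls -l /var/run/clamav/clamd.ctl
sudo service nagios-nrpe-server restart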

Add a plugin from Nagios Exchange to Nagios 3.x

I just finished installing Nagios 3 in Ubuntu server and I'm not sure how I can add a third party plugin into it.
The plugin is available: Here
Thanks in advance for your help
You didn't mention any information about the server that you want to monitor with Nagios.
I'm going to assume it's an Ubuntu Linux server and it's not the same server as the machine you installed Nagios on.
On the server to be monitored:
Ensure that NRPE (Nagios Remote Plugin Executor) is installed. Here's a link to instructions for installing NRPE on the Ubuntu operating system.
http://tecadmin.net/install-nrpe-on-ubuntu/
After you install NRPE on the server to be monitored, it's very important that you edit the nrpe.cfg file (most likely found at /etc/nagios/nrpe.cfg, but this can differ based on your installation method).
You need to modify the allowed_hosts configuration line to include the IP address of your Nagios server. If you don't, NRPE will refuse connection attempts from Nagios and you won't be able to run your Nagios plugin or report results back to Nagios.
Be sure to restart NRPE after you've modified nrpe.cfg.
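For example, the relevant nrpe.cfg line and restart might look like this (10.0.0.2 stands in for your Nagios server's IP, and the service name assumes the Ubuntu package):
allowed_hosts=127.0.0.1,10.0.0.2
sudo service nagios-nrpe-server restart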
Next you'll need to download the Nagios plugin to the server being monitored. For example:
wget --directory-prefix=/usr/lib/nagios/plugins/ https://github.com/thehunmonkgroup/nagios-plugin-file-ages-in-dirs/archive/v1.1.tar.gz
cd to your nagios plugins directory and extract the tar-gzipped archive you just downloaded:
cd /usr/lib/nagios/plugins/
tar zxvf v1.1.tar.gz
ls -al /usr/lib/nagios/plugins/nagios-plugin-file-ages-in-dirs-1.1/check_file_ages_in_dirs
Be sure to give the nagios plugin script execute permissions:
chmod a+x /usr/lib/nagios/plugins/nagios-plugin-file-ages-in-dirs-1.1/check_file_ages_in_dirs
With the nagios plugin now residing on your server to be monitored, you will need to define some command definitions on that same server.
First you need to find the path that NRPE will search for new command definitions that you manually add to the system.
To do this, grep your nrpe.cfg file for the term "include_dir".
For example:
grep include_dir /etc/nagios/nrpe.cfg
include_dir=/etc/nrpe.d/
If no result for "include_dir" is returned from your grep, add the above "include_dir" configuration to your nrpe.cfg file. Ensure that the /etc/nrpe.d/ folder is created.
Create a new file in your include_dir named check_file_ages_in_dirs.cfg. Add to check_file_ages_in_dirs.cfg a command definition for check_file_ages_in_dirs pointing to the path of your Nagios plugin and including the arguments necessary to execute it.
For example:
echo "command[check_file_ages_in_dirs]=/usr/lib/nagios/plugins/nagios-plugin-file-ages-in-dirs-1.1/check_file_ages_in_dirs -d \"/tmp\" -w 24 -c 48" >> /etc/nrpe.d/check_file_ages_in_dirs.cfg
cat /etc/nrpe.d/check_file_ages_in_dirs.cfg
command[check_file_ages_in_dirs]=/usr/lib/nagios/plugins/nagios-plugin-file-ages-in-dirs-1.1/check_file_ages_in_dirs -d "/tmp" -w 24 -c 48
For the above, I hard-coded the warning and critical thresholds of 24 hours and 48 hours. I've also hard-coded the directory to check as "/tmp".
Attempt to execute the nagios plugin script locally to confirm it's working correctly:
/usr/lib/nagios/plugins/nagios-plugin-file-ages-in-dirs-1.1/check_file_ages_in_dirs -d "/tmp" -w 24 -c 48
OK: 1 dir(s) -- /tmp: 1 files
Ensure the nrpe user has read permissions on your check_file_ages_in_dirs.cfg file:
chmod a+r /etc/nrpe.d/check_file_ages_in_dirs.cfg
Restart your nrpe service, as per the instructions in http://tecadmin.net/install-nrpe-on-ubuntu/
You also need to ensure that if you have any firewall rules in place, they allow tcp traffic to port 5666.
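For example, with ufw (assuming ufw is what's managing your firewall; adjust for iptables or another tool):
sudo ufw allow 5666/tcp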
On your Nagios server:
From your Nagios server, you'll need to manually run check_nrpe against your host to be monitored so as to verify correct functioning of the Nagios plugin and correct NRPE configuration.
Find the location of your check_nrpe file. On my installation, it's located at /usr/local/nagios/libexec/check_nrpe, but this could be different for your installation.
find / -name "check_nrpe" -type f
/usr/local/nagios/libexec/check_nrpe
If you don't have check_nrpe, you'll need to install it on your Nagios server.
apt-get install nagios-nrpe-plugin
First execute check_nrpe against your server to be monitored with no remote command arguments. This is just to confirm that NRPE is running on your remote server and it's correctly configured to allow connections from your Nagios server.
Note: For this example I'll pretend the IP address of the host I want to monitor is 10.0.0.1. Replace this with the IP address of the host you want to monitor.
/usr/local/nagios/libexec/check_nrpe -H 10.0.0.1
NRPE v2.14
The check_nrpe command above should return the version number of the NRPE agent running on the remote host if it's configured correctly.
Next attempt to manually invoke the Nagios plugin via NRPE:
/usr/local/nagios/libexec/check_nrpe -H 10.0.0.1 -c check_file_ages_in_dirs
OK: 1 dir(s) -- /tmp: 1 files
If you get output similar to the above, then it's time to move on to defining hosts, services, and commands on your Nagios server.
It would be cleaner to define separate configuration files for host, service, and command definitions. But that's outside of the scope of this post.
For now, we'll define these things in the default Nagios configuration file (nagios.cfg).
First locate your nagios.cfg file:
find / -name "nagios.cfg" -type f
/usr/local/nagios/etc/nagios.cfg
Edit the nagios.cfg file.
Add a host definition for the server you wish to monitor:
define host {
host_name Remote-Host
alias Remote-Host
address 10.0.0.1
use linux-server
contact_groups admins
notification_interval 0
notification_period 24x7
notifications_enabled 1
register 1
}
Add a command definition for the remote execution of check_file_ages_in_dirs:
define command {
command_name check_file_ages_in_dirs
command_line $USER1$/check_nrpe -H $HOSTADDRESS$ -c check_file_ages_in_dirs
register 1
}
Add a service definition that will reference the check_file_ages_in_dirs command:
define service {
service_description check_file_ages_in_dirs
use generic-service
check_command check_file_ages_in_dirs
host_name Remote-Host
contact_groups admins
notification_interval 0
notification_period 24x7
notifications_enabled 1
flap_detection_enabled 1
register 1
}
Save and exit your nagios.cfg file.
Validate your Nagios configuration file:
nagios -v /usr/local/nagios/etc/nagios.cfg
If no errors are reported, restart your Nagios service.
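For example (a sketch; the service name varies, with nagios3 common for the Ubuntu package and nagios for a source install):
sudo service nagios restart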
Check the Nagios Web UI, and you should see your check_file_ages_in_dirs service monitoring your remote host.

Vagrant Up by Non-Sudo Vagrant User fails

I created a new non-sudo user (user1) in a Vagrant box (Ubuntu 12.04) and added the insecure public key to user1's authorized_keys file. In the Vagrantfile, I set the SSH user to "user1":
config.ssh.username = "user1"
Now vagrant up is failing with following error message:
The following SSH command responded with a non-zero exit status.
Vagrant assumes that this means the command failed!
mkdir -p /vagrant
Stdout from the command:
Stderr from the command:
sudo: no tty present and no askpass program specified
Sorry, try again.
sudo: no tty present and no askpass program specified
Sorry, try again.
sudo: no tty present and no askpass program specified
Sorry, try again.
sudo: 3 incorrect password attempts
But if I set the default user to one with sudo rights, then vagrant up succeeds.
Can anyone help me with the changes I need to make to enable vagrant up for non-sudo users?
Vagrant requires root/sudo permissions on the VM for almost all of its operations, like configuring the networking, mounting shared folders, running provisioners, etc. So you wouldn't get a very useful VM without sudo even if you managed to avoid it.
Note that you only need sudo access on the guest. Vagrant commands itself can (and should) be run as a non-root user on the host.
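If you do need to connect as user1, the usual approach is to give that user passwordless sudo inside the box, e.g. with a drop-in like this (a sketch; create it with visudo -f while building the base box):
# /etc/sudoers.d/user1
user1 ALL=(ALL) NOPASSWD: ALL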

Are independent instances of PostgreSQL possible?

I want to install PostgreSQL for use as the backend to a Windows application.
This seems to be no problem if PostgreSQL is not already installed on the system.
If PostgreSQL is already installed, then unless the command-line parameters contain the superuser password etc. of the existing installation, the install fails.
As I will likely never know the superuser password or other account details of any pre-existing PostgreSQL instances, and the machine owners may not either, this will frustrate any attempt to install PostgreSQL in such a situation.
I believe it is possible to install completely independent instances of SQL Server, but is this possible with PostgreSQL?
BTW:
If the command line does contain the correct superuser password, the install just seems to overwrite the existing installation and ignores parameters like --prefix.
I used initdb to create a new database cluster before doing a second install, but this new cluster was ignored?
In general you can have multiple independent instances of PostgreSQL. Strictly speaking, each one is a separate database cluster with its own:
data directory
configuration (e.g. postgresql.conf, pg_hba.conf)
listening TCP port (default 5432, incremented for each additional cluster)
owner user and superuser role
locale and default encoding
log file
postmaster server process (on Windows postgres.exe)
A well-done example is Debian, with its easy-to-use postgresql-common infrastructure (pg_ctlcluster, pg_lsclusters, pg_createcluster, pg_dropcluster, integrated SSL, log rotation and so on).
EDIT:
I found it's rather easy to install a second, third, etc. instance of the same PostgreSQL version under Windows with EnterpriseDB's installer, with no need to use initdb and pg_ctl (assuming a 64-bit installation; you probably need to use Program Files (x86) for a 32-bit installation):
Open cmd with admin privileges (Run as Administrator)
Execute: cd "C:\Program Files\PostgreSQL\9.0\installer\server"
Create new database cluster (press Enter on every step): initcluster.vbs postgres postgres 12345 "C:\Program Files\PostgreSQL\9.0" "C:\Program Files\PostgreSQL\9.0\data2" 5433 DEFAULT
Register as Windows Service: startupcfg.vbs 9.0 postgres 12345 "C:\Program Files\PostgreSQL\9.0" "C:\Program Files\PostgreSQL\9.0\data2" postgresql-x64-9.0-2
Run the newly created service postgresql-x64-9.0-2 using services.msc and you have a second server.
Change 12345 to the password you specified during PostgreSQL installation. You don't have to use the data2 directory; use whatever you like (but of course not an existing data directory).
On Windows 7 I had success following these steps. You'll need the PsExec.exe utility, available in the Sysinternals Suite. I assume here that the path to the Sysinternals Suite and the path to the bin folder of your existing PostgreSQL installation are in your PATH environment variable.
Open a cmd.exe window and enter the following command to open a prompt as the Network Service account.
psexec -i -u "nt authority\network service" cmd.exe
The Network Service account won't have access to your PATH, so cd 'C:\PostgreSQL\9.3\bin' and then enter the following command to initialize a data directory for your new instance. I've called mine "data2". It doesn't have to be in the postgres directory, but that's where the default data directory goes, so it's a reasonable choice.
initdb "C:\PostgreSQL\9.3\data2"
Edit C:\PostgreSQL\9.3\data2\postgresql.conf so that port = 5433 (the default instance uses 5432, and you shouldn't have two instances on the same port)
Leave the Network Service cmd prompt and in your standard prompt enter the following command to register the new service. Here I've named my new instance "pg_test"
pg_ctl register -N pg_test -U "nt authority\network service" -D "C:\PostgreSQL\9.3\data2"
Run the following command to start the service.
net start pg_test
The database owner role will be 'YOURMACHINENAME$'. If you want to change this to the standard 'postgres', you have to first create a new super user role that can rename the owner. From the command prompt, enter the following to create this super user.
createuser -s -r -l -i -P -h localhost -p 5433 -U YOURMACHINENAME$ mysuperuser
Finally, connect to the server with psql (psql -U mysuperuser -h localhost -p 5433 postgres) and enter the following commands to rename your database owner and add a password.
ALTER USER "YOURMACHINENAME$" RENAME TO postgres;
ALTER USER postgres WITH PASSWORD 'yourpassword';
Something like this should work (if not, it's probably a bug):
postgresql-9.0.4-1-windows_x64.exe ^
--mode unattended ^
--prefix c:\postgres\9.0-second ^
--servicename postgresql-x64-9.0-second ^
--serviceaccount postgres2 ^
--servicepassword <password> ^
--serverport 5433 ^
--superaccount postgres ^
--superpassword <password>
EDIT: after a couple of tests, I believe it's not possible to create different PostgreSQL instances of the same version using the one-click installer. Sorry.
OTOH you could always play with initdb and pg_ctl and use the existing installation to create a new instance. It would not be as easy as just starting the installer, but it's doable.
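A sketch of that initdb/pg_ctl route on Windows (paths, port, and service name are placeholders; run from an account that can write the new data directory):
cd "C:\Program Files\PostgreSQL\9.0\bin"
initdb -U postgres -E UTF8 -D "C:\pgdata2"
rem edit C:\pgdata2\postgresql.conf and set port = 5433, since the default instance already uses 5432
pg_ctl -D "C:\pgdata2" -l "C:\pgdata2\server.log" start
rem optionally register it as a Windows service:
pg_ctl register -N postgresql-9.0-second -D "C:\pgdata2"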
