pnp4nagios not logging performance data for new host - nagios

We've just updated Nagios from 3.5.x to the current version (4.0.7) and subsequently added a new host for monitoring.
The new host shows as 'Down' in Nagios, and this seems to be related to the fact that pnp4nagios is not logging performance data (the individual checks for users, http etc. are all fine).
Initially there was an error that the directory
/usr/local/pnp4nagios/var/perfdata/newhost.com
(which contains the xml setup and rrd files for the new host) was missing, so I manually created this directory, but now it complains that the files are missing.
Does anyone know the appropriate steps to overcome this issue?
Thanks,
Toby
PS I'd tag this 'pnp4nagios', but that tag doesn't exist and I can't create it
UPDATE
It's possible that pnp4nagios is a red herring/symptom. Looking more closely I realise that Nagios actually believes the host is down, even though all services are up. The host status information is '(Host check timed out after 30.01 seconds)'...does this make any more sense?

It's indeed very unlikely that pnp4nagios has anything to do with your host being down. pnp just exports output and performance data to feed the RRD database and XML files (via the npcd module or an event handler command).
The fact that Nagios reports the host check timed out after 30 seconds means that:
- there is a problem with your host check command, so double-check its syntax, or
- the check command exceeds its timeout (most likely host_check_timeout, defined in nagios.cfg; see the snippet below) because the plugin is still running.
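For reference, a hedged sketch of where those timeouts live; the values below are illustrative defaults, not anything taken from this setup:
# in nagios.cfg (often /usr/local/nagios/etc/nagios.cfg) -- example values only
host_check_timeout=30
service_check_timeout=60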
I'd recommend running this command from the server's prompt. You want to do something like:
/path/to/libexec/check_command -H ipaddress -args
For example:
/usr/local/libexec/nagios/check_ping -H 192.168.1.1 -w 200,40% -c 500,80% -t 120
See if something might be hanging. Having the output would be helpful.
Once your host check returns correct output and performance data to Nagios, pnp will hopefully do the rest.

In the unlikely event it helps anyone, pnp4nagios was indeed a red herring. The problem was that ping wasn't enabled for the host being checked, and this is the test for whether a host is up or not. Hence this was failing, despite other services being reported as working.
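For anyone hitting the same thing: the 'host is up' test is whatever check_command is attached to the host object, and in the stock sample configuration that is check-host-alive, a ping-based command. A rough sketch of such a host definition (the host name comes from the perfdata directory above; the template and address are placeholders):
define host {
    use             linux-server       ; example template from the sample configs
    host_name       newhost.com
    alias           New Host
    address         192.0.2.10         ; replace with the real IP of the host
    check_command   check-host-alive   ; ping-based, so it fails if ICMP is blocked
}
If ICMP is blocked, either allow ping from the Nagios server or swap check_command for something TCP-based (e.g. a check_tcp against a port you know is open).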

Related

SQLAlchemy/pgAdmin: Error: password authentication failed for user "root"

I'm conducting a study and I need to store some data. I found an open source data scraper and parser online: https://github.com/hicsail/materials
I've followed some instructions (some, but not all, of which came from here) and
installed Postgres, created a docker-compose.yml file, and created a config file:
Above is the config file, and this is the .yml file
I started by going into the pgAdmin folder and running "docker-compose up", after which this was the result:
I'm not sure if the "no privileges flag" means anything. Anyway, after this, I opened up a localhost:5050 in my browser and logged into pgAdmin.
I named the database "materials" as this was what it was supposed to be named.
Same thing with the username and password; both were named "root". However, when I run the command to parse the data, I'm getting this error:
I've been stuck on this for a long time now, and I can't seem to find any solution. This is running in a python2.7 conda environment, as per the requirements. These were the other installed libraries (I'm not 100% sure those were the exact versions, but I tried to get them as close as possible).
psycopg2==2.7.3.1
requests>=2.20.0
SQLAlchemy==1.0.9
wheel==0.24.0
If I need to clarify anything please let me know.
Thanks.
I found an answer for this. I had to go into services, find postgres, and stop it from running. I then had to kill whatever was still listening on port 5432 and run it again.
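For anyone who wants the concrete commands, here is a hedged sketch of that fix on a Linux box; the service name and tools (systemctl, lsof) vary by system, so treat these as examples:
sudo systemctl stop postgresql    # stop the locally installed Postgres service
sudo lsof -i :5432                # check whether anything is still listening on 5432
sudo kill <pid>                   # kill the leftover process, if any (<pid> from the lsof output)
docker-compose up                 # then bring the containers back up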

Nagios check_logfiles plugin Create multiple Alerts

We currently use the Consol Labs (https://labs.consol.de/nagios/check_logfiles/) check_logfiles plugin to alert on strings found within our application logs. One thing we are having issues with is that whenever there are several alerts within a time frame, or one alert is a bit lengthy, the Nagios alert that is created only shows a small amount of the message, which requires the support staff to always connect to the systems to see what the full alert is.
Is there any way, with check_logfiles or Nagios/NRPE, to display the full log message in the Nagios alert that is created?
Thanks,
I too just started with this Nagios plugin, check_logfiles. I have gotten it to work under Unix/Linux. I can't get the plugin to work on Windows, which is what I need.
But I did see this in the documentation while I was in there:
$options A list of options which control the influence of pre- and postscript. Known options are smartpostscript, supersmartpostscript, smartprescript and supersmartprescript. With the option report=”short|long|html” you can customize the plugin’s output. With report=long/html, the plugin’s output can possibly become very long. By default it will be truncated to 4096 characters (The amount of data an unpatched Nagios is able to process). The option maxlength can be used to raise this limit, e.g. maxlength=8192. The option seekfileerror defines the errorlevel, if a seekfile cannot be written, e.g. seekfileerror=unknown (default:critical). The same applies to protocolfileerror (default: ok). Usually the last error message will be shown in the first line of the output. With preview=5 you can tell check_logfiles to show for example the last 5 hits. (default is: preview=1)
Also, I'm not completely sure that this is gospel anymore, as it looks like Nagios has done something to allow longer messages:
Functionally, NRPE can only handle a payload of 1024 bytes, which limits the amount of data that you can receive on your Nagios server.
So I really don't know. I have also seen that there is a multi-line NRPE agent capability.
Please see this article; interestingly, it appears there is a way, though it is not clear. I think your best bet would be to open a ticket on the Nagios Core support forum. I've had success with the Nagios support forum.
https://sourceforge.net/p/nagios/mailman/nagios-users/thread/C68E26BB.5E2E4%25dszmandi%40imc.net.au/#msg23143763
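For what it's worth, the report/maxlength options quoted above go into check_logfiles' Perl-style config file. A rough sketch with placeholder paths and patterns (nothing here is taken from the poster's setup):
$options = 'report=long,maxlength=8192';   # longer output, raise the 4096-char cap
$seekfilesdir = '/var/tmp/check_logfiles'; # example location for the seek/offset files
@searches = (
  {
    tag              => 'app_errors',
    logfile          => '/var/log/myapp/application.log',
    criticalpatterns => ['ERROR', 'FATAL'],
  },
);
Whether the full text actually reaches Nagios still depends on the NRPE payload limit mentioned above.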

<VirtualHost> error in Apache2.conf

I cannot fix this without asking for help. After installing LAMP via the Synaptic Package Manager and trying to set up and run LAMP, I have received:
mark@Lexington:/$ apachectl restart
/usr/sbin/apachectl: 87: ulimit: error setting limit (Operation not permitted)
apache2: Syntax error on line 237 of /etc/apache2/apache2.conf: Syntax error on line 1 of /etc/apache2/sites-enabled/example.com: /etc/apache2/sites-enabled/example.com:1: <VirtualHost> was not closed.
The /etc/apache2/apache2.conf file talks about a <VirtualHost> and I have read examples of what to put there, but I'm not able to understand what I am doing. And since the file says DON'T unless you know what you are doing, I am asking:
This is XUbuntu 12.04. I tried installing LAMP. The purpose of this is to run vnstat in a browser and see the bandwidth usage. Also, I want to "serve" an mp3 file to a weblog I keep. I don't understand why I would make an Apache error log visible in a browser. I would have little reason to see the bandwidth usage at another location. The only other reason for LAMP is I am trying to use MythTV to send the TV signal to a "smart" TV via ethernet cable.
If you can point me towards a URL or other help, I'm much obliged.
If you can give me the name of a text editor that shows line numbers so I can look at "line 237", I'll try to figure out the syntax error.
Oh, this looks easy...I think. You'll see <VirtualHost> near or at the top of the file /etc/apache2/sites-enabled/example.com (the error says line 1). Scroll down through everything. If you do not see </VirtualHost> (notice the slash), put that at the bottom of the file. Save and close, then restart Apache.
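If the block is missing more than just the closing tag, a minimal complete example looks roughly like this (the domain comes from the error message and the paths are placeholders, not your real settings):
<VirtualHost *:80>
    ServerName example.com
    DocumentRoot /var/www/example.com
    ErrorLog ${APACHE_LOG_DIR}/example.com-error.log
    CustomLog ${APACHE_LOG_DIR}/example.com-access.log combined
</VirtualHost>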
Regarding the "operation not permitted": you may have to use sudo to elevate your privileges (assuming you are on Unix, which I think you are). Research how to do this if you don't know.

cakephp: warning 512 /tmp/cache/ not writable on shared host justhost

When I go to www.merryflowers.com/webroot/ I'm getting the following warnings. Based on the guidance I got from my previous post (cakephp: configuring cakephp on shared host justhost), I right-clicked on app/tmp/ (on the remote server) and all the folders within it and set the permissions to be writable (i.e. 777). But I'm still getting the same warnings.
Since I'm using Windows 7 (chmod doesn't work), I also tried CACLS at the command prompt for the tmp folder. Since I'm not familiar with CACLS, I don't know the exact command to make tmp writable to all. Can someone please help me out? Thank you.
Warning (512): /home/aquinto1/public_html/merryflowers.com/tmp/cache/ is not writable [CORE/cake/libs/cache/file.php, line 278]
Warning (512): /models/ is not writable [CORE/cake/libs/cache/file.php, line 278]
Warning (512): /persistent/ is not writable [CORE/cake/libs/cache/file.php, line 278]
Is your site hosted locally on your Windows machine, like through XAMPP or WAMP, etc? Those are *nix paths, not Windows paths.
Did you FTP to your sites - like, with an FTP client - and change the permissions? Doing this through FTP clients isn't always 100% reliable. It looks like you changed the perms on /tmp, but they didn't cascade to /tmp/cache, etc. like you thought. Try setting them all one by one.
According to your other post - cakephp: configuring cakephp on shared host justhost - your site is set up with remote hosting. I looked at their service briefly; from the looks of it, you can probably remote (aka "shell" or "SSH") into your server and get access to the command line. A lot of webhosts provide this these days, although you may have to specifically request that they enable it for you.
On a Windows machine, you can use PuTTY to shell into your remote server: http://www.chiark.greenend.org.uk/~sgtatham/putty/
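Once you're shelled in, something like this should do it; the path is taken from the warning messages above, and you may want something tighter than 777 once it works:
cd /home/aquinto1/public_html/merryflowers.com
chmod -R 777 tmp    # -R recurses into tmp/cache, tmp/models, tmp/persistent, etc.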
HTH. :)

How do I turn SQL logging on in Postgres 8.2?

I've got the following settings in my postgres conf:
log_destination = 'stderr'
redirect_stderr = on
log_directory = '/tmp/psqlog'
log_statement = 'all'
And yet no logs are logged. What am I missing here? There is reference on the internet to a variable called "logging_collector", but when I try and set that, postgres dies on startup with a FATAL: unknown variable.
This is on MacOS 10.4.
Ta.
I believe that you need to change log_destination to "syslog" or a specific directory. Output that goes to stderr will just get tossed out. Here's the link to the doc page, but I'll see if I can find an example postgresql.conf somewhere: http://www.postgresql.org/docs/8.2/static/runtime-config-logging.html
This mailing list entry provides some info on setting up logging with syslog: http://archives.postgresql.org/pgsql-admin/2004-03/msg00381.php
Also, if you're building postgres from source, you might have better luck using an OS X package from Fink or MacPorts. Doing all of the configuration yourself can be tricky for beginners, but the packages normally give you a good base to work from.
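To tie that together: logging_collector only exists in 8.3 and later (the 8.2 equivalent is the redirect_stderr setting you already have), which is why setting it makes startup fail. The syslog route suggested above looks roughly like this in postgresql.conf; the facility and ident are example values, and syslogd itself has to be told what to do with local0 (see the mailing list link):
log_destination = 'syslog'
syslog_facility = 'LOCAL0'
syslog_ident = 'postgres'
log_statement = 'all'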
