Nagios nrpe plugin install on remote host - nagios

On CentOS 7, while following the NRPE plugin install steps, I got this error when testing the connection between the Nagios server and the remote agent...
/usr/local/nagios/libexec/check_nrpe -H 192.168.50.5
CHECK_NRPE: Error - Could not connect to 192.168.50.5: Connection reset by peer
In /etc/xinetd.d/nrpe, I added the Nagios server's IP address to the only_from field.
# default: off
# description: NRPE (Nagios Remote Plugin Executor)
service nrpe
{
disable = no
socket_type = stream
port = 5666
wait = no
user = nagios
group = nagios
server = /usr/local/nagios/bin/nrpe
server_args = -c /usr/local/nagios/etc/nrpe.cfg --inetd
only_from = 127.0.0.1 ::1 {server_IP}
log_on_success =
}
I then restarted the xinetd service; however, upon checking the service status this error log message appeared...
Aug 09 09:32:21 localhost.localdomain xinetd[1448]: bind failed (Address already in use (errno = 98)). service = nrpe
Aug 09 09:32:21 localhost.localdomain xinetd[1448]: Service nrpe failed to start and is deactivated.

The solution was not only to include the server IP in /etc/xinetd.d/nrpe, but also to stop the nrpe service before restarting the xinetd service.
systemctl stop nrpe
systemctl restart xinetd
It seems restarting xinetd on its own failed to load the nrpe service because its port conflicted with the standalone nrpe service that was still running.
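A quick way to confirm the port conflict and verify the fix is to check what owns TCP 5666 before and after (a sketch; the disable step is optional but keeps the standalone daemon from grabbing the port again at boot):
# Which process currently owns the NRPE port?
ss -tlnp | grep 5666
# Free the port for xinetd and restart it
systemctl stop nrpe
systemctl disable nrpe
systemctl restart xinetd
# Re-test from the Nagios server
/usr/local/nagios/libexec/check_nrpe -H 192.168.50.5
# Expected: an "NRPE vX.Y" version string instead of "Connection reset by peer"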

Related

Unable to make Remote Connection with Postgresql

I have PostgreSQL running on Ubuntu Server and I want to make a remote connection to it on port 5432.
I've checked that I can ping the public IP of the Ubuntu server from my machine, and that works fine.
Next I changed two files on the Ubuntu server. First I changed postgresql.conf, which looks as below:
#------------------------------------------------------------------------------
# CONNECTIONS AND AUTHENTICATION
#------------------------------------------------------------------------------
# - Connection Settings -
listen_addresses = '*' # what IP address(es) to listen on;
# comma-separated list of addresses;
# defaults to 'localhost'; use '*' for all
# (change requires restart)
port = 5432 # (change requires restart)
Next I added two lines to pg_hba.conf, as below:
host all all 0.0.0.0/0 trust
host all all ::/0 trust
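Both files only take effect once the server re-reads them; a minimal sketch of applying and verifying the changes, assuming the stock Ubuntu postgresql systemd unit:
# listen_addresses requires a full restart; this also picks up pg_hba.conf
sudo systemctl restart postgresql
# Confirm the server now listens on all interfaces rather than only 127.0.0.1
sudo ss -tlnp | grep 5432
# Expected: a LISTEN entry on 0.0.0.0:5432 owned by a postgres process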
Finally I checked whether the firewall is running with sudo ufw status verbose, which reported inactive.
As per my understanding, I've allowed PostgreSQL to accept remote connections and the firewall is not active, so nothing should be blocking. Still, I get the following error:
psycopg2.OperationalError: connection to server at "XXX.XXX.XXX.XXX", port 5432 failed: Connection timed out (0x0000274C/10060)
Is the server running on that host and accepting TCP/IP connections?
How can I fix this error?
Edit
I can ping and SSH to the Ubuntu server using the public IP, but I cannot telnet to it.
I checked whether port 5432 is open using an online port checker, but it turned out to be closed.
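A port that shows as closed from outside, while ufw is inactive and listen_addresses = '*' is set, usually points at filtering before the traffic reaches the box. A hedged way to narrow it down (nc is used purely as a probe; a cloud security group or upstream firewall is the usual culprit):
# From the remote client: does anything answer on 5432?
nc -vz -w 5 XXX.XXX.XXX.XXX 5432
# On the server: is PostgreSQL actually bound to a public interface?
sudo ss -tlnp | grep 5432
# If the server shows 0.0.0.0:5432 LISTENing but the client times out, the block
# is in the network path (cloud security group, provider firewall, router/NAT),
# not in postgresql.conf or pg_hba.conf.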

Unable to SSH into wireguard IP until I ping another server from inside the server

I have WireGuard set up on a machine (call it MachineA, with the IP 10.42.0.19). I have my laptop configured with the IP 10.42.0.15; call it LaptopB. I am able to SSH into MachineA from LaptopB when both peers are connected, using ssh root@MachineA. Then, if I wait a while, I can no longer SSH into MachineA from LaptopB. For example, the same command ssh root@MachineA just hangs.
Using -vvvv shows me this:
$ ssh -vvvv root@10.42.0.19
OpenSSH_8.3p1 Ubuntu-1ubuntu0.1, OpenSSL 1.1.1f 31 Mar 2020
debug1: Reading configuration data /home/xrd/.ssh/config
...
debug2: ssh_connect_direct
debug1: Connecting to 10.42.0.19 [10.42.0.19] port 22.
And, it never connects.
There is a simple fix: from inside the machine, ping any other WireGuard machine on the network. MachineA is a DigitalOcean droplet. If I use the web console to log in and then ping any other peer on the network (say 10.42.0.4), then immediately after the ping starts, the SSH connection completes.
How do I troubleshoot this?
I have not restarted WireGuard on either LaptopB or MachineA. Both appear to be connected.
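One way to see what "appear to be connected" means in WireGuard terms is the last-handshake timestamp on each peer; a quick check (the interface name wg0 is assumed):
# On both MachineA and LaptopB: when did the last handshake with the peer occur?
wg show wg0 latest-handshakes
# WireGuard is silent when idle, so an old timestamp does not mean the tunnel is
# broken - but it does mean any NAT/firewall state for the UDP flow may have
# expired in the meantime.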
In my wg0.conf on both ends they are more or less like this:
[Interface]
Address = 10.42.0.19/24
PrivateKey = DontYouWishYouHadThis
DNS = 10.42.0.1,8.8.8.8
[Peer]
PublicKey = SomePublicKeyIsHere
AllowedIPs = 10.42.0.0/24
Endpoint = 33.33.33.33:51280
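The symptom - SSH works again only after MachineA itself sends traffic out - is the classic sign of expired NAT or stateful-firewall state for the UDP flow: once that state times out, inbound WireGuard packets are dropped until fresh outbound traffic re-creates it. The usual remedy is a keepalive on the affected peer; a sketch of the addition to the [Peer] section (25 seconds is the value commonly recommended in the WireGuard docs, not something from the original config):
[Peer]
PublicKey = SomePublicKeyIsHere
AllowedIPs = 10.42.0.0/24
Endpoint = 33.33.33.33:51280
# Emit a keepalive packet every 25 s so the UDP flow never sits idle long enough
# for intermediate NAT/firewall state to expire
PersistentKeepalive = 25
Bringing the interface down and up again (wg-quick down wg0 && wg-quick up wg0) applies the change.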

Nagios monitoring ok but SSL Handshake error

Good Folks,
I have a weird situation here. My remote Linux server is monitored by Nagios just fine, but when I try to run check_nrpe -H from it I get an SSL handshake error. I don't get the same error from the Nagios server.
[code]
[root@agent1 ~]# /usr/local/nagios/libexec/check_nrpe -H master
CHECK_NRPE: Error - Could not complete SSL handshake.
[root@agent1 ~]#
[root@master ~]# /usr/local/nagios/libexec/check_nrpe -H agent1
NRPE v2.15
[root@master ~]#
[/code]
Any idea how to resolve it?
Check your NRPE configuration file and make sure your IP is in the only_from list.
nano /etc/xinetd.d/nrpe
# default: on
# description: NRPE (Nagios Remote Plugin Executor)
service nrpe
{
flags = REUSE
socket_type = stream
port = 5666
wait = no
user = nagios
group = nagios
server = /usr/local/nagios/bin/nrpe
server_args = -c /usr/local/nagios/etc/nrpe.cfg --inetd
log_on_failure += USERID
disable = no
only_from = IP1 IP2 IP3
}
The correct answer is to recompile NRPE with SSL support:
./configure --enable-ssl
The rest of the steps, as given in the general documentation, are correct.
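A sketch of that rebuild, assuming the NRPE 2.x source tree is still on the machine whose binary was built without SSL (the source directory path is a placeholder):
# Rebuild check_nrpe and the nrpe daemon with SSL support
cd /path/to/nrpe-2.15          # placeholder: wherever the NRPE tarball was unpacked
./configure --enable-ssl
make all
make install-plugin            # reinstalls /usr/local/nagios/libexec/check_nrpe
make install-daemon            # reinstalls /usr/local/nagios/bin/nrpe
# Restart the listener and re-test
systemctl restart xinetd       # or: service xinetd restart on pre-systemd systems
/usr/local/nagios/libexec/check_nrpe -H agent1
# Expected: the NRPE version string instead of the SSL handshake error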

xDebug [Errno 24] Too many open files when connecting to DBGp Proxy

I'm having issues running an xDebug session even though I've connected to the DBGp proxy successfully. I'm using both local and remote SSH tunnels: port 9000 for xdebug, and port 9001 for the xdebug DBGp client.
- The code is being debugged remotely; the xDebug server is running on an Amazon EC2 instance
- I am using Zend Studio as my local debugging client on my MacBook
- I am running a remote SSH tunnel for port 9000: "ssh ec2-user@X.X.X.X -R 9000/127.0.0.1/9000"
Up to this point I'm able to use xDebug successfully, but then I start running into issues with the proxy:
- I then run the DBGp proxy on the remote server:
./pydbgpproxy
INFO: dbgp.proxy: starting proxy listeners. appid: 20906
INFO: dbgp.proxy: dbgp listener on 127.0.0.1:9000
INFO: dbgp.proxy: IDE listener on 127.0.0.1:9001
- I then set up a local SSH tunnel for port 9001: "ssh ec2-user@X.X.X.X -L 9001/127.0.0.1/9001"
- From Zend Studio I'm able to connect successfully to the DBGp server, where "SessionName" is the name of my session:
INFO: dbgp.proxy: Server:onConnect ('127.0.0.1', 51828) [proxyinit -p 9000 -k SessionName -m 0]
- When I trigger a remote xdebug debugging session using my session name, it fails like so:
INFO: dbgp.proxy: connection from 127.0.0.1:39172 [<__main__.sessionProxy instance at 0x122e0e0>]
INFO: dbgp.proxy: connection from 127.0.0.1:39173 [<__main__.sessionProxy instance at 0x7f87980210e0>]
INFO: dbgp.proxy: connection from 127.0.0.1:39174 [<__main__.sessionProxy instance at 0x7f87980243b0>]
INFO: dbgp.proxy: connection from 127.0.0.1:39175 [<__main__.sessionProxy instance at 0x7f879814c878>]
INFO: dbgp.proxy: connection from 127.0.0.1:39176 [<__main__.sessionProxy instance at 0x7f87800a2368>]
INFO: dbgp.proxy: connection from 127.0.0.1:39177 [<__main__.sessionProxy instance at 0x123cb48>]
INFO: dbgp.proxy: connection from 127.0.0.1:39178 [<__main__.sessionProxy instance at 0x12387e8>]
INFO: dbgp.proxy: connection from 127.0.0.1:39179 [<__main__.sessionProxy instance at 0x122ec68>]
INFO: dbgp.proxy: connection from 127.0.0.1:39180 [<__main__.sessionProxy instance at 0x124fb48>]
INFO: dbgp.proxy: connection from 127.0.0.1:39181 [<__main__.sessionProxy instance at 0x7f8798047dd0>]
INFO: dbgp.proxy: connection from 127.0.0.1:39182 [<__main__.sessionProxy instance at 0x1244d88>]
ERROR: dbgp.proxy: Unable to connect to the server listener 127.0.0.1:9000 [<__main__.sessionProxy instance at 0x7f8790025878>]
Traceback (most recent call last):
File "./pydbgpproxy", line 222, in startServer
File "/usr/lib64/python2.6/socket.py", line 184, in __init__
error: [Errno 24] Too many open files
WARNING: dbgp.proxy: Unable to connect to server with key [SessionName], stopping request [<__main__.sessionProxy instance at 0x7f8790025878>]
WARNING: dbgp.proxy: Exception in _cmdloop [[Errno 104] Connection reset by peer]
INFO: dbgp.proxy: session stopped
It actually shows almost 50x more of those "INFO: dbgp.proxy: connection from 127.0.0.1:39179 [<__main__.sessionProxy instance at 0x122ec68>]" lines than what I copied and pasted here; I trimmed them for brevity.
It seems like I got it to work, but it keeps erroring out. I'm currently using the pydbgpproxy Python script, version 7, from http://code.activestate.com/komodo/remotedebugging/. I tried the version 8 script but it just errors, and I also tried pydbgpproxy version 6, but it has the same exact issue.
IN SUMMARY: xDebug is running on the server and I can connect to it normally without the proxy. With the proxy I can also connect successfully, but then running a script hits this weird error.
Does anyone know what this issue might be caused from?
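Errno 24 means the proxy process has hit its per-process open-file limit, which matches the flood of proxied connections in the log. A hedged first step is to see how many descriptors pydbgpproxy may use and is actually holding (commands assume a Linux host; the 4096 figure is just an example):
# Limit inherited by processes started from this shell
ulimit -n
# Descriptors currently held by the running proxy
ls /proc/$(pgrep -f pydbgpproxy)/fd | wc -l
# Raise the limit in the shell that launches the proxy, then restart it
ulimit -n 4096
./pydbgpproxy
# If the descriptor count keeps climbing during a single debug session, the proxy
# is leaking sockets and raising the limit only postpones the error.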

PSQLException when connecting to Postgres server via JDBC in same LAN (PGAdmin works)

I have some severe problems connecting from DBVisualizer (8.0.9) to a PostgreSQL server running in the same LAN. DBVis is Java-based and thus uses JDBC for connections. Connecting from PGAdmin works like a charm; only the DBVis connection via JDBC doesn't. And I need that solved!
Specs:
My PC: Ubuntu 12.04 LTS (64Bit), IP: 192.168.110.193
Server OS: Suse LINUX Enterprise Server 11, IP: 192.168.110.12
Postgresql server version: 9.1
Java VM: Java HotSpot(TM) 64-Bit Server VM
Java Version: 1.6.0_33
Java Vendor: Sun Microsystems Inc.
OS Name: Linux
OS Arch: amd64
OS Version: 3.2.0-25-generic
When starting a connection, I'm getting a "Connecting..." message and after ~5 minutes of waiting the following error message appears in the connection window:
"An error occurred while establishing the connection:
Long Message:
The connection attempt failed.
Details:
Type: org.postgresql.util.PSQLException
SQL State: 08001"
In the debug console I get:
12:04:57 [DEBUG pool-2-thread-8 D.ā] RootConnection: Driver.acceptsURL("jdbc:postgresql://192.168.110.12:5432/MYDATABASE")
12:04:57 [DEBUG pool-2-thread-8 D.ā] RootConnection: Driver.connect("jdbc:postgresql://192.168.110.12:5432/MYDATABASE", {user=******, password=******})
12:24:58 [DEBUG pool-2-thread-8 D.ā] RootConnection: EXCEPTION -> org.postgresql.util.PSQLException: The connection attempt failed.
The debugging information of the JDBC driver is also provided:
org.postgresql.util.PSQLException: The connection attempt failed.
at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:150)
at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:66)
at org.postgresql.jdbc2.AbstractJdbc2Connection.<init>(AbstractJdbc2Connection.java:125)
at org.postgresql.jdbc3.AbstractJdbc3Connection.<init>(AbstractJdbc3Connection.java:30)
at org.postgresql.jdbc3g.AbstractJdbc3gConnection.<init>(AbstractJdbc3gConnection.java:22)
at org.postgresql.jdbc4.AbstractJdbc4Connection.<init>(AbstractJdbc4Connection.java:30)
at org.postgresql.jdbc4.Jdbc4Connection.<init>(Jdbc4Connection.java:24)
at org.postgresql.Driver.makeConnection(Driver.java:393)
at org.postgresql.Driver.connect(Driver.java:267)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:616)
at com.onseven.dbvis.d.B.D.ā(Z:1548)
at com.onseven.dbvis.d.B.F$A.call(Z:278)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
at java.util.concurrent.FutureTask.run(FutureTask.java:166)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
at java.lang.Thread.run(Thread.java:679)
For convenience the relevant part of the server's pg_hba.conf:
#"local" is for Unix domain socket connections only
local all all peer
#IPv4 local connections:
host all all 192.168.110.0/24 md5
#IPv6 local connections:
host all all ::1/128 md5
And the relevant parts of the postgresql.conf:
# - Connection Settings -
listen_addresses = '*' # what IP address(es) to listen on;
# comma-separated list of addresses;
# defaults to 'localhost', '*' = all
# (change requires restart)
#port = 5432 # (change requires restart)
max_connections = 100 # (change requires restart)
# Note: Increasing max_connections costs ~400 bytes of shared memory per
# connection slot, plus lock space (see max_locks_per_transaction).
#superuser_reserved_connections = 3 # (change requires restart)
#unix_socket_directory = '' # (change requires restart)
#unix_socket_group = '' # (change requires restart)
#unix_socket_permissions = 0777 # begin with 0 to use octal notation
# (change requires restart)
#bonjour = off # advertise server via Bonjour
# (change requires restart)
#bonjour_name = '' # defaults to the computer name
# (change requires restart)
# - Security and Authentication -
#authentication_timeout = 1min # 1s-600s
#ssl = off # (change requires restart)
#ssl_ciphers = 'ALL:!ADH:!LOW:!EXP:!MD5:@STRENGTH' # allowed SSL ciphers
# (change requires restart)
#ssl_renegotiation_limit = 512MB # amount of data between renegotiations
#password_encryption = on
#db_user_namespace = off
# Kerberos and GSSAPI
#krb_server_keyfile = ''
#krb_srvname = 'postgres' # (Kerberos only)
#krb_caseins_users = off
# - TCP Keepalives -
# see "man 7 tcp" for details
#tcp_keepalives_idle = 0 # TCP_KEEPIDLE, in seconds;
# 0 selects the system default
#tcp_keepalives_interval = 0 # TCP_KEEPINTVL, in seconds;
# 0 selects the system default
#tcp_keepalives_count = 0 # TCP_KEEPCNT;
# 0 selects the system default
The connection is working now, but unfortunately I cannot tell exactly which steps led to the solution. Anyway, I will try to sketch them:
I installed an old version of DBVisualizer (7.1.5) and was able to successfully establish a connection to the DB server. Then I went straight back to the 8.0.9 version of DBVis and tested the connection again. Unexpectedly, the connection also worked there, even though I hadn't changed any configuration - neither in my DBVis 8.0.9 installation nor on the DB server. That's it. Maybe someone has some more clues on that issue.
The error code 08001 is a generic code indicating that the JDBC driver could not connect to the database. There can be many reasons for this.
You should enter, as the Database Server, the IP address or DNS name of the server where the database runs, and as the Database Port, the port it listens on for TCP/IP connections (5432 by default).
After you have entered this, please use the Ping Server button to see if you can reach that server and port. If you get an error message, you have either entered incorrect Database Server or Port values, there is a firewall blocking the connection, or the PostgreSQL server is not configured to accept connections from your PC. If Ping Server says everything is fine but you still cannot connect, the problem is most likely with the login credentials for the user account you specify.
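When the Ping Server button is inconclusive, the same reachability check can be made from the client machine's shell; a quick sketch (the long gap between Driver.connect and the exception in the debug log also points to a low-level TCP timeout rather than an authentication failure):
# From the machine running DBVisualizer: is the PostgreSQL port reachable at all?
nc -vz -w 5 192.168.110.12 5432
# Compare with the machine where PGAdmin connects successfully; if one host reaches
# the port and the other does not, the problem is in the network path (routing,
# host firewall, VPN), not in the JDBC driver or the server configuration.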
