I am working on a project in which my computer communicates with an Arduino board that reads sensor output, and I test it with a C plugin for Nagios (reading from a serial port after writing to it). My problem is that the status information is always null. My plugin is in "/usr/local/nagios/libexec".
In commands.cfg I added the following:
define command{
command_name arduino_temp_sensor
command_line /usr/local/nagios/libexec/essai.c
}
And in the localhost.cfg I added the following:
define service{
use generic-service
host_name localhost
service_description Temp
check_command arduino_temp_sensor
}
I'm confused about whether the output of printf should appear in the status information or not.
Thanks in advance.
It works when I remove the .c extension, so that the command points at the compiled binary rather than the source file:
define command{
command_name arduino_temp_sensor
command_line /usr/local/nagios/libexec/essai
}
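To the printf question above: yes, Nagios takes the first line the plugin writes to stdout as the Status Information, and the exit status as the service state. A minimal sketch of what the compiled plugin must do (the script name and the fixed temperature value are made up for illustration; a real plugin would read them from the serial port):

```shell
# Hypothetical stand-in for the compiled plugin: whatever is printed on
# stdout becomes the Status Information, and the exit code maps to the
# state (0=OK, 1=WARNING, 2=CRITICAL, 3=UNKNOWN).
cat > /tmp/check_arduino_temp <<'EOF'
#!/bin/sh
temp=25   # placeholder for a value read from the serial port
echo "TEMP OK - ${temp} C | temp=${temp}"
exit 0
EOF
chmod +x /tmp/check_arduino_temp
out=$(/tmp/check_arduino_temp); rc=$?
echo "$out (exit $rc)"
```

The part after the `|` is optional performance data, which tools like PNP4Nagios can graph.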
I have a BPF program (section "classifier") that I load onto an interface using the tc (traffic control) utility. My code sets the mark in __sk_buff. Later, when I try to match this mark using iptables, I observe that the mark I set has disappeared.
Code:
__section("classifier")
int function(struct __sk_buff *skb)
{
    skb->mark = 0x123;
    return TC_ACT_OK; /* let the packet continue */
}
I use the following rule in the iptables mangle table to check whether the mark is written correctly:
# iptables -t mangle -A PREROUTING -i <my_interface> \
-m mark --mark 0x123 \
-j LOG --log-prefix "MY_PRINTS" --log-level 7
The following are the tc commands I used to load my BPF program:
# tc qdisc add dev <myInterface> root handle 1: prio
# tc filter add dev <myInterface> parent 1: bpf obj bpf.o flowid 1:1 direct-action
The issue is in your tc commands. You are attaching your filter on the egress side.
The root parent refers to the egress side, which is used for traffic shaping. If instead you want to attach your filter on the ingress side, you should use something like this (no handle needed):
# tc qdisc add dev <myInterface> ingress
# tc filter add dev <myInterface> ingress bpf obj bpf.o direct-action
Or, as a better practice, use the BPF-specific qdisc clsact, which can be used to attach filters for both ingress and egress (there is not much documentation on it besides its commit log and Cilium's BPF documentation; search for clsact):
# tc qdisc add dev <myInterface> clsact
# tc filter add dev <myInterface> ingress bpf obj bpf.o direct-action
Services are up and running on the remote nodes. CLI execution returns OK, but the UI shows CRITICAL with Status Information: 'Return code of 7 is out of bounds'.
nagios-xxxxxxxx:~# /usr/lib/nagios/plugins/check_tcp -H hostname -p <port> -w 5 -c 10 -t 60
TCP OK - 0.002 second response time on hostname port XXXXXXX|time=0.001642s;5.000000;10.000000;0.000000;60.000000
Can someone help me fix it?
Nagios log:
[XXXXXXX] Warning: Return code of 7 for check of service 'XXXXXXX' on host was out of bounds.
(the same warning repeats for each affected service)
I fixed these issues. They were caused by duplicated service configs on the Nagios server (location: /etc/nagios4/objects/services/).
I cleared the duplicate service configs from that location and reloaded the Nagios service.
Issues cleared.
I reproduced this problem on my systems. I have 620 hosts and 7000 services.
When the number of services exceeds 6189, all plugins become unusable with "Return code of 7 out of bounds", even if they are just /bin/true.
The main solution is to set in nagios.cfg:
enable_environment_macros=0
I resisted doing this for a long time, because one of my plugins uses Nagios environment variables while building the HTML e-mail for notifications.
But I found a way to keep it working: set the necessary environment variables manually for that particular plugin, like this:
define command{
command_name notify-html-service
command_line NAGIOS_NOTIFICATIONTYPE='$NOTIFICATIONTYPE$' NAGIOS_SERVICEATTEMPT='$SERVICEATTEMPT$' NAGIOS_SERVICESTATE='$SERVICESTATE$' NAGIOS_CONTACTGROUPNAME='$CONTACTGROUPNAME$' NAGIOS_HOSTNAME='$HOSTNAME$' NAGIOS_SERVICEDESC='$SERVICEDESC$' NAGIOS_LONGSERVICEOUTPUT='$LONGSERVICEOUTPUT$' NAGIOS_HOSTADDRESS='$HOSTADDRESS$' NAGIOS_HOSTGROUPNAMES='$HOSTGROUPNAMES$' NAGIOS_HOSTALIAS='$HOSTALIAS$' NAGIOS_SERVICEOUTPUT='$SERVICEOUTPUT$' NAGIOS_LONGDATETIME='$LONGDATETIME$' NAGIOS_SERVICEDURATION='$SERVICEDURATION$' NAGIOS_NOTIFICATIONRECIPIENTS='$NOTIFICATIONRECIPIENTS$' NAGIOS_SERVICEGROUPALIAS='$SERVICEGROUPALIAS$' NAGIOS_HOSTALIAS='$HOSTALIAS$' NAGIOS_NOTIFICATIONAUTHOR='$NOTIFICATIONAUTHOR$' NAGIOS_NOTIFICATIONCOMMENT='$NOTIFICATIONCOMMENT$' NAGIOS_CONTACTEMAIL='$CONTACTEMAIL$' NAGIOS_SERVICEATTEMPT='$SERVICEATTEMPT$' /usr/bin/perl '$USER7$/send.notify' http://192.168.1.1/nagios 2>/tmp/send.log
}
define command{
command_name notify-html-host
command_line NAGIOS_NOTIFICATIONTYPE='$NOTIFICATIONTYPE$' NAGIOS_HOSTSTATE='$HOSTSTATE$' NAGIOS_CONTACTGROUPNAME='$CONTACTGROUPNAME$' NAGIOS_HOSTNAME='$HOSTNAME$' NAGIOS_HOSTADDRESS='$HOSTADDRESS$' NAGIOS_HOSTGROUPNAMES='$HOSTGROUPNAMES$' NAGIOS_HOSTALIAS='$HOSTALIAS$' NAGIOS_LONGDATETIME='$LONGDATETIME$' NAGIOS_NOTIFICATIONRECIPIENTS='$NOTIFICATIONRECIPIENTS$' NAGIOS_SERVICEGROUPALIAS='$SERVICEGROUPALIAS$' NAGIOS_LONGHOSTOUTPUT='$LONGHOSTOUTPUT$' NAGIOS_HOSTALIAS='$HOSTALIAS$' NAGIOS_HOSTOUTPUT='$HOSTOUTPUT$' NAGIOS_HOSTDURATION='$HOSTDURATION$' NAGIOS_NOTIFICATIONAUTHOR='$NOTIFICATIONAUTHOR$' NAGIOS_NOTIFICATIONCOMMENT='$NOTIFICATIONCOMMENT$' NAGIOS_CONTACTEMAIL='$CONTACTEMAIL$' NAGIOS_SERVICEATTEMPT='' /usr/bin/perl '$USER7$/send.notify' http://192.168.1.1/nagios 2>/tmp/send.log
}
This helped me. Initially there was one command for both notification types, with the differing host/service environment variables preset by Nagios:
define command{
command_name notify-html
command_line /usr/bin/perl $USER2$/send.notify http://192.168.1.1/nagios 2>/tmp/send.log
}
By the way, the Nagios documentation recommends against setting enable_environment_macros=1:
Enabling this is a very bad idea for anything but very small setups,
as it means plugins, notification scripts and eventhandlers may run
out of environment space. It will also cause a significant increase
in CPU- and memory usage and drastically reduce the number of checks
you can run.
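As a rough illustration of the "environment space" the documentation warns about: everything in argv plus the environment of an exec'd check must fit within the kernel's ARG_MAX limit, so hundreds of exported macros per check eat into a hard per-exec budget:

```shell
# Query the combined argv+envp size limit the kernel enforces on exec;
# checks launched with a huge macro environment push against this cap.
argmax=$(getconf ARG_MAX)
echo "ARG_MAX on this system: $argmax bytes"
```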
PS: My answer was edited because the single notify-html command had to be split into notify-html-host and notify-html-service. I started to receive wrong host notifications due to errors with macro definitions (service macros are absent in host notification events), and after tracing the Nagios debug log I saw a lot of 'WARNING: An error occurred processing macro' messages.
Good Luck.
I had this exact same issue, but it seems it was due to the number of services tied to a single Servicegroup. Once the Servicegroup had more than nine services reporting, they would return:
[XXXXXXX] Warning: Return code of 7 for check of service 'XXXXXXX' on host was out of bounds.
I reorganized my services into a few separate Servicegroups and all the checks functioned normally again without any further adjustment.
I cannot find this configuration defined anywhere in my include/configs/.h file and its includes, nor in configs/_defconfig, yet it is still defined in the .config file after configuring U-Boot. I do see this configuration referenced in tools/Makefile. Is it a default? Should I use #undef in my include/configs/.h, or CONFIG_CMD_NET=n in configs/_defconfig? Which is better?
This configuration option is described as follows:
CONFIG_CMD_NET:
Network commands.
bootp - boot image via network using BOOTP/TFTP protocol
tftpboot - boot image via network using TFTP protocol
Symbol: CMD_NET [=y]
Type : boolean
Prompt: bootp, tftpboot
Location:
-> Command line interface
-> Network commands
Defined at cmd/Kconfig:403
Selects: NET [=n]
You can disable CMD_NET using make menuconfig.
Command line interface
Network commands
[*] bootp, tftpboot
You can also hardcode your configuration in your board's config file, as suggested in the README file:
EXAMPLE: If you want all functions except for network support you can write:
#include "config_cmd_all.h"
#undef CONFIG_CMD_NET
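On newer, Kconfig-based U-Boot trees, the same effect is achieved in the defconfig instead. Note that Kconfig's canonical form for a disabled option (what savedefconfig writes back) is the "is not set" comment rather than the =n assignment:

```
# CONFIG_CMD_NET is not set
```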
I have successfully installed PNP4Nagios 6.0 and added host/service details to get graphs, but no graph is displayed.
When I try to check the configuration, I am getting the following error:
[CRIT] Command looks suspect (/bin/mv /usr/local/nagios/var/service-perfdata /usr/local/nagios/var/spool/xidpe/$TIMET$.perfdata.service)
Following are my commands.cfg and nagios.cfg configuration details. Is there anything I should change?
Commands.cfg
define command {
command_name process-host-perfdata-file-bulk
command_line /bin/mv /usr/local/nagios/var/host-perfdata /usr/local/nagios/var/spool/xidpe/$TIMET$.perfdata.host
}
define command {
command_name process-host-perfdata-file-pnp-bulk
command_line /bin/mv /usr/local/nagios/var/host-perfdata /usr/local/nagios/var/spool/perfdata/host-perfdata.$TIMET$
}
define command {
command_name process-host-perfdata-pnp-normal
command_line /usr/bin/perl /usr/local/nagios/libexec/process_perfdata.pl -d HOSTPERFDATA
}
define command {
command_name process-service-perfdata-file-bulk
command_line /bin/mv /usr/local/nagios/var/service-perfdata /usr/local/nagios/var/spool/xidpe/$TIMET$.perfdata.service
}
define command {
command_name process-service-perfdata-file-pnp-bulk
command_line /bin/mv /usr/local/nagios/var/service-perfdata /usr/local/nagios/var/spool/perfdata/service-perfdata.$TIMET$
}
define command {
command_name process-service-perfdata-pnp-normal
command_line /usr/bin/perl /usr/local/nagios/libexec/process_perfdata.pl
}
nagios.cfg
# PNP settings - bulk mode with NPCD
process_performance_data=1
# service performance data
service_perfdata_file=/usr/local/nagios/var/service-perfdata
service_perfdata_file_template=DATATYPE::SERVICEPERFDATA\tTIMET::$TIMET$\tHOSTNAME::$HOSTNAME$\tSERVICEDESC::$SERVICEDESC$\tSERVICEPERFDATA::$SERVICEPERFDATA$\tSERVICECHECKCOMMAND::$SERVICECHECKCOMMAND$\tHOSTSTATE::$HOSTSTATE$\tHOSTSTATETYPE::$HOSTSTATETYPE$\tSERVICESTATE::$SERVICESTATE$\tSERVICESTATETYPE::$SERVICESTATETYPE$\tSERVICEOUTPUT::$SERVICEOUTPUT$
service_perfdata_file_mode=a
service_perfdata_file_processing_interval=15
service_perfdata_file_processing_command=process-service-perfdata-file-bulk
# host performance data
host_perfdata_file=/usr/local/nagios/var/host-perfdata
host_perfdata_file_template=DATATYPE::HOSTPERFDATA\tTIMET::$TIMET$\tHOSTNAME::$HOSTNAME$\tHOSTPERFDATA::$HOSTPERFDATA$\tHOSTCHECKCOMMAND::$HOSTCHECKCOMMAND$\tHOSTSTATE::$HOSTSTATE$\tHOSTSTATETYPE::$HOSTSTATETYPE$\tHOSTOUTPUT::$HOSTOUTPUT$
host_perfdata_file_mode=a
host_perfdata_file_processing_interval=15
host_perfdata_file_processing_command=process-host-perfdata-file-bulk
Thanks & Regards,
Arun
Hello board, this question may be a little green, but I've been trying to set up Nagios NSCA for passive checks on a local Ubuntu box as a prototype.
For those in the know: my nsca daemon is listening on 5667 and send_nsca runs on the same Ubuntu machine (localhost, 127.0.0.1). I've been reading about and testing object definitions and service templates, but I keep getting config errors when I try to access the Nagios web interface after my modifications.
I'm hoping for clearer instructions on how to create the service (directories/configurations) to process passive checks in Nagios 3 on Ubuntu.
There are a few things to consider: firstly, that localhost is defined as a host, and secondly, that the check actually exists as it would for any other check, but with a command that doesn't actually do anything. For example, I've created a passiveservices.cfg file with services defined as follows:
define service{
use generic-service,service-pnp
host_name Server1,Server2
service_description Uptime
active_checks_enabled 1
passive_checks_enabled 1
check_command check_null
check_freshness 1
check_period none
}
define service{
use generic-service,service-pnp
host_name Server1,Server2
service_description Drive space
active_checks_enabled 1
passive_checks_enabled 1
check_command check_null
check_freshness 1
check_period none
}
Note that the check command is check_null; it's not actually doing anything, and passive_checks_enabled is 1.
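One caveat (my assumption, since the definition isn't shown above): check_null is not one of the standard plugins, so it needs its own command definition somewhere in your object config. Something as simple as this works as a do-nothing placeholder:

```
define command{
        command_name check_null
        command_line /bin/true
        }
```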
There are two lines within Nagios.cfg which you need to enable:
accept_passive_host_checks
accept_passive_service_checks
It's also a good idea to enable the following two lines as well:
check_service_freshness
check_host_freshness
If a server doesn't check in within the set freshness threshold, it'll trigger a script (in my config, it triggers an email).
Lastly, enable the following two lines:
log_external_commands
log_passive_checks
They'll help with debugging if this doesn't work. On Ubuntu the output is written to /var/log/syslog (well, it is on mine).
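Once the service definitions load cleanly, you can test the pipeline end to end by submitting a passive result to the listening nsca daemon with send_nsca, which reads tab-separated lines on stdin. The host/service names below are the ones from the example config; the send_nsca paths are typical Ubuntu locations and may differ on your system:

```shell
# Build one passive service result in send_nsca's default tab-separated
# input format: host, service description, return code, plugin output.
result=$(printf 'Server1\tUptime\t0\tOK - uptime 14 days')
printf '%s\n' "$result"
# To actually submit it (requires the nsca daemon on 127.0.0.1:5667):
#   printf '%s\n' "$result" | /usr/sbin/send_nsca -H 127.0.0.1 -p 5667 -c /etc/send_nsca.cfg
```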