Nagios: How to check a service three times a day

I need a service to be checked three times a day at fixed times.
The check should run at 7, 15 and 23 hours (every 8 hours, at those fixed times).
What I have tried is defining this time period:
define timeperiod{
    timeperiod_name    three_times_a_day
    monday             07:00-07:10,15:00-15:10,23:00-23:10
    tuesday            07:00-07:10,15:00-15:10,23:00-23:10
    wednesday          07:00-07:10,15:00-15:10,23:00-23:10
    thursday           07:00-07:10,15:00-15:10,23:00-23:10
    friday             07:00-07:10,15:00-15:10,23:00-23:10
    saturday           07:00-07:10,15:00-15:10,23:00-23:10
    sunday             07:00-07:10,15:00-15:10,23:00-23:10
}
And the service (defined on several hosts) like this:
define service{
    use                  all_templates
    host_name            some_host
    service_description  some_service
    check_command        some_command
    check_period         three_times_a_day
    max_check_attempts   1
    check_interval       480    ; run every 8 hours
}
From https://assets.nagios.com/downloads/nagioscore/docs/nagioscore/4/en/timeperiods.html it says:
"When Nagios Core attempts to reschedule a host or service check, it will make sure that the next check falls within a valid time range within the defined timeperiod. If it doesn't, Nagios Core will adjust the next check time to coincide with the next "valid" time in the specified timeperiod."
But this is not happening.
When I check the Scheduling Queue, I see:
+--------------+--------------+-----------------+-----------------+
| Host         | Service      | Last Check      | Next Check      |
+--------------+--------------+-----------------+-----------------+
| some_host    | some_service | 8/12/2019 9:35  | 8/12/2019 15:01 |
| some_host_1  | some_service | 8/12/2019 7:01  | 8/12/2019 15:01 |
| some_host_2  | some_service | 8/12/2019 8:50  | 8/12/2019 15:02 |
| some_host_3  | some_service | 8/12/2019 9:30  | 8/12/2019 15:02 |
| some_host_4  | some_service | 8/12/2019 9:22  | 8/12/2019 15:02 |
| some_host_5  | some_service | 8/12/2019 7:03  | 8/12/2019 15:03 |
| some_host_6  | some_service | 8/12/2019 8:53  | 8/12/2019 15:04 |
| some_host_7  | some_service | 8/12/2019 9:58  | 8/12/2019 15:04 |
| some_host_8  | some_service | 8/12/2019 9:30  | 8/12/2019 15:04 |
| some_host_9  | some_service | 8/12/2019 7:05  | 8/12/2019 15:05 |
| some_host_10 | some_service | 8/12/2019 9:01  | 8/12/2019 15:05 |
| some_host_11 | some_service | 8/12/2019 10:02 | 8/12/2019 15:05 |
| some_host_12 | some_service | 8/12/2019 9:21  | 8/12/2019 15:05 |
| some_host_13 | some_service | 8/12/2019 7:08  | 8/12/2019 15:08 |
| some_host_14 | some_service | 8/12/2019 7:08  | 8/12/2019 15:08 |
| some_host_15 | some_service | 8/9/2019 14:49  | 8/12/2019 16:24 |
+--------------+--------------+-----------------+-----------------+
Why is the service being checked outside the time period?
Why wasn't some_host_15 checked on 8/10, 8/11 or 8/12?
How can I manage to check a service three times a day at fixed times?
Thanks!

"When Nagios Core attempts to reschedule a host or service check, it will make sure that the next check falls within a valid time range within the defined timeperiod. If it doesn't, Nagios Core will adjust the next check time to coincide with the next "valid" time in the specified timeperiod."
I was actually feeling pretty sure this wouldn't be the case, but maybe this is a bug if you're seeing a different behavior. I would expect the time periods and the check intervals to create a timing issue that would cause many checks to be dropped. Regardless of how things should work and what is/isn't expected behavior, I wouldn't personally configure it like this. Since you say that:
I need a service to be checked three times a day at fixed times.
Here's what I would do, if I were you:
I would run this check as a cron job, and send in the result of the check as a passive check command to Nagios. This way, you know for sure that the check will always run on time.
I would then configure a freshness_threshold to ensure that this passive service has actually phoned home recently.
I would also configure a check_command that prepares for the eventuality of the service not having a fresh result, i.e. something that executes only if no service check has been received -- perhaps a script that re-runs the check and notifies me somehow.
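For illustration, here is a minimal sketch of that setup. The freshness_threshold value, the service_is_stale command and the run_and_submit_check.sh wrapper are my own placeholder names and numbers, not something taken from your configuration:

define service{
    use                       all_templates
    host_name                 some_host
    service_description       some_service
    active_checks_enabled     0       ; Nagios never schedules this check itself
    passive_checks_enabled    1       ; results are pushed in from cron
    check_freshness           1       ; complain if no result arrives in time
    freshness_threshold       29400   ; seconds, a bit more than the 8 hour gap
    check_command             service_is_stale   ; only runs when the result goes stale
    max_check_attempts        1
}

# crontab of the nagios user: run the plugin at 07:00, 15:00 and 23:00 and write a
# PROCESS_SERVICE_CHECK_RESULT line to the Nagios external command file
0 7,15,23 * * * /usr/local/bin/run_and_submit_check.sh some_host some_service

With this layout the check times are controlled entirely by cron, and Nagios only has to notice when a result fails to show up.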

Related

Erlang Privilege Separation

I'm working on a security-oriented project based on Erlang. This application needs to access some parts of the system restricted to root or other privileged users. Currently, this project will only work on Unix/Linux/BSD systems and should not alter files (read-only access).
I've thought about (and tested) some of these solutions, but I don't know which one I should pick with Erlang. Which is the worst? Which is the best? Which is the easiest to maintain?
Thanks!
1 node (as root)
This solution is the worst, and I want to avoid it even on testing servers.
 _____________________________
|                             |
|           (root)            |
|  ___________      _______   |
| |           |    |       |  |
| | Erlang VM |<---| Files |  |
| |___________|    |_______|  |
|_____________________________|
Here you can see the big picture of what I currently don't want.
#!/usr/bin/env escript
main([]) ->
    ok;
main([H|T]) ->
    {ok, Data} = file:read_file(H),
    io:format("~p: ~p~n", [H, Data]),
    main(T).
Run it as root, and voilà.
su - root
${script_path}/readfile.escript /etc/shadow
1 node (as root) + 1 node (as restricted user)
I need to start 2 nodes: one running as root (or another privileged user) and another node running as a restricted user, easily accessible from the outside world. This method works pretty well but has many issues. Firstly, I can't safely connect to the privileged node with the standard Erlang distribution protocol, because connected nodes can make arbitrary remote procedure calls (the restricted node can execute arbitrary commands on the privileged node). I don't know if Erlang can actually filter RPCs before executing them. Secondly, I need to manage two nodes on one host.
 _______________              _____________________________
|               |            |                             |
|    (r_user)   |            |           (root)            |
|  ___________  |            |  ___________      _______   |
| |           | |            | |           |    |       |  |
| | Erlang VM |<===[socket]===>| Erlang VM |<---| Files |  |
| |___________| |            | |___________|    |_______|  |
|_______________|            |_____________________________|
In the following examples, I will start two Erlang shells. The first shell will be in restricted mode:
su - myuser
erl -sname restricted -cookie ${mycookie}
The second one will run with a privileged user:
su - root
erl -sname privileged -cookie ${mycookie}
Standard Erlang RPC (not secure enough)
Finally, on the restricted node (via the shell for this example), I have access to all the files:
{ok, Data} = rpc:call(privileged, file, read_file, ["/etc/shadow"]).
With "Firewall" Method
I'm using a local Unix socket in this example, supported only since OTP 19/20.
The restricted user needs to have access to this socket, stored somewhere in /var/run.
1 node (as restricted user) + external commands (with sudo)
I give the Erlang VM process the right to execute commands with sudo. I just need to execute a specific program, get its stdout and parse it. In this case, I need to use existing programs available on my system or create a new one.
 _______________             ________________________
|               |           |                        |
|    (r_user)   |           |         (root)         |
|  ___________  |           |  ______      _______   |
| |           | |           | |      |    |       |  |
| | Erlang VM |<===[stdin]===>| sudo |<---| Files |  |
| |___________| |           | |______|    |_______|  |
|_______________|           |________________________|
1 node (as restricted user) + ports (setuid)
I create a port program with the setuid flag set. This program now has the right to read the files, but I need to place it in a secure location on the server. If I want to make it dynamic, I should also define a strict protocol between the Erlang VM and this port. IMO, setuid is rarely a good answer.
 _______________             _________________________
|               |           |                         |
|    (r_user)   |           |     (root) [setuid]     |
|  ___________  |           |  _______/     _______   |
| |           | |           | |       |    |       |  |
| | Erlang VM |<===[stdin]===>| ports |<---| Files |  |
| |___________| |           | |_______|    |_______|  |
|_______________|           |_________________________|
1 node (as restricted user) + NIF
I don't think I can give specific rights to a NIF inside the Erlang VM, maybe with Capsicum or other non-portable/OS-specific kernel features.
 _______________
|               |
|    (r_user)   |
|  ___________  |
| |           | |
| | Erlang VM | |
| |___________| |
| |           | |
| |    NIF    | |
| |___________| |   _______
| |           | |  |       |
| |    ???    |<---| Files |
| |___________| |  |_______|
|_______________|
1 node (as restricted user) + 1 daemon (as root)
I can create a daemon running as root, connected to the Erlang VM with a Unix socket or some other method. This solution is a bit like ports or external commands with sudo, except that I need to manage a long-lived daemon with privileges.
 _______________             __________________________
|               |           |                          |
|    (r_user)   |           |          (root)          |
|  ___________  |           |  ________      _______   |
| |           | |           | |        |    |       |  |
| | Erlang VM |<===[socket]==>| daemon |<---| Files |  |
| |___________| |           | |________|    |_______|  |
|_______________|           |__________________________|
Custom Erlang VM
OpenSSH and a lot of other security-sensitive software runs as root and creates 2 interconnected processes communicating over pipes. When starting the Erlang VM as root, 2 processes would be spawned: one as root, and another as a restricted user. When some action requires root privileges, the restricted process sends a request to the root process and waits for its answer. I guess this is the most complex solution currently, and I don't master C and the Erlang VM internals well enough to make this work properly.
 ______________             _______________
|              |          |               |
|    (root)    |          |    (r_user)   |
|  __________  |          |  ___________  |
| |          | |          | |           | |
| | PrivProc |<===[pipe]===>| Erlang VM | |
| |__________| |          | |___________| |
|______________|          |_______________|
From a security perspective, your best option is to minimise the amount and complexity of code running with root privileges. So I would rule out all the options where you run a whole Erlang VM as root - there's simply too much code there to lock it down safely.
As long as you only need to read some files, the best option would be to write a small C program that you run from the Erlang VM with sudo. All this program has to do is to open the file for you and hand over the file descriptor to the Erlang process via a Unix socket. I used to work on a project that relied on this technique to open privileged TCP ports, and it worked like a charm. Unfortunately that wasn't an open source project, but with some googling I found this library that does exactly the same thing: https://github.com/msantos/procket
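If you'd rather roll your own helper than depend on procket, the core of the technique is just open(2) plus sendmsg(2) with an SCM_RIGHTS control message. Here is a rough sketch of the privileged side (the socket path and file name are whatever you pass on the command line; error handling is minimal):

/* fd_helper.c - open a file as root and hand the descriptor over a Unix socket.
 * Sketch only: run via sudo, e.g. "sudo ./fd_helper /var/run/app.sock /etc/shadow". */
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/uio.h>
#include <sys/socket.h>
#include <sys/un.h>

static int send_fd(int sock, int fd)
{
    struct msghdr msg = {0};
    char dummy = 'F';                          /* must carry at least one data byte */
    struct iovec iov = { &dummy, 1 };
    char cbuf[CMSG_SPACE(sizeof(int))];
    memset(cbuf, 0, sizeof(cbuf));

    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = cbuf;
    msg.msg_controllen = sizeof(cbuf);

    struct cmsghdr *cm = CMSG_FIRSTHDR(&msg);
    cm->cmsg_level = SOL_SOCKET;
    cm->cmsg_type = SCM_RIGHTS;                /* this is what transfers the descriptor */
    cm->cmsg_len = CMSG_LEN(sizeof(int));
    memcpy(CMSG_DATA(cm), &fd, sizeof(int));

    return sendmsg(sock, &msg, 0) < 0 ? -1 : 0;
}

int main(int argc, char **argv)
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s <socket path> <file>\n", argv[0]);
        return 1;
    }

    int fd = open(argv[2], O_RDONLY);          /* we run as root, so this succeeds */
    if (fd < 0) { perror("open"); return 1; }

    int sock = socket(AF_UNIX, SOCK_STREAM, 0);
    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    strncpy(addr.sun_path, argv[1], sizeof(addr.sun_path) - 1);
    if (connect(sock, (struct sockaddr *)&addr, sizeof(addr)) < 0) { perror("connect"); return 1; }

    return send_fd(sock, fd) == 0 ? 0 : 1;
}

The unprivileged Erlang node listens on that Unix socket, receives the control message (through a small NIF or port program) and then reads from the inherited descriptor, so only these few dozen lines ever run as root.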
I'd advise you to fork procket and strip it down a bit (you don't seem to need ICMP support, only regular files opened in read-only mode).
Once you have the file descriptor in the Erlang VM, you can read from it in different ways:
Using a NIF like procket:read/2 does.
Access the file descriptor as an Erlang port, see the network sniffing example in the procket docs.

camel-quartz not working as expected in karaf

I am having an issue with Apache Camel Quartz cron timers in Karaf 4.0.3. It appears that when the Quartz job executes, it is executed multiple times. The following example blueprint is loaded as part of my "mass-orchestrator" app. The Hello World output prints out immediately, multiple times. Instead, it should print out ONLY once every 2 minutes. Does anyone know what is happening here and how to correct it?
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
           xmlns:cm="http://aries.apache.org/blueprint/xmlns/blueprint-cm/v1.1.0"
           xmlns:ext="http://aries.apache.org/blueprint/xmlns/blueprint-ext/v1.1.0"
           xsi:schemaLocation="
             http://www.osgi.org/xmlns/blueprint/v1.0.0
             http://www.osgi.org/xmlns/blueprint/v1.0.0/blueprint.xsd
             http://aries.apache.org/blueprint/xmlns/blueprint-cm/v1.1.0
             http://aries.apache.org/schemas/blueprint-cm/blueprint-cm-1.1.0.xsd
             http://aries.apache.org/blueprint/xmlns/blueprint-ext/v1.1.0
             http://aries.apache.org/schemas/blueprint-ext/blueprint-ext-1.1.xsd">

  <camelContext xmlns="http://camel.apache.org/schema/blueprint" id="simple">
    <route>
      <from uri="quartz:myTimerName?cron=*+0/2+*+*+*+?" />
      <setBody>
        <simple>Hello World</simple>
      </setBody>
      <to uri="stream:out" />
    </route>
  </camelContext>

</blueprint>
I then launch Karaf clean and install the latest camel (but I was able to reproduce in many versions of camel going back to 2.12).
__ __ ____
/ //_/____ __________ _/ __/
/ ,< / __ `/ ___/ __ `/ /_
/ /| |/ /_/ / / / /_/ / __/
/_/ |_|\__,_/_/ \__,_/_/
Apache Karaf (4.0.3)
Hit '<tab>' for a list of available commands
and '[cmd] --help' for help on a specific command.
Hit '<ctrl-d>' or type 'system:shutdown' or 'logout' to shutdown Karaf.
karaf#root()> feature:repo-add camel
Adding feature url mvn:org.apache.camel.karaf/apache-camel/LATEST/xml/features
karaf#root()> feature:install camel-blueprint
karaf#root()> feature:install camel-quartz
karaf#root()> feature:install camel-stream
karaf#root()> install mvn:com.cerner.cts.oss/mass-orchestrator/1.0.0-SNAPSHOT
Bundle ID: 66
karaf#root()> list
START LEVEL 100 , List Threshold: 50
 ID | State     | Lvl | Version         | Name
---------------------------------------------------------------------------
 52 | Active    |  80 | 2.17.0.SNAPSHOT | camel-blueprint
 53 | Active    |  80 | 2.17.0.SNAPSHOT | camel-catalog
 54 | Active    |  80 | 2.17.0.SNAPSHOT | camel-commands-core
 55 | Active    |  80 | 2.17.0.SNAPSHOT | camel-core
 56 | Active    |  80 | 2.17.0.SNAPSHOT | camel-karaf-commands
 57 | Active    |  80 | 2.2.6.1         | Apache ServiceMix :: Bundles :: jaxb-impl
 58 | Active    |  80 | 3.1.4           | Stax2 API
 59 | Active    |  80 | 4.4.1           | Woodstox XML-processor
 60 | Active    |  80 | 2.17.0.SNAPSHOT | camel-quartz
 61 | Active    |  80 | 1.4             | Commons DBCP
 62 | Active    |  80 | 1.6.0           | Commons Pool
 63 | Active    |  80 | 1.1.1           | geronimo-jta_1.1_spec
 64 | Active    |  80 | 1.8.6.1         | Apache ServiceMix :: Bundles :: quartz
 65 | Active    |  80 | 2.17.0.SNAPSHOT | camel-stream
 66 | Installed |  80 | 1.0.0.SNAPSHOT  | mass-orchestrator
karaf#root()> start 66
karaf#root()> Hello World
Hello World
Hello World
Hello World
Hello World
Hello World
Hello World
<snip>
karaf#root()>
The cron should be
0+0/2+*+*+*+?
to run only once every 2nd minute. If you use * in the seconds field, it means the trigger fires every second during every 2nd minute.
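For reference, the corrected endpoint in the route would then look something like this (same timer name as in the question; the + signs are just the URI-encoded spaces of the cron expression):

<from uri="quartz:myTimerName?cron=0+0/2+*+*+*+?" />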

c - tcp - netmap: could a tun/tap interface obviate the use of netmap?

I just asked a question here: previous question
Would a TUN/TAP device avoid a netmap/PF_RING/DPDK installation?
If TUN/TAP allows bypassing the kernel, isn't it the same thing?
Or do those frameworks bring so many optimizations that they outclass a TUN-based bypass strategy?
I don't quite understand here.
Thanks
TUN/TAP interfaces are virtual network interfaces that, instead of sending and receiving packets on physical media, send and receive them from a user space program. They don't bypass the kernel, but it's common to set a TAP interface as the default interface in order to have a user space program intercept application traffic.
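To make that concrete, this is roughly how a Linux program attaches to a TAP device (tap0 is a placeholder interface name and error handling is minimal):

/* tap_open.c - minimal sketch of attaching to a TAP interface on Linux */
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <net/if.h>
#include <linux/if_tun.h>

int main(void)
{
    int fd = open("/dev/net/tun", O_RDWR);           /* the clone device */
    if (fd < 0) { perror("open"); return 1; }

    struct ifreq ifr;
    memset(&ifr, 0, sizeof(ifr));
    ifr.ifr_flags = IFF_TAP | IFF_NO_PI;              /* TAP = ethernet frames, no extra header */
    strncpy(ifr.ifr_name, "tap0", IFNAMSIZ - 1);      /* placeholder interface name */
    if (ioctl(fd, TUNSETIFF, &ifr) < 0) { perror("TUNSETIFF"); return 1; }

    /* each read() now returns one ethernet frame that the kernel routed to tap0 */
    unsigned char frame[2048];
    ssize_t n = read(fd, frame, sizeof(frame));
    printf("got a %zd byte frame\n", n);

    close(fd);
    return 0;
}

The frames still come out of the kernel's network stack; they are simply delivered to this file descriptor instead of to a physical NIC.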
The diagram below shows a typical interaction of a userspace program with a network interface and the network stack.
+--------------------------+
| Network Interface Driver |
+------------+-------------+
             |
+------------+-------------+
|       Network Stack      |
+--------+---+---+---------+
         |   |   |              Kernel Space
+------------------------------------------+
         |   |   |              User Space
         |Sockets|
         |   |   |
+--------+---+---+---------+
|  User Space Applications |
+--------------------------+
There's no way to entirely bypass the network stack in the case of a TAP interface. Userspace applications can still connect to the physical interface. The interceptor application can only intercept frames that are specifically directed to the TAP interface.
+--------------------------+     +--------------------------+
| Network Interface Driver |     |       TAP Interface      |
+------------+-------------+     +--------+----+------------+
             |                            |    |
+------------+-------------+              |    |
|       Network Stack      +--------------+    |
+---+------------------+---+                   |
    |                  |                       |   Kernel Space
+-------------------------------------------------------------------------+
    |                  |                       |   User Space
Sockets to NIC  Sockets to TAP        TAP File Descriptor
    |                  |                       |
+---+------------------+---+     +-------------+------------+
|   Normal Applications    |     | Interceptor Application  |
+--------------------------+     +--------------------------+
In the case of netmap, once the userspace application has exclusively acquired the NIC, it is up to that application to decide which frames (if any) are injected into the network stack. Therefore we can have the performance of direct packet capture and still take advantage of network stack services when we need them. Exclusive access to the NIC is not always a good thing; consider the simple scenario where the NIC has to reply to an ARP request.
+-----------------------------------------------------------+
|          Netmap Enabled Network Interface Driver          |
+-----+-----------------------------------------------------+
      |
+-----+-----+ +-----------+      +--------------------------+
| NIC Rings | | Host Ring +------+      Network Stack       |
+-----+-----+ +-----+-----+      +--------+---+---+---------+
      |             |                     |   |   |   Kernel Space
+-------------------------------------------------------------------------+
      |             |                     |   |   |   User Space
 MMAP Access   MMAP Access                |Sockets|
      |             |                     |   |   |
+-----+-------------+------+      +-------+---+---+---------+
| Interceptor Application  |      |   Normal Applications   |
+--------------------------+      +--------------------------+
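To give an idea of what that looks like in code, here is a rough sketch using netmap's bundled nm_* helper API (net/netmap_user.h); eth0 is a placeholder interface name and error handling is minimal:

/* nm_sketch.c - read frames from the NIC rings, push selected ones to the host ring */
#define NETMAP_WITH_LIBS
#include <net/netmap_user.h>
#include <poll.h>

int main(void)
{
    /* "netmap:eth0" = hardware rings, "netmap:eth0^" = host-stack (sw) rings */
    struct nm_desc *nic  = nm_open("netmap:eth0",  NULL, 0, NULL);
    struct nm_desc *host = nm_open("netmap:eth0^", NULL, 0, NULL);
    if (nic == NULL || host == NULL)
        return 1;

    struct pollfd pfd = { .fd = NETMAP_FD(nic), .events = POLLIN };
    struct nm_pkthdr hdr;

    for (;;) {
        poll(&pfd, 1, -1);
        unsigned char *buf;
        while ((buf = nm_nextpkt(nic, &hdr)) != NULL) {
            /* decide per frame: consume it here, or hand it to the kernel stack,
               e.g. ARP requests so the kernel can keep answering them */
            nm_inject(host, buf, hdr.len);
        }
    }

    /* not reached in this endless sketch */
    nm_close(host);
    nm_close(nic);
    return 0;
}

Anything the program does not inject into the host ring never reaches the kernel, which is exactly why cases like ARP need explicit handling.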
Unfortunately, DPDK doesn't support "host rings", according to http://dpdk.org/doc/guides/sample_app_ug/netmap_compatibility.html
I'm not certain about PF_RING.
Also note that in order to utilize netmap/PF_RING/DPDK you must modify, recompile or even redesign your application to match the framework.

How to create pull-down menus in C with ASCII?

Like some DOS applications, how can I display pull-down menus in C with ASCII (controllable by the arrow keys)?
like this:
+---------------------              +---------------------
| File | Edit | Help ...            | File | Edit | Help ...
+----------+----------              +------+----------+---
| New (N)  |                               | Cut (X)  |
| Open (O) |              AND              | Copy (C) |
| Save (S) |                               | Undo (U) |
+----------+                               +----------+
Which library can I use?
I would suggest using a library like ncurses. Doing this from scratch is pretty tough, but if you really want to, you'll need a lot of printf calls plus functions to write a character at a certain position.
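As a rough illustration of the ncurses route (the labels and key handling below are just a sketch, not a complete menu system):

/* menu.c - minimal drop-down menu sketch with ncurses; build with: cc menu.c -lncurses */
#include <stdio.h>
#include <ncurses.h>

int main(void)
{
    const char *items[] = { "New (N)", "Open (O)", "Save (S)" };
    const int n_items = 3;
    int selected = 0, ch;

    initscr();
    cbreak();
    noecho();
    curs_set(0);
    mvprintw(0, 0, "| File | Edit | Help |");
    refresh();

    WINDOW *menu = newwin(n_items + 2, 14, 1, 0);    /* drop-down under "File" */
    keypad(menu, TRUE);                              /* arrow keys for this window */
    box(menu, 0, 0);

    do {
        for (int i = 0; i < n_items; i++) {
            if (i == selected) wattron(menu, A_REVERSE);   /* highlight current item */
            mvwprintw(menu, i + 1, 1, " %-10s ", items[i]);
            wattroff(menu, A_REVERSE);
        }
        ch = wgetch(menu);                           /* refreshes the window, then reads a key */
        if (ch == KEY_UP   && selected > 0)           selected--;
        if (ch == KEY_DOWN && selected < n_items - 1) selected++;
    } while (ch != '\n' && ch != 'q');               /* Enter or 'q' leaves the menu */

    endwin();
    printf("highlighted entry: %s\n", items[selected]);
    return 0;
}

From there it is mostly a matter of creating one window per drop-down and switching between them on KEY_LEFT/KEY_RIGHT.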
Here you will find more ncurses alternatives for windows: NCurses-Like System for Windows

Change encoding in PostgreSQL 9.1

I have the following databases:
sudo -u postgres psql -c "\list"
                              List of databases
   Name    |  Owner   | Encoding | Collate | Ctype |   Access privileges
-----------+----------+----------+---------+-------+-----------------------
 postgres  | postgres | LATIN1   | en_US   | en_US |
 template0 | postgres | LATIN1   | en_US   | en_US | =c/postgres          +
           |          |          |         |       | postgres=CTc/postgres
 template1 | postgres | LATIN1   | en_US   | en_US | =c/postgres          +
           |          |          |         |       | postgres=CTc/postgres
(3 rows)
How can I change encoding from LATIN1 to UTF8 in the database template1 or template0?
Since you don't appear to have any actual data here, just shut down and delete the cluster (server and set of databases) and re-create it. Which operating system are you using? The standard PostgreSQL command to create a new cluster is initdb, but on Debian/Ubuntu et al. you'd typically use pg_createcluster.
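On Debian/Ubuntu that would look roughly like this (the cluster name main and version 9.1 are assumptions about your setup, and this destroys every database in the cluster, so only do it because there is no data to keep):

# stop and remove the existing LATIN1 cluster (drops all its databases!)
sudo pg_dropcluster --stop 9.1 main

# re-create it with a UTF-8 locale; template0/template1 inherit the encoding
sudo pg_createcluster --locale en_US.UTF-8 --start 9.1 main

sudo -u postgres psql -c "\list"   # should now show UTF8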
See also How do you change the character encoding of a postgres database?
Although you can try to tweak the encodings, it's not recommended. Even though I suggested it in that linked question, if you had data with LATIN1 characters here you'd need to recode them to UTF-8.
Just use:
update pg_database set encoding = pg_char_to_encoding('UTF8') where datname = 'template1';
