My boss is starting to complain that he gets too many emails from my Icinga/Nagios instance, and so am I, to be honest. He doesn't need to know everything, and neither does development.
So what I want to do is limit the amount of email that gets sent out.
I started by removing contact_groups from hosts and instead applying them to the individual services that people care about, which is all well and good.
But for certain hosts I would like notifications to be sent out when the host goes down, but not for the services defined on it. How would I go about doing that?
TL;DR: how do I make Nagios email a user about a host going down, but not about services on that host?
For users who don't need all of the details, look at the service_notifications_enabled contact config option. Setting it to 0 gives them host notifications only. You can do ALL kinds of things to fine-tune your notifications with Nagios, so don't waste them on people who don't need them, otherwise they'll just get filtered into a mailbox that no one ever reads.
http://nagios.sourceforge.net/docs/3_0/objectdefinitions.html#contact
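For example, a contact defined roughly like this (the contact name, email address and notification periods are made up for illustration) would get host alerts but no service alerts:

    define contact {
        contact_name                    boss
        alias                           The Boss
        email                           boss@example.com
        host_notifications_enabled      1
        service_notifications_enabled   0       ; host notifications only
        host_notification_period        24x7
        service_notification_period     24x7
        host_notification_options       d,u,r   ; down, unreachable, recovery
        service_notification_options    n       ; none
        host_notification_commands      notify-host-by-email
        service_notification_commands   notify-service-by-email
    }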
Let's say there is a system containing data, where the user can view or manipulate it using the options in the system, but should not be able to copy, extract or export the data out of the system. Any bots, such as RPA tools or crawlers, should not be able to export it either. The data strictly resides in the system.
E.g. VDI (Virtual Desktop Infrastructure) does something like this. People can connect to remote machines and do their work, but cannot extract data to their local machine unless the VDI allows it. Even RPA bots are not allowed to run on the remote system; they can only run on the local machine, and building such a bot would be tedious. So VDI comes close to solving the problem above.
I am just looking for alternative, lightweight options. Please let me know if there is any solution available.
There is simply no way of stopping all information export.
A user could just take a photo of the screen and share the info.
If by exporting you mean exporting files, then simply do not allow exporting files in your program, or restrict the option. If you need to store data on disk, store it encrypted.
The best option would be to configure a machine to run only that software: on boot it launches the software fullscreen, USB autorun is denied, and something like Veyon is installed so it can be remotely controlled, with some config data on the disk but pretty much all the data on a remote server.
If you need a local cache, you can keep it encrypted.
That said, in theory a user with physical access to the RAM could retrieve that data, but it is highly unlikely.
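A minimal sketch of what keeping such a local cache encrypted might look like, assuming .NET and AES; the key handling here is deliberately simplified, and a real application would pull the key from a proper secret store rather than passing it around:

    using System.IO;
    using System.Security.Cryptography;

    // Hypothetical helper: encrypts/decrypts the local cache file with AES.
    static class CacheStore
    {
        public static void Save(string path, byte[] plaintext, byte[] key)
        {
            using var aes = Aes.Create();
            aes.Key = key;                        // 32-byte key for AES-256
            using var fs = File.Create(path);
            fs.Write(aes.IV, 0, aes.IV.Length);   // store the IV in front of the ciphertext
            using var crypto = new CryptoStream(fs, aes.CreateEncryptor(), CryptoStreamMode.Write);
            crypto.Write(plaintext, 0, plaintext.Length);
        }

        public static byte[] Load(string path, byte[] key)
        {
            using var aes = Aes.Create();
            aes.Key = key;
            using var fs = File.OpenRead(path);
            var iv = new byte[aes.IV.Length];
            int read = 0;                         // read back the IV written by Save
            while (read < iv.Length)
                read += fs.Read(iv, read, iv.Length - read);
            aes.IV = iv;
            using var crypto = new CryptoStream(fs, aes.CreateDecryptor(), CryptoStreamMode.Read);
            using var ms = new MemoryStream();
            crypto.CopyTo(ms);
            return ms.ToArray();
        }
    }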
First of all, you'll have to make SSH and FTP useless. This is to prevent scp or other FTP software from being used to move things out of the system (and vice versa): block ports 20, 21 and 22.
If possible, I'd block access to cloud storage services (via DNS/firewall), so that no one with access to the machine can upload stuff to common cloud services, or to any known address that might be a likely destination for your protected data. Make sure that online code repositories are also blocked: if the data can be stored as text, it can be transferred to GitHub/GitLab/Bitbucket as a normal repo. You can block those at the DNS level as well. Make sure that users don't have the privilege to change network settings, otherwise they can bypass your DNS blocks.
You should also prevent any kind of external storage connectivity, by disallowing your VM from connecting to the server's USB ports, or even Bluetooth if present.
That's off the top of my head... I'll edit this answer if I remember any more things to block.
So the scenario is like this...
I have a number of different users in an organization. Each has their own session of an AngularJS app running in their browser. They share an internet connection over a local LAN.
I need them to continue working together (data, notifications, etc.) even when they lose internet connectivity, i.e. server-side communication.
What is the best architecture for solving this?
Having clients communicate directly, without a server, requires peer-to-peer connections.
If your users are updating data that should be reflected in the database, then you will have to cache that data locally on the client until the server is available again. But if you want to first send that data to other peers, then you need to think carefully about which client will update the database when the server comes back up (should it be the original client that made the edit, who may not be online anymore, or the first client that re-establishes a server connection?). There is a lot to consider in your architecture.
To cope with this scenario you need the Angular service worker library, which you can read about here.
If you just want the clients/users to communicate without persisting data in the database (e.g. simple chat messages), then you don't have to worry about the above complexity.
Refer to this example, which shows how to use the simple-peer library with Angular 2.
An assisting answer (doesn't fit in a comment) was provided here: https://github.com/amark/gun/issues/506
Here it is:
Since GUN can connect to multiple peers, you can have the browser connect to both outside/external servers AND peers running on your local area network. All you have to do is npm install gun and then npm start it on a few machines within your LAN, and then hardcode/refresh/update their local IPs in the browser app (you could perhaps even use GUN to do that, by storing/syncing a table of local IPs as they update/change).
Ideally we would all use WebRTC and have our browsers connect to each other directly. This is possible, but it has a big problem: WebRTC depends upon a relay/signaling server every time the browser is refreshed. This is kinda stupid and is the browser/WebRTC's fault, not GUN's (or other P2P systems'). So you'd have to also do (1) either way.
If you are on the same computer, in the same browser, in the same browser session, it is possible to relay changes (although I didn't bother to code for this, as it is kinda useless behavior) - it wouldn't work with other machines in your LAN.
Summary: as long as you are running some local peers within your network and can access them locally, you can do "offline" sync with GUN (where "offline" refers to the external/outside network).
GUN is also offline-first in that, even if 2 machines are truly disconnected, if they make local edits while they are offline, they will sync properly when the machines eventually come back online/reconnect.
I hope this helps.
We want to build an application (C#/.NET) for the following scenario:
An internal "alert system": users should be informed about IT system outages, planned downtime for services and so on.
Only one-way: a central service will push messages to users.
We also need the ability to enable/disable a message, for example:
The message "there are problems with the mail system" should be removed from every computer once the problem is solved.
We want to schedule messages for planned maintenance.
There are about 1000 Windows clients, and we also want to "group" these clients, so we can control which clients get a given message.
My first thought was to write a small application that queries a central database every X seconds for new and existing messages.
Maybe somebody has already worked on a similar project?
Is a client that polls the database the way to go? Or would it be better to use another technology, like a WCF service?
Thanks for your help
Marc
Sounds like you need an enhanced version of push notifications.
I'd suggest using push for all the messaging; it's delivered faster and I find it more reliable. Simply make the client connect to a message server and keep the connection open. Whenever a message is supposed to be displayed to the client, have the server push it through the connection (that's where the name comes from).
To group and manage the clients you could use a database; that's probably the best way to go. But the server needs to handle all the open connections, and databases can only store DATA, not live objects representing a connection, so the server software needs to manage those separately.
My suggestion: whenever the server receives an incoming client connection, it accepts it and asks the client for an ID number, which is also used to look up that client's information in the database.
It then adds an entry to a dictionary using that ID as the key and the connection as the value.
This way, when sending a message to a given group, you can do it in one of two ways:
1) Load from the database the IDs that belong to that group, then send the message to each of them. You will have to check whether each ID exists among the dictionary's keys, because it is possible that a given client is not connected yet.
2) Iterate over the dictionary's keys, check which group each ID belongs to, and if it is the desired group, send the message.
If you're dealing with a large number of clients, I suggest you use method 1.
To disable/remove a message from the client's computer, simply have the server send a special command message that the client software interprets as "remove that message". To make this possible, every non-command message must have a unique ID, so that a command message can tell the client software which message it applies to.
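A minimal sketch of that dictionary-of-connections idea in C# (the class, the wire format and the TCP transport are assumptions for illustration; real code would also need error handling and cleanup of dead connections):

    using System.Collections.Concurrent;
    using System.Collections.Generic;
    using System.Net.Sockets;
    using System.Text;

    // Hypothetical message server core: maps client IDs to open connections
    // and sends a message to every connected member of a group (method 1 above).
    class MessageServer
    {
        // clientId -> open connection, populated when a client connects and reports its ID
        private readonly ConcurrentDictionary<int, TcpClient> _connections = new();

        public void Register(int clientId, TcpClient connection) =>
            _connections[clientId] = connection;

        // groupMembers would be loaded from the database for the target group
        public void SendToGroup(IEnumerable<int> groupMembers, string messageId, string text)
        {
            byte[] payload = Encoding.UTF8.GetBytes($"MSG|{messageId}|{text}\n");
            foreach (int id in groupMembers)
            {
                // the client may not be connected yet, so check the dictionary first
                if (_connections.TryGetValue(id, out TcpClient client) && client.Connected)
                    client.GetStream().Write(payload, 0, payload.Length);
            }
        }

        // the "remove that message" command, keyed by the unique message ID
        public void RemoveMessage(IEnumerable<int> groupMembers, string messageId)
        {
            byte[] payload = Encoding.UTF8.GetBytes($"CMD|REMOVE|{messageId}\n");
            foreach (int id in groupMembers)
                if (_connections.TryGetValue(id, out TcpClient client) && client.Connected)
                    client.GetStream().Write(payload, 0, payload.Length);
        }
    }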
Your project sounds very interesting.
I would be glad to help you by writing a library you could use, or just help you figure it out on your own if you prefer. (Free of charge, just for the experience).
One of our problems is that our outbound email server sucks sometimes. Users will trigger an email in our application, and the application can take on the order of 30 seconds to actually send it. Let's make it even worse and admit that we're not even doing this on a background thread, so the user is completely blocked during this time. SQL Server Database Mail has been proposed as a solution to this problem, since it basically implements a message queue and is physically closer and far more responsive than our third party email host. It's also admittedly really easy to implement for us, since it's just replacing one call to SmtpClient.Send with the execution of a stored procedure. Most of our application email contains PDFs, XLSs, and so forth, and I've seen the size of these attachments reach as high as 20MB.
Using Database Mail to handle all of our application email smells bad to me, but I'm having a hard time talking anyone out of it given the extremely low cost of implementation. Our production database server is way too powerful, so I'm not sure that it couldn't handle the load, either. Any ideas or safer alternatives?
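For what it's worth, even without Database Mail the user-blocking part can be fixed by handing the send off to an in-process queue drained by a background thread; a minimal sketch, assuming the existing SmtpClient-based code (the class and host name are made up):

    using System.Collections.Concurrent;
    using System.Net.Mail;
    using System.Threading;

    // Hypothetical in-process mail queue: the request just enqueues the message and
    // returns immediately; a background thread drains the queue and calls
    // SmtpClient.Send, so a slow SMTP server no longer blocks the user.
    static class MailQueue
    {
        private static readonly BlockingCollection<MailMessage> _queue = new();

        static MailQueue()
        {
            var worker = new Thread(() =>
            {
                using var smtp = new SmtpClient("smtp.example.com");   // assumed host
                foreach (MailMessage message in _queue.GetConsumingEnumerable())
                {
                    try { smtp.Send(message); }        // the slow call, now off the request path
                    catch { /* log and retry as needed; omitted for brevity */ }
                    finally { message.Dispose(); }
                }
            });
            worker.IsBackground = true;
            worker.Start();
        }

        public static void Enqueue(MailMessage message) => _queue.Add(message);
    }

Note that an in-process queue like this loses anything still queued if the process recycles, which is exactly the durability argument in favour of Database Mail or a dedicated message queue.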
All you have to do is run it through an SMTP server. If you're planning on sending large amounts of mail out, then you'll have to not only load balance the servers (and DNS servers if you're planning on sending out 100K+ mails at a time), but also make sure your outbound email servers have the proper A records registered in DNS to prevent bounce-backs.
It's a cheap solution (minus the load balancer costs).
Yes, dual-home the server between your internal LAN and the internet and make sure it's an outbound-only server. Start out with one SMTP server, and if you hit bottlenecks right off the bat, look to see if it's memory, disk, network, or load related. If it's load related, then it may be time to look at load balancing. If it's memory related, throw more memory at it. If it's disk related, throw a RAID 0+1 array at it. If it's network related, use a bigger pipe.
I am writing an application in which there is a requirement to restrict the number of logins a user can have from a single IP address (as a way to stop spam).
We can't use captcha for some reason!
The only two ways I could think of to make this work were to store, in the database, the number of requests coming in from each IP,
OR
to store a tracking cookie that holds the same information.
Now, the downside of the first approach is that there would be too much DB traffic; the application is going to be used by a ton of people.
The downside of storing this info in a cookie is that users can clear their cookies and start fresh again.
I need suggestions for a way to handle both the high DB traffic and the weak guarantees of cookie-based tracking.
You're talking about "logins" and a web application, therefore you have some sort of session persisted somewhere. When creating those sessions, you need to keep track of the number of active sessions per IP and refuse to allocate new sessions once that threshold is reached.
Without more specific information about your framework / environment, that's about the best answer anyone can provide.
Also be aware that this approach fails in numerous ways because of NAT (network address translation). For example, our office has exactly one public IP address for X hundred people. The internal network is on private IP space.
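A minimal sketch of that per-IP counter, kept in memory rather than in the database (C# here purely for illustration; the limit and names are made up, and the counts are lost if the process restarts):

    using System.Collections.Concurrent;

    // Hypothetical in-memory throttle: counts active sessions per IP so the
    // database isn't hit on every login attempt.
    class IpSessionLimiter
    {
        private readonly ConcurrentDictionary<string, int> _activePerIp = new();
        private readonly int _maxSessionsPerIp;

        public IpSessionLimiter(int maxSessionsPerIp = 5) => _maxSessionsPerIp = maxSessionsPerIp;

        // Call on login; returns false if this IP already has too many active sessions.
        public bool TryBeginSession(string ip)
        {
            int count = _activePerIp.AddOrUpdate(ip, 1, (_, current) => current + 1);
            if (count > _maxSessionsPerIp)
            {
                EndSession(ip);        // roll back the increment we just made
                return false;
            }
            return true;
        }

        // Call when a session ends (logout or timeout).
        public void EndSession(string ip) =>
            _activePerIp.AddOrUpdate(ip, 0, (_, current) => current > 0 ? current - 1 : 0);
    }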
If you want to get the IP and store it somewhere, you could use $_SERVER['REMOTE_ADDR'] to get the user's IP, add a field like "ip" to your database, and run a SQL query to check whether that IP has already been used.
There are also other ways of tracking, like Flash cookies; people usually don't know they exist, so most wouldn't know how to clear them.