Nagios - Host/Service dependency

I am implementing Nagios for a customer.
The customer has the following request:
We have a PC (PC-1) and a server (Server-1). The PC only has ping monitoring in Nagios; the server has ping monitoring but also a service check (checking whether Citrix is up).
We would like PC-1 to go to a warning state when the Citrix service on Server-1 is in a down state. The reason we want this is for reporting purposes.
I am aware of service-to-service dependencies, but I am wondering whether this service-to-host dependency would work, and how :)
Any help is welcome!
Thanks!

Yes, it can be done. See the docs on dependencies.
You don't explicitly make a service dependent on another host. Rather, you must make it dependent on a service on that host. Since services are also implicitly dependent on their hosts, this results in the behavior you want.
You can use your ping service check for this purpose, which is the only time you'd actually need a check_ping (or check_icmp) service check.
If you read the section on actual host dependencies, you'll see that a parent/child relationship is suggested instead.
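For reference, a service dependency along the lines described above might look roughly like this (a sketch only; the 'PING' and 'Citrix' service descriptions are assumptions and should match your own definitions):
define servicedependency{
    host_name                       Server-1
    service_description             Citrix
    dependent_host_name             PC-1
    dependent_service_description   PING
    execution_failure_criteria      n       ; always execute the dependent check
    notification_failure_criteria   w,u,c   ; suppress notifications while Citrix is WARNING/UNKNOWN/CRITICAL
}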

Any way to run multiple Google App Engine local instances for appengineFunctionalTest?

Background
From the docs, at https://github.com/GoogleCloudPlatform/gradle-appengine-plugin
I see that putting my functional tests in /src/functionalTests/java does the following:
Starts the local GAE instance
Runs the tests in the functionalTests directory
Stops the local instance after the tests are complete
My Issue
For my microservices, I need two local servers running for my tests. One server is responsible for a lot of auth operations, and the other microservices talk to it for some verification operations.
I've tried
appengineFunctionalTest.dependsOn ':authservice:appengineRun'
this does start the dependent server, but then it hangs and the tests don't continue. I see that I can set daemon = true and start the server on a background thread, but I can only seem to do that in isolation.
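Roughly, what I am trying to wire up looks like this (a sketch only, not a tested setup; the appengineStop task name is my assumption from the plugin's standard task list):
// authservice/build.gradle (sketch): let appengineRun return instead of blocking the build
appengine {
    daemon = true
}

// consuming service's build.gradle (sketch): start the auth dev server first, stop it afterwards
appengineFunctionalTest.dependsOn ':authservice:appengineRun'
appengineFunctionalTest.finalizedBy ':authservice:appengineStop'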
Is there a way for 'dependsOn' to also pass parameters to the dependent task? I haven't found a way to make that happen.
Or perhaps there is another way to accomplish this.
Any help appreciated

Bluemix Monitoring and Analytics: Resource Monitoring - JsonSender request error

I am having problems with the Bluemix Monitoring and Analytics service.
I have 2 applications with bindings to a single Monitoring and Analytics service. Every ~1 minute I get the following log line in both apps:
ERR [Resource Monitoring][ERROR]: JsonSender request error: Error: unsupported certificate purpose
When I remove the bindings, the log message does not appear. I also grepped my code for anything related to "JsonSender" or "Resource Monitoring" and did not find anything.
I am doing some major refactoring work on our server, which might have broken things. However, our code does not use the Monitoring service directly (we don't have a package that connects to the monitoring server or anything like that), so I would be very surprised if the problem were due to the refactoring changes. I did not check the logs before making the changes.
Any ideas will help.
Bluemix has three production environments: ng, eu-gb, and au-syd. I tested ng and eu-gb, both using two applications bound to the same M&A service, and tested with multiple instances. They all work fine.
Meanwhile, I received a similar problem report from someone who says they are using Node.js 4.2.6.
So there is some more information we need in order to identify the problem:
1. Which version of Node.js are you using (the Bluemix default or another one)?
2. Which production environment are you using (ng, eu-gb, au-syd)?
3. Are you using any environment variables in your application
(either created in code or set as USER-DEFINED variables)?
4. One more thing: could you please try deleting the M&A service and creating it again, in case we are trapped in a previous fault of M&A:
cf ds <your M&A service name>
cf cs MonitoringAndAnalytics <plan> <your M&A service name>
Node.js versions 4.4.* all appear to work.
Node.js uses OpenSSL and apparently did/does not like how one of the M&A server certificates was constructed.
Unfortunately, Node.js does not expose the OpenSSL verify-purpose API.
Please consider upgrading to 4.4 while we consider how to change the server's certificates in the least disruptive manner, as there are other application types that do not have an issue with them (e.g. Liberty and Ruby).
Setting Node.js version 4.2.4 in package.json worked for me; however, this is just a workaround. The actual fix is being handled by the core team. Thanks.
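For reference, pinning the runtime like that is done via the engines field in package.json; a minimal sketch:
{
  "engines": {
    "node": "4.2.4"
  }
}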

WCF - Launch Self Host Application on-demand

I am working on a project requiring communication between the presentation layer (an MVVM-based client) and the business layer. The application is to be installed on a single machine, and as such could have been built using a .NET Remoting based approach. However, I have been advised to use WCF, since .NET Remoting is deprecated and WCF is the way to go.
So I implemented the WCF service as a WCF service library project. I added a service reference (using the Visual Studio tool, by discovering services in the solution), which was successfully added on the client side. All works well, since during a debug session Visual Studio launches the service library and the client can connect to it successfully.
Now, since the client and service host will be installed on the same machine, I was thinking of using the named pipes transport and self-hosting for the WCF service. Here is where this gets very confusing to me:
Since the MVVM client is the "main" app (it is the application front end), the client will be launched first. I am unable to come up with a solution to launch the service host "on demand" when the client needs it.
What solutions can I use to achieve this? I am not sure about running a continuous Windows service for something that will only be required on demand.
Please suggest a clean approach to launching the service on demand.
Thanks in advance.
I'm not quite following you here. MVVM is cake and really doesn't have anything to do with your problem. Using a service-based architecture is a must today; I just want to smack every old guy/gal sticking to direct SQL on the client side who doesn't see the dangers of that.
Well, anyhow, you may solve this in different ways. WCF is BIG, too big in my opinion. One way to use it, common in small applications like apps (WP8.1, 8.1 apps and so on), is just to connect, call the method you need and then close the connection. Case solved, given that I understand your needs. The other way is to keep the session alive for each service... (ugh, load balancing etc.).
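As a rough illustration of that connect/call/close pattern over named pipes (the IBusinessService contract, the DoWork operation and the pipe address are placeholders, not something from the question):
// Sketch: open a channel, make one call, close it again (placeholder contract and address)
var factory = new ChannelFactory<IBusinessService>(
    new NetNamedPipeBinding(),
    new EndpointAddress("net.pipe://localhost/BusinessLayer"));
var proxy = factory.CreateChannel();
try
{
    var result = proxy.DoWork();           // the one call you actually need
    ((IClientChannel)proxy).Close();
    factory.Close();
}
catch (CommunicationException)
{
    ((IClientChannel)proxy).Abort();       // never Close() a faulted channel
    factory.Abort();
}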
I've been working on large LOB applications for the last 4 years myself, and what I can tell you is that if I were in charge, I would have done it very differently. For one, I would have used JSON endpoints with SSL over HTTP, and the JSON.NET serializer. The DataContract serializer, which is the default in WCF, is not good news at all. JSON allows easy communication with JavaScript-based applications, and the serializer doesn't break like porcelain the way the DataContract serializer does. The address/base address may be stored in your config file, so it can be changed upon deployment (you probably have many servers for different environments).
This is a really old post covering the subject (supporting SOAP alongside JSON): REST / SOAP endpoints for a WCF service
Don't be stupid and just call your services directly now! Use the interface (wrap it if necessary) and feed it to your view models for proper TDD! That will also allow you to completely replace WCF with another form of communication.
There are also alternatives to WCF.
IIS? Hosting WCF in IIS rather than a proper service? No way, flush that idea down the waterloo ;) (internal joke)
EDIT:
BTW: your service must of course already be running for your client to connect to it! Otherwise nothing will be listening on your configured pipe/port. If you are self-hosting, you can use command-line arguments to start in debug mode, that is, start your service like a regular application you can debug. In Program.cs:
if (args.Length > 0 && String.Equals(args[0], "debugmode", StringComparison.OrdinalIgnoreCase))
{
    Service1.Create(); // Debugging!
}
else
{
    // Hosting
    var service = new Service1 { ServiceName = "YourService" };
    ServiceBase.Run(new ServiceBase[] { service });
}
Hope it helps,
Cheers,
Stian

Nagios / Check_MK web interface host visibility

I am using Nagios with the Check_MK addon at a small ISP company I work for. I am the sole Nagios admin, but we have a few users who use the Nagios / Check_MK system (with Check_MK as the web frontend).
As most devices we use are MikroTik routers with a proprietary OS on which I cannot install the check_mk agent (I have to use SNMP), I am using Check_MK with generate_hostconf = False. I have to manually define WiFi interface checks (like signal strength) anyway, so all host configuration is done in Nagios files.
All users who use the system are listed in cgi.cfg with authorized_for_all_services=user1,user2 and authorized_for_all_hosts=user1,user2 etc.
As I was not satisfied with the current configuration (there is not enough severity-based differentiation among different hosts and service types - i.e. we want not only a backbone / not-monitored distinction, but something more fine-grained like backbone / distribution layer 1 / distribution layer 2 / not-monitored client-side), I started to change the configuration to a somewhat hackish setup with multiple contacts per real user, with different timeperiods assigned, so that e.g. 'distribution layer 2' hosts don't wake people at 3 a.m. Perhaps this is not the proper way to do it.
Anyway, here is the problem: I created new contacts, contact groups and some inventory rules. For services it seems to work fine, but apparently hosts are not visible in the Check_MK web interface (they are visible in our Nagios web interface). Most likely this is because I am logged in as the 'old' user, who is not part of the new contact group but is still supposed to see all hosts (as defined in cgi.cfg). Can I do something to make the hosts visible in the Check_MK GUI with this setup, and not only in the Nagios web interface?
I had to use check_mk --flush hostname and re-inventory with check_mk -II hostname even after changing the settings back to the previous state to make the hosts appear again.
I haven't tried adding the new contacts to .htaccess, as I don't really want to create multiple contacts with login permissions. Does Check_MK simply ignore the authorized_for_all_hosts / authorized_for_all_services directives defined in cgi.cfg in this case?
I can see that Check_MK itself is able to communicate with the hosts not shown in the GUI - I can run check_mk -II hostname or check_mk -N hostname. Appropriate entries are present in etc/check_mk.d/check_mk_objects.cfg and nagios/var/retention.dat; the hostnames are listed by check_mk --list-tag TAG etc., so most likely it is a problem with GUI user permissions only.
I know I could use notification_period directives for hosts and custom SNMP services within the Nagios configuration files, and extra_service_conf['notification_period'] in main.mk, but I am actually using that for some exceptional cases and wasn't sure about the precedence rules.
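For what it's worth, such a rule in main.mk would look roughly like this (a sketch; the 'workhours' timeperiod, the 'dist2' host tag and the service pattern are assumptions, not taken from my actual configuration):
# main.mk (sketch): notification period per service, matched by host tag and service description
extra_service_conf["notification_period"] = [
    ( "workhours", [ "dist2" ], ALL_HOSTS, [ "Signal Strength" ] ),
]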
Anyway, this is Ubuntu Server 12.04 LTS x86_64, Nagios Core 3.4.1, Check_MK 1.2.0p3.
Apparently it is enough to set default_user_role = "admin" in multisite.mk. Perhaps not the safest thing, but it gets the job done in this setup.

Application Release/Upgrade Strategy for Silverlight Business Application?

I am interested in hearing how others have addressed release management for Silverlight applications.
I have a business application that is to be released shortly, and I am concerned about how to "release" updates to this application. Typically this application's users will leave the application open all day (and potentially all night) without reloading it.
What if there is a need to release a change that includes a web service interface change? How can this be deployed without causing errors on the client side?
We have grown so used to deploying ASP.NET apps by just dropping the latest code on the server. My only idea currently involves a client version number and a periodic timer to check for updates.
I would love to know what others have done before implementing this.
Thanks,
Mike
I just answered a question on how to make sure that .xap files are not cached by the browser, which might be of some help:
Prevent Silverlight xap from being cached by proxy server
But that's no use if the users never reload your application. In my own application this is not a problem, since users are automatically thrown out whenever we deploy an update to the web service. But I like your idea with the timer; I would go with that.
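A rough sketch of such a version-check timer on the Silverlight side (the version.txt resource and the CurrentVersion constant are made up for illustration; adapt them to whatever versioning scheme you use):
// Sketch: periodically compare a server-side version marker with the one baked into the client
private const string CurrentVersion = "1.0.0";
private readonly DispatcherTimer _updateTimer =
    new DispatcherTimer { Interval = TimeSpan.FromMinutes(10) };

private void StartUpdateCheck()
{
    _updateTimer.Tick += (s, e) =>
    {
        var client = new WebClient();
        client.DownloadStringCompleted += (sender, args) =>
        {
            if (args.Error == null && args.Result.Trim() != CurrentVersion)
            {
                // Prompt the user, then reload the host page when they are ready, e.g.
                // HtmlPage.Window.Navigate(HtmlPage.Document.DocumentUri);
            }
        };
        // Cache-buster so proxies don't serve a stale version marker
        client.DownloadStringAsync(new Uri("version.txt?nocache=" + DateTime.Now.Ticks, UriKind.Relative));
    };
    _updateTimer.Start();
}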
Stating the obvious, but don't do anything to annoy your users. E.g. could they spend twenty minutes entering data, nip off to the coffee machine and return to click Submit, only to find that the timer has expired, noticed an update, and their work is lost due to a forced restart?
If so, and I admit this hasn't had a lot of thought: if e.g. you have to make changes to the web service that break the current release, could you run the new web service version side by side with the old one, so that users don't get thrown out until the timer has expired and the unit of work is complete? Or is this also stating the obvious?
For server code, i.e. endpoints, just deploy as per normal. For the xaps, I think you have a few options, depending on how you handle communications. You could have each request contain a version number and, if the server has been updated, force some code to reload the client - a bit lame and messy, but doable. A cleaner solution would be to control the client's session, which presumably goes hand in hand with requests to the backend: when you deploy a new version, you could invalidate the client session, perhaps forcing a page refresh with custom logic.
If your protocol is push-based, you could send a command to the client to do whatever you want; for many systems that are on all day, it is likely that this infrastructure already exists (if you've built it nicely :)). For instance, our service layer is abstracted away from the repositories, models and view models, so in our case we could send a logout, or perhaps a specific command to kick in some custom logic on the client, informing the user that the application is being updated and to refresh the browser when done. Our shell is lightweight, so our modules (basically other xaps) can be updated in time for the refresh.
I would recommend using a solution like the one mentioned in the App Arch Guide (see the deployment considerations in the relevant chapter):
Divide the application into logical modules that can be cached separately, and that can be replaced easily without requiring the user to download the entire application again.
Version your components.
Have you considered keeping a WCF polling duplex channel going that alerts the app when it needs to reload? In addition, you can direct your WCF calls to a virtual directory that contains 'interfaced' calls. For example:
Silverlight app hosted at "x.x.x.x\Default.aspx"
Silverlight talks to WCF at "x.x.x.x\Version2\DataPortal.svc"
DataPortal.svc talks to a GAC (or otherwise shared base) assembly that can identify which version can handle which calls.
This way, if you upgrade to "x.x.x.x\Version3\DataPortal.svc", you can still make calls against Version2, assuming those calls have code to convert them to a Version3 concept.
This helps in cases where your line of business app has dynamic xap downloading ('main', 'customer', 'inventory', etc.) and you want to release them independently.
