How to see disk space history in Nagios Core

I stood up Nagios Core for my employer and it's working out great out of the box. I'm monitoring most of our drives, services, and Windows servers using check_nt.
I'm just trying to see the disk space history for a certain drive. I know that the availability report shows changes in disk space usage, but I'm hoping to get the data in a more usable format, such as a CSV file or a graph. Is this possible?
Thanks!

Yeah, it's definitely possible. I use something called PNP4Nagios to generate graphs that display the full history, but there are other tools with loads of documentation available online.
Another option is NagiosGraph (or a combination of Graphite and Grafana if you're feeling brave).
PNP4Nagios - http://docs.pnp4nagios.org/pnp-0.6/install
NagiosGraph - http://www.linuxfunda.com/2013/04/02/steps-to-configure-nagiosgraph-with-nagios-core/
Drop me a message if you're having trouble with any of the above.
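If you just want a CSV rather than graphs, you can also scrape Nagios's own perfdata log. Here is a minimal Python sketch, assuming you have process_performance_data enabled and a tab-separated service_perfdata_file_template; the file path, service description, and field positions are assumptions to adapt to your setup:

    #!/usr/bin/env python3
    """Rough sketch: extract one service's history from Nagios's perfdata log
    into a CSV file. Assumes process_performance_data=1 and a tab-separated
    service_perfdata_file_template; the path, service description, and field
    positions below are hypothetical and must match your own template."""
    import csv

    PERFDATA_FILE = "/usr/local/nagios/var/service-perfdata"  # assumed path
    SERVICE = "C: Drive Space"                                # assumed name

    with open(PERFDATA_FILE) as src, open("disk_history.csv", "w", newline="") as dst:
        writer = csv.writer(dst)
        writer.writerow(["timestamp", "host", "service", "perfdata"])
        for line in src:
            fields = line.rstrip("\n").split("\t")
            if len(fields) < 4:
                continue  # line doesn't match the expected template
            timet, host, service, perfdata = fields[:4]
            if service == SERVICE:
                writer.writerow([timet, host, service, perfdata])

PNP4Nagios consumes the same perfdata stream, so enabling it now costs you nothing later.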

Adobe Experience Manager Workbench check out/in issue

In short: I have Windows Server 2012 R2, AEM Forms (6.2), SQL Server (2014), and Workbench (6.2) on the same server. When I first installed and configured everything, I could check my applications out and in from Workbench successfully. However, after my software team executed some scripts against the database, we could no longer check in/out from Workbench. The worst thing is that when I click check out, Workbench gives no error and no log entry, neither in the event log nor in the server application; it does nothing and doesn't perform my transaction. I have seen on forums that some people have the same issue, but nobody has written a solution.
If anyone knows the solution, please share it with us. What's wrong with my Workbench, and what can I do to fix this issue?
The query that your software team ran turns off security on every single LiveCycle service and makes them all run as the system user. This includes the services used by Workbench, and it is very bad: some of the services rely on knowing who is logged in to operate correctly. In particular, how can LiveCycle know who has checked a resource in or out if the service always runs as system?
Your best bet is to restore the LiveCycle database, or at least the tb_sc_service_configuration table, to the state it was in before the script was run.
If you need to remove security on individual services, you should do it through the admin console, and only for your own processes. Never do it for system services unless the Adobe documentation says it is OK.
As JeremyP pointed out, modifying the Adobe database directly is a bad idea. The database should be treated as a black box that is only manipulated by Adobe code (either by doing things in the Adobe tools or making calls to Adobe APIs).
You can either make security changes manually through the adminui (as he indicates, which is the most common way of doing it) or programmatically using the Adobe client APIs. See the following links for sample code that uses the APIs:
Removing Security - http://help.adobe.com/en_US/livecycle/10.0/ProgramLC/WS624e3cba99b79e12e69a9941333732bac8-7f35.html
Setting the runAs user - http://help.adobe.com/en_US/livecycle/10.0/ProgramLC/WS624e3cba99b79e12e69a9941333732bac8-7f38.html
My company, 4Point, offers AEM Forms consulting services. We have an in-house Apache Ant library that wraps the code above to automate this (and other) common tasks that are typically required when deploying (and redeploying) AEM Forms solutions. It can be included as part of a consulting engagement.

A plea for a basic Notebook example getting data into and out of Google Cloud Datalab

I have started trying to use Google Cloud Datalab. While I understand it is a beta product, I find the docs very frustrating, to say the least.
The questions here and the lack of responses, as well as the lack of new revisions or docs over the several months the project has been available, make me wonder whether there is any commitment to the product.
A good start would be a notebook that shows data ingestion from external sources into both the Datastore system and the BigQuery system. That is a common use case. I'd like to use my own data, and it would be great to have a notebook showing how to ingest it. It seems that should be doable without huge effort, and it would get me (and others) out of this mess of trying to link the various terse docs from various products and workspaces up and working together.
In addition, a better explanation of the GitHub connection process would help (see my prior question).
For BigQuery, see here: https://github.com/GoogleCloudPlatform/datalab/blob/master/content/datalab/tutorials/BigQuery/Importing%20and%20Exporting%20Data.ipynb
For GCS, see here: https://github.com/GoogleCloudPlatform/datalab/blob/master/content/datalab/tutorials/Storage/Storage%20Commands.ipynb
Those are the only two storage options currently supported in Datalab (and Datalab should not be used in any event for large-scale data transfers; these are for small-scale transfers that can fit in memory in the Datalab VM).
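If a code starting point helps, here is a rough sketch of small-scale ingestion using the standard google-cloud Python clients rather than the Datalab-native libraries; the bucket, project, dataset, and table names below are made up:

    # Hedged sketch: upload a local CSV to Cloud Storage, then load it into
    # BigQuery. Uses the standard google-cloud-storage / google-cloud-bigquery
    # clients, not the Datalab-native libraries; names below are made up.
    from google.cloud import bigquery, storage

    bucket_name = "my-example-bucket"             # hypothetical bucket
    table_id = "my-project.my_dataset.my_table"   # hypothetical table

    # 1. Upload the local file to Cloud Storage.
    storage.Client().bucket(bucket_name).blob("data.csv").upload_from_filename("data.csv")

    # 2. Load the GCS object into BigQuery, autodetecting the schema.
    client = bigquery.Client()
    job_config = bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.CSV,
        skip_leading_rows=1,
        autodetect=True,
    )
    client.load_table_from_uri(
        "gs://{}/data.csv".format(bucket_name), table_id, job_config=job_config
    ).result()  # block until the load job finishes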
For Git support, see https://github.com/GoogleCloudPlatform/datalab/blob/master/content/datalab/intro/Using%20Datalab%20-%20Managing%20Notebooks%20with%20Git.ipynb. Note that this has nothing to do with Github, however.
As for the low level of activity recently, that is because we have been heads down getting ready for GCP Next (which happens this coming week). Once that is over we should be able to migrate a number of new features over to Datalab and get a new public release out soon.
Datalab isn't running on your local machine; just the presentation part is in your browser. So if you mean the browser client machine, that wouldn't be a good solution: you'd be moving data from the local machine to a VM running the Datalab Python code (a VM with limited storage space), and then moving it again to the real destination. Instead, you should use the Cloud Console or (preferably) the gcloud command line on your local machine for this.

How to "explore" group of servers?

I need to check a group of servers (Unix, Linux) to find out what services and software (including versions) are running on them, check this once in a while, and store the results in a database.
The idea is to always have fresh information about the whole environment, since it's constantly changing. Perhaps you can suggest an existing solution?
Currently I am thinking about using Nagios or Cacti plus plugins, but I am not sure this solution would be optimal.
Nagios is a very powerful monitoring solution (the best, in my opinion): open source, compatible with both Linux and Windows, reporting and notifications via email/SMS, a nice interface, many, many plugins, etc. I've worked with it already and was very satisfied.
Check Nico Largo's forum for installation instructions. If you are not familiar with the Linux command line, search for FAN (Fully Automated Nagios), an .iso image with Nagios already installed.
If you have any trouble during installation or configuration, post your questions there: https://serverfault.com/
Given that you want to poll for information on the system that can change dynamically, I would look at Check_MK.
It originally started as a plugin for Nagios that would poll a server for running services and generate the necessary configs for monitoring anything it discovered. Since then, it has evolved into a complete monitoring solution that provides its own complete UI (still based on Nagios core), so you'll be on familiar ground if you already know Nagios.
See the website: http://mathias-kettner.com/checkmk_monitoring_system.html
You may need to select that you wish to view the "English" perspective of the site on first visit.
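If you end up rolling your own instead, a bare-bones poller is not much code. Here is a hedged Python sketch that SSHes into each host, lists listening services, and stores timestamped snapshots in SQLite; the host names and the remote command are assumptions, and Check_MK's service discovery does all of this for you out of the box:

    #!/usr/bin/env python3
    """Hedged sketch of a do-it-yourself inventory poller: SSH into each host,
    list listening services, and store timestamped snapshots in SQLite.
    Host names and the remote command are assumptions."""
    import sqlite3
    import subprocess
    import time

    HOSTS = ["web01", "db01"]                       # hypothetical hosts
    CMD = "ss -tlnp 2>/dev/null || netstat -tlnp"   # listening services

    db = sqlite3.connect("inventory.db")
    db.execute("""CREATE TABLE IF NOT EXISTS snapshots
                  (taken_at INTEGER, host TEXT, listing TEXT)""")

    for host in HOSTS:
        try:
            result = subprocess.run(
                ["ssh", "-o", "BatchMode=yes", host, CMD],
                capture_output=True, text=True, timeout=30,
            )
            listing = result.stdout
        except subprocess.TimeoutExpired:
            listing = ""  # host unreachable; record an empty snapshot
        db.execute("INSERT INTO snapshots VALUES (?, ?, ?)",
                   (int(time.time()), host, listing))

    db.commit()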

Simple standalone website checking tool

Background:
We run a content management platform that hosts 20+ separate websites, some intranets and some internet sites, which therefore have different endpoints routed for internal or external access.
We are currently upgrading our infrastructure, including software versions, hardware, and changes to IP/VIP/DNS entries, which affects all of the sites.
I want to be able to run a repeatable test against all sites to check that everything is working, and I'd like to do it from different endpoints (locally on each box in the cluster, at the cluster level, from the internet, from the intranet, etc.).
Does anyone know of a simple tool, requiring no software to be installed, that can run a repeatable regression test against a whole bunch of defined URLs?
I was thinking of an HTML page that I could run from different locations, essentially a link checker.
Can anyone recommend a simple way to provide a level of automatic testing of our sites (in addition to our manual verification)?
Thanks
Sounds like you're looking for Selenium: http://seleniumhq.org/
Edit
Wait, I think you mean 'testing' them as in checking to see if they're online and reachable? Then I might just automate a series of ping or telnet commands and check the appropriate things. It would take a matter of minutes to write a little app in any language to do this.
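For example, a minimal sketch in Python (standard library only, so nothing to install beyond Python itself; urls.txt is an assumed input file with one URL per line):

    #!/usr/bin/env python3
    """Minimal sketch of that 'little app': read URLs from a file and report
    which ones respond. urls.txt is an assumed input file."""
    import urllib.request

    with open("urls.txt") as f:
        urls = [line.strip() for line in f if line.strip()]

    for url in urls:
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                print("{}\tOK ({})".format(url, resp.status))
        except OSError as exc:  # URLError/HTTPError are OSError subclasses
            print("{}\tFAIL ({})".format(url, exc))

Run it from each endpoint (cluster box, intranet, internet) and compare the output.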
There is all sorts of website monitoring software available (check Google, or ask for recommendations here). That's what you're looking for. There is a whole range, from free to very expensive, that will monitor and stress your site from around the world.
Or you can write simple shell scripts that do what you want.
>> 'Testing' them as in, check to see if they're online and reachable?
Yes, that's exactly it! I was thinking the same; I could script something up, but I thought I'd check first to see if someone had already done this. I guess not!
Thanks
Doesn't fit the "requires no software to install" part, and it's not necessarily super-cheap, but we've had great results with Radar Website Monitor for this kind of thing.

Bugzilla Reporting

Is there a really good free tool for Bugzilla reporting? I am finding the default search options in the web interface far too limiting. My biggest issue is the lack of Order By options (only one field at a time, and a very limited set of fields to choose from). I have done some Google searches, but I can't find any good free Bugzilla reporting tools.
If there isn't one, can someone please point me to an example of how to access the Bugzilla web services? If I can get at the Bugzilla data, I can easily build my own reports that will better meet our needs.
Take a look at this: http://www.faqs.org/docs/bugzilla/dbdoc.html
Use this database schema for reference: faqs.org/docs/bugzilla/dbschema.html
If you need a web-interface, use your favorite dynamic website scripting language that can access MySQL databases (say PHP)...
Simple-ish Tutorial: freewebmasterhelp.com/tutorials/phpmysql/4
PHP MySQL API Reference: php.net/manual/en/ref.mysql.php
Then use SQL queries such as:
"SELECT * FROM bugs WHERE WHERE bug_status != 'RESOLVED' ORDER BY creation_ts ASC, votes DESC LIMIT 50"
which lists first 50 entries of unresolved bugs ordered first ascending creation time then descending by number of votes.
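If you prefer Python to PHP for the reporting side, a minimal sketch using the same kind of query might look like this (the mysql-connector-python package and the credentials are assumptions; the column names are from the classic Bugzilla schema linked above):

    # Hedged sketch: run the report query against Bugzilla's MySQL database.
    # Requires the mysql-connector-python package; credentials are made up.
    import mysql.connector

    conn = mysql.connector.connect(
        host="localhost", user="bugs", password="secret", database="bugs"
    )
    cur = conn.cursor()
    cur.execute(
        "SELECT bug_id, short_desc, creation_ts, votes FROM bugs"
        " WHERE bug_status != 'RESOLVED'"
        " ORDER BY creation_ts ASC, votes DESC LIMIT 50"
    )
    for bug_id, short_desc, creation_ts, votes in cur:
        print(bug_id, creation_ts, votes, short_desc)
    conn.close()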
I have used this in the past and have liked it a lot: http://www.mediawiki.org/wiki/Extension:Bugzilla_Reports
You can also consider other tools, e.g. Mantis
(http://www.mantisbt.org/).
I personally switched from Bugzilla to Mantis, installed some plugins (http://deboutv.free.fr/mantis/), and found it more comfortable.
If you are a Java user, you might want to check out Mylyn for Eclipse, which integrates a task-driven development approach into Eclipse.
With that, you can raise bugs, tie together SVN changes and bugs, and hide classes that are not relevant to fixing bugs, etc. It's a bit involved to get started with, but quite powerful.
It also comes with a connector for Bugzilla. See this introductory article for an example.
If you don't use Eclipse but you do use Java, note that since Mylyn is open source, you might want to look at the source code of the Mylyn Bugzilla connector to see how it works.
Good luck.
You can try Deskzilla (http://deskzilla.com/) - it is a multi-platform desktop client for Bugzilla with Outlook-like interface, rich reporting and filtering capabilities, offline work, drag-n-drop, etc. It's a commercial product, but if you're working on an Open Source project you can use it for free.
AFAIK Bugzilla uses a MySQL database for storing data, so you can probably connect with a visual DB manager (plenty exist; see Toad Data Modeler or DbVisualizer) and do some SQL work...
There is a list of add-ons (free and commercial) on the Bugzilla add-ons wiki.
If you are a Windows user, MyZilla is a possible option.
Otherwise, to work toward your own tool, see the Bugzilla API documentation, which covers how to retrieve the current schema (Bugzilla::DB::Schema) as well as Bugzilla::WebService.
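For the web-services route specifically, here is a rough sketch against Bugzilla's XML-RPC endpoint. Bug.search is part of the Bugzilla::WebService API in reasonably recent Bugzilla versions; the server URL and search criteria below are placeholders:

    # Hedged sketch: query Bugzilla's XML-RPC WebService. Bug.search exists in
    # reasonably recent Bugzilla versions; URL and criteria are placeholders.
    import xmlrpc.client

    proxy = xmlrpc.client.ServerProxy("https://bugzilla.example.com/xmlrpc.cgi")
    result = proxy.Bug.search({"status": "NEW", "limit": 50})
    for bug in result["bugs"]:
        print(bug["id"], bug["summary"])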
Netbeans also has Bugzilla integration (I haven't tried it...).
I have analyzed a bunch of bug-tracking tools.
You can try Trac or Mantis, because Bugzilla is very unfriendly when it comes to reporting.
Mantis
Mantis can export data to Excel; all the graphics you need can be generated from that sheet.
For more information, take a look at my blog:
http://gioorgi.com/2008/bug-tracking-mantis/
Anyway, Trac is used a lot more, so for the sake of completeness I should cite it:
Trac
Pros:
Can also work with an embedded database (SQLite).
Easy to set up and use.
Cons:
Has too many features, and aims to also be a CMS to some extent.
Take a look at:
http://gioorgi.com/2008/bug-tracking-trac/
Since Bugzilla can be installed on your own server, I presume the simplest way is to do that and play with the databases it creates ("Bugzilla supports MySQL, PostgreSQL and Oracle as database servers"). The documentation also says you can modify the templates as you like.
Otherwise one could try paid support or some other bug trackers.
I use this bookmarklet and like how it searches directly with strings entered in the location bar, like a smart search. It lets you quickly search Bugzilla or jump to a bug number via Bugzilla QuickSearch, and it is compatible with IE6+, Mozilla, and Opera 7+.
Its companions on the same page can be used to refine or help with bug search/reporting, e.g. collect buglinks (queries Bugzilla to show a list of bugs linked to from the current page) or buglinkify (turns all numbers on the page into bug links).
