How can I expand and create my own passive checks? - Nagios

I have a running Nagios configuration and I want to understand how passive checks work, so that I can expand it and create my own passive checks. At the moment a cron job (the ioadm check) runs on the nodes and its results show up in the Nagios GUI. It works pretty well.
I want to understand how the "ioadm check" results get to Nagios.
I hope you can help me.
regards
kinro
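For reference, passive results normally reach Nagios through its external command file (a named pipe, configured as command_file in nagios.cfg): the cron job finishes by writing a PROCESS_SERVICE_CHECK_RESULT line into it, and Nagios picks the result up and shows it in the GUI. Below is a minimal Python sketch of that last step; the file path and the host/service names are assumptions for illustration, not taken from the ioadm check itself.

    #!/usr/bin/env python3
    """Sketch: submit one passive service check result to Nagios."""
    import time

    # Adjust to your installation; this is a common default location.
    CMD_FILE = "/usr/local/nagios/var/rw/nagios.cmd"

    def submit_passive_result(host, service, state, output, cmd_file=CMD_FILE):
        """state: 0 = OK, 1 = WARNING, 2 = CRITICAL, 3 = UNKNOWN."""
        line = "[{ts}] PROCESS_SERVICE_CHECK_RESULT;{h};{s};{st};{out}\n".format(
            ts=int(time.time()), h=host, s=service, st=state, out=output)
        # The command file is a named pipe that the Nagios daemon reads.
        with open(cmd_file, "w") as f:
            f.write(line)

    if __name__ == "__main__":
        # Hypothetical host/service names, for illustration only.
        submit_passive_result("node01", "ioadm", 0, "ioadm check OK")

The matching service definition needs passive_checks_enabled set to 1; when the check runs on a remote node, the same line is usually shipped to the Nagios server with send_nsca instead of being written to the pipe directly.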

Related

How to create separate results with a single nagios check?

I've written a check for our Nagios to find offline access points. This single check tests all 400 APs in one run, which is very efficient, but there is one drawback: if some of the APs are going to be offline for a while, and I know it, there is no way to get rid of the critical error in Nagios. If I acknowledge the service until the time I expect the APs to work again, I will not see other APs failing.
Now I wonder if there is a way to check all APs in one run but create separate check results in Nagios, so that I could acknowledge only the ones I know are out of order for a while. I don't think a separate check for each AP is a solution here.
A check for each AP is the easiest and cleanest way to do this. Then you can have a meta-check that alerts on the conditions you give it.
Another approach is to use a temp file to store the current status and a config file where you pseudo-acknowledge the APs; the meta-check would then only need to compare those two files.
In our Icinga (a Nagios fork) installation we have something along these lines. I wrote a PHP front end for it; the action note link takes me to a page where I can graphically edit the parameters for the checks.
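A minimal sketch of that temp-file/pseudo-ack idea as a Nagios meta-check; the file names and formats here are assumptions (one "AP STATE" line per access point in the status file, one acknowledged AP name per line in the config file):

    #!/usr/bin/env python3
    """Sketch: meta-check that ignores pseudo-acknowledged access points."""
    import sys

    STATUS_FILE = "/var/tmp/ap_status.txt"   # written by the AP sweep: "<ap_name> <UP|DOWN>" per line
    ACK_FILE = "/etc/nagios/ap_acked.cfg"    # one pseudo-acknowledged AP name per line

    def read_status(path):
        status = {}
        with open(path) as f:
            for line in f:
                parts = line.split()
                if len(parts) == 2:
                    status[parts[0]] = parts[1]
        return status

    def read_acked(path):
        with open(path) as f:
            return {line.strip() for line in f if line.strip() and not line.startswith("#")}

    def main():
        down = [ap for ap, state in read_status(STATUS_FILE).items() if state != "UP"]
        unexpected = sorted(set(down) - read_acked(ACK_FILE))
        if unexpected:
            print("CRITICAL - %d AP(s) down and not acknowledged: %s"
                  % (len(unexpected), ", ".join(unexpected)))
            sys.exit(2)   # Nagios CRITICAL
        print("OK - %d AP(s) down, all of them acknowledged" % len(down))
        sys.exit(0)       # Nagios OK

    if __name__ == "__main__":
        main()

Only the APs that are down and not listed in the ack file raise the CRITICAL, so acknowledging one broken AP no longer hides new failures.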

How to interpret JMeter results while doing database testing

I have recently started working with JMeter, and I am using it for database stress testing. I have added the required drivers to the lib folder, my JMeter is connected to the database, and it works fine.
The problem I am facing now is how to interpret the results. I tested only one SQL statement, a SELECT on one table. Below are screenshots of the various tabs in JMeter.
This screenshot shows how many threads I am running (10) and the ramp-up time.
This screenshot shows the JDBC Connection Configuration settings, which I am also unable to understand. It would be great if someone could explain what they mean in relation to the number of threads I am running in the picture above.
This screenshot shows the result in a Summary Report, which again I am not able to interpret. What's the best way to read these results?
This screenshot shows another view of the results, which again I am not able to interpret. I was looking for how much time it takes to execute that one single SELECT, and this tab shows me a lot of information, but I am not sure what it means.
Can anyone help me understand these results? Thanks for the help.
You should use one of these listeners:
Response Time Graph
Aggregate Graph
Also look at the jmeter-plugins project:
http://code.google.com/p/jmeter-plugins/
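The Summary Report columns (Average, Min, Max, Throughput, Error %) are all derived from the per-sample elapsed times, so another way to make sense of a run is to save the results to a CSV .jtl file (with field names in the header) and compute the figures yourself. A rough Python sketch, assuming the default CSV columns and a hypothetical results.jtl file name:

    #!/usr/bin/env python3
    """Sketch: summarise a JMeter CSV result file (.jtl) per sampler label."""
    import csv
    from collections import defaultdict
    from statistics import mean, median

    def summarise(jtl_path="results.jtl"):
        elapsed = defaultdict(list)   # label -> list of response times in ms
        errors = defaultdict(int)
        with open(jtl_path, newline="") as f:
            for row in csv.DictReader(f):
                label = row["label"]
                elapsed[label].append(int(row["elapsed"]))
                if row["success"].lower() != "true":
                    errors[label] += 1
        for label, times in elapsed.items():
            times.sort()
            p90 = times[int(0.9 * (len(times) - 1))]
            print("%s: samples=%d avg=%.0fms median=%.0fms 90%%=%dms max=%dms errors=%d"
                  % (label, len(times), mean(times), median(times),
                     p90, times[-1], errors[label]))

    if __name__ == "__main__":
        summarise()

For a single SELECT sampler, the average, median and 90th percentile of the elapsed column answer the "how long does this one query take" question directly.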

How to rollback/tear down/clear the database changes after a system test runs?

I have a test method, using NUnit and Selenium, which opens a browser on our website (which is on the production server), registers a user and verifies that the registration is successful.
(I know that ideally the system tests should run on a separate test server rather than on production, but here they want to test whether the production system works!)
The problem is how to roll back the database changes made by this test. For example, the state of my database before and after running the test should be the same.
I thought of three possible options, but none of them is practical:
1) Writing SQL queries that delete from the affected tables before the test starts (SetUp) and after it runs (TearDown); this is my current approach, however:
The problem with this approach is that I have to know exactly which tables are involved in each system test, and this can quickly become very complex because a test may affect more than one table.
2) Writing transactional code.
This is not an option, since the changes are made by the website, not by the test code I write.
3) Taking a snapshot of the existing database (SQL Server 2008 R2) before each test starts, then restoring the snapshot after the test finishes.
This would be fine if we could run the tests only in a staging environment, but the tests have to run on production, and a full run may take about five minutes in total, so restoring the snapshot would be a bad idea: any real changes made during those five minutes would be lost!
Please advise: what would be the best option to solve this problem? Maybe there is a 4th option?
Thanks,
Option 4: never, ever run tests against a production server; it is a recipe for disaster (see the thousands of stories on the internet, funny only if you are not the protagonist, about how this can go horribly wrong). The right thing to do is to configure the test and production servers the same way.
There is a fifth option. If the website receives a registration for the user "WeAreTestingOutSite", it does everything except actually add the user to the database.
To be honest, as was said, there are better ways to test whether a production site is still operational than running bots that register a user.
I would recommend going with the 4th option: introduce a new feature that allows the user to be deleted, probably not by the users themselves but by the system admins (back-office users). That way you can test that a user can be registered and deleted afterwards, without caring too much about SQL scripts.
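The question's stack is NUnit/C#, but the shape of that recommendation is easy to sketch. Here it is in Python with Selenium, purely for illustration: the site URL, form field names, back-office endpoint and credentials are all invented, and the clean-up call stands in for whatever admin delete feature gets added.

    #!/usr/bin/env python3
    """Sketch: register a user through the browser, then clean up via an admin delete."""
    import uuid
    import requests
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    SITE = "https://www.example.com"            # hypothetical production URL
    ADMIN_DELETE = SITE + "/backoffice/users"   # hypothetical admin endpoint added for clean-up

    def test_registration():
        username = "systest_" + uuid.uuid4().hex[:8]   # unique, recognisably a test account
        driver = webdriver.Chrome()
        try:
            driver.get(SITE + "/register")
            driver.find_element(By.NAME, "username").send_keys(username)
            driver.find_element(By.NAME, "password").send_keys("S0me-Str0ng-Pass!")
            driver.find_element(By.NAME, "submit").click()
            assert "Registration successful" in driver.page_source
        finally:
            driver.quit()
            # Tear-down: remove the test user through the back-office feature,
            # so production data ends up in the same state as before the test.
            requests.delete("%s/%s" % (ADMIN_DELETE, username),
                            auth=("backoffice_bot", "secret"), timeout=30)

The important part is the tear-down step: whatever the test creates on production is removed through an official feature rather than ad-hoc SQL, so there is no need to track which tables the registration touched.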

Fogbugz database schema management

This is a very simple question, and maybe the man himself can provide insight on this :)
Does anyone know the pseudocode behind how Fog Creek does database schema management?
I'm running into an issue and I'm trying to figure out if I'm handling it right. I have a module that runs each time someone spins up their site; it examines their database to make sure it has the right changes in place, and if any changes are missing, the script makes the required changes.
My issue is that I was trying to tie it to the Session_Start event in Global.asax, but that seems to be rather flaky at times, and I'm trying to come up with a better approach.
For reference, I'm trying to run one web application that can respond to any number of hosts, where each host maps via a metabase to find out which database it belongs to and then makes the necessary connections.
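The application here is ASP.NET (Global.asax), but the usual pattern behind such a module is a schema-version table plus an ordered list of upgrade steps, run once per application (or per tenant database) at start-up rather than in Session_Start. A rough Python/SQL sketch of the idea; the table, column and step contents are assumptions:

    """Sketch: bring a database up to the expected schema version at start-up."""

    # Ordered upgrade steps: version number -> DDL that brings the schema to that version.
    MIGRATIONS = {
        1: "CREATE TABLE SchemaVersion (Version INT NOT NULL)",
        2: "ALTER TABLE Users ADD LastLogin DATETIME NULL",
        # 3: "...next change...",
    }

    def current_version(conn):
        cur = conn.cursor()
        try:
            cur.execute("SELECT MAX(Version) FROM SchemaVersion")
            row = cur.fetchone()
            return row[0] or 0
        except Exception:
            conn.rollback()   # version table missing: treat as a brand-new database
            return 0

    def upgrade(conn):
        version = current_version(conn)
        for target in sorted(MIGRATIONS):
            if target > version:
                cur = conn.cursor()
                cur.execute(MIGRATIONS[target])
                cur.execute("DELETE FROM SchemaVersion")
                cur.execute("INSERT INTO SchemaVersion (Version) VALUES (?)", (target,))
                conn.commit()

Hanging this off Application_Start (or the first request per tenant, guarded by a lock) instead of Session_Start would avoid many sessions racing to upgrade the same database, which may be the source of the flakiness.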
You might have more luck asking this on http://fogbugz.stackexchange.com/

Scheduling a RichCopy job

Has anyone used the timer feature of RichCopy? I have a job that works fine when I start it manually. However, when I schedule the job and click Run, the app appears to wait for the scheduled time to elapse yet never fires. Interestingly enough, when I stop the job, the copy starts.
Does anyone have experience with the RichCopy timer?
IanB
Try creating a batch file with the appropriate command-line options, then use the Windows scheduler to launch the batch file.
OMG (Bill Gates), you need to read up on security policy and the respect it has to pay to a hierarchy of upstream objects and credentials. Well, that's the MS answer and attitude...
The reality is that if you are working with server OSes, you need to understand their security and policy frameworks and how to debug them :). If your process loses the necessary file permissions or rights (two different things), you should ask: "Hot damn, why didn't I fix that in the config/setup?" People who blast the vendor/project (or even ####&$! MS) are just blinding themselves to the solution.
In most cases this kind of issue is due to Windows AD removing a local administrator's right to run a scheduled task. It is a common security setting on corporate networks (implemented with glee by domain admins to upset developers), though these days it is really a default setting. It happens because the machine updates against an upstream policy (after you've scheduled a task) and decides that, all of a sudden, it won't trust you to run it (even though it previously let you set it up). In a perfect world it wouldn't let you set it up in the first place, but that isn't how policy is applied in Windows... (####&$! MS). LOL
Wow, it only took 5 months to get an answer! (But here they are for the next person, at least!)
