I am using Selenium WebDriver with JBehave to automate tests using BDD, and I have a problem with verifying values. I need my tests not to fail immediately when an assertion does not match the expected value. Instead, I want the test to verify each value, and if at least one assertion has failed, the step should be marked as a failure.
I am using verifyEquals, which doesn't terminate the test as soon as it finds an assertion failure, but continues instead.
My problem is that if one or more values do not match the expected values, the step is not marked as a failure, and I have to go to the console to discover whether there was a mismatch.
In this case you should change your test architecture.
For any test framework that has a verify feature that doesn't stop the test, you should still end your test with an assert statement.
Verify should be used only to validate prerequisites for what the test is actually testing.
If you take, for example, a test that checks that the order amount is correct, you could go with the following:
verify name is correct
verify email is correct
assert order amount is correct
Instead of using verify, it is better to use the asserts from the JUnit library. You can even build your own soft assert by using a try/catch block and decide whether to continue the test (logging the error) or to stop it, as in the sketch below.
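For illustration, here is a minimal sketch of that pattern, assuming a hand-rolled SoftVerifier helper (the class and method names are made up for this example, not part of JUnit or JBehave): each check wraps a JUnit assert in a try/catch and records the failure instead of stopping, and a final assertAll() marks the step as failed if anything was recorded.

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.fail;

import java.util.ArrayList;
import java.util.List;

public class SoftVerifier {

    private final List<String> failures = new ArrayList<>();

    // Wrap the JUnit assert so a mismatch is logged instead of ending the test.
    public void verifyEquals(String message, Object expected, Object actual) {
        try {
            assertEquals(message, expected, actual);
        } catch (AssertionError e) {
            System.out.println("Verification failed: " + e.getMessage());
            failures.add(e.getMessage());
        }
    }

    // Call this at the end of the step: it fails once, listing every mismatch.
    public void assertAll() {
        if (!failures.isEmpty()) {
            fail(failures.size() + " verification(s) failed:\n" + String.join("\n", failures));
        }
    }
}

In a step you would call verifyEquals for each value and finish with assertAll(), so the step fails exactly once, after all values have been checked.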
My Capybara tests are designed to call page.save_screenshot if an example fails. This is configured in a config.after(:each) block in my spec_helper.
However, not all of my tests always have a window open. Long story short, some of my tests may make a few requests using rest-client and then pass/fail based on some RSpec expectations. When these types of tests fail, a browser window opens because of the call to page.save_screenshot.
Is there a way to add a conditional so that saving a screenshot is only attempted when there is a window (headless or non-headless) open?
You can probably do something like:
page.save_screenshot if page.current_window.exists?
However, the best option would really be to make the tests that don't use the browser a different type of test (not system or feature specs), or to add metadata to the tests so that the metadata can be used to determine whether or not a test uses the browser:
# In your tests
it 'does something not using the browser', :no_browser do
  ...
end

# In your spec_helper
config.after(:each) do |example|
  page.save_screenshot unless example.metadata[:no_browser]
end
I'm trying to catch the "NRPE unable to read output" message from a plugin and send an email when it occurs, and I'm a little bit stuck :). The thing is, there are different return codes when this error occurs on different plugins:
Return code    Service status
0              OK
1              WARNING
2              CRITICAL
3              UNKNOWN
Is there a way either to unify the return codes of all the plugins I use (so that there will always be 2 [CRITICAL] when this problem occurs), or any other way to catch those alerts? I want to keep the return codes for the different situations as they are (i.e. filesystem /home will be WARNING (return code 1) at 95% and CRITICAL (return code 2) at 98%).
Most folks would rather not have this error sending alert emails, because it does not represent an actual failed check. Basically it means nothing more than:
The command/plugin (local or remote) was run by NRPE, but failed to return any usable status and/or text back to NRPE.
This most often means something went wrong with the command/plugin and it hasn't done the job it was expected to perform. You don't want alerts being thrown for checks that weren't actually performed, as this would be very misleading. It's also important to note that the return code is not even coming from the command/plugin.
In my experience, the number one cause of this error is a bad check. As the NRPE docs state, you should run the check (with all its options!) by hand to make sure it runs correctly. Do yourself a favor and test both the working AND the not-working states. About 75% of the time, this happens because the check only works correctly when it has OK results, and blows up when something not-OK must be reported.
Another issue that causes these errors is network glitches: NRPE connects and runs the check, but the connection is closed before any response is seen. Once again, not a true check result.
For a production Nagios monitoring system, these should be very rare errors. If they are happening frequently, then you likely have other issues that need to be fixed.
And as far as I can tell, all built-in Nagios plugins use the exact same set of return codes. Are you certain this isn't a 'custom' check?
OK, I think I've found the solution to my problem: I will try to check nagios.log on each node for those errors.
What would be the proper thing to do for each case?
1: Context: Testing a function that creates a database and generates metadata for that database.
Question: Normally unit test cases are supposed to be independent, but if we want to make sure the function raises an exception when trying to make a duplicate database, would it be acceptable to have ordered test cases where the first one tests if the function works, and the second one tests if it fails when calling it again?
2: Most of the other functions require a database and metadata. Would it be better to call the previous functions in the set up of each test suite to create the database and metadata, or would it be better to hard code the required information in the database?
Your automated test should model the following:
Setup
Exercise (SUT)
Verify
Teardown
In addition, each test should be as concise as possible and only expose the details that are being tested. All other infrastructure required to execute the test should be abstracted away, so that the test method serves as documentation, exposing only the inputs that are relevant to what you want to verify in that particular test.
Each test should strive to start from a clean slate, so that it can be repeated with the same results each time, regardless of the results of any prior tests.
I typically execute a test-setup and a test-cleanup method for each integration test, or for any test that depends on singletons that maintain state for the System-Under-Test and need to have that state wiped.
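A rough JUnit 4 skeleton of those four phases might look like this (the in-memory set is just a stand-in for the real database so the sketch stays self-contained, and the names are placeholders):

import static org.junit.Assert.assertTrue;

import java.util.HashSet;
import java.util.Set;

import org.junit.After;
import org.junit.Before;
import org.junit.Test;

public class DatabaseCreationTest {

    // Stand-in for the real database server, only here to keep the sketch runnable.
    private static final Set<String> databases = new HashSet<>();

    @Before
    public void setUp() {
        // Setup: start from a clean slate, regardless of what earlier tests did.
        databases.remove("test_db");
    }

    @Test
    public void createsDatabase() {
        // Exercise: call the code under test (simulated here).
        databases.add("test_db");

        // Verify.
        assertTrue(databases.contains("test_db"));
    }

    @After
    public void tearDown() {
        // Teardown: wipe any state this test left behind.
        databases.remove("test_db");
    }
}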
Normally unit test cases are supposed to be independent, but if we want to make sure the function raises an exception when trying to make a duplicate database, would it be acceptable to have ordered test cases where the first one tests if the function works, and the second one tests if it fails when calling it again?
No, ordered tests are bad. There's nothing stopping you from having a test call another method that happens to be a test though:
@Test
public void createDataBase(){
    ...
}

@Test
public void creatingDuplicateDatabaseShouldFail(){
    createDataBase();
    try{
        // calling create again should fail
        // could also use the ExpectedException Rule here
        createDataBase();
        fail(...);
    }catch(...){
        ...
    }
}
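For reference, the ExpectedException Rule mentioned in the comment above would look roughly like this in JUnit 4. The createDataBase method and the IllegalStateException here are placeholders standing in for your real create function and whatever exception it actually throws on a duplicate:

import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.ExpectedException;

public class DuplicateDatabaseTest {

    @Rule
    public ExpectedException thrown = ExpectedException.none();

    // Placeholder for the real create function, assumed to throw
    // IllegalStateException when the database already exists.
    private boolean created = false;

    private void createDataBase() {
        if (created) {
            throw new IllegalStateException("database already exists");
        }
        created = true;
    }

    @Test
    public void creatingDuplicateDatabaseShouldFail() {
        createDataBase();

        // Declare the expectation just before the call that should fail;
        // the rule fails the test if no exception is thrown after this point.
        thrown.expect(IllegalStateException.class);
        createDataBase();
    }
}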
Most of the other functions require a database and metadata. Would it be better to call the previous functions in the set up of each test suite to create the database and metadata, or would it be better to hard code the required information in the database?
If you use a database testing framework like DbUnit or something similar, it can reuse the same db setup over and over again in each test.
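A rough sketch of that setup with DbUnit might look like the following (the JDBC driver, URL and the seed-data.xml data set are placeholders you would point at your own test database):

import org.dbunit.IDatabaseTester;
import org.dbunit.JdbcDatabaseTester;
import org.dbunit.dataset.IDataSet;
import org.dbunit.dataset.xml.FlatXmlDataSetBuilder;
import org.dbunit.operation.DatabaseOperation;
import org.junit.After;
import org.junit.Before;

public class RequiresDatabaseTest {

    private IDatabaseTester databaseTester;

    @Before
    public void setUp() throws Exception {
        // Connection details are placeholders for your real test database.
        databaseTester = new JdbcDatabaseTester(
                "org.h2.Driver", "jdbc:h2:mem:testdb", "sa", "");

        // seed-data.xml is a placeholder flat-XML data set holding the rows
        // and metadata every test expects to find.
        IDataSet dataSet = new FlatXmlDataSetBuilder()
                .build(getClass().getResourceAsStream("/seed-data.xml"));
        databaseTester.setDataSet(dataSet);

        // CLEAN_INSERT wipes the tables and reloads the same data set,
        // so each test starts from the same known state.
        databaseTester.setSetUpOperation(DatabaseOperation.CLEAN_INSERT);
        databaseTester.onSetup();
    }

    @After
    public void tearDown() throws Exception {
        databaseTester.onTearDown();
    }
}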
In my code I want to be able to log, so I have passed the App Engine context around my libraries. Then, if there is a failure, I can log to App Engine like so:
context.Warningf("This is not correct!")
I am trying to write a unit test to specifically hit an error case. I am using the appengine/aetest package like this:
context, createErr := aetest.NewContext(nil)
When the test hits the above context.Warningf call, it fails because aetest.Context does not implement that function.
Is there a recommended way around this? For example, I guess I could set some variable to "live" or "test" and then not log when in test, but that seems hacky. Or is there something obvious I am missing here?
This was not a real problem; it was simply an incorrect environment, caused by me running the tests in LiteIDE without setting it up correctly.
I'm running ReSharper 7.1.2 and trying to test a Silverlight project using the MS unit testing framework. My tests run fine in ReSharper, but the output messages include the parent exception (not just my custom message) and are truncated:
e.g. instead of
"No user context set"
I get
Test method Some.Quite.Long.Namespace.And.Test_Method_Name threw exception: Microsoft.VisualStudio.T
I wouldn't even mind if the message included the assembly-qualified method name...
Test method Some.Quite.Long.Namespace.And.Test_Method_Name threw exception: Microsoft.VisualStudio.Blah.Blah.Blah, No user context set
but instead it's truncated!
Obviously, it's difficult to tell whether the unit test failed due to an expected assertion or some other issue (like a config issue).
I can't find anything in the R# options which relates to this...
Does anyone else have this problem, or know what the issue might be? I suspect it's something to do with the reference to the unit testing framework, as I also can't use the [ClassInitialize] attribute (it complains about the wrong type for arg #1 in the signature, even though the expected type is the correct one!).