My Capybara tests are designed to call page.save_screenshot if an example fails. This is configured in a config.after(:each) block in my spec_helper.
However, not all of my tests always have a window open. Long story short, some of my tests may make a few requests using rest-client and then pass/fail based on some RSpec expectations. When these types of tests fail, a browser window opens because of the call to page.save_screenshot.
Is there a way to add a conditional so that saving a screenshot is only attempted when there is a window (headless or non-headless) open?
You can probably do something like
page.save_screenshot if page.current_window.exists?
however, the best option would really be to make the tests that don't use the browser a different type of test (not system or feature), or to add metadata to the test so that the metadata can be used to determine whether or not the test uses the browser:
# In your tests
it 'does something not using the browser', :no_browser do
...
end
# In your spec_helper
config.after(:each) do |example|
page.save_screenshot unless example.metadata[:no_browser]
end
I know the basics of optimizing Robot Framework for speed on normal applications, but this is not a normal application. It's not a question of going as fast as possible, because if the code executes too fast on an Angular application, it'll try to click an element that isn't enabled or visible, or an element that doesn't exist yet. Timing issues abound, and the result is that I'm using a keyword (below) to slow down my program universally. The problem is that it's hard-coded, and I'm looking for a more "programmatic" solution (I don't know the exact term) that will wait for an element to be clickable and then click it as soon as it is.
This is the keyword I use after every single click (${SLOW_TIME} is a global variable set to 0.5s):
Slow Down
    # EXAMPLE USAGE
    # Slow Down    ${SLOW_TIME}
    [Arguments]    ${SLOW_TIME}
    Sleep    ${SLOW_TIME}
This is my current solution, which was written to verify that an element is ready to be clicked for test verification purposes, not speed. It's not complete (needs "Is Clickable") and occasionally causes the program to wait longer than it has to:
Verify Element Is Ready
    # EXAMPLE USAGE
    # Verify Element Is Ready    id=myElementId
    # Click Element    id=myElementId
    [Arguments]    ${element}
    Variable Should Exist    ${element}
    Wait Until Element Is Visible    ${element}
    Wait Until Element Is Enabled    ${element}
I'm aware that Robot Framework isn't built for speed, but for long tests I'm tired of doing nothing for 10 minutes waiting for it to finish, only to see that I have an incorrect [Fail]. If the solution involves Python, Javascript, or Java, I can work that in.
EDIT: I'm currently using ExtendedSelenium2Library, but its implicit waits don't always work, so I wanted a second layer of waiting, but only as long as necessary.
The first solution to explore would be to use libraries specifically designed for Angular-based web applications, such as AngularJsLibrary or ExtendedSelenium2Library. As far as I know, ExtendedSelenium2Library is the one that works best (though perhaps not without issues; I think it does have a few).
The next thing to know is that even if your element is indeed visible, that doesn't necessarily mean it's ready to be clicked. There are quite a few ways to get around this kind of issue.
One way is to put a sleep in your test setup to give the page some time to fully initialize. I'm personally not a fan of this solution, and it also doesn't work well for pages that load new content dynamically after the initial document has initialized.
Another way is to wrap your click in a wait, either by writing your own keyword in Python or by using something like Wait Until Keyword Succeeds.
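As a rough sketch (the keyword name, default timeout, and retry interval below are illustrative, not anything your setup requires), you could combine your existing waits with Wait Until Keyword Succeeds so the click is retried only for as long as necessary:
Click Element When Ready
    # EXAMPLE USAGE
    # Click Element When Ready    id=myElementId
    [Arguments]    ${element}    ${timeout}=10s
    Wait Until Element Is Visible    ${element}    ${timeout}
    Wait Until Element Is Enabled    ${element}    ${timeout}
    # Retry the click every 0.2s until it succeeds or the timeout expires
    Wait Until Keyword Succeeds    ${timeout}    0.2s    Click Element    ${element}
This keeps the test moving as soon as the element is actually clickable, instead of sleeping for a fixed ${SLOW_TIME} after every click.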
I am using Selenium WebDriver with JBehave to automate tests using BDD, and I have a problem regarding verifying values. I need my tests not to fail immediately when an assertion does not match the expected value. Instead, I want my test to verify each value, and then, if at least one assertion has failed, my step needs to be marked as a failure.
I am using verifyEquals, which doesn't terminate the test immediately after finding an assertion failure, but continues instead.
My problem is that if one or more values do not match as expected, my step is not marked as a failure, and I have to go to the console to discover whether there was a value mismatch.
In this case you should change your test architecture.
Even with test frameworks that have a verify feature that doesn't stop the test, you should terminate your tests with an assert statement.
Verify is used only to validate some prerequisites for what the test is actually testing.
If you take for example a test where you test that the order amount is correct, you could go with the following:
verify name is correct
verify email is correct
assert order amount is correct
Instead of using verify, it is better to use asserts from the JUnit library. You can even customize your assert by using a try/catch block and decide whether to continue the test (printing the error) or stop it.
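As a minimal sketch of that idea (the class and method names below, such as softVerifyEquals and assertAll, are illustrative and not part of JBehave or JUnit), you can collect mismatches during the step and only fail at the end, so the step is still marked as failed when any value was wrong:
import static org.junit.Assert.fail;

import java.util.ArrayList;
import java.util.List;

public class SoftVerifier {

    // Collects every mismatch instead of throwing immediately
    private final List<String> failures = new ArrayList<>();

    // Record a mismatch and keep going
    public void softVerifyEquals(String field, Object expected, Object actual) {
        if (expected == null ? actual != null : !expected.equals(actual)) {
            failures.add(field + ": expected <" + expected + "> but was <" + actual + ">");
        }
    }

    // Call this at the end of the step: it throws a single assertion error
    // listing all mismatches, so the step is reported as failed
    public void assertAll() {
        if (!failures.isEmpty()) {
            fail("Verification failures:\n" + String.join("\n", failures));
        }
    }
}
In the JBehave step you would call softVerifyEquals for each field (name, email, and so on) and finish with assertAll().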
I am trying to run a simple test on multiple browsers, here is a mock up of the code I've got:
String url = "http://www.anyURL.com";
WebDriver[] drivers = { new FirefoxDriver(), new InternetExplorerDriver(),
        new ChromeDriver() };
@Test
public void testTitle() {
for (int i = 0; i < drivers.length; i++) {
// navigate to the desired url
drivers[i].get(url);
// assert that the page title starts with foo
assertTrue(drivers[i].getTitle().startsWith("foo"));
// close current browser session
drivers[i].quit();
}// end for
}// end test
For some reason this code is opening multiple browsers, seemingly before the first iteration of the loop has completed.
What is actually happening here, and what is a good/better way to do this?
Please understand that I am by no means a professional programmer, and I am also brand new to using Selenium, so if what I am attempting is generally bad practice please let me know, but please don't be rude about it. I will respect your opinion much more if you are respectful in your answers.
No, it's not bad practice. As for what's actually happening: all three drivers are constructed when the array initializer runs, which is why the browsers all open before the first loop iteration completes.
In fact, most test frameworks have convenient ways to handle sequential/parallel execution of tests. You can parametrize a test class to run the same tests on multiple browsers. There is an annotation in TestNG called @Parameters which can be used with testng.xml for cross-browser testing without duplicating the code. An example is shown here.
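A minimal sketch of that approach (the parameter name browser, the URL, and the expected title are placeholders you would adapt), with one <test> block per browser in the TestNG suite XML:
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.openqa.selenium.ie.InternetExplorerDriver;
import org.testng.annotations.AfterMethod;
import org.testng.annotations.BeforeMethod;
import org.testng.annotations.Parameters;
import org.testng.annotations.Test;

import static org.testng.Assert.assertTrue;

public class TitleTest {

    private WebDriver driver;

    // "browser" is supplied by the suite XML, e.g.
    // <parameter name="browser" value="chrome"/> inside each <test> block
    @Parameters("browser")
    @BeforeMethod
    public void setUp(String browser) {
        switch (browser.toLowerCase()) {
            case "firefox": driver = new FirefoxDriver(); break;
            case "ie":      driver = new InternetExplorerDriver(); break;
            default:        driver = new ChromeDriver(); break;
        }
    }

    @Test
    public void testTitle() {
        // navigate to the desired url and check the title
        driver.get("http://www.anyURL.com");
        assertTrue(driver.getTitle().startsWith("foo"));
    }

    @AfterMethod
    public void tearDown() {
        // close the browser session for this configuration
        driver.quit();
    }
}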
I would not do that.
Most of the time it is pointless to immediately run your test against multiple browsers. Most of the problems you run into as you are developing new code or changing old code are not due to browser incompatibilities. Sure, those happen, but most of the time a test will fail because, well, your logic is wrong, and it will not just fail on one browser but on all of them. What do you gain from getting told X times rather than just once that your code is buggy? You've just wasted your time. I typically get the code working on Chrome and then run it against the other browsers.
(By the way, I run my tests against about 10 different combinations of OS, browser and browser version. 3 combinations is definitely not good enough for good coverage. IE 11 does not behave the same as IE 10, for instance. I know from experience.)
Moreover, the interleaving of tests from multiple browsers just seems generally confusing to me. I like one test report to cover only one configuration (OS, browser, browser version) so that I know if there are any problems exactly which configuration is problematic without having to untangle what failed on which browser.
What would be the proper thing to do for each case?
1: Context: Testing a function that creates a database and generates metadata for that database.
Question: Normally unit test cases are supposed to be independent, but if we want to make sure the function raises an exception when trying to make a duplicate database, would it be acceptable to have ordered test cases where the first one tests if the function works, and the second one tests if it fails when calling it again?
2: Most of the other functions require a database and metadata. Would it be better to call the previous functions in the set up of each test suite to create the database and metadata, or would it be better to hard code the required information in the database?
Your automated test should model the following:
Setup
Exercise (SUT)
Verify
Teardown
In addition, each test should be as concise as possible and only expose the details that are being tested. All other infrastructure required to execute the test should be abstracted away, so that the test method serves as documentation exposing only the inputs being tested with regard to what you want to verify in that particular test.
Each test should strive to start from a clean slate so that the test can be repeated with the same results each time regardless of the results of prior tests that have been executed.
I typically execute a test-setup and a test-cleanup method for each integration test, or any test that depends on singletons that maintain state for the System-Under-Test and need to have their state wiped.
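For illustration only (TestDatabase and its methods below are hypothetical placeholders for your own setup code, not an existing API), a JUnit test following that four-phase shape might look like this:
import static org.junit.Assert.assertTrue;

import org.junit.After;
import org.junit.Before;
import org.junit.Test;

public class DatabaseCreationTest {

    // Hypothetical helper wrapping the code under test
    private TestDatabase db;

    @Before
    public void setUp() {
        // Setup: every test starts from a clean slate
        db = TestDatabase.createEmpty("test_db");
    }

    @Test
    public void generatesMetadataWhenDatabaseIsCreated() {
        // Exercise the system under test
        db.createWithMetadata();

        // Verify only what this test is about
        assertTrue(db.hasMetadata());
    }

    @After
    public void tearDown() {
        // Teardown: remove the database so later tests are unaffected
        db.drop();
    }
}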
Normally unit test cases are supposed to be independent, but if we want to make sure the function raises an exception when trying to make a duplicate database, would it be acceptable to have ordered test cases where the first one tests if the function works, and the second one tests if it fails when calling it again?
No, ordered tests are bad. There's nothing stopping you from having a test call another method that happens to be a test though:
@Test
public void createDataBase(){
...
}
@Test
public void creatingDuplicateDatabaseShouldFail(){
createDataBase();
try{
//call create again should fail
//could also use ExpectedException Rule here
createDataBase();
fail(...);
}catch(...){
...
}
}
Most of the other functions require a database and metadata. Would it be better to call the previous functions in the set up of each test suite to create the database and metadata, or would it be better to hard code the required information in the database?
If you use a database testing framework like DbUnit or something similar, it can reuse the same db setup over and over again in each test.
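As a rough sketch of the DbUnit approach (the JDBC URL and dataset file path are placeholders, and this assumes the schema already exists), each test can wipe and reload the same dataset in its setup:
import java.io.FileInputStream;
import java.sql.Connection;
import java.sql.DriverManager;

import org.dbunit.database.DatabaseConnection;
import org.dbunit.database.IDatabaseConnection;
import org.dbunit.dataset.IDataSet;
import org.dbunit.dataset.xml.FlatXmlDataSetBuilder;
import org.dbunit.operation.DatabaseOperation;
import org.junit.After;
import org.junit.Before;

public class MetadataDaoTest {

    private IDatabaseConnection dbUnitConnection;

    @Before
    public void setUp() throws Exception {
        // Placeholder JDBC URL; point this at your test database
        Connection jdbc = DriverManager.getConnection("jdbc:h2:mem:testdb");
        dbUnitConnection = new DatabaseConnection(jdbc);

        // Reload a known dataset before every test so each test
        // starts from the same state regardless of what ran before
        IDataSet dataSet = new FlatXmlDataSetBuilder()
                .build(new FileInputStream("src/test/resources/dataset.xml"));
        DatabaseOperation.CLEAN_INSERT.execute(dbUnitConnection, dataSet);
    }

    @After
    public void tearDown() throws Exception {
        dbUnitConnection.close();
    }
}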
In my code I want to be able to log, so I have passed the appengine context around my libraries. Then if there is a failure I can log to App Engine like so:
context.Warningf("This is not correct!")
I am trying to write a unit test to specifically hit an error case. I am using the appengine/aetest package like this:
context, createErr := aetest.NewContext(nil)
When the test hits the above context.Warningf it fails because aetest.Context does not implement that function.
Is there a recommended way around this? For example, I guess I could set some variable to "live" or "test" and then not log when in test mode, but that seems hacky. Or is there something obvious I am missing here?
This was not a real problem; it was simply an incorrect environment, caused by me running the tests in LiteIDE without setting it up correctly.