I am looking for a general strategy for using Cucumber tests in a Linux environment to test against an Internet Explorer browser. I have seen a similar answer, but it does not seem to apply to Capybara 1.1.3.
I was hoping to use Capybara to avoid using Selenium directly and the associated cost (speed, environmental dependencies) of non-headless tests, but it seems that may not be possible. I want to avoid having to use both Capybara and Selenium.
A nice compromise may be Ross Patterson's answer to a Selenium-based question:
Headless browsers are a bad idea. They get you some testing, but nothing like what a real user will see, and they mask lots of problems that only real browsers encounter. You're infinitely better off using a "headed" browser (i.e., anything but HTMLUnit) on a headless environment (e.g., Windows, or Linux with XVFB).
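For what it's worth, one way to run a "headed" browser on a headless Linux box from a Cucumber suite is the headless gem, which wraps Xvfb. A minimal sketch, assuming the headless, capybara and selenium-webdriver gems are available:

```ruby
# features/support/env.rb (sketch)
require 'headless'
require 'capybara/cucumber'

headless = Headless.new        # wraps Xvfb and picks a free virtual display, e.g. :99
headless.start

at_exit { headless.destroy }   # tear the virtual display down when the suite ends

Capybara.default_driver = :selenium   # drive a real browser, rendered into Xvfb
```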
Thanks for your thoughts.
I want to write test cases for my Rails application. I have already written a lot of test cases with Rails' built-in framework, Minitest.
Now I want to test the JavaScript functionality of my web app.
I came across these two tools:
1. Selenium WebDriver
2. capybara-webkit
I am confused about which one to use. I know a few advantages and disadvantages of these two tools, for example:
Capybara-webkit is headless, while Selenium WebDriver opens a browser.
Capybara is faster than Selenium.
Capybara cannot open any other application, while Selenium can interact with third-party apps like Facebook and LinkedIn.
Can anyone compare these two tools for testing?
You're confusing a few things here. Capybara is a testing framework/DSL for Ruby which can be used with any of the test runner frameworks (RSpec, Minitest, etc.). It can use a number of different drivers to communicate with the web app being tested.
The default driver is rack_test which doesn't support any JS and cannot connect to any addresses outside the app under test.
A second driver option is selenium-webdriver, which can control multiple real browsers (Firefox/Chrome/Safari/etc.) for testing and can connect to any valid URL. The downside of using selenium-webdriver as the driver is that it opens a real browser and is therefore usually slower, with a larger memory footprint.
Another driver option is capybara-webkit, which is headless and can also connect to any valid URL. It is generally faster than using Selenium; however, since it is built on an old version of QtWebKit, it doesn't support newer web standards (ES2015, etc.), so at a minimum you need to make sure all JS is transpiled down to ES5.
There is nothing stopping you from using different drivers for different tests, to get the speed benefit for most tests and a real browser only for tests that need things like WebRTC, etc. The Capybara README details how to do that with the different test runners (RSpec, Minitest, etc.).
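As a rough sketch of that setup with RSpec (paths, button labels and element ids below are made up; the capybara-webkit and selenium-webdriver gems are assumed to be installed):

```ruby
# spec/spec_helper.rb (sketch)
require 'capybara/rspec'
require 'capybara/webkit'      # provides the :webkit driver
require 'selenium-webdriver'   # provides the :selenium driver

Capybara.default_driver    = :rack_test  # fast, no JS, in-process only
Capybara.javascript_driver = :webkit     # headless driver used for `js: true` examples

# In the feature specs, metadata picks the driver per group or example.
feature 'Checkout', js: true do              # runs on the headless :webkit driver
  scenario 'updates the cart total via AJAX' do
    visit '/cart'                            # hypothetical path
    click_button 'Add item'                  # hypothetical button
    expect(page).to have_content('Total')
  end
end

feature 'Video call', driver: :selenium do   # forces a real browser for this group
  scenario 'shows the connected indicator' do
    visit '/call'                            # hypothetical path
    expect(page).to have_css('#connected')   # hypothetical element
  end
end
```

Everything without metadata stays on the fast rack_test driver; only the tagged groups pay the cost of a JS-capable or real browser.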
I'm new to the testing area. The regression team I belong to has built GUI tests for some web applications with complex business logic produced by the development team.
Until now, we have been using Selenium IDE to build regression tests (record, edit, parameterize, debug and playback). Tests are exported and maintained in HTML format. We used to have a tool to manage tests and iterations (store HTML scripts/test suites, run tests in batch mode, run tests in the background, get detailed test result reports), which is now deprecated because it uses Selenium RC. Additionally, tests are made only in Firefox, but our clients are mainly IE users.
So, we have some important and strategic decisions to make. We urgently need to start testing in IE and to find a new way to do the tasks we were doing.
An attempt was made to change the code of the test manager tool so that it works with Selenium WebDriver. We tried coding the tests in Ruby from scratch, since the Selenium IDE export to Ruby was not satisfactory. We figured out that huge changes to the manager tool, and subsequent testing of it, were needed. It would also involve programming the methods and testing them.
Our regression team is quite small and we don't want to focus too much on the programming task itself, but rather on testing our web apps. Additionally, no one on the team had prior experience working with Ruby.
Can you help us with some suggestions about the route we should take?
Is there an integrated solution that is easy to work with (like Selenium IDE) and able to handle the management tasks of our old tool, without requiring us to spend much time on hand coding?
Is there any reliable open source tool that could do it? And a commercial solution?
We are in the middle of choosing our headless browser driver solution that will be some implementation of Selenium WebDriver.
On one side there is GhostDriver, which uses PhantomJS as its backend; on the other, HtmlUnitDriver, which is based on HtmlUnit.
PhantomJS uses WebKit, the rendering engine of Safari, to render pages, while HtmlUnitDriver uses the Rhino engine, which no other browser uses (it just "simulates" browser behaviour). The latter is considered a con, because the rendering behaviour can differ significantly from the popular browsers.
In our opinion, PhantomJS is a much stronger candidate. But we don't know everything :) Are there other considerations and trade-offs we should take into account in our decision? Other scenarios where HtmlUnitDriver can be a better solution?
From my experience with a number of headless browsers, I'd say:
HtmlUnitDriver: the fastest of all implementations I've come across, and perfect for simple, static pages, especially those without JavaScript. Any remotely complex page seems to produce problems; that's my practical experience, even if I can't justify it in detail. Perfect for testing Selenium features on demo pages, scraping status pages, etc.
PhantomJSDriver (PhantomJS + GhostDriver): not as much faster than the desktop browsers as you might hope; however, it is much easier to set up than Firefox + Xvfb. By default screenshots can look a bit odd, but that usually turns out to be because PhantomJS defaults to a narrow window unless one is explicitly set (read below for why).
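A quick way to avoid the narrow-window artefact is to set an explicit size up front. A sketch with the Ruby Selenium bindings (the URL and output file are made up, and the :phantomjs driver only exists in older selenium-webdriver releases):

```ruby
require 'selenium-webdriver'

driver = Selenium::WebDriver.for :phantomjs   # requires phantomjs on the PATH
driver.manage.window.resize_to(1280, 1024)    # avoid the narrow default viewport
driver.navigate.to 'https://example.com/'     # placeholder URL
driver.save_screenshot('page.png')
driver.quit
```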
Update: a bit more detail on PhantomJS versions, from my other answer.
Like Safari, PhantomJS uses WebKit for rendering (Firefox, by contrast, uses Gecko).
Different PhantomJS versions are built against different WebKit versions. PhantomJS 2.x uses WebKit 538.x, which makes it equivalent to Safari 7 or 8, whereas PhantomJS 1.9.8 uses WebKit 534.34, which is equivalent to Safari 5.
This may be an issue for you, since Google treats Safari 5 as an "old" browser and will therefore potentially render its search pages differently.
So ensuring you use PhantomJS 2.x can reduce the rendering differences that a lot of people report vs. desktop browsers.
Another interesting possibility is SlimerJS. However, I've not got it to work reliably enough yet.
I've never had reliability issues with either HtmlUnitDriver or PhantomJSDriver (the only annoyance was an HttpClient 4.5 / HtmlUnit 2.17 incompatibility issue).
(In answer to the comment about modifying HTTP requests: I'd personally recommend sticking to the WebDriver API and using a proxy like BrowserMob to mutate requests and responses, rather than relying on browser-specific features.)
All in all, I'd advise against creating a tool or process that forces users to choose one browser over another. If possible, allow them to configure or override it. For the majority of cases I'd plump for PhantomJS, as it won't let you down; however, the performance of HtmlUnit should be considered for the simplest pages.
See also (perhaps): http://www.guru99.com/selenium-with-htmlunit-driver-phantomjs.html and https://www.quora.com/Software-Testing/How-does-PhantomJS-compare-to-Selenium
I've used PhantomJS in a few projects over the last couple of years, but have often had issues with it. For example, JavaScript on pages behaving differently than in Chrome, Firefox or Internet Explorer, and some pages simply not loading, possibly due to redirects, though I'm not certain (e.g. Keycloak login pages).
I've not used HtmlUnit as much, but as I type this it is avoiding some of the above PhantomJS issues for me in tests with Keycloak login pages.
PhantomJS development has been suspended as of March 3rd, 2018, while headless mode has been added to Chrome and Firefox.
This means that if you want to keep receiving updates, you should use HtmlUnit, Chrome or Firefox as your headless driver.
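For example, switching to headless Chrome with the Ruby Selenium bindings looks roughly like this (a sketch; it assumes chromedriver is on the PATH and the URL is a placeholder):

```ruby
require 'selenium-webdriver'

options = Selenium::WebDriver::Chrome::Options.new
options.add_argument('--headless')               # run Chrome without a visible window
options.add_argument('--window-size=1280,1024')

driver = Selenium::WebDriver.for(:chrome, options: options)
driver.navigate.to 'https://example.com/'        # placeholder URL
puts driver.title
driver.quit
```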
I have been asked to record a long-running scenario which involves pages of functionality covering the life cycle of a patient from registration to billing. I tried Selenium IDE, but it is flaky, giving replay errors on what it just recorded. When I try Selenium 2, I run into DOM and XPath problems. Selenium 2 is meant for unit testing, I believe. What are the open source alternatives that scale to a five-minute record-and-replay scenario? I know this is a subjective question, which might have been asked before, but the options might have improved.
We use Selenium 2 on a daily basis (driven by Groovy scripts, but that's not the point) to run long-running scenarios involving multi-website connections (and even mail confirmation checks). It's very stable when proper error handling is done. The key to success with long scenarios is "expect to fail", just like in the real world, where you sometimes have to click a button twice.
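As an illustration, "expect to fail" can be as simple as wrapping flaky steps in a small retry helper. A sketch in Ruby (the element id is made up, and a selenium-webdriver `driver` is assumed to be in scope):

```ruby
# Retry a flaky UI step a few times before letting the scenario fail.
def with_retries(attempts: 3)
  tries = 0
  begin
    yield
  rescue Selenium::WebDriver::Error::WebDriverError
    tries += 1
    retry if tries < attempts
    raise
  end
end

# e.g. a button that occasionally needs a second click (id is hypothetical)
with_retries { driver.find_element(id: 'confirm-appointment').click }
```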
You have to use WebDriver and not the recording in the IDE.
You have to use the Page Object Model to make the project stable.
See this article:
https://weblogs.java.net/blog/johnsmart/archive/2010/08/09/selenium-2web-driver-land-where-page-objects-are-king
Selenium WebDriver will actually work. XPath problems might be due to page load timing issues.
Include implicit or explicit waits in your Selenium code.
Even Thread.sleep(milliseconds) will fix the issues to some extent.
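In the Ruby bindings, the two kinds of waits look roughly like this (a sketch; the selector is made up):

```ruby
require 'selenium-webdriver'

driver = Selenium::WebDriver.for :firefox

# Implicit wait: every find_element call polls for up to 10 seconds.
driver.manage.timeouts.implicit_wait = 10

# Explicit wait: block until a specific condition holds.
wait = Selenium::WebDriver::Wait.new(timeout: 10)
save_button = wait.until { driver.find_element(css: '#save-button') }  # hypothetical selector
save_button.click
```

A hard sleep also "works", but it always burns the full wait time, so it is best kept as a last resort.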
I would actually suggest switching over to watir-webdriver with the page-object gem if you are going to be running long scenarios. We have extremely long scenarios in an AJAX application and could not solve the problem with Selenium. Switching over to watir-webdriver and the page-object gem allowed us to reuse pages with proper waits and no failures.
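A minimal sketch of that combination, with a hypothetical page class (the URL and element ids are made up):

```ruby
require 'watir-webdriver'
require 'page-object'

# Hypothetical page class for illustration only.
class LoginPage
  include PageObject

  page_url 'https://example.com/login'
  text_field(:username, id: 'username')
  text_field(:password, id: 'password')
  button(:sign_in,      id: 'sign-in')

  def sign_in_as(user, pass)
    self.username = user
    self.password = pass
    sign_in          # generated method that clicks the button
  end
end

browser = Watir::Browser.new :firefox
page = LoginPage.new(browser)
page.goto
page.sign_in_as('demo', 'secret')
```

The page class keeps the locators and waits in one place, so long scenarios can reuse the same steps without duplicating selectors.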
My current default browser is Chrome (dev). I'm using VS2010 and Silverlight4, with ASP.NET MVC3. I don't seem to have the problems with debugging that I've seen others have. My main complaint is that I regularly have to clear my browser cache to get the latest version of my app to show up. Sometimes I have to clear it two or three times. I've taken to changing the background color of certain elements just to be sure whether I've got the actual latest changes.
Are Firefox or IE better in this regard? Is there a trick to make my latest version always appear?
Too lazy to use Fiddler.
It seems I hadn't googled very well before; this article seems to be precisely what I wanted:
http://codeblog.larsholm.net/2010/02/avoid-incorrect-caching-of-silverlight-xap-file
via this discussion, which has other options and some useful commentary: https://betaforums.silverlight.net/forums/p/11995/449355.aspx
Unfortunately, that part of my project has been on hold for a bit, so I haven't tried it out yet.
Like you, I use Chrome for my main browser, and I don't use IE for any regular browsing. But I do use IE for Silverlight development, for this reason and others. I rarely if ever have trouble with the IE cache holding onto outdated versions of my XAP file, but this happens pretty regularly with Firefox and Chrome. In addition, depending on how I closed my previous debug session, Firefox and Chrome frequently open up my previous tab(s) in addition to the one that I'm actually trying to debug when I start a new session. Consequently, IE is (for me) the cleanest browser to actually debug with.
This isn't really an answer -- just an observation. Sorry :-).
Have you tried investigating why this is happening using Fiddler or a similar HTTP debugging tool? Personally I've never been able to debug Silverlight in Chrome so I usually have to force IE when debugging. But I never have the problem of a stale application. I'd check Fiddler to see if you can isolate the issue. It's probably not directly related to Silverlight.
Your problem looks a lot like a cache configuration issue. Web servers are often configured rather aggressively when it comes to caching static files such as the XAP.
So the response headers are probably set in a way that maximizes browser caching.
You could change the web server configuration to prevent client-side caching of the XAP file.
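For example, if the app is hosted on IIS (an assumption here, given ASP.NET MVC), a location-scoped override in web.config can turn off client caching for just the XAP; the path and file name below are placeholders:

```xml
<!-- web.config (sketch): disable client caching for a single XAP file -->
<configuration>
  <location path="ClientBin/MyApp.xap">
    <system.webServer>
      <staticContent>
        <clientCache cacheControlMode="DisableCache" />
      </staticContent>
    </system.webServer>
  </location>
</configuration>
```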
Don't forget to remove these settings in production, however.