getStatusCode is not working with PhantomJS - selenium-webdriver

I am testing a web app with loads and loads of pages, and on every commit I would like to verify that none of the URLs are broken. Here is a code snippet:
$page = $this->getSession()->getPage();
$page_URLs = $page->findAll('css', 'header nav ul a');
assertEquals(16, count($page_URLs));
foreach ($page_URLs as $pageUrl) {
    try {
        $pageUrl->click();
        $statusCode = $this->getSession()->getStatusCode();
        echo $pageUrl->getText();
        assertEquals(200, $statusCode, "The webpage is not available");
    } catch (Exception $ex) {
        echo 'Caught exception: ', $ex->getMessage(), "\n";
    }
    $this->getSession()->back();
}
I was using Behat and Mink with the Goutte driver (as a headless browser) for CI integration, and getStatusCode() was working fine. But most of the functionality in the web app is JavaScript-driven, so I have to move to PhantomJS, which supports JavaScript. I didn't realise that getStatusCode() doesn't work with PhantomJS.
Does anyone have any idea whether I can replace this call with something else and get a similar result?

PhantomJS support in Behat is realised via the WebDriver protocol (Selenium). WebDriver deliberately doesn't support inspecting status codes, as that falls outside the scope of emulating user actions.
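One common workaround is to stop asking the driver for the status code and instead re-request each link's href directly over HTTP, asserting on that response. Below is a minimal sketch of the idea in Python with Selenium and Requests, purely for illustration; the CSS selector is carried over from the question, the start URL is a placeholder, and a Behat/Mink test would need the equivalent done through its own HTTP client (e.g. Guzzle).
import requests
from selenium import webdriver

driver = webdriver.PhantomJS()
driver.get("https://example.com/")  # placeholder start URL

links = driver.find_elements_by_css_selector("header nav ul a")
for link in links:
    href = link.get_attribute("href")
    # Reuse the browser's cookies so pages behind a login still answer with 200.
    cookies = {c["name"]: c["value"] for c in driver.get_cookies()}
    response = requests.get(href, cookies=cookies)
    assert response.status_code == 200, "%s returned %s" % (href, response.status_code)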

Related

Capybara does not completely reset when visiting a different URL than app_host

I am new to Capybara, so I may be missing something obvious, but I am not sure what is going on. I have three test cases in the same suite, with app_host set to URL A.
Test 1: Visit website A, which then redirects to website B and requires logging in to B.
Test 2: Visit website B and perform some tests.
Test 3: Visit website B and perform some tests.
In tests 2 and 3, I use visit with an absolute URL to visit website B, and the code is identical. In test 2, I don't have to log in, but in test 3, website B redirects to the login page.
I found a similar issue here: Capybara with headless chrome doesn't clear session between test cases which use different subdomains but after updating from 2.8 to 3.9, I still have the same issue.
I also tried Capybara.reset_sessions! and Capybara.current_session.driver.browser.manage.delete_all_cookies after each test, without success.
I am using Capybara 3.29.0 and selenium-webdriver 3.142.6. The Chrome driver runs in a Docker image, selenium/standalone-chrome:3.14.0-iron.
Driver registration:
Capybara.register_driver :selenium do |app|
  chrome_capabilities = Selenium::WebDriver::Remote::Capabilities.chrome(
    chromeOptions: {
      args: %w[headless no-sandbox disable-gpu --window-size=1024,1024]
    })

  Capybara::Selenium::Driver.new(app,
    browser: :remote,
    url: 'http://localhost:4444/wd/hub',
    desired_capabilities: chrome_capabilities)
end
Capybara.default_driver = :selenium
Capybara.javascript_driver = :selenium
Any idea what causes the difference in behavior?
You're using massively out-of-date versions of Capybara and selenium-webdriver. The WebDriver protocol only allows resetting the cookies of the host you're on when the reset occurs, so if you're moving between hosts, the cookies for only one of them are going to get cleared. If, however, you switch to a recent version of Capybara with a recent version of selenium-webdriver and Chrome, then Capybara will clear cookies for all hosts (using CDP).
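If upgrading is not an option right away, the protocol-level workaround is to visit each host and clear its cookies explicitly, since delete_all_cookies only affects the domain currently loaded. A rough sketch of that idea, shown with plain Selenium in Python because the limitation lives in WebDriver rather than in Capybara (the host URLs are placeholders):
from selenium import webdriver

driver = webdriver.Chrome()

# WebDriver only clears cookies for the page it is currently on,
# so each host has to be visited before its cookies can be deleted.
for host in ("https://site-a.example.com/", "https://site-b.example.com/"):
    driver.get(host)
    driver.delete_all_cookies()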

Selenium: execute script after Google Tag Manager has loaded

I'm loading a URL in Selenium which uses Google Tag Manager to inject a script.
<script src="http://sample.com/file.js" />
I'm loading the URL in the WebDriver like this:
await driver.get('https://sample-page-that-uses-gtm.com')
When I go to that URL in my browser (not the Selenium driver) and manually check for the script in the Elements tab and the console (query selector), I can successfully find it. However, this is not the case in Selenium. I manually opened the console of the browser driven by Selenium and ran a check for the presence of the injected script, but none was detected.
This is what I ran on the console
document.querySelector('script[src="http://sample.com/file.js"]')
It finds the script in the browser, but not in WebDriver. Is this a problem with using GTM with Selenium?
Try using an expected condition to force the code to wait before checking whether the script is present:
WebDriverWait(browser, 3).until(EC.presence_of_element_located((By.XPATH, "//script[@src='http://sample.com/file.js']")))
This might be happening because you're checking for the script before it gets loaded into the DOM. If this does not solve your problem, please update your question with an example of a URL that uses Google Tag Manager and doesn't load in Selenium, as that will help identify the problem.
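For reference, here is a self-contained version of that wait with the imports included (the URL, selector, and 10-second timeout are placeholders taken from the question):
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait

browser = webdriver.Chrome()
browser.get("https://sample-page-that-uses-gtm.com")

# Wait until Google Tag Manager has injected the script tag into the DOM
# before querying for it.
WebDriverWait(browser, 10).until(
    EC.presence_of_element_located(
        (By.CSS_SELECTOR, 'script[src="http://sample.com/file.js"]')))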

Understanding Selenium + browsermob-proxy + protractor + AngularJS

What I have: several integration test specs written in Jasmine for my AngularJS app (they navigate through my entire app)
What I want: perform network monitoring of my app and export the data using HAR
Naive solution: just write a script which receives a URL and exports the data as HAR. It's easy, but it's not automatic (I need to provide the URLs manually)
Enhanced solution: automate the process above. A script that navigates through all the pages of my app and extracts the network data for each. But since I'm already navigating through all the pages of my app via integration tests (Protractor + Jasmine), I just want to "plug in" the part about exporting the network traffic.
I've found this question: How can I use BrowserMob Proxy with Protractor?, and I was checking out the example provided there, but I'm not quite sure how it works.
What should I put as the host and port for the proxy?
I'm using Selenium, and I've specified the host and port for it, but I'm getting ECONNREFUSED errors.
This is my protractor file config:
var Proxy = require('browsermob-proxy').Proxy;
// ...
protractorConf = exports.base = {
  // ... more things
  onPrepare: function() {
    // ... more things
    browser.params.proxy = new Proxy({ // my selenium config for browsermob
      selHost: '10.243.146.33',
      selPort: 9456
    });
    // ... more things
  }
};
And in one of my integration test specs (it's CoffeeScript, btw):
beforeEach ->
  browser.get BASE_URL
  browser.params.proxy.doHAR 'some/page/of/my/app', (err, data) ->
    if err
      console.log err
    else
      console.log data
But, as I've said, I'm getting an ECONNREFUSED error. I'm quite lost about the integration of Selenium with Protractor and browsermob.
Any ideas or alternatives? Thanks!

Cannot find hostname in file:/// error when using Ionic and OAuth.io

I am using Ionic and OAuth.io to perform authentication. If I run ionic serve and include the oauth.js file in my index, everything works fine from the browser.
But when I run ionic run ios or install the app on Android, I get the following error when I press the auth button (the one that is supposed to execute OAuth.popup).
I do not know what to do. So far I have checked the following:
In config.xml I have access, allow-intent and allow-navigation fully permissive
I have installed and re-installed the plugin: ionic plugin add https://github.com/oauth-io/oauth-phonegap.git
I tried to run the native app without including the oauth.js file, and everything breaks.
I am using current, up-to-date versions.
I am new to Ionic, so I don't know how to debug the app when it runs on a device or in the simulator.
It could be similar to this post, but not exactly.
Your advice will be appreciated.
I figured it out by reading some posts. The OAuth initialization and references should be done after the device is ready, so it is best to put the initialization in this block:
$ionicPlatform.ready(function() {
  // ...
  if (typeof window.OAuth !== 'undefined') {
    $rootScope.OAuth = window.OAuth;
    $rootScope.OAuth.initialize('XXX');
  } else {
    console.log("plugin not loaded, this is running in a browser");
    $.getScript("lib/oauth.js", function() {
      $rootScope.OAuth = OAuth;
      $rootScope.OAuth.initialize('XXX');
    });
  }
});
Now, if the plugin is loaded, it initializes the window.OAuth object; otherwise the app is running in a browser, so I have to include the oauth.js file. I also assigned OAuth to $rootScope for quick access.
Hope this helps someone.

Timed out waiting for Protractor to synchronize -- happens on server, but not on localhost

I'm writing a suite of Protractor tests from the ground up for a new Angular webapp. My team has asked me to run my tests against their Test server instance. I've got my config file and a basic test spec set up, but as soon as I hit the main page and do a simple expect() statement, I get "Timed out waiting for Protractor to synchronize with the page after 11 seconds."
Here's my test:
describe('the home page', function() {
  it('should display the correct user name', function() {
    expect(element(by.binding('user.name')).getText()).toContain(browser.params.login.user);
  });
});
I cloned the dev team's git repo, set it up on my localhost, changed my baseUrl and ran my Protractor test against it locally. Passed without a hitch.
After some conversation with the dev team, I've determined that it's not a $http or $timeout issue. We use both of those services, but not repeatedly (I don't think they're "polling"). None of them should happen more than once, and all the timeouts are a half-second long.
What else could cause Protractor to time out like that? I wish it failed on my localhost so I could go tweak the code until I find out what's causing the problem, but it doesn't.
I have discovered the solution: check for console errors.
Turns out, one of our $http requests wasn't coming back because my Protractor tests were accessing the page via HTTPS, but one of our xhtml endpoints was at a non-secure location. This resulted in a very helpful console error which I had not yet seen, because it only occurred when accessing the site with WebDriver.
The error text: "The page at [url] was loaded over HTTPS, but requested an insecure XMLHttpRequest endpoint. This request has been blocked; the content must be served over HTTPS."
I modified my baseUrl to access the site via regular http, and now everything is working fine.
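For anyone hitting a similar silent failure, the console error can also be pulled out through WebDriver's log API instead of opening DevTools by hand. A minimal sketch in Python with plain Selenium (the capability name and URL are assumptions for a recent Chrome/chromedriver; in Protractor the equivalent is browser.manage().logs().get('browser')):
from selenium import webdriver

# Enable browser console logging before starting the session.
capabilities = webdriver.DesiredCapabilities.CHROME.copy()
capabilities["goog:loggingPrefs"] = {"browser": "ALL"}
driver = webdriver.Chrome(desired_capabilities=capabilities)

driver.get("https://test-server.example.com/")  # placeholder URL

# Print whatever the page logged, e.g. blocked mixed-content requests.
for entry in driver.get_log("browser"):
    print(entry["level"], entry["message"])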
