Unable to find text on reactjs page with capybara/poltergeist/phantomjs

I cannot assert any text with the following configuration, and screenshots are blank, even though there are no errors and network activity shows 100% success.
Versions
phantomjs-prebuilt 2.1.12
phantomjs 2.1.1
poltergeist 1.13.0
capybara 2.12.0
Debug logs
Gist of phantomjs/poltergeist/capybara logs
Symptoms
captured a screenshot and it is transparent
captured network activity (see gist): all requests successful
captured HTML is good (see gist)
no errors in the phantomjs debug log (see gist)
no errors in the poltergeist debug log (see gist)
neither long sleeps nor a high Capybara.default_max_wait_time has any impact on the result
the React app works with both manual testing and with selenium/chrome
Feature
Feature: smoke
Scenario: Home
Given I go to the home page
#When I sleep for 30 seconds
Then I should see "Landing Page"
Poltergeist setup
Capybara.register_driver :poltergeist do |app|
  Capybara::Poltergeist::Driver.new(app, {
    js_errors: true,
    timeout: 180,
    debug: true,
    phantomjs: "#{Rails.root}/node_modules/phantomjs-prebuilt/bin/phantomjs",
    phantomjs_options: %w(--debug=true --load-images=no --proxy-type=none --ignore-ssl-errors=yes --ssl-protocol=TLSv1),
    window_size: [1280, 600],
    # inspector: true,
    phantomjs_logger: PoltergeistLogger # File.open("target/phantomjs.log", "a")
  })
end
TL;DR
I have no errors yet a reactjs page doesn't seem to be rendering a DOM within phantomjs or when accessed via poltergeist.
I've run out of options that I'm aware of. Any thoughts on what else I could try?
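One avenue worth ruling out, not covered by the symptoms above: PhantomJS 2.1 predates several ES2015 APIs (e.g. Promise) that React builds frequently assume, and a missing API can leave the DOM unrendered without an error reaching the Poltergeist log if it fires during initial load. A sketch of a driver registration that injects polyfills into every page before the app's own scripts run, via Poltergeist's extensions option (the polyfill file path is an assumption; point it at whatever shims you vendor):

```ruby
# Sketch: inject polyfills ahead of application scripts so a React bundle
# that assumes ES2015 globals doesn't silently fail in PhantomJS 2.1.
# The vendored file path below is hypothetical.
Capybara.register_driver :poltergeist_polyfilled do |app|
  Capybara::Poltergeist::Driver.new(app,
    js_errors: true,  # surface any JS exception as a test failure
    extensions: [
      Rails.root.join("vendor/polyfills/es6-promise.auto.js").to_s
    ],
    phantomjs_options: %w(--load-images=no --ignore-ssl-errors=yes)
  )
end
```

With js_errors: true and the polyfill in place, an exception thrown during React's mount should fail the scenario loudly instead of leaving a blank page.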

Related

Why are console logs generated from the instrument.ts file in sentry.io?

Behaviour after integrating sentry.io in a React app: console logs are reported as coming from the instrument.ts file. I have tried printing from the index page as well.
Do you need it for production or development? We are using this integration to transform console logs into event breadcrumbs. If you don't need them in development (or at all), you can turn them off:
Sentry.init({
  dsn: '_YOUR_DSN_',
  integrations: [new Sentry.Integrations.Breadcrumbs({ console: false })]
})

Capybara does not completely reset when visit different URL than app_host

I am new to Capybara, so I may be missing something obvious, but I am not sure what is going on. I have three test cases in the same suite, with app_host set to URL A.
Test1: Visit website A which then redirects to website B and requires log in to B.
Test2: Visit website B and perform some tests
Test3: Visit website B and perform some tests.
In test 2 and 3, I use visit with absolute URL to visit website B and the code is identical. In test 2, I don't have to log in but in test 3, website B redirects to the log in page.
I found a similar issue here: Capybara with headless chrome doesn't clear session between test cases which use different subdomains but after updating from 2.8 to 3.9, I still have the same issue.
I also tried Capybara.reset_sessions! and Capybara.current_session.driver.browser.manage.delete_all_cookies after each test, without success.
I am using Capybara 3.29.0 and Selenium-webdriver 3.142.6. The Chrome driver is in a docker image selenium/standalone-chrome:3.14.0-iron.
Driver registration:
Capybara.register_driver :selenium do |app|
  chrome_capabilities = Selenium::WebDriver::Remote::Capabilities.chrome(
    chromeOptions: {
      args: %w[headless no-sandbox disable-gpu --window-size=1024,1024]
    }
  )
  Capybara::Selenium::Driver.new(app,
    browser: :remote,
    url: 'http://localhost:4444/wd/hub',
    desired_capabilities: chrome_capabilities)
end
Capybara.default_driver = :selenium
Capybara.javascript_driver = :selenium
Any idea what causes the difference in behavior?
You're using massively out-of-date versions of Capybara and selenium-webdriver. The WebDriver protocol only allows resetting the cookies of the host you're currently on when a reset occurs, so if you're moving between hosts, the cookies for only one of them are going to get cleared. If, however, you switch to a recent version of Capybara with a recent version of selenium-webdriver and Chrome, then Capybara will clear cookies for all hosts (using CDP).
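Until an upgrade is possible, one workaround consistent with that protocol limitation is to visit each host in turn and clear its cookies explicitly after every example. A sketch, assuming RSpec feature specs and that you can enumerate the hosts your tests touch (KNOWN_HOSTS is hypothetical):

```ruby
# Sketch: WebDriver can only delete cookies for the origin currently
# loaded, so visit each known host and clear it before the next example.
# KNOWN_HOSTS is an assumption -- list the hosts your suite actually uses.
KNOWN_HOSTS = %w[https://website-a.example https://website-b.example]

RSpec.configure do |config|
  config.after(:each, type: :feature) do
    KNOWN_HOSTS.each do |host|
      Capybara.current_session.visit(host)
      Capybara.current_session.driver.browser.manage.delete_all_cookies
    end
    Capybara.reset_sessions!
  end
end
```

This adds a page load per host per example, so it is a stopgap; the upgrade path described above is the real fix.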

how to keep opened developer tools while running a selenium nightwatch.js test?

I am starting to write e2e tests using nightwatch.js, and I noticed some errors that I would like to inspect manually in the target browser's console (developer tools). But whenever I open the developer console, it is automatically closed by the browser. Is this an intended feature of either selenium or nightwatch.js, and if so, how can I disable it?
I'm successfully using this config in nightwatch:
...
chrome: {
  desiredCapabilities: {
    browserName: 'chrome',
    javascriptEnabled: true,
    acceptSslCerts: true,
    chromeOptions: {
      args: ['incognito', 'disable-extensions', 'auto-open-devtools-for-tabs']
    }
  }
},
...
Unfortunately it doesn't seem to be possible. See here:
When you open the DevTools window, ChromeDriver is automatically disconnected. When ChromeDriver receives a command, if disconnected, it will attempt to close the DevTools window and reconnect.
Chrome's DevTools only allows one debugger per page. As of 2.x, ChromeDriver is now a DevTools debugging client. Previous versions of ChromeDriver used a different automation API that is no longer supported in Chrome 29.
See also this question.
You might be able to achieve this using Node Inspector: https://github.com/node-inspector/node-inspector
Put a debugger statement where you want the test to pause and run node-debug ./node_modules/.bin/nightwatch --config path/to/nightwatch.json --test yourTest.js

Timed out waiting for Protractor to synchronize -- happens on server, but not on localhost

I'm writing a suite of Protractor tests from the ground up for a new Angular webapp. My team has asked me to run my tests against their Test server instance. I've got my config file and a basic test spec set up, but as soon as I hit the main page and do a simple expect() statement, I get "Timed out waiting for Protractor to synchronize with the page after 11 seconds."
Here's my test:
describe('the home page', function() {
  it('should display the correct user name', function() {
    expect(element(by.binding('user.name')).getText()).toContain(browser.params.login.user);
  });
});
I cloned the dev team's git repo, set it up on my localhost, changed my baseUrl and ran my Protractor test against it locally. Passed without a hitch.
After some conversation with the dev team, I've determined that it's not a $http or $timeout issue. We use both of those services, but not repeatedly (I don't think they're "polling"). None of them should happen more than once, and all the timeouts are a half-second long.
What else could cause Protractor to time out like that? I wish it failed on my localhost so I could go tweak the code until I find out what's causing the problem, but it doesn't.
I have discovered the solution: check for console errors.
Turns out, one of our $http requests wasn't coming back because my Protractor tests were accessing the page via https, but one of our xhtml endpoints was at a non-secured location. This resulted in a very helpful console error which I had not yet seen, because it only occurred when accessing the site with WebDriver.
The error text: "The page at [url] was loaded over HTTPS, but requested an insecure XMLHttpRequest endpoint. This request has been blocked; the content must be served over HTTPS."
I modified my baseUrl to access the site via regular http, and now everything is working fine.
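The same diagnosis technique applies to the Capybara/selenium setups elsewhere on this page: the Ruby selenium-webdriver bindings can pull the browser's console log so that mixed-content and similar errors surface in test output instead of staying hidden in a browser you never see. A sketch, assuming Chrome with browser logging enabled in the driver capabilities (loggingPrefs with browser: 'ALL'); the API shown matches the 3.x Ruby bindings:

```ruby
# Sketch: dump Chrome's console log after a failing step so errors that
# only occur under WebDriver (like blocked insecure XHR) become visible.
# Requires browser logging enabled in the driver's capabilities.
logs = Capybara.current_session.driver.browser.manage.logs.get(:browser)
logs.each { |entry| puts "#{entry.level}: #{entry.message}" }
```

Dropping this into an after-failure hook is often enough to catch errors, like the HTTPS one above, that never reproduce outside WebDriver.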

Timeout::Error: timeout while waiting for angular with Capybara + rails

I have an integration test that is failing for the reason:
"Timeout::Error: timeout while waiting for angular"
I have run the test with selenium so that I can see what happens, and the page loads perfectly fine. I threw a debugger in my test so that I can browse around the app with the test fixtures-- and everything works perfectly...
Yet in the debugger, as soon as I type "page" to query what Capybara thinks it sees, I get:
[5] pry(#<RSpec::Core::ExampleGroup::Nested_1>)> page
Timeout::Error: timeout while waiting for angular
from /Users/me/.rvm/gems/ruby-2.0.0-p451#my_app/gems/capybara-angular-0.0.4/lib/capybara/angular/waiter.rb:30:in `timeout!'
So basically it's lying to me because angular is fully loaded, api calls are happening and responding with json, the templates are getting interpolated... What the... ?
This gem has been updated to 0.1.0, which fixes the problem you are describing. Cheers!
Issue & Pull Request: https://github.com/wrozka/capybara-angular/issues/11
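For reference, picking up the fixed release is a one-line Gemfile change (constraint based on the version named in the answer above):

```ruby
# capybara-angular 0.1.0 contains the waiter fix referenced above
gem 'capybara-angular', '>= 0.1.0'
```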
