Unclear (to me) error when loading a URL using Chrome webdriver in Selenium - selenium-webdriver

I had a working Python script to do some basic data scraping from a local IP address using Selenium and chromedriver. The working version used the executable path to define where chromedriver.exe was.
I had to change this because it stopped working (it wouldn't load Chrome anymore; the chromedriver version was definitely correct), so I swapped the executable path for a Service object.
My script now opens Chrome again, but gets stuck with 'data:,' in the address bar until it times out (after exactly one minute) with the error message 'selenium.common.exceptions.WebDriverException: Message: unknown error: DevToolsActivePort file doesn't exist'.
Sidenote - a new bug I've noticed since making this change: if I already have a Chrome window open, the script opens a new tab with the URL 'data:,' but without the little bar saying Chrome is being controlled by automated software, and I get an error saying 'selenium.common.exceptions.WebDriverException: Message: unknown error: Chrome failed to start: exited normally.'
from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import Select
from webdriver_manager.chrome import ChromeDriverManager
s = Service("path/to/chromedriver.exe")  # placeholder path to the chromedriver executable
url = "http://the page i'm after"
driver = webdriver.Chrome(service=s)
driver.get(url)
After this, the script fills in a form, hits submit, and does a bit of scraping. But currently it just gets stuck with a URL of 'data:,' and times out after a minute, and then I get the error 'selenium.common.exceptions.WebDriverException: Message: unknown error: DevToolsActivePort file doesn't exist'.
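For context, a minimal sketch of what that form-filling step typically looks like with the By and Select imports above; the element locators and values here are hypothetical, not taken from the actual page:
field = driver.find_element(By.NAME, "some_field")                # hypothetical text input
field.send_keys("some value")
dropdown = Select(driver.find_element(By.ID, "some_dropdown"))    # hypothetical <select> element
dropdown.select_by_visible_text("some option")
driver.find_element(By.ID, "submit_button").click()               # hypothetical submit button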
It was working a few days ago and I can't think of anything that changed on the site I'm using or anything else!
I think there are unnecessary imports, but I've tried a few things and got a bit lost...
Thanks a lot for any help or tips, I'm pretty new to coding in general, so was very quickly out of ideas
EDIT: running with the Chrome option "--headless" there is no problem. Can anyone explain why this works?
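For reference, a minimal sketch of how Chrome options are passed to the driver; the "--headless" flag is the one from the edit above, the commented-out flags are common suggestions for DevToolsActivePort errors (not a confirmed fix), and the paths are placeholders:
from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.chrome.options import Options

options = Options()
options.add_argument("--headless")  # run Chrome without a visible window (the workaround noted above)
# options.add_argument("--remote-debugging-port=9222")              # sometimes suggested for DevToolsActivePort errors
# options.add_argument("--user-data-dir=C:/temp/selenium-profile")  # hypothetical path; avoids clashing with an already-open Chrome profile
s = Service("path/to/chromedriver.exe")  # placeholder path
driver = webdriver.Chrome(service=s, options=options)
driver.get("http://the page i'm after")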

Related

React app with nginx returning blank page

I have a React application that I've deployed using nginx, which, however, only returns a blank page. I've been looking for a solution for the last week and I can say that I've tried almost everything... but nothing seems to work.
If I open the console I can see that all of the files are delivered successfully; however, I get a "Loading failed for the with source ..." error on Firefox and a "net::ERR_HTTP2_PROTOCOL_ERROR 200" on Chrome. The weird thing is that both files are actually received (200 status) and can be viewed with the development tools. Moreover, if I visit the static resource link directly I get the full content without problems.
And of course, the issue only occurs in the production environment; if I deploy it locally it works perfectly.
I really don't know what to do. I've tried updating the "homepage" directive, playing around with "react-router", changing the various nginx configurations, and many other things, but nothing works.
If anyone could help me out I would really appreciate it!
I was running nginx with Docker on a Raspberry Pi and, for some unknown reason, the time inside the container was completely messed up. I solved the problem by running the container in privileged mode ('privileged: true' in docker-compose).
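A minimal docker-compose sketch of that workaround; the service name and image are hypothetical, and only the 'privileged: true' line comes from the answer above:
services:
  web:                  # hypothetical service name
    image: nginx        # hypothetical image; the answer doesn't specify one
    ports:
      - "80:80"
    privileged: true    # run the container in privileged mode, as described above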

React/Github-pages: Failed to load resource: the server responded with a status of 404 ()

I have successfully uploaded React apps to GitHub Pages before, but this time, when I try to upload a new page, the code is pushed and everything looks fine in the repository, and everything loads fine on localhost. The problem is that when I access the website, I get the error message above in the console.
const res = await axios.get('https://api.open5e.com/monsters/?limit=1086');
I am using this API; the limit is above 50, but I don't believe this is the reason for the crash, though I cannot think of anything else.
Website 1: (Currently redirects you to a 404 error page, as part of my attempt to solve the problem)
https://ottotsuma.github.io/MonsterSearch/
Repo - Website 1:
https://github.com/ottotsuma/MonsterSearch
Website 2:
https://ottotsuma.github.io/MonsterApiReact/
Repo all code:
https://github.com/ottotsuma/DnD-React
I also get this error on localhost and on the hosted website: Uncaught TypeError: Cannot use 'in' operator to search for 'default' in undefined. But since the website was working as intended up to now, I have just ignored it.
I was hoping if anyone else has had a similar problem with gh-pages they might know what I have done wrong.
Kind regards,
Sheep

Cannot open Chrome settings or downloads page with JMeter/Selenium/Java

I tried to open the downloads page in Chrome with Selenium/JMeter/Java and am getting an unsupported protocol error when trying to open the page. What am I missing? Thanks!
Please note: opening web HTTP/S URLs works fine for me; this question is specifically about trying to open .get("chrome://... with Selenium/JMeter/Java.
import org.openqa.selenium.*;
import org.openqa.selenium.support.ui.*;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.chrome.ChromeOptions;
WDS.browser.get("chrome://downloads/");
The error is:
Response message: unknown protocol: chrome
My expectation is that the request itself succeeds; JMeter just fails to properly populate the request results. In particular, it should fail at this line:
res.setURL(new URL(getWebDriver().getCurrentUrl()));
because JMeter's SampleResult.setURL() call needs a valid java.net.URL, and the URL constructor rejects chrome://downloads as an unknown protocol.
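To illustrate why that line throws, here is a minimal standalone Java sketch (outside JMeter) of the same URL parsing:
import java.net.MalformedURLException;
import java.net.URL;

public class ChromeUrlDemo {
    public static void main(String[] args) {
        try {
            // java.net.URL only accepts protocols it has a handler for (http, https, file, ...)
            URL url = new URL("chrome://downloads/");
            System.out.println("Parsed: " + url);
        } catch (MalformedURLException e) {
            // Typically prints: unknown protocol: chrome
            System.out.println(e.getMessage());
        }
    }
}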
You can "tell" JMeter to ignore this error by adding a Response Assertion as a child of this request and configuring it like:
This way JMeter will execute the sampler and it will be "green" despite the exception hence the error will not show up in the report and you will still be able to measure the response time:

Server up and running but webpage not rendering on heroku local

I am in the process of deploying a dynamic React.js app to Heroku and wanted to test it out by using 'heroku local' to see if it works before pushing it to Heroku. Everything seemed to be working fine with the server - my database console.log message logs to the terminal, signifying everything is going well - but when I try to access the website on localhost:5000, I get an error message of 'Cannot GET /' and the console prints a message saying 'Failed to load resource: the server responded with a status of 404 (Not Found)'. My React.js files are all within the build folder, which, in conjunction with all of the other files, was pushed to git and committed, but for some reason my files are not showing up. I would greatly appreciate it if someone could help me determine what went wrong. Sorry if my question is a little vague; please let me know if I can clarify anything in better detail, as I am fairly new to programming.
I've added a proxy to one of my package.json files to help with URL routing; possibly this could be causing the issue?
Also, if it helps at all, I've attached a photo of my folders within Visual Studio Code.

ERROR: Firebase Database (${JSCORE_VERSION}) INTERNAL ASSERT FAILED: Reference.ts has not been loaded

I get this error message when I load my web app in IE10 and lower IE versions.
I am building my web app with React 15.4.0 and Firebase 5.5.0.
I have tested the web app with Chrome, Firefox, and IE11, and it works fine.
Commenting out the line below in the file "firebase-js-sdk/src/database/api/Query.ts" may help you solve this error.
assert(__referenceConstructor, 'Reference.ts has not been loaded');
Other workarounds that you can try are also available in the link below.
Firebase Database (4.3.1) INTERNAL ASSERT FAILED: Reference.ts has not been loaded.
