End To End Testing on Headless Server - selenium-webdriver

I am trying to set up an environment for end-to-end testing on a droplet running Ubuntu Server 12.04.3 on DigitalOcean.
What I am trying to achieve, in the end, is for my Jenkins (installed on the same droplet) to be able to run my end-to-end tests. Now, the server is of course headless, and the end-to-end tests need to run through a browser (I am using Protractor with the Selenium standalone server and ChromeDriver).
My question is: how do I spawn a browser on that machine? I have installed Xorg, and if I do startx on the server, log out and ssh -X to it, I can manually run the end-to-end tests (a browser pops up on my local machine). But I can't get it to work without ssh -X, and since Jenkins is on the same droplet where the tests are to be run, I don't get a browser to spawn.
NOTE: I know I might be missing something really trivial here, since I don't fully understand the configuration or Xorg.
Any hints or a complete answer would be very much appreciated; this is giving me gray hair.
Edit: After a little digging I think I got the Xorg stuff a bit wrong. I am guessing the purpose of X forwarding is to spawn a window on a remote machine (i.e. my local machine), and what I am after is more along the lines of a virtual framebuffer such as Xvfb...

There is PhantomJS, but with Protractor it is buggy and a dead end.
You can still run Chrome & Firefox headless through docker-selenium or, if you don't like Docker, you can do it yourself with the ubuntu-headless sample. Both solutions provide Chrome & Firefox by using Xvfb, even though there is no real DISPLAY.
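For example, the docker-selenium route comes down to starting a standalone container and pointing the tests at it. A minimal sketch, assuming the standard selenium/standalone-chrome image and the default /wd/hub endpoint (adjust the image, tag and ports to your setup):
docker run -d -p 4444:4444 --shm-size=2g selenium/standalone-chrome
# then set Protractor's seleniumAddress to http://localhost:4444/wd/hub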
UPDATE 2: It seems to be possible to run Xvfb on OS X: http://xquartz.macosforge.org/landing/
UPDATE 1: Mac OS X Selenium headless solution:
Enable multi-user remote desktop access to the OS X machine, so you can test Selenium headless on a Mac. It is not really headless, but the tests run as another user, so they don't interfere with your current user's display.
To do this you need kickstart: http://support.apple.com/en-us/HT201710
Begin using the kickstart utility:
sudo /System/Library/CoreServices/RemoteManagement/ARDAgent.app/Contents/Resources/kickstart -restart -agent
Activate Remote Desktop Sharing, enable access privileges for all users and restart ARD Agent:
sudo /System/Library/CoreServices/RemoteManagement/ARDAgent.app/Contents/Resources/kickstart -activate -configure -access -on -restart -agent -privs -all
Apple Remote Desktop 3.2 or later only
Allow access for all users and give all users full access:
sudo /System/Library/CoreServices/RemoteManagement/ARDAgent.app/Contents/Resources/kickstart -configure -allowAccessFor -allUsers -privs -all
Kickstart help command:
sudo /System/Library/CoreServices/RemoteManagement/ARDAgent.app/Contents/Resources/kickstart -help

A lot of Angular apps use Travis CI to perform Protractor-based end-to-end integration tests on headless VMs all the time. I do not know the details of exactly how they do it, but I do know that they use a Linux service called Xvfb, a headless X Window System implementation (a virtual framebuffer). Looking at a typical Travis configuration file, it appears that all they do before firing up their web server and Selenium server and kicking off Protractor is call sh -e /etc/init.d/xvfb start to start this service.
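In other words, the recipe boils down to something like the following sketch (the display number, screen size, and the webdriver-manager/Protractor invocations are assumptions about a typical setup, not the exact Travis recipe):
export DISPLAY=:99.0
sh -e /etc/init.d/xvfb start               # or: Xvfb :99 -screen 0 1280x1024x24 &
sleep 3                                    # give Xvfb a moment to come up
node_modules/.bin/webdriver-manager update
node_modules/.bin/webdriver-manager start &
node_modules/.bin/protractor protractor.conf.js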

Related

connection to selenium webdriver and airflow

So today I was trying to do some web scraping via apache-airflow, but it is giving this error:
File "/home/siva/.local/lib/python3.10/site-packages/selenium/webdriver/chromium/webdriver.py", line 89, in __init__
self.service.start()
File "/home/siva/.local/lib/python3.10/site-packages/selenium/webdriver/common/service.py", line 105, in start
raise WebDriverException("Can not connect to the Service %s" % self.path)
selenium.common.exceptions.WebDriverException: Message: Can not connect to the Service /d/apache-airflow/dags/chromedriver.exe
What should I do to connect to the service so the web scraping can be done? I run my Airflow test bench in Ubuntu WSL, so if there is any solution, please provide it so I can get this working in Airflow.
Or, if there are other ways to scrape in Airflow, please do suggest them.
Can not connect to the Service /d/apache-airflow/dags/chromedriver.exe
I don't have this set up to test with Airflow to be sure, but I have successfully run Chrome with Selenium under WSL2.
It sounds like you might be following some old instructions that were applicable for WSL1. Under WSL1, my understanding is that it is possible to use the Windows Chrome executable/webdriver.
You might want to try switching to WSL1, but I don't know for sure that Airflow will run there. It's very likely that it will.
However, if you do need to use WSL2, you'll have to use the Linux binaries.
This means that you'll need to install Google Chrome inside the WSL distribution and use the corresponding chromedriver_linux64.zip.
You'll also need to either ...
... be running WSL with the ability to run graphical applications - If you have Windows 11, this is automatic. If not, I recommend Xrdp as the next easiest path.
... or run Chrome in headless mode. I'm not sure off the top of my head how to wire this up with Airflow, unfortunately, but a rough sketch of the Linux-side setup follows below.
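A hedged sketch of the Linux-side install, assuming the standard Google .deb and a ChromeDriver matched by hand to the installed Chrome version (exact versions and paths are assumptions):
wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb
sudo apt install ./google-chrome-stable_current_amd64.deb
google-chrome --version                    # note the major version
# download the chromedriver_linux64.zip that matches that version, then:
unzip chromedriver_linux64.zip && sudo mv chromedriver /usr/local/bin/
google-chrome --headless --disable-gpu --dump-dom https://example.com    # quick headless smoke test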

How to see console.logs of a running nodeJs application on ubuntu 18 EC2 instance?

I am new to the Node world. I created a Node.js REST API. When I run npm start on my local machine or in the terminal for the first time, I can see console.log() output in my terminal. Now I am running the same application on an AWS EC2 instance with Ubuntu as the OS: I run npm start and serve my app on port 80. I do this via SSH, and after starting my server I close the SSH connection. But when I reconnect via SSH, I want to see those console.log() messages in my terminal again.
I completely understand that logging messages to the terminal is not a good idea and that there are many alternatives. I just want to know how to access the same terminal output that I see when I start my application.
If you are using pm2, you can try "pm2 logs".
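For example, a minimal sketch, assuming the app is launched with npm start (the process name my-api is arbitrary):
npm install -g pm2
pm2 start npm --name my-api -- start       # run the app under pm2 so stdout/stderr are captured
pm2 logs my-api                            # tail the captured console.log output after reconnecting over SSH
pm2 logs my-api --lines 200                # or show the last 200 lines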
Nodemon won't work well on a production server or in any instance where the app needs to keep running by itself.
Nodemon is a dev tool meant to restart your server during development. On a real VPS you need to place the process in the background (or under a process manager), or it will be killed when the connection times out.
Check out this YouTube series for a proper deployment architecture using pm2 and NGINX on a Red Hat server; I've personally used it more than once:
https://www.youtube.com/playlist?list=PLQlWzK5tU-gDyxC1JTpyC2avvJlt3hrIh

Docker and Chromium net::ERR_NETWORK_CHANGED

I have an AngularJS application that makes an AJAX call, but it results in a Chromium error:
net::ERR_NETWORK_CHANGED
I tried to disable any adapters that I don't need; I have multiple adapters and multiple Docker containers running.
I disabled IPv6 on each adapter. I don't use any proxy, and I use the default Chromium browser without any add-ons or a custom browser profile.
I also disabled the Wi-Fi interface and am only using Ethernet.
Any idea how to fix this?
I was constantly getting ERR_NETWORK_CHANGED.
This is what finally worked for my current browsers (Chromium, Opera and FlashPeak Slimjet):
sudo service docker stop
The following actions did not solve my issue:
Checked the modem, router, and cables to isolate the issue.
Disabled IPv6 on my wired network with these commands:
sudo sysctl -w net.ipv6.conf.default.disable_ipv6=1
sudo sysctl -w net.ipv6.conf.all.disable_ipv6=1
After I stopped Docker, I stopped getting console errors.
I hope this saves someone hours of annoying troubleshooting.
Ron.
sudo service docker stop
But this is not a solution for me, because I need Docker in my daily work.
I found out that Docker networks cause this problem.
docker network prune helped me.
Or try deleting the networks one by one, except for none, bridge, and host.
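A small sketch of that cleanup (the network name is an example; prune only removes unused user-defined networks and leaves none, bridge, and host alone):
docker network ls                          # list all networks first
docker network prune                       # remove all unused user-defined networks
docker network rm myapp_default            # or remove a specific leftover compose network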
Based on the previous answers, I want to go into more detail about what fixed it in my case.
Stopping the Docker service with sudo service docker stop fixed the issue for me.
The underlying issue was one of my docker-compose setups having restart: always.
Unfortunately I had a bug causing a container to terminate and restart, and this restart caused the network change.
You can spot this by running docker ps and noticing that the container keeps restarting.
I fixed the bug and ran docker-compose down for that docker-compose setup; either action would fix it independently.
Furthermore, a bug report exists for Chromium regarding this issue, but it has the status WontFix.
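For reference, a quick way to check for this situation (the compose command spelling depends on your Docker version):
docker ps                                  # a container whose STATUS shows "Restarting" or a very recent "Up ... seconds" is the suspect
docker-compose down                        # tear down the offending stack (or: docker compose down)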

Unix text based browser with javascript support

I need to perform a smoke test of my AngularJS application on Unix, from the terminal.
I tried accessing the application from:
links2
links
w3m
elinks
lynx
All of the above-mentioned browsers show an empty screen. In most of them I am able to view the source using \ (backslash), so at least I could do a basic check that the application server is working.
Is there any Unix text browser with JavaScript support? I am not looking for complete support (i.e. for the application to be fully usable).
It would be great just to have the ability to view some elements of the page.
Try installing libmozjs (on Debian/Ubuntu: aptitude install libmozjs185-dev), then compile ELinks from source (don't forget to run sudo ldconfig first):
wget http://elinks.or.cz/download/elinks-current-0.13.tar.bz2
tar xjvf elinks-current-0.13.tar.bz2
cd elinks-0.13*
./configure
make -j8
sudo make install
After ./configure, check that ECMAScript is flagged as "SpiderMonkey document scripting".
P.S. This was on Ubuntu 14.04.
I guess the best way to do it would be with a headless browser such as PhantomJS (http://phantomjs.org/). It has a programmable interface, so you can run a simple JS script to open your web page and do a simple check.
Or you can write a more complete test using Protractor (https://github.com/angular/protractor). It gives you a nice API for writing tests targeting Selenium WebDriver (http://www.seleniumhq.org/projects/webdriver/), which can drive many browsers, including PhantomJS.
If you do a lot of AngularJS development you should take a look at Protractor anyway.
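As a rough sketch of the Protractor + PhantomJS route (package names are the historical ones; conf.js and its browserName capability are assumptions about a typical config):
npm install -g protractor phantomjs-prebuilt
webdriver-manager update                   # fetch the Selenium standalone jar
webdriver-manager start &                  # Selenium launches phantomjs from the PATH when asked for it
protractor conf.js                         # with capabilities: { browserName: 'phantomjs' } in conf.js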

Anyone Else Having Trouble Registering Ghostdriver with Selenium Grid?

I know that there is documentation on the GhostDriver wiki on how to attach it to a Selenium grid. For those that don't know, you can find it here.
I've compiled the special PhantomJS build twice and tried to attach it to both local and remote Selenium servers, using Selenium versions 2.24 and 2.25, to no avail. It starts up GhostDriver locally just as you would expect, but there's certainly no registering going on.
I tried both ip/localhost:4444 and ip/localhost:4444/grid/register with no results. I also thought perhaps it just didn't show up on the grid console and tried to run tests against it anyway, which failed, stating there was nothing populating the grid.
I've tried this on both CentOS 6 and Ubuntu 12.04 with no luck.
I'm out of ideas. Has anyone else had problems like this?
I had exactly the same problem and fixed it by using the latest version of Selenium Grid.
The correct website is https://code.google.com/p/selenium/wiki/Grid2 (it is no longer http://selenium-grid.seleniumhq.org/).
Here are the steps (version 2.31.0):
Download the selenium-server:
wget https://selenium.googlecode.com/files/selenium-server-standalone-2.31.0.jar
Launch selenium grid server:
java -jar selenium-server-standalone-2.31.0.jar -role hub
In a new terminal, launch GhostDriver:
phantomjs --webdriver=5555 --webdriver-selenium-grid-hub=http://localhost:4444
Check the available remote controls at http://localhost:4444/grid/console.
You should see something like this:
listening on http://127.0.0.1:5555
test session time out after 300 sec.
Supports up to 1 concurrent tests from:
phantomjs
I tested these commands on CentOS 6.3; I hope it works for you!
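For a quick check from the headless box itself (just a handy assumption, not part of the original steps), the grid console page should mention the registered node:
curl -s http://localhost:4444/grid/console | grep -i phantomjs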
