How to make Protractor work while using Cloud9? - angularjs

I am new to Cloud9 and I am trying to use Protractor for e2e testing. I am running the angular-phonecat examples.
The error is the following:
Using ChromeDriver directly...
/home/ubuntu/workspace/node_modules/protractor/node_modules/selenium-webdriver/lib/atoms/error.js:109
var template = new Error(this.message);
^
UnknownError: chrome not reachable
(Driver info: chromedriver=2.10.267518,platform=Linux 3.14.13-c9 x86_64)
at new bot.Error (/home/ubuntu/workspace/node_modules/protractor/node_modules/selenium-webdriver/lib/atoms/error.js:109:18)
..
I installed ChromeDriver. The only question left is how to install actual Chrome on Cloud9 and run the tests?
Thank you in advance,
cheers,
Haytham

I'm a fan of web-based IDEs and Cloud9 is one of the best. Here is a way to install Xvfb, Chrome, and Protractor for doing AngularJS end-to-end automated testing on Cloud9.
Open a terminal (Xvfb is already installed on c9.io)
install X11 fonts
$ sudo apt-get install -y xfonts-100dpi xfonts-75dpi xfonts-scalable xfonts-cyrillic
install the latest Chrome
$ wget -q -O - \
https://dl-ssl.google.com/linux/linux_signing_key.pub \
| sudo apt-key add -
$ sudo sh -c 'echo "deb http://dl.google.com/linux/chrome/deb/ stable main" \
>> /etc/apt/sources.list.d/google-chrome.list'
$ sudo apt-get update
$ sudo apt-get install -y google-chrome-stable
install protractor
$ npm install -g protractor
update webdriver
$ webdriver-manager update
use the --no-sandbox option with Chrome
As c9.io runs inside a container, this option is needed.
Update the Protractor conf.js to pass the option to Chrome:
capabilities: {
  browserName: 'chrome',
  chromeOptions: {
    args: ['--no-sandbox']
  }
}
run the Protractor tests on headless Chrome
start webdriver with xvfb (headless)
$ xvfb-run webdriver-manager start
run the test in another terminal
$ protractor conf.js
From http://blog.maduma.com

It's not possible to 'install' browsers onto Cloud9 to run browser-based end-to-end test scenarios. The Selenium WebDriver is looking for Chrome to run the tests on, but throws an error because Chrome can't be found in the Cloud9 development environment.
If you are committed to running these tests on an online IDE like Cloud9, your only option is to use a headless browser like PhantomJS, but note this caution from the Protractor docs:
We recommend against using PhantomJS for tests with Protractor. There are many reported issues with PhantomJS crashing and behaving differently from real browsers.
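If you do go down that route anyway, the conf.js change is small; a minimal sketch, assuming the phantomjs npm package is installed so its binary path can be resolved:
// conf.js (sketch only; PhantomJS is discouraged, see the caution above)
exports.config = {
  capabilities: {
    browserName: 'phantomjs',
    // path to the PhantomJS binary, provided by `npm install phantomjs`
    'phantomjs.binary.path': require('phantomjs').path
  },
  specs: ['e2e/*.spec.js']  // adjust to your own spec files
};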
I would recommend pulling your app down locally and running extensive E2E tests across the browsers your users will actually be using to access your app.
Another option is to use a service like Sauce Labs (https://saucelabs.com/) for automated cloud-based cross-browser testing; this needs some configuration in the protractor_conf.js file. Note that there may be additional costs involved with cloud-based testing.
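The Sauce Labs configuration boils down to setting Protractor's sauceUser and sauceKey options so tests run against the remote grid instead of a local Selenium server. A sketch (the environment variable names are illustrative):
// protractor_conf.js (sketch)
exports.config = {
  sauceUser: process.env.SAUCE_USERNAME,   // your Sauce Labs account name
  sauceKey: process.env.SAUCE_ACCESS_KEY,  // your Sauce Labs access key
  multiCapabilities: [
    { browserName: 'chrome' },
    { browserName: 'firefox' }
  ],
  specs: ['e2e/*.spec.js']  // adjust to your own spec files
};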

I just tested this and it is working for me on my Chromebook. It contains all of the steps necessary to complete the first page of https://docs.angularjs.org/tutorial, including setting up the Protractor tests.
create new blank workspace
run these commands
rm -rf * .c9
git clone --depth=16 https://github.com/angular/angular-phonecat.git
cd angular-phonecat
nvm install 7
nvm alias default node
npm install minimatch
sudo npm install npm -g
edit this file
angular-phonecat/package.json
"start": "http-server ./app -a $IP -p $PORT -c-1"
run this command
npm start
click 'Share'
browse to the URL next to 'Application'
yay! the phonecat webapp should be running!
karma
add these lines to karma.conf.js
hostname: process.env.IP,
port: process.env.PORT
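In context, those two lines go inside the config.set() call; a sketch of the relevant part of karma.conf.js (the rest of the file stays as the tutorial generated it):
// karma.conf.js (relevant excerpt)
module.exports = function(config) {
  config.set({
    // bind Karma to the host/port that Cloud9 exposes
    hostname: process.env.IP,
    port: process.env.PORT
    // ...existing frameworks/files/browsers settings stay as they are...
  });
};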
edit package.json
"test": "karma start karma.conf.js --no-browsers"
run this command
npm test
browse to http://<projectName>.<cloud9User>.c9.io:8081
go forth and test!
protractor
run these commands
sudo apt-get update
sudo apt-get install -y xvfb
wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | sudo apt-key add -
sudo sh -c 'echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google-chrome.list'
sudo apt-get update
sudo apt-get install -y google-chrome-stable
edit protractor.conf.js
capabilities: {
  browserName: 'chrome',
  chromeOptions: {
    args: ['--no-sandbox']
  }
}
run these commands
npm install -g protractor
sudo webdriver-manager update
xvfb-run webdriver-manager start
edit protractor.conf.js
baseUrl: 'http://' + process.env.IP + ':' + process.env.PORT + '/'
seleniumAddress: 'http://127.0.0.1:4444/wd/hub'
run this command
protractor protractor.conf.js
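Putting the edits together, the relevant parts of protractor.conf.js end up looking roughly like this (a sketch; the specs and framework settings are whatever the tutorial already defines):
// protractor.conf.js (relevant excerpt)
exports.config = {
  seleniumAddress: 'http://127.0.0.1:4444/wd/hub',
  baseUrl: 'http://' + process.env.IP + ':' + process.env.PORT + '/',
  capabilities: {
    browserName: 'chrome',
    chromeOptions: {
      args: ['--no-sandbox']  // required inside the Cloud9 container
    }
  }
};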

Related

Which ChromeDriver & Headless Chrome versions exist that are compatible with ruby 2.7?

The issue
I have a web scraper running in AWS Lambda, but in a few weeks AWS Lambda will stop supporting Ruby 2.7. I built my scraper last year using this tutorial.
I need to find a version of ChromeDriver & headless Chrome that is compatible with Ruby 2.7, but I don't know exactly where to start.
I have looked at ChromeDriver's downloads portal, but I don't see any indication there that ChromeDriver will work with Ruby 2.7, or any other specific version of Ruby for that matter.
The code I have works by accessing the ChromeDriver binary and starting it inside a specific folder.
I downloaded the specific binaries I am using by running these commands:
# serverless chrome
wget https://github.com/adieuadieu/serverless-chrome/releases/download/v1.0.0-37/stable-headless-chromium-amazonlinux-2017-03.zip
unzip stable-headless-chromium-amazonlinux-2017-03.zip -d bin/
rm stable-headless-chromium-amazonlinux-2017-03.zip
# chromedriver
wget https://chromedriver.storage.googleapis.com/2.37/chromedriver_linux64.zip
unzip chromedriver_linux64.zip -d bin/
rm chromedriver_linux64.zip
Solution
I found the solution to this problem. The Ruby 2.7 runtime that Lambda offers by default runs on top of Amazon Linux 2 (which lacks many important libraries & dependencies); unfortunately, there's nothing you can do to change that.
However, Amazon offers you the ability to run your code in a custom docker image that can be up to 10GB in size.
I fixed this problem by creating my own image using the following Dockerfile
FROM public.ecr.aws/lambda/ruby:2.7
# Install dependencies needed to run MySQL & Chrome
RUN yum -y install libX11
RUN yum -y install dejavu-sans-fonts
RUN yum -y install procps
RUN yum -y install mysql-devel
RUN yum -y install tree
RUN mkdir /var/task/lib
RUN cp /usr/lib64/mysql/libmysqlclient.so.18 /var/task/lib
RUN gem install bundler
RUN yum -y install wget
RUN yum -y groupinstall 'Development Tools'
# Ruby Gems
ADD Gemfile ${LAMBDA_TASK_ROOT}/
ADD Gemfile.lock ${LAMBDA_TASK_ROOT}/
RUN bundle config set path 'vendor/bundle' && \
    bundle install
# Install chromedriver & chromium
RUN mkdir ${LAMBDA_TASK_ROOT}/bin
# Chromium
RUN wget https://github.com/adieuadieu/serverless-chrome/releases/download/v1.0.0-37/stable-headless-chromium-amazonlinux-2017-03.zip
RUN unzip stable-headless-chromium-amazonlinux-2017-03.zip -d ${LAMBDA_TASK_ROOT}/bin/
RUN rm stable-headless-chromium-amazonlinux-2017-03.zip
# Chromedriver
RUN wget https://chromedriver.storage.googleapis.com/2.37/chromedriver_linux64.zip
RUN unzip chromedriver_linux64.zip -d ${LAMBDA_TASK_ROOT}/bin/
RUN rm chromedriver_linux64.zip
# Copy function code
COPY app.rb ${LAMBDA_TASK_ROOT}
WORKDIR ${LAMBDA_TASK_ROOT}
RUN tree
RUN ls ${LAMBDA_TASK_ROOT}/bin
# Set the CMD to your handler (could also be done as a parameter override outside of the Dockerfile)
CMD [ "app.handle" ]
Notes
If your code was previously deployed using a zip file, you will have to either destroy the previous function or create a second function with the code update; it all comes down to how you want to handle deployment.
It is possible to automate the deployment process using the Serverless Framework.

Docker image error "Service chromedriver unexpectedly exited. Status code was: 127"

FROM python:3.7
WORKDIR /opt
RUN curl https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb -o /chrome.deb
RUN dpkg -i /chrome.deb || apt-get install -yf
RUN rm /chrome.deb
ENV CHROMEDRIVER_VERSION 89.0.4389.23
ENV CHROMEDRIVER_DIR /chromedriver
RUN mkdir -p $CHROMEDRIVER_DIR
# Download and install Chromedriver
RUN wget -q --continue -P $CHROMEDRIVER_DIR "http://chromedriver.storage.googleapis.com/$CHROMEDRIVER_VERSION/chromedriver_linux64.zip"
RUN unzip $CHROMEDRIVER_DIR/chromedriver* -d $CHROMEDRIVER_DIR
ENV PATH $CHROMEDRIVER_DIR:$PATH
RUN apt-get update
COPY . .
RUN pip3 install -r requirements.txt
CMD ["python3","code.py"]
code.py
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

try:
    chrome_options = Options()
    chrome_options.add_argument("--headless")
    driver = webdriver.Chrome(options=chrome_options)
    # driver.get needs a full URL including the scheme; bare 'google.com' is rejected
    driver.get('https://google.com')
    print(f'{driver.page_source}')
except Exception as e:
    print(f'111111111 Exception: {e}')
    # fall back to the ChromeOptions spelling if the first attempt fails
    try:
        options = webdriver.ChromeOptions()
        options.headless = True
        driver = webdriver.Chrome(options=options)
        driver.get('https://google.com')
        print(f'{driver.page_source}')
    except Exception as e:
        print(f'222222 Exception: {e}')
print(f'h')
requirements.txt
selenium
When I build this image it builds fine, but it throws the following error when running the image:
Service chromedriver unexpectedly exited. Status code was: 127
Any idea where I am going wrong in the Dockerfile? I tried the other posts available but nothing worked for me.
My machine OS is macOS Catalina and I was trying to configure the code for a remote host, but this is not even working on my local system.
Could the issue be that I'm on one OS while configuring for another?
I tried this post also but nothing worked.
Had the same problem. Fixed it by installing the Chrome browser. Status code 127 generally means a binary could not be executed at all (a missing command or shared library); installing Chrome pulls in the libraries that chromedriver needs.
Please check this answer: How to install Google chrome in a docker container

How to run React Jest e2e tests in GitLab CI-CD pipeline?

Scenario:
I have configured e2e tests using Jest for a React web app. To run the e2e tests locally, I have to start the server from one terminal window with the npm start command and execute the test command npm run test:e2e from another terminal window. I have both Chrome and Firefox installed on my PC, so the e2e tests run properly locally.
Now I want to run these e2e tests as part of the GitLab CI-CD pipeline and am having issues with the following:
How do I ensure that browsers (Chrome/Firefox) are available to the GitLab runner? Some tutorials suggest installing the required browser(s) as part of the pipeline step. Is that the best approach?
Is it possible to achieve the same without installing the browser(s)? For example, using selenium/standalone-chrome images? If yes, how?
Any reference to example links/code is highly appreciated. Thanks.
In the GitLab CI-CD pipeline (for the Chrome browser only at the moment):
E2Etest:
  stage: e2e
  image: node:10.15.3
  variables:
    CI_DEBUG_TRACE: "true"
  allow_failure: false
  script:
    - set -e
    - npm install
    - wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add -
    - sh -c 'echo deb http://dl.google.com/linux/chrome/deb/ stable main > /etc/apt/sources.list.d/google.list'
    - apt-get update
    - apt-get install -y xfonts-100dpi xfonts-75dpi xfonts-scalable xfonts-cyrillic xvfb x11-apps imagemagick google-chrome-stable
    - npm run test:e2e:chrome
    - pkill node
  artifacts:
    paths:
      - coverage
    expire_in: 1 hr

WebDriverError: Cannot define class using reflection

I am using gulp-angular-protractor for end-to-end testing of my web app. I recently started getting the following error, though it used to work perfectly earlier.
Any help to fix this issue will be appreciated.
Details:
OS - Ubuntu 16.04,
node - v6.12.2,
npm - v3.10.10,
Vagrant - v1.9.3,
karma - v0.13.22,
gulp-angular-protractor - v1.1.1,
protractor - v5.1.2,
webdriver-manager - v12.0.6
Removing java-common and installing openjdk-8-jre fixed the issue for me.
To remove java-common and its dependencies, run the command below:
sudo apt-get purge --auto-remove java-common
To install openjdk-8-jre, run the commands below:
sudo add-apt-repository ppa:openjdk-r/ppa
sudo apt-get update
sudo apt-get install openjdk-8-jdk
Additionally, if you have more than one Java version installed on your system, run the command below to set the default Java:
sudo update-alternatives --config java
and type in a number to select a Java version.

Reusable docker image for AngularJS

We have an AngularJS application. We wrote a Dockerfile for it so it's reusable on every system. The Dockerfile isn't best practice and the setup may look odd to some (build and hosting in the same file), but it was created just to run our AngularJS app locally on each developer's PC.
Dockerfile:
FROM nginx:1.10
... Steps to install nodejs-legacy + npm
RUN npm install -g gulp
RUN npm install
RUN gulp build
.. steps to move dist folder
We build our image with docker build -t myapp:latest .
Every developer is able to run our app with docker run -d -p 80:80 myapp:latest
But now we're developing other backends, so we have a backend in DEV, a backend in UAT, ...
That means there are different URLs which we need to use in /config/xx.json:
{
  ...
  "service_base": "https://backend.test.xxx/",
  ...
}
We don't want to change that URL every time, rebuild the image, and start it. We also don't want to declare a fixed set of URLs (dev, uat, prod, ...) to choose from. We want to perform our gulp build process with an environment variable instead of a hardcoded URL.
So we can start our container like this:
docker run -d -p 80:80 --env URL=https://mybackendurl.com app:latest
Does anyone have experience with this kind of issue? We'd need an env variable in our JSON, build the app, and add the URL afterwards, if that's possible.
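Concretely, what we have in mind is something like a small gulp task that stamps the environment variable into the config before the rest of the build runs (a sketch; the task name, file path, and fallback URL are illustrative):
// gulpfile.js (sketch)
var gulp = require('gulp');
var fs = require('fs');

gulp.task('config', function() {
  var config = JSON.parse(fs.readFileSync('config/xx.json', 'utf8'));
  // use the env variable when set, otherwise keep a sensible default
  config.service_base = process.env.URL || 'https://backend.test.xxx/';
  fs.writeFileSync('config/xx.json', JSON.stringify(config, null, 2));
});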
EDIT: A better option is to use build args.
Instead of passing the URL in the docker run command, you can use Docker build args. It is better to have build-related commands executed during docker build than docker run.
In your Dockerfile,
ARG URL
And then run
docker build --build-arg URL=<my-url> .
See this stackoverflow question for details
This was my 'solution'. I know it isn't the best Docker approach, but it was a big help for our developers.
My Dockerfile looks like this:
FROM nginx:1.10
RUN apt-get update && \
apt-get install -y curl
RUN sed -i "s/httpredir.debian.org/`curl -s -D - http://httpredir.debian.org/demo/debian/ | awk '/^Link:/ { print $2 }' | sed -e 's#<http://\(.*\)/debian/>;#\1#g'`/" /etc/apt/sources.list
RUN \
apt-get clean && \
apt-get update && \
apt-get install -y nodejs-legacy && \
apt-get install -y npm
WORKDIR /home/app
COPY . /home/app
RUN npm install -g gulp
RUN npm install
COPY start.sh /
CMD ["./start.sh"]
So after copying in the whole app and doing the npm installation inside my nginx image, I start my container with the start.sh script.
The content of start.sh:
#!/bin/bash
sed -i 's#my-url#'"$DATA_ACCESS_URL"'#' configs/config.json
gulp build
rm -r /usr/share/nginx/html/
# copy the folders created by gulp build to /usr/share/nginx/html
...
# start nginx in the foreground
/usr/sbin/nginx -g "daemon off;"
So the build happens when my container starts. Not the best way, of course, but it's all for the needs of the developers: an easy local frontend.
The sed command performs a replace on the config file, which contains something like:
{
  "service_base": "my-url"
}
So my-url will be replaced by the content of the environment variable which I define in my docker run command.
Then I'm able to run:
docker run -d -p 80:80 -e DATA_ACCESS_URL=https://mybackendurl.com app:latest
And every developer can use the frontend locally and connect to their own backend URL.
