Scenario:
I have configured e2e tests using Jest for a React web app. To run the e2e tests locally, I have to start the server from one terminal window with the npm start command and, from another terminal window, execute the test command npm run test:e2e. I have both Chrome and Firefox installed on my PC, so the e2e tests run properly locally.
Now I want to run these e2e tests as part of a GitLab CI/CD pipeline, and I am having trouble with the following:
How do I ensure that the browsers (Chrome/Firefox) are available to the GitLab runner? Some tutorials suggested installing the required browser(s) as part of a pipeline step. Is that the best approach?
Is it possible to achieve the same without installing the browser(s)? For example, using selenium_standalone-chrome images? If yes, how do I do it?
Any reference to an example link/code would be highly appreciated. Thanks.
My GitLab CI/CD pipeline (Chrome only at the moment):
E2Etest:
  stage: e2e
  image: node:10.15.3
  variables:
    CI_DEBUG_TRACE: "true"
  allow_failure: false
  script:
    - set -e
    - npm install
    - wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add -
    - sh -c 'echo deb http://dl.google.com/linux/chrome/deb/ stable main > /etc/apt/sources.list.d/google.list'
    - apt-get update
    - apt-get install -y xfonts-100dpi xfonts-75dpi xfonts-scalable xfonts-cyrillic xvfb x11-apps imagemagick google-chrome-stable
    - npm run test:e2e:chrome
    - pkill node
  artifacts:
    paths:
      - coverage
    expire_in: 1 hr
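As an alternative to installing Chrome inside the job image, GitLab CI can attach a Selenium standalone container as a service next to the job. A minimal sketch of that approach — note that the service alias and the SELENIUM_HOST variable are assumptions of mine, and the test runner would have to be configured to use a remote WebDriver at http://selenium:4444/wd/hub instead of launching a local browser:

```yaml
E2Etest:
  stage: e2e
  image: node:10.15.3
  services:
    # Selenium standalone container with Chrome baked in; reachable
    # from the job under the alias "selenium".
    - name: selenium/standalone-chrome:latest
      alias: selenium
  variables:
    # Hypothetical variable the test runner would read to build the
    # remote WebDriver URL (http://selenium:4444/wd/hub).
    SELENIUM_HOST: selenium
  script:
    - npm install
    - npm run test:e2e:chrome
```

With this setup the job image needs no browser at all; the trade-off is that the tests must speak the remote WebDriver protocol rather than driving a locally launched browser.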
Related
I am trying to reach a React application inside CircleCI so that it can be reached by Cypress for e2e tests. First I tried running the app with the webpack dev server, but the issue was the same.
When Cypress runs the tests, it cannot reach the instance the React app is being served from, so I am trying to use http-server instead.
This is what I have at the moment:
version: 2.1
jobs:
  cypress:
    working_directory: ~/project
    docker:
      - image: cypress/base:18.12.1
        environment:
          CYPRESS_baseUrl: http://127.0.0.1:3000
    steps:
      - checkout:
          path: ~/project
      - run:
          name: Install dependencies
          command: npm ci --ignore-scripts
      - run:
          name: Build
          command: npm run build
      - run:
          name: Install http-server
          command: npm install -g http-server
      - run:
          name: Serve React app
          command: |
            http-server ./dist/ -a 127.0.0.1 -p 3000 &
      # - run:
      #     name: Wait for server to start
      #     command: npx wait-on http-get:http://127.0.0.1:3000
      - run:
          name: cypress install
          command: npx cypress install
      - run:
          name: Run cypress tests
          command: npx cypress run

# Invoke jobs via workflows
# See: https://circleci.com/docs/2.0/configuration-reference/#workflows
workflows:
  e2e_test:
    jobs:
      - cypress
When I run http-server ./dist/ -a 127.0.0.1 -p 3000 in the foreground, I can see the server is starting:
Starting up http-server, serving ./dist/
http-server version: 14.1.1
http-server settings:
CORS: disabled
Cache: 3600 seconds
Connection Timeout: 120 seconds
Directory Listings: visible
AutoIndex: visible
Serve GZIP Files: false
Serve Brotli Files: false
Default File Extension: none
Available on:
http://127.0.0.1:3000
Hit CTRL-C to stop the server
When I run it in the background and the script reaches the tests, I get this:
[667:0213/085129.291037:ERROR:gpu_memory_buffer_support_x11.cc(44)] dri3 extension not supported.
Cypress could not verify that this server is running:
> http://127.0.0.1:3000
We are verifying this server because it has been configured as your baseUrl.
Cypress automatically waits until your server is accessible before running tests.
We will try connecting to it 3 more times...
We will try connecting to it 2 more times...
We will try connecting to it 1 more time...
Cypress failed to verify that your server is running.
Please start this server and then run Cypress again.
Exited with code exit status 1
CircleCI received exit code 1
I tried waiting for the server with: npx wait-on http-get:http://127.0.0.1:3000 but it just stays waiting forever.
I had the same issue when I ran the React app with the Webpack server. The app and the tests run without issues in my dev env.
Any help would be greatly appreciated.
For anyone else facing this issue: I solved it by running the server and the tests in the same process. I did not know that every command runs in a separate process, as described in this post. The code changed to the following, thanks to ChatGPT suggesting the nohup command, which runs the specified command ignoring hang-up signals so that it continues to run even if the terminal session is closed:
jobs:
  cypress:
    working_directory: ~/project
    docker:
      - image: cypress/base:18.12.1
        environment:
          CYPRESS_baseUrl: http://127.0.0.1:3000
    steps:
      - checkout:
          path: ~/project
      - run:
          name: Install dependencies
          command: npm ci --ignore-scripts
      - run:
          name: Build
          command: npm run build
      - run:
          name: Install http-server
          command: npm install -g http-server
      - run:
          name: cypress install
          command: npx cypress install
      - run:
          name: Serve React app and run test
          command: |
            nohup http-server ./dist/ -a 127.0.0.1 -p 3000 > /dev/null 2>&1 &
            npx wait-on http://127.0.0.1:3000 && npx cypress run
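The key point is that the background server and the wait/test commands share one step, i.e. one shell process. A tiny stand-in sketch of that pattern, using a background job plus a readiness file instead of a real server and wait-on (both stand-ins are hypothetical, for illustration only):

```shell
#!/bin/sh
# Background "server" (stand-in for http-server): ready after ~1 second.
(sleep 1; touch /tmp/server-ready) &

# Stand-in for wait-on: poll until ready, or give up after 10 tries.
tries=0
until [ -f /tmp/server-ready ]; do
  tries=$((tries + 1))
  [ "$tries" -ge 10 ] && { echo "server never came up"; exit 1; }
  sleep 1
done
echo "server is up"   # ...and only now would the tests run
```

If the two lines were split into separate CI steps, the background job would die with its step's shell, which is exactly the failure described above.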
The Dockerfile builds locally, but the GitLab pipeline fails, saying:
Step 3/8 : RUN gem install bundler rake
ERROR: While executing gem ... (Gem::FilePermissionError)
You don't have write permissions for the /usr/local/bundle directory.
The project structure is a Ruby Sinatra backend and a React frontend.
The Dockerfile looks like this
FROM ruby:3.0-alpine
# Install Dependencies
RUN apk update && apk add --no-cache build-base mysql-dev rrdtool
RUN gem install bundler rake
# Copy the files and build
WORKDIR /usr/server
COPY . .
RUN bundler install
# Run bundler
EXPOSE 443
CMD ["bundle", "exec", "puma"]
I thought Docker was meant to solve the problem of "it runs on my machine"...
What I've tried
As per this post, I tried adding -n /usr/local/bundle but it did not fix the issue.
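One common workaround (an assumption about the cause, not a confirmed fix for this pipeline) is to point RubyGems and Bundler at a directory the build user can definitely write to, instead of the image default /usr/local/bundle. A sketch of how the top of the Dockerfile could look; the .gems path is an arbitrary choice of mine:

```dockerfile
FROM ruby:3.0-alpine

# Install gems into a directory the build user owns (hypothetical path),
# sidestepping the permission error on /usr/local/bundle.
ENV GEM_HOME=/usr/server/.gems \
    BUNDLE_PATH=/usr/server/.gems \
    PATH=/usr/server/.gems/bin:$PATH

RUN apk update && apk add --no-cache build-base mysql-dev rrdtool
RUN gem install bundler rake
```

The rest of the Dockerfile (WORKDIR, COPY, bundler install) would stay as it is.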
I am using gulp-angular-protractor for end-to-end testing of my web app. Recently I have been getting the following error, although it used to work perfectly before.
Any help to fix this issue will be appreciated.
Details:
OS - Ubuntu 16.04,
node - v6.12.2,
npm - v3.10.10,
Vagrant - v1.9.3,
karma - v0.13.22,
gulp-angular-protractor - v1.1.1,
protractor - v5.1.2,
webdriver-manager - v12.0.6
Removing java-common and installing openjdk-8-jre fixed the issue for me.
To remove java-common and its dependencies, run the command below:
sudo apt-get purge --auto-remove java-common
To install OpenJDK 8, run the commands below:
sudo add-apt-repository ppa:openjdk-r/ppa
sudo apt-get update
sudo apt-get install openjdk-8-jdk
Additionally, if you have more than one Java version installed on your system, run the command below to set the default Java:
sudo update-alternatives --config java
and type in a number to select a Java version.
We have an AngularJS application. We wrote a Dockerfile for it so it's reusable on every system. The Dockerfile doesn't follow best practices, and building and hosting in the same file may look like a weird setup to some, but it was created just to run our AngularJS app locally on every developer's PC.
Dockerfile:
FROM nginx:1.10
... Steps to install nodejs-legacy + npm
RUN npm install -g gulp
RUN npm install
RUN gulp build
.. steps to move dist folder
We build our image with docker build -t myapp:latest .
Every developer is able to run our app with docker run -d -p 80:80 myapp:latest
But now we're developing other backends. So we have a backend in DEV, a backend in UAT, ...
So there are different URLS which we need to use in /config/xx.json
{
...
"service_base": "https://backend.test.xxx/",
...
}
We don't want to change that URL every time, rebuild the image and start it. We also don't want to declare some URLS (dev, uat, prod, ..) which can be used there. We want to perform our gulp build process with an environment variable instead of a hardcoded URL.
So we can start our container like this:
docker run -d -p 80:80 --env URL=https://mybackendurl.com app:latest
Does anyone have experience with this kind of issue? We need an environment variable in our JSON and a way to build it so we can add the URL later on, if that's possible.
EDIT: a better option is to use build args
Instead of passing the URL at docker run, you can use Docker build args. It is better for build-related commands to be executed during docker build than during docker run.
In your Dockerfile,
ARG URL
And then run
docker build --build-arg URL=<my-url> .
See this stackoverflow question for details
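Putting the ARG approach together, a minimal Dockerfile sketch — the base image, step names, and the assumption that the gulp build reads URL from the environment are all mine, based on the setup described above, not the actual project:

```dockerfile
FROM node:16
WORKDIR /app
COPY . .

# Build-time argument; its value is baked into the bundle during
# docker build and is not needed at docker run time.
ARG URL

RUN npm install -g gulp && npm install
# Hypothetical: the gulp build picks up URL from the environment
# and writes it into /config/xx.json.
RUN URL=$URL gulp build
```

Then build each environment's image with its own value, e.g. docker build --build-arg URL=https://mybackendurl.com -t app:latest .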
This was my 'solution'. I know it isn't the best Docker approach, but it was a big help just for our developers.
My dockerfile looks like this:
FROM nginx:1.10
RUN apt-get update && \
apt-get install -y curl
RUN sed -i "s/httpredir.debian.org/`curl -s -D - http://httpredir.debian.org/demo/debian/ | awk '/^Link:/ { print $2 }' | sed -e 's#<http://\(.*\)/debian/>;#\1#g'`/" /etc/apt/sources.list
RUN \
apt-get clean && \
apt-get update && \
apt-get install -y nodejs-legacy && \
apt-get install -y npm
WORKDIR /home/app
COPY . /home/app
RUN npm install -g gulp
RUN npm install
COPY start.sh /
CMD ["./start.sh"]
So after copying in the whole app and doing the npm installation inside my nginx image, I start my container with the start.sh script.
The content of start.sh:
#!/bin/bash
sed -i 's#my-url#'"$DATA_ACCESS_URL"'#' configs/config.json
gulp build
rm -r /usr/share/nginx/html/
# copy the folders created by gulp build to /usr/share/nginx/html
...
# start nginx in the foreground
/usr/sbin/nginx -g "daemon off;"
So the build happens when my container starts. Not the best way, of course, but it serves the developers' needs: an easy local frontend.
The sed command will perform a replace on the config file which contains something like:
{
"service_base": "my-url",
}
So my-url will be replaced by the content of the environment variable that I define in my docker run command. Then I'm able to run:
docker run -d -p 80:80 -e DATA_ACCESS_URL=https://mybackendurl.com app:latest
And every developer can use the frontend locally and connect with their own backend URL.
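The substitution itself can be demonstrated standalone; in this sketch the temp-file path is mine, not the project's actual configs/config.json:

```shell
#!/bin/sh
# Write a config file with the placeholder, as the image ships it.
printf '{\n  "service_base": "my-url"\n}\n' > /tmp/config.json

# Same sed call as start.sh: swap the placeholder for the env variable.
# '#' is used as the sed delimiter because the URL contains '/'.
DATA_ACCESS_URL="https://mybackendurl.com"
sed -i 's#my-url#'"$DATA_ACCESS_URL"'#' /tmp/config.json

cat /tmp/config.json
```

Note that sed -i with no suffix argument is GNU sed syntax, which matches the Debian-based nginx image used above.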
I am new to Cloud9 and I am trying to use Protractor for e2e testing. I am running the angular-phonecat examples.
The error is the following:
Using ChromeDriver directly...
/home/ubuntu/workspace/node_modules/protractor/node_modules/selenium-webdriver/lib/atoms/error.js:109
var template = new Error(this.message);
^
UnknownError: chrome not reachable
(Driver info: chromedriver=2.10.267518,platform=Linux 3.14.13-c9 x86_64)
at new bot.Error (/home/ubuntu/workspace/node_modules/protractor/node_modules/selenium-webdriver/lib/atoms/error.js:109:18)
..
I installed chromedriver. The only remaining question is how to install the actual Chrome browser on Cloud9 and run the tests.
Thank you in advance,
cheers,
Haytham
I'm a fan of web-based IDEs, and Cloud9 is one of the best. Here is a way to install Xvfb, Chrome, and Protractor for AngularJS end-to-end automated testing on Cloud9.
Open a terminal (Xvfb is already installed on c9.io)
install X11 fonts
$ sudo apt-get install -y xfonts-100dpi xfonts-75dpi xfonts-scalable xfonts-cyrillic
install the latest Chrome
$ wget -q -O - \
https://dl-ssl.google.com/linux/linux_signing_key.pub \
| sudo apt-key add -
$ sudo sh -c 'echo "deb http://dl.google.com/linux/chrome/deb/ stable main" \
>> /etc/apt/sources.list.d/google-chrome.list'
$ sudo apt-get update
$ sudo apt-get install -y google-chrome-stable
install protractor
$ npm install -g protractor
update webdriver
$ webdriver-manager update
use the --no-sandbox option with Chrome
As c9.io runs inside a container, this option is needed.
Update the Protractor conf.js to pass the option to Chrome:
capabilities: {
browserName: 'chrome',
'chromeOptions': {
args: ['--no-sandbox']
}
}
run protractor test on headless chrome
start webdriver with xvfb (headless)
$ xvfb-run webdriver-manager start
run the test on other terminal
$ protractor conf.js
From http://blog.maduma.com
It's not possible to 'install' browsers on Cloud9 to run browser-based end-to-end test scenarios. The Selenium web driver is looking to launch Chrome to run the tests on, but throws an error because Chrome can't be found in the Cloud9 development environment.
If you are committed to running these tests on an online IDE like Cloud9, your only option is to use a headless browser like PhantomJS, but note this caution from the Protractor docs:
We recommend against using PhantomJS for tests with Protractor. There are many reported issues with PhantomJS crashing and behaving differently from real browsers.
I would recommend pulling your app down locally and running extensive E2E tests across the browsers your users will actually be using to access your app.
Another option is to use something like Saucelabs (https://saucelabs.com/) for automated cloud-based cross-browser testing; this will need some configuration in the protractor_conf.js file. Note that there may be additional costs involved with cloud-based testing.
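For the Sauce Labs route, Protractor has built-in support via the sauceUser/sauceKey config options, which replace the local Selenium server. A minimal sketch — the env-variable names, capability values, and spec glob are placeholders, not this project's actual settings:

```javascript
// protractor_conf.js — sketch of a Sauce Labs-backed config
exports.config = {
  // When these are set, Protractor runs the tests against Sauce Labs
  // instead of a local Selenium server (placeholder env vars).
  sauceUser: process.env.SAUCE_USERNAME,
  sauceKey: process.env.SAUCE_ACCESS_KEY,
  capabilities: {
    browserName: 'chrome',
    platform: 'Linux'
  },
  specs: ['e2e/**/*.spec.js']
};
```

The app under test still has to be reachable from Sauce Labs' browsers, e.g. via a public URL or a tunnel.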
I just tested this and it is working for me on my chromebook. It contains all of the steps necessary to complete the first page of https://docs.angularjs.org/tutorial, including setting up the protractor tests.
create new blank workspace
run these commands
rm -rf * .c9
git clone --depth=16 https://github.com/angular/angular-phonecat.git
cd angular-phonecat
nvm install 7
nvm alias default node
npm install minimatch
sudo npm install npm -g
edit this file
angular-phonecat/package.json
"start": "http-server ./app -a $IP -p $PORT -c-1"
run these commands
npm start
click 'Share'
browse to url next to 'Application'
yay! the phonecat webapp should be running!
karma
add these lines to karma.conf.js
hostname: process.env.IP,
port: process.env.PORT
edit package.json
"test": "karma start karma.conf.js --no-browsers"
run this command
npm test
browse to http://<projectName>.<cloud9User>.c9.io:8081
go forth and test!
protractor
run these commands
sudo apt-get update
sudo apt-get install -y xvfb
wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | sudo apt-key add -
sudo sh -c 'echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google-chrome.list'
sudo apt-get update
sudo apt-get install -y google-chrome-stable
edit protractor.conf.js
capabilities: {
'browserName': 'chrome',
'chromeOptions': {
args: ['--no-sandbox']
}
}
run these commands
npm install -g protractor
sudo webdriver-manager update
xvfb-run webdriver-manager start
edit protractor.conf.js
baseUrl: 'http://' + process.env.IP + ':' + process.env.PORT + '/'
seleniumAddress: 'http://127.0.0.1:4444/wd/hub'
run these commands
protractor protractor.conf.js