I am looking to parameterize the baseURL so I can run in several different environments, like "local", "dev", "test", "prod", etc. I can think of two ways of doing this:
1. Pass the baseURL param at run time.
2. Create a separate gulp task for each baseURL.
I think I want to go with option #2 but wanted to check with some others on this.
Thanks,
Tyler
There is not much of a difference between 1. and 2., as you have to pass the baseUrl param in a gulp task either way.
Both of them require a code change for every new environment (which means commit, push, pull request, finding someone for code review... at least for me :) ).
I think it is better to separate the baseUrl from the code completely and move it into an environment variable:
onPrepare: function() {
    // load environment variables for testing
    if (typeof process.env.BASE_URL !== "undefined") {
        browser.baseUrl = process.env.BASE_URL;
        console.log('Base URL = ' + browser.baseUrl);
    }
}
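With this in place, switching environments is just a matter of how you launch the run, for example BASE_URL=http://localhost:9000 protractor conf.js (the URL here is purely illustrative); no code change or extra gulp task is needed for a new environment.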
I have a different URL for our API depending on whether it's development or production for a React app.
Using webpack, how can I set an env var like __API_URL__ and then change it depending on whether the build uses webpack.config.dev or webpack.config.prod?
I thought the answer may be in webpack.DefinePlugin but no luck.
new webpack.DefinePlugin({
__API_URL__: 'localhost:3005',
}),
I'd expect __API_URL__ to be available as a global but no such luck.
What would be the right way to do this? Also, a key point is that there is no express server on the prod deploy, so this has to happen during the build...
As Michael Rasoahaingo said, the DefinePlugin works much like replacing values with regular expressions: it replaces the value literally in your source code. I would not recommend using the DefinePlugin for this kind of task.
If you want to switch configs based on the environment, you could use resolve.alias for that. Just import your config like this:
var config = require("config");
and then add a mapping in your webpack.config.js:
resolve: {
alias: {
config$: require.resolve("path/to/real/config/file")
}
}
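For example, each webpack config could point the alias at a different file; a minimal sketch, where the file paths are illustrative:

// webpack.config.dev.js (sketch; paths are illustrative)
resolve: {
    alias: {
        config$: require.resolve("./config/config.dev.js")
    }
}

// webpack.config.prod.js (sketch; paths are illustrative)
resolve: {
    alias: {
        config$: require.resolve("./config/config.prod.js")
    }
}

require("config") in your source then resolves to whichever file the active build config maps it to.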
DefinePlugin is not working as you expected. It doesn't expose __API_URL__ as a global variable.
According to the documentation: "The values will be inlined into the code which allows a minification pass to remove the redundant conditional."
So it will find every occurrence of __API_URL__ and replace it:
var apiUrl = __API_URL__
and
__API_URL__: '"localhost:3005"' // note the ' and "
become
var apiUrl = "localhost:3005"
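Because the replacement is literal, the value you pass has to be a string of code, which is why it is usually wrapped in JSON.stringify. A minimal sketch, where the env variable name and the fallback URL are illustrative:

new webpack.DefinePlugin({
    // JSON.stringify turns 'localhost:3005' into '"localhost:3005"', so the inlined value is a valid string literal
    __API_URL__: JSON.stringify(process.env.API_URL || 'localhost:3005')
})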
I'm trying to annotate and minify a SystemJS Angular project. SystemJS comes with a build function, but it is not 'gulp-aware'. There is the possibility to pass the builder an option to minify, but there is not one for ng-annotate, so I will need gulp to do both for me instead.
gulp.task('bundle', function () {
    var options = {};
    builder.buildStatic('./assets/app/app.js', options)
        .then(function(data) {
            console.log("then called");
            // make data available for another task
        });
});
How can I combine the above with
gulp.task('productionApp', function() {
return [source the output from 'bundle']
.pipe(ngannotate())
.pipe(uglify())
.pipe(gulp.dest('./dist'));
});
I could just output the first task to a file, and then .src that in, but that can't be the best way?
The simplest way is to save it inside a buffer (actually, a simple object), then make a stream out of it and continue as you would with src.
Gulp's repository contains a recipe showing how it's done.
Note: you should make all those load-* tasks run at the very beginning; you can either use run-sequence as they've done or make them dependencies of the "real" tasks.
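A minimal sketch of that idea applied to the two tasks above, assuming gulp 3-style task dependencies, the vinyl-source-stream, vinyl-buffer, gulp-ng-annotate and gulp-uglify packages, and that the SystemJS builder promise resolves with the bundled code as output.source (the cache object is illustrative):

var gulp = require('gulp');
var source = require('vinyl-source-stream');
var buffer = require('vinyl-buffer');
var ngAnnotate = require('gulp-ng-annotate');
var uglify = require('gulp-uglify');

var bundleCache = {}; // plain object that holds the bundle between tasks

gulp.task('bundle', function () {
    return builder.buildStatic('./assets/app/app.js', {})
        .then(function (output) {
            bundleCache.source = output.source; // keep the bundled code in memory
        });
});

gulp.task('productionApp', ['bundle'], function () {
    var stream = source('app.js');    // fake file name for the in-memory stream
    stream.end(bundleCache.source);   // push the bundled code into the stream
    return stream
        .pipe(buffer())               // the plugins below expect buffered vinyl files
        .pipe(ngAnnotate())
        .pipe(uglify())
        .pipe(gulp.dest('./dist'));
});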
The yargs package on npm exports the object argv, which is a very clever representation of the command-line params. For example, the invocation
gulp -a test -b 123 my-task
is represented during the run by a param argv with value
{ a: 'test', b: 123 }
which is passed to the gulp task my-task and, before it, to all its predecessors.
If one of the predecessors assigns a new prop to argv
argv.newProp = 'newValue'
this prop will be available to all its successors, including the task that you really want to execute.
The instruction const { argv } = require('yargs') should be put at the start of the gulpfile, and it can be enriched with aliases and defaults; see the yargs documentation for reference.
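A minimal sketch of the whole flow, assuming gulp 3-style task dependencies (the task names and the extra prop are illustrative):

const gulp = require('gulp');
const { argv } = require('yargs');

// Predecessor task: adds a prop to argv that later tasks can read
gulp.task('set-env', function () {
    argv.newProp = argv.newProp || 'newValue';
});

// Invoked as: gulp -a test -b 123 my-task
gulp.task('my-task', ['set-env'], function () {
    console.log(argv.a, argv.b, argv.newProp); // test 123 newValue
});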
In my project I build some of my HTTP requests like so:
var options = {
    params: {
        foo: 'bar',
        hello: 'world'
    }
};
$http.get("my/service", options)
Which means that the final HTTP call looks something like my/service?foo=bar&hello=world
How do I setup my $httpBackend to account for this?
The problems I see are:
I'm not guaranteed order in my parameters with this style, which means setting up the first parameter in expectGET will be hard.
It's hard to test calls when I really don't care about the parameters.
The best solution I've found to this is to use the override of expect/when that accepts a function, and to mix it in with this answer. That gets me to something like
$httpBackend.expectGET(function(url) {
    var parser = document.createElement('a');
    parser.href = url;
    return parser.search.indexOf('foo=bar') > -1 &&
           parser.search.indexOf('hello=world') > -1;
});
or if I don't care about the parameters I can just test for parser.pathname.
However, this still isn't perfect because it's a pain to test that I don't have extra parameters, so I'm still actively looking for an alternative solution.
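One way to also catch extra parameters is to parse the query string into an object and compare it exactly; a sketch of that idea, where the parseQuery helper is purely illustrative:

function parseQuery(url) {
    var parser = document.createElement('a');
    parser.href = url;
    var params = {};
    parser.search.replace(/^\?/, '').split('&').forEach(function (pair) {
        if (!pair) return;
        var parts = pair.split('=');
        params[decodeURIComponent(parts[0])] = decodeURIComponent(parts[1] || '');
    });
    return params;
}

$httpBackend.expectGET(function (url) {
    // angular.equals fails on extra or missing keys, not just on different values
    return angular.equals(parseQuery(url), { foo: 'bar', hello: 'world' });
}).respond(200, {});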
If you don't care about the parameters, you can just use a regex to match the static part of the URL:
$httpBackend.whenGET(/my\/service/)
I have created tests for my application. Everything works, but it runs slowly, and even though only 1/3 of the application is tested it still takes around ten minutes for Protractor to create the test data, fill out the fields, click the submit button, etc.
I am using Google Chrome for the testing. It seems slow as I watch Protractor fill out the fields one by one.
Here's an example of my test suite:
suites: {
login: ['Login/test.js'],
homePage: ['Home/test.js'],
adminPage: ['Admin/Home/test.js'],
adminObjective: ['Admin/Objective/test.js'],
adminObjDetail: ['Admin/ObjectiveDetail/test.js'],
adminTopic: ['Admin/Topic/test.js'],
adminTest: ['Admin/Test/test.js'],
adminUser: ['Admin/User/test.js'],
adminRole: ['Admin/Role/test.js']
},
This is one test group:
login: ['Login/test.js'],
homePage: ['Home/test.js'],
adminUser: ['Admin/User/test.js'],
adminRole: ['Admin/Role/test.js']
This is another test group:
adminPage: ['Admin/Home/test.js'],
adminObjective: ['Admin/Objective/test.js'],
adminObjDetail: ['Admin/ObjectiveDetail/test.js'],
adminTopic: ['Admin/Topic/test.js'],
adminTest: ['Admin/Test/test.js'],
The two groups can run independently, but they must run in the order I have above. After the answers I did read about sharding, but I am not sure if this helps my situation as my tests need to run in order. Ideally I would like to have one set of tests run in one browser and the other set in another browser.
I read about headless browsers such as PhantomJS. Does anyone have experience with these being faster?
Any advice on how I could do this would be much appreciated.
We currently use "shardTestFiles: true", which runs our tests in parallel; this could help if you have multiple test files.
I'm not sure what you are testing here, whether it's the data creation or the end result. If the latter, you may want to consider mocking the data creation instead or bypassing the UI some other way.
Injecting Data
One thing that you can do that will give you a major boost in performance is to not double test. What I mean by this is that you end up filling in dummy data a number of times to get to a step. It's also one of the major reasons that people need tests to run in a certain order (to speed up data entry).
An example of this is if you want to test filtering on a grid (data-table). Filling in data is not part of this action; it's just an annoying thing that you have to do to get to testing the filtering. By calling a service to add the data you can bypass the UI and selenium's general slowness (I'd also recommend this on the server side by injecting values directly into the DB using migrations).
A nice way to do this is to add a helper to your pageobject as follows:
module.exports = {
    projects: {
        create: function(data) {
            return browser.executeAsyncScript(function(data, callback) {
                // grab the app's API service straight from the injector and save the data, bypassing the UI
                var api = angular.injector(['ProtractorProjectsApp']).get('apiService');
                api.project.save(data, function(newItem) {
                    callback(newItem._id);
                });
            }, data);
        }
    }
};
The code in this isn't the cleanest but you get the general gist of it. Another alternative is to replace the module with a double or mock using Protractor#addMockModule. You need to add this code before you call Protractor#get(). It will load after your application's modules, overriding any existing service with the same name.
You can use it as follows:
var dataUtilMockModule = function () {
    // Create a new module which depends on your data creation utilities
    var utilModule = angular.module('dataUtil', ['platform']);
    // Create a new service in the module that creates a new entity
    utilModule.service('EntityCreation', ['EntityDataService', '$q', function (EntityDataService, $q) {
        /**
         * Returns a promise which is resolved/rejected according to entity creation success
         * @returns {*}
         */
        this.createEntity = function (details, type) {
            // This is your business logic for creating entities
            var entity = EntityDataService.Entity(details).ofType(type);
            var promise = entity.save();
            return promise;
        };
    }]);
};
browser.addMockModule('dataUtil', dataUtilMockModule);
Either of these methods should give you a significant speedup in your testing.
Sharding Tests
Sharding the tests means splitting up the suites and running them in parallel. Doing this is quite simple in Protractor. Adding shardTestFiles and maxInstances to your capabilities config should allow you to (in this case) run at most two tests in parallel. Increase maxInstances to increase the number of tests run at once. Note: be careful not to set the number too high. Browsers may require multiple threads and there is also an initialisation cost in opening new windows.
capabilities: {
browserName: 'chrome',
shardTestFiles: true,
maxInstances: 2
},
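If you also want each group to run in its own browser instance, multiCapabilities is another option; a sketch, assuming a Protractor version that supports per-capability specs (the file lists are illustrative):

multiCapabilities: [{
    browserName: 'chrome',
    // first group, run in its own browser instance
    specs: ['Login/test.js', 'Home/test.js', 'Admin/User/test.js', 'Admin/Role/test.js']
}, {
    browserName: 'chrome',
    // second group, run in parallel with the first
    specs: ['Admin/Home/test.js', 'Admin/Objective/test.js', 'Admin/Topic/test.js']
}]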
Setting up PhantomJS (from protractor docs)
Note: We recommend against using PhantomJS for tests with Protractor. There are many reported issues with PhantomJS crashing and behaving differently from real browsers.
In order to test locally with PhantomJS, you'll need to either have it installed globally, or relative to your project. For global install see the PhantomJS download page (http://phantomjs.org/download.html). For local install run: npm install phantomjs.
Add phantomjs to the driver capabilities, and include a path to the binary if using local installation:
capabilities: {
    'browserName': 'phantomjs',
    /*
     * Can be used to specify the phantomjs binary path.
     * This can generally be omitted if you installed phantomjs globally.
     */
    'phantomjs.binary.path': require('phantomjs').path,
    /*
     * Command line args to pass to ghostdriver, phantomjs's browser driver.
     * See https://github.com/detro/ghostdriver#faq
     */
    'phantomjs.ghostdriver.cli.args': ['--loglevel=DEBUG']
}
Another speed tip I've found: for every test I was logging in, and logging out after the test was done. Now I check whether I'm already logged in, using the following in my helper method:
# Login to the system and make sure we are logged in.
login: ->
  browser.get("/login")
  element(By.id("username")).isPresent().then((logged_in) ->
    if logged_in == false
      element(By.id("staff_username")).sendKeys("admin")
      element(By.id("staff_password")).sendKeys("password")
      element(By.id("login")).click()
  )
I'm using grunt-protractor-runner v0.2.4 which uses protractor ">=0.14.0-0 <1.0.0".
This version is 2 or 3 times faster than the latest one (grunt-protractor-runner#1.1.4 depending on protractor#^1.0.0)
So I suggest you give a previous version of protractor a try.
Hope this helps
Along with the great tips found above I would recommend disabling Angular/CSS Animations to help speed everything up when they run in non-headless browsers. I personally use the following code in my Test Suite in the "onPrepare" function in my 'conf.js' file:
onPrepare: function() {
    var disableNgAnimate = function() {
        angular
            .module('disableNgAnimate', [])
            .run(['$animate', function($animate) {
                $animate.enabled(false);
            }]);
    };
    var disableCssAnimate = function() {
        angular
            .module('disableCssAnimate', [])
            .run(function() {
                var style = document.createElement('style');
                style.type = 'text/css';
                style.innerHTML = '* {' +
                    '-webkit-transition: none !important;' +
                    '-moz-transition: none !important;' +
                    '-o-transition: none !important;' +
                    '-ms-transition: none !important;' +
                    'transition: none !important;' +
                    '}';
                document.getElementsByTagName('head')[0].appendChild(style);
            });
    };
    browser.addMockModule('disableNgAnimate', disableNgAnimate);
    browser.addMockModule('disableCssAnimate', disableCssAnimate);
}
Please note: I did not write the above code, I found it online while looking for ways to speed up my own tests.
From what I know:
run tests in parallel
inject data in case you are only testing a UI element
use CSS selectors, not XPath (browsers have a native engine for CSS, and the XPath engine is not as performant as the CSS engine)
run them on high-performance machines
use beforeAll() and beforeEach() as much as possible for instructions that you repeat often in multiple tests (a sketch follows below)
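As an illustration of the beforeAll() point, a sketch assuming a jasmine2-based Protractor setup (beforeAll is a jasmine2 feature); the spec, the loginHelper and the repeater expression are all illustrative:

describe('admin user page', function () {
    beforeAll(function () {
        // log in once for the whole spec instead of before every single test
        loginHelper.login('admin', 'password');
    });

    beforeEach(function () {
        browser.get('/admin/users');
    });

    it('lists the existing users', function () {
        expect(element.all(by.repeater('user in users')).count()).toBeGreaterThan(0);
    });
});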
Using PhantomJS will considerably reduce the duration compared to a GUI-based browser, but a better solution I found is to organise the tests so that each one can run in any order, independently of the other tests. This can be achieved with an ORM (jugglingdb, sequelize and many more) and TDB frameworks, and to make the tests more manageable you can use the Jasmine or Cucumber framework, which has before and after hooks for individual tests. With that in place you can gear up with the maximum number of instances your machine can bear using "shardTestFiles: true".
I am developing a frontend using Backbone.js and require.js, and everything was going well until I needed to create a file named config.js to store some default values to use throughout the application.
Below is the code of the config.js file:
// Filename: config.js
define([''], function(){
    var baseUrl = "http://localhost:8888/client/",
        apiServer = "http://api-server:8888";
    return function(type){
        return eval(type);
    };
});
In one of my views I would require config.js, and then I can access the value of both
var baseUrl = "http://localhost:8888/client/",
apiServer = "http://api-server:8888";
via the line of code below, which I can put inside any *.js file in my application:
var baseUrl = config('baseUrl');
console.log(baseUrl); //prints out this > http://localhost:8888/client/
The problem here is that I am using eval to look up whichever value I need to retrieve. I know it's not a safe method to use, but could anyone suggest a safer solution?
RequireJS lets you define objects just like you define more complicated modules. You can have a config module and then use it in whichever other files that require it.
Inside config.js you can do:
define({
baseUrl:"http://localhost:8888/client/",
apiServer:"http://api-server:8888"
});
Then require it in other modules:
//someotherfile.js , defining a module
define(["config"],function(config){
config.baseUrl;// will return the correct value here
//whatever
});
Side note: You can use actual global state (defining the variable on window) but I strongly urge you not to, since this will make testing hard and will make the dependency implicit rather than explicit. Explicit dependencies should always be preferred. In the above code, unlike with a global, it's perfectly clear that the configuration is required by the modules using it.
Note: if you want values whose keys are not valid identifiers, you can use bracket syntax too, config["baseUrl"]; the two (that and config.baseUrl) are identical in JavaScript.
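As a quick illustration of wiring this into one of your views, a sketch where the view and the endpoint path are purely illustrative:

// someview.js (sketch)
define(["backbone", "config"], function(Backbone, config){
    return Backbone.View.extend({
        initialize: function(){
            // e.g. "http://api-server:8888/api/v1"
            this.apiRoot = config.apiServer + "/api/v1";
        }
    });
});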
As an alternative solution (and an uglier one than Benjamin's) you can put both urls into an object:
define([''], function(){
var urls = {
baseUrl: "http://localhost:8888/client/",
apiServer: "http://api-server:8888"
};
return function(type){
return urls[type];
};
});
Still, simply exporting an object is much cleaner.