Can somebody explain the `clean` dependency in this gulpfile? - angularjs

I'm trying to study https://github.com/jhades/angularjs-gulp-example/blob/master/gulpfile.js, and I noticed that the build task has a clean dependency. However, most of the other task definitions also list clean as a dependency! So if I run build it will run clean, but what about build-css and build-template-cache, which both also depend on clean? Will clean run again for each of those dependencies? In other words, will running the single command gulp build end up running clean more than once, wiping out the output of other dependencies, or does the first run of clean satisfy the dependency for all the other tasks and prevent clean from running again?
Any pointers will be appreciated.
Please Note
I am NOT asking about what the proper cleaning techniques are! I am specifically asking about the link that I posted... and how IT is handling the clean task.

Dependencies
A dependency in a gulpfile just means that the dependency task has to have run at least once before the task that lists it. For example, this code:
var gulp = require('gulp');

gulp.task('main', ['b', 'a'], function() {
  return gulp;
});

gulp.task('a', ['b'], function() {
  return gulp;
});

gulp.task('b', ['a'], function() {
  return gulp;
});
Will run like this:
main
a
b
Not:
main
a
b
a
b
Repeating infinitely.
Dependency execution order
However, it could just as easily run in the order main, b, then a. This is because gulp starts the dependencies together (concurrently), so their order relative to each other is not guaranteed; this answer explains the difference between synchronous and asynchronous execution.
To avoid this you can use gulp.series in Gulp 4:
gulp.task('main', gulp.series('a', 'b'));
This will always run a and then b, in that order, when you run main.
In Gulp 3 and below, you need to use the run-sequence package instead. Here's an example:
var runSequence = require('run-sequence');

gulp.task('main', function(callback) {
  runSequence('a', 'b', callback);
});
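Applied to the question: in gulp 3 each task runs at most once per invocation, so a shared clean dependency is deduplicated and one gulp build run triggers it exactly once, before any of the build-* tasks write output. The sketch below is a simplified illustration of that pattern, not a copy of the linked gulpfile (the del-based clean task is an assumption for this example):
var gulp = require('gulp');
var del = require('del'); // assumed clean implementation for this sketch

gulp.task('clean', function() {
  return del(['dist/**/*']);
});

// Several tasks declare 'clean' as a dependency...
gulp.task('build-css', ['clean'], function() {
  // build CSS into dist/ ...
});

gulp.task('build-template-cache', ['clean'], function() {
  // build the template cache into dist/ ...
});

// ...but when 'build' runs, gulp 3 resolves the dependency graph and runs
// 'clean' a single time, before either build-* task starts.
gulp.task('build', ['clean', 'build-css', 'build-template-cache']);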

Related

Gulp compiling AngularJS 1 scripts in incorrect order

Why does this happen? When I compile the scripts using gulp, the console displays errors explaining that my directives and/or my controllers are not registered. To correct this error I create the app variable within the controller file, which then produces a new error; when I put the app variable declaration back, everything works fine.
This is my Gulp Script
var gulp = require('gulp'),
  plugins = require('gulp-load-plugins')({
    pattern: ['gulp-*', 'gulp.*'],
    replaceString: /\bgulp[\-.]/
  });

var path = {
  jsFiles: "./js/**",
  scriptFile: "scripts.min.js",
  output: "dist/assets/"
};

var options = {
  ie8: true,
  warnings: true,
  mangle: true
};

gulp.task('scripts', function (cb) {
  return gulp.src(path.jsFiles)
    .pipe(plugins.sourcemaps.init())
    .pipe(plugins.jsdoc3(cb))
    .pipe(plugins.concat(path.scriptFile))
    .pipe(plugins.babel())
    .pipe(plugins.ngAnnotate())
    .pipe(plugins.uglify(options))
    .pipe(plugins.sourcemaps.write("../../maps"))
    .pipe(gulp.dest(path.output))
});
TL;DR: My gulp task sometimes compiles the AngularJS directives and controllers out of order, leaving my app declaration undefined.
When you pass a glob to gulp.src, no order is guaranteed, so it is possible to get the wrong order from time to time. But gulp.src also accepts an array of the paths you need to include, and that guarantees the order.
So try to split your bundle and pass the path to angular.min.js as the first element, like this:
gulp.src(['path/to/angular.min.js', 'path/to/your/code'])
You should sort the Angular files, and there are libraries that do that for you.
https://www.npmjs.com/package/gulp-angular-filesort is one of them.
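As a rough sketch of how gulp-angular-filesort slots into a pipeline (the glob and output names below are placeholders, not taken from the question):
var gulp = require('gulp');
var concat = require('gulp-concat');
var angularFilesort = require('gulp-angular-filesort');

gulp.task('scripts-sorted', function () {
  return gulp.src('./js/**/*.js')
    // Reorders the files so Angular modules are defined before they are used.
    .pipe(angularFilesort())
    .pipe(concat('scripts.min.js'))
    .pipe(gulp.dest('dist/assets/'));
});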

Pass data between gulp tasks without writing to disk

I'm trying to annotate and minify a SystemJS Angular project. SystemJS comes with a build function, but it is not 'gulp-aware'. It is possible to pass the builder an option to minify, but there is no option for ng-annotate, so I will need gulp to do both for me instead.
gulp.task('bundle', function () {
  var options = {};
  // builder here is a systemjs-builder instance
  builder.buildStatic('./assets/app/app.js', options)
    .then(function(data) {
      console.log("then called");
      // make data available for another task
    });
});
How can I combine the above with
gulp.task('productionApp', function() {
  return [source the output from 'bundle']
    .pipe(ngannotate())
    .pipe(uglify())
    .pipe(gulp.dest('./dist'));
});
I could just output the first task to a file, and then .src that in, but that can't be the best way?
The simplest way is to save it in a buffer (actually, a simple object), then make a stream out of it and continue as you would with gulp.src.
Gulp's repository contains a recipe showing how this is done.
Note: you should make all those load-* tasks run at the very beginning; you can either use run-sequence as they have done, or make them dependencies of the "real" tasks.
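Here is a minimal sketch of that save-in-an-object-then-stream idea, loosely following gulp's make-stream-from-buffer recipe (the helper name and the use of data.source are assumptions for illustration, not part of the question's code):
var stream = require('stream');
var Vinyl = require('vinyl'); // the file object type gulp pipelines expect

// Wrap an in-memory string in a one-file object stream that gulp plugins can consume.
function stringToVinylStream(filename, contents) {
  var src = new stream.Readable({ objectMode: true });
  src._read = function () {
    this.push(new Vinyl({
      path: filename,
      contents: Buffer.from(contents)
    }));
    this.push(null); // end of stream
  };
  return src;
}

// Usage once the builder's promise resolves (assuming data.source holds the bundled code):
// stringToVinylStream('app.js', data.source)
//   .pipe(ngannotate())
//   .pipe(uglify())
//   .pipe(gulp.dest('./dist'));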
The yargs package on npm exports the object argv, which is a very clever representation of the command-line parameters. For example, the invocation
gulp -a test -b 123 my-task
is represented during the run by a param argv with value
{ a: 'test', b: 123 }
which is passed to the gulp task my-task and, before it, to all its predecessors.
If one of the predecessors assigns a new prop to argv
argv.newProp = 'newValue'
this prop will be available to all its successors, including the task that you really want to execute.
The instruction const { argv } = require('yargs') is to be put at the start of the gulpfile, and it can be enriched with aliases and defaults. Reference is here
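A minimal sketch of that pattern, reusing the builder call from the question (the bundleSource property name is made up for this illustration):
const { argv } = require('yargs');

// The predecessor task stashes its result on argv instead of writing it to disk.
gulp.task('bundle', function () {
  return builder.buildStatic('./assets/app/app.js', {})
    .then(function (data) {
      argv.bundleSource = data; // whatever the builder resolves with
    });
});

// A later task in the same gulp invocation reads it back.
gulp.task('productionApp', ['bundle'], function () {
  var bundled = argv.bundleSource;
  // ...turn `bundled` into a stream (see the recipe above) and pipe it through
  // ngannotate/uglify as in the question.
});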

Terminate gulp-watch task

I want my gulp watch task to run only once, then stop watching. Right now, if I run 'gulp' from the command line, the watch task keeps the gulp process running. I have to hit Ctrl-C to stop it.
When you run gulp watch, it will keep watching for file changes until you cancel it with Ctrl-C. If you do not want this, you can make a separate gulp task to do your building.
You would then have two tasks: gulp watch and gulp build. As build wouldn't run the watch task, it will stop when finished.
Please take a look at a GitHub project of mine where I do things like this: skeletonSPA on GitHub.
If you take a look at the gulpfile.js, you see that the default task will execute the build task.
The build task in turn will execute info, clean, styles, scripts, images, copy and todo, leaving you with a full build and a working frontend.
After running these tasks, it will stop and return focus to the command line.
Here's a very basic example gulpfile.js:
var gulp = require('gulp'),
  watch = require('gulp-watch'),
  runSequence = require('gulp-sequence'),
  webpack = require('webpack-stream');

gulp.task('webpack', function() {
  return gulp.src('client/index.js')
    .pipe(webpack({
      output: {
        filename: 'dist/bundle.js'
      },
      devtool: 'source-map'
    }))
    .pipe(gulp.dest('.'));
});

// other tasks

gulp.task('build', ['webpack', 'other tasks... ']);

gulp.task('watch', function(callback) {
  gulp.watch('./client/**/*.js', {}, ['webpack']);
  // more watches if required
});

gulp.task('default', function(callback) {
  runSequence('build', 'watch', callback);
});
$ gulp build > will just run the build once and then exit.
$ gulp watch > will start watching (ctrl-c to exit).
$ gulp > will execute build and after that's complete it will run watch (ctrl-c to exit).
Note: I'm using the gulp-sequence plugin to ensure that when running $ gulp the build will complete before the watch begins.
gulp.watch returns an FSWatcher object, which has a .close() method you can call to cancel the watch task:
var watchStream = gulp.watch(
  src,
  {ignoreInitial: true},
  myGulpTask)
  .on("change", function (triggerFileName) {
    watchStream.close();
  });
ignoreInitial: true prevents the task from running when the watcher starts. watchStream.close() cancels the watcher, but the task still runs once for the change that triggered it. So all in all, you get one task run.
This is with Gulp 4.

Is there a way to speed up AngularJS protractor tests?

I have created tests for my application. Everything works, but it runs slowly: even though only a third of the application is tested, it still takes around ten minutes for Protractor to create the test data, fill out the fields, click the submit button, etc.
I am using Google Chrome for the testing. It seems slow as I watch Protractor fill out the fields one by one.
Here's an example of my test suite:
suites: {
  login: ['Login/test.js'],
  homePage: ['Home/test.js'],
  adminPage: ['Admin/Home/test.js'],
  adminObjective: ['Admin/Objective/test.js'],
  adminObjDetail: ['Admin/ObjectiveDetail/test.js'],
  adminTopic: ['Admin/Topic/test.js'],
  adminTest: ['Admin/Test/test.js'],
  adminUser: ['Admin/User/test.js'],
  adminRole: ['Admin/Role/test.js']
},
This is one test group:
login: ['Login/test.js'],
homePage: ['Home/test.js'],
adminUser: ['Admin/User/test.js'],
adminRole: ['Admin/Role/test.js']
This is another test group:
adminPage: ['Admin/Home/test.js'],
adminObjective: ['Admin/Objective/test.js'],
adminObjDetail: ['Admin/ObjectiveDetail/test.js'],
adminTopic: ['Admin/Topic/test.js'],
adminTest: ['Admin/Test/test.js'],
The two groups can run independently, but they must run in the order I have above. After the answers I did read about sharding, but I am not sure if this helps my situation, as my tests need to be run in order. Ideally I would like to have one set of tests run in one browser and the other set in another browser.
I read about headless browsers such as PhantomJS. Does anyone have experience with these being faster?
Any advice on how I could do this would be much appreciated.
We currently use "shardTestFiles: true", which runs our tests in parallel; this could help if you have multiple test files.
I'm not sure what you are testing here, whether it's the data creation or the end result. If the latter, you may want to consider mocking the data creation instead, or bypassing the UI some other way.
Injecting Data
One thing that you can do that will give you a major boost in performance is not to double test. What I mean by this is that you end up filling in dummy data a number of times just to get to a step. It's also one of the major reasons that people need tests to run in a certain order (to speed up data entry).
An example of this is if you want to test filtering on a grid (data-table). Filling in data is not part of this action; it's just an annoying thing that you have to do to get to testing the filtering. By calling a service to add the data you can bypass the UI and Selenium's general slowness (I'd also recommend this on the server side by injecting values directly into the DB using migrations).
A nice way to do this is to add a helper to your page object as follows:
module.exports = {
  projects: {
    create: function(data) {
      return browser.executeAsyncScript(function(data, callback) {
        var api = angular.injector(['ProtractorProjectsApp']).get('apiService');
        api.project.save(data, function(newItem) {
          callback(newItem._id);
        });
      }, data);
    }
  }
};
The code in this isn't the cleanest, but you get the general gist of it. Another alternative is to replace the module with a double or mock using Protractor#addMockModule. You need to add this code before you call Protractor#get(). It will load after your application's modules, overriding a service if it has the same name as an existing one.
You can use it as follows :
var dataUtilMockModule = function () {
  // Create a new module which depends on your data creation utilities
  var utilModule = angular.module('dataUtil', ['platform']);
  // Create a new service in the module that creates a new entity
  utilModule.service('EntityCreation', ['EntityDataService', '$q', function (EntityDataService, $q) {
    /**
     * Returns a promise which is resolved/rejected according to entity creation success
     * @returns {*}
     */
    this.createEntity = function (details, type) {
      // This is your business logic for creating entities
      var entity = EntityDataService.Entity(details).ofType(type);
      var promise = entity.save();
      return promise;
    };
  }]);
};
browser.addMockModule('dataUtil', dataUtilMockModule);
Either of these methods should give you a significant speedup in your testing.
Sharding Tests
Sharding the tests means splitting up the suites and running them in parallel. This is quite simple to do in Protractor. Adding shardTestFiles and maxInstances to your capabilities config should allow you to (in this case) run at most two tests in parallel. Increase maxInstances to increase the number of tests run at once. Note: be careful not to set the number too high; browsers may require multiple threads, and there is also an initialisation cost in opening new windows.
capabilities: {
  browserName: 'chrome',
  shardTestFiles: true,
  maxInstances: 2
},
Setting up PhantomJS (from protractor docs)
Note: We recommend against using PhantomJS for tests with Protractor. There are many reported issues with PhantomJS crashing and behaving differently from real browsers.
In order to test locally with PhantomJS, you'll need to either have it installed globally, or relative to your project. For global install see the PhantomJS download page (http://phantomjs.org/download.html). For local install run: npm install phantomjs.
Add phantomjs to the driver capabilities, and include a path to the binary if using local installation:
capabilities: {
  'browserName': 'phantomjs',
  /*
   * Can be used to specify the phantomjs binary path.
   * This can generally be omitted if you installed phantomjs globally.
   */
  'phantomjs.binary.path': require('phantomjs').path,
  /*
   * Command line args to pass to ghostdriver, phantomjs's browser driver.
   * See https://github.com/detro/ghostdriver#faq
   */
  'phantomjs.ghostdriver.cli.args': ['--loglevel=DEBUG']
}
Another speed tip I've found: for every test I was logging in at the start and logging out when the test was done. Now I check whether I'm already logged in, using the following helper method:
# Login to the system and make sure we are logged in.
login: ->
  browser.get("/login")
  element(By.id("username")).isPresent().then((logged_in) ->
    if logged_in == false
      element(By.id("staff_username")).sendKeys("admin")
      element(By.id("staff_password")).sendKeys("password")
      element(By.id("login")).click()
  )
I'm using grunt-protractor-runner v0.2.4 which uses protractor ">=0.14.0-0 <1.0.0".
This version is two to three times faster than the latest one (grunt-protractor-runner#1.1.4, which depends on protractor#^1.0.0).
So I suggest you give a previous version of Protractor a try.
Hope this helps
Along with the great tips found above, I would recommend disabling Angular/CSS animations to help speed everything up when the tests run in non-headless browsers. I personally use the following code in my test suite, in the onPrepare function of my conf.js file:
onPrepare: function() {
  var disableNgAnimate = function() {
    angular
      .module('disableNgAnimate', [])
      .run(['$animate', function($animate) {
        $animate.enabled(false);
      }]);
  };

  var disableCssAnimate = function() {
    angular
      .module('disableCssAnimate', [])
      .run(function() {
        var style = document.createElement('style');
        style.type = 'text/css';
        style.innerHTML = '* {' +
          '-webkit-transition: none !important;' +
          '-moz-transition: none !important;' +
          '-o-transition: none !important;' +
          '-ms-transition: none !important;' +
          'transition: none !important;' +
          '}';
        document.getElementsByTagName('head')[0].appendChild(style);
      });
  };

  browser.addMockModule('disableNgAnimate', disableNgAnimate);
  browser.addMockModule('disableCssAnimate', disableCssAnimate);
}
Please note: I did not write the above code, I found it online while looking for ways to speed up my own tests.
From what I know:
run tests in parallel
inject data in case you are only testing a UI element
use CSS selectors, not XPath (browsers have a native engine for CSS, and the XPath engine is not as performant as the CSS engine)
run them on high-performance machines
use beforeAll() and beforeEach() as much as possible for instructions that you repeat often across multiple tests (see the sketch below)
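As a small sketch of the last point (Jasmine syntax, as used by Protractor; the loginPage helper and URLs are made up for this illustration):
describe('admin objectives page', function () {
  // Expensive one-time setup runs once per suite instead of once per spec.
  beforeAll(function () {
    loginPage.loginAsAdmin(); // hypothetical page-object helper
  });

  // Cheap per-spec setup stays in beforeEach.
  beforeEach(function () {
    browser.get('/admin/objectives');
  });

  it('lists the existing objectives', function () {
    expect(element.all(by.repeater('objective in objectives')).count())
      .toBeGreaterThan(0);
  });
});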
Using PhantomJS will considerably reduce the time a run takes compared to a GUI-based browser, but a better solution I found is to manage the tests so that each one can run in any order, independently of the others. This can be achieved easily by using an ORM (jugglingdb, sequelize and many more) and TDB frameworks, and to make the tests more manageable you can use the Jasmine or Cucumber framework, which have before and after hooks for individual tests. Then we can gear up with the maximum number of instances our machine can bear, with "shardTestFiles: true".

Code coverage for Protractor tests in AngularJS

I am running some e2e tests in my angularJS app with protractor (as recommended in the angularJS documentation).
I've googled around and cannot find any information on how to measure coverage for my protractor tests.
I think I'm missing something here... is there any way to get a code coverage report for protractor e2e tests? Or is it simply a feature for unit tests?
This is achievable using Istanbul. Here is the process, with some example configurations that I've extracted from our project (not tested):
Instrument your code using the command istanbul instrument. Make sure that istanbul's coverage variable is __coverage__.
// gulpfile.js
var gulp = require('gulp');
var concat = require('gulp-concat');
var istanbul = require('gulp-istanbul');

gulp.task('concat', function () {
  return gulp.src(PATH.src)
    // Instrument for protractor-istanbul-plugin:
    .pipe(istanbul({coverageVariable: '__coverage__'}))
    .pipe(concat('scripts.js'))
    .pipe(gulp.dest(PATH.dest));
});
Configure Protractor with the plugin protractor-istanbul-plugin.
// spec-e2e.conf.js
var istanbulPlugin = require('protractor-istanbul-plugin');

exports.config = {
  // [...]
  plugins: [{ inline: istanbulPlugin }]
};
Run your tests.
Extract the reports using istanbul report.
This approach has worked for me and is easy to combine with coverage reports from unit tests as well. To automate it, I've put step 1 into my gulpfile.js and steps 3 and 4 into the test and posttest scripts in package.json, more or less like this:
// In package.json:
"scripts": {
  "test": "gulp concat && protractor tests/spec-e2e.conf.js",
  "posttest": "istanbul report --include coverage/**/*.json --dir reports/coverage cobertura"
},
If you are using Grunt, you can use the grunt-protractor-coverage plugin; it will do the job for you. You will have to instrument the code first and then use the plugin to create the coverage reports.
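As a rough, untested sketch of what that Grunt setup might look like (the paths are placeholders and the option names are taken from my reading of the plugin's README, so double-check them there):
// Gruntfile.js
module.exports = function (grunt) {
  grunt.initConfig({
    protractor_coverage: {
      options: {
        configFile: 'tests/spec-e2e.conf.js', // your Protractor config
        keepAlive: true,
        coverageDir: 'coverage'               // where the per-run coverage JSON lands
      },
      local: {}
    }
  });

  grunt.loadNpmTasks('grunt-protractor-coverage');
  // Instrument first (e.g. with a grunt istanbul plugin), then run the covered e2e tests.
  grunt.registerTask('e2e-coverage', ['protractor_coverage:local']);
};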
To add to ryanb's answer, I haven't tried this but you should be able to use something like gulp-istanbul to instrument the code and override the default coverage variable, then define an onComplete function on the jasmineNodeOpts object in your Protractor config file. It gets called once right before everything is closed down.
var fs = require('fs'); // needed to write the coverage file

exports.config = {
  // ...
  jasmineNodeOpts: {
    onComplete: function() {
      browser.driver.executeScript("return __coverage__;").then(function(val) {
        fs.writeFileSync("/path/to/coverage.json", JSON.stringify(val));
      });
    }
  }
};
I initially tried the onComplete method suggested by daniellmb, but getting the coverage results only at the end will not include all the results if there were multiple page loads during the tests. Here's a gist that sums up how I got things working: basically I had to create a reporter that added coverage results to the istanbul collector every time a spec finished, and then wrote the reports in the onComplete method. I also had to use a "waitPlugin" as suggested by sjelin to prevent Protractor from exiting before the results were written.
https://gist.github.com/jbarrus/286cee4294a6537e8217
I managed to get it working, but it's a hack at the moment. I use one of the existing grunt istanbul plugins to instrument the code. Then I made a dummy spec that grabs the 'coverage' global variable and writes it to a file. After that, you can create a report with any of the reporting plugins.
The (very over-simplified) test looks like:
var fs = require('fs');

describe('Output the code coverage objects', function() {
  it('should output the coverage object.', function() {
    browser.driver.executeScript("return __coverage__;").then(function(val) {
      fs.writeFileSync("/path/to/coverage.json", JSON.stringify(val));
    });
  });
});
