Removing items from Angular library and compressing - angularjs

I was looking at the unminified version of the 1.2.8 Angular library (https://code.angularjs.org/1.2.8/angular.js). There are a number of directives that I will never use in my application, such as:
scriptDirective
ngBindDirective
ngBindHtmlDirective
ngNonBindableDirective
ngPluralizeDirective
ngTranscludeDirective
I removed all of the above, along with the corresponding code/functions. After recompiling my application, it performs as expected.
However, when I attempted to minify/uglify the file, I'm left with a file of ~140 KB. Yet the original minified file was ~98 KB.
I've used minify/uglify from Gulp and various online compressors.
How can I remove elements from the library and minify it so that the resulting file is smaller than the original?
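For reference, the minify step I'm running looks roughly like this (a sketch; task and file names are placeholders, and gulp-uglify is assumed):
var gulp = require('gulp');
var uglify = require('gulp-uglify');

// Minify the trimmed copy of angular.js; mangling and compression
// are on by default.
gulp.task('minify-angular', function () {
  return gulp.src('lib/angular.custom.js')
    .pipe(uglify())
    .pipe(gulp.dest('dist'));
});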

Automatically find dependencies and create CMakeLists.txt with CMake (or CMake Tools in Visual Studio Code) [duplicate]

CMake offers several ways to specify the source files for a target.
One is to use globbing (documentation), for example:
FILE(GLOB MY_SRCS dir/*)
Another method is to specify each file individually.
Which way is preferred? Globbing seems easy, but I heard it has some downsides.
Full disclosure: I originally preferred the globbing approach for its simplicity, but over the years I have come to recognise that explicitly listing the files is less error-prone for large, multi-developer projects.
Original answer:
The advantages to globbing are:
It's easy to add new files, as they are only listed in one place: on disk. Not globbing creates duplication.
Your CMakeLists.txt file will be shorter. This is a big plus if you have lots of files. Not globbing causes you to lose the CMake logic amongst huge lists of files.
The advantages of using hardcoded file lists are:
CMake will track the dependencies of a new file on disk correctly - if you use glob, files that weren't globbed the first time you ran CMake will not get picked up.
You ensure that only files you want are added. Globbing may pick up stray files that you do not want.
In order to work around the first issue, you can simply "touch" the CMakeLists.txt that does the glob, either by using the touch command or by writing the file with no changes. This will force CMake to re-run and pick up the new file.
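For example (a sketch; cmake -E touch is CMake's portable equivalent of the Unix touch command):
# Bump the timestamp of the globbing CMakeLists.txt so that the next build
# re-runs CMake and re-evaluates the glob.
cmake -E touch CMakeLists.txt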
To fix the second problem you can organize your code carefully into directories, which is what you probably do anyway. In the worst case, you can use the list(REMOVE_ITEM) command to clean up the globbed list of files:
file(GLOB to_remove file_to_remove.cpp)
list(REMOVE_ITEM MY_SRCS ${to_remove})
The only real situation where this can bite you is if you are using something like git-bisect to try older versions of your code in the same build directory. In that case, you may have to clean and compile more than necessary to ensure you get the right files in the list. This is such a corner case, and one where you already are on your toes, that it isn't really an issue.
The best way to specify source files in CMake is by listing them explicitly.
The creators of CMake themselves advise not to use globbing.
See: https://cmake.org/cmake/help/latest/command/file.html?highlight=glob#glob
(We do not recommend using GLOB to collect a list of source files from your source tree. If no CMakeLists.txt file changes when a source is added or removed then the generated build system cannot know when to ask CMake to regenerate.)
Of course, you might want to know what the downsides are - read on!
When Globbing Fails:
The big disadvantage to globbing is that creating/deleting files won't automatically update the build-system.
If you are the person adding the files, this may seem an acceptable trade-off. However, it causes problems for other people building your code: they update the project from version control, run the build, and then contact you, complaining that "the build's broken".
To make matters worse, the failure is typically a linking error that gives no hint about the cause of the problem, and time is lost troubleshooting it.
In a project I worked on, we started off globbing, but we got so many complaints when new files were added that it was reason enough to list files explicitly instead of globbing.
This also breaks common git workflows (git bisect and switching between feature branches).
So I can't recommend this; the problems it causes far outweigh the convenience. When someone can't build your software because of this, they may lose a lot of time tracking down the issue or just give up.
One more note: just remembering to touch CMakeLists.txt isn't always enough. With automated builds that use globbing, I had to run CMake before every build, since files might have been added or removed since the last build*.
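In practice that meant something like the following in the build script (a sketch; the build directory layout is assumed, and the -S/-B flags need CMake 3.13+):
# Re-run the configure step before every build so the glob is re-evaluated,
# then build as usual.
cmake -S . -B build
cmake --build build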
Exceptions to the rule:
There are times where globbing is preferable:
For setting up a CMakeLists.txt for an existing project that doesn't use CMake. It's a fast way to get all the sources referenced (once the build system is running, replace globbing with explicit file lists).
When CMake isn't used as the primary build system: for example, if you're using a project that doesn't use CMake and you would like to maintain your own build system for it.
For any situation where the file list changes so often that it becomes impractical to maintain. In this case it could be useful, but then you have to accept running cmake to generate build-files every time to get a reliable/correct build (which goes against the intention of CMake - the ability to split configuration from building).
* Yes, I could have written code to compare the tree of files on disk before and after an update, but this is not a nice workaround and is something better left to the build system.
In CMake 3.12, the file(GLOB ...) and file(GLOB_RECURSE ...) commands gained a CONFIGURE_DEPENDS option which reruns cmake if the glob's value changes.
As that was the primary disadvantage of globbing for source files, it is now okay to do so:
# Whenever this glob's value changes, cmake will rerun and update the build with the
# new/removed files.
file(GLOB_RECURSE sources CONFIGURE_DEPENDS "*.cpp")
add_executable(my_target ${sources})
However, some people still recommend avoiding globbing for sources. Indeed, the documentation states:
We do not recommend using GLOB to collect a list of source files from your source tree. ... The CONFIGURE_DEPENDS flag may not work reliably on all generators, or if a new generator is added in the future that cannot support it, projects using it will be stuck. Even if CONFIGURE_DEPENDS works reliably, there is still a cost to perform the check on every rebuild.
Personally, I consider the benefits of not having to manually manage the source file list to outweigh the possible drawbacks. If you do have to switch back to manually listed files, this can be easily achieved by just printing the globbed source list and pasting it back in.
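For instance, you can dump the globbed list once and paste it into an explicit set() call (a sketch, reusing the sources variable from the example above):
# Print the globbed sources (semicolon-separated); copy the output into an
# explicit set(sources ...) if you ever need to stop globbing.
message(STATUS "sources: ${sources}")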
You can safely glob (and probably should) at the cost of an additional file to hold the dependencies.
Add functions like these somewhere:
# Compare the new contents with the existing file; if it exists and is the
# same, we don't want to trigger a make by changing its timestamp.
function(update_file path content)
    set(old_content "")
    if(EXISTS "${path}")
        file(READ "${path}" old_content)
    endif()
    if(NOT old_content STREQUAL content)
        file(WRITE "${path}" "${content}")
    endif()
endfunction(update_file)
# Creates a file called CMakeDeps.cmake next to your CMakeLists.txt with
# the list of dependencies in it - this file should be treated as part of
# CMakeLists.txt (source controlled, etc.).
function(update_deps_file deps)
    set(deps_file "CMakeDeps.cmake")
    # Normalize the list so it's the same on every machine
    list(REMOVE_DUPLICATES deps)
    foreach(dep IN LISTS deps)
        file(RELATIVE_PATH rel_dep ${CMAKE_CURRENT_SOURCE_DIR} ${dep})
        list(APPEND rel_deps ${rel_dep})
    endforeach(dep)
    list(SORT rel_deps)
    # Update the deps file
    set(content "# generated by make process\nset(sources ${rel_deps})\n")
    update_file(${deps_file} "${content}")
    # Include the file so it's tracked as a generation dependency;
    # we don't need its content.
    include(${deps_file})
endfunction(update_deps_file)
And then go globbing:
file(GLOB_RECURSE sources LIST_DIRECTORIES false *.h *.cpp)
update_deps_file("${sources}")
add_executable(test ${sources})
You're still carting around the explicit dependencies (and triggering all the automated builds!) like before, only it's in two files instead of one.
The only change in procedure comes after you've created a new file. If you don't glob, the workflow is to modify CMakeLists.txt from inside Visual Studio and rebuild; if you do glob, you run cmake explicitly - or just touch CMakeLists.txt.
Specify each file individually!
I use a conventional CMakeLists.txt and a python script to update it. I run the python script manually after adding files.
See my answer here:
https://stackoverflow.com/a/48318388/3929196
I'm not a fan of globbing and never used it for my libraries. But recently I watched a presentation by Robert Schumacher (vcpkg developer) where he recommends treating all your library sources as separate components (for example, private sources (.cpp), public headers (.h), tests, and examples are all separate components) and using separate folders for all of them (similarly to how we use C++ namespaces for classes). In that case, I think globbing makes sense, because it allows you to clearly express this component-based approach and encourages other developers to follow it. For example, your library directory structure could be the following:
/include - for public headers
/src - for private headers and sources
/tests - for tests
You obviously want other developers to follow your convention (i.e., place public headers under /include and tests under /tests). file(GLOB) gives developers a hint that all files in a directory have the same conceptual meaning, and that any file placed in this directory matching the glob expression will be treated the same way (for example, installed during 'make install', in the case of public headers).
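A sketch of how that layout can be expressed with per-component globs (target names are hypothetical, and CONFIGURE_DEPENDS needs CMake 3.12+):
# One glob per component, mirroring the directory convention above.
file(GLOB_RECURSE public_headers CONFIGURE_DEPENDS "include/*.h")
file(GLOB_RECURSE private_sources CONFIGURE_DEPENDS "src/*.cpp" "src/*.h")
file(GLOB_RECURSE test_sources CONFIGURE_DEPENDS "tests/*.cpp")

add_library(mylib ${private_sources} ${public_headers})
target_include_directories(mylib PUBLIC include)

add_executable(mylib_tests ${test_sources})
target_link_libraries(mylib_tests PRIVATE mylib)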

Project-wide obfuscation with Google Closure

I'm using Google's closure compiler (set to compilation_level=ADVANCED_OPTIMIZATIONS) to successfully minify/obfuscate my javascript code (I'm currently doing this semi-manually with a Sublime text plugin).
The vast majority of my javascript is in a single .js file, but of course if I obfuscate this code, and there's other snippets of javascript in my project's html files (perhaps referring to pre-obfuscation function names), then I'm going to run into problems.
What's the best approach to dealing with this dilemma? Ideally I could run a whole project through the compiler which would recognise javascript inside html files and obfuscate them in a consistent way.
Export the functions that you need to call from HTML code; those will not be renamed (minified) by the compiler. Either use the @export annotation as part of the type definition, or call goog.exportSymbol or goog.exportProperty after they are defined. See the section in this wiki page about @export.
See the section Solution: Export the Symbols You Want to Keep on the page about Advanced Compilation and Externs for discussion and yet another way:
function displayNoteTitle(note) {
  alert(note['myTitle']);
}

// Store the function in a global property referenced by a string:
window['displayNoteTitle'] = displayNoteTitle;
You can use obscure names for the things that are exported if you need to. If you have a lot of code in the html files, move that code to functions in your single file and call those functions from html. Closure Compiler will not compile code that is inside an html file.
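For completeness, here is a minimal sketch of the goog.exportSymbol alternative mentioned above (it assumes the Closure Library is loaded; the function is the one from the earlier example):
function displayNoteTitle(note) {
  alert(note['myTitle']);
}

// Register a stable global name that points at the (possibly renamed)
// function, so HTML such as onclick="displayNoteTitle(...)" keeps working
// after ADVANCED_OPTIMIZATIONS.
goog.exportSymbol('displayNoteTitle', displayNoteTitle);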

How can I exclude browserify generated code from code coverage numbers?

I use browserify to bundle all our angular js code into one file. We use karma + jasmine to unit test this one file, app.js. As part of the bundling that browserify does, it injects a single line of code at the beginning of the file:
(function e(t,n,r){function s(o,u){if(!n[o]){if(!t[o]){var a=typeof require=="function"&&require;if(!u&&a)return a(o,!0);if(i)return i(o,!0);var f=new Error("Cannot find module '"+o+"'");throw f.code="MODULE_NOT_FOUND",f}var l=n[o]={exports:{}};t[o][0].call(l.exports,function(e){var n=t[o][1][e];return s(n?n:e)},l,l.exports,e,t,n,r)}return n[o].exports}var i=typeof require=="function"&&require;for(var o=0;o<r.length;o++)s(r[o]);return s})({1:[function(require,module,exports){
I tried putting a /* istanbul ignore next */ comment above that line, but it causes the whole file to be ignored. This one line is killing my branch coverage numbers. Is there any way to ignore this generated code?
It is always preferable to write unit tests against each file before the bundling step. If you test only the bundle, it is difficult to mock the dependencies and keep track of them, and istanbul gives us only a limited number of options. What you want is to skip the bundler's function-definition header, and there is no way to ignore only that particular line. But you can solve this issue by separating the unit tests into different files against the individual sources; that is the preferred and easier approach.
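A sketch of what that can look like in karma.conf.js, assuming per-file specs and the karma-coverage preprocessor (all paths are hypothetical):
// karma.conf.js (excerpt): instrument the individual source files rather
// than the generated bundle, so browserify's prelude never enters the
// coverage report.
module.exports = function (config) {
  config.set({
    frameworks: ['jasmine'],
    files: ['src/**/*.js', 'test/**/*.spec.js'],
    preprocessors: {
      'src/**/*.js': ['coverage']
    },
    reporters: ['progress', 'coverage']
  });
};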

Using header only libraries in biicode

Short:
How do I use header only libraries with biicode?
Medium:
When I try to build a block it includes example directories even though I try to set the dependencies explicitly in the biicode.conf of the published block.
Long:
I'm trying to get the unity framework up and running, using biicode.
Unity is great as a unit testing framework for C because you do not need to compile any libraries. If you do your own mocks, you don't even have to run any scripts - there is just a single .c file to include in your compile and you are golden.
I've published the git repo to my biicode block paulbendixen/Unity and since there is no need for any compilation step beyond the c file that accompanies the header that should be included there is nothing else to do.
However, when I include the file, using #include "paulbendixen/Unity/src/unity.h" I get the error when doing bii cpp:build:
Code.c:2:28: fatal error: ProductionCode.h: No such file or directory
#include "ProductionCode.h"
That file is in the examples folder and should therefore not be compiled when I just want to use the unit-testing part. Changing the [dependencies] section to include unity.h = unity.c unity_internals.h hasn't helped either.
I'm pretty sure the problem should be resolved in the Unity/biicode.conf, but I haven't been able to find a thorough description of this file anywhere.
The simplicity of the Unity library should make it ideal for a build system such as bii, but it seems quite complex to set up.
If it helps, I've used the simple layout and the -r [github for throwtheswitch] option
It is not that simple. Unity uses Rakefiles to build and run the tests, and they have lots of configuration. To upload it to biicode quickly, you can just ignore the tests and publish only the source files. This can be done by writing an ignore.bii file with the contents:
docs/*
test/*
examples/*
*test*
As for the biicode.conf, the only configuration necessary is the include paths:
[paths]
src
extras/fixture/src
You can check that manually defining dependencies is not necessary by running $ bii deps --files *unity.h
With these changes, it is possible to publish it. Nothing to build.
Then, to use it in other projects, I have been able to build simple tests:
#include "unity.h"

void testTrue(void) {
    TEST_ASSERT(1);
    TEST_ASSERT_TRUE(1);
}

int main() {
    testTrue();
}
Just adding the following to the biicode.conf of the new project:
[requirements]
diego/unityfork: 0
[includes]
unity.h: diego/unityfork/src
It would probably be much easier to make biicode run and build the tests without ignoring them if the project used the more typical CMake configuration instead of Rakefiles.

How to smartly include all the necessary files?

Right now my Karma config has the following files to include:
files: [
  'vendor/vendor.js',
  'vendor/angular-mocks/angular-mocks.js',
  'src/components/security/index.js',
  'src/components/security/authentication.js',
  'src/components/security/login-controller.js',
  'src/components/filters/index.js',
  'src/components/filters/toDate.js',
  'src/components/services/index.js',
  'src/components/services/alert.js',
  'src/menu/index.js',
  'src/menu/menu-controller.js',
  'src/user/index.js',
  'src/manage/index.js',
  'src/manage/user/manage-user-controller.js',
  'src/manage/channel/manage-channel-controller.js',
  'src/stream/index.js',
  'src/stream/stream-controller.js',
  'src/messages/index.js',
  'src/messages/messages-controller.js',
  'src/app.js',
  'src/**/*.spec.js'
],
The file vendor/vendor.js is automatically created by a Gulp task that concatenates all vendor files using my Bower config. It's my own JavaScript code that's so hard to include, because order matters a great deal: the index.js files within a folder define the module (and its routes) and thus must be loaded before the individual files, and app.js always has to be loaded last.
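For context, this is the pattern that forces the ordering (module name hypothetical):
// src/menu/index.js - must be loaded first: it creates the module.
angular.module('app.menu', []);

// src/menu/menu-controller.js - loaded later: it looks the module up.
angular.module('app.menu').controller('MenuController', function ($scope) {
  $scope.items = [];
});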
So my question is: how do I do this a bit smarter, for example with a glob that first includes all the index.js files and then all the others?
This is exactly what RequireJS is for. You can use it in your deployed code, but you can also just use it for your tests.
If you do this, your "karma.conf.js" ends up being a little shorter and less volatile. Your RequireJS config file specifies some dependency mapping. Each test spec ends up specifying what dependencies it needs, either in a "declarative" fashion in the "define" call, or sometimes manually through the "require" function (you sometimes need the latter to deal with circular reference problems).
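For example, a spec might declare its dependencies like this (a sketch; the module path is hypothetical and a karma-requirejs setup is assumed):
// login-controller.spec.js - declares its own dependencies, so the file
// order in karma.conf.js no longer matters.
define(['src/components/security/login-controller'], function (loginController) {
  describe('loginController', function () {
    it('is defined', function () {
      expect(loginController).toBeDefined();
    });
  });
});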
Hm I guess asking the question gave me the idea I needed :) This works:
files: [
  'vendor/vendor.js',
  'vendor/angular-mocks/angular-mocks.js',
  'src/*/**/index.js',
  'src/*/**/*.js',
  'src/app.js',
  'src/**/*.spec.js'
],
Adding the *.js after index.js results in debug messages like this:
src/manage/index.js ignored. Already in the list.
Excellent!
