What (AMD) script loader to use for a mobile site?

I'm starting work on a new version of a mobile site. I'm looking into using an AMD script loader and have pretty much narrowed it down to RequireJS and lsjs. I know there are many pros and cons to both, but I'm trying to figure out what they mean for the mobile version of my site. Does anyone have experience with these libraries at the mobile level? I'm just trying to get a discussion going about the best way to go. (Anyone with 1500 rep want to create an lsjs tag? :)) Maybe the creators of these libraries (James Burke or Richard Backhouse) have an opinion on this.
Thanks.
EDIT:
Thanks to Simon Smith for the great info below. Has anyone used lsjs? It looks very promising in terms of speed, but it doesn't have the user base, documentation, or (I think) the feature set of RequireJS/curl.

I would say use RequireJS until you're ready to go to production. Then compile your scripts and replace RequireJS with Almond. It's a bare-bones loader made by James Burke (the author of RequireJS), so you can rely on it to work seamlessly:
Some developers like to use the AMD API to code modular JavaScript,
but after doing an optimized build, they do not want to include a full
AMD loader like RequireJS, since they do not need all that
functionality. Some use cases, like mobile, are very sensitive to file
sizes.
By including almond in the built file, there is no need for RequireJS.
almond is around 1 kilobyte when minified with Closure Compiler and
gzipped.
https://github.com/jrburke/almond
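To make the workflow concrete, here is a sketch of an r.js build profile that wraps the app around almond instead of the full loader. The paths and module names (`js`, `path/to/almond`, `main`) are placeholders for your own project layout:

```javascript
// build.js - r.js build profile (run with: node r.js -o build.js)
({
  baseUrl: 'js',
  // Swap the full RequireJS loader for almond in the built file.
  name: 'path/to/almond',
  // Pull in your app's entry module on top of almond.
  include: ['main'],
  // Kick off the main module once the bundle executes.
  insertRequire: ['main'],
  out: 'main-built.js',
  // Wrap everything in an IIFE so no AMD globals leak out.
  wrap: true
})
```

The resulting `main-built.js` is a single file with no dependency on RequireJS at runtime.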
EDIT:
Curl.js is also an option. I haven't used it, but I know it's a lot smaller than RequireJS. I did a bit of research as to why:
RequireJS does the following over Curl (via James Burke):
Supports multiversion/contexts, useful for mock testing, but you can get by without it
Supports loading plain JS files via require, does not have to be an AMD module
Supports special detection and work with older versions of jQuery (should not be an issue if you use jQuery 1.7.1 or later)
(At the moment) better support for simplified wrapped commonjs style: define(function(require) {});
In short, if you are only going to deal with AMD modules in your app,
do not need the multiversion/context support, and are not using the
simplified commonjs wrapping style, or using an older jQuery, then
curl can be a good choice.
https://groups.google.com/forum/?fromgroups=#!topic/requirejs/niUyLZrivgs
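The "simplified wrapped CommonJS style" mentioned above lets you write synchronous-looking require calls inside define; loaders that support it scan the factory function's source text for require('...') calls to discover dependencies before executing it. Here is a self-contained sketch of that scanning step (not RequireJS's actual implementation, just the idea):

```javascript
// Find the dependency ids a wrapped-CommonJS factory asks for by scanning
// its source text for require('...') calls - the trick AMD loaders use to
// pre-fetch dependencies before invoking the factory.
function scanDependencies(factory) {
  const source = factory.toString();
  const ids = [];
  const re = /require\(\s*["']([^"']+)["']\s*\)/g;
  let match;
  while ((match = re.exec(source)) !== null) {
    ids.push(match[1]);
  }
  return ids;
}

// A module written in the simplified wrapped CommonJS style:
const factory = function (require) {
  const $ = require('jquery');
  const _ = require('underscore');
  return { name: 'example' };
};

console.log(scanDependencies(factory)); // [ 'jquery', 'underscore' ]
```

A real loader would then fetch those ids asynchronously and only call the factory once all of them have loaded.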
And the author of Curl:
RequireJS runs in more places than curl.js, including WebWorkers and
node.js. It's also got more "battle testing" than curl.js, which may
mean it has less bugs around edge cases. curl.js is also missing a few
important features, such as preloading of implicit dependencies and
support for AMD-wrapped commonjs modules. These are both coming in
version 0.6 (late next week).
On the plus side, curl.js...
is as small as 1/4 the size of RequireJS -- even when bundled with the js! and domReady! plugins it is still less than half the size.
is faster at loading modules than RequireJS, but only meaningfully so in IE6-8 or in development (non-build) environments.
supports pluggable module loaders for formats other than AMD (we're working on unwrapped CJSM/1.1 and CJSM/2.0, for instance).
supports configuration-based dependency management via IOC containers like wire.js (via cram.js).
supports inlining of css (via cram.js) and concatenation of css (via cram.js 0.3 by end of year).
https://github.com/cujojs/curl/issues/35#issuecomment-2954344

Back in 2014 I faced the same problem. I had some extra requirements, though, in order to make the site fast on mobile:
Small enough to be inlined (to avoid paying an extra request-tax to get the loader onboard).
Inlined config file (get rid of a request).
Config file in pure JavaScript (no parsing overhead).
Let the browser do the actual loading of files (browsers are smart these days).
Connect all asynchronously loaded modules together.
Support for single-page apps that include legacy code with sprinkled $(function(){...}) constructs, while still loading jQuery late and asynchronously to speed things up.
After evaluating RequireJS, curl, lsjs and a bunch of others, I concluded that none of them came close enough to what I needed for my projects. Eventually I decided to create my own lockandload AMD loader. I didn't open-source it at the time, because that meant writing documentation, but I recently open-sourced it with fresh docs in case it benefits others.
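The "connect all asynchronously loaded modules together" requirement can be sketched as a tiny registry where modules may arrive in any order (e.g. as their script tags finish loading) and a callback fires once every declared dependency is present. This is an illustration of the idea only, not lockandload's actual API; all names are made up:

```javascript
// Tiny module registry: modules may be provided in any order, and waiting
// callers run as soon as all of their dependencies have arrived.
const registry = {};
const waiting = [];

function provide(name, value) {
  registry[name] = value;
  // Re-check all waiting callers now that a new module exists.
  for (let i = waiting.length - 1; i >= 0; i--) {
    const { deps, done } = waiting[i];
    if (deps.every((d) => d in registry)) {
      waiting.splice(i, 1);
      done(...deps.map((d) => registry[d]));
    }
  }
}

function whenReady(deps, done) {
  if (deps.every((d) => d in registry)) {
    done(...deps.map((d) => registry[d]));
  } else {
    waiting.push({ deps, done });
  }
}

// Usage: the callback runs only after both modules have been provided,
// regardless of load order.
let result;
whenReady(['a', 'b'], (a, b) => { result = a + b; });
provide('b', 2);
provide('a', 1);
console.log(result); // 3
```

A loader built this way needs no request of its own: the registry is small enough to inline, and the browser's own script loading does the rest.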

Related

How to build angular project to compatible with Chrome Extension obfuscation policy

We have been working on an Angular project with TypeScript (Visual Studio Code). We deploy this project as a Chrome Extension on the Google Web Store. It worked fine, but two days ago, when we tried to re-publish the extension with the latest changes, the Chrome Web Store rejected the request for the reason below.
Your item did not comply with the following section of our Program
Policies:
"Content Policies"
Developers must not obfuscate code or conceal functionality of their
extension. This also applies to any external code or resource fetched
by the extension package.
Your item was found to have one or more files that does not comply
with this policy.
Please note that minification is allowed in the following forms:
Removal of whitespace, newlines, code comments, and block delimiters
Shortening of variable and function names
Collapsing the number of JavaScript files
For more information, please review these recommended Minification
Techniques for Google Developers.
We build our Angular project with the ng build command.
Our environment parameters:
Angular CLI: 1.5.2
Node: 12.13.0
OS: win32 x64
Angular: 5.2.3
... common, compiler, compiler-cli, core, forms, http
... language-service, platform-browser, platform-browser-dynamic
... router
@angular/animations: 5.2.11
@angular/cdk: 5.2.5
@angular/cli: 1.5.2
@angular/material: 5.2.5
@angular-devkit/build-optimizer: 0.0.42
@angular-devkit/core: 0.2.0
@angular-devkit/schematics: 0.0.52
@ngtools/json-schema: 1.1.0
@ngtools/webpack: 1.8.2
@schematics/angular: 0.1.17
typescript-require: 0.2.9-1
typescript: 2.4.2
webpack: 3.8.1
Can anyone help us build our project so that it complies with the Chrome Extension policies?
Interesting. Sorry to let you down, but I'm afraid it's impossible (without the authors' help) to convert compiled TypeScript into readable, unobfuscated code.
I looked into this, and I'm afraid your only other option is to ask Google to review your source code and checksum it, but that will be a pain for both you and the reviewer every time there's an update to your extension.
Your only realistic option at that point is to convert your TypeScript Angular project into a pure-JavaScript Angular project. Using pure JavaScript is actually not that scary, and should technically speed up parts of the process, if not all of it. The transpilation that occurs when TypeScript compiles into JavaScript will always generate "machine code" - in other words, unreadable, obfuscated-looking output, at least from my point of view. This is purely my opinion, based on the knowledge and experience I have gathered since the start of this computer age (yes, I lived through all of it).
So just to be clear: in its own weird way, TypeScript is JavaScript, which means that core code such as database interactions, classes, functions and HTML can be converted into pure JavaScript without much fuss.
Check this out:
https://www.google.com/search?q=convert+angular+typescript+to+javascript

What is flat bundling and why is Rollup better at this than Webpack?

I have recently been looking into Rollup to see how it differs from Webpack and other bundlers. One thing I came across was that it is better for libraries because of "flat bundling". This is based on a tweet and on a recent PR for React to use Rollup.
In my experience, Rollup is better at building libraries due to better optimizations around flat bundling (e.g. hoisting). 1/2
Webpack 2 may be better for you if you're bundling apps with code-splitting etc though. 2/2
I'm not entirely sure I understand what that means, though. What does "flat bundling" refer to? I know Rollup's documentation mentions tree-shaking to help reduce bundle size, but Webpack also has a way of doing this. Perhaps I just don't understand the concept entirely.
Please note this is NOT a comparison question regarding Rollup vs Webpack; for people interested in that, Webpack publishes a comparison chart. This is primarily asking what flat bundling is, and potentially what Rollup does internally to achieve it.
Edit: Rollup supports code splitting - read article
Edit: Webpack now supports scope hoisting in some situations — read the blog post here
We probably all have different definitions for this stuff, but I think flat bundling simply means 'taking your modules and turning them into a single bundle' — i.e, the 'flat' is redundant. The big difference in React 16 is that you'll consume a premade bundle by default, rather than your app being responsible for bundling React's source modules (though there was always a prebuilt UMD bundle of React available, built with Browserify).
Rather, the big difference between the two is what happens at the module boundaries. The way webpack works is that it wraps each module in a function, and creates a bundle that implements a loader and a module cache. At runtime, each of those module functions is evaluated in turn to populate the module cache. This architecture has lots of advantages — it makes it possible to implement advanced features like code-splitting and on-demand loading, and hot module replacement (HMR).
Rollup takes a different approach — it puts all your code at the same level (rewriting identifiers as necessary to avoid conflicts between variable names etc). This is often referred to as 'scope hoisting'. Because of it, there's no per-module overhead, and no per-bundle overhead. Your bundle is guaranteed to be smaller, and will also evaluate faster because there's less indirection (more information on that — The cost of small modules). The trade-off is that this behaviour relies on ES2015 module semantics, and it means that some of webpack's advanced features are much harder to implement (e.g. Rollup doesn't support code-splitting, at least not yet!).
In short, webpack is generally a better fit for apps, and Rollup is generally a better fit for libraries.
I've put together a small gist illustrating the differences. You can also get a feel for Rollup's output by tinkering with the Rollup REPL.

Integrating markdown into angularjs?

I've started writing a simple app using AngularJS + Node.js to learn more about the stack, and it appears that getting markdown to work is a bit tricky and not that well supported. I come from a Ruby background, where I used the redcarpet markdown library, which was pretty standard and straightforward.
I've come across the angular-markdown-directive:
Pros
Simple to setup
Uses ngSanitize to clean user-submitted markdown. This library is supported by the official Angular team.
Cons
It uses showdown under the hood, which seems to have died a while back, though progress is slowly picking up again under a new maintainer. However, it has quite a few outstanding bugs; two bug reports, dating back to 2013 and 2014, are particularly worrying:
(1) Underscores are interpreted as italics (which will create malformed links):
https://github.com/showdownjs/showdown/issues/96
(2) Security issue that allows XSS still not patched:
https://github.com/showdownjs/showdown/issues/57
I'm not sure if (2) will be an issue in my case, since ngSanitize may help.
There is another library called markdown-it, but it handles markdown in Node.js rather than in Angular, and its examples don't say much about security best practices.
--
Are there any full examples on how markdown can be securely integrated into a Node/Angular app? angular-markdown-directive seems like a good fit but has some painful problems, and most other markdown libraries are either dying/dead or they gloss over security in a production environment.
I decided to use markdown-it. It's pretty flexible; it allows parsing on either the server or the client, so it's up to you how and where you parse the markdown.
For me, I've opted to save the raw markdown text in the database and parse it on the client, and that works very well.
As for security, markdown-it comes with some built-in security measures, which is very nice. There is also a separate security module you can use with it that offers additional features.
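The most important of those built-in measures is that raw HTML in the source is disabled by default (the html option is off), so tags get escaped and rendered as visible text rather than passed through. A minimal, self-contained sketch of that escaping step - not markdown-it's actual code, just the idea - looks like this:

```javascript
// Escape the characters that would let user-supplied markdown inject HTML.
// markdown-it renders raw HTML as escaped text whenever its `html` option
// is false (the default); this standalone function just shows the idea.
const HTML_ESCAPES = {
  '&': '&amp;',
  '<': '&lt;',
  '>': '&gt;',
  '"': '&quot;',
};

function escapeHtml(text) {
  return text.replace(/[&<>"]/g, (ch) => HTML_ESCAPES[ch]);
}

// A <script> tag in user input becomes visible text, not executable markup.
console.log(escapeHtml('<script>alert(1)</script>'));
// &lt;script&gt;alert(1)&lt;/script&gt;
```

If you do need to allow some HTML through, that is when a separate sanitizer on top of the parser becomes essential.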

Is it good to use the AMD versions of Backbone, jQuery, Underscore, etc.?

I am starting my project structure from scratch. I am using require.js, Backbone, Underscore, Bootstrap, etc. I was thinking of using the shim config to load the non-AMD-compatible versions of Backbone, Underscore, etc. But now I think it's better to use AMD (Asynchronous Module Definition) compatible versions of them, since that allows the resources to load in parallel. But where can I find a reliable source for AMD-compatible Underscore, Backbone and Bootstrap? And can I be assured that I will always get the latest AMD-compatible versions of Backbone, Bootstrap and Underscore? Will they break later?
In a word, can anyone advise me whether to use the AMD-compatible versions, or to trade that off against loading time and use the shim config to load the non-AMD versions? I am planning to use require-jquery.
I can only provide one point of view, but from my experience, at this stage it's better just to shim the dependencies. I don't feel that AMD is widely adopted enough yet to get the kind of support you'll need to make everything work nicely together using the AMD versions.
In particular, I had a problem with testing (Jasmine), where my Jasmine tests would be referring to one "jQuery" and my application code would be referring to another one, because neither were globals. I just gave up and switched back to using shims, and managed to get the tests to work (although not without some work).
Not sure if it will help, but here are my personal notes on integrating RequireJS into a BackboneJS/Rails stack. The section on stubbing dependencies might be of interest if you'll be testing your client-side code. I hit quite a few snags along the way...
Yes, it is better. I can say that after developing large-scale apps with RequireJS and Backbone - they work great together. Use a build process that runs r.js to boil your app's JS down to a single file, so there isn't a loader dependency in production. In response to the answer above: we have had no problems integrating this with Jasmine as a unit tester (not that I would personally bother with unit testing; stick with behavioural testing instead).
This is a good starting point for getting an idea of how it fits together: http://net.tutsplus.com/tutorials/javascript-ajax/a-requirejs-backbone-and-bower-starter-template/
Though consider Jam as a package manager, or none at all, and Grunt to create build tasks, etc. It's still useful - just don't treat anything as gospel; try it yourself!
Personally, I don't think using the AMD version of a library is better, because:
1. you rely on the community to maintain the AMD version
2. using shim and exporting the global is better
3. you cannot expect every library to have an AMD version
I have spent hours debugging why the optimized code built via r.js said that Backbone was not found, and I had to remove some code in the Backbone source to make it work.
In short, use shim.
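For reference, a typical shim setup for these libraries looks something like the following (the paths are placeholders for wherever your copies live):

```javascript
// RequireJS shim config for non-AMD builds of Underscore and Backbone.
require.config({
  paths: {
    jquery: 'lib/jquery',         // placeholder paths
    underscore: 'lib/underscore',
    backbone: 'lib/backbone'
  },
  shim: {
    underscore: {
      exports: '_'                // expose the global _ as the module value
    },
    backbone: {
      // Load Underscore and jQuery first, then hand back the global.
      deps: ['underscore', 'jquery'],
      exports: 'Backbone'
    }
  }
});

require(['backbone'], function (Backbone) {
  // Backbone here is the shimmed global, loaded after its dependencies.
});
```

Note that jQuery registers itself as an AMD module, so it needs only a paths entry, not a shim.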

What are best development practices for multi JRE version support?

Our application needs to support 1.5 and 1.6 JVMs. The 1.5 support needs to stay clear of any 1.6 JRE dependencies, whereas the 1.6 support needs to exploit 1.6-only features.
When we change our Eclipse project to use a 1.5 JRE, we get all the dependencies flagged as errors. This is useful to see where our dependencies are, but is not useful for plain development. Committing source with such compile errors also feels wrong.
What are the best practices for this kind of multi JRE version support?
In C land, we had #ifdef compiler directives to solve such things fairly cleanly. What's the cleanest Java equivalent?
If your software must run on both JRE 1.5 and 1.6, then why don't you just develop for 1.5 only? Is there a reason why you absolutely need to use features that are only available in Java 6? Are there no third-party libraries that run on Java 1.5 that contain equivalents for the 1.6-only features that you want to use?
Maintaining two code bases, keeping them in sync etc. is a lot of work and is probably not worth the effort compared to what you gain.
Java, of course, has no preprocessor, so you can't (easily) do conditional compilation like you can in C with preprocessor directives.
It depends, of course, on how big your project is, but I'd say: don't do it. Use Java 5 features only, and if there are Java 6-specific things you think you need, look for third-party libraries that run on Java 5 and implement those things (or even write them yourself - in the long run, that might be less work than trying to maintain two code bases).
Compile most of your code as 1.5. Have a separate source directory for 1.6-specific code. The 1.6 source should depend upon the 1.5, but not vice-versa. Interfacing to the 1.6 code should be done by subtyping types from the 1.5 code. The 1.5 code may have an alternative implementation rather than checking for null everywhere.
Use a single piece of reflection, once, to attempt to load an instance of a root 1.6 class. The root class should check that it is running on 1.6 before allowing an instance to be created (I suggest both compiling with -target 1.6 and calling a 1.6-only method in a static initialiser).
There are a few approaches you could use:
Compile against 1.6 and use testing to ensure functionality degrades gracefully; this is a process I've worked with on commercial products (1.4 target with 1.3 compatibility)
Use version-specific plugins and use the runtime to determine which to load; this requires some sort of plugin framework
Compile against 1.5 and use reflection to invoke 1.6 functionality; I would avoid this because of the added complexity over the first approach at reduced performance
In all cases, you'll want to isolate the functionality and ensure the generated class files have a version of 49.0 (compile with a 1.5 target). You can use reflection to determine method/feature availability when initializing your façade classes.
You could use your source control to help you a little if it does branching easily (git, svn or Perforce would work great). You could have two branches of code, the 1.5 only branch and then a 1.6 branch that branches off of the 1.5 line.
With this you can develop in 1.5 on the 1.5 branch and then merge your changes/bugfixes into the 1.6 branch as needed and then do any code upgrades for specific 1.6 needs.
When you need to release code you build it from whichever branch is required.
For your Eclipse, you can either maintain two workspaces, one for each branch or you can just have two sets of projects, one for each branch, though you will need to have different project names which can be a pain. I would recommend the workspaces approach (though it has its own pains).
You can then specify the required JVM version for each project/workspace as needed.
Hope this helps.
(Added: this would also make for an easy transition at such time when you no longer need the 1.5 support, you just close down that branch and start working only in the 1.6 branch)
One option would be to break the code into 3 projects.
One project would contain common stuff which would work on either version of java.
One project would contain the java6 implementations and would depend on the common project.
One project would contain the java5 implementations and would depend on the common project.
Breaking things into interfaces with implementations that implement those interfaces, you could eliminate any build dependencies. You would almost certainly need dependency injection of one kind or another to help wire your concrete classes together.
Working in Eclipse, you could set the java6 project to target java6, and the other 2 projects to target java5. By selecting the JRE on a project by project basis, you'd show up any dependencies you missed.
By getting a little clever with your build files, you could build the common bit both ways, and depend on the correct version, for deployment - though I'm not sure this would bring much benefit.
You would end up with two separate versions of your application - one for java6, one for java5.