I am very wary of using an all-inclusive application framework for building my SPA. I have done a lot of reading on the subject, but I haven't found any articles on whether these frameworks are composable or not. I have a long-held architectural belief that frameworks should not try to do too much, rather they should:
do one thing only
do it very well
easily compose with other frameworks
Having said this, I'm trying to think outside the "should I use Angular, Ember, or Backbone" box and ask whether there is a way to use more than one. For example, would it be possible to use Angular's templating (which I've heard is awesome) but also use Ember's routing (which supposedly rocks)?
My goal is to make the "all-important SPA framework decision" less important, so that I can change it later. If we go with a mixed approach, it buys us 2 major benefits:
we can rip out the "templating" engine or the "routing" engine or the "whatever" engine individually, and thus not need an entire application re-write to change something we don't like
by figuring out how to make them play nice together, we would be able to switch out individual routes/controllers/views, allowing us to switch frameworks/approaches in small, granular steps
Would this be a reasonable choice or totally fraught with annoying difficulties?
If the answer is the latter, then these frameworks are fundamentally flawed and I will not be using them.
The innards of both Angular and Ember are tightly coupled. You could do portions of a page with just Ember or just Angular, but mixing the routing/templating/data-binding from one framework to another would be an extremely difficult task.
I have been working with Angular for some time now, and I fail to see how it is an improvement over my previous way of coding.
First, I can't see what's wrong with having a central object to hold your project. After all, the injector is a singleton that looks up your dependencies in a central place, so Angular does have a central object; it's just hidden. Namespacing doesn't necessarily mean coupling, if it's done properly. And even when it is done, you don't need every single object in your code to be loosely coupled with the others. Besides, any time you create a standalone JS script, you have to wrap it in Angular to make them play nice together.
Second, it's very verbose to declare all your dependencies every time (especially with minification), so there is no gain from the readability point of view compared to proper namespacing.
Third, the performance gain is minimal. It forces me to use singletons everywhere, but I can do that on my own if I need to, and most of the time, I don't (network and DOM manipulations are my bottleneck, not JS objects).
In the end, I like the "enhanced" HTML and the automatic two-way bindings, but I can't see how the injection makes it any better than the way other frameworks deal with dependencies, given that it doesn't even provide dynamic loading like require.js. I haven't seen any use case where I said to myself "oh, this is where it's so much better than before, I see" while coding.
Could you explain to me what benefits this technical choice brings to a project?
I can see only one for now: convention and best practice enforcement. It's a big one to create a lib ecosystem, but for now I don't see the fruit of it in the Angular community.
For me there are a few ways in which Angular's dependency injection improves my projects, which I will list here. I hope this will show you how OTHERS can benefit from it, but if you are a well-organised and experienced JS developer, then perhaps it might not be the same for you. I think at some point this is just a matter of developing your own tools and coding guide.
Unified, declarative dependency resolving
JS is a dynamic language (that's news, huh?), which gives a lot of power, and even more responsibility, to the programmer. Components can interact with each other in various ways by passing around all sorts of objects: regular objects, singletons, functions, etc. They can even make use of blocks of code which were never meant to be used by other components.
JS has never had (and most likely never will have) a unified way of declaring public, private or package (module) scopes like other languages do (Java, C, C#). Of course there are ways of encapsulating logic, but ask any newcomer to the language and they simply won't know how to use them.
What I like about DI (not only in Angular, but in general) is the fact that you can list the dependencies of your component without being troubled by how each dependency got constructed. This is very important for me, especially since DI in Angular lets you resolve both kinds of components: those from the framework itself (like $http), and custom ones (like my favorite eventBus, which I'm using to wrap $on event handlers).
Very often I look at the declaration of a service and I know what it does and how it does it just by looking at dependencies!
If I had to construct and/or make use of all those objects deep inside the component itself, then I would always have to analyze the implementation thoroughly and check it from various angles. If I see localStorage in the dependencies list, I know for a fact that I'm using HTML5 local storage to save some data. I don't have to look for it in the code.
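As an illustration of what declarative dependency resolving buys you, here is a toy injector sketch in plain JS (all names here are hypothetical; Angular's real $injector is far more involved):

```javascript
// Toy injector sketch: components declare what they need by name, and the
// registry resolves it. Hypothetical illustration only.
const registry = {};

function register(name, factory) {
  registry[name] = { factory, instance: null };
}

function resolve(name) {
  const entry = registry[name];
  if (!entry) throw new Error(`unknown dependency: ${name}`);
  // Lazily construct each component once (singleton semantics).
  if (!entry.instance) entry.instance = entry.factory(resolve);
  return entry.instance;
}

register('localStorage', () => ({
  save: (key, value) => { /* persist key/value somewhere */ }
}));

// The service lists its dependencies; it never constructs them itself.
// Seeing 'localStorage' here tells you at a glance that it persists data.
register('settingsService', (resolve) => {
  const storage = resolve('localStorage');
  return { persist: (settings) => storage.save('settings', settings) };
});
```

The point is the shape of `settingsService`: it declares what it needs and never constructs it, so swapping `localStorage` for a mock requires no change to the service itself.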
Lifespan of components
We don't need to bother anymore about order of initialization of certain components. If A is dependent on B then DI will make sure that B is ready when A needs it.
Unit testing
It helps a lot to be able to mock out components when you are using DI. For instance, if you have a controller: function Ctrl($scope, a, b, c, d) then you instantly know what it depends on. You inject proper mocks, and you make sure that all parties talking and listening to your controller are isolated. If you have trouble writing tests then you have most likely messed up your levels of abstraction or are violating design principles (Law of Demeter, encapsulation, etc.)
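A minimal sketch of that mocking story, using a plain function in place of a real Angular controller (the names and service API are made up for illustration):

```javascript
// Hypothetical controller written DI-style: dependencies arrive as
// parameters, so a test can hand it fakes instead of real services.
function Ctrl(scope, userService) {
  scope.load = function () {
    scope.user = userService.current();
  };
}

// In a test, inject mocks and assert on the controller's behaviour alone:
// no HTTP, no DOM, no real services involved.
const scope = {};
const mockUserService = { current: () => ({ name: 'alice' }) };
Ctrl(scope, mockUserService);
scope.load();
```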
Good habits
Yes, most likely you could use namespacing to properly manage the lifespan of your objects. Define a singleton where it's needed and make sure that no one messes with your private members.
But honestly, would you need that if the framework can do it for you?
I hadn't been using JS "the right way" until I learned Angular. It's not that I didn't care; I just didn't have to, since I was using JS only for some UI tweaks, primarily based on jQuery.
Now it's different: I have a nice framework which forces me a bit to keep up with good practices, but which also gives me great power to extend it and make use of the best features that JS has.
Of course poor programmers can still break even the best tool, but from what I've learned by recently reading "JavaScript: The Good Parts" by D. Crockford, people were doing really nasty stuff with it. Thanks to great tools like jQuery, Angular and others, we now have some nice tools which help us write good JS applications and stick to best practices while doing so.
Conclusion
As you have pointed out, you CAN write good JS applications by doing at least those three things:
Namespacing - this avoids adding stuff to global namespace, avoids potential conflicts and allows for resolving proper components easily where needed
Creating reusable components / modules - by, for instance, using function module pattern and explicitly declaring private and public members
Managing dependencies between components - by defining singletons, allowing dependencies to be retrieved from some 'registry', and disallowing certain operations when certain conditions are not met
Angular simply does that by:
Having $injector which manages dependencies between components and retrieves them when needed
Forcing you to use factory functions, which have both PRIVATE and PUBLIC APIs.
Letting components talk to each other either directly (by depending on one another) or via the shared $scope chain.
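A sketch of that factory-function point in plain JS, without any Angular plumbing (hypothetical example): closure variables act as the PRIVATE side, and the returned object is the PUBLIC API.

```javascript
// Factory pattern sketch: anything held in the closure is private,
// anything on the returned object is public.
function counterFactory() {
  let count = 0; // PRIVATE: unreachable from outside the closure

  return {       // PUBLIC API
    increment() { count += 1; return count; },
    value() { return count; }
  };
}

const counter = counterFactory();
counter.increment();
// counter.count is undefined: no one can mess with the private member.
```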
We are rewriting some of our web applications from ASP.NET MVC + jQuery (and Angular in some places) to ASP.NET Web API (for now it's ASP.NET 4.6, but the future plan is to move to ASP.NET Core) + AngularJS. The main reason for the rewrite is that we don't want HTML rendering on the server side.
Now, some people want to have NodeJS in between the Web API and AngularJS, and some people (including me) cannot see any reason for having it. It could be that no reasons are seen because of a lack of knowledge about NodeJS, but my thoughts are:
If we have AngularJS + Web API, why would we want something in between like a proxy (NodeJS in this case), and go through that proxy instead of going directly to the Web API? Are there any scenarios that can only be done with NodeJS in between and not with the Web API alone? Our applications are simple: fetch some data from the API and present it in the UI. Some authentication is involved as well.
When it comes to back-end technology, isn't one enough? Having two (Web API and Node) at the same time just adds complexity to the application and makes it harder to maintain.
Should we or should we not use Node in this case? Bear in mind that the team does not have a lot of experience with NodeJS, but I hear arguments that Node is very easy to learn, so that's not a big problem.
This is not so much an answer as an extended comment because there isn't an outright question here.
Ultimately it depends on what your reasons for wanting to use NodeJS are. To address your thoughts:
Why would you want a proxy
There are a couple of reasons for having a proxy, such as security and scalability.
For example, suppose you wanted to have your back-end implemented as a series of microservices. Without a proxy, the client side has to know about all of these services' endpoints so it can talk to them. This exposes them to the outside world, which might not be desirable from a security standpoint.
It also makes the client side more complex, since it now has to coordinate calls to the different services, and you'll have to deal with things like CORS on the back-end. Having the client side call a single proxy that also acts as a coordinator, "fanning out" the various calls to the back-end services, tends to be simpler.
It allows you to scale them independently; some services might need to be scaled more than others depending on how heavily they are used. But the client-side is still hitting a single endpoint so it's much easier to manage.
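To make the "fan out" idea concrete, here is a sketch of a proxy coordinator with the back-end services stubbed as async functions (all names are hypothetical; a real proxy would make HTTP calls to the individual services):

```javascript
// Stand-ins for back-end microservices. In reality these would be HTTP
// calls to internal endpoints the browser never sees.
const services = {
  profile: async (userId) => ({ name: 'alice' }),
  orders:  async (userId) => ([{ id: 42 }]),
};

// The client hits one endpoint; the proxy fans out the individual calls
// in parallel and merges the results into a single response.
async function dashboard(userId) {
  const [profile, orders] = await Promise.all([
    services.profile(userId),
    services.orders(userId),
  ]);
  return { profile, orders };
}
```

The browser only ever knows about `dashboard`; the individual services stay behind the proxy and can be scaled or replaced independently.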
Why multiple back-end technologies is not necessarily a bad thing
Having two or more back-end technologies is a trade-off; yes, it can increase the complexity of the system and can make it more difficult to maintain, but it can also make it much easier to implement certain functionality where one technology is better at doing X than another.
For example, there are many NodeJS modules that do X, Y or Z which may be more accessible to you than corresponding functionality written in C# (I'm deliberately not going to list any examples here to avoid muddying the waters).
If you have JavaScript developers who want to get involved with the back-end, they might feel more comfortable working with NodeJS rather than having to ramp up on C#/ASP.NET, thus making them (initially, anyway) more productive.
I find NodeJS really useful for quickly knocking up prototype services so that you can test how they are consumed, etc. Using something like HapiJS, you can have a simple HTTP API up and running with just Notepad in a few minutes. This can be really useful when you're in a hurry :)
If you take the proxy / microservices approach, you can stop worrying too much about what technology is used to implement each service, as long as it supports a common communication protocol (HTTP, Message Queues, etc) within the system.
Ultimately, you need to have conversations about this with your team.
You haven't mentioned if this is something that your peers are pushing for or if this is a decision being pushed by technical leadership; I would take the view that as the incumbent, any new technology needs to prove that there is a good reason for its adoption, but YMMV since this may be a decision that's out of your hands.
My personal recommendation in this case is, don't use NodeJS for the proxy; use ASP.NET WebAPI instead but look hard at your system and try to find ways to split it out into Micro-like services. This lets you keep things simpler in terms of ecosystem, but also clears the way to let you introduce NodeJS for some parts of the application where it has been proven that it is a better tool for the job than .Net. That way everyone is happy :)
There is a very good breakdown comparison here which can be used as part of the discussion, the benchmark included in it is a little old but it shows that there is not that much of a difference performance wise.
The app which I am developing will have the same functionality for all users/clients/projects (call 'em what you will).
However, the HTML forms presented to the user and the AJAX used to send them to the server will vary for each project.
I was thinking of using Angular constants, with ng-show / ng-hide (maybe even ng-if) on the HTML and a switch in the controller, based on a constant for the AJAX send & receive.
Is this a good approach? I can see things getting complex with more than a handful of projects. Should I use a different view/controller for each project? I might lose out on some common code that way, but it's less likely to become spaghetti.
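Concretely, the constant-plus-switch idea I have in mind looks something like this, stripped of the Angular plumbing (the project keys and endpoints are made up): every new project adds another case.

```javascript
// Would be an Angular constant in the real app; hard-coded here for illustration.
const PROJECT = 'acme';

// One code path per project key: easy at first, spaghetti-prone as projects grow.
function buildPayload(formData) {
  switch (PROJECT) {
    case 'acme':
      return { endpoint: '/api/acme/submit', body: { name: formData.name } };
    case 'globex':
      return { endpoint: '/api/globex/forms', body: { fullName: formData.name } };
    default:
      throw new Error(`unknown project: ${PROJECT}`);
  }
}
```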
I would suggest taking a domain/(test) driven approach. Don't generalize too much code up front.
Building generalized code will create dependencies that are all potential victims in need of future refactoring, even in the case of simple changes. Nothing is more time-consuming/frustrating than those small modifications that cause an avalanche. I've seen a lot of projects run out of time because the code base was over-engineered at the start.
My approach, especially for more complex projects with no clear overview of overlapping functionality, is to just start designing/building the functionalities separately from each other in small steps. As in any agile workflow, deliver a complete working feature (a working form), and when you're working on a feature and notice shared functionality built earlier on, make a (wise) decision to refactor/promote the existing code into generalized code. At this stage you'll be in a better position to make such a judgement. If you've taken the test-driven approach (which I highly recommend), refactoring the existing code can be done without too much effort.
Working this way gives a greater guarantee of delivering, and of ending up with good, readable, optimised code.
TL;DR
It all comes down to common sense.
I am new to learning about MVC.
I am wondering if there is a heuristic (non-programmatically speaking) out there for dividing and deciding what logic goes on the front-end as opposed to the back-end, especially when using front-end libraries like backbone.js.
That is, libraries like backbone.js separate data from DOM elements which makes it useful for creating sophisticated client side logic that, perhaps, used to be carried out on the server side.
Thanks in advance
Joey
The "classic" way to do Model - View - Controller is to have all three on the server. The View layer output of HTML and some JS is then rendered by the browser.
Rails is an excellent example of this.
The "new cool" way is to treat the browser as the main computing engine with the backend server providing services via APIs.
In this case, the Model, View and Controller software all run (as JavaScript or CoffeeScript) on the client. Backbone is often part of the browser-side solution, but it has alternatives such as Spine, AngularJS and others.
On the backend server, you run the DBMS and a good API system. There are some good frameworks being built on Ruby/Rack; see posts by Daniel Doubrovkine on code.dblock.org. You have many choices here.
Advantages of MVC on the client
Responsive user interface for the user
Cool Ajaxy single page effects
Single page webapps can provide much faster UI to user than regular web sites
Good architecture, an enabler for purpose-built iPhone/Android apps
Depending on the app, can be used to create standalone webapps which work without a network connection.
This is what many cool kids are doing these days
Disadvantages
Need to decide on approach for old browsers, IE, etc
Making content available for search engines can be tricky. May require shadow website just for the search engines
Testing can be a challenge. But see new libs such as AngularJS which include a testability focus
This approach involves more software: takes longer to write and test.
Choosing
It's up to you. Decision depends on your timeframe, resources, experience, needs, etc etc. There is no need to use backbone or similar. Doing so is a tradeoff (see above). It will always be faster/easier not to use it but doing without it (or similar) may not accomplish your goals.
You can build a great MVC app out of just Rails, or PHP with add-on libs or other MVC solutions.
I think you're using the word heuristic in a non-programmatic sense, correct? I.e. you're using it to mean something along the lines of 'rule of thumb'?
As a rule of thumb:
You want the server to render the initial page load for UX and SEO reasons.
You could also have subsequent AJAX partial page loads rendered by the server for the same reasons. Profile to see which is faster: having the server render and transfer extra data (the markup) over-the-wire vs. sending a more concise payload (with JSON) and having the client render it. There are tradeoffs especially if you take into consideration mobile devices where maybe rendering on the client will be slower, but then again there are mobile devices out there with slower internet connections...
Like any client-server architecture: you want the client to do the things that require fast responsiveness on the client, and then send an asynchronous operation to the server that performs the same task.
The takeaway is vague, but true: it's all about tradeoffs, and you have to decide what your product's needs are.
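To make the markup-vs-JSON tradeoff from above concrete, here is a toy comparison of the two payload styles for the same (made-up) data; which one is smaller depends on how heavy the markup is, which is why profiling is the right call:

```javascript
// The same list, delivered two ways (illustrative data only).
const items = [{ id: 1, name: 'Widget' }];

// Server-rendered partial: markup travels over the wire; the client just
// injects it into the DOM, so client CPU cost is minimal.
const htmlPayload = '<ul>' + items.map((i) => `<li>${i.name}</li>`).join('') + '</ul>';

// JSON payload: the client must render it itself, trading over-the-wire
// shape for client-side rendering work.
const jsonPayload = JSON.stringify(items);
```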
The first two things that come to mind for me are security and search.
You will always want to restrict read/write access on the server.
In most instances you will want to have your search functionality as close to the data as possible.
I've created unmaintainable websites using PHP because it was so easy to do things quick and dirty. I don't want to do the same thing with Python/Django on Google's appengine.
Are there any good architecture references for creating websites using Django and App Engine? (E.g. where to put business logic, where to put data access logic, how to cleanly separate the views, how to do unit testing, etc.)
Django by its nature will make it harder to put things in the wrong places. That is one of the cool things about the new generation of MVC frameworks, you have to work at it to create a ball of mud.
If you decide not to use Django, these hints from the Werkzeug team might be interesting. This application structure takes what's best from Django but gives you complete freedom over the actual layout (no need to have models.py if you do not have any models in your application...).
As already mentioned, by choosing Django you have already taken a big step towards avoiding spaghetti. Django provides you with an MVC framework (Model-Template-View, to be Django-specific). Thus, your job now is to study and properly follow the MVC design pattern which Django is guiding you towards. Where you place your business logic will depend on your specific application and requirements. In some cases, some business logic is placed closer to the data in the models, and at other times it's placed in the controller. Furthermore, GAE doesn't require Django, and in some cases GAE's webapp framework should suffice.