AngularJS + NodeJS + Web API application architecture

We are rewriting some of our web applications from ASP.NET MVC + jQuery (and Angular in some places) to ASP.NET Web API (for now it's ASP.NET 4.6, but the future plan is to move to ASP.NET Core) + AngularJS. The main reason for the rewrite is that we don't want HTML rendering on the server side.
Now, some people want to have NodeJS in between the Web API and AngularJS, and some people (including me) cannot see any reason for having it. It could be that no reasons are seen because of a lack of knowledge about NodeJS, but my thoughts are:
If we have AngularJS + Web API, why would we want something in between like a proxy (NodeJS in this case) and go through that proxy instead of going directly to the Web API? Are there any scenarios that can be done only with NodeJS in between, and not with the Web API alone? Our applications are simple: fetch some data from the API and present it in the UI. Some authentication is involved as well.
When it comes to back-end technology, isn't one enough? Having two (Web API and Node) at the same time just adds complexity to the application and makes it harder to maintain.
Should we or should we not use Node in this case? Bear in mind that the team does not have a lot of experience with NodeJS, but I hear the argument that Node is very easy to learn, so that's not a big problem.

This is not so much an answer as an extended comment because there isn't an outright question here.
Ultimately it depends on what the reasons for wanting to use NodeJS are. To address your thoughts:
Why would you want a proxy
There are a couple of reasons for having a proxy, such as security and scalability.
For example, suppose you wanted to have your back-end implemented as a series of microservices. Without a proxy, the client side has to know about all of these services' endpoints so it can talk to them. This exposes them to the outside world, which might not be desirable from a security standpoint.
It also makes the client side more complex, since it now has to coordinate calls to the different services, and you'll have to deal with things like CORS on the back-end. Having the client side call a single proxy that also acts as a coordinator, "fanning out" the various calls to the back-end services, tends to be simpler.
A proxy also allows you to scale the services independently; some services might need to be scaled more than others depending on how heavily they are used. The client side is still hitting a single endpoint, so it's much easier to manage.
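The coordinator idea above can be sketched in a few lines. This is only an illustration: the service names, stub calls, and payloads are made up, and in a real Node proxy each entry would be an HTTP call to an internal back-end service.

```javascript
// A minimal sketch of the "fan out" coordinator: the proxy exposes one
// endpoint and merges the results of several back-end calls.
async function fanOut(services) {
  // Call every back-end service in parallel and merge the results into
  // one response object keyed by service name.
  const entries = await Promise.all(
    Object.entries(services).map(async ([name, call]) => [name, await call()])
  );
  return Object.fromEntries(entries);
}

// Stand-ins for real internal service calls (assumptions):
const services = {
  users:  async () => [{ id: 1, name: "Ann" }],
  orders: async () => [{ id: 7, total: 42 }],
};

// The client hits one proxy endpoint and gets both payloads back.
fanOut(services).then(combined =>
  console.log(Object.keys(combined)) // [ 'users', 'orders' ]
);
```

The client never learns the internal endpoints; only the proxy does.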
Why multiple back-end technologies are not necessarily a bad thing
Having two or more back-end technologies is a trade-off: yes, it can increase the complexity of the system and make it more difficult to maintain, but it can also make certain functionality much easier to implement where one technology is better at doing X than another.
For example, there are many NodeJS modules that do X, Y or Z which may be more accessible to you than corresponding functionality written in C# (I'm deliberately not going to list any examples here to avoid muddying the waters).
If you have JavaScript developers who want to get involved with the back-end, they might feel more comfortable working with NodeJS rather than having to ramp up on C#/ASP.NET, thus making them (initially, anyway) more productive.
I find NodeJS really useful for quickly knocking up prototype services so that you can test how they are consumed, etc. Using something like HapiJS, you can have a simple HTTP API up and running with just Notepad in a few minutes. This can be really useful when you're in a hurry :)
If you take the proxy / microservices approach, you can stop worrying too much about what technology is used to implement each service, as long as it supports a common communication protocol (HTTP, Message Queues, etc) within the system.
Ultimately, you need to have conversations about this with your team.
You haven't mentioned whether this is something your peers are pushing for or a decision being pushed by technical leadership. I would take the view that, as the incumbent, any new technology needs to prove there is a good reason for its adoption; but YMMV, since this may be a decision that's out of your hands.
My personal recommendation in this case is: don't use NodeJS for the proxy; use ASP.NET Web API instead, but look hard at your system and try to find ways to split it out into micro-like services. This keeps things simpler in terms of ecosystem, but also clears the way to introduce NodeJS for the parts of the application where it has been proven to be a better tool for the job than .NET. That way everyone is happy :)
There is a very good breakdown comparison here which can be used as part of the discussion; the benchmark included in it is a little old, but it shows that there is not much of a difference performance-wise.

Related

Possible to Build a Frontend Connected Directly to a Database?

I typically build web apps with a frontend (using something like React) and a backend (using Ruby/Rails or Java/Spring). I recently became curious whether it's possible to forgo a backend completely and just have your client code call your database directly. No APIs, etc. Just the frontend connecting directly to the db for everything you need. No use of Node, Express, or other "backend" solutions for JS.
If someone has a tutorial or example to show, that would be great.
If it is possible, what are the disadvantages to doing this besides that it's not standard? I am guessing there are probably security issues (but they aren't coming to mind). You are shipping more code than is necessary to the client, perhaps causing performance issues. Lastly, it might be a little harder to encapsulate business logic. The advantage to me seems to be simplicity, only dealing with 1 technology, etc.

Logic app or Web app?

I'm trying to decide whether to build a Logic App or a Web App.
It has to do things I'm quite comfortable doing in C#: receive messages in various formats (a few thousand per day), translate them, make API calls and forward them. None of the endpoints are widely used, so the out-of-the-box connectors won't be a benefit. Some require custom headers, the contents of which are calculated using a hashing algorithm. Some of the work involves converting Json into XML and vice-versa.
From what I've read, one of the key points of difference of Logic Apps is that you don't have to write any code. Since our organisation is actually quite comfortable with code, that doesn't feel like it'll actually be a benefit.
Am I missing something? Are there any compelling reasons why a Logic App would be better than a Web App in this instance?
Using Logic Apps has a few additional benefits over just writing code, including:
Out of box monitoring. For every execution you get to see exactly what happened in each step of the process with a monitoring view that replicates your Logic App design view.
Built-in failure handling. Logic Apps will automatically retry calls on failure, and also lets you customize the retry policy or build your own with a do-until pattern.
Out of box alerting. You can configure alerts to inform you of failures.
Serverless. You don't need to worry about sizing or scaling, and you pay per consumption.
Faster development. Logic Apps lets you build out the solution faster, especially considering that you don't have to write code for the monitoring views, alerting, and error handling that come out of the box.
Easy to extend. If you are already using a Logic App, access to over 125 connectors to various services makes it easy to add business value or make it smarter, e.g. by including cognitive services in your workflow with very little extra effort.
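For contrast, here is a sketch of what the retry behaviour Logic Apps provides out of the box looks like when hand-rolled in code. The attempt count and interval are illustrative values, not anything official.

```javascript
// Retry a failing async call a fixed number of times, waiting between
// attempts; rethrow the last error if every attempt fails.
async function withRetry(fn, { count = 4, intervalMs = 100 } = {}) {
  let lastError;
  for (let attempt = 1; attempt <= count; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (attempt < count) {
        await new Promise(resolve => setTimeout(resolve, intervalMs));
      }
    }
  }
  throw lastError;
}

// Example: a flaky call that succeeds on its third attempt.
let calls = 0;
const flaky = async () => {
  calls += 1;
  if (calls < 3) throw new Error("transient failure");
  return "ok";
};

withRetry(flaky, { count: 4, intervalMs: 10 }).then(result =>
  console.log(result, calls) // ok 3
);
```

This is exactly the kind of plumbing Logic Apps spares you from writing and testing yourself.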
I've decided to keep away from Logic Apps for these reasons:
It is not supported outside Azure. We aren't tied to any particular provider, and using Logic Apps would break that independence.
I don't know how much of the problem is readily soluble using Logic Apps. (It seems I will be solving all sorts of problems which wouldn't be problems if I was using C#. This article details some issues encountered while developing a simple process using an earlier version of Logic Apps.)
Nobody has come up with an argument more compelling than the reasons I've given above (especially the first one) why we should use it, so it would be a gamble with little to gain and plenty to lose.
You can think of Logic Apps as an orchestrator - something that takes external pieces of functionality, and weaves a workflow together.
It has nothing to do with your requirement of "writing code" - your code can be external functions on any platform - on-prem, AWS, Azure, Zendesk, and all of your code can be connected together using Logic Apps.
Regardless of which platform you choose, you will still have cross-cutting concerns such as monitoring, logging, alerting, deployments, etc., and Logic Apps addresses all of those requirements very robustly.

React Redux data-fetching : differentiate browser / server-side method in isomorphic app?

Most examples I come across seem to always fetch the data from URL.
But on server side, would it not make sense to fetch data directly from the database ?
So in the action creator it would look something like:
if (typeof document !== 'undefined') {
    fetch("url/posts").then(response => dispatch(receivedPosts(response)));
} else {
    db.select("posts").then(response => dispatch(receivedPosts(response)));
}
What are the Pros and Cons of this method relative to other data-fetching methods ?
Most React/Redux applications are built in an environment where there is a separation between API development and UI development. Often these same APIs power mobile apps as well as other desktop apps, so a direct db query like you've shown wouldn't be available to the server-side render.
In your case it seems you're building a full-stack app with a single codebase. There's nothing wrong with that necessarily, but there are some things you should probably consider.
First, establish the likely lifecycle of the application. There's a big difference between a fun little side project done to learn a stack, a startup racing to get an MVP to market, and a large enterprise building a platform that has to scale out of the gate. Each of these scenarios could lead you to different decisions about how to design the tiers of your app.
One important question specific to what you're working on is whether other apps/platforms may need to access this same data, and whether different teams may eventually maintain the back-end and front-end. With Node and Mongo it's very easy to create API endpoints that serve your React app initially but could later be used by, say, a native iOS app. You also get the benefit of separation of concerns in maintenance and enhancement of your app. Debugging is often easier when data access and UI logic are completely separated, since you can call your APIs directly with something like Postman to check that they're returning the correct data.
In your case it seems like you may already be serving API data from /posts, so you could potentially get all of the benefits I mentioned while also skipping a network hop by bypassing the API, as your code snippet suggests. This would give you one less point of failure in the server render and would be a little faster, but you probably won't gain much speed, and if you have network issues with your APIs they'll show up right away on the client side of the app anyway, so the benefits don't go too far.
I would personally just make the fetch calls in the React app and separate out all my data access/API logic so that moving the back-end and front-end into two separate repos wouldn't be too painful in the future. This puts the app in a good place for potential growth. The benefits of the separation of concerns outweigh any slight performance bump. If you are experiencing slow page loads, there are probably numerous places to tune, often starting with the db queries themselves.
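One way to do that separation (a sketch, not the only way; the module and action names are made up) is to keep all endpoint knowledge in one thin module, so the rest of the app never touches URLs directly and the back-end could later move to its own repo:

```javascript
// api.js – the only place that knows about endpoints. Accepting the
// fetch implementation as a parameter keeps it testable and usable in
// both browser and server environments.
const api = {
  fetchPosts: (fetchImpl = globalThis.fetch) =>
    fetchImpl("/api/posts").then(res => res.json()),
};

// actions.js – the action creator depends only on the api module.
const receivedPosts = posts => ({ type: "RECEIVED_POSTS", posts });

function loadPosts(dispatch, fetchImpl) {
  return api.fetchPosts(fetchImpl).then(posts => dispatch(receivedPosts(posts)));
}

// Usage with a stubbed fetch, e.g. in a test or a server render:
const fakeFetch = () =>
  Promise.resolve({ json: () => Promise.resolve([{ id: 1 }]) });

loadPosts(action => console.log(action.type, action.posts.length), fakeFetch);
// prints: RECEIVED_POSTS 1
```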

Heuristic for dividing front-end and back-end logic in libraries like backbone.js

I am new to learning about MVC.
I am wondering if there is a heuristic (non-programmatically speaking) out there for dividing and deciding what logic goes on the front-end as opposed to the back-end, especially when using front-end libraries like backbone.js.
That is, libraries like backbone.js separate data from DOM elements, which makes them useful for creating sophisticated client-side logic that, perhaps, used to be carried out on the server side.
Thanks in advance
Joey
The "classic" way to do Model - View - Controller is to have all three on the server. The View layer output of HTML and some JS is then rendered by the browser.
Rails is an excellent example of this.
The "new cool" way is to treat the browser as the main computing engine with the backend server providing services via APIs.
In this case, the Model, View and Controller software all run (as JavaScript or CoffeeScript) on the client. Backbone is often part of the browser-side solution, but it has alternatives such as Spine, AngularJS and others.
On the back-end server, you run the DBMS and a good API system. There are some good frameworks being built on Ruby/Rack; see posts by Daniel Doubrovkine on code.dblock.org. You have many choices here.
Advantages of MVC on the client
Responsive user interface for the user
Cool Ajaxy single page effects
Single-page webapps can provide a much faster UI to the user than regular web sites
Good architecture, and an enabler for purpose-built iPhone/Android apps
Depending on the app, can be used to create standalone webapps which work without a network connection.
This is what many cool kids are doing these days
Disadvantages
Need to decide on approach for old browsers, IE, etc
Making content available for search engines can be tricky. May require shadow website just for the search engines
Testing can be a challenge. But see new libs such as AngularJS which include a testability focus
This approach involves more software: it takes longer to write and test.
Choosing
It's up to you. The decision depends on your timeframe, resources, experience, needs, etc. There is no need to use Backbone or similar; doing so is a tradeoff (see above). It will always be faster/easier not to use it, but doing without it (or something similar) may not accomplish your goals.
You can build a great MVC app out of just Rails, or PHP with add-on libs or other MVC solutions.
I think you're using the word heuristic in a non-programmatic sense correct? I.e. you're using it to mean something along the lines of 'rule of thumb'?
As a rule of thumb:
You want the server to render the initial page load for UX and SEO reasons.
You could also have subsequent AJAX partial page loads rendered by the server for the same reasons. Profile to see which is faster: having the server render and transfer extra data (the markup) over the wire, vs. sending a more concise payload (JSON) and having the client render it. There are tradeoffs, especially when you consider mobile devices, where rendering on the client may be slower; but then again there are mobile devices out there with slower internet connections...
Like any client-server architecture: you want the client to do things that require fast responsiveness locally, and then send an asynchronous operation to the server that performs the same task.
The take-away is vague, but true: it's all about tradeoffs, and you have to decide what your product's needs are.
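The fast-local / async-server split described above can be sketched as an optimistic update. All names here are made up for illustration:

```javascript
// Respond instantly on the client, then notify the server in the
// background with the same change.
function toggleLike(state, postId, saveToServer) {
  const next = { ...state, liked: !state.liked };   // fast, local update
  const pending = saveToServer(postId, next.liked); // async server call
  return { next, pending };
}

// Usage with a stubbed server call:
const fakeSave = (id, liked) => Promise.resolve({ id, liked });
const { next } = toggleLike({ liked: false }, 42, fakeSave);
console.log(next.liked); // true
```

A fuller version would also roll the local state back if the server call eventually fails.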
The first two things that came to mind for me were security and search:
You will always want to restrict read/write access on the server.
In most instances you will want to have your search functionality as close to the data as possible.

Are "WCF Data Services" going in the right direction?

I really like the idea of "WCF Data Services", but how does it work in a real-life scenario? WCF Data Services provide just a nice way for the client to CRUD the data. However, it's very limited in what you can pass and get back, so one ends up with all the business logic written on the client side. That's probably OK for small applications that just need a database back-end, but you don't want it in serious enterprise applications: your client side will grow too large, and if your business logic is some kind of know-how it can be easily disassembled.
Don't be misled into thinking that SOAP is for the enterprise and REST is for sucky little side web apps. Many people have wasted a lot of time on SOAP frameworks, me included, and the trouble these frameworks cause for inter-enterprise communication would count in the billions of dollars.
REST provides an opportunity to care only about the data being passed to and from services and the semantics used to operate against them; the rest (excuse the pun) is handled by transport-level mechanisms. Do you want encrypted data channels? HTTPS is there for that. Do you need authentication? There are plenty of frameworks on HTTP that already support this, rather than using complex WS-* protocols. Do you want reliable messaging? You can engineer it quite simply using message-queue software; I have only ever seen one SOAP framework handle this well, and it wasn't very interoperable at that point.
Whilst I am not discounting SOAP as enterprise-grade, all I am saying is: don't discount REST-based services as an excellent way for your enterprise modules to communicate.
I personally have integrated multi-million-dollar systems using REST and SOAP, and I currently prefer REST-based services for their ease of development, third-party integration, understanding, ease of documentation, and ability to rapidly deploy services across businesses.
I can understand your confusion given the naming... WCF Data Services are REST-based, which is notoriously poor for enterprise environments. However, you can have normal SOAP-based WCF services, which work fine for the enterprise.