Consume back-end REST API with React application - repository vs. service - reactjs

I work on a React app consuming a RESTful API. My API, on top of basic CRUD operations, exposes business functions that fall under the controller archetype (executable functions with parameters and possibly return values). E.g.:
http://api.example.com/carts
http://api.example.com/users
http://api.example.com/documents
http://api.example.com/carts/{id}/checkout
http://api.example.com/users/{id}/block
http://api.example.com/documents/{id}/print
Within the team, however, we have a disagreement about what the API access layer should be called. We are struggling to choose between *Repository and *Service.
On one hand these functionalities can be perceived as collection abstractions and facades, so it makes sense to call them repositories. On the other hand they expose some business, domain logic, which definitely does not fall under the repository pattern and is definitely a service.
Normally (on the back end) a repository can either be a service dependency or be used standalone. A repository serves the purpose of accessing and modifying a collection, while a service is a facade for more complex business operations.
This separation, however, does not seem to make a lot of sense on the front end, where the only difference is which API endpoint the application will call. Thus it seems pointless to separate them, since realistically they fall within the boundaries of the same concern; a single module per resource could cover both, as in the sketch below.
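For illustration, a minimal sketch of what such a module could look like, whichever name it ends up with (the plain fetch wrapper is a simplification, and treating the command endpoints as POSTs is an assumption):
// users.api.js - one module per resource, covering both collection access and commands
const BASE_URL = "http://api.example.com";
export const getUsers = () =>
  fetch(`${BASE_URL}/users`).then(res => res.json());          // repository-style read
export const getUser = (id) =>
  fetch(`${BASE_URL}/users/${id}`).then(res => res.json());    // repository-style read
export const blockUser = (id) =>
  fetch(`${BASE_URL}/users/${id}/block`, { method: "POST" });  // service-style command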
What are your opinions about it? What kind of nomenclature do you use in your apps? Thanks for any help and suggestions ;)

Related

AngularJS + NodeJS + web api application architecture

We are rewriting some of our web applications from ASP.NET MVC + jQuery (and Angular in some places) to ASP.NET Web API (for now it's ASP.NET 4.6, but the future plan is to move to ASP.NET Core) + AngularJS. The main reason for rewriting is that we don't want HTML rendering on the server side.
Now, some people want to have NodeJS in between the Web API and AngularJS, and some people (including me) cannot see any reason for having it. It could be that no reasons are seen because of a lack of knowledge about NodeJS, but my thoughts are:
If we have AngularJS + Web API, why would we want something in between like a proxy (which is NodeJS in this case) and go through that proxy instead of going directly to the Web API? Are there any scenarios that can only be handled with NodeJS in between and not with the Web API alone? Our applications are simple: fetch some data from the API and present it in the UI. Some authentication is involved as well.
When it comes to back-end technology, isn't one enough? Having two (Web API and Node) at the same time just adds complexity to the application and makes it harder to maintain.
Should we or should we not use Node in this case? Bear in mind that in the team we do not have a lot of experience with NodeJS, but I hear the argument that Node is very easy to learn, so that's not a big problem.
This is not so much an answer as an extended comment because there isn't an outright question here.
Ultimately it depends on what the reasons for wanting to use NodeJS are. To address your thoughts:
Why would you want a proxy
There are a couple of reasons for having a proxy, such as security and scalability.
For example, suppose you wanted to have your back end implemented as a series of microservices. Without a proxy, the client side has to know about all of these services' endpoints so it can talk to them. This exposes them to the outside world, which might not be desirable from a security standpoint.
It also makes the client side more complex, since it now has to coordinate calls to the different services, and you'll have to deal with things like CORS on the back end; having the client call a single proxy that also acts as a coordinator, "fanning out" the various calls to the back-end services, tends to be simpler.
It also allows you to scale the services independently; some might need to be scaled more than others depending on how heavily they are used, but the client side is still hitting a single endpoint, so it's much easier to manage.
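As a rough illustration of the "fan out" idea, here is a minimal proxy sketch (Express and the two internal service URLs are assumptions, and Node 18+ is assumed for the built-in fetch):
const express = require("express");
const app = express();
// Single endpoint the client calls; the proxy coordinates the back-end services.
app.get("/api/dashboard", async (req, res) => {
  try {
    // Fan out to internal services the browser never talks to directly.
    const [users, orders] = await Promise.all([
      fetch("http://users-service.internal/users").then(r => r.json()),
      fetch("http://orders-service.internal/orders").then(r => r.json()),
    ]);
    res.json({ users, orders });
  } catch (err) {
    res.status(502).json({ error: "upstream service failed" });
  }
});
app.listen(3000);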
Why using multiple back-end technologies is not necessarily a bad thing
Having two or more back-end technologies is a trade-off; yes it can increase the complexity of the system, and can be more difficult to maintain, but it can also make it much easier to implement certain functionality where one technology is better at doing X than another.
For example, there are many NodeJS modules that do X, Y or Z which may be more accessible to you than corresponding functionality written in C# (I'm deliberately not going to list any examples here to avoid muddying the waters).
If you have JavaScript developers who want to get involved with the back end, they might feel more comfortable working with NodeJS rather than having to ramp up on C#/ASP.NET, thus making them (initially anyway) more productive.
I find NodeJS really useful for quickly knocking up prototype services so that you can test how they are consumed, etc. Using something like HapiJS, you can have a simple HTTP API up and running with just Notepad in a few minutes. This can be really useful when you're in a hurry :)
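For instance, a throwaway hapi prototype might look something like this (a sketch assuming a recent hapi version; the /posts route and canned data are just examples):
"use strict";
const Hapi = require("@hapi/hapi");
const init = async () => {
  const server = Hapi.server({ port: 3000, host: "localhost" });
  // A throwaway route returning canned data, just to test how the API would be consumed.
  server.route({
    method: "GET",
    path: "/posts",
    handler: () => [{ id: 1, title: "Hello from the prototype" }],
  });
  await server.start();
  console.log("Prototype API running at", server.info.uri);
};
init();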
If you take the proxy / microservices approach, you can stop worrying too much about what technology is used to implement each service, as long as it supports a common communication protocol (HTTP, Message Queues, etc) within the system.
Ultimately, you need to have conversations about this with your team.
You haven't mentioned whether this is something that your peers are pushing for or a decision being pushed by technical leadership. I would take the view that, as the incumbent, any new technology needs to prove that there is a good reason for its adoption, but YMMV since this may be a decision that's out of your hands.
My personal recommendation in this case is: don't use NodeJS for the proxy; use ASP.NET Web API instead, but look hard at your system and try to find ways to split it out into micro-like services. This lets you keep things simpler in terms of ecosystem, but also clears the way to introduce NodeJS for some parts of the application where it has been proven to be a better tool for the job than .NET. That way everyone is happy :)
There is a very good breakdown comparison here which can be used as part of the discussion; the benchmark included in it is a little old, but it shows that there is not much of a difference performance-wise.

React Redux data-fetching : differentiate browser / server-side method in isomorphic app?

Most examples I come across seem to always fetch the data from a URL.
But on the server side, would it not make sense to fetch the data directly from the database?
So in the action creator it would look something like:
if (typeof document !== 'undefined') {
  fetch("url/posts").then(response => dispatch(receivedPosts(response)));
} else {
  db.select("posts").then(response => dispatch(receivedPosts(response)));
}
What are the Pros and Cons of this method relative to other data-fetching methods ?
Most React/Redux applications are built in an environment where there is a separation between API development and UI development. Often these same APIs power mobile apps as well as other desktop apps, thus a direct db query like you've shown wouldn't be available to the server side render.
In your case it seems you're building a full-stack app with a single codebase. There's nothing wrong with that necessarily, but there are some things you should probably consider. First off is establishing the likely lifecycle of the application. There's a big difference between a fun little side project done to learn more about a stack, a startup racing to get an MVP to market, and a large enterprise building a platform that'll have to scale out of the gate. Each of these scenarios could lead you to different considerations about how to design the tiers of your app.
One important question specific to what you're working on is whether other apps/platforms may need to access this same data, and whether different teams may eventually maintain the back end and the front end. With Node and Mongo it's very easy to create API endpoints that serve your React app initially but could be used by, say, a native iOS app later. You also get the benefit of separation of concerns in the maintenance and enhancement of your app: debugging is often easier when you have data access and UI logic completely separated, because you can call your APIs directly with something like Postman to identify whether they're providing the correct data.
In your case it seems like you may already be serving API data from /posts, so you could potentially get all of the benefits I mentioned and still skip a network hop by bypassing the API, as your code snippet suggests. That would be one less point of failure in the server render and would be a little faster, but you probably won't gain much speed, and if you have network issues with your APIs they'll show up right away on the client side of the app anyway, so the benefits don't go too far.
I would personally just make the fetch calls in the React app and separate out all my data access/API logic in a way in which moving the back end and front end to two separate repos wouldn't be too painful in the future. This will put the app in a good place for its potential growth. The benefits of the separation of concerns outweigh any slight performance bump. If you are experiencing slow page loads, there are probably numerous places to tune, often starting with the DB queries themselves.
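To make that concrete, one way to keep the data-access logic separate is a small API module that the action creators import, so it could later move to its own repo (the file layout, the /api/posts endpoint, and the use of redux-thunk below are assumptions):
// api/posts.js - all knowledge of the HTTP API lives here
export const fetchPosts = () =>
  fetch("/api/posts").then(response => response.json());
// actions/posts.js - the action creator only knows about the API module
import { fetchPosts } from "../api/posts";
export const receivedPosts = posts => ({ type: "RECEIVED_POSTS", posts });
// Thunk action creator (assumes the redux-thunk middleware is installed).
export const loadPosts = () => dispatch =>
  fetchPosts().then(posts => dispatch(receivedPosts(posts)));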

GAE: multiple modules vs. multiple applications

App Engine allows you to have multiple modules within a single application. I'm trying to understand what the benefits of this are over having multiple App Engine projects.
In my situation, I have three components:
A back-end component that does all the processing, stores all data, and is accessible with a REST API
A first front-end (e.g., request handlers) component under a first domain name that probably doesn't need its own datastore
A second front-end component under a second domain name that also probably doesn't need its own datastore.
Whether I use multiple modules or multiple apps, the communications between components are done using HTTP requests.
With modules, all the modules use the same datastore and memcache, but with different projects, they will each have their own memcache and datastore. I don't think this matters for me, because only the backend component needs a datastore.
I'm leaning towards using separate applications instead of separate modules because it seems easier to have complete separation.
Is there any reason I should prefer separate applications over modules or vice versa?
The question is somewhat opinion-based, but there are many more reasons to use services (as modules are now known) over separate projects.
You cite the main reason in your question: shared back-end services. Although you don't think that matters, as the front ends probably don't need the Datastore, I would rather assume they may need it in the future than not (and otherwise have to integrate via your other application's HTTP interface instead of direct Datastore RPC).
By using different services in the same project, you benefit from simpler access to other Cloud Platform services (e.g. BigQuery) through things like service accounts.
You also get things like service discovery through the Modules Service. If you were to deploy as separate projects, App Engine doesn't know your projects from mine.
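For example, services in the same project can reach each other through App Engine's "-dot-" host naming without the client ever needing to know about them; a rough sketch (the project ID, service name and endpoint below are made up, and Node 18+ is assumed for the built-in fetch):
// Server-side call from a front-end service to the back-end service in the same project.
const BACKEND_HOST = "https://backend-dot-my-project-id.appspot.com";
const getItem = id =>
  fetch(`${BACKEND_HOST}/api/items/${id}`).then(res => res.json());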
By using separate projects, you get pretty much the same separation as using services, but forego the benefits above.
Some people might want to use a separate project to benefit from an extra 28 free instance hours, but that wouldn't be a great long-term design goal to my liking.

Heuristic for dividing front-end and back-end logic in libraries like backbone.js

I am new to learning about MVC.
I am wondering if there is a heuristic (non-programmatically speaking) out there for dividing and deciding what logic goes on the front end as opposed to the back end, especially when using front-end libraries like backbone.js.
That is, libraries like backbone.js separate data from DOM elements, which makes them useful for creating sophisticated client-side logic that, perhaps, used to be carried out on the server side.
Thanks in advance
Joey
The "classic" way to do Model - View - Controller is to have all three on the server. The View layer output of HTML and some JS is then rendered by the browser.
Rails is an excellent example of this.
The "new cool" way is to treat the browser as the main computing engine with the backend server providing services via APIs.
In this case, the Model, View and Controller software all run (as JavaScript or CoffeeScript) on the client. Backbone is often part of the browser-side solution, but it has alternatives such as Spine, AngularJS and others.
On the back-end server, you run the DBMS and a good API system. There are some good frameworks being built on Ruby/Rack; see posts by Daniel Doubrovkine on code.dblock.org. You have many choices here.
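A tiny Backbone sketch of the browser-side Model talking to such an API (this assumes Backbone and its Underscore/jQuery dependencies are loaded, and that a /api/posts endpoint exists):
// Client-side model and collection backed by the server's REST API.
var Post = Backbone.Model.extend({ urlRoot: "/api/posts" });
var Posts = Backbone.Collection.extend({
  model: Post,
  url: "/api/posts"
});
var posts = new Posts();
// fetch() issues GET /api/posts; views can re-render when the collection syncs.
posts.fetch({
  success: function (collection) {
    console.log("loaded", collection.length, "posts");
  }
});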
Advantages of MVC on the client
Responsive user interface for the user
Cool Ajaxy single page effects
Single-page webapps can provide a much faster UI to the user than regular websites
Good architecture, an enabler for purpose-built iPhone/Android apps
Depending on the app, can be used to create standalone webapps which work without a network connection.
This is what many cool kids are doing these days
Disadvantages
Need to decide on approach for old browsers, IE, etc
Making content available for search engines can be tricky. May require shadow website just for the search engines
Testing can be a challenge. But see new libs such as AngularJS which include a testability focus
This approach involves more software: takes longer to write and test.
Choosing
It's up to you. The decision depends on your timeframe, resources, experience, needs, etc. There is no need to use Backbone or similar; doing so is a tradeoff (see above). It will always be faster/easier not to use it, but doing without it (or something similar) may not accomplish your goals.
You can build a great MVC app out of just Rails, or PHP with add-on libs, or other MVC solutions.
I think you're using the word heuristic in a non-programmatic sense, correct? I.e. you're using it to mean something along the lines of 'rule of thumb'?
As a rule of thumb:
You want the server to render the initial page load for UX and SEO reasons.
You could also have subsequent AJAX partial page loads rendered by the server for the same reasons. Profile to see which is faster: having the server render and transfer the extra data (the markup) over the wire, vs. sending a more concise payload (JSON) and having the client render it. There are tradeoffs, especially if you take mobile devices into consideration, where rendering on the client may be slower, but then again there are also mobile devices out there with slower internet connections...
Like any client-server architecture: you want the client to handle the things that require fast responsiveness immediately, and then send an asynchronous operation to the server so it performs the same task, as in the sketch below.
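A common shape for that split is an optimistic update: reflect the change in the client right away, then tell the server in the background (the /complete endpoint and the DOM ids here are hypothetical):
function completeTask(taskId) {
  // Responsive part: update the UI immediately on the client.
  document.getElementById("task-" + taskId).classList.add("done");
  // Asynchronous part: ask the server to perform the same operation.
  fetch("/api/tasks/" + taskId + "/complete", { method: "POST" })
    .catch(function () {
      // Roll back the optimistic change if the server call fails.
      document.getElementById("task-" + taskId).classList.remove("done");
    });
}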
The takeaway is vague, but true: it's all about tradeoffs, and you have to decide what your product's needs are.
The first two things that come to mind for me are security and search.
You will always want to restrict read/write access on the server.
In most instances you will want to have your search functionality as close to the data as possible.

Does business logic belong in the service layer?

I've got a set of classes, namely a data transfer object, a service implementation object, and a data access object. I currently have business logic in the service implementation object; it uses the DAO to get data to populate the DTO that is shipped back to the client/GUI code.
The issue is that I can't create a lightweight JUnit test of the service implementation object (it's a servlet). I think the business logic should be elsewhere, but the only thing I can think of is putting the business logic in the DAO, or in yet another layer that goes between the DAO and the service implementation.
Are there other options, or am I thinking about this the wrong way?
It's a GWT/App Engine project.
I don't understand why you can't unit-test the servlet, e.g. as per this SO question (there are others on similar themes) -- can you please explain?
Edit: if there's no special reason, I suggest you keep the business logic in the service layer (where it seems to belong) and unit-test it there -- the approaches suggested in the SO question I just quoted, for example, seem reasonably lightweight (though I didn't test them specifically).
You can put your business logic in its own jar file and test that component independently of its integration with the web (servlet) layer.
The servlet is just a protocol layer; it is not your business logic, but rather an integration point.
It should be easy to imagine exposing the same business logic through a thick client as well.
In that case, too, you should not hide the business logic behind buttons or links.
One more note: you might want to look into an MVC framework such as Struts. Your model would hold the business logic.
Hope this helps.
The servlet is the controller; it is a very big mistake to put the business logic there.
