I want to understand whether AWS Lambda and React.js are a performant solution for a serverless Single Page Application with Server Side Rendering. Maybe someone has already used this combination in production and can share how well or badly it works.
It would also be interesting to hear how easy or hard it is to build and support an SPA with routing and server-side rendering on AWS Lambda.
Yes, it is.
My team and I have built several ReactJS + Serverless + DB applications, and so far they seem very responsive and scale nicely. As with any other app stack, the biggest bottlenecks turn out to be data fetching and manipulation, such as joins in the DB. Architecting your app's data structure is key here, because most of the delays we have experienced so far come from poor DB queries, missing indexes, and the like.
A clean request that goes DB query -> Node 6 Lambda fetch -> send data through API Gateway -> client side takes around 300-400 milliseconds. So, as long as you have a good data structure and solid code, your SPA should be performant. The frontend of our apps is hosted with CloudFront, which is very solid and super fast.
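For reference, the Lambda end of that path can be as small as the sketch below. This is not our production code, just the shape of it: it assumes an API Gateway proxy integration and a hypothetical queryPosts() data-access helper.

    // Node 6 style callback handler behind an API Gateway proxy integration.
    // queryPosts() is a hypothetical helper that returns a Promise of rows.
    const { queryPosts } = require('./db');

    exports.handler = (event, context, callback) => {
      queryPosts()
        .then(rows => callback(null, {
          statusCode: 200,
          headers: { 'Content-Type': 'application/json' },
          body: JSON.stringify(rows)
        }))
        .catch(err => callback(err));
    };

Most of the 300-400 ms quoted above is spent in the DB query and the API Gateway hop, not in the handler itself.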
We haven't even started performance optimization or adding layers of caching, so I suspect you will soon see full apps built with this stack that are super efficient and load fast.
Note: the recent (April/May 2017) upgrade to the Node 6 LTS runtime brought a big improvement in performance.
Related
I typically build web apps with a frontend (using something like React) and a backend (using Ruby/Rails or Java/Spring). I recently became curious whether it's possible to forgo a backend completely and just have your client code call your database directly. No APIs, etc. Just the frontend connecting directly to the DB for everything you need. No use of Node, Express, or other "backend" solutions for JS.
If someone has a tutorial or example to show, that would be great.
If it is possible, what are the disadvantages of doing this, besides it not being standard? I am guessing there are probably security issues (though they aren't coming to mind). You are shipping more code than necessary to the client, perhaps causing performance issues. Lastly, it might be a little harder to encapsulate business logic. The advantage to me seems to be simplicity: only dealing with one technology, etc.
I'm developing several React SPAs and have not yet decided how the apps will be packaged/deployed/hosted.
Most of my experience is with back-end development, so I am not very familiar with methods for packaging and deploying SPAs. I have some other team members who are well versed in those processes in general; less so with React.
I have used G-WAN in the past to create RESTful APIs (works great!).
Suggestions are greatly appreciated!
I don't have any experience with ReactJS, but at TWD (home of G-WAN) we have worked on a smaller project for the Global-WAN console (a powerful application, albeit with a minimal tab-based and form-based UI).
Our motivation was to transfer the UI in one single (tab-based) page, and have only data travel after that point (either as arrays or JSON). Some G-WAN examples illustrate the AJAX and event-based techniques we used.
G-WAN's low latency did marvels in our case, making the user interface so much more responsive that end users believed it was a local application.
Sometimes, re-ordering and re-formatting data is key to achieving high database concurrency, as in the scalability demo at Oracle OpenWorld 2012.
One point I have to mention for the sake of fairness: after the testing stage, our app was delivered and operated through Global-WAN's L2 P2P VPN (featuring proprietary compression), which greatly improved scalability (and latency) compared to HTTP or TLS.
Hope it helps.
ReactJS by itself is a front-end technology; once you build a production package, the code compiles to plain old .html and .js that you can serve from any web server. You will need to prepare URL rewrite rules, though (in development, the React dev server does this for you).
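As a minimal sketch of the rewrite idea, here is an Express static server; the build directory name and port are assumptions, and any web server with an equivalent fallback rule works the same way:

    // Serve the compiled React build and rewrite every unknown URL to
    // index.html so the client-side router can take over.
    const express = require('express');
    const path = require('path');

    const app = express();

    app.use(express.static(path.join(__dirname, 'build')));

    app.get('*', (req, res) => {
      res.sendFile(path.join(__dirname, 'build', 'index.html'));
    });

    app.listen(8080);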
Things get more complicated when other techniques are involved, such as isomorphic rendering, which requires the app itself to run on the server side using a NodeJS runtime.
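To make that concrete, here is a minimal sketch of what running the app on the server looks like with Express and ReactDOMServer; the App component path, the bundle name, and the port are placeholders:

    // Render the same React component tree to an HTML string on the server,
    // then let the client bundle take over in the browser.
    const express = require('express');
    const React = require('react');
    const ReactDOMServer = require('react-dom/server');
    const App = require('./src/App'); // your root component (placeholder path)

    const app = express();

    app.get('*', (req, res) => {
      const html = ReactDOMServer.renderToString(React.createElement(App));
      res.send('<!DOCTYPE html><html><body>' +
        '<div id="root">' + html + '</div>' +
        '<script src="/bundle.js"></script>' +
        '</body></html>');
    });

    app.listen(8080);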
We are rewriting some of our web applications from ASP.NET MVC + jQuery (and Angular in some places) to ASP.NET Web API (for now it's ASP.NET 4.6, but the future plan is to move to ASP.NET Core) + AngularJS. The main reason for rewriting is that we don't want HTML rendering on the server side.
Now, some people want to have NodeJS in between the Web API and AngularJS, and some people (including me) cannot see any reason for having it. It could be that no reasons are seen because of a lack of knowledge about NodeJS, but my thoughts are:
If we have AngularJS + Web API, why would we want something like a proxy in between (NodeJS in this case) and go through that proxy instead of going directly to the Web API? Are there any scenarios that can be done with NodeJS in between but cannot be done with the Web API alone? Our applications are simple: fetch some data from the API and present it in the UI. Some authentication is involved as well.
When it comes to backend technology, one is enough; having two (Web API and Node) at the same time just adds complexity to the application and makes it harder to maintain.
Should we or should we not use Node in this case? Bear in mind that the team does not have a lot of experience with NodeJS, but I hear the argument that Node is very easy to learn, so that's not a big problem.
This is not so much an answer as an extended comment because there isn't an outright question here.
Ultimately it depends on what the reasons for wanting to use NodeJS are. To address your thoughts:
Why would you want a proxy
There are a couple of reasons for having a proxy, such as security and scalability.
For example, suppose you wanted to have your back-end implemented as a series of microservices. Without a proxy, the client side has to know about all of these services' endpoints so it can talk to them. This exposes them to the outside world, which might not be desirable from a security standpoint.
It also makes the client side more complex, since it now has to coordinate calls to the different services, and you'll have to deal with things like CORS on the back-end. Having the client call a single proxy that also acts as a coordinator, "fanning out" the various calls to the back-end services, tends to be simpler (as sketched below).
A proxy also lets you scale the services independently; some might need to be scaled more than others depending on how heavily they are used. The client side is still hitting a single endpoint, so it's much easier to manage.
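A minimal sketch of that fan-out proxy using Express and node-fetch; the /api/dashboard route and the two internal service URLs are made-up examples, not a prescribed layout:

    const express = require('express');
    const fetch = require('node-fetch'); // node-fetch v2, CommonJS style

    const app = express();

    // One public endpoint; the proxy coordinates the internal service calls.
    app.get('/api/dashboard', (req, res) => {
      Promise.all([
        fetch('http://user-service.internal/users/me').then(r => r.json()),
        fetch('http://order-service.internal/orders?mine=1').then(r => r.json())
      ])
        .then(([user, orders]) => res.json({ user, orders }))
        .catch(() => res.status(502).json({ error: 'upstream failure' }));
    });

    app.listen(3000);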
Why multiple back-end technologies is not necessarily a bad thing
Having two or more back-end technologies is a trade-off; yes it can increase the complexity of the system, and can be more difficult to maintain, but it can also make it much easier to implement certain functionality where one technology is better at doing X than another.
For example, there are many NodeJS modules that do X, Y or Z which may be more accessible to you than corresponding functionality written in C# (I'm deliberately not going to list any examples here to avoid muddying the waters).
If you have Javascript developers who want to get involved with the back-end, they might feel more comfortable working with NodeJs rather than having to ramp up on C#/ASP.NET thus making them (initially anyway) more productive.
I find NodeJS really useful for quickly knocking up prototype services so that you can test how they are consumed, etc. Using something like HapiJS, you can have a simple HTTP API up and running with just Notepad in a few minutes. This can be really useful when you're in a hurry :)
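For example, a throwaway prototype endpoint in hapi might look like the sketch below. This assumes hapi v17 or later (newer releases ship as @hapi/hapi); the route and port are arbitrary:

    const Hapi = require('hapi');

    const init = async () => {
      const server = Hapi.server({ port: 3000, host: 'localhost' });

      // A stub endpoint the front-end team can consume while the real
      // service is still being designed.
      server.route({
        method: 'GET',
        path: '/api/ping',
        handler: () => ({ ok: true, time: Date.now() })
      });

      await server.start();
      console.log('Prototype API running at', server.info.uri);
    };

    init();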
If you take the proxy / microservices approach, you can stop worrying too much about what technology is used to implement each service, as long as it supports a common communication protocol (HTTP, Message Queues, etc) within the system.
Ultimately, you need to have conversations about this with your team.
You haven't mentioned if this is something that your peers are pushing for or if this is a decision being pushed by technical leadership; I would take the view that as the incumbent, any new technology needs to prove that there is a good reason for its adoption, but YMMV since this may be a decision that's out of your hands.
My personal recommendation in this case is, don't use NodeJS for the proxy; use ASP.NET WebAPI instead but look hard at your system and try to find ways to split it out into Micro-like services. This lets you keep things simpler in terms of ecosystem, but also clears the way to let you introduce NodeJS for some parts of the application where it has been proven that it is a better tool for the job than .Net. That way everyone is happy :)
There is a very good breakdown comparison here which can be used as part of the discussion; the benchmark included in it is a little old, but it shows that there is not much of a difference performance-wise.
Most examples I come across seem to always fetch the data from a URL.
But on the server side, would it not make sense to fetch data directly from the database?
So in the action creator it would look something like:
    if (typeof document !== 'undefined') {
      // In the browser: go through the HTTP API
      fetch("url/posts")
        .then(response => response.json())
        .then(posts => dispatch(receivedPosts(posts)));
    } else {
      // On the server: query the database directly
      db.select("posts").then(response => dispatch(receivedPosts(response)));
    }
What are the pros and cons of this method compared to other data-fetching methods?
Most React/Redux applications are built in an environment where there is a separation between API development and UI development. Often these same APIs power mobile apps as well as other desktop apps, thus a direct db query like you've shown wouldn't be available to the server side render.
In your case it seems you're building a full-stack app with a single codebase. There's nothing wrong with that necessarily, but there are some things you should probably consider. First is establishing the likely lifecycle of the application. There's a big difference between a fun little side project done to learn more about a stack, a startup racing to get an MVP to market, and a large enterprise building a platform that'll have to scale out of the gate. Each of these scenarios could lead you to different considerations about how to design the tiers of your app.
One important question specific to what you're working on is whether other apps/platforms may need to access this same data, and whether different teams may eventually maintain the back-end and front-end. With Node and Mongo it's very easy to create API endpoints that serve your React app initially but could later be used by, say, a native iOS app. You also get the benefit of separation of concerns when maintaining and enhancing the app: debugging is often easier when data access and UI logic are completely separated, because you can call your APIs directly with something like Postman to check whether they're returning the correct data.
In your case it seems like you may already be serving API data from /posts, so you could potentially get all of the benefits I mentioned and still skip a network hop by bypassing the API as your code snippet suggests. This gives you one less point of failure in the server render and is a little faster, but you probably won't gain much speed, and if you have network issues with your APIs they'll show up right away on the client side of the app anyway, so the benefits don't go too far.
I would personally just make the fetch calls in the React app and separate out all my data access/API logic in a way that would make moving the back-end and front-end to two separate repos painless in the future. This puts the app in a good place for its potential growth. The benefits of the separation of concerns outweigh any slight performance bump. If you are experiencing slow page loads, there are probably numerous places to tune, often starting with the DB queries themselves.
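As a minimal sketch of that separation (file names and the redux-thunk assumption are mine; receivedPosts is the action creator from the question):

    // api/posts.js — every HTTP call for posts lives here, so splitting the
    // back-end into its own repo later only touches this module.
    export function fetchPosts() {
      return fetch('/api/posts').then(response => {
        if (!response.ok) throw new Error('HTTP ' + response.status);
        return response.json();
      });
    }

    // actions/posts.js — the action creator only knows about the api module.
    // (Assumes redux-thunk; receivedPosts is imported from the question's
    // own action creators, import omitted here.)
    import { fetchPosts } from '../api/posts';

    export function loadPosts() {
      return dispatch =>
        fetchPosts().then(posts => dispatch(receivedPosts(posts)));
    }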
I am wondering how well the MEAN stack (MongoDB, Express, Angular, Node) would fit for building community websites, intranets and/or extranets.
I know I can use things like Drupal, Liferay and so on but I am just trying to understand the proper use case for the MEAN stack.
Suppose that I have to build a new community website or portal from the ground up.
Would the MEAN stack be a good fit or is the LAMP stack still better in such a use case?
I am looking to learn the MEAN stack, and I had the idea to build a "fake" community website with lots of features, which seems ideal for learning a technology stack like that; however, if the technology is not ideal for such a purpose, then I have to look into something else.
Why use the MEAN stack:
One language for the server, the client, and the application model
NodeJS handles many concurrent connections well (persistent client-server connections)
NodeJS fits real-time applications perfectly
NodeJS performance benefits from the Google V8 engine
NodeJS's asynchronous I/O allows more concurrent connections than other web server technologies (e.g. Apache)
Horizontal scalability (more traffic => more nodes, MongoDB sharding)
Where not to use NodeJS:
CPU-heavy operations, because of its single-threaded nature (see the sketch after this list)
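To illustrate that last point, a tiny Express sketch (route names are arbitrary): while the synchronous loop below runs, Node's single event loop is blocked, so every other request, including /fast, has to wait.

    const express = require('express');
    const app = express();

    app.get('/fast', (req, res) => res.send('ok')); // normally returns instantly

    app.get('/slow', (req, res) => {
      // CPU-bound, synchronous work: nothing else is served until it finishes
      let sum = 0;
      for (let i = 0; i < 2e9; i++) sum += i;
      res.send(String(sum));
    });

    app.listen(3000);

For genuinely CPU-heavy work you would push the job to a worker process or a queue rather than doing it inline in a request handler.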
I have realised that learning JavaScript first is helpful before learning the MEAN stack, and you need to be well at home with the language. Your proposed project requires a good grasp of both client-side and server-side JavaScript. Learning an effective way to handle the quirks of JavaScript will also help. Know JavaScript before learning MEAN and you should be fine.