With AngularJS-based single-page apps hosted on premise, how do you connect to AWS cloud servers?

Maybe this is a really basic question, but how do you architect your system so that your single-page application is hosted on premise with some hostname, say mydogs.com, but your application services code (as well as the database) is hosted in the cloud? For example, let's say you spin up an Amazon EC2 Container Service cluster with Docker, running a NodeJS server. The hostnames will all look like ec2_some_id.amazon.com. What system sits in front of the Amazon EC2 instance that my angularjs app connects to? What architecture facilitates this type of app, especially AWS-based services?

One of the important aspects of setting up the web application and the backend is to serve both from a single domain, avoiding cross-origin requests (CORS). To do this, you can use AWS CloudFront as a proxy, where the routing happens based on URL paths.
For example, you can point the root domain to index.html while routing /api/* requests to the backend endpoint running in EC2.
It is also important for your Angular application to use full URL paths. One of the challenges with these is that, for routes such as /home, /about, etc., the browser will request that page from the backend on a reload. Since it is a single-page application, you won't have server pages for /home, /about, etc. This is where you can set up error pages in CloudFront so that all not-found routes are also forwarded to index.html (which serves the AngularJS app).
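For illustration, here is a minimal CloudFormation sketch of such a distribution, not a definitive setup: the bucket name and EC2 hostname are hypothetical placeholders for wherever your index.html and NodeJS backend actually live.

```yaml
MyDistribution:
  Type: AWS::CloudFront::Distribution
  Properties:
    DistributionConfig:
      Enabled: true
      DefaultRootObject: index.html
      Origins:
        - Id: spa-origin                                 # serves the AngularJS bundle
          DomainName: my-spa-bucket.s3.amazonaws.com     # hypothetical
          S3OriginConfig: {}
        - Id: api-origin                                 # NodeJS server in EC2
          DomainName: ec2-12-34-56-78.compute-1.amazonaws.com  # hypothetical
          CustomOriginConfig:
            OriginProtocolPolicy: https-only
      DefaultCacheBehavior:            # everything else goes to the SPA
        TargetOriginId: spa-origin
        ViewerProtocolPolicy: redirect-to-https
        ForwardedValues:
          QueryString: false
      CacheBehaviors:                  # /api/* goes to the backend
        - PathPattern: /api/*
          TargetOriginId: api-origin
          ViewerProtocolPolicy: redirect-to-https
          AllowedMethods: [DELETE, GET, HEAD, OPTIONS, PATCH, POST, PUT]
          ForwardedValues:
            QueryString: true
      CustomErrorResponses:            # deep links (/home, /about) fall back to the SPA
        - ErrorCode: 404
          ResponseCode: 200
          ResponsePagePath: /index.html
```

Because the app and the API then share one hostname, the browser never makes a cross-origin request.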

The only thing you need to care about is CORS on whatever server you use to host your backend in AWS.
More documentation on CORS:
https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS
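For example, if the SPA and the API do end up on different origins, a minimal sketch of enabling CORS in an Express-based NodeJS backend could look like the following; the cors middleware package and the mydogs.com origin are assumptions, not part of the question's setup.

```javascript
const express = require('express');
const cors = require('cors'); // npm install cors

const app = express();

// Allow only the on-premise SPA origin to call this API
app.use(cors({ origin: 'https://mydogs.com' }));

app.get('/api/dogs', (req, res) => {
  res.json([{ name: 'Rex' }]); // placeholder payload
});

app.listen(3000);
```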
Hope it helps.

A good approach is to have two separate instances: one instance to serve your API (Application Programming Interface) and another one to serve your SPA (Single Page Application).
For the API server you may want a more robust service, because it is the one that will suffer the most, receiving tons of requests from all client instances, so it needs more performance, bandwidth, etc. In addition, you probably want your API server to be scalable when needed (depending on the load over it); maybe not, but it is something to keep in mind if your application is supposed to grow fast. So you may invest a little bit more in this one.
The SPA server, on the other hand, only serves static resources (if you're not using server-side rendering), so it is supposed to be cheaper (if not free). Furthermore, all it does is serve the application resources once; the application actually runs on the client, and most files end up cached by the browser. So you don't need to invest much in this one.
Anyhow, your question about which service fits this type of application best can't really be answered, because it depends on how your application will be consumed by clients: how many requests, how much download traffic, and how much storage your app needs.
Amazon EC2 instance types

Related

Connect to multiple microservices using the same subdomain from React

I am having trouble understanding how to use a microservices model. The idea of microservices is that I have multiple local servers, each listening on a different port. Connecting to these local servers is easy to do locally (e.g., from an Express-hosted website). But if I am using a frontend application, such as React, how am I supposed to call the different APIs?
The only solution I can think of is to create a subdomain per API, but this seems far-fetched and impractical, since I would need to create a lot of entries in the name server (e.g., Cloudflare).
If I am using an application like Apache or Nginx, is there a way to publicly access the APIs using a single domain? Or using sub-subdomains such as api1.subdomain.domain.com, api2.subdomain.domain.com, etc., but without adding each of these subdomains to the name server?
An alternative I can think of is creating a public API whose job is to connect to local services, but this seems to defeat the purpose of microservices.
I can't find anything online and all tutorials always use localhost which does not work in production code.
Thanks in advance!
You should research API Gateways / Edge-Services.
Personally, I like hosting the containers for microservices in Kubernetes, forwarding all traffic for *.mydomain.tld to the Kubernetes cluster, and configuring the load balancing there (in this case, which subdomain should be routed to which service).
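The same gateway idea also works without Kubernetes: a single Nginx server can expose several local services under one public domain by path. A minimal sketch, where the two upstream ports and the path names are hypothetical:

```nginx
server {
    listen 80;
    server_name api.mydomain.tld;

    # users service on local port 3001 (hypothetical)
    location /users/ {
        proxy_pass http://127.0.0.1:3001/;
    }

    # orders service on local port 3002 (hypothetical)
    location /orders/ {
        proxy_pass http://127.0.0.1:3002/;
    }
}
```

The React app then calls /users/... and /orders/... on one domain, so only a single name-server entry is needed.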

Service Fabric (On-premise) Routing to Multi-tenancy Containerized Application

I'm trying to get a proof of concept going for a multi-tenancy containerized ASP.NET MVC application in Service Fabric. The idea is that each customer would get 1+ instances of the application spread across the cluster. One thing I'm having trouble getting mapped out is routing.
Each app would be partitioned similar to this SO answer. The plan so far is to have an external load balancer route each request to the SF Reverse Proxy service.
So for instance:
tenant1.myapp.com would get routed to the reverse proxy at <SF cluster node>:19081/myapp/tenant1 (19081 is the default port for the SF Reverse Proxy), tenant2.myapp.com -> <SF Cluster Node>:19081/myapp/tenant2, etc., and then the proxy would route it to the correct node:port where an instance of the application is listening.
Since each application has to be mapped to a different port, the plan is for SF to dynamically assign a port on creation of each app. This doesn't seem entirely scalable, since we could theoretically hit a port limit (~65k).
My questions then are, is this a valid/suggested approach? Are there better approaches? Are there things I'm missing/overlooking? I'm new to SF so any help/insight would be appreciated!
I don't think the ephemeral port limit will be an issue for you; it is likely that you will consume all server resources (CPU + memory) even before you consume half of those ports.
What you need is possible, but it will require you to create a script or an application that is responsible for creating and managing the configuration of the deployed service instances.
I would not use the built-in reverse proxy; it is very limited, and for what you want it will just add extra configuration with no benefit.
At the moment I see Traefik as the most suitable solution. Traefik enables you to route specific domains to specific services, which is exactly what you want.
Because you will use multiple domains, it will require dynamic configuration that is not provided out of the box; this is why I suggested creating a separate application to deploy these instances. At a very high level, the steps would be:
You define your service with the Traefik default rules as shown here
From your application manager, you deploy a new named instance of this service for the new tenant
After the instance is deployed, you configure it to listen on a specific domain, setting the rule traefik.frontend.rule=Host:tenant1.myapp.com with the correct tenant name
You might have to add some extra configurations, but this will lead you to the right path.
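As a hedged sketch of that last step: with the Traefik Service Fabric provider, routing rules are attached to a service as labels, which can be declared in the service's ServiceManifest.xml extension (the service type name is hypothetical, and the exact label keys your Traefik version expects should be checked against the docs linked below):

```xml
<ServiceTypes>
  <StatelessServiceType ServiceTypeName="MyAppType">
    <Extensions>
      <Extension Name="Traefik">
        <!-- Label keys are an assumption based on the Traefik 1.x SF provider -->
        <Labels xmlns="http://schemas.microsoft.com/2015/03/fabact-no-schema">
          <Label Key="traefik.frontend.rule">Host:tenant1.myapp.com</Label>
          <Label Key="traefik.enable">true</Label>
        </Labels>
      </Extension>
    </Extensions>
  </StatelessServiceType>
</ServiceTypes>
```

For per-tenant named instances, your management application would apply the Host rule with the correct tenant domain after deployment rather than hard-coding it in the manifest.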
Regarding the cluster architecture, you could do it in many ways. For a start, I would recommend keeping it simple: one FrontEnd node type containing the Traefik services and another BackEnd node type for your services. From there you can decide how to plan the cluster properly; there are already many SO answers on how to define the cluster.
Please see more info on the following links:
https://blog.techfabric.io/using-traefik-reverse-proxy-for-securing-microservices-on-azure-service-fabric/
https://docs.traefik.io/configuration/backends/servicefabric/
Assuming you don't need an instance on every node, you can have up to (nodecount * 65K) services, which would make it scalable again.
Have a look at Azure API Management and Traefik, which have some SF integration options. These work a lot more nicely than the limited built-in reverse proxy; for example, they offer routing rules.

Is it best practice to connect directly to an AWS DB instance in an app

I am new to web development and have seen posts such as these. If one is using AWS and is connecting to an AWS RDS instance through Node, is that still considered a direct connection, as opposed to a web service?
You're probably going to get a bunch of conflicting opinions on this. My personal opinion is that a web service in front of your database makes sense in some scenarios. Multiple applications connecting to the web service instead of directly to the db gives you several advantages: security, caching, etc.
That being said, if this is just a single app, then most of those advantages disappear and in fact just make things more complex for you. You're going to have to set up your web service for the db as well as your actual code.
If one is using AWS and is connecting to an AWS RDS instance through Node, is that still considered a direct connection, as opposed to a web service?
No. If Node.js is running on a server or in "serverless" containers (e.g., AWS Lambda), that is not a direct connection. That is a web service, and that's what you want.
A direct connection means the app connects to the database itself... but that requires embedding credentials in the app.
You do not want to embed anything in your app that you would not willingly hand over to an arbitrary user -- such as database credentials and API keys -- because you cannot trust that the app won't be reverse-engineered.
You should design the app in such a way that you would have no security concerns if its entire source code were exposed, because knowing everything about the app's internals would give a malicious actor no valuable information. How? The code on the server side (e.g., in Node.js) should treat every request from the app as potentially suspicious and untrustworthy, and validate every request to do anything.
This layer of separation is one of the strongest reasons why you never give the app direct access to the database: code running in a trusted place, your web server/API layer, needs to vet every database interaction. This topology also keeps app users from tying up resources on the database server when they aren't actually interacting with the database, something that is far less practical with a direct connection.
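A minimal sketch of that server-side vetting, assuming an Express API in front of a PostgreSQL RDS instance via the pg client (the table and route names are hypothetical): credentials live only on the server, and every input is validated before it reaches the database.

```javascript
const express = require('express');
const { Pool } = require('pg'); // npm install pg

// Credentials stay on the server (environment variables), never in the app
const pool = new Pool({
  host: process.env.RDS_HOST,
  user: process.env.RDS_USER,
  password: process.env.RDS_PASSWORD,
  database: process.env.RDS_DB,
});

const app = express();

app.get('/api/items/:id', async (req, res) => {
  // Treat every request as untrusted: validate before touching the db
  const id = Number(req.params.id);
  if (!Number.isInteger(id) || id <= 0) {
    return res.status(400).json({ error: 'invalid id' });
  }
  try {
    // Parameterized query: client input never becomes SQL text
    const { rows } = await pool.query('SELECT * FROM items WHERE id = $1', [id]);
    res.json(rows);
  } catch (err) {
    res.status(500).json({ error: 'database error' });
  }
});

app.listen(3000);
```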

How to deploy an Angular app that relies completely on an external API to retrieve and store data?

For an Angular app that relies completely on an external API to retrieve and store data, is NodeJS necessary for deployment? What are the other possible methods of deployment? Currently I use it for local development and plan on using it in combination with Nginx for production. However, NodeJS is not doing anything except serving index.html. So should I remove NodeJS altogether and simply use Nginx alone?
One solution is to host it using any web host; they can all equally host HTML-only sites, and this solution is pretty inexpensive. Hosts like Hostgator, web.com, etc., will allow you to upload the site via FTP.
A second choice is to host it using a web server (Nginx), but this is probably the most costly: you can run your own server in any cloud service (on Amazon that would be EC2, for instance) and host your files there. This is probably not a good option for you. The only reason to use this type of solution is if you need the server to run code, for example if you were using Node to talk to a database.
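That said, if you do keep Nginx (answering the "Nginx alone" part of the question), a minimal sketch of a config that serves just the built Angular files, with deep links falling back to index.html, might look like this; the server name and root path are hypothetical:

```nginx
server {
    listen 80;
    server_name myapp.example.com;

    root /var/www/myapp/dist;   # built Angular app (hypothetical path)
    index index.html;

    # routes like /home or /about fall back to the SPA entry point
    location / {
        try_files $uri $uri/ /index.html;
    }
}
```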
A 'pro' option may be to put the files in S3 on AWS and host the site that way; it is pretty inexpensive.
Here is a link explaining how to host on Amazon - http://docs.aws.amazon.com/AmazonS3/latest/dev/WebsiteHosting.html
http://docs.aws.amazon.com/AmazonS3/latest/dev/website-hosting-custom-domain-walkthrough.html
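A hedged sketch of that S3 option with the AWS CLI (the bucket name and build directory are hypothetical; the walkthroughs linked above cover pointing a custom domain at the bucket):

```bash
# create a bucket and enable static website hosting
aws s3 mb s3://my-angular-app
aws s3 website s3://my-angular-app --index-document index.html --error-document index.html

# upload the built Angular app
aws s3 sync ./dist s3://my-angular-app --acl public-read
```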

Single Page Application Server Separation of Concerns

I'm just in the process of putting together the base framework for a multi-tenant enterprise system.
The client side will be an ASP.NET MVC webpage talking to the database through the ASP.NET Web API via AJAX.
My question really resides around scalability. Should I separate the client from the server? I.e., client-side/frontend code and views in one project, and the Web API in a separate server project.
Therefore, if one server (server A) begins to peak out on load/size, then all we need to do is create another server instance (server B), and all new customers will point to the Web APIs on server B.
Or should it all be integrated as one project, scaling out the SQL Server side of things as load increases (dynamic cloud scaling)?
Need some advice before throwing our hats into the ring.
Thanks in advance
We've gone with the route of separating out the API and the single page application. The reason here is that it forces you to treat the single page application as just another client, which in turn results in an API that provides all the functionality you need for a full client...
In terms of deployment, we stick the single page application at the website root, with a /api application containing the API deployment. On premise we can then use Application Request Routing or some other content-aware routing mechanism to send things out to different servers if necessary. In Azure we use the multiple-websites-per-role mechanism (http://msdn.microsoft.com/en-us/library/windowsazure/gg433110.aspx) and scale this role depending upon load.
Appearing to live in the same domain makes things easier because you don't have to bother with the ugliness that is JSONP or CORS!
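For instance, because the SPA is served from the same domain as the /api application, client calls can use relative paths; a small AngularJS sketch (the module name and /api/orders endpoint are hypothetical):

```javascript
// Same-origin request: no JSONP workaround and no CORS preflight needed
angular.module('app').controller('OrdersCtrl', function ($http) {
  var vm = this;
  $http.get('/api/orders').then(function (response) {
    vm.orders = response.data;
  });
});
```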
Cheers,
Dean
