I have two frontend apps. The first uses static generation (for SEO purposes), and the second uses client-side rendering (for everything behind the auth flow).
I want to have both of them under the same purchased domain, with base paths something like:
mydomain.com/public/* : all my public-facing, statically generated content, served by the first app.
mydomain.com/auth/* : everything that lies behind the auth flow, built with the second app.
So the question is:
How do I map these two separate apps to two base paths under the same domain? I was reading the post Share an API Endpoint Between Services, but it seems to be aimed at the backend.
In case anyone is interested in why there are two separate apps:
It's because the static generation is done with Next.js, while the client-side part is a plain create-react-app. This post explains why this combination needs to be deployed separately.
Refer to this:
Create two separate S3 buckets for your Next.js and React apps and attach them as origins to a single CloudFront distribution. Then attach a Lambda@Edge function to the distribution and route each request to the appropriate origin based on whether request.uri.startsWith('/public') or not.
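For example, a minimal sketch of that Lambda@Edge function attached as an "origin request" trigger (the bucket domain names are placeholders):

```ts
// origin-router.ts — Lambda@Edge "origin request" handler (sketch; bucket names are placeholders)
import { CloudFrontRequestHandler } from 'aws-lambda';

// Hypothetical S3 endpoints; replace with your real bucket domains.
const NEXT_BUCKET = 'my-next-app.s3.amazonaws.com';   // serves /public/*
const REACT_BUCKET = 'my-react-app.s3.amazonaws.com'; // serves everything else (/auth/*)

export const handler: CloudFrontRequestHandler = async (event) => {
  const request = event.Records[0].cf.request;
  const targetDomain = request.uri.startsWith('/public') ? NEXT_BUCKET : REACT_BUCKET;

  // Point the request at the chosen S3 origin.
  request.origin = {
    s3: {
      domainName: targetDomain,
      region: '',
      authMethod: 'none',
      path: '',
      customHeaders: {},
    },
  };
  // The Host header must match the new origin.
  request.headers['host'] = [{ key: 'Host', value: targetDomain }];

  return request;
};
```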
I'm asking for some help; maybe I'm misunderstanding some concepts, but I don't know how to solve this requirement:
I have a create-react-app application deployed using Netlify.
Also my backend is deployed on AWS ECS.
I'm using AWS route 53 for routing frontend and backend to myapp.mydomain.com and api.mydomain.com respectively.
A client has a specific network configuration, so only requests to *.mydomain.com are allowed from their organization.
The problem lies in the frontend because it uses many third-party libraries. For example, checking the network tab in the browser, I noticed the following:
I'm using a giphy library, so it makes requests to api.giphy.com.
I'm using some google stuff like analytics and fonts, so I assume it will make requests to some google domain.
And so on...
As I understand it, these kinds of fetches will be blocked by the client's network "firewall".
Adding more rules to said firewall is not an option (that was my first proposal to the client, but they only allow *.mydomain.com and nothing more).
So my plan B was to implement a proxy... but I don't have any idea how to implement such a solution.
Is it possible to "catch" third-party fetches and redirect them to my backend, e.g. api.mydomain.com/forward, so that my backend makes the real fetch and returns its response to the frontend?
The desired result, using the same example, is that all fetches made to api.giphy.com are redirected to api.mydomain.com/forward/giphy, and the same for all other third-party fetches. Roughly what I have in mind is sketched below.
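A minimal sketch, assuming an Express backend and http-proxy-middleware (the /forward/giphy prefix and the target are just examples, not a final design):

```ts
// forward-proxy.ts — sketch of a backend-side forwarder for third-party APIs
import express from 'express';
import { createProxyMiddleware } from 'http-proxy-middleware';

const app = express();

// Anything the frontend sends to api.mydomain.com/forward/giphy/* is relayed to api.giphy.com/*.
app.use(
  '/forward/giphy',
  createProxyMiddleware({
    target: 'https://api.giphy.com',
    changeOrigin: true,                     // set the Host header to the target
    pathRewrite: { '^/forward/giphy': '' }, // strip the local prefix before forwarding
  })
);

// Repeat (or generalize) for other third parties, e.g. /forward/fonts, /forward/analytics ...

app.listen(3000);
```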
I Googled a lot and now I'm very confused, any help is welcome!! Thanks devs!
I just started using nx.dev to migrate from a single-app repo to a monorepo, as I have added a very basic static documentation app to the mix, which is deployed to the subdomain docs.company.com.
My main application is currently deployed to the root domain company.com. However, one could also think of it as several apps, e.g. app1, app2, and admin. I do like the idea of having everything as a single application, as it can be easily deployed with Nx and Vercel's monorepo support.
I am just not sure what the go-to approach is here. Of course, I could split up the main app into multiple apps and deploy them independently to subdomains such as:
admin.company.com
app1.company.com
app2.company.com
If I understood correctly, I could also use Next.js multi-zones support if I do not like subdomains and want to use one domain instead (roughly as sketched below).
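From the multi-zones docs, my understanding is that it would look roughly like this in the main app's next.config.js (the docs deployment URL here is a placeholder):

```ts
// next.config.js of the "main" zone — sketch, assuming the docs app is deployed separately
module.exports = {
  async rewrites() {
    return [
      // Requests to company.com/docs/* are transparently served by the docs deployment.
      { source: '/docs', destination: 'https://my-docs-deployment.vercel.app/docs' },
      { source: '/docs/:path*', destination: 'https://my-docs-deployment.vercel.app/docs/:path*' },
    ];
  },
};
```

The docs app would also need a matching basePath so its own assets and links resolve under /docs.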
This main app, which could logically be split into multiple apps, is non-public and requires authentication. It is completely client-side rendered, while Apollo Client is used to interact with the GraphQL API. This API server also sets an http-only JWT cookie for authentication. I am quite sure I could mitigate the subdomain issue here by setting the cookie's Domain attribute, so that the cookie is also valid for subdomains.
However, as the cookie is http-only I cannot access it from the client and need to track the logged-in status in my global state management (which is overmind.js). Splitting up the app would add some extra complexity to persist global state between subdomain apps.
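To make the cookie idea concrete, this is roughly what I mean (a hypothetical Express-style handler; in reality the GraphQL API server sets the cookie):

```ts
// sketch: issuing the http-only JWT cookie so it is shared across *.company.com subdomains
import express from 'express';

const app = express();

app.post('/login', (_req, res) => {
  const jwt = '...'; // issued by the API after verifying credentials (placeholder)
  res.cookie('session', jwt, {
    httpOnly: true,         // still not readable from client-side JS
    secure: true,
    sameSite: 'lax',
    domain: '.company.com', // valid for app1.company.com, admin.company.com, ...
  });
  res.sendStatus(204);
});

app.listen(4000);
```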
I am not sure whether this is worth it or if it is better sticking to the one app approach. I would love to hear your opinion and maybe I have forgotten some major issues. Some questions that come to my mind:
What are the advantages and disadvantages of using subdomains?
Is it more preferable to use multi zones and only one app?
How could the auth/global state issues be solved if switching to multiple apps?
What are your thoughts?
Background:
We use a single-tenant-per-application model
All tenants run the same frontend and backend code that is deployed and hosted separately under different subdomains for each tenant
We provision a separate Azure AD application for each tenant, resulting in a different ClientID for each
Problem:
As per the ADAL.js wiki found here: https://github.com/AzureAD/azure-activedirectory-library-for-js/wiki/Config-authentication-context, our frontend application must specify the ClientID and the backend API endpoints when initializing ADAL.
Since each of our tenants has its own ClientID, we ended up adding all the possible ClientIDs and endpoints to our code and figuring out the right values at runtime based on the current URL. This obviously doesn't scale very well, as it requires a code change for each new tenant. We are thinking of moving this work into the CI/CD process, but are trying to understand whether there is a better solution.
Is there a better way to manage multiple, single-tenant apps with ADAL js?
Since each instance of your application is registered separately (and thus has its own ClientID), ADAL.js itself doesn't provide a better solution.
You can either work with Angular environments, e.g.
environment.tenant1.ts
environment.tenant2.ts
and create a build artifact for each tenant using ng build --prod --configuration=tenant1. I don't like this solution, since you end up with multiple build artifacts.
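For reference, a sketch of this per-tenant environment approach (the values and paths are just placeholders):

```ts
// src/environments/environment.tenant1.ts — placeholder values
export const environment = {
  production: true,
  clientId: '11111111-2222-3333-4444-555555555555',
  apiEndpoint: 'https://tenant1.example.com/api',
};

// angular.json then maps the build configuration to this file, roughly:
// "configurations": {
//   "tenant1": {
//     "fileReplacements": [
//       { "replace": "src/environments/environment.ts",
//         "with": "src/environments/environment.tenant1.ts" }
//     ]
//   }
// }
```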
Or you expose a middleware / REST API that returns the configuration for a specific client by its URL. This will be the only endpoint your client needs to know. However, you have to ensure the middleware is always up (single point of failure).
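A rough sketch of this second option, assuming a hypothetical /api/auth-config endpoint that maps the current hostname to its tenant's values (the config keys follow the ADAL.js wiki linked in the question):

```ts
// bootstrap-adal.ts — sketch: fetch per-tenant config at startup, then initialize ADAL
import AuthenticationContext from 'adal-angular';

interface TenantAuthConfig {
  clientId: string;
  tenant: string;
  apiEndpoint: string;
}

async function initAuth(): Promise<AuthenticationContext> {
  // Hypothetical endpoint: returns the ClientID/endpoints for the tenant owning this hostname.
  const res = await fetch(`/api/auth-config?host=${window.location.hostname}`);
  const cfg: TenantAuthConfig = await res.json();

  return new AuthenticationContext({
    clientId: cfg.clientId,
    tenant: cfg.tenant,
    endpoints: { [cfg.apiEndpoint]: cfg.clientId },
    cacheLocation: 'localStorage',
  });
}

initAuth().then((authContext) => {
  // continue with login / token acquisition as before
  if (!authContext.getCachedUser()) {
    authContext.login();
  }
});
```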
Maybe this is a really basic question, but how do you architect your system such that your single-page application is hosted on-premises under some hostname, say mydogs.com, while your application services code (as well as the database) is hosted in the cloud? For example, let's say you spin up an Amazon EC2 Container Service cluster using Docker, running a Node.js server. The hostnames will all look like ec2_some_id.amazon.com. What sits in front of the Amazon EC2 instances that my AngularJS app connects to? What architecture facilitates this type of app, especially with AWS-based services?
One of the important aspects of setting up the web application and the backend is to serve them from a single domain, avoiding cross-origin requests (CORS). To do this, you can use AWS CloudFront as a proxy, where routing happens based on URL paths.
For example, you can point the root path to index.html while routing /api/* requests to the backend endpoint running on EC2.
It's also important for your Angular application to use full URL paths. One of the challenges with these is that, for routes such as /home, /about, etc., the browser will request a page from the backend for that particular path. Since it's a single-page application, you won't have server pages for /home, /about, and so on. This is where you can set up custom error pages in CloudFront so that all not-found routes are also forwarded to index.html (which serves the AngularJS app).
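As a sketch of this setup (using the AWS CDK purely for illustration; the bucket and backend hostname are placeholders):

```ts
// cloudfront-spa-api.ts — CDK sketch: one distribution, path-based routing, SPA fallback
import { Stack, StackProps } from 'aws-cdk-lib';
import * as s3 from 'aws-cdk-lib/aws-s3';
import * as cloudfront from 'aws-cdk-lib/aws-cloudfront';
import * as origins from 'aws-cdk-lib/aws-cloudfront-origins';
import { Construct } from 'constructs';

export class SpaApiStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    const spaBucket = new s3.Bucket(this, 'SpaBucket'); // holds index.html + static assets

    new cloudfront.Distribution(this, 'SpaDistribution', {
      defaultRootObject: 'index.html',
      defaultBehavior: { origin: new origins.S3Origin(spaBucket) },
      additionalBehaviors: {
        // /api/* goes to the backend (the EC2 / load balancer hostname is a placeholder).
        '/api/*': {
          origin: new origins.HttpOrigin('backend.example.com'),
          allowedMethods: cloudfront.AllowedMethods.ALLOW_ALL,
          cachePolicy: cloudfront.CachePolicy.CACHING_DISABLED,
        },
      },
      // Unknown paths (/home, /about, ...) fall back to index.html so the SPA router takes over.
      errorResponses: [
        { httpStatus: 403, responseHttpStatus: 200, responsePagePath: '/index.html' },
        { httpStatus: 404, responseHttpStatus: 200, responsePagePath: '/index.html' },
      ],
    });
  }
}
```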
The only thing you need to take care of is CORS on whatever server you use to host your backend in AWS.
More Doc on CORS:
https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS
Hope it helps.
A good approach is to have two separate instances: one to serve your API (Application Programming Interface) and another one to serve your SPA (Single Page Application).
For the API server you may want a more robust service, because it's the one that will take the most load, receiving tons of requests from all client instances, so it needs more performance, bandwidth, etc. In addition, you probably want your API server to be scalable when needed (depending on the load on it); maybe not, but it is something to keep in mind if your application is supposed to grow fast. So you may invest a little bit more in this one.
The SPA server, on the other hand, only serves static resources (if you're not using server-side rendering), so it is supposed to be cheaper (if not free). Furthermore, all it does is serve the application resources once; the application actually runs on the client, and most files will end up cached by the browser. So you don't need to invest much in this one.
Anyhow, your question about which service fits this type of application best can't really be answered, because it doesn't say much about how your application will be consumed by clients: how many requests, how much download traffic, or how much storage your app needs. You'll have to find the service that matches those requirements.
Amazon EC2 instance types
So, I've been fiddling with some isomorphic React + Flux lately and have found some concepts quite confusing, to be honest. I've been looking into best practices for how to structure isomorphic apps and am looking for advice.
Suppose you are creating a webapp as well as a mobile app backed by the same REST API. Do you bundle your REST API together with the webapp? I've seen people advocating both bundling and having a separate codebase for the REST API.
Any advice or suggested reading is appreciated!
Fluxible (at least judging from the examples) does advocate using a service layer inside the application, calling it directly from the server and via XHR from the client, without duplicating the code (a rough sketch of the idea follows below).
https://github.com/gpbl/isomorphic500/blob/master/src/app.js
This is an example I followed religiously while building the isomorphic app
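To illustrate the idea, here is a generic sketch of the pattern (not Fluxible's actual API; all names are made up):

```ts
// photo-service.ts — one service interface, two transports (generic sketch)
export interface Photo {
  id: string;
  url: string;
}

export interface PhotoService {
  getPhotos(): Promise<Photo[]>;
}

// Placeholder for a direct database / internal call available only on the server.
declare function loadPhotosFromDatabase(): Promise<Photo[]>;

// On the server, skip the HTTP hop and hit the data source directly.
const serverService: PhotoService = {
  getPhotos: () => loadPhotosFromDatabase(),
};

// In the browser, the same interface is fulfilled over the REST API.
const clientService: PhotoService = {
  getPhotos: () => fetch('/api/photos').then((res) => res.json()),
};

// Components and stores depend only on PhotoService and never duplicate this logic.
export const photoService: PhotoService =
  typeof window === 'undefined' ? serverService : clientService;
```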
The idea is very simple. Let's assume you have an SPA and a backend which provides a REST API.
SPA (in browser) <====> Backend REST API
In the isomorphic case, it is absolutely the same, except that you will run your SPA on the server too.
So, it will work like that:
SPA (in browser) <====> Backend REST API
SPA (on server) <====> Backend REST API
If you have a mobile app then it will be:
SPA (in browser) <====> Backend REST API
SPA (on server) <====> Backend REST API
Mobile app <====> Backend REST API
Here is a real isomorphic production application that we opened up to the community - https://github.com/WebbyLab/itsquiz-wall . You can just clone it and run it.
Here is my post, which describes all the ideas behind the app in detail.
Let's see if I can help you.
Please keep in mind that isomorphic JavaScript is quite new, and it is hard to find clear definitions for every use case.
By definition, if you create a RESTful application you should have a clear separation between server and client:
"A uniform interface separates clients from servers. This separation
of concerns means that, for example, clients are not concerned with
data storage, which remains internal to each server, so that the
portability of client code is improved. Servers are not concerned with
the user interface or user state, so that servers can be simpler and
more scalable. Servers and clients may also be replaced and developed
independently, as long as the interface between them is not altered."
Regarding isomorphic applications, the main benefits are:
Not having a blank page when the user first enters the site (points for UX)
Therefore it is SEO friendly
And you can share logic between server and client (for example, React components)
This means you should deliver rendered React components from the server to the client when the user first enters a URL. After that, you keep using your REST API as usual, rendering everything on the client.
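A minimal sketch of that first server render, using Express and React purely for illustration (the API URL and App component are placeholders):

```tsx
// server.tsx — sketch: render the React tree on the server, then let the client take over
import express from 'express';
import React from 'react';
import { renderToString } from 'react-dom/server';
import { App } from './App'; // your shared root component (placeholder)

const server = express();

server.get('*', async (_req, res) => {
  // Fetch initial data from the same REST API the client would use (placeholder URL).
  const initialData = await fetch('https://api.example.com/feed').then((r) => r.json());

  const html = renderToString(<App initialData={initialData} />);

  res.send(`<!doctype html>
<html>
  <body>
    <div id="root">${html}</div>
    <script>window.__INITIAL_DATA__ = ${JSON.stringify(initialData)};</script>
    <script src="/bundle.js"></script>
  </body>
</html>`);
});

server.listen(3000);

// client.tsx (in the browser bundle):
//   import { hydrateRoot } from 'react-dom/client';
//   hydrateRoot(document.getElementById('root')!, <App initialData={window.__INITIAL_DATA__} />);
```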
If you can, share more details about your case and it will be easier to help.
I wouldn't recommend bundling the REST API into the browser code, as you would be limited to browser-compatible modules in your API, and you wouldn't be able to make any direct database calls.
There's a library that makes it so you can build your APIs in an isomorphic fashion, and re-use it in the client and server without bloating or breaking the bundle. This is what we're currently using in a big single-page application.
It's called Isomorphine, and you can find it here: https://github.com/d-oliveros/isomorphine.
Disclaimer: I'm the author of this library.