Which parts of a built NextJS app should I protect? - reactjs

I'm running a NextJS app from an Express server and I have authentication set up. The Express server will forward most requests to the NextJS app with some preprocessing. In the NextJS app, there are the explicit routes and api routes that I set up myself in the /pages directory of the source code, but there are also a lot of assets being requested from paths starting with /_next/.
When running an authenticated service, you clearly don't have to protect ALL of your assets. For example, robots.txt and favicon.ico do not need authentication, and the same goes for many static assets.
My question is: which of NextJS's own routes should I protect? It seems that /_next/static/* mostly contains static build output that doesn't depend on any user-specific data, while /_next/data/* contains personalized information that should be protected, so it seems reasonable to protect the latter but not the former.
Are there other routes that NextJS serves from a built app (other than routes to assets in the /public folder and routes in the pages directory) that I should know about, and is it reasonable to protect only /_next/data/*, or should I protect all of /_next/*?
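For illustration only, here is a minimal sketch of how the split described above could look in an Express custom server. It assumes a hypothetical requireAuth middleware and the standard Next request handler; this is not an official NextJS recommendation, just one way to express the question's assumptions in code:

```js
// Sketch under the assumptions stated above: leave /_next/static/* and a few
// public files open, require auth for /_next/data/* and everything else.
// requireAuth is a hypothetical middleware that rejects unauthenticated requests.
const express = require("express");
const next = require("next");
const requireAuth = require("./require-auth");

const app = next({ dev: false });
const handle = app.getRequestHandler();

app.prepare().then(() => {
  const server = express();

  // Public: build artifacts and well-known static files.
  server.all("/_next/static/*", (req, res) => handle(req, res));
  server.get(["/favicon.ico", "/robots.txt"], (req, res) => handle(req, res));

  // Protected: personalized JSON payloads used for client-side navigation.
  server.all("/_next/data/*", requireAuth, (req, res) => handle(req, res));

  // Protected: your own pages and API routes.
  server.all("*", requireAuth, (req, res) => handle(req, res));

  server.listen(3000);
});
```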

Related

Serving a JSON file from an AWS Amplify React Project

We have an AWS Amplify React project associated with our domain, which means all files and content are served through the underlying React router.
In order to support backend API communications with Microsoft APIs, we need to host a specific JSON file at a particular location within our domain, such as mydomain.com/.well-known/microsoft-identity-association.json.
I am unsure how to do this. My first question is whether this is best accomplished via static routes within the React router or, instead, by configuring CloudFront and Route 53 to serve up the JSON file for this exact URL.
I have been trying the second approach and have created a CloudFront distribution for a specific S3 bucket storing the JSON file. I have named the S3 bucket "mydomain" with a subfolder ".well-known" containing a JSON file named "microsoft-identity-association.json". My problem is that I do not know how to configure Route 53 to route to this distribution, as my root domain (mydomain.com) is associated with my Amplify project and is handled by the React router. I'm not sure if I can somehow configure a specific route or alias to serve up the exact JSON file.
I have reviewed this post (How do I return a json file from s3 to a specific url, but only that url) but it seems to be addressing a slightly different problem.
Any and all guidance appreciated.
Addressed this issue by splitting my site. I used a static S3-hosted site for public pages (including the JSON file) and redirected the React app to a subdomain.

How to direct www to non-www domain on Google App Engine (GAE)

How do I direct the www. subdomain to just domain.tld without www? I'm used to Firebase doing this automatically. Should I look into configuring app.yaml, dispatch.yaml, or another method?
What you're describing is called a "naked domain", and this is covered in the documentation on Custom Domains. It provides the steps for mapping a custom domain to your app and for updating the DNS records at your domain registrar once your service has been mapped to the custom domain in App Engine.
To redirect your requests, you can use wildcard mappings with services in App Engine via the dispatch.yaml file. You can find instructions on how to do that here. If you would like to know more about routing requests, you can take a look at this documentation as well, which also covers creating a dispatch file. Handlers can only handle URLs by executing application code or by serving static files uploaded with the code, such as images, CSS, or JavaScript; they cannot directly redirect one URL to another.
You would need to handle the URL with a script that performs the redirect.
The comment shows a complete example in which the handler runs main.py, which then performs the redirect.
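The referenced main.py isn't reproduced here; purely as an illustration of the same idea (App Engine's standard environment also runs Node.js), a minimal www-to-naked-domain redirect might look roughly like this:

```js
// Illustrative sketch only: redirect www.domain.tld to domain.tld.
// The answer above refers to a Python main.py; this expresses the same
// idea with Express on App Engine's Node.js runtime.
const express = require("express");
const app = express();

app.use((req, res, next) => {
  const host = req.headers.host || "";
  if (host.startsWith("www.")) {
    // Preserve the path and query string while dropping the www. prefix.
    return res.redirect(301, `https://${host.slice(4)}${req.originalUrl}`);
  }
  next();
});

app.get("/", (req, res) => res.send("Served from the naked domain"));

// App Engine provides the port via the PORT environment variable.
app.listen(process.env.PORT || 8080);
```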

Static hosting - ReactJS app on Azure Blob storage with Azure CDN

I would like to host my ReactJS app as a static site on Azure Blob storage. The problem is that Azure Blob doesn't support a default document. To overcome this, I have set up Azure CDN with URL rewrite rules:
for the first source pattern, set to ((?:[^\?]*/)?)($|\?.*)
for the first destination pattern, set to $1index.html$2
for the second source pattern, set to ((?:[^\?]*/)?[^\?/.]+)($|\?.*)
for the second destination pattern, set to $1/index.html$2
This is from Hao's tutorial.
This successfully resolves myapp.azureedge.net, but when client-side routing is used directly, e.g. myapp.azureedge.net/react/route, the app returns ResourceNotFound.
Meaning: when the user enters myapp.azureedge.net/react/route as the URL and tries to navigate to that page, they will get an error.
I suspect I need to redirect every path that does not point to a specific static file to index.html. However, I do not know if that's the right solution or how to achieve it.
Thank you for any help!
Azure Blob storage supports static website hosting now. More information here:
https://learn.microsoft.com/en-us/azure/storage/blobs/storage-blob-static-website
You can host a single-page app without using URL rewrites by setting both the default document and the error document to index.html.
I encountered a similar issue before. Assuming your static files live under an Azure Blob container (here, cdn is the container name) with your scripts and images under cdn/scripts and cdn/images, you could configure URL rewrite rules that set the default page and rewrite all requests to index.html along with any query string, while still letting the scripts and images be accessed correctly.
Additionally, you could use an Azure Web App to host your static website and choose an appropriate pricing tier; for details, see the Pricing calculator.
There is a new Azure Static Web Apps service, currently in preview, that makes it very easy to deploy a modern frontend SPA. You can set up a fallback route (routes.json) to redirect everything to index.html; you can see more here: https://learn.microsoft.com/en-us/azure/static-web-apps/
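As a rough sketch of that fallback (the exact schema may have changed since the preview, so treat this as an assumption and check the linked docs), a routes.json that sends every unmatched path to index.html could look like this:

```json
{
  "routes": [
    {
      "route": "/*",
      "serve": "/index.html",
      "statusCode": 200
    }
  ]
}
```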

How to use NodeJS to combat social sharing and search engine issues when using single-page frameworks like AngularJS

I read an article about social sharing issues in AngularJS and how to combat them by using Apache as a proxy.
The solution is usable for small websites. But if a web app has 20+ different pages, I have to URL-rewrite and create static files for all of them. Moreover, a different stack is added to the app by using PHP and Apache.
Can we use NodeJS as the proxy and rewrite the URLs, and what's the approach?
Is there a way to minimize static file creation?
Is there a way to remove the proxy, URL rewriting, and static files altogether? For example, inside our NodeJS app we could check the user agent; if it is the Facebook bot, Twitter, or the like, we use the request module to download our page and return the raw HTML to them. Is that a plausible solution?
Normally, when someone shares a URL on a social network, that social network requests the page to generate a preview/thumbnail (aka "scrape").
Most likely those scrapers won't run JavaScript, so they need a static HTML version of the page.
The same applies to search engines (even though Google and others are starting to support JavaScript sites).
Here's a good approach for an SPA to still support scrapers:
use history.pushState in Angular to get virtual URLs when navigating through your app (i.e. URLs without a #)
server-side (Node.js or anything else), detect whether a request comes from a user or a bot (e.g. check the User-Agent using this lib https://www.npmjs.com/package/is-bot )
if the request URL has a file extension, it's probably a static resource request (images, .css, .js); proxy it to get the static file
if the request URL is a page (i.e. not a static resource) and comes from a real user, always serve your index.html that loads your Angular app (pro tip: keep this file cached in memory)
if the request URL is a page and comes from a bot, serve a pre-rendered version of the requested URL (bots won't run JavaScript). This is the hard part (side note: ReactJS makes this problem much simpler). You can use a service like https://prerender.io/ : they take care of loading your Angular app and saving each page as HTML (if you're curious, they use a headless/virtual browser in memory called PhantomJS to do that, simulating what a real user would do clicking "Save As..."). You can then request and proxy those prerendered pages to bot requests (like social network scrapers). If you want, it's possible to run a prerender instance on your own servers.
All of this server-side process is implemented in this Express.js middleware by prerender:
https://github.com/prerender/prerender-node/blob/master/index.js
(even if you don't use prerender, you can use that code as an implementation guide)
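For illustration only (this is not the prerender middleware itself), here is a stripped-down Express sketch of the branching described above; the bot regex, paths, and PRERENDER_URL are placeholder assumptions:

```js
// Sketch: bots get a prerendered snapshot, real users get the SPA shell,
// and requests that match real files are served as static resources.
// Requires Node 18+ for the global fetch().
const express = require("express");
const path = require("path");

const app = express();

// Crude user-agent check for illustration; a maintained bot list is better in practice.
const BOT_UA = /facebookexternalhit|twitterbot|googlebot|linkedinbot|slackbot/i;

// Static resources (images, .css, .js) are served directly.
app.use(express.static(path.join(__dirname, "public")));

app.get("*", async (req, res) => {
  if (BOT_UA.test(req.headers["user-agent"] || "")) {
    try {
      // Placeholder for a hosted or self-hosted prerender instance.
      const PRERENDER_URL = process.env.PRERENDER_URL || "http://localhost:3001";
      const snapshot = await fetch(`${PRERENDER_URL}/https://example.com${req.originalUrl}`);
      return res.status(snapshot.status).send(await snapshot.text());
    } catch (err) {
      // Fall through to the SPA shell if prerendering fails.
    }
  }
  // Real users: always serve the SPA shell; the client-side router takes over.
  res.sendFile(path.join(__dirname, "public", "index.html"));
});

app.listen(3000);
```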
Alternatively, here's an implementation example using only nginx:
https://gist.github.com/thoop/8165802

Structuring a Rails and Angular app

I'd like to build a new single-page app using Rails 4 and Angular, with Capistrano for the deployment process.
I want all of the front end to be a static app on Amazon S3, but I'm open-minded about other suggestions.
What's important to me is a fast development process with the ability to scale up easily.
I was wondering what is the best structure I should use:
keep all assets in app/assets and set the Bower path to the vendor directory.
That way I can use Rails precompile methods and enjoy Rails HTML tags for index.html, but I'm not sure it will be easy to upload it to S3 and keep it separate.
keep all assets, including Bower components, in a public/app directory, which keeps it as a completely separate application, but then I need to use Grunt or some other tool for precompiling assets.
any other ideas?
From my experience, I found this approach to work really well:
API app (Rails/Sinatra/Grape/Node/whatever) serves only JSON APIs. Deploys to a server, say api.yourapp.com. Serves Access-Control headers.
Static web app: started by generating an AngularJS, Gulp, Bower app with Yeoman. Deploys to S3 using a gulp AWS deploy module.
There's no real reason to have both views and APIs in the same app or built with the same technology (as in Rails).
Now there are issues:
S3 doesn't handle Angular's HTML5-mode URLs nicely, so a pure S3 website isn't an option.
Facebook doesn't read OpenGraph tags that are not in the source of the page.
I couldn't figure out the state of Google/SEO with Angular apps; I didn't see the content in the search results.
So as a solution I introduced another web server app. Can be based on anything - pure rack, node etc. I chose rack.
Solutions to the problems:
The web server app was hosted on www.yourapp.com and proxied (and cached) requests to S3. It supported all URLs (html5Mode) by simply proxying them to index.html.
OpenGraph meta tags - the API had an endpoint that takes a URL or ID of an object and returns meta-tag information. The web server issues a request to the API once per URL (caching the response) and injects the tags into the served index.html (see the sketch below).
SEO - as middleware, used prerender for rack, which rendered pages on the server.
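The author's web server was rack-based; purely as an illustration of the meta-tag injection idea in Node (the /og endpoint, the response fields, and the domain are assumptions), it might look roughly like this:

```js
// Illustration only: fetch OpenGraph data from the API for the requested URL
// and inject it into the cached index.html before serving the SPA shell.
// Requires Node 18+ for the global fetch(); endpoint and fields are assumed.
const express = require("express");
const fs = require("fs");
const path = require("path");

const app = express();

// Keep the SPA shell cached in memory, as suggested above.
const indexHtml = fs.readFileSync(path.join(__dirname, "index.html"), "utf8");

app.get("*", async (req, res) => {
  try {
    // Hypothetical API endpoint returning e.g. { title, description, image }.
    const r = await fetch(
      `https://api.yourapp.com/og?url=${encodeURIComponent(req.originalUrl)}`
    );
    const og = await r.json();

    const tags =
      `<meta property="og:title" content="${og.title}">` +
      `<meta property="og:description" content="${og.description}">` +
      `<meta property="og:image" content="${og.image}">`;

    // Inject just before </head>; a real implementation would also cache per URL.
    res.send(indexHtml.replace("</head>", `${tags}</head>`));
  } catch (err) {
    res.send(indexHtml); // fall back to the plain shell
  }
});

app.listen(3000);
```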
As a bonus -
Most apps today have a landing page/marketing site and the actual app. Sometimes it's better to maintain these separately. The web server uses a cookie to decide which app to present on www.yourapp.com - the actual app or the marketing site. On sign-in, set a cookie on the client side and voilà.
First, I think there's a bit of confusion here; let me try to clear it up.
There are a couple of ways of achieving this.
Pure client -> API
When you have a static application, there's no need to go through the Rails asset pipeline; there are far better ways to manage assets when you are using the tooling for client-side applications.
For example, your client application will be an Angular application and you will manage assets with a combination of bower (dependencies) and grunt (build and distribution).
There's no point in deploying to S3 with Capistrano; if it's a pure static application, you can use the AWS CLI to just upload your content.
I'd go through a CDN as well. Something like Fastly works really well over Amazon S3.
I have a Rake task that uploads to S3 and then clears the cache on Fastly (if I need to).
As for your Rails application, it would act as an API; it should not have any assets.
Combined
If you have a combined application, some of the actions are served by the server (Rails) and just invoke some client-side code (Angular).
If this is the case, I would go through the Rails asset pipeline and just keep everything in line with Rails best practice, with precompilation before deploy, etc.
It's one of those questions where "it depends" really is the answer; it all depends on what you want to achieve.
When I have a client application, I try to have a pure client and have the server act only as an API, with no assets at all; this way, I separate the concerns.
EDIT 9/9/15
I'd have to say that as long as you can, I'd keep the apps separate.
It's not always possible, especially with more complex apps.
Most apps I have seen in recent months have kept the client-side and server-side code separate; I have seen less use of rails and more use of rails-api because of that (some have even ditched Rails completely for thinner solutions).
