Caching a React/Meteor web page - reactjs

I want to speed up the initial load of a React/Meteor web page.
One of several ideas is to cache data. So far, so good...
I tried this with service workers. That only worked for me for files under the "/public/" folder, but I also want to cache data from e.g. "/client/" so that more of the app is cached.
Is it possible to cache data from other folders as well?
I did pretty much the same as described here under "Step 1 - Add a service worker":
https://dev.to/jankapunkt/transform-any-meteor-app-into-a-pwa-4k44
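For reference, the worker that kind of tutorial registers is essentially a cache-first service worker. A minimal sketch (the cache name and pre-cached paths are illustrative, not the tutorial's exact code):

```javascript
// sw.js - minimal cache-first service worker sketch (illustrative, not the tutorial's exact code)
const CACHE_NAME = 'app-cache-v1'; // bump this to drop old caches

self.addEventListener('install', (event) => {
  // Pre-cache a few known static paths; files in /public/ are served from the site root
  event.waitUntil(
    caches.open(CACHE_NAME).then((cache) => cache.addAll(['/', '/favicon.ico']))
  );
});

self.addEventListener('fetch', (event) => {
  if (event.request.method !== 'GET') return; // only cache GET requests
  event.respondWith(
    caches.match(event.request).then((cached) => {
      if (cached) return cached;
      // Not cached yet: fetch from the network and store a copy
      return fetch(event.request).then((response) => {
        const copy = response.clone();
        caches.open(CACHE_NAME).then((cache) => cache.put(event.request, copy));
        return response;
      });
    })
  );
});
```

Note that in Meteor, files under /client are compiled into the JS/CSS bundle rather than being served as individual URLs, which is why a service worker can only address assets in /public directly; to cache the rest you would have to cache the generated bundle files themselves.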
UPDATE:
We are using this web page only on an intranet without an internet connection.

Things don't really work that way with React and Meteor.
It is normal to deliver a JS bundle of up to 1 MB to your client; a medium-sized app should be looking at 400-500 KB of gzipped bundle size.
Don't use the public folder for assets; put everything in a CDN with edge caching, such as AWS CloudFront (store in S3 and expose via CloudFront), or any other storage.
In your CDN you can set the expiry and cache-control (max-age) headers, and these are used by the client (browser) for caching assets.
Deliver your JS and CSS bundles from a CDN.
Use code splitting extensively (ideally at the routing level; see the sketch after this list).
Whenever possible, use async-loaded libraries for maps, players, etc. instead of NPM packages (which get built into your bundle).
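For example, route-level code splitting with React.lazy and dynamic import() (a sketch assuming react-router-dom v6; the page components are illustrative):

```javascript
import React, { Suspense, lazy } from 'react';
import { BrowserRouter, Routes, Route } from 'react-router-dom';

// Each page becomes its own chunk, loaded on demand instead of in the initial bundle
const Home = lazy(() => import('./pages/Home'));
const Dashboard = lazy(() => import('./pages/Dashboard'));

export const App = () => (
  <BrowserRouter>
    <Suspense fallback={<div>Loading…</div>}>
      <Routes>
        <Route path="/" element={<Home />} />
        <Route path="/dashboard" element={<Dashboard />} />
      </Routes>
    </Suspense>
  </BrowserRouter>
);
```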
With a PWA setup you cache your bundle files, not your public folder. The tutorial you followed for the PWA is incomplete and useless; it only looks at how to get a green badge in the audit and does nothing of any use.
One more thing: your Meteor bundle size impacts the traffic on your Meteor server, which is why you are better off delivering the bundle and all assets from CDNs.
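One way to do the CDN part in Meteor is to rewrite the generated bundle URLs on startup; a sketch assuming the standard webapp package API (check the docs for your Meteor version):

```javascript
// server/cdn.js - point Meteor's generated JS/CSS bundle URLs at a CDN (sketch)
import { Meteor } from 'meteor/meteor';
import { WebAppInternals } from 'meteor/webapp';

Meteor.startup(() => {
  // CDN_URL is a placeholder, e.g. a CloudFront distribution that uses your app as its origin
  const cdnUrl = process.env.CDN_URL;
  if (cdnUrl) {
    WebAppInternals.setBundledJsCssPrefix(cdnUrl);
  }
});
```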
Caching more with service workers only leads to flicker, inconsistencies between tabs and browsers, and errors.

Related

Caching problem on Azure CDN + Cloudflare structure

Our site is being made available with the following structure:
Azure Static Blob Container > CDN > Cloudflare > User.
The React app build is made available in an Azure Static Blob Container that is accessed by an Azure CDN. When we access the app via the CDN URL, we never have a cache problem. We also use Cloudflare to manage the DNS and supposedly improve caching. But when we access the app through Cloudflare, we have a serious cache problem: extremely old versions are returned to users who have accessed the site before.
Even after turning off all the cache options available in Cloudflare's dashboard, and even though its graphs show that cache consumption has dropped, the bug still persists. We were unable to identify where in the structure mentioned above our problem lies.
The problem is that a CDN uses multiple nodes to serve the content. The proper way to 'solve' this is to append a version to the filename or path; this way, whenever you need to change something, the CDN will fetch the latest version. Just using a plain 'app.js' is not enough.
More info:
How to force the browser to reload cached CSS and JavaScript files
https://stackoverflow.com/a/34604256/1384539
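For example, if the app is built with webpack (an assumption here; the same idea applies to any bundler), content-hashed filenames give you this versioning automatically:

```javascript
// webpack.config.js (excerpt) - emit content-hashed filenames so the CDN fetches
// a new object whenever the contents change
module.exports = {
  output: {
    filename: 'static/js/[name].[contenthash:8].js',
    chunkFilename: 'static/js/[name].[contenthash:8].chunk.js',
    publicPath: '/',
  },
};
```

index.html then references something like app.3f9a1c2b.js instead of a plain app.js, so stale CDN or Cloudflare nodes can keep serving the old file while new clients request the new name.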

CloudFront cache invalidation for React app

I have a React/Redux app which is deployed on CloudFront + S3. There is no static hosting enabled on the bucket. I understand that invalidating the cache on a new deployment clears the cache in all the edge locations so that the new changes are served. But what happens to the active prod users when the cache is invalidated? Are they able to continue using the app without any errors? Does it get worse for the active users if the Redux store structure changed in the new version?
Clearing the CloudFront cache will bring up the fresh content from your origin. However, that would not affect the existing production users; they would continue to be served the cached content as long as their session continues.
That being said, they would be served the fresh content when their session restarts.
There would be no errors whatsoever.
Hope this helps.
I've been wondering the same thing for my React website, which is made up of many chunks. I wouldn't worry about your Redux state unless you're saving it to a cookie/localStorage and loading it again. In that case you could write some migration check during loading, or even version it in some way.
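If you do persist the store, a simple version check on load avoids rehydrating an incompatible shape. A sketch (the storage key and version number are illustrative):

```javascript
// persistedState.js - versioned localStorage persistence sketch
const STORAGE_KEY = 'appState';
const STATE_VERSION = 3; // bump whenever the store shape changes

export function loadState() {
  try {
    const raw = localStorage.getItem(STORAGE_KEY);
    if (!raw) return undefined;
    const { version, state } = JSON.parse(raw);
    // Discard (or migrate) state written by an older release
    return version === STATE_VERSION ? state : undefined;
  } catch (e) {
    return undefined; // corrupt JSON, fall back to defaults
  }
}

export function saveState(state) {
  localStorage.setItem(STORAGE_KEY, JSON.stringify({ version: STATE_VERSION, state }));
}
```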
Regarding caching, I recommend not deleting any of the old files for up to a year; that way your active users can still download chunks while they're active on your website.
During deployment I upload all the new files and clear the cache on all *.html files to get the latest references to the JS and CSS files.
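That invalidation step can be scripted; a rough Node sketch using the AWS SDK for JavaScript (v2), where the distribution ID and paths are placeholders:

```javascript
// invalidate-html.js - clear the CloudFront cache for the HTML entry point only (sketch)
const AWS = require('aws-sdk');

const cloudfront = new AWS.CloudFront();

cloudfront.createInvalidation({
  DistributionId: 'EXXXXXXXXXXXXX', // placeholder distribution ID
  InvalidationBatch: {
    CallerReference: `deploy-${Date.now()}`, // must be unique per invalidation
    Paths: {
      Quantity: 1,
      Items: ['/index.html'], // hashed JS/CSS chunks are left cached
    },
  },
}).promise()
  .then((res) => console.log('Invalidation created:', res.Invalidation.Id))
  .catch((err) => { console.error(err); process.exit(1); });
```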

How to use Node.js to combat social sharing and search engine issues when using single-page frameworks like AngularJS

I read an article about social sharing issues in AngularJS and how to combat them by using Apache as a proxy.
The solution is usable for small websites, but if a web app has 20+ different pages, I would have to URL-rewrite and create static files for all of them. Moreover, using PHP and Apache adds a different stack to the app.
Can we use Node.js as the proxy and rewrite the URL, and what's the approach?
Is there a way to minimize static file creation?
Is there a way to remove the proxy, URL rewriting, and static files altogether? For example, inside our Node.js app we could check the user agent; if it is the Facebook bot, Twitter, or the like, we use the request module to download our page and return the raw HTML to them. Is that a plausible solution?
Normally when someone shares a URL on a social network, that social network requests the page in order to generate a preview/thumbnail (aka "scrape" it).
Most likely those scrapers won't run JavaScript, so they need a static HTML version of that page.
The same applies to search engines (even though Google and others are starting to support JavaScript sites).
Here's a good approach for an SPA to still support scrapers:
Use history.pushState in Angular to get virtual URLs when navigating through your app (i.e. URLs without a #).
Server-side (Node.js or anything else), detect whether a request comes from a user or a bot (e.g. check the User-Agent using this lib: https://www.npmjs.com/package/is-bot).
If the request URL has a file extension, it's probably a static resource request (images, .css, .js); proxy it to get the static file.
If the request URL is a page and comes from a real user (i.e. it is not a static resource), always serve your index.html, which loads your Angular app (pro tip: keep this file cached in memory).
If the request URL is a page and comes from a bot, serve a pre-rendered version of the requested URL (bots won't run JavaScript). This is the hard part (side note: ReactJS makes this problem much simpler). You can use a service like https://prerender.io/ - they take care of loading your Angular app and saving each page as HTML (if you're curious, they use a headless/virtual browser in memory called PhantomJS to do that, simulating what a real user would do clicking "Save As..."). You can then request and proxy those prerendered pages to bot requests (like social network scrapers). If you want, it's also possible to run a prerender instance on your own servers.
All of this server-side process I described is implemented in this Express.js middleware by Prerender:
https://github.com/prerender/prerender-node/blob/master/index.js
(Even if you don't want to use Prerender, you can use that code as an implementation guide.)
Alternatively, here's an implementation example using only nginx:
https://gist.github.com/thoop/8165802
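Putting those steps together, here is a rough Express sketch of the user/bot split (the User-Agent regex and the prerender service URL are illustrative; in practice you would use the prerender-node middleware linked above):

```javascript
// server.js - serve index.html to users, prerendered HTML to scrapers (sketch)
const express = require('express');
const fs = require('fs');
const path = require('path');
const https = require('https');

const app = express();
// Pro tip from above: keep index.html cached in memory
const index = fs.readFileSync(path.join(__dirname, 'dist/index.html'), 'utf8');

const BOT_UA = /facebookexternalhit|twitterbot|googlebot|linkedinbot|slackbot/i;

// Requests with a file extension are static resources, served from disk
app.use(express.static(path.join(__dirname, 'dist')));

app.get('*', (req, res) => {
  if (BOT_UA.test(req.headers['user-agent'] || '')) {
    // Bots get a prerendered page, proxied from a prerender service (URL is illustrative)
    https.get(`https://service.prerender.io/https://example.com${req.originalUrl}`, (proxied) => {
      res.status(proxied.statusCode || 200);
      proxied.pipe(res);
    }).on('error', () => res.send(index));
  } else {
    // Real users always get the SPA shell
    res.send(index);
  }
});

app.listen(3000);
```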

How can I leverage CDN with my AngularJS app on ASP.NET MVC?

I have an MVC project which really just serves an Angular application. This will be a public application and I expect it to get a lot of traffic, so I'm trying to use a CDN to keep requests to my servers light.
I've found many articles on how to get Angular itself from a CDN, but what I'm looking for is for MY files (CSS, HTML, JS, media) to be served from a CDN. So for example, if I have a directive, then its template would need to be served from a CDN, but I can't hard-code that because the template may not have been uploaded to the test or prod environments yet.
Update: The MVC application only serves the initial layout and home pages. Once my Angular script loads, MVC isn't really used anymore. The back-end is a separate Web API application.
You might be asking the wrong question, but you're on the right track for how to optimize for high traffic.
First, learn how to serve the Angular application as static files. On a past project I worked on, I had two projects in my Visual Studio solution: one was a Web API back-end, and the other was a web project that just served static content. You can configure Visual Studio to run both projects at the same time on different ports of localhost. This doesn't answer your question, but setting up dev like this will get you a step closer.
Next, once you know your project's front-end and back-end are decoupled at the server level, make sure you're not going to get CORS issues from cross-domain requests. If you're going to be serving your back-end from a subdomain, CORS may be an issue. Look into that; you'll probably need to add some code to your MVC app to solve it.
Now that you've got all that figured out, find out how to use a CDN service. There are many offerings, with the specifics of how to deploy content and SSL certificates varying for each. Amazon Web Services, Microsoft Azure, and many others offer CDN services, but before you even look at that stuff, make sure you can actually serve your back-end from a different port on your dev machine and it still works.
I was trying to think of a good solution to this recently...
In local testing (grunt serve), I use the current non-minified .js/.css files, but when creating a build (grunt build), I'd want to point those files at the CDN in my Angular project.
I'm thinking that in your grunt (gulp/webpack/etc.) build (which creates the minified local files for the build), you could add a task to prepend your CDN to certain URLs/files via
https://github.com/callumlocke/grunt-cdnify or https://github.com/tactivos/grunt-cdn (while still using your non-minified local files for grunt serve).
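A rough Gruntfile sketch of that idea; the base option and file globs here are assumptions based on grunt-cdnify's README, so verify them against the plugin you pick:

```javascript
// Gruntfile.js (excerpt) - rewrite asset URLs to the CDN only for production builds (sketch)
module.exports = function (grunt) {
  grunt.initConfig({
    cdnify: {
      build: {
        options: {
          base: 'https://dxxxxxxxx.cloudfront.net/', // placeholder CDN host
        },
        files: [{
          expand: true,
          cwd: 'dist/',
          src: ['**/*.{html,css}'],
          dest: 'dist/',
        }],
      },
    },
  });

  grunt.loadNpmTasks('grunt-cdnify');

  // 'grunt serve' keeps local non-minified files; only 'grunt build' rewrites URLs to the CDN
  grunt.registerTask('build', ['cdnify']);
};
```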
It would be cool if uploading to CDN and invalidating the files was a grunt/gulp/webpack plugin.
Then you could automate it as part of your prod builds.
I'm guessing your build system (Jenkins, etc.) would upload your app to wherever it's hosted,
and then a grunt/gulp plugin like this one could invalidate your files:
https://github.com/keycdn/grunt-keycdn if you use that CDN service.
I am guessing your build system like Jenkins could also do this work for you via its own CDN upload/invalidation plugins.
I know this answer is mentioning grunt specifically, but there should be similar plugins for gulp/webpack/etc as well.

Structuring a Rails and Angular app

I'd like to build a new single-page app using Rails 4 and Angular, with Capistrano for the deployment process.
I want the whole front end to be a static app on Amazon S3, but I'm open-minded about other suggestions.
What's important to me is a fast development process with the ability to scale up easily.
I was wondering which structure I should use:
Keep all assets in app/assets and set the Bower path to the vendor directory.
That way I can use Rails precompile methods and enjoy Rails HTML tags for index.html, but I'm not sure it will be easy to upload it to S3 and keep it separate.
Keep all assets, including Bower components, in a public/app directory, which keeps it as a completely separate application, but then I need to use Grunt or some other tool for precompiling assets.
Any other ideas?
From my experience, I found this approach to work really well:
API app (Rails/Sinatra/Grape/Node/whatever) serves only JSON APIs. Deploys to a server, say api.yourapp.com. Serves Access-Control headers.
Static web app: started by generating an AngularJS, Gulp, Bower app with Yeoman. Deployed to S3 using a gulp AWS deploy module.
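The Access-Control headers mentioned above amount to a few response headers; a minimal Express-style sketch (the allowed origin is a placeholder) assuming a Node API - the same idea applies to a Rack or Rails middleware:

```javascript
// cors.js - minimal CORS middleware sketch for an API served from api.yourapp.com
module.exports = function cors(req, res, next) {
  res.setHeader('Access-Control-Allow-Origin', 'https://www.yourapp.com'); // the static app's origin
  res.setHeader('Access-Control-Allow-Methods', 'GET,POST,PUT,DELETE,OPTIONS');
  res.setHeader('Access-Control-Allow-Headers', 'Content-Type, Authorization');
  if (req.method === 'OPTIONS') return res.sendStatus(204); // answer preflight requests
  next();
};
```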
There's no real reason to have both views and APIs in the same app or built with the same technology (as in Rails).
Now there are issues:
S3 doesn't handle Angular's HTML5-mode URLs nicely, so a pure S3 website isn't an option.
Facebook doesn't read OpenGraph tags that are not in the source of the page.
Couldn't figure out the state of Google/SEO and Angular apps. I didn't see the content in the search results.
So as a solution I introduced another web server app. It can be based on anything - pure Rack, Node, etc. I chose Rack.
Solutions to the problems:
The web server app was hosted on www.yourapp.com and proxied (and cached) requests to S3. It supported all URLs (html5Mode) by simply proxying them to index.html.
OpenGraph meta tags - the API had an endpoint that takes a URL or object ID and returns the meta tag information. The web server issues a request to the API once per URL (and caches the response), then injects the tags into the served index.html.
SEO - as middleware, I used prerender for Rack, which rendered pages on the server.
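A rough Node/Express translation of that web-server layer (the original was Rack; the /meta endpoint and tag names are illustrative, and global fetch assumes Node 18+):

```javascript
// webserver.js - catch-all SPA shell with OpenGraph tag injection (sketch)
const express = require('express');
const fs = require('fs');

const app = express();
const shell = fs.readFileSync('index.html', 'utf8'); // the SPA entry point, kept in memory

app.get('*', async (req, res) => {
  // Ask the API for the meta tags belonging to this URL (hypothetical endpoint)
  let meta = '';
  try {
    const r = await fetch(`https://api.yourapp.com/meta?url=${encodeURIComponent(req.originalUrl)}`);
    const data = await r.json();
    meta = `<meta property="og:title" content="${data.title}">` +
           `<meta property="og:description" content="${data.description}">`;
  } catch (e) {
    // Fall back to the plain shell if the API is unreachable
  }
  res.send(shell.replace('</head>', `${meta}</head>`));
});

app.listen(3000);
```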
As a bonus -
Most apps today have a landing page/marketing site and the actual app. Sometimes it's better to maintain these separately. The web server knows, according to a cookie, which app to present on www.yourapp.com - the actual app or the marketing site. On sign-in, set a cookie on the client side and voilà.
First, I think there's a bit of confusion here; let me try to clear it up.
There are a couple of ways of achieving this:
Pure client -> API
When you have a static application, there's no need to go through the Rails asset pipeline, there are far better ways to manage assets when you are using the tooling for client side applications.
For example, your client application will be an Angular application and you will manage assets with a combination of bower (dependencies) and grunt (build and distribution).
There's no point in deploying to S3 with Capistrano; if it's a pure static application, you can use the AWS CLI to just upload your content.
I'd go through a CDN as well. Something like Fastly works really well over Amazon S3.
I have a Rake task that uploads to S3 and then clears the cache on Fastly (if I need to).
As for your Rails application, it would act as an API and should not have any assets.
Combined
If you have a combined application, some of the actions are served by the server (Rails) and just invoke some client-side code (Angular).
If that is the case, I would go through the Rails asset pipeline and just keep everything as Rails best practice, with precompilation before deploy, etc.
It's one of those questions where "it depends" really is the answer; it all depends on what you want to achieve.
When I have a client application, I try to have a pure client and have the server act only as an API, with no assets at all; this way, I separate the concerns.
EDIT 9/9/15
I'd have to say that as long as you can, I'd keep the apps separate.
It's not always possible, especially with more complex apps.
Most apps I have seen in recent months have kept the client-side and server-side code separate; I have seen less use of Rails and more use of rails-api because of that (some have even ditched Rails completely for thinner solutions).
