Does changing CloudFront Download Distribution Origin Path result in a cache invalidation?

I am working on a solution to keep S3 and CloudFront in sync when I upload a new version of an Angular app.
My approach is to upload the new version to a new folder with an increasing version number (http://awsbucket/v1 ... /v2) and then update the Download Distribution Origin Path to that new folder.
http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/distribution-web-values-specify.html#DownloadDistValuesOriginPath
I am wondering whether this change of the Origin Path automatically results in a complete cache invalidation, or whether I have to send invalidation requests nevertheless.

So if you keep moving your web resources (images, scripts, anything served over HTTP) into versioned folders, and make the corresponding changes in your app so that it intentionally starts using the newer versions, the older versions' cached objects go colder and colder and are eventually evicted from the cache.
Invalidation requests are costly and time-consuming, while versioning is easy and natural. The best use cases are newer CSS stylesheets and versioned updates to JS scripts; the same can be extrapolated to your use case.
Also, you don't need to change the origin; keep adding the new files to S3 and make sure the app references them - that will do.
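The versioned-folder idea can be sketched as a tiny helper (the function name and base URL here are illustrative, not from the question's app):

```javascript
// Build a versioned asset URL: every release lives under a new /vN/
// prefix, so CloudFront sees a brand-new cache key and no invalidation
// is needed.
function versionedUrl(base, version, path) {
  return `${base}/v${version}/${path}`;
}

console.log(versionedUrl('http://awsbucket', 2, 'index.html'));
// → http://awsbucket/v2/index.html
```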

To answer your question, NO - changing the Origin, including just the path, does not result in cache invalidation.
Information can be found here
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/distribution-web-values-specify.html#DownloadDistValuesDomainName
Quoting the specific part:
Changing the origin does not require CloudFront to repopulate edge caches with objects from the new origin. As long as the viewer requests in your application have not changed, CloudFront will continue to serve objects that are already in an edge cache until the TTL on each object expires or until seldom-requested objects are evicted.
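If you do need to force the edge caches instead, an invalidation can be issued from the CLI (the distribution id below is a placeholder):

```shell
# Invalidate every object in the distribution (placeholder id)
aws cloudfront create-invalidation \
  --distribution-id <DISTRIBUTION_ID> \
  --paths "/*"
```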


Google App Engine - how to disable the cache

So some context:
I have a Node.js API running on Google App Engine. All my GET requests are cached by App Engine by default for 10 minutes.
I am using Cloudflare for my API, as this allows me to remove specific items from the cache when needed.
You can imagine this has caused a bit of an issue, because my Cloudflare cache was correctly cleared but App Engine kept returning old data.
According to the docs, you can set a default_expiration in the app.yaml file, but setting this to 0 or 0s has made no difference and Google keeps caching my responses.
Seemingly, there is also no way to get something uncached from Google.
Now my obvious question: is there some way I can completely ignore this cache? Preferably without having to set my entire API's responses to private, 0s cache.
It quite irks me that Google is forcing this cache on me and provides very vague documentation on the whole matter.
You can configure your app.yaml to define a cache period.
If you use default_expiration this will set a global default cache period for all static file handlers for an application. If omitted, the production server sets the expiration to 10 minutes by default.
To set specific expiration times for individual handlers, specify the expiration element within the handler element in your app.yaml file. You can change the duration of time web proxies and browsers should cache a static file served by this handler.
default_expiration: "4d 5h"

handlers:
- url: /stylesheets
  static_dir: stylesheets
  expiration: "0d 0h"
Seems like you are referring to the static cache (per your link). Try cache-busting techniques such as adding a query parameter to the URL, e.g.
https://<url>/?{{APP_VERSION_ID}}
where APP_VERSION_ID is the latest version of your deployed app. This way, each time you redeploy your app, the APP_VERSION_ID changes and the latest version of your static files will always be loaded.
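A sketch of the same idea in code, assuming the Node runtime, where App Engine standard exposes the current version id in the GAE_VERSION environment variable (the bust() helper name is made up):

```javascript
// Append the deployed version id to an asset URL so each redeploy
// produces a new cache key.
function bust(url, version) {
  return `${url}?v=${encodeURIComponent(version)}`;
}

// On App Engine standard the version id is in GAE_VERSION.
const version = process.env.GAE_VERSION || 'dev';
console.log(bust('/stylesheets/main.css', version));
```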

Cache busting with versioning does not seem to work

I am currently using versioning to bust the cache. I used to generate a different file name with a date or version; however, that breaks Google's cached pages, because Google looks for the old file name.
I have a webpack setup for the chunking.
output.filename = '[name].js?v=' + hash
output.chunkFilename = '[name].js?v=' + hash
And I can see that the browser is requesting files with v=xxx correctly.
However, sometimes I need to ask my customers to open the dev tools and click "clear cache and hard refresh", because a normal refresh somehow does not work.
I also use the Cloudflare CDN, and it has its own cache policy.
Cloudflare response headers:
cache-control: max-age=31536000
cf-bgj: minify
cf-cache-status: HIT
cf-polished: origSize=9873
How do I make sure the browser and Cloudflare purge all the JS and CSS files when new code is pushed?
I do not know what to do when a normal refresh does not work.
On Cloudflare there are several ways to control the behavior of the cache:
Understanding the Cloudflare CDN (general rules)
Cache level (can be configured to consider or ignore the querystring)
Page Rules (useful to fine tune caching behavior based on URL patterns)
Origin Cache Control (to control the behavior based on the cache headers returned by your origin server)
You also have various options (depending on the plan) for proactively purging certain resources with Cache Purge (both from the dashboard or via APIs).
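The purge can also be scripted against Cloudflare's v4 API (zone id and API token below are placeholders):

```shell
# Purge the whole zone cache (or pass {"files": [...]} to purge single URLs)
curl -X POST "https://api.cloudflare.com/client/v4/zones/<ZONE_ID>/purge_cache" \
  -H "Authorization: Bearer <API_TOKEN>" \
  -H "Content-Type: application/json" \
  --data '{"purge_everything":true}'
```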
It is worth reviewing the above settings (in particular cache levels and page rules) to verify that the querystring is being considered part of the cache key used to retrieve the data. In particular, the header cf-cache-status: HIT indicates that the requested resource was fetched from the CDN cached copy.
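One alternative to the querystring approach from the question: put the hash in the filename itself, so even cache layers that ignore querystrings see a brand-new URL on every build. A minimal webpack sketch (assuming webpack's [contenthash] substitution):

```javascript
// webpack.config.js (sketch): hash in the filename, not the querystring.
// A build with changed content emits e.g. main.3b2a1c9d.js, which no
// cache layer can confuse with the previous build's file.
const config = {
  output: {
    filename: '[name].[contenthash].js',
    chunkFilename: '[name].[contenthash].js',
  },
};

module.exports = config;
```

index.html (or the HTML plugin) then references the freshly hashed names, and only index.html itself needs a short cache lifetime.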

Delete old react build files in S3

I am deploying my React app to AWS S3 using AWS CodeBuild, with caching through AWS CloudFront, but the bucket size has grown to more than 10 GB within a month due to frequent deployments.
I tried deleting old build files while deploying, but it causes issues for users who have the old code cached in their browser: the old pages try to fetch the previous version's build files, but those are deleted, so they get 404s.
I tried setting no-cache for the index.html file, but that does not resolve the issue.
Has anyone faced this issue?
@Nilanth, here is what I do for a similar case:
My stack is also a React app (not so business critical; it is used to offer an article selection facility for the main content management flow). The app is built from CodeCommit via CodeBuild to an S3 bucket using CodePipeline and a buildspec.yml file; the build is triggered by a commit to the repository. I faced a similar problem: CloudFront didn't serve the newest JS files to the browser (HTML), so it started to look like a cache issue.
I found a pretty good solution:
Update the CloudFront cache settings (edit the behaviour and set it to "Use legacy cache settings") and set the min/max TTLs to 0. (This helps with the cache, so users should get the newest versions immediately.)
For the JS/CSS file issue, I added "aws s3 rm" command lines to the buildspec.yml file, like:
aws s3 rm s3://<s3_bucket>/static/js/ --recursive
aws s3 rm s3://<s3_bucket>/static/css/ --recursive
Set those as pre_build commands
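Put together, the buildspec.yml might look like this (the bucket name and the build commands themselves are placeholders):

```yaml
version: 0.2
phases:
  pre_build:
    commands:
      # remove the previous build's bundles before building the new one
      - aws s3 rm s3://<s3_bucket>/static/js/ --recursive
      - aws s3 rm s3://<s3_bucket>/static/css/ --recursive
  build:
    commands:
      - npm ci
      - npm run build
  post_build:
    commands:
      - aws s3 sync build/ s3://<s3_bucket>/
```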
Note: once the JS files are removed, your application cannot be used until the new ones are available again under /js and /css. If your application is business critical, you should think beyond this, since there will be a 30-60 second window during which the app cannot be used. And what if the build fails? Then there are no JS/CSS assets at all - in that case you can re-trigger an old build from CodeBuild. A business-critical app will require some more DevOps work here.
To allow the "remove" executions on the S3 bucket, you need to give CodeBuild additional permissions. Go to the build project and check the environment's service role, then go to IAM / Roles, pick the correct role name, and grant it more S3 permissions, e.g. AmazonS3FullAccess, which is more than enough.
I am not sure this is the 99% correct solution from the CloudFront side, but it seems to avoid the caching problem, and the bucket size also stays small.
-MM
There are many elements there that could throw a 404; you'll need to check one by one whether they are working to find the root cause(s).
First, I'd try the bucket itself: use <s3-bucket-url>/index.html and see if the file (in this case index.html) exists.
Second, CloudFront. I'll assume the distribution is configured correctly (i.e. the / path redirects to /index.html). Also, every time you edit the bucket files, create an invalidation to speed up propagation.
Third, you'll need to tell your users to hard-reload the page, or use incognito, especially if your site is under constant development.

Force updates on installed PWA when changing index.html (prevent caching)

I am building a react app, which consists in a Single Page Application, hosted on Amazon S3.
Sometimes, I deploy a change to the back-end and to the front-end at the same time, and I need all the browser sessions to start running the new version, or at least those whose sessions start after the last front-end deploy.
What happens is that many of my users still running the old front-end version on their phones for weeks, which is not compatible with the new version of the back-end anymore, but some of them get the updates by the time they start the next session.
As I use Webpack to build the app, it generates bundles with hashes in their names, while the index.html file, which defines the bundles that should be used, is uploaded with the following cache-control property: "no-cache, no-store, must-revalidate". The service worker file has the same cache policy.
The idea is that the user's browser can cache everything except for the first files it needs. The plan was good, but I'm replacing the index.html file with a newer version and my users are not refetching this file when they restart the app.
Is there a definitive guide or a way to workaround that problem?
I also know that a PWA should work offline, so it has to be able to cache things for reuse, but that doesn't help me perform a massive and instantaneous update either, right?
What are the best options I have to do it?
You've got the basic idea correct. Why your index.html is not updated is a tough question to answer, since you're not providing any code - please include your service worker code. Keep in mind that, depending on the logic implemented in the service worker, it doesn't necessarily honor the HTTP caching headers and may cache everything, including the index.html file, as seems to be happening now.
In order to have the app work also in offline mode, you would probably want to use a network-first SW strategy. Using network-first the browser tries to load files from the web but if it doesn't succeed it falls back to the latest cached version of the particular file it tried to get. Another option would be to choose what is called a stale-while-revalidate strategy. That first gives the user the old file (which is super fast) and then updates the file in the background. There are other strategies as well, I suggest you read through the documentation of the most widely used SW library Workbox (https://developers.google.com/web/tools/workbox/modules/workbox-strategies).
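As an illustration, a network-first route for navigations with Workbox might look like this (module names per Workbox v6; this is a sketch, not the asker's actual service worker):

```javascript
import { registerRoute } from 'workbox-routing';
import { NetworkFirst, StaleWhileRevalidate } from 'workbox-strategies';

// Navigations (index.html): try the network first so new deploys are
// picked up immediately; fall back to the cached copy when offline.
registerRoute(
  ({ request }) => request.mode === 'navigate',
  new NetworkFirst()
);

// Hashed bundles: serve fast from the cache, refresh in the background.
registerRoute(
  ({ request }) => request.destination === 'script',
  new StaleWhileRevalidate()
);
```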
One thing to keep in mind:
In all strategies other than "skip the SW and go to the network", you cannot really ensure the user gets the latest version of index.html. It is not possible. If the SW gives something back from the cache, it could be an old version, and that's that. In these situations what is usually done is a notification to the user that a new version of the app has been downloaded in the background. Basically, the user would load the app, see the version that was available in the cache, and the SW would then check for updates. If an update was found (there was a new index.html and, because of that, a new service-worker.js), the user would see a notification telling them that the page should be refreshed. You can also trigger the SW to check the server for an update manually from your own JS code if you want. In that situation, too, you would show a notification to the user.
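The update notification flow described above is usually wired up on the page roughly like this (browser-only APIs; notifyUserToRefresh is a placeholder for your own UI code):

```javascript
navigator.serviceWorker.register('/service-worker.js').then((reg) => {
  reg.addEventListener('updatefound', () => {
    const sw = reg.installing;
    sw.addEventListener('statechange', () => {
      // "installed" while a controller exists means: new version ready,
      // old version still serving - time to ask the user to refresh.
      if (sw.state === 'installed' && navigator.serviceWorker.controller) {
        notifyUserToRefresh();
      }
    });
  });
});
```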
Does this help you?

Change to static file doesn't happen immediately after deploy

When I change a static file (here page.html) and then run appcfg.py update, even after the deployment succeeds and it says the new files are being served, if I curl for the file the change has not actually taken place.
Relevant excerpt from my app.yaml:
default_expiration: "10d"

- url: /
  static_files: static/page.html
  upload: static/page.html
  secure: always
Google's docs say "Static cache expiration - Unless told otherwise, web proxies and browsers retain files they load from a website for a limited period of time." There shouldn't be any browser cache as I am using curl to get the file, and I don't have a proxy set up at home at least.
Possible hints at the answer
Interestingly, if I curl for /static/page.html directly, it has updated, but if I curl for / which should point to the same file, it has not.
Also if I add some dummy GET arg, such as /?foo, then I can also see the updated version. I also tried adding the -H "Cache-Control: no-cache" option to my curl command, but I still got the stale version.
How do I see updates to / immediately after deploy?
As pointed out by Omair, the docs for the standard environment for Python state that "files are likely to be cached by the user's browser, as well as by intermediate caching proxy servers such as Internet Service Providers". But I've found a way to flush static files cached by your app on Google Cloud.
Head to your Google Cloud Console and open your project. Under the left hamburger menu, head to Storage -> Browser. There you should find at least one Bucket: your-project-name.appspot.com. Under the Lifecycle column, click on the link with respect to your-project-name.appspot.com. Delete any existing rules, since they may conflict with the one you will create now.
Create a new rule by clicking on the 'Add rule' button. For the object conditions, choose only the 'Newer version' option and set it to 1. Don't forget to click on the 'Continue' button. For the action, select 'Delete' and click on the 'Continue' button. Save your new rule.
This new rule will take up to 24 hours to take effect, but at least for my project it took only a few minutes. Once it is up and running, the version of the files being served by your app under your-project-name.appspot.com will always be the latest deployed, solving the problem. Also, if you are routinely editing your static files, you should remove any expiration element from handlers related to those static files and the default_expiration element from the app.yaml file, which will help avoid unintended caching by other servers.
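The same rule can be set from the command line instead of the console (the bucket name is a placeholder; numNewerVersions is the API name of the "Newer versions" condition):

```shell
cat > lifecycle.json <<'EOF'
{
  "rule": [
    {
      "action": {"type": "Delete"},
      "condition": {"numNewerVersions": 1}
    }
  ]
}
EOF
gsutil lifecycle set lifecycle.json gs://your-project-name.appspot.com
```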
According to App Engine's documentation on static cache expiration, this could be due to caching servers between you and your application respecting the caching headers on the responses:
The expiration time will be sent in the Cache-Control and Expires HTTP response headers, and therefore, the files are likely to be cached by the user's browser, as well as by intermediate caching proxy servers such as Internet Service Providers.
Once a file is transmitted with a given cache expiration time, there is generally no way to clear it out of intermediate caches, even if you clear the browser cache or use curl with the no-cache option. Re-deploying a new version of the app will not reset the caches either.
For files that need to be modified often, shorter expiration times are recommended.
