CSS File Not Updating on Deploy (Google App Engine)
I pushed a new version of my website, but now the CSS and static images are not deploying properly.
Here is the messed up page: http://www.gaiagps.com
Appengine shows the latest version as being correct though: http://1.latest.gaiagps.appspot.com/
Any help?
I've seen this before on App Engine, even when using cache-busting query parameters like /stylesheets/default.css?{{ App.Version }}.
Here's my (unconfirmed) theory:
1. You push a new version, either by deploying it or by switching a new version to default.
2. While this update is being propagated to all GAE instances running your app...
3. ...someone hits your site.
4. The request for the static resource default.css?{{ App.Version }} is sent to Google's CDN, which doesn't yet have it.
5. Google's CDN asks GAE for the resource before the propagation from step 2 has finished on all instances.
6. If you're unlucky, GAE serves up the resource from an instance still running the old version...
7. ...which now gets cached in Google's CDN as the authoritative "new" version.
When this happens (if this is indeed what happens), I can confirm that no amount of browser-side cache busting will help. The Google CDN servers are holding the wrong version.
The fix: the only way I've found is to deploy yet another version. You don't run the risk of this happening again (provided you haven't made any CSS changes since the race condition), because even if the race condition occurs again, your first update is presumably done by the time you deploy the second one, so all instances will be serving the correct version no matter which one answers.
The following has worked for me:
1. Serve your CSS file from the static domain. This is automatically created by GAE:
//static.{your-app-id}.appspot.com/{css-file-path}
2. Deploy your application. At this point your app will be broken.
3. Change the version of the CSS file:
//static.{your-app-id}.appspot.com/{css-file-path}?v={version-Name}
4. Deploy again.
Every time you change the CSS file, you will have to repeat steps 2, 3 and 4.
Your link looks fine to me, unless I'm missing something.
Your browser may have cached the old CSS, so you're not getting the new CSS after updating it. Try clearing your browser cache and see if that works.
Going to 1.latest downloads the new CSS since it's not in your cache, so it appears correctly to you.
I had this problem as well. I was using Flask with GAE, so I didn't have a static handler in my app.yaml. When I added one, the deploy worked. Try adding something like this
handlers:
- url: /static
  static_dir: static
to your app.yaml and deploy again. It worked for me. Apparently Google is trying to optimize by not updating files that it thinks users can't see.
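For reference, here is a minimal app.yaml sketch for a Flask app on the Python 3 standard environment (the runtime name and the assumption that the Flask object lives in main.py are mine, not the answer's). Handlers are matched top to bottom, so the static handler has to come before the catch-all route that hands everything else to Flask:
runtime: python39

handlers:
# Static files are matched first; handlers are evaluated in order.
- url: /static
  static_dir: static

# Everything else is routed to the Flask app (main.py must expose `app`).
- url: /.*
  script: auto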
As found by shoresh, the docs for the standard environment for Python state that both settings for static cache expiration, the per-handler element expiration and the top-level element default_expiration, are responsible for defining "the expiration time [that] will be sent in the Cache-Control and Expires HTTP response headers". This means that "files are likely to be cached by the user's browser, as well as by intermediate caching proxy servers such as Internet Service Providers".
The problem here is that "re-deploying a new version of the app will not reset any caches". So if one has set default_expiration to, e.g., 15 days, but then changes a CSS or JS file and re-deploys the app, there is no guarantee that the updated files will be served, due to active caches, particularly intermediate caching proxy servers, which may include Google Cloud servers - which seems to be the case here, since accessing your-project-name.appspot.com also serves outdated files.
The same documentation linked above states that "if you ever plan to modify a static file, it should have a short (less than one hour) expiration time. In most cases, the default 10-minute expiration time is appropriate". That is something one should think about before setting any static cache expiration. But for those who, like myself, didn't know all of this beforehand and have already been caught by this problem, I've found a solution.
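For reference, the two settings mentioned above look roughly like this in app.yaml (a sketch; the handler path and the values are placeholders):
default_expiration: "10m"  # top-level default applied to every static handler

handlers:
- url: /stylesheets
  static_dir: stylesheets
  expiration: "1m"  # per-handler override for files you expect to edit often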
Even though the documentation states that it's not possible to clear those intermediate caching proxies, one can delete at least the Google Cloud cache.
In order to do so, head to your Google Cloud Console and open your project. Under the left hamburger menu, head to Storage -> Browser. There you should find at least one Bucket: your-project-name.appspot.com. Under the Lifecycle column, click on the link with respect to your-project-name.appspot.com. Delete any existing rules, since they may conflict with the one you will create now.
Create a new rule by clicking on the 'Add rule' button. For the object conditions, choose only the 'Newer version' option and set it to 1. Don't forget to click on the 'Continue' button. For the action, select 'Delete' and click on the 'Continue' button. Save your new rule.
This newly created rule will take up to 24 hours to take effect, but at least for my project it took only a few minutes. Once it is up and running, the version of the files being served by your app under your-project-name.appspot.com will always be the latest deployed, solving the problem. Also, if you are routinely editing your static files, you should remove the default_expiration element from the app.yaml file, which will help avoid unintended caching by other servers.
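If you prefer the command line to the console, the same rule can also be expressed as a lifecycle configuration file and applied with gsutil (a sketch; gsutil expects this file as JSON, and the bucket name is a placeholder):
{
  "rule": [
    {
      "action": {"type": "Delete"},
      "condition": {"numNewerVersions": 1}
    }
  ]
}
Apply it with: gsutil lifecycle set lifecycle.json gs://your-project-name.appspot.com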
OK, for newer people seeing this problem: I tried the cache-busting approach and it seems to have fixed it. Here is an example of what I did for the CSS import. In the app config file (app.cfg), create a variable to hold your app id as set in the app.yaml file, and use it as shown below:
<link href="{{ url_for('static', filename='file.css') }}?{{config.APP_ID}}" rel="stylesheet">
Also, for the app.yaml file, add this config to be on the safe side:
handlers:
- url: /static
  static_dir: static
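If you need a value to append as the cache-busting query string, one option (an assumption on my part, not something this answer spells out) is to define it as an environment variable in app.yaml and load it into config.APP_ID when the Flask app starts; the runtime also exposes variables such as GAE_VERSION that can serve the same purpose:
env_variables:
  # Hypothetical deploy marker: bump this value on each deploy and read it
  # into config.APP_ID (or a similar key) at application startup.
  APP_ID: "release-2019-06-01"
Any string that changes between deploys works; its only job is to make the stylesheet URL different so caches treat it as a new resource.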
Here's what worked for me:
First, I changed the version in app.yaml.
Then follow the steps below.
Go to your console -> Click on your project.
On the side menu, click on Computation -> Versions.
There you will see all versions and which one is the default. Mine was set to an older version.
Mark the new version as the default.
It worked for me.
From the docs for the standard environment for Python: static_cache_expiration.
After a file is transmitted with a given expiration time, there is
generally no way to clear it out of intermediate caches, even if the
user clears their own browser cache. Re-deploying a new version of the
app will not reset any caches. Therefore, if you ever plan to modify a
static file, it should have a short (less than one hour) expiration
time. In most cases, the default 10-minute expiration time is
appropriate.
Make sure to add a wildcard at the end of each url in the service setup in your dispatch.yaml file.
Example:
dispatch:
- url: "example.com/*"
  service: default
- url: "sub.example.com/*"
  service: subexample
If you are using GCP App Engine (as of 2020), just add default_expiration to your app.yaml file and set it to 1m (one minute):
default_expiration: "1m"
More info: https://cloud.google.com/appengine/docs/standard/python3/config/appref/#runtime_and_app_elements
For new people coming to this old question and its set of answers, I wanted to give an updated answer. I think in 2018-19 the following will probably fix most of the CSS update issues people are having:
Make sure your app.yaml has the following:
handlers:
- url: /static
  static_dir: static
Run gcloud app deploy
Chill for 10 minutes, then shift-reload your website.
This approach is suggested by google as well (https://cloud.google.com/appengine/docs/standard/python/getting-started/serving-static-files).
Try clearing the cache in your browser. I had the exact same issue and got it fixed by simply clearing the cache.
Related
React app hosted by Netlify doesn't update unless F5 or reload
I'm a little surprised there is nothing out there about this that I have found. But just like the title says, I have a React SPA deployed to Netlify. It goes live without error. The only issue is, if the end user has been to the site before, they have to refresh the page to see any changes I have pushed out. Is there something I need to add to the index file perhaps?
The browser caches the compiled JS bundle. You can read more here: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Cache-Control One of your options would be to disable caching, or set the cache expiration to a lower value during intense development and increase it if/when you deploy less often. Another option could be to implement some kind of API method that checks whether a newer version has been deployed and triggers a page refresh. (Please be careful not to discard the user's work, like data filled in forms, during a refresh.) Each has pros and cons.
Moved off of React SPA on S3/Cloudfront, but users' browsers are still showing old, cached SPA
DNS is being handled via Route53. Previously, I had a React SPA deployed on AWS S3 (with a CloudFront dist + SSL). I moved off of that to an SSR NextJS app on ElasticBeanstalk, but even after changing the A records and invalidating CloudFront, some users are still reporting that their browser is using the old S3 SPA. The only fix is for them to manually clear their cache for the site. Asking each random user that's cached my site to manually refresh their cache for the page doesn't seem like a good solution ;)
Here's what I've done thus far:
Updated Route53 (A record) to point to the EB server (this is working as intended).
Tried disabling the associated CF distribution.
When that didn't work, I removed the files in S3 and invalidated the CF dist.
EDIT: After deleting the CF dist completely, users are able to go to the new site - but only after hard-refreshing a half-dozen times. I still feel like there should be a more elegant solution to this requiring little/no user know-how.
All of my CSS & JS files had version numbers appended to help with cache busting. Users report that they see the HTML structure of the page, but the versioned JS & CSS 404 for them (as they should, since the files no longer exist). I would have thought this would be enough to have the browser update its cache - but apparently not. The only solution so far has been for affected users to manually clear their cache. More than happy to offer more details if needed; any thoughts/input is super appreciated!
react-snap sometimes only crawls a single page
Deployment of react-snap on a CRA app has been mostly painless, giving huge page load speed boosts and requiring zero specialized configuration. However, I'm seeing occasional issues with deploys (both locally and from netlify) only crawling a single page and then appearing done. Like this: Normal result (maybe 50% of the time) means crawling ~50 pages and then everything else successfully finishes. I've tried limiting concurrency to 1 without improvement. What other tools can I use to figure this problem out or configuration options can I include to fix this?
Figured this out: Webpack was setting PUBLIC_URL to the production domain, and new deploys were looking on that domain for a JS file that looked like main.1234abcd.js, using a hash of the js file for cache busting. This didn't exist on the production domain before it was deployed so loading the page failed and no links were detected. Setting the JS links to root-relative URL (i.e. /static/js/main.1234abcd.js) loaded the JS correctly from the snap-created server and allowed it to be crawled correctly. In addition, it was helpful to debug via the anchor crawling section in react-snap here: https://github.com/stereobooster/react-snap/blob/master/src/puppeteer_utils.js#L108-L119
ReactJS + Express Cache Busting issue
I'm trying to avoid caching issues by setting version numbers in the index.html file name (index.hash.html) generated with html-webpack-plugin. However, I'm unable to get the browser to grab the new file from the server because the old index.html file is still cached for X amount of time. I could clear the cache to hit the server again, or change the Cache-Control header, but this doesn't really work well for users that already have the file cached, since it seems they won't see the changes to Cache-Control anyway. I'm looking for the correct solution and can't seem to find one for this issue. Any suggestions?
I'm no expert on this, but we're using no hashing for index.html. This means the TTL for it is zero. On the other hand, all other assets (js, css, svg...) have hashes defined and are cached. Our service worker on the client checks for newer versions and serves them accordingly. Hope this helps!
How to handle expired files without refreshing the browser when using Single Page Application (SPA)?
I have built a full Single Page Application (SPA) using AngularJS. So far so good. As anyone knows, all JavaScript files are loaded on first access, or some files are loaded lazily when needed. So far so good...
The situation is: the server updates all files (HTML partials, JavaScripts, CSS) and the client is left with a lot of outdated files. This would be simply solved by refreshing the browser: hitting F5, Ctrl+F5, or the refresh button. But that concept does not exist when working with a SPA.
I'm not sure how to solve this problem. I could detect it somehow (doing a ping maybe) and just reload that specific file, with a document.write strategy. But that raises another problem: I have a single JavaScript file with all the JavaScript minified. I could try to force a full reload in the browser or force a re-login (and a reload, because login is part of the SPA). But reloading is an ugly solution: imagine the client losing all the data in a form just because they were unlucky enough that the server had just updated. And worse, I would now have to build some "auto-save" feature just because of this.
I'm not sure how to handle this, if possible, in the "Angular way". I wonder how Gmail handles this, because I stay logged in for many, many hours without logging off.
As others have already suggested, keep the logged-in user on the old version of your webapp. Not only is what you ask difficult to do in Angular, it can also lead to a bad user experience and surprising behaviour, since there may not be a mapping between what the user is doing with the old version and what the new version provides. Views may be removed, renamed, split or merged. The behaviour of the same view may have changed, and changing it without notice may cause the user to make mistakes. You gave Gmail as an example, but you may have noticed that changes to its UI always happen after you log out, never while you're using it.
First of all, if your app is an intranet website used during office hours, just update it while nobody is using it. This is a much simpler solution.
Otherwise, if you need to provide round-the-clock availability, my suggestion is:
When you deploy the new version of your SPA, keep the old version in parallel with the new version, keep the current users on the old version, and log new users into the new version. This can be done in a number of ways depending on your setup, but it's not difficult.
Keep the old version around until you're confident that nobody is still using it, or until you're pretty sure that the new version is OK and you won't need to roll back to the old one.
The backend services should be backward-compatible with the old version of the frontend. If that's not possible, you should keep multiple versions of the backend services too.
As the rest of the guys said, a solution can be to version your files. So every time the browser checks those files, it notices that they are different from the ones on the server and caches the new files.
I suggest using a build tool like gulp, grunt or webpack; the last one is becoming more popular. At the moment I use gulp for my projects (I'm moving to webpack, though).
If you are interested in gulp, you can have a look at the gulp-rev and gulp-rev-replace plugins. What do they do? Let's say we have the file app.js in your project. What you get after applying gulp-rev to your project is something like app-4j8888dp.js. Your HTML file, where app.js is injected, is still pointing to app.js, so you need to replace it. To do that you can use the gulp-rev-replace plugin.
E.g. a gulp task like this:
var gulp = require('gulp');
var rev = require('gulp-rev');
var revReplace = require('gulp-rev-replace');
var useref = require('gulp-useref');
var filter = require('gulp-filter');
var uglify = require('gulp-uglify');
var csso = require('gulp-csso');

gulp.task("index", function() {
  var jsFilter = filter("**/*.js", { restore: true });
  var cssFilter = filter("**/*.css", { restore: true });
  var indexHtmlFilter = filter(['**/*', '!**/index.html'], { restore: true });

  return gulp.src("src/index.html")
    .pipe(useref())                // Concatenate with gulp-useref
    .pipe(jsFilter)
    .pipe(uglify())                // Minify any javascript sources
    .pipe(jsFilter.restore)
    .pipe(cssFilter)
    .pipe(csso())                  // Minify any CSS sources
    .pipe(cssFilter.restore)
    .pipe(indexHtmlFilter)
    .pipe(rev())                   // Rename the concatenated files (but not index.html)
    .pipe(indexHtmlFilter.restore)
    .pipe(revReplace())            // Substitute in new filenames
    .pipe(gulp.dest('public'));
});
If you want to know further details, see the links below:
https://github.com/sindresorhus/gulp-rev
https://github.com/jamesknelson/gulp-rev-replace
A single page application is just that: a single stack that controls the client logic of your application. Thus, any navigation done through the application should be handled by your client, not by the server. The goal is to have one single "fat" HTTP request that loads everything you need, and then perform small HTTP requests. That's why you can only have one ng-app in your apps. You are not supposed to have multiple ones and just load the modules you need (although the AngularJS team wants to move that way). In all cases, you should serve the same minified file and handle everything from your application.
It seems to me that you are more worried about the state of your application. As Tom Dale (EmberJS) described in the last Cage Match, we should aim to have applications that can reflect the same data between server and client at any point in time. You have many ways to do so, whether by cookies, sessions or local storage. Usually a SPA communicates with a REST based server, and hence performs idempotent operations on your data.
tl;dr You are not supposed to refresh anything from the server (styles or scripts, for instance), just the data that your application is handling. An initial single load is what SPA is all about.
Separate your data and logic, and reload the data using AJAX whenever you want. For that, I suggest you use a REST API to get the data from the server. A SPA helps you avoid making the same HTTP requests again and again, but it still requires some HTTP requests to bring new data into the view.
Well, you would have to unload the old existing code (i.e. the old AngularJS app, modules, controllers, services and so on). Theoretically, you could create a custom (randomized) app name (with all modules having this prefix for each unique build!) and then rebuild your app in the browser. But seriously, that's a) very complex and b) will probably fail due to memory leaks. So, my answer is: don't.
Caching issues
I would personally recommend naming/prefixing all resources a build depends on with a unique id; either the build id, an scm hash, the timestamp or something like that. So the url to the JavaScript is not domain.tld/resources/scripts.js but domain.tld/resources-1234567890/scripts.js, which ensures that this path's content will never conflict with a (newer) version. You can choose your path/url however you want (depending on the underlying structure: it is all virtual, you can remap urls, etc.). It would not even be required that each version exists forever (i.e. map all resources-(\d+)/ to resources/); however, this would not be nice for the concept of URLs.
Application state
Well, the question is how often the application will change in a way that makes such reloads necessary. How long is the SPA open in a browser? Is it really impossible to support two versions at the same time? The client (the app in the browser) could even send its own version within the HTTP requests.
At the beginning of a new application, there are a lot of changes that would require a reload. But soon after your application reaches a solid state, the changes will be smaller and will not require a reload. The users themselves will do more refreshes... more than we ever expected :/
As with what everyone else is saying... don't. And while socket.io could work, it's asking for trouble unless you are VERY careful.
You have two options. The first: upon a server update, invalidate any previous session (I would also give users half an hour's notice, or 5 minutes depending on the application, before maintenance is done). The second option is versioning: if they are on version 10, they communicate with backend 10. If you release version 11 and they are still on 10, they can still communicate with backend 10.
Remember Google Wave? It failed for a reason. Too many people writing to one source at the same time causes more problems than it solves.
Use the $state service. Create the state while loading the page, using the ctor. After the specified time, re-create the state and load the page.
function(state) {
  state.stateCtor(action);
  $state.transitionTo(action + '.detail', {}, { notify: true });
}
Version your files: on every update, increment the version number and the browser will pick up the new files automatically.
My solution consists of several points.
While this is not essential, I send one JavaScript file to the client side and use grunt to prepare my release. The same grunt file adds a query with the version number to the JavaScript tag. Regardless of whether you have one file or many, you need to add a version number to them. You may need to do this for other resources as well (check the end for an example).
I send back the version number of the app in all my responses from the server (I use node).
In Angular, when I receive any response, I check the version number against the version number loaded. If it has changed (which means the server code has been updated), I alert the user that the app is going to reload, and I use location.reload(true);
Once reloaded, the browser will fetch all the new files again, because the version number in the script tag is different now, so it will not get them from the cache. Hope this helps.
<script src="/scripts/spa.min.js?v=0.1.1-1011"></script>