I have a MEAN app in production on DigitalOcean, and I've observed strange behavior since recently migrating to a new Bootstrap theme that replaces the previous designs. Even after I removed the code that included certain elements, such as a background image, the webserver was still looking for them:
Failed to load resource: the server responded with a status of 404 (Not Found) https://www.website.com/img/frontpage-img.jpg
I even grepped from the root directory to ensure frontpage-img.jpg was no longer included anywhere within the codebase, and got
Binary file .git/index matches
as the only result. This behavior is not replicated on localhost, which works exactly as I expect. Is this a server-related issue? How do I purge the old elements so the server stops trying to load them?
I am building a React app, a single-page application hosted on Amazon S3.
Sometimes I deploy a change to the back end and the front end at the same time, and I need all browser sessions to start running the new version, or at least those sessions that start after the last front-end deploy.
What happens is that many of my users keep running the old front-end version on their phones for weeks, which is no longer compatible with the new version of the back end, while some of them pick up the update when they start their next session.
As I use Webpack to build the app, it generates bundles with hashes in their names, while the index.html file, which defines the bundles that should be used, is uploaded with the following cache-control property: "no-cache, no-store, must-revalidate". The service worker file has the same cache policy.
The idea is that the user's browser can cache everything except for the first files they need. The plan was good, but when I replace the index.html file with a newer version, my users do not refetch it when they restart the app.
Is there a definitive guide or a way to workaround that problem?
I also know that a PWA should work offline, so it has to be able to cache files for reuse, but that ability doesn't help me perform a massive, instantaneous update, right?
What are the best options I have to do it?
You've got the basic idea correct. Why your index.html is not updated is a tough question to answer since you're not providing any code – please include your Service Worker code. Keep in mind that, depending on the logic implemented in the Service Worker, it doesn't necessarily honor the HTTP caching headers and may cache everything, including the index.html file, as seems to be happening now.
To have the app also work offline, you would probably want a network-first SW strategy. With network-first, the browser tries to load files from the network, and if that fails it falls back to the latest cached version of the file it tried to get. Another option is the stale-while-revalidate strategy: it first gives the user the old file (which is super fast) and then updates the file in the background. There are other strategies as well; I suggest you read through the documentation of the most widely used SW library, Workbox (https://developers.google.com/web/tools/workbox/modules/workbox-strategies).
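The network-first idea can be sketched as a tiny helper (hypothetical code for illustration, not the Workbox API – in a real service worker you would use Workbox's NetworkFirst strategy with the Cache Storage API):

```javascript
// Network-first sketch: try the network and refresh the cache on success;
// if the network fails (offline), fall back to the last cached response.
async function networkFirst(request, fetchFn, cache) {
  try {
    const response = await fetchFn(request); // try the network first
    cache.set(request, response);            // keep a copy for offline use
    return response;
  } catch (err) {
    const cached = cache.get(request);       // offline: fall back to cache
    if (cached !== undefined) return cached;
    throw err;                               // nothing cached either
  }
}
```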
One thing to keep in mind:
In all strategies except "skip the SW and go to the network", you cannot really ensure the user gets the latest version of the index.html. It is not possible. If the SW returns something from the cache, it could be an old version, and that's that. In these situations what is usually done is a notification to the user that a new version of the app has been downloaded in the background. Basically, the user loads the app, sees the version that was available in the cache, and the SW then checks for updates. If an update is found (there is a new index.html and, because of that, a new service-worker.js), the user sees a notification saying the page should be refreshed. You can also trigger the SW to check the server for an update manually from your own JS code if you want; in that situation, too, you would show a notification to the user.
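The update-notification flow described above can be sketched like this (a hypothetical helper; the callback names are illustrative, and in a real page you would pass the actual ServiceWorkerRegistration):

```javascript
// Call onUpdate() once a new service worker finishes installing while an
// old one still controls the page, i.e. a new version was downloaded in
// the background and the user should be prompted to refresh.
function watchForUpdate(registration, hasController, onUpdate) {
  registration.addEventListener('updatefound', () => {
    const newWorker = registration.installing;
    newWorker.addEventListener('statechange', () => {
      if (newWorker.state === 'installed' && hasController()) {
        onUpdate(); // e.g. show a "new version available – refresh" banner
      }
    });
  });
}

// In the page, roughly:
// navigator.serviceWorker.register('/service-worker.js').then(reg =>
//   watchForUpdate(reg, () => !!navigator.serviceWorker.controller, showBanner));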
Does this help you?
Recently we made changes to one of our applications and noticed that our customers were not getting the new views. So we decided to version the files so we can force clients' browsers to fetch the new views every time we release a new version.
So far so good, but we needed to test this versioning in a local environment before deploying it. Unfortunately, on localhost the views are never cached. I noticed that the requests for the views are sent with Cache-Control: max-age=0. If I am not mistaken, this causes the resource not to be cached.
I also read that this could be caused by the ETag header, so I removed it, but the views are still not cached. I also set the Cache-Control: max-age=86400,public header on the response. So the only remaining suspect was the Cache-Control: max-age=0 header on the request. I tried changing that request header to Cache-Control: max-age=86400,public as well, but still no luck.
The views are requested by AngularJS; they are templates in directives. There is also a difference between the IIS versions we use locally and on the server: locally we are on 7.5, and on the server it is 8.0. Could this be the problem?
Can anyone point me in the right direction?
Edit:
The Disable cache option in the Chrome dev tools is not enabled.
One thing I can think of is that you have Disable cache enabled in your browser, if it's happening only on your local system:
Getting around browser caching is normally quite tricky, and most people have trouble controlling browser caching via headers. Unfortunately, Cache-Control: max-age is not uniformly implemented across browsers. If the issue still occurs in spite of the above, could you provide screenshots of the Network tab in your Chrome developer tools?
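The file-versioning approach the question describes can be sketched as a small helper (purely illustrative; APP_VERSION is an assumed build-time constant, and the real app would apply this wherever template URLs are built):

```javascript
// Hypothetical cache-busting helper: append the deployed version to a URL
// so each release yields a distinct URL and stale cached copies are bypassed.
const APP_VERSION = '2.0.1'; // assumption: stamped at build/deploy time

function versioned(url) {
  const sep = url.includes('?') ? '&' : '?';
  return url + sep + 'v=' + encodeURIComponent(APP_VERSION);
}
```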
I have a fairly big application which went over a major overhaul.
The newer version uses a lot of JSONP calls, and I notice 500 server errors. Nothing is logged in the Logs section to determine the cause. It happens on JS, PNG, and even Jersey (servlet) requests too.
Searching SO and the groups suggested that these errors are common during deployment. But it happens even hours after deployment.
BTW, the application has become slightly bigger, and in a few rare cases it even hits a deadline exception while starting new instances. Sometimes it starts and serves within 6–10 seconds; sometimes it takes more than 75 seconds, causing a timeout for that request. I see the same behavior for warmup requests too. Nothing custom is loaded during app warmup.
I feel like you should be seeing the errors in your logs. Are you exceeding quotas or hitting deadline errors? Perhaps you have an error in your error handler, its file cannot be found, or the path to the error handler overlaps with another static file route?
To troubleshoot, I would implement custom error pages so you can determine the actual error code. I'm assuming Python, since you never specified what language you are using. Add the following to your app.yaml, create static HTML pages that give the visitor some idea of what's going on, and then report back with your findings:
error_handlers:
- file: default_error.html
- error_code: over_quota
  file: over_quota.html
- error_code: dos_api_denial
  file: dos_api_denial.html
- error_code: timeout
  file: timeout.html
If you already have custom error handlers, can you provide some of your app.yaml so we can help you?
Some 500s are not logged in your application logs: they are failures at the GAE front end. If, for some reason, you have a spike in requests and new instances of your application cannot be started fast enough to serve them, your clients may see 500s even though those 500s never appear in your application's logs. The GAE team is working to provide visibility into those front-end logs.
I just saw this myself. I was researching logs of visitors who loaded only half of the graphics files on a page, and I clicked the same blog link they used to reach our site. In my case, I saw a 500 error in the Chrome developer console for a JS file, yet the GAE logs said the file was served correctly with a 200 status. That JS file loads other images, which were not loaded. In my case it was an HTTPS request.
It is really important for us to know our customers' experience (obviously). I wanted to let you know that this problem is still occurring. Just having it show up in the logs would be great – even attach a warm-up error to it or something, so we know it is an unavoidable artefact of a complex server system (totally understandable). I just need to know whether I should be adding instances or doing something else. This error did not take 60 seconds to appear, maybe 5 to 10 seconds; it is as if the SSL handshake round trip failed midway, yet the logs showed it as a success.
So can I increase any timeout for the handshake or is that done on the browser side?
I'm programmatically creating Blobs using the experimental Files API. I then call getServingUrl() to get the URL of the image that I programmatically put in the Blobstore.
Occasionally, this method returns a relative path like: /_ah/img/D4HVTcZm0wvYCcDNhV892A instead of 127.0.0.1/_ah/img/D4HVTcZm0wvYCcDNhV892A
This only happens occasionally and breaks some of my client code. Is this a known issue? (Quick googling didn't find anything.) I'd prefer not to add any hacks to fix it (i.e. prepending the host on the development server when it is missing).
Any ideas?
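Pending a proper fix, one defensive workaround (admittedly the kind of hack the question hopes to avoid, so purely illustrative; the function name and origin parameter are assumptions) is to normalize the URL on the client before using it:

```javascript
// If the serving URL comes back relative (as seen on the dev server),
// resolve it against a known origin; absolute URLs pass through untouched.
function absoluteServingUrl(url, origin) {
  if (/^https?:\/\//.test(url)) return url; // already absolute
  return new URL(url, origin).href;         // resolve relative path
}
```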
I'm running into this problem and it's driving me crazy.
Also, we are not allowed to update the version of IIS.
The content on the server (.asmx files exposing WCF web methods called via SOAP) exists, and I can browse freely in inetmgr to the virtual directory that contains this data. The files all exist on the file system, and the virtual directories all point to the correct locations and have the correct ACLs, but when I go to their URLs all I get is a 404.
I have reset IIS, I have rebooted the server, and I have done everything I can think of.
The IIS logs simply show "404 - -" as the entire line, with no other data.
The event logs show nothing, and ASP.NET is not dying or anything like that.
With no event logs and minimal IIS logging, I have no idea what to do, and was hoping others had run into this before.
In the IIS logs there should be some sub-status codes alongside the 404 error that will help pinpoint the issue.
Check that IIS has the correct version of .NET registered using the aspnet_regiis command-line tool. Also check that the .asmx extension is mapped to the .NET ISAPI filter, as .aspx pages are.