I have a website, but every time I upload a new update or feature I'm afraid it won't show up for users.
It has happened a few times: we uploaded something new, but for some users it didn't appear. The old content remained, and the update only showed up after a while.
Since I know users won't clear their browser cache, I'd like to know whether there is anything I can do on the development side so that every time I upload something new, no user runs into problems or misses the update.
I currently use AWS services: EC2, S3 buckets, CloudFront, and Route 53.
What to do
The actions to perform are summarized, elegantly and with screenshots, here: https://stackoverflow.com/a/60049652/14077491
Why to do it
When someone makes a request to your website, CloudFront caches the result at its edge locations to speed up the response time for subsequent requests. The default cache TTL is 24 hours, but this can be modified.
You can bypass this by either (1) setting the cache expiration to a very short time span, or (2) using cache invalidation. The first is not recommended, since your users would then have to wait longer for responses more often. Cache invalidation can be performed in a couple of ways, depending on what suits your project; the AWS docs on cache invalidation cover the options so you can choose for your use case.
I have previously added an extra cache invalidation task to my CD pipeline, which automates the process and ensures it is never forgotten. Unless you are posting many, many updates per month, it is also free (CloudFront's first 1,000 invalidation paths per month are free).
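As a rough illustration of such a pipeline step, here is a minimal Python sketch. The `build_invalidation_batch` helper is pure Python and just constructs the payload CloudFront expects; the distribution ID shown in the commented-out `boto3` call is hypothetical and would come from your own CloudFront setup.

```python
# Sketch: invalidate CloudFront paths after a deploy so users see new content.
import time

def build_invalidation_batch(paths):
    """Build the InvalidationBatch payload that CloudFront's API expects."""
    return {
        "Paths": {"Quantity": len(paths), "Items": list(paths)},
        "CallerReference": str(time.time()),  # must be unique per request
    }

# In a CD pipeline you would then call (requires boto3 and AWS credentials):
# import boto3
# boto3.client("cloudfront").create_invalidation(
#     DistributionId="E1234567890ABC",  # hypothetical distribution ID
#     InvalidationBatch=build_invalidation_batch(["/*"]),
# )
```

Invalidating `/*` clears everything in one path, which keeps the monthly path count low.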
Related
Since we use other tools to collect analytics data, I think disabling the built-in web analytics would be a good idea to improve webpage load performance (faster loads, fewer requests, etc.). Would that make a difference?
Thanks in advance.
Block Kentico in your devtools using the network request blocker. Reload the page, and measure its speed with your local Lighthouse or performance profiler.
Unblock Kentico, and repeat the procedure.
Compare the results.
Repeat until satisfied.
By all means, if you're not using the functionality, I'd recommend disabling it. However, if you want to clean up that data, make sure the cleanup happens first, before disabling; otherwise you won't be able to clean up that analytics data.
There is a scheduled task called "Remove analytics data". Edit that task, change the "Task data" value to 540 days, and manually run it. Then repeat the same edit-and-run cycle with the value at 360 days, then 180 days, and finally 0.
After you've run the task with 0 days, there should be no analytics data stored. You are then safe to disable analytics.
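The point of stepping the retention down (540 → 360 → 180 → 0 days) is that each run deletes only a slice of the data rather than everything at once. A minimal Python sketch of the same idea, with an in-memory list standing in for the analytics tables (a real version would issue DELETE statements):

```python
# Sketch: shrink a retention window in steps so each deletion stays small.
# The in-memory rows are a stand-in for real analytics tables.

def delete_rows_older_than(rows, days):
    """Keep only rows younger than `days`; return (kept_rows, deleted_count)."""
    kept = [r for r in rows if r["age_days"] < days]
    return kept, len(rows) - len(kept)

def stepped_cleanup(rows, steps=(540, 360, 180, 0)):
    """Run the deletion repeatedly with a shrinking retention window."""
    for days in steps:
        rows, deleted = delete_rows_older_than(rows, days)
    return rows
```

With the final step at 0 days, nothing survives, matching the state where analytics can be safely disabled.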
Now if you find you really need that data then maybe you want to take a backup of the database OR just leave it in your database, it's up to you.
Lastly, no need to cross post on SO and DevNet as DevNet picks up SO posts tagged with "kentico".
Adding accepted answer from DevNet.
I am currently using the cache for my current project, but I'm not sure if it is the right thing to do.
I need to retrieve a lot of data from a web API (nodes that can be a picture, node, folder, gallery...). Those nodes change very often, so I need fast access (loading up to 300-400 elements at once). Currently I store them in the cache (the key is the MD5 of node_id, so it's easy to retrieve and update).
It is working great so far, but if I clear the cache it takes up to a minute to rebuild it all.
Should I use a database to store those nodes? Would it be quicker, slower, or about the same?
Your question is very broad and thus hard to answer. Saving 300-400 elements under a cache key sounds problematic to me: serializing the data when storing it and deserializing it when retrieving it can cause problems, and whenever your cache service is down your app will be practically unusable.
If you already run into problems when clearing or updating the cache, you might want to look for an alternative. This might be a database or Elasticsearch. Advanced cache features like tagged caching could spare you from having to clear the whole cache when part of the information updates. You might also want to use something like the chain provider to store things in multiple caches, to prevent the aforementioned problem of an unreachable cache "breaking" your app. You could also look into a pattern common with CQRS called a read model.
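To make the chain-provider idea concrete, here is a minimal Python sketch (the class names are hypothetical, not Symfony's actual API): lookups try each tier in order, an unreachable tier is skipped rather than breaking the app, and faster tiers are backfilled on a hit.

```python
# Sketch of a chained cache: try tiers in order, survive an unreachable tier,
# and backfill faster tiers after a hit in a slower one.

class DictCache:
    """Trivial in-memory tier standing in for a real cache backend."""
    def __init__(self):
        self.data = {}
    def get(self, key):
        return self.data.get(key)
    def set(self, key, value):
        self.data[key] = value

class ChainCache:
    def __init__(self, *tiers):
        self.tiers = tiers  # e.g. (local in-memory cache, remote cache)

    def get(self, key):
        for i, tier in enumerate(self.tiers):
            try:
                value = tier.get(key)
            except ConnectionError:
                continue  # an unreachable tier should not break the app
            if value is not None:
                for earlier in self.tiers[:i]:  # backfill faster tiers
                    try:
                        earlier.set(key, value)
                    except ConnectionError:
                        pass
                return value
        return None

    def set(self, key, value):
        for tier in self.tiers:
            try:
                tier.set(key, value)
            except ConnectionError:
                pass
```

The same shape works with a real remote backend (Redis, Memcached) as the slow tier; only the tier classes change.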
There are a lot of variables that come into play. If you want to know which option yields the best results, i.e. which one is quicker, you should do frequent performance tests with realistic data using Symfony's debug toolbar and profiler, or a third-party service like blackfire.io or Tideways. You might also want to do capacity tests with a tool like JMeter to ensure those results still hold true when there are multiple simultaneous users.
On Google App Engine, there are multiple ways a request can start: a web request, a cron job, a task queue, and probably others as well.
How could you (especially on Managed VM) determine the time when your current request began?
One solution is to instrument all of your entry points, and save the start time somewhere, but it would be nice if there was an environment variable or something that told when the request started. The reason this is important is because many GAE requests have deadlines (either 60 seconds or 10 minutes in various scenarios), and it's helpful to determine how much time you have left in a request when you are doing some additional work.
We don't specifically expose anything that lets you know how much time is left on the current request. You should be able to do this by recording the time at the entrypoint of a request, and storing it in a thread local static.
The need for this sounds... questionable. Why are you doing this? It may be a better idea to use a worker / queue pattern with polling for something that could take a long time.
You can see all this information in the logs in your Developer console. You can also add more data to the logs in your code, as necessary.
See Writing Application Logs.
My AJAX calls to App Engine, which do some very basic logic (all the actual processing happens in the background, isolated from the frontend), have become at least 200% slower than they used to be: taking 3 seconds instead of one, all of a sudden, for about a week now.
I am wondering if you have had a similar experience, or if something changed in the meantime that I am not aware of, quota-wise maybe. I am using the free quota.
Thanks
Zac
To my knowledge there is no particular change going on, but we can't be sure. However, slow response times can have multiple root causes.
If you have no traffic on your application, you might have zero instances running; in that case, when you make your request, there is the time for an instance to start up.
If you have a lot of traffic, depending on your configuration, requests can take more time. You need to fine-tune whether a request should wait to be handled by an "overloaded" instance or whether another instance should start.
If you use an API maybe there is something wrong with it.
I would suggest you enable Appstats in your app; it will show you what takes time in your requests, so you will definitely see whether the slowdown is on your side or not.
More and more sites display the number of views (and clicks, as on dzone.com) certain pages receive. What is the best practice for keeping track of view counts without hitting the database on every load?
I have a bunch of potential ideas on how to do this in my head but none of them seem viable.
Thanks,
first time user.
I would try the database approach first - returning the value of an autoincrement counter should be a fairly cheap operation so you might be surprised. Even keeping a table of many items on which to record the hit count should be fairly performant.
But the question was how to avoid hitting the db every call. I'd suggest loading the table into the webapp and incrementing it there, only backing it up to the db periodically or on webapp shutdown.
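That in-memory-with-periodic-flush approach might look like the following Python sketch. The `write_to_db` callable is a hypothetical stand-in for a real `UPDATE ... SET hits = hits + n` statement; the flush threshold is arbitrary.

```python
# Sketch: count hits in memory and flush deltas to the DB periodically, so
# most page views never touch the database.
import threading

class HitCounter:
    def __init__(self, write_to_db, flush_every=100):
        self.write_to_db = write_to_db  # callable(page_id, pending_count)
        self.flush_every = flush_every
        self.pending = {}
        self.lock = threading.Lock()

    def hit(self, page_id):
        with self.lock:
            self.pending[page_id] = self.pending.get(page_id, 0) + 1
            if self.pending[page_id] >= self.flush_every:
                self._flush(page_id)

    def _flush(self, page_id):
        # Caller holds the lock; push the pending delta to the database.
        self.write_to_db(page_id, self.pending.pop(page_id))

    def flush_all(self):
        """Call on webapp shutdown so pending counts aren't lost."""
        with self.lock:
            for page_id in list(self.pending):
                self._flush(page_id)
```

The trade-off is the one the answer notes: counts buffered since the last flush are lost if the process dies without calling `flush_all`.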
One cheap trick would be to simply cache the value for a few minutes.
The exact number of views doesn't matter much anyway since, on a busy site, in the time a visitor goes through the page, a whole batch of new views is already in.
One way is to use memcached as a counter. You could modify this rate-limit implementation to act as a general counter instead. The key could be in yyyymmddhhmm format with an expiration of 15 or 30 minutes (depending on what you consider to be concurrent visitors), and then simply fetch and sum those keys when displaying the page.
Nice libraries for communicating with the memcache server are available in many languages.
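Here is a small Python sketch of that time-bucketed scheme: one counter key per minute, and the "recent views" figure is the sum of the last N buckets. A plain dict stands in for a real memcached client (which would also handle key expiry via its TTL).

```python
# Sketch: time-bucketed view counters, one key per minute (yyyymmddhhmm).
# Summing the last N buckets approximates recent/concurrent visitors.
from datetime import datetime, timedelta

store = {}  # stand-in for memcached; a real client would expire old keys

def bucket_key(page_id, when):
    return f"{page_id}:{when.strftime('%Y%m%d%H%M')}"

def record_view(page_id, when):
    key = bucket_key(page_id, when)
    store[key] = store.get(key, 0) + 1  # memcached: incr (add first if absent)

def recent_views(page_id, now, minutes=30):
    """Sum the last `minutes` one-minute buckets."""
    return sum(
        store.get(bucket_key(page_id, now - timedelta(minutes=m)), 0)
        for m in range(minutes)
    )
```

With memcached, `record_view` maps to an atomic `incr`, so concurrent page loads don't race.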
You could set up a flat file that stores the number of hits. This would have scaling issues, but it could work.
If you don't care about displaying the number of page views, you could use something like Google Analytics or Piwik. Both make their requests after the page has already loaded, so they won't impact load times. There might be a way to make an AJAX request to the analytics server, but I don't know for sure. Piwik is open source, so you could probably hack something together.
If you are using server-side scripting, you could increment a counter in a variable. It's likely to get reset if you restart the services, so it's not such a good idea if accuracy is needed.