CakePHP-based website becomes unresponsive in browser - cakephp

I am working on a large website built on Croogo (CakePHP). Today, without any warning, the following started to happen: after a few consecutive HTTP requests, the site gets stuck loading in the browser. Any attempt to access any other URL on the same domain ends the same way.
Now the interesting part: if I delete the "CAKEPHP" cookie and reload, everything works fine until it freezes again.
Notes:
This is happening client-side; the site keeps responding in other clients
PHP briefly spikes to around 30% CPU just before the site becomes unresponsive
This is application-related: I've tested it on three different configurations and all behaved the same
I've commented out the code I was writing before this started, and still no change
An Apache restart also makes the website responsive again in the browser
There are absolutely no slow queries; the longest time recorded for a series of queries is 134 ms. Also, PHP mostly just parses data, with no demanding operations
This is happening equally in scripts that run a single query and output one variable and in scripts that parse large data sets

This was solved by telling CakePHP to store sessions in the database instead of using the "php" handler.

Related

NextJS - Incremental Static Regeneration on Vercel constantly times out (on a pro plan - 60s time-out limit)

I'm having issues with a NextJS project hosted on a Vercel pro plan. Routes that are set up with ISR just keep timing out. I have revalidate set to 120s currently but I've tried it with 60s, 30s, and 6000s too just in case I was calling it too often.
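For reference, the pages in question are set up roughly like this (a simplified sketch: the route name, Strapi endpoint and field names below are placeholders, only revalidate: 120 matches my actual setup):
// pages/articles/[slug].tsx - placeholder route; only revalidate: 120 reflects my setup
import type { GetStaticPaths, GetStaticProps } from "next";

type Article = { slug: string; title: string };

export const getStaticPaths: GetStaticPaths = async () => {
  const res = await fetch("https://my-strapi.example.com/api/articles"); // placeholder endpoint
  const articles: Article[] = await res.json();
  return {
    paths: articles.map((a) => ({ params: { slug: a.slug } })),
    fallback: "blocking",
  };
};

export const getStaticProps: GetStaticProps = async ({ params }) => {
  const res = await fetch(`https://my-strapi.example.com/api/articles/${params?.slug}`);
  const article: Article = await res.json();
  return {
    props: { article },
    revalidate: 120, // regenerate in the background at most once every 120 s
  };
};

export default function ArticlePage({ article }: { article: Article }) {
  return <h1>{article.title}</h1>;
}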
I'm using Strapi as the headless CMS that serves the API NextJS builds pages from; it is deployed in Render's German region. The database Strapi uses is a MongoDB database hosted on MongoDB Atlas, deployed in MongoDB's Ireland AWS region. (I've also set the Serverless Functions on Vercel to be served from London, UK, but I'm not sure whether that affects ISR.)
There are 3 different dynamic routes with about 20 pages each, and at build time they average 6999ms, 6508ms and 6174ms respectively. Yet at run time, if I update some content in the Strapi CMS and wait the 120s that I've set for revalidate, the page hardly ever gets rebuilt. If I look at the Vercel dashboard's "Functions" tab, which shows realtime logs, I see that many of the pages have attempted to rebuild all at the same time and they are all hitting the 60s time-out limit.
I also have the Vercel logs sent to LogTail. If I filter the logs for the name of the page I've edited, I can see that it returns a 304 status code before the 120s has passed, as expected, but after 120s it tries to fetch and build the new page and nearly always hits the time-out error.
So my first question is: why are so many of them trying to rebuild at the same time when nothing has changed in the CMS for any of those pages except the one I've deliberately changed myself? And secondly, why does it take an average of only around 6000ms to build a page at build time, yet during ISR they hit the 60s time-out limit?
Could it be that so many rebuilds are being triggered that they all end up causing each other to time out? If so, how do I tackle that first issue?
Here is a screenshot of my Vercel realtime logs. As you can see, many of the pages are all trying to rebuild at once, but I'm not sure why; I've only changed the data for one page in this instance.
To try and debug the issue, I created a Postman Flow for building one of the dynamic routes and added up the time for each API call needed to build the page: I get 7621ms on average after a few tests. Here is a screenshot of the Postman console:
I'm not that experienced with Next.js ISR, so I'm hoping I'm just doing something wrong or have a setting wrong on Vercel etc., but after looking on Stack Overflow and other websites, I believe I'm using ISR as expected. If anybody has any ideas or advice about what might be going on, I'd very much appreciate it.

Reduce initial server response time with Netlify and Gatsby

I'm running PageSpeed Insights on my website and one big error that I get sometimes is:
Reduce initial server response time
Keep the server response time for the main document short because all other requests depend on it. Learn more.
React: If you are server-side rendering any React components, consider using renderToNodeStream() or renderToStaticNodeStream() to allow the client to receive and hydrate different parts of the markup instead of all at once. Learn more.
I looked up renderToNodeStream() and renderToStaticNodeStream() but I didn't really understand how they could be used with Gatsby.
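From what I can tell, outside of Gatsby they are meant for streaming server rendering from a Node server, something like this (a rough sketch with a made-up Express app and component; Gatsby pre-renders pages at build time, so it doesn't seem to expose this directly):
// server.tsx - hypothetical Express server, not part of a Gatsby build
import express from "express";
import * as React from "react";
import { renderToNodeStream } from "react-dom/server";
import App from "./App"; // placeholder root component

const app = express();

app.get("/", (req, res) => {
  res.write('<!DOCTYPE html><html><body><div id="root">');
  const stream = renderToNodeStream(<App />);
  stream.pipe(res, { end: false }); // markup is flushed to the client as it is generated
  stream.on("end", () => {
    res.write("</div></body></html>");
    res.end();
  });
});

app.listen(3000);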
It looks like a problem others are having as well.
The domain is https://suddenlysask.com if you want to look at it.
My DNS records
Use a CNAME record on a non-apex domain. By using the bare/apex domain you bypass the CDN and force all requests through the load balancer. This means you end up with a single IP address serving all requests (fewer simultaneous connections), the server is proxying to the content without caching, and the distance to the user is likely to be greater.
EDIT: Also, your HTML file is over 300KB. That's obscene. It looks like you're including Bootstrap in it twice, you're repeating the same inline <style> tags over and over with slightly different selector hashes, and you have a ton of (unused) utility classes. You only want to inline critical CSS if possible; serve the rest from an external file if you can't treeshake it.
Well, the behavior is unexpected. I ran PageSpeed Insights on your site and it gave me a warning on the first test, with an initial response time of 0.74 seconds. Then I used my developer tools to look at the initial response time on the document root, which was roughly between 300 and 400ms. So I ran the PageSpeed test again and the response was 140ms; the test passed. After that it was 120ms.
See the attached image.
I really don't think there is a problem with the site. Still, if you want to try something, I would recommend changing the server or hosting for once and going for something different. I don't know what kind of server the site is deployed on right now. You can try AWS S3 with CloudFront; it works well for me.

CN1 stop() method not working every time when issuing REST API call

Is it sensible to use the stop() method to issue a REST API call and send data up to the cloud (which may take 0.1s-5s depending on connectivity)?
requestBuilder.acceptJson().body(jsonDataBody).getAsJsonMap()
I ask because I can consistently reproduce an issue on the simulator where no data is sent when I close the app, but the data does go through if I trigger the same process via a button. On real devices it seems to work fine, but I am getting occasional customer feedback that it isn't always working, i.e. data isn't being sent to the cloud (though with no errors). I cannot reproduce it on my own real devices.
I'm having to code blind and just force it by making an additional async REST call when I do screen navigation, which does the same as the code in stop() except that it uses this method:
requestBuilder.acceptJson().body(jsonDataBody).fetchAsJsonMap()
Background:
I have my data in a cloud database, fronted by REST APIs. My app uses storage to record the datetime of the last upload and download of data. When I open my app, via start(), it issues a REST call and gets all data with a datetime stamp greater than the last download datetime. When I close my app, I issue another call, via stop(), to send all data changed locally since the last upload datetime up to the cloud. Each record has a lastUpdateDatetime entity property.
Thanks
That's problematic for two reasons. First, the simpler case:
OSes can invoke stop()/start() in quick succession, so your app will stop and start almost immediately, and this might trigger data corruption if you don't guard against it
The worse problem is that if an operation takes a bit longer, some OSes might kill it. You can use background fetch to perform downloads/uploads while your app isn't running, and that would solve the technical problem here
Personally, I would just send data on change. If changes come too rapidly, I'd add a time threshold for sending, but I'd send while the app is running and not in stop(). Notice that on the device the situation is far more complex, as the OS can suddenly decide to kill the app to make room for the phone app or another critical app. You need to program defensively and try to avoid assumptions where possible.

Force updates on installed PWA when changing index.html (prevent caching)

I am building a React app, a single-page application hosted on Amazon S3.
Sometimes, I deploy a change to the back-end and to the front-end at the same time, and I need all the browser sessions to start running the new version, or at least those whose sessions start after the last front-end deploy.
What happens is that many of my users keep running the old front-end version on their phones for weeks, even though it is no longer compatible with the new version of the back-end, while some of them do get the update when they start their next session.
As I use Webpack to build the app, it generates bundles with hashes in their names, while the index.html file, which defines the bundles that should be used, is uploaded with the following cache-control property: "no-cache, no-store, must-revalidate". The service worker file has the same cache policy.
The idea is that the user's browser can cache everything except for the first files it needs. The plan was good, but I'm replacing the index.html file with a newer version and my users are not refetching this file when they restart the app.
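For reference, the hashed-bundle part is just Webpack's contenthash naming, roughly like this (a simplified sketch, not my exact config):
// webpack.config.ts - simplified; index.html itself is uploaded separately to S3 with
// Cache-Control: no-cache, no-store, must-revalidate
import path from "path";
import type { Configuration } from "webpack";

const config: Configuration = {
  entry: "./src/index.tsx",
  output: {
    path: path.resolve(__dirname, "build"),
    filename: "[name].[contenthash].js", // hashed bundles are safe to cache indefinitely
  },
};

export default config;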
Is there a definitive guide or a way to workaround that problem?
I also know that a PWA should work offline, so it has to be able to cache content for reuse, but that doesn't help me perform a massive, instantaneous update, right?
What are the best options I have to do it?
You've got the basic idea correct. Why your index.html is not updated is a tough question to answer since you're not providing any code – please include your Service Worker code. Keep in mind that, depending on the logic implemented in the Service Worker, it doesn't necessarily honor the HTTP caching headers and may cache everything, including the index.html file, which seems to be what is happening now.
In order to have the app also work in offline mode, you would probably want to use a network-first SW strategy. With network-first, the browser tries to load files from the network, and if it doesn't succeed it falls back to the latest cached version of the particular file it tried to get. Another option would be what is called a stale-while-revalidate strategy: it first gives the user the old file (which is super fast) and then updates the file in the background. There are other strategies as well; I suggest you read through the documentation of the most widely used SW library, Workbox (https://developers.google.com/web/tools/workbox/modules/workbox-strategies).
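As a rough illustration of those two strategies with Workbox (the route matching below is an assumption, not taken from your app):
// service-worker.ts - a minimal Workbox sketch of the strategies described above
import { registerRoute } from "workbox-routing";
import { NetworkFirst, StaleWhileRevalidate } from "workbox-strategies";

// Navigations (index.html): try the network first, fall back to the cache when offline.
registerRoute(
  ({ request }) => request.mode === "navigate",
  new NetworkFirst({ cacheName: "pages" })
);

// Hashed assets: serve from the cache immediately, refresh the cached copy in the background.
registerRoute(
  ({ request }) => request.destination === "script" || request.destination === "style",
  new StaleWhileRevalidate({ cacheName: "assets" })
);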
One thing to keep in mind:
In all strategies other than "skip the SW and go to the network", you cannot really ensure the user gets the latest version of index.html. It is not possible: if the SW returns something from the cache, it could be an old version, and that's that. In these situations, what is usually done is to notify the user that a new version of the app has been downloaded in the background. Basically, the user loads the app and sees the version that was available in the cache, and the SW then checks for updates. If an update is found (there is a new index.html and, because of that, a new service-worker.js), the user sees a notification saying the page should be refreshed. You can also trigger the SW to check the server for an update manually from your own JS code if you want. In that situation, too, you would show a notification to the user.
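A sketch of that update-notification pattern from the page's own code (the file name and the notification helper are placeholders):
// Client-side registration code, not the Service Worker itself.
function showRefreshNotification() {
  // placeholder UI: tell the user a new version is ready and ask them to refresh
  console.log("A new version is available. Please refresh the page.");
}

async function registerAndWatchForUpdates() {
  const registration = await navigator.serviceWorker.register("/service-worker.js");
  registration.update(); // manually ask the browser to check the server for a new SW

  registration.addEventListener("updatefound", () => {
    const newWorker = registration.installing;
    if (!newWorker) return;
    newWorker.addEventListener("statechange", () => {
      // "installed" while an old worker still controls the page means a new
      // version has been downloaded in the background.
      if (newWorker.state === "installed" && navigator.serviceWorker.controller) {
        showRefreshNotification();
      }
    });
  });
}

registerAndWatchForUpdates();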
Does this help you?

CakePHP - database sessions, ajax not saving

I am using sessions saved in the database. It works well: a lot of data relating to pagination, browsing history, etc. is stored perfectly within the database.
However, I notice that data sent to a controller using Ajax is not being stored successfully.
If I debug the session within the controller called by ajax, right after I have set the session vars, I see the values appear to be stored correctly in the session, but on subsequent requests, it transpires that the session vars have NOT been saved.
I have done some testing and have found that the problem disappears if I change back to using "php" for the session instead of "database".
I have eliminated pretty much everything from the mix - and it boils down to Cake not saving session data that is sent by ajax. Again, simply switching back to using "php" for sessions, and everything works perfectly.
I wonder if anyone has experienced anything similar?
CakePHP 2.4
Many thanks.
Well, in case anyone is interested, it turns out that the issue I was having was not strictly related to storing sessions in the database.
My application was making two ajax calls at the same time, both attempting to update the session. This was a bug / error on my part, and it was causing other session-related issues too, such as returning a 403 error status.
I removed the offending bug and all is now well.
