How to cache the WordPress DB so it doesn't query MySQL? - database

I have a host that is unlimited in everything except the MySQL database, which allows only a limited number of simultaneous connections. I want to know if there is any way to cache the posts on the server/host, so WordPress doesn't load them from the database every time a visitor loads the page.
This would not be a problem with little traffic, but I get a lot of traffic, and yesterday it crashed my database.
Thank you.

Use a caching solution. Memcached is one of the most widely used.
There are also solutions that do complete page caching; Resin and Varnish are quite popular.
In fact, there is a Varnish plugin for WordPress at http://wordpress.org/extend/plugins/wordpress-varnish/installation/.
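As an illustration, here is a minimal sketch of object caching with the PHP Memcached extension inside WordPress. The host/port, cache key and TTL are assumptions; in practice a persistent object cache drop-in or a page-caching plugin would handle this for you.

<?php
// Minimal sketch: cache an expensive query result in Memcached so
// repeated page loads don't hit MySQL for it.
global $wpdb;

$cache = new Memcached();
$cache->addServer('127.0.0.1', 11211); // assumed Memcached host/port

$posts = $cache->get('recent_posts');  // arbitrary cache key
if ($posts === false) {
    // Cache miss: query the database once, then keep the result for 5 minutes.
    $posts = $wpdb->get_results(
        "SELECT ID, post_title FROM {$wpdb->posts}
         WHERE post_status = 'publish'
         ORDER BY post_date DESC LIMIT 10"
    );
    $cache->set('recent_posts', $posts, 300);
}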

Related

Matomo Setup highly Unstable

I'm having trouble with my Matomo instance on my OpenShift cluster. This Matomo is rather unstable.
When I start the pod, Matomo runs fine for a (very) short time. Then Matomo starts to respond with HTTP 504 regularly, eventually becoming unable to process any request successfully and responding with 504 only.
My guess is that Matomo attempts (lots of) communication with the internet. My OpenShift cluster is not allowed to communicate with the internet. Could this be the cause of the trouble?
What is the recommended setup for Matomo in general, and Matomo on OpenShift in particular?
I recently updated to Matomo 4. It looks a tiny bit more stable, but there is still a way to go before production use.
Best Regards
Sebastian
Not sure if you're still having the issue, but if your container has limited internet connectivity, there might be something to it. AFAIK Matomo has the Provider plugin enabled by default, which performs an external DNS lookup. That takes place as part of tracking-request processing and could fail in your case. Check this: Matomo optimisation how-to
Tracker API: if the 'Provider' plugin is activated in your Matomo, the internet provider is detected by doing a reverse DNS lookup, which adds a few milliseconds of overhead.
Other than that, I suppose you're speaking about timeouts on tracking requests; you could enable tracker debug mode (in config.ini.php) to see the output of request processing.
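As a minimal sketch, the debug switch lives in the [Tracker] section of config.ini.php (turn it off again after testing, as it is very verbose):

[Tracker]
; print debug output for every tracking request
debug = 1

If the external DNS lookups turn out to be the problem, the Provider plugin can also be deactivated from the command line with ./console plugin:deactivate Provider.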
If this is about reporting queries, then it may be a broader subject, as it could be an issue with archiving timeouts.
If this reaches you and you respond, please specify whether this is about tracking or reporting requests :)

Chrome update slowed Ajax/Angular Network rendering and loading

About two weeks ago, a Chrome update crippled users of my Angular app. I load a lot of data, but the entire single-page application used to load in under 4 seconds; after updating Chrome two weeks ago, every single user went to over 40 seconds. I did not experience the problem at first, but when I upgraded Chrome from 63.0.3239.132 to 64.0.3282.167, the problem began for me as well.
Somewhere between Chrome 63.0.3239.132 and 64.0.3282.167, a change was introduced that basically slowed my Angular app to a crawl. It affects loading and rendering across the board and has made the entire app almost unusable. I've been looking for the issue for a few days with no joy.
Does anyone have any insight or recommendation on what could cause such a performance degradation?
Here is a screenshot of my network tab. All of this used to be very fast before the Chrome update and now it just crawls.
If I set $httpProvider.useApplyAsync(true), it alleviates the problem, but my application is huge and this causes a lot of erratic behavior in a five-year-old application.
I'm not sure if this is still an issue, but I know that Google has continued to ramp up security measures in Chrome. This is especially true for HTTPS, and I believe Google is pushing for everything to move to HTTPS. Certificates that are not clean (there are several criteria for this) present problems and may require extra measures to process. I believe there is an add-on (or built-in feature) for Chrome dev tools that can break out the TLS processing to show you more detail.
A high TTFB reveals one of two primary issues: either bad network conditions between client and server, or a slowly responding server application.
To address a high TTFB, first cut out as much network as possible. Ideally, host the application locally and see if there is still a big TTFB. If there is, then the application needs to be optimized for response speed. This could mean optimizing database queries, implementing a cache for certain portions of content, or modifying your web server configuration. There are many reasons a backend can be slow. You will need to do research into your software and figure out what is not meeting your performance budget.
If the TTFB is low locally, then the network between your client and the server is the problem. The network traversal could be hindered by any number of things. There are a lot of points between clients and servers, and each one has its own connection limitations and could cause a problem. The simplest way to test this is to put your application on another host and see if the TTFB improves.
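One quick way to compare TTFB from different locations is curl's timing variables (the URL below is a placeholder for your own app):

curl -o /dev/null -s -w "connect: %{time_connect}s  ttfb: %{time_starttransfer}s  total: %{time_total}s\n" https://your-app.example.com/

Run it against the locally hosted copy and against the production host; a large gap in time_starttransfer points at the network or the remote backend rather than the browser.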

redis consuming huge amount of memory

We have recently installed Redis as the cache for our Magento-based sites.
We have two websites on our server, both using the same Redis server; the server runs CentOS Linux.
The issue we are facing is that Redis is consuming quite a lot of RAM, and usage grows every day.
We have set some values in our Magento local.xml file for Redis:
<redis_session>
<host>xxxxxxx</host>
<port>xxxx</port>
<password></password>
<timeout>2.5</timeout>
<persistent></persistent>
<db>1</db>
<compression_threshold>2048</compression_threshold>
<compression_lib>gzip</compression_lib>
<log_level>1</log_level>
<max_concurrency>6</max_concurrency>
<break_after_frontend>5</break_after_frontend>
<break_after_adminhtml>30</break_after_adminhtml>
<first_lifetime>86400</first_lifetime>
<bot_first_lifetime>60</bot_first_lifetime>
<bot_lifetime>7200</bot_lifetime>
<disable_locking>0</disable_locking>
<min_lifetime>60</min_lifetime>
<max_lifetime>2592000</max_lifetime>
<automatic_cleaning_factor>1</automatic_cleaning_factor>
</redis_session>
It seems like we do not have an expiry set, and there is no memory usage limit either.
I know there are a few instructions on the internet for setting an expiry, but there is nothing easy to follow for use with Magento.
All help is appreciated.
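For what it's worth, the memory cap and eviction policy are normally set on the Redis side in redis.conf rather than in Magento's local.xml. A minimal sketch (the 512mb value is an arbitrary example; volatile-lru assumes the session keys carry TTLs, as the lifetimes in the config above suggest):

# redis.conf: cap memory usage instead of growing without bound
maxmemory 512mb
# evict the least-recently-used keys among those that have an expiry set
maxmemory-policy volatile-lru

The same settings can be applied at runtime with redis-cli CONFIG SET maxmemory 512mb and CONFIG SET maxmemory-policy volatile-lru, but putting them in redis.conf makes them persist across restarts.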

Fastest Open Source Content Management System for Cloud/Cluster deployment

Clouds are currently mushrooming like crazy and people are starting to deploy everything to the cloud, including CMS systems, but so far I have not seen anyone succeed in deploying a popular CMS system to a load-balanced cluster in the cloud. Some performance hurdles seem to prevent standard open-source CMS systems from being deployed to the cloud like this.
CLOUD: A cloud, or better, a load-balanced cluster, has at least one frontend server, one network-connected(!) database server and one cloud-storage server. This fits Amazon Elastic Beanstalk and Google App Engine well. (This specifically excludes a CMS on a single computer or Linux server with MySQL on the same "CPU".)
To deploy a standard CMS in such a load-balanced cluster, you need a cloud-ready CMS with the following characteristics:
The CMS must deal with the latency of queries and still be responsive, rendering pages in less than a second so they can be cached (or use a precaching strategy)
The filesystem probably must be connected to a remote storage (Amazon S3, Google cloudstorage, etc.)
Currently I know of Python/Django and WordPress having middleware modules or plugins that can connect to cloud storage instead of a filesystem, but there might be other cloud-ready CMS implementations (Java, PHP, ?) and systems.
I myself have failed to deploy django CMS to the cloud, ultimately due to the query latency of the remote DB. So here is my question:
Did you deploy an open-source CMS that still performs well in rendering pages and in the backend admin? Please post your average page-rendering stats in microseconds for uncached pages.
IMPORTANT: Please describe your configuration, the problems you encountered, and which modules had to be optimized in the CMS to make it work; don't post a simple "this works", contribute your experience and knowledge.
Such a CMS probably has to make fewer than 10 queries per page (if more, the queries must be made in parallel), and it has to deal with filesystem access times of 100 ms for a stat and query delays of 40 ms; at 40 ms per round trip, even 10 sequential queries already cost 400 ms before anything is rendered.
Related:
Slow MySQL Remote Connection
Have you tried Umbraco?
It relies on a database, but it keeps layers of cache so you aren't doing SELECTs on every request.
http://umbraco.com/azure
It works great on Azure too!
I have found an excellent performance test of WordPress on App Engine. It appears that Google has spent some time optimizing this system for load-balanced cluster and remote-DB deployment:
http://www.syseleven.de/blog/4118/google-app-engine-php/
Scaling test from the report:

parallel hits    GAE     1&1     Sys11
1                1,5     2,6     8,5
10               9,8     8,5     69,4
100              14,9    -       146,1
Conclusion from the report: the system is slower than on traditional hosting, but it scales much better.
http://developers.google.com/appengine/articles/wordpress
We have managed to deploy the Python django CMS (www.django-cms.org) on Google App Engine with Cloud SQL as the DB and Cloud Storage as the filesystem. Cloud Storage was attached by forking and fixing a django.storage module by Christos Kopanos: http://github.com/locandy/django-google-cloud-storage
After that, a second set of problems came up when we discovered access times of up to 17 s for a single page. We investigated this and found that easy-thumbnails 1.4 accessed the normal file system for mod_time requests while writing results to the store (rendering all thumbnail images on every request). We switched to the development version, where that was already fixed.
Then we worked with SmileyChris to fix unnecessary access to mod_times (stat'ing the file) on every request for every image, by tracing and posting issues to http://github.com/SmileyChris/easy-thumbnails
This reduced access times from 12-17 s to 4-6 s per public page on the CMS, basically eliminating all storage/"file"-system access. Once that was fixed, easy-thumbnails replaced (by design) file-system accesses with queries to the DB to check on every request whether a thumbnail's source image had changed.
One thing for the web designer: using an image.width statement in the template forces an ugly, slow read on the "filesystem", because image widths are not cached.
Further investigation led to the conclusion that DB accesses are very costly too, taking about 40 ms per round trip.
Up to now the deployment has been unsuccessful, mostly due to DB access times in the cloud leading to 4-5 s delays in rendering a page before caching it.

load balancing webserver / IIS / MSSQL - many clients

I have to say I'm not an administrator of any sort and have never needed to distribute load on a server before, but now I'm in a situation where I can see that I might have a problem.
This is the scenario and my problem:
I have IIS running on a server together with MSSQL. A client can send off a request that retrieves a data package with a single request (1 request) to the MSSQL database; that data is then sent back to the client.
This package of data can vary in length, but is generally < 10 MB.
This is all working fine, but I'm now facing a what-if: if I have 10,000 clients pounding the server simultaneously, I can see my bandwidth probably getting smashed, and I also imagine that both IIS and MSSQL will be dying of exhaustion.
So my question is: I guess the bandwidth issue is only about hosting? But how can I distribute this load so IIS and MSSQL can perform without being exhausted?
I'd really appreciate an explanation of how this can be achieved. It's probably standard knowledge, but for me it's a bit of a mystery; I know it can be done when I look at Dropbox and the like, it's just a big question how I can do it.
Thanks a lot.
You will need to consider some form of load balancing. Since you are using IIS, I'm assuming that you are hosting on Windows Server, which provides a software-based network load balancer. See the Network Load Balancing Overview.
You need to identify the performance bottlenecks and then plan to reduce them. A sledgehammer approach here might not be the best idea.
Set up performance counters and record a day or two's worth of data. See this link on how to do SQL Server performance troubleshooting.
The bandwidth might be just one of the problems. By setting up performance counters and analysing what is actually happening, you will be able to plan a better solution with the right data.
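As a rough illustration, counters can be collected from the command line with typeperf; the counter paths below assume a default SQL Server instance and are only a starting point:

typeperf "\Processor(_Total)\% Processor Time" "\Memory\Available MBytes" "\SQLServer:SQL Statistics\Batch Requests/sec" -si 15 -o perf.csv -f CSV

This samples every 15 seconds into perf.csv; let it run over a representative day and look at which resource saturates first.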
