Drupal cache queries run slowly - drupal-7

I have two Drupal sites, one on a testing server and one on my localhost. Both run the same source code against the same database.
According to the Devel module's query log, the site loads very slowly on localhost but not on the server, and the slow queries are the cache, session and history queries.
Neither environment uses Memcache. The innodb_buffer_pool_size on localhost (256 MB) is actually larger than on the server.
Could you please tell me why localhost is slower and how to configure it to behave like the testing server? Thanks.
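One way to narrow this down is to compare the MySQL configuration of the two environments directly. Below is a minimal sketch (Python with pymysql; hostnames and credentials are placeholders) that diffs the global server variables between localhost and the testing server:

```python
# Diff MySQL global variables between localhost and the testing server.
# Hostnames and credentials below are placeholders; adjust for your setup.
import pymysql

def server_variables(host, user, password):
    conn = pymysql.connect(host=host, user=user, password=password)
    try:
        with conn.cursor() as cur:
            cur.execute("SHOW GLOBAL VARIABLES")
            return dict(cur.fetchall())
    finally:
        conn.close()

local = server_variables("127.0.0.1", "root", "secret")
server = server_variables("testing.example.com", "root", "secret")

for name in sorted(set(local) & set(server)):
    if local[name] != server[name]:
        print(f"{name}: localhost={local[name]}  server={server[name]}")
```

Differences in write-related settings (for example innodb_flush_log_at_trx_commit or sync_binlog) would be the first candidates to look at, since the cache, session and history tables you mention are written very frequently.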

Related

Requests from a Docker container in a Google Cloud Run service to Google Cloud SQL take up to 2 minutes

I'm using Google Cloud Run to host my Spring application in a Docker container. The database runs in Google Cloud SQL. My problem is that requests from the application to the database can take up to 2 minutes. See the Google Cloud Run log (the long requests are highlighted in yellow). And here are the Dockerfile and Docker Compose file.
The database is quite empty: it contains about 20 tables, but each has only a few rows, so no request is bigger than a few kB. To make it stranger, after re-deploying the application the requests are fast again, but after a few minutes, hours, or even a whole day they slow down again. When I start the application on my local machine the requests are always fast (against both my local SQL instance and the Google Cloud SQL instance); I have never had a slow connection there. All actions within my application that don't require a DB request are still fast and take only a few ms.
Both services run in the same region (europe-west). CPU usage of the Cloud Run service never goes above 15%, and of the Cloud SQL instance never above 3%. The Cloud SQL instance has 1 CPU and 3.75 GB of RAM; the Cloud Run service has 2 CPUs and 4 GB of RAM. Increasing the resources of the Cloud Run service and Cloud SQL doesn't improve the request latency. Cloud SQL is running MySQL 5.7 (like my local DB).
Looking at the logs, only warnings show up in the filtered Cloud SQL log (I really don't know why that happens). Additionally, here are my DB connection settings in the Spring config, though I don't think they have any impact: the same config works perfectly when my local application connects to my local SQL instance or to the Google Cloud SQL instance.
Maybe one of you has an idea?
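As an illustrative aside, a quick probe that times connection setup and a trivial query separately can confirm where the latency sits. This is only a sketch in Python with pymysql rather than the actual Spring stack, and the host and credentials are placeholders:

```python
# Rough probe: how long does connecting take vs. executing a trivial query?
# Host and credentials are placeholders, not the real Cloud SQL settings.
import time
import pymysql

t0 = time.perf_counter()
conn = pymysql.connect(host="10.0.0.5", user="app", password="secret", database="appdb")
t1 = time.perf_counter()
with conn.cursor() as cur:
    cur.execute("SELECT 1")
    cur.fetchone()
t2 = time.perf_counter()
conn.close()

print(f"connect: {(t1 - t0) * 1000:.1f} ms, query: {(t2 - t1) * 1000:.1f} ms")
```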
While not a real answer, there is a bug filed at Google that is tracking the issue:
https://issuetracker.google.com/issues/186313545
This is really hurting our customers' experience and makes us lose trust in the service quality of Cloud Run, even more so when there is no feedback from Google to tell whether they are even addressing the issue.
Edit:
The issue now seems to be resolved, according to the interactions in https://issuetracker.google.com/issues/186313545

What steps should be considered for CPU/memory shortage in SQL Server

We recently faced an issue on a server where 12,000 concurrent users were trying to access an application but only 120 SQL Server connections were available.
The basic issue I've found is in the deployment architecture of the application and database:
DB & app on the same server
Data and log files of all databases, whether system or user, are on the system drive, i.e. C:\
Questions:
Which perfmon metrics should I look at, or what steps should I take, to prove that the points above are the root cause?
Beyond the two causes mentioned above, how do I correlate perfmon metrics/stats with a particular SQL Server query?
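For the correlation part, SQL Server's DMVs can be lined up against perfmon. Below is a minimal sketch (Python with pyodbc; the connection string is a placeholder) that pulls the statements consuming the most CPU and logical reads from sys.dm_exec_query_stats:

```python
# List the top CPU/read-heavy statements so they can be compared with perfmon
# counters. The connection string is a placeholder for your own server.
import pyodbc

QUERY = """
SELECT TOP 10
    qs.total_worker_time / qs.execution_count   AS avg_cpu_microseconds,
    qs.total_logical_reads / qs.execution_count AS avg_logical_reads,
    qs.execution_count,
    SUBSTRING(st.text, 1, 200)                  AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;
"""

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=myserver;DATABASE=master;Trusted_Connection=yes;"
)
for row in conn.cursor().execute(QUERY):
    print(row.avg_cpu_microseconds, row.avg_logical_reads,
          row.execution_count, row.statement_text)
```

If the statements at the top of this list coincide with spikes in Processor: % Processor Time and the PhysicalDisk counters for the C: drive, that is reasonable evidence that the co-located DB/app and the data files on C:\ are the bottleneck.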

Large Scale Distributed Application in AWS / Azure

I am working on a large-scale web app where the users can be anywhere in the world.
Considerations:
1. Web servers need to be distributed across possibly 3 data centers, possibly on 3 continents.
2. Each data center might have 2 web servers (ASP.NET) to start with and can scale out.
3. The database needs to be partitioned (SQL Server sharding). I am not thinking of separate database instances with mirroring.
4. The application will serve media content, so a CDN might be the right fit for that.
Hosting Options:
1. Azure/AWS IaaS: in this case the web and application servers need to be configured and managed by us.
2. Azure/AWS PaaS: here we get tied to vendor-specific tools, code blocks and "ways of doing things", and one fine morning they announce they are retiring a service we depend on (e.g. SQL Azure Federation). We also have to consider limits such as the 150 GB max database size for Azure SQL and throttling within the shared services.
So hosting option 1 looks like the safer bet.
Now my questions:
I need a load-balancing server in each data center that routes traffic to the 2 or more web servers in that data center. But how do I manage traffic from anywhere in the world so that it reaches the right data center? In the IaaS model, where do I put the load balancer that distributes web traffic across data centers?
I came across Azure Traffic Manager, which seems to take care of problem 1 above, but does it work with their IaaS offering? What is the equivalent in AWS? The goal is that when a user connects from APAC, they get directed to the data center in Asia.
In the sharding model, we want to partition specific database tables, not all of them. I am not very familiar with this: how does failover work in a sharded database? Can I have active/passive SQL servers for each member database in the federation? (BTW, is SQL Federation the same as sharding?)
The application itself is ASP.NET and SQL Server based.
Max size now 1 terabyte for SQL Azure - see https://azure.microsoft.com/en-us/pricing/details/sql-database/ and look at the premium tier...
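On the sharding question: whether you use SQL Azure Federations or roll your own partitioning, the core idea is a shard map that routes each key (customer, tenant, etc.) to one member database. Below is a minimal sketch of hash-based routing with hypothetical shard names and connection strings; a real setup would keep the map in a catalog database so members can be moved, and each member could then have its own active/passive failover pair:

```python
# Hash-based shard routing sketch. Shard hosts and connection strings are
# hypothetical; in production the map would live in a catalog database.
import hashlib

SHARDS = {
    0: "Server=shard0.example.com;Database=AppShard0;Integrated Security=true",
    1: "Server=shard1.example.com;Database=AppShard1;Integrated Security=true",
    2: "Server=shard2.example.com;Database=AppShard2;Integrated Security=true",
}

def shard_for(customer_id: str) -> str:
    """Return the connection string of the shard that owns this customer."""
    digest = hashlib.md5(customer_id.encode("utf-8")).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]

print(shard_for("customer-42"))
```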

Redis on web server front-end or database server back-end

I have two virtual private servers: the first is the web server front-end and the second is the database back-end. I want to use Redis for real-time stuff, and my question is: where should I install Redis? On the web server or the database server?
Pros of installing Redis on your Database Server:
The database size of Redis can become large if you have a lot of data. If you are storing stats and storing a lot of them, then your database can become a memory hog. You would not want to keep all that data in memory on your web server, as that could take away memory from your HTTP server.
It's called the database server for a reason.
Cons of installing Redis on your Database Server:
There will be a higher network response time when polling the server for data, as it is not local.
If the server goes down, then you would be out of data.
I personally would keep Redis on its own server, as you can be feeding it a lot of data, but it all depends on what environment you are working in. If speed is the top priority (an extra 50 ms or so would be unacceptable), then you should run it on your web server, as request times to 127.0.0.1 are a lot faster than to an external network address, even if it is inside your local subnet. If not, then you should keep it off the web server.
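To put a number on that local-versus-remote difference, you can time PING round trips against each candidate host. A minimal sketch using redis-py; the hostnames are placeholders:

```python
# Compare Redis round-trip latency from the web server to a local instance
# versus one on the database server. Hostnames are placeholders.
import time
import redis

def avg_ping_ms(host, runs=100):
    client = redis.Redis(host=host, port=6379)
    start = time.perf_counter()
    for _ in range(runs):
        client.ping()
    return (time.perf_counter() - start) / runs * 1000

print(f"local (127.0.0.1) : {avg_ping_ms('127.0.0.1'):.2f} ms per PING")
print(f"db server (remote): {avg_ping_ms('10.0.0.6'):.2f} ms per PING")
```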
Well, if Redis is being used as you said, and your web process does not use a lot of memory, I would put it on both and have replication to the DB server. This would provide redundancy and performance. That data seems more important than simple cache data, so redundancy would be nice.
If your web server has little free memory, or its free memory is smaller than your data size, keep it all on the DB server.

Move a web application to another server but use the same subfolder and domain

My web application is installed on Server A like this:
mywebsite.com/
--> GO to Server A
mywebsite.com/myApp/
--> GO to Server A
For performance reasons I would like to have /myApp/ on another server, Server B, while using the same domain:
mywebsite.com/
--> GO to Server A
mywebsite.com/myApp/
--> GO to Server B
How can I do this?
I use MS .NET 4 and IIS 7 on Windows Server.
Thanks
Do you have a load balancer in front of servers A and B that could direct the traffic appropriately? Do you have a firewall that could support a rule to route traffic for "/myApp" to B? Those would be my suggestions for ways to do it without changing the servers. If you did have to change the server, I would consider an ISAPI filter that moves the requests to the other machine, but this is probably better done by setting up different domain names, e.g. mywebsite.com and myapp.mywebsite.com, so that each can resolve to a different server. If you do that, then server A just has to redirect the request to the other server, which is a simpler solution. Anyway, those are a few different ways to do it that I can see.
The trade-offs between a firewall and a load balancer are probably quite minuscule, as many big sites have both load balancers and firewalls; I would question the cost difference of adding a load balancer versus changing firewall settings, and I would also investigate how feasible each option is for your situation.
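To make the redirect option concrete: server A only needs to answer /myApp/ requests with a redirect to wherever the app now lives and serve everything else itself. The sketch below is purely illustrative Python, not IIS configuration (in IIS 7 this job would typically be done with a URL Rewrite/ARR rule or the load-balancer rule described above), and the host name myapp.mywebsite.com is hypothetical:

```python
# Toy illustration of path-based splitting: redirect /myApp/* to Server B,
# serve everything else locally. Host name is hypothetical.
from http.server import BaseHTTPRequestHandler, HTTPServer

class Router(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path.startswith("/myApp/"):
            # Send the client to the server that now hosts the app.
            self.send_response(302)
            self.send_header("Location", "https://myapp.mywebsite.com" + self.path)
            self.end_headers()
        else:
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(b"Served locally by Server A\n")

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), Router).serve_forever()
```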
