mssql/server:2019-latest on k8s shows high "other" system cpu utilization - sql-server

I am running "mcr.microsoft.com/mssql/server:2019-latest" on AWS (c5.2xlarge, Amazon Linux 2).
The EC2 node's Monitoring tab shows under 20% CPU utilization.
The only things running are my mssql/server:2019-latest container plus the AWS and Kubernetes system pods.
The performance dashboard in SQL Server shows the following (screenshots for c5.xlarge and c5.2xlarge):
As I go from c5.xlarge to c5.2xlarge, interaction with SSMS (SQL Server Management Studio) is slightly faster; however, CPU utilization in the SQL Server performance dashboard is unchanged. Why is this happening?
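When the host's monitoring and SQL Server's dashboard disagree about CPU, a common first step is to ask SQL Server itself how it splits CPU between its own process, idle time, and "other" processes. A sketch using the scheduler-monitor ring buffer (one-minute samples, works on SQL Server 2019):

```sql
-- Recent CPU breakdown as seen by SQL Server:
-- its own process vs. idle vs. everything else on the node.
SELECT TOP (10)
       record_id,
       SQLProcessUtilization,
       SystemIdle,
       100 - SystemIdle - SQLProcessUtilization AS OtherProcessUtilization
FROM (
    SELECT record.value('(./Record/@id)[1]', 'int') AS record_id,
           record.value('(./Record/SchedulerMonitorEvent/SystemHealth/SystemIdle)[1]', 'int') AS SystemIdle,
           record.value('(./Record/SchedulerMonitorEvent/SystemHealth/ProcessUtilization)[1]', 'int') AS SQLProcessUtilization
    FROM (
        SELECT CONVERT(xml, record) AS record
        FROM sys.dm_os_ring_buffers
        WHERE ring_buffer_type = N'RING_BUFFER_SCHEDULER_MONITOR'
          AND record LIKE '%<SystemHealth>%'
    ) AS raw_records
) AS parsed
ORDER BY record_id DESC;
```

If `OtherProcessUtilization` is high while `SQLProcessUtilization` is low, the CPU is being consumed outside the SQL Server process (for example by other containers on the node).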

Related

Server Slowness

We created an ASP.NET MVC web application using C#, with a SQL Server database, hosted on an AWS server.
The issue is that when we increase the load on the web application, the whole application goes down. The AWS monitoring graph shows CPU utilization climbing very high, around 70 to 80%; normally it is around 20 to 25%. There is also a strange pattern: at 5 AM UTC the application starts working properly again.
I tried optimizing my stored procedures and increasing the AWS server's capacity, and checked Activity Monitor in SQL Server, but could not find why it goes down under load every day. What might be the reason?
(Screenshots: CPU utilization graph and Database Activity Monitor.)
Please let me know how to find out the cause of these issues.
Providing basic insights here: the three most common causes of server slowdown are CPU, RAM, and disk I/O. High CPU use can slow down the host as a whole and make it hard to complete tasks on time. top and sar are two tools you can use to check the CPU.
Since you are using stored procedures, if their queries take a long time this will result in I/O waits, which ultimately slow the server down. This is a basic troubleshooting doc.
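Beyond host-level tools, SQL Server's own plan cache can show which statements are burning the CPU during the slow window. A hedged starting point (average CPU time is in microseconds; the cache is reset on restart, so run this while the problem is occurring):

```sql
-- Top cached statements by total CPU time since they were compiled.
SELECT TOP (10)
       qs.total_worker_time / qs.execution_count AS avg_cpu_microseconds,
       qs.execution_count,
       SUBSTRING(st.text,
                 (qs.statement_start_offset / 2) + 1,
                 ((CASE qs.statement_end_offset
                       WHEN -1 THEN DATALENGTH(st.text)
                       ELSE qs.statement_end_offset
                   END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;
```

If the same stored procedure dominates this list every day before the 5 AM UTC recovery, that is the place to focus; the recovery time itself often points at a scheduled job or cache/statistics refresh.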

Tomcat application runs slower when moved to PaaS platform on Azure

Currently my web application runs on two cloud-based Windows 2012 R2 servers: one running Tomcat and one running SQL Server Express. The Windows servers have 4 GB RAM and an Intel Xeon E5-2660 v2 CPU @ 2.20 GHz.
I am testing my application on PaaS (Platform as a Service) in Azure.
I created a Linux Web App with Tomcat 9.0 to hold the application (P1V2) and an Azure SQL Server to hold the database (S2)
One test I did was to compare the time it takes to produce an Excel report (using Apache POI) on both systems.
On the cloud system (running SQL Server Express) it took about 10 seconds. On Azure it took about 35 seconds.
Obviously I would like the Azure system to be at least as fast as the existing one, especially as the existing system runs SQL Server Express, which is capped at 1 GB of RAM and one core.
I have tried the following:
Checked to see if there are any spikes in the dashboard chart for the database. There are no significant ones; DTUs peak at 25%.
Added Query Performance Insight / Recommendations / Automate to automatically tune the database. This sped things up somewhat, but by no means enough.
I read Help, My Azure Site Performance Sucks! and Why is running a query on SQL Azure so much slower? and Azure SQL query slow
I checked that the database and application were in the same location. (West Europe)
I imagine that the problem is the database.
As an example, I found a query (using the Query Performance Insight / Long Running Queries) that runs in 2 seconds on Azure and in 0 seconds on SQL Server Express. Note that I am NOT asking how to optimize this query. Rather I am imagining that the fact that this query takes longer on Azure - with the same database schema, the same data and the same indexes - might be a clue as to how to speed up my application as a whole.
SELECT cp.*,
       (SELECT MIN(market_date)
        FROM mydb.rates ms
        WHERE ms.curr1 = cp.curr1
          AND ms.curr2 = cp.curr2) AS MIN_MARKETDATE
FROM pairs cp
ORDER BY curr1, curr2
The easiest way to do an apples-to-apples comparison is to use the vCore model for your Azure SQL. You say you are using an S2 database which is 50 DTU or half of a core. You would need to scale up to at least an S3 to be the equivalent of a 1 core VM.
That will ensure you are testing with the same general setup and should help you to match performance.
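Before (or while) scaling, it can also help to confirm which resource dimension the S2 is actually hitting. On Azure SQL Database, recent usage in 15-second slices is exposed in `sys.dm_db_resource_stats`; a sketch, to be run in the user database itself:

```sql
-- Last ~5 minutes of resource usage, as a percentage of the tier's limits.
SELECT TOP (20)
       end_time,
       avg_cpu_percent,
       avg_data_io_percent,
       avg_log_write_percent,
       dtu_limit
FROM sys.dm_db_resource_stats
ORDER BY end_time DESC;
```

A DTU-model database is throttled by whichever of CPU, data I/O, or log write hits 100% first, so a 25% average in the dashboard can hide short bursts visible at this finer granularity.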

Too many hit in the application slows the performance down due to excessive RAM usage by SQL Server?

I have an overall system performance related question:
The architecture of our application is as follows:
Client end => an AngularJS app with service calls to get/post data; this client app is hosted on a Windows application server through IIS.
We have three Web API 2 applications hosted on the same application server in IIS; the data layer is mostly Entity Framework based. These APIs are called by the client app through AngularJS $http calls.
We have a separate database server with SQL Server 2014.
Server configuration is as follows:
Database Server => RAM 512 GB, Processor - 56 Core processor
Application Server => RAM 256 GB, Processor - 56 Core processor
IIS has 10 worker process.
Now the problem is:
Now the problem: when the application receives heavy traffic (approx. 1,000 hits within 10 minutes), SQL Server consumes almost 95% of the RAM and the applications become very slow.
I know that SQL Server does not release memory it has consumed until other processes need it, so RAM consumption by itself is probably not the issue. But once the application becomes slow, it stays slow until we restart the SQL Server service on the database server.
Please note that at this stage, application server processor usage is average 20%, RAM usage is average 15%.
Database server processor usage is average 25% and RAM usage is 95% as described earlier.
Is there any workaround to boost the performance? Is there an architectural fault here? Please help.
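One standard mitigation on a dedicated database server is to cap SQL Server's buffer pool so the OS (and anything else on the box) always has headroom, rather than letting SQL Server grow to 95% of 512 GB. A sketch; the 460800 MB (~450 GB) figure is an example value, not a recommendation for every workload:

```sql
-- Cap SQL Server's memory so the OS keeps headroom on a 512 GB box.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 460800;  -- example: ~450 GB
RECONFIGURE;
```

Note that high memory use is normal for SQL Server; if the slowdown persists until a service restart, it is worth capturing wait statistics while the system is slow rather than assuming RAM is the cause.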

Effect of Network Performance on Database (RDS)

I would like to know the effect of Network Performance (the exact name of the metric AWS uses for RDS instance types) on a DB.
I am loading Graph database with data at scale using parallel processing (multiple Pods in Kubernetes).
I noticed that by simply changing from one RDS instance type to one more powerful, and monitoring the DB metrics in AWS console, performance is doubled.
Performance figures that are doubled are:
VolumeWriteIOPs - doubled
Network Throughput - doubled
VolumeReadIOPs - tripled
As the better instance type has more CPU, RAM, disk, and possibly network performance (I believe there is an 'invisible' network performance tiering that is not shown in the instance specs), I suppose my question really is: for a (hypothetical) instance with the same CPU, same RAM, and same disk performance, what difference does network performance alone make to a DB?
Do DBs and RDS DBs process everything slower if the network performance is lower?
Or does it respond at the same speed, but only serve less connections (making the others wait)?
In my use case they are Kubernetes Pods which are writing to the DB, so does it serve each Pod more slowly, or is it non-responsive above a certain point?

SQL execution time took more time when database is in the different server

SQL execution takes longer to complete when the database is on a different server than when the web and database are on the same server:
If the database and web server are on one server: execution time is 3 minutes.
If the database and web server are on different servers: execution time is 12 minutes.
What are the things to check?
There are many factors that delay or slow down SQL execution; size of data, network bandwidth, and a poor execution plan are a few of them. You can start looking at these factors and narrow down your analysis. I would suggest starting by comparing the execution plans from both servers.
Assuming both servers (IIS and DB server) have the same specification and the same data size, chances are that your DB server configuration differs from what you have on the IIS machine. Check the memory and CPU settings. Do you see the same execution plan?
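Beyond comparing plans, accumulated wait statistics on the remote-database setup can show whether the extra 9 minutes is spent executing or waiting on the network. A sketch (the benign-wait filter is abbreviated; a large `ASYNC_NETWORK_IO` share suggests time lost shipping results to the web tier rather than running the query):

```sql
-- Top waits since the last service restart (or wait-stats clear).
SELECT TOP (10)
       wait_type,
       waiting_tasks_count,
       wait_time_ms,
       signal_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type NOT LIKE 'SLEEP%'        -- drop a few benign idle waits
  AND wait_type NOT IN ('LAZYWRITER_SLEEP', 'BROKER_TASK_STOP')
ORDER BY wait_time_ms DESC;
```

Run it on both configurations after a test run and compare which wait types grow.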
