Azure database latency killing app

I've done a lot of reading about Azure SQL Database latency on Stack Overflow and various blogs around the web, but I cannot figure out what is causing the high latency I'm experiencing between my Azure website and my Azure database.
I noticed my app was running very slowly, so I clocked the time to run a query (the queries themselves take ~0 ms for the DB to execute). On average, it takes 175 ms to execute a query and get a response from the DB. If I do 10 queries in a single page load, that's 1.75 seconds in latency alone! I get much better performance than that from a budget host running MySQL.
Any advice on how to address this issue is appreciated.

It looks like the database was in a different region than my website. Moving it into the same datacenter took the latency down from ~175ms to ~30ms.
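For anyone hitting the same thing: a quick way to confirm that the gap is network round-trip rather than query execution is to compare the server-side elapsed time with what the app measures. A minimal sketch to run in SSMS against the database (the table name is a placeholder, not from the original post):

-- If the elapsed time reported here is near 0 ms while the app measures
-- ~175 ms per call, the difference is network round-trip latency.
SET STATISTICS TIME ON;
SELECT TOP (1) * FROM [dbo].[YourTable]; -- hypothetical table
SET STATISTICS TIME OFF;

If the two numbers diverge that strongly, check the region of both resources before anything else.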

Related

Server Slowness

We created an ASP.NET MVC web application using C#, with a SQL Server database, hosted on an AWS server.
The issue is that when we increase the load on the web application, the whole application goes down. When we check the monitoring graphs on AWS, we find that CPU utilization is very high, around 70 to 80%; normally it's around 20 to 25%. There is also a strange pattern: at 5 AM UTC the application starts working properly again.
I tried optimizing my stored procedures and increasing the AWS server capacity, and I checked Activity Monitor in SQL Server, but I could not find out why the application goes down under load every day. What might be the reason?
Here are the related screenshots, please review: the CPU utilization graph and the Database Activity Monitor.
Please let me know how to find out the cause of these issues.
Providing some basic insights here: the three most common causes of server slowdown are CPU, RAM, and disk I/O. High CPU usage can slow down the host as a whole and make it hard to complete tasks on time. top and sar are two tools you can use to check CPU usage.
Since you are using stored procedures, if a procedure spends a long time querying, this will result in I/O waits, which ultimately slow the server down. This is a basic troubleshooting doc.
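To go beyond guesswork, SQL Server can report which statements consume the most CPU. Here is a minimal sketch using the standard execution-stats DMVs (nothing in it is specific to your schema):

-- Top 10 statements by average CPU time (microseconds) since the plan cache was cleared
SELECT TOP (10)
    qs.total_worker_time / qs.execution_count AS [avg_cpu_time_us],
    qs.execution_count,
    SUBSTRING(st.[text], (qs.statement_start_offset / 2) + 1,
        ((CASE qs.statement_end_offset
              WHEN -1 THEN DATALENGTH(st.[text])
              ELSE qs.statement_end_offset
          END - qs.statement_start_offset) / 2) + 1) AS [statement_text]
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.[sql_handle]) AS st
ORDER BY qs.total_worker_time / qs.execution_count DESC;

If the same stored procedures top this list every day around the time of the slowdown, that points at a plan or indexing problem rather than raw capacity.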

What is the maximum throughput or upload limit on the amount of simultaneous data being transferred using an Azure IRT?

Is there a maximum throughput or upload limit on the amount of simultaneous data that can be transferred using an Azure IRT as part of an Azure Data Factory pipeline before the pipeline and/or an activity times out or fails?
Good question. The answer is that we cannot tell.
There is an interesting article about Hyperscale throughput, but most of the research I did around throughput didn't give me a concrete answer.
On the same article you can find a comment that states:
Don't bother with Azure Data Factory. For some reason, with type casting from blob to Azure SQL, but also with Azure SQL Database as a source, the throughput is dramatic. From 6 MB/s to 13 MB/s on high service tiers for transferring 1 table, 5 GB in total. That is beyond bad.
So there are too many factors to account for: where the data comes from, where it goes, the products you are using, etc.
You might need to open a (paid) ticket with Azure support and ask whether there is a hardcoded limit that triggers a timeout. But in my experience, I've seen a simple database migration from SQL Server to Azure SQL DB fail for no reason and then complete on a second try.

Azure SQL GeoReplication - Queries on secondary DB are slower

I've set up two SQL DBs on Azure with geo-replication: the primary in Brazil and a secondary in West Europe.
Similarly, I have two web apps running the same web API: a Brazilian web app that reads from and writes to the Brazilian DB, and a European web app that reads from the European DB and writes to the Brazilian DB.
When I test response times of read-only queries with Postman from Europe, I first notice that on an initial "cold" call the European web app is twice as fast as the Brazilian one. However, on immediately subsequent calls, response times on the Brazilian web app drop to 10% of the initial "cold" call, whereas response times on the European web app remain the same. I also notice that after a few minutes of inactivity, results are back to the "cold" case.
So:
1. Why do query response times drop in Brazil?
2. Whatever the answer to 1 is, why doesn't the same happen in Europe?
3. Why doesn't the response time improvement from 1 last after a few minutes of inactivity?
Note that both web apps and both DBs are created as copies of each other (except for geo-replication) from an Azure ARM JSON file.
Both web apps have Always On enabled.
Thank you.
UPDATE
Actually, there are several parts in play in what I see as an end user: the web apps and the DBs. I wrote this question thinking the issue was around the DBs and geo-replication; however, after trying @Alberto's script (see below) I couldn't see any difference in wait times when querying Brazil or Europe, so the problem may be in the web apps. I don't know how to analyse/test that further.
UPDATE 2
This may (or may not) be related to Query Store. I asked a new, more specific question on that subject.
UPDATE 3
Queries on the secondary database are not slower. My question was based on false conclusions. I won't delete it, as others took the time to answer it, and I thank them.
I was comparing query response times through REST calls to a web API running EF queries on a SQL Server DB. As REST calls to the web API located in the region querying the DB replica are slower than REST calls to the same web API deployed in another region targeting the primary DB, I concluded the problem was on the DB side. However, when I run the queries directly in SSMS, bypassing the web API, I observe almost no difference in response times between the primary and replica DBs.
I still have a problem but it's not the one raised in that question.
On Azure SQL Database, your database's memory utilization may be dynamically reduced after some minutes of inactivity; in this behavior Azure SQL differs from SQL Server on-premises. If you run a query two or three times, it then starts to execute faster again.
If you examine the query execution plan and its wait stats, you may find a wait named MEMORY_ALLOCATION_EXT for queries executed after the memory allocation has been shrunk by the Azure SQL Database service. Databases with a lot of activity and query execution may not see their memory allocation reduced. For more detailed information on my part, please read this Stack Overflow thread.
Also take into consideration that both databases should have the same service tier assigned.
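To verify the tier on each database, here is a quick check you can run on both sides (standard metadata function, nothing assumed about your setup):

-- Returns the service objective (e.g. S3, P1, GP_Gen5_2) and edition of the current database
SELECT DATABASEPROPERTYEX(DB_NAME(), 'ServiceObjective') AS [service_objective],
       DATABASEPROPERTYEX(DB_NAME(), 'Edition') AS [edition];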
Use the script below to determine query waits and see the difference in waits between the two regions.
-- Snapshot wait stats before running the test query
DROP TABLE IF EXISTS #before;
SELECT [wait_type], [waiting_tasks_count], [wait_time_ms],
       [max_wait_time_ms], [signal_wait_time_ms]
INTO #before
FROM sys.[dm_db_wait_stats];

-- Execute test query here
SELECT *
FROM [dbo].[YourTestQuery];
-- Finish test query

-- Snapshot wait stats after the test query
DROP TABLE IF EXISTS #after;
SELECT [wait_type], [waiting_tasks_count], [wait_time_ms],
       [max_wait_time_ms], [signal_wait_time_ms]
INTO #after
FROM sys.[dm_db_wait_stats];

-- Show accumulated wait time per wait type during the test query
SELECT [a].[wait_type], ([a].[wait_time_ms] - [b].[wait_time_ms]) AS [wait_time]
FROM [#after] AS [a]
INNER JOIN [#before] AS [b]
    ON [a].[wait_type] = [b].[wait_type]
ORDER BY ([a].[wait_time_ms] - [b].[wait_time_ms]) DESC;

Sudden drop in SQL Azure query performance after moving web app to Azure

What could explain this big drop in performance in an Azure SQL DB after moving the app from a hosted VPS to an Azure App Service?
Here's a typical view from Query Store's High Variation chart over the past two weeks. The red arrow indicates when I moved the production app from another hosting provider to an Azure App Service. Prior to moving the app, I experienced zero timeouts. Now, using the same Azure SQL DB, timeouts are triggered frequently by longish queries (though by no means arduous ones).
The only other change I made was to the user principal in the connection string. This user only has SELECT, INSERT, UPDATE, DELETE and EXECUTE permissions.
My theories are:
- something to do with networking between the app and the DB. Resiliency? But I have a SQL execution plan specified
- something wrong with the user I set up?
- bad plan regression (I have now enabled automatic FORCE PLAN tuning; see the statement at the end of this post)
- a problem caused by Hangfire running on two servers simultaneously (now mitigated by moving the HF tables to a new DB)
- something is triggering some kind of throttling that I cannot figure out (see the resource stats check after this list)
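On the throttling theory: Azure SQL Database records resource consumption roughly every 15 seconds for the last hour, so you can see whether the database sits near its tier limits when the timeouts fire. A minimal check (standard DMV, nothing specific to this app):

-- One row roughly every 15 seconds, retained for about an hour;
-- values near 100% when timeouts fire suggest the tier limit is the bottleneck
SELECT TOP (20)
    end_time,
    avg_cpu_percent,
    avg_data_io_percent,
    avg_log_write_percent,
    avg_memory_usage_percent
FROM sys.dm_db_resource_stats
ORDER BY end_time DESC;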
Here is a chart of timeouts from Log Analytics:
All help appreciated. Note: this site has had almost identical traffic over the past 30 days.
In fact, take a look at this from the SQL DB metrics over the past week:
And here is some Wait info - last 6 hours:
Blue = PARALLELISM
Orange = BUFFERIO
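For reference, the automatic FORCE PLAN tuning mentioned in the theories above is enabled with a single statement; this is the documented syntax for Azure SQL Database, shown here as a sketch:

-- Automatically revert to the last known good plan when a regression is detected
ALTER DATABASE CURRENT SET AUTOMATIC_TUNING (FORCE_LAST_GOOD_PLAN = ON);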

Using Amazon RDS while having your application run on-premises. Does it make sense?

I'm interested in taking advantage of Amazon's managed database (RDS), but at the same time I'd like my web application to run on-premises or on another cloud provider that offers data centers near me (less latency, as my application does not always have to fetch data from the DB).
Is this scenario common? Would it make sense, or is Amazon RDS supposed to be used with instances running in Amazon?
If you're looking to reduce latency, this is probably not your best option, as DB performance is going to be pretty bad (very large latencies between the web application and the DB server, basically cancelling out any advantage of having the app server as close to your clients as possible).
I've actually had to test a similar configuration, with a DB server in Europe and an app server in the US, and the performance was much worse than having both in either of the two regions.
I've recently moved the database for our web application to RDS whilst still keeping our application servers hosted on-premises. Eventually the entire application will live on AWS, but it was too big a move to migrate all components at the same time. I'm only dealing with latency between Perth and Sydney in Australia, which is about 50 ms, and this is working fine for us.
I don't recommend that you adopt this as a permanent configuration. You'll get much better performance from hosting your entire stack at AWS as opposed to keeping parts separated.
