Slow SQL query execution within the application - sql-server

I'm trying to execute a query from my ASP.NET application. I'm using ADO commands to execute a function in an Azure SQL database. My application has high traffic during work hours, and this particular query takes too long to execute and eventually times out. I then have to use the command below to clear the plan cache, which fixes the issue temporarily:
ALTER DATABASE SCOPED CONFIGURATION CLEAR PROCEDURE_CACHE;
The query takes less than a second to execute from SSMS. After clearing the cache with the above command, everything works fine for a few days, then the issue comes back.
Can someone help me understand the correlation between clearing the cache and execution time in the ASP.NET application? How can I fix it permanently?
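Clearing the entire procedure cache is a heavy hammer. A narrower diagnostic, not shown in the original post (the function name and text filter are illustrative), is to locate the one cached plan involved and evict only it; Azure SQL Database accepts a specific plan_handle on the same command:

```sql
-- Find cached plans that reference the slow call (the text filter is illustrative)
SELECT cp.plan_handle, cp.usecounts, st.text
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE st.text LIKE '%dbo.fn_MySlowFunction%';  -- hypothetical function name

-- Evict just that one plan instead of the whole cache
ALTER DATABASE SCOPED CONFIGURATION CLEAR PROCEDURE_CACHE 0x06000500;  -- replace with the plan_handle returned above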

Related

SQL Server Query Plan creation in SSMS vs Application Server

I have the following scenario:
After a database deployment we have a .Net application server that is attempting to execute a stored procedure.
The timeout on the application is 30 seconds. When the application first attempts to execute the stored procedure, an attempt is made to compile a new query plan, but this takes longer than 30 seconds and the application hits many timeouts. My experience is that if the stored procedure is run manually (with representative data inputs) from SSMS, the first run takes about 1-2 minutes, a plan gets generated, and the application then runs smoothly.
I work with a third-party company, and a DBA there is claiming the following:
"Manually invoking this stored procedure will create a plan that is specific to the connection properties used (SSMS), the plan generated would not be used when the procedure is invoked by an application server."
Is this correct? It seems like poor design if the query plan used were linked to connection properties. Is there a difference between the query plan created when you run the stored procedure manually in SSMS and the one used when it is executed by an application?
If so, what is the optimal way to resolve this issue? Is increasing the timeout the only option?
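The DBA's claim has a kernel of truth: cached plans are keyed by session SET options, and SSMS defaults to SET ARITHABORT ON while ADO.NET clients default to OFF, so the two can end up with separate cache entries. A quick way to check for this (a sketch; the procedure name is an assumption) is to inspect the set_options attribute of the cached plans:

```sql
-- Each distinct set_options value gets its own plan-cache entry
SELECT st.text, pa.attribute, pa.value, cp.usecounts
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
CROSS APPLY sys.dm_exec_plan_attributes(cp.plan_handle) AS pa
WHERE pa.attribute = 'set_options'
  AND st.text LIKE '%MyStoredProc%';  -- hypothetical procedure name
```

If you see two rows with different set_options values, running SET ARITHABORT OFF in the SSMS session before invoking the procedure should reproduce (and pre-warm) the plan the application will actually use.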

Azure SQL - Running a stored procedure manually from SSMS takes 16 minutes, running it with logic apps takes > 12 hours

I have a stored procedure in an Azure SQL database that does two simple things:
Rebuilds indexes based on fragmentation level (once an index gets above a certain level of fragmentation it'll rebuild it).
Updates SQL statistics
When I run this manually in SSMS it can take anywhere from 15-30 minutes. However, when I run it from Logic Apps, sometimes it runs just fine, and other times it runs until the timeout I have set (12 hours) and then fails. Why would running the procedure from Logic Apps get stuck when running it manually always works?
I'm assuming that the logic app only fails when there's an index that needs to be rebuilt, because after I run the procedure manually, the logic app seems to complete just fine until another index needs rebuilding.
Also, I never had this issue with whatever I was using to run the stored procedure before I had to move it to Logic Apps after Azure deprecated it (I can't remember what ran the job last time).
I'd appreciate any help or troubleshooting steps anyone can provide here. Thank you!
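The post doesn't include the procedure itself; a minimal sketch of the fragmentation-driven maintenance it describes (the thresholds and the ONLINE option are assumptions, not the poster's actual code) might look like:

```sql
-- Rebuild or reorganize indexes depending on fragmentation level
DECLARE @sql nvarchar(max);
DECLARE cur CURSOR LOCAL FAST_FORWARD FOR
    SELECT 'ALTER INDEX ' + QUOTENAME(i.name) + ' ON '
         + QUOTENAME(s.name) + '.' + QUOTENAME(o.name)
         + CASE WHEN ps.avg_fragmentation_in_percent > 30  -- assumed rebuild threshold
                THEN ' REBUILD WITH (ONLINE = ON);'
                ELSE ' REORGANIZE;' END
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ps
    JOIN sys.indexes AS i ON i.object_id = ps.object_id AND i.index_id = ps.index_id
    JOIN sys.objects AS o ON o.object_id = ps.object_id
    JOIN sys.schemas AS s ON s.schema_id = o.schema_id
    WHERE ps.avg_fragmentation_in_percent > 10  -- assumed reorganize threshold
      AND i.name IS NOT NULL;
OPEN cur;
FETCH NEXT FROM cur INTO @sql;
WHILE @@FETCH_STATUS = 0
BEGIN
    EXEC sp_executesql @sql;
    FETCH NEXT FROM cur INTO @sql;
END
CLOSE cur;
DEALLOCATE cur;

EXEC sp_updatestats;  -- then refresh statistics
```

An ONLINE rebuild waits on concurrent activity rather than blocking it, which is one plausible reason the same procedure sometimes finishes in minutes and sometimes hangs for hours when triggered at a busy time.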

Why does it take so long to execute a stored procedure using ado.net / EF 6?

Environment:
ASP.NET MVC 5.2.3.0
SQL Server 2014 (v12.0.2000.8)
Entity Framework 6
hosted on Azure
We have one page that gets data from the database using a stored procedure.
Lately we've noticed that sometimes this page takes about 20 seconds to load, so we started to investigate. I tried to execute this stored procedure directly from Management Studio and it took about 150 ms.
So the next thing I did was create a console application that connects to the Azure SQL database and executes this stored procedure. I've also tried to use SqlQuery from EF 6. Same thing in both cases.
Important thing: this is not a permanent problem. Sometimes the problem occurs, sometimes it works just fine - about 50/50.
I've checked the database load in the Azure portal - it is about 50% DTU usage during the performance issue. But I don't think this is related to database load, because the procedure executes fast from Management Studio.
Currently I have no idea what the problem is, so I need help. I should note that a lot of employees use this page (which executes the stored procedure) all the time. Maybe that is somehow related to the problem.
So question: why does it take so long to execute this stored procedure using ado.net / EF?
Do some debugging.
The most likely culprits are:
Database-side locking that is not released quickly, leaving the stored procedure waiting.
Parameter sniffing, where the cached query path is not optimal for a specific set of parameters (which may in turn lead to locking that blocks you). This is a stored procedure problem - the SQL was not written to handle cases like that.
The info you give is irrelevant. Stored procedures are not executed in EF6 - EF6 forwards them to ADO.NET, which sends them to the database. Since, as you say, they run slowly in the database, any C#-level debugging is as useless as the menu from my local pizzeria for this particular question. You have to go down and debug and analyze what happens on the database.
The SSMS screenshot you provide is not useful either - you need to run the stored procedure in SSMS while the problem is occurring, and then use the query plan and proper analysis traces to see what happens.
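Following that advice, one way to see what the database is doing at the moment the application call is slow (a generic sketch, using no names from the post) is to look at the live request, its waits, and its plan:

```sql
-- While the application call is slow, inspect the live request:
-- wait_type and blocking_session_id distinguish locking from a bad plan
SELECT r.session_id, r.status, r.wait_type, r.blocking_session_id,
       r.cpu_time, r.total_elapsed_time, st.text, qp.query_plan
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS st
OUTER APPLY sys.dm_exec_query_plan(r.plan_handle) AS qp
WHERE r.session_id <> @@SPID;
```

A non-null blocking_session_id points at the locking explanation; a plan that looks wrong for the parameter values points at parameter sniffing.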

"Fire and forget" T-SQL query in SSMS

I have an Azure SQL Database where I sometimes want to execute ad-hoc SQL statements, that may take a long time to complete. For example to create an index, delete records from a huge table or copy data between tables. Due to the amounts of data involved, these operations can take anywhere from 5 minutes to several hours.
I noticed that if a SQL statement is executed in SSMS and SSMS loses its connection to the server before execution completes, the entire transaction is automatically rolled back. This is problematic for very long-running queries, for example in case of local wifi connectivity issues, or if I simply want to shut down my computer and leave the office.
Is there any way to instruct SQL Server or SSMS to execute a SQL statement without requiring an open connection? We cannot use SQL Server Agent jobs, as this is an Azure SQL DB, and we would like to avoid solutions based on other Azure services if possible, as this is just for simple ad-hoc needs.
We tried the "Discard results after execution" option in SSMS, but this still keeps an open connection until the statement finishes executing.
I am not looking for an asynchronous solution, as I don't really care about the execution result (I can always check whether the query is still running using, for example, sys.dm_exec_requests). In other words, I want a simple "fire and forget" mechanism for T-SQL queries.
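The sys.dm_exec_requests check mentioned above can be as simple as the following (a sketch; the status filter is an assumption about which states matter here):

```sql
-- Check whether the long-running statement is still executing
SELECT session_id, status, command, percent_complete,
       total_elapsed_time / 1000 AS elapsed_seconds
FROM sys.dm_exec_requests
WHERE session_id <> @@SPID
  AND status IN ('running', 'runnable', 'suspended');
```

percent_complete is only populated for a few operations (backups, some index operations), so for most ad-hoc statements elapsed_seconds is the more useful column.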
While my initial requirements stated that we didn't want to use other Azure services, I have found that Azure Data Factory is the most cost-efficient and simple way to solve the problem. The other solutions proposed here suffer from either high cost (spinning up VMs) or timeout limitations (Azure Functions, Azure Automation runbooks), none of which apply to ADF when used for this purpose.
The idea is:
Put the long-running SQL statement into a Stored Procedure
Create a Data Factory pipeline with a Stored Procedure activity to execute the SP on the database. Make sure to set the Timeout and Retry values of the activity to sensible values.
Trigger the pipeline
Since no data movement is taking place in Data Factory, this solution is very cheap, and I have had queries running for 10+ hours using this approach, which worked fine.
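Step 1 above just means wrapping the ad-hoc statement in a procedure the pipeline can call; for example (the procedure and object names are illustrative, not from the post):

```sql
-- Wrap the long-running ad-hoc statement so a Data Factory
-- Stored Procedure activity can invoke it
CREATE OR ALTER PROCEDURE dbo.LongRunningMaintenance  -- hypothetical name
AS
BEGIN
    SET NOCOUNT ON;
    -- The long-running statement goes here, e.g.:
    CREATE INDEX IX_BigTable_SomeColumn
        ON dbo.BigTable (SomeColumn);  -- hypothetical index/table
END
```

The ADF activity then executes dbo.LongRunningMaintenance, and the connection lives on the ADF side, so shutting down your own machine no longer rolls anything back.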
If you can put the ad-hoc query in a stored procedure, you could then schedule it to run on the server, assuming you have the necessary privileges.
Note that this may not be a good idea, but it should work.
Unfortunately, I don't think you will be able to complete the query without an open connection in SSMS.
I can suggest the following approaches:
Pass the query to an Azure Function / AWS Lambda to execute on your behalf (perhaps exposed as a service via REST) and have it store or send the results somewhere accessible.
Start up a VM in the cloud and run the query from the VM via RDP. When you are ready, re-establish your RDP connection to the VM and view the outcome of the query.
Use an Azure Automation runbook to execute the query on a scheduled trigger.

SQL Server randomly 200x slower than normal for simple query

Sometimes queries that normally take almost no time to run at all suddenly start to take as much as 2 seconds to run. (The query is select count(*) from calendars, which returns the number 10). This only happens when running queries through our application, and not when running the query directly against the database server. When we restart our application server software (Tomcat), suddenly performance is back to normal. Normally I would blame the network, but it doesn't make any sense to me that restarting the application server would make it suddenly behave much faster.
My suspicion falls on the connection pool, but I've tried all sorts of different settings and multiple different connection pools and I still have the same result. I'm currently using HikariCP.
Does anyone know what could be causing something like this, or how I might go about diagnosing the problem?
Do you use stored procedures or ad-hoc queries? One reason to get different execution times when running a query in, say, Management Studio versus through a stored procedure in your application can be an inefficient cached execution plan, which could have been generated that way due to parameter sniffing. You could read more about it here, and there are a number of solutions you could try (like substituting parameters with local variables). If you restart the whole computer (and SQL Server is also running on it), then this could explain why you get fast queries right after a restart - the execution plans are cleared on reboot.
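The local-variable workaround mentioned above stops the optimizer from sniffing the caller's value, so the plan is compiled for the column's average density instead of one extreme parameter (a sketch with illustrative procedure, table, and column names):

```sql
CREATE OR ALTER PROCEDURE dbo.GetOrders  -- hypothetical procedure
    @CustomerId int
AS
BEGIN
    -- Copying the parameter into a local variable hides its value
    -- from the optimizer at compile time
    DECLARE @LocalCustomerId int = @CustomerId;

    SELECT OrderId, OrderDate
    FROM dbo.Orders                        -- hypothetical table
    WHERE CustomerId = @LocalCustomerId;
    -- Alternatives: OPTION (RECOMPILE) or OPTION (OPTIMIZE FOR UNKNOWN)
END
```

The trade-off is that every caller now gets the same "average" plan, which can be worse for genuinely skewed workloads; OPTION (RECOMPILE) avoids that at the cost of compiling on every call.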
It turned out we had a rogue process that was grabbing 64 connections to the database at once and using all of them for intense and inefficient work. We were able to diagnose this using jstack. We ran jstack when we noticed the system had slowed down a ton, and it showed us what the application was working on. We saw 64 stack traces all inside the same rogue process, and we had our answer!
