Had a situation on a client's SQL Server 2014 instance yesterday evening: a stored procedure being called by an application suddenly started timing out, even though it had completed fine previously. My theory is that the execution plan for the query was somehow altered, and it wasn't until I allowed the application to run the query to completion that a new execution plan was created.
As far as I am aware, there are only two reasons an execution plan would change:
The SP is recompiled
The database is restarted
Is there any other reason for the execution plan to change, or another reason a stored procedure query would suddenly start timing out?
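As other posts below note, a statistics update will also invalidate a cached plan, so a plan can change without an explicit recompile or a restart. A minimal sketch, assuming a hypothetical procedure named dbo.MyProc, of how you might check when the current plan was cached and force a fresh compile:

```sql
-- When was the currently cached plan for the procedure created, and how is it doing?
-- dbo.MyProc is a placeholder name; substitute the real procedure.
SELECT OBJECT_NAME(ps.object_id, ps.database_id) AS proc_name,
       ps.cached_time,
       ps.execution_count,
       ps.total_elapsed_time / ps.execution_count AS avg_elapsed_microseconds
FROM sys.dm_exec_procedure_stats AS ps
WHERE OBJECT_NAME(ps.object_id, ps.database_id) = 'MyProc';

-- Force a new plan to be compiled the next time the procedure runs.
EXEC sp_recompile N'dbo.MyProc';
```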
Related
We upgraded from SQL Server 2008 to SQL Server 2014. The upgrade itself was successful.
However, we have since had optimization problems. Some queries have started to cause blocking; often the blocking clears on its own, but sometimes the database just stops making progress.
Our workaround for this is to change MAXDOP. I don't know what exactly the change frees up, but afterwards everything runs the way it did before the database jammed. I have no idea what else to do about it.
Our SQL Server configuration
We have already changed the cost threshold for parallelism and MAXDOP settings; it doesn't help much. I've also optimized the queries that cause the blocking.
The problem keeps coming back. Oddly enough, changing MAXDOP relieves the blockage every time: the system frees up completely and the SQL queries go through and execute.
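For reference, a minimal sketch of the kind of instance-level change described above; the values shown are placeholders, not recommendations from the post:

```sql
-- Show and change the instance-level parallelism settings.
-- The values 4 and 50 are placeholders; choose ones suited to your hardware and workload.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

EXEC sp_configure 'max degree of parallelism', 4;
EXEC sp_configure 'cost threshold for parallelism', 50;
RECONFIGURE;
```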
Performance issues can arise for a lot of reasons; an improper MAXDOP setting is just one of them.
Run a health check with sp_Blitz
Run [sp_Blitz](https://github.com/BrentOzarULTD/SQL-Server-First-Responder-Kit#sp_blitz-overall-health-check) to see what is actually causing your performance bottleneck.
Check the findings with priorities 1 to 50; those are the most crucial.
Start fixing them one by one.
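A minimal sketch of how such a check might be run once the First Responder Kit is installed; the parameter values are illustrative only:

```sql
-- Overall instance health check; prioritized findings come back as a result set.
EXEC dbo.sp_Blitz @CheckServerInfo = 1;

-- A lighter run that skips per-database object checks when you only want
-- instance-level findings.
EXEC dbo.sp_Blitz @CheckUserDatabaseObjects = 0;
```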
I am running my Spring application on Tomcat 7. I use Java 7 and jTDS 1.3.1.
I am using SQL Server 2014, and the instance is running on a separate machine.
Sometimes, when I run queries through my application, a simple select query (no joins, literally just a SELECT) that takes 1 second when run through SQL Server Management Studio will take 20 minutes or more to complete.
If I check the query on the SQL Server instance, I see that the total elapsed time keeps incrementing, but the CPU time never increases. Also, the status of the query always seems to be 'sleeping'.
I know this is most likely not a problem with SQL Server itself, as the exact same query finishes instantly when run through SQL Server Management Studio. The fact that the query status seems to be constantly 'sleeping' also makes me suspect this.
But beyond this I have no idea how to debug what is going on. Is there anything I can try to change that might help with this issue?
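One way to dig further, sketched below, is to look at what the session is doing on the server while the query appears stuck: a 'sleeping' session whose elapsed time keeps growing often means the server has finished its part and is waiting on the client, or that a transaction has been left open. The session_id value is a placeholder.

```sql
-- Replace 123 with the session_id of the stuck connection.
SELECT s.session_id,
       s.status,                      -- 'sleeping' = no active request on the server
       s.last_request_end_time,
       st.text AS last_sql_text,      -- last statement the connection sent
       tst.transaction_id             -- a row here means an open transaction
FROM sys.dm_exec_sessions AS s
JOIN sys.dm_exec_connections AS c
  ON c.session_id = s.session_id
OUTER APPLY sys.dm_exec_sql_text(c.most_recent_sql_handle) AS st
LEFT JOIN sys.dm_tran_session_transactions AS tst
  ON tst.session_id = s.session_id
WHERE s.session_id = 123;

-- If there is an active request instead, see what it is waiting on.
SELECT r.session_id, r.status, r.wait_type, r.wait_time, r.blocking_session_id
FROM sys.dm_exec_requests AS r
WHERE r.session_id = 123;
```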
Environment:
ASP.NET MVC 5.2.3.0
SQL Server 2014 (v12.0.2000.8)
Entity Framework 6
hosted on Azure
We have one page that gets data from the database using a stored procedure.
Lately we've noticed that sometimes this page takes about 20 seconds to load, so we started to investigate the problem. I tried executing this stored procedure directly from Management Studio and it took roughly 150 ms:
The next thing I did was create a console application that connects to the Azure SQL database and executes this stored procedure:
I've also tried using SqlQuery from EF 6:
Same thing.
An important detail: this is not a permanent problem. Sometimes it occurs and sometimes it works just fine, roughly 50/50.
I've checked the database load in the Azure portal: it is at about 50% DTU usage during the performance issue. But I don't think this is related to database load, because the procedure executes fast from Management Studio.
Currently I have no idea what the problem is, so I need help. I would also note that a lot of employees use this page (which executes the stored procedure) all the time; maybe that is somehow related to the problem.
So the question: why does it take so long to execute this stored procedure using ADO.NET / EF?
Do some debugging.
The potential culprits mostly include:
Database-side locking that is not released quickly, leaving the stored procedure waiting.
Parameter sniffing, where a cached query plan is not optimal for a specific set of parameters (which may also lead to locking that blocks you). This is a stored procedure problem - someone did not write proper SQL for cases like that.
The info you give is irrelevant. See, stored procedures are NOT EXECUTED IN EF6 - EF6 forwards them to ADO.NET, which sends them to the database. Since, as you say, they run slow IN THE DATABASE, any C#-level debugging is as useless for this particular question as the menu from my local pizzeria. You have to go down and debug and analyze what happens on the database.
The SSMS screenshot you provide is totally useless - you need to run the SP in SSMS, for a case where the problem happens, and then use the query plan and proper analysis traces to see what happens.
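As a starting point for that database-side analysis, one option is to pull the plan and runtime statistics the application's calls have actually cached; a minimal sketch, with dbo.MyProc as a placeholder procedure name:

```sql
-- Cached plans and runtime stats for the procedure the application calls.
-- 'MyProc' is a placeholder; substitute the real procedure name.
SELECT OBJECT_NAME(ps.object_id, ps.database_id) AS proc_name,
       ps.execution_count,
       ps.total_elapsed_time,
       ps.total_worker_time,
       ps.last_elapsed_time,
       qp.query_plan                  -- click through in SSMS to view the plan
FROM sys.dm_exec_procedure_stats AS ps
CROSS APPLY sys.dm_exec_query_plan(ps.plan_handle) AS qp
WHERE OBJECT_NAME(ps.object_id, ps.database_id) = 'MyProc';
```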
We are running SQL Server 2012, and all the developers can execute a specific stored procedure (which is overly complex); it takes a varying amount of time depending on the machine, anywhere up to 20 seconds.
Right now we are hosting the SQL Server instances locally and passing around one backup of the database to work from (we don't want a single shared instance for dev work).
On one particular machine, it will not finish executing at all. The machines are all identical, and the settings appear to be the same.
Has anyone experienced this before? What are some things that we can try on this specific SQL Server instance to get it working?
We tried restarting the machine and the services, DBCC DROPCLEANBUFFERS, DBCC FREEPROCCACHE, and inspecting table locks, with no luck.
Thanks!
The solution we found that finally fixed the problem was to rebuild all the indexes. They had become so fragmented, and queries consequently so slow, that the stored procedures were timing out.
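A minimal sketch of how fragmentation might be checked and a heavily fragmented index rebuilt; the index and table names are placeholders:

```sql
-- Fragmentation for all indexes in the current database (limited scan).
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name AS index_name,
       ips.avg_fragmentation_in_percent,
       ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 30
ORDER BY ips.avg_fragmentation_in_percent DESC;

-- Rebuild one of the offenders (placeholder names).
ALTER INDEX IX_MyIndex ON dbo.MyTable REBUILD;
```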
We have a site in development that, when we deployed it to the client's production server, started producing query timeouts after a couple of hours.
This was with a single user testing it, and on our own server (which is identical in terms of SQL Server version - 2005 SP3) we have never had the same problem.
One of our senior developers had come across similar behaviour in a previous job, and he ran a query to manually update the statistics and the problem magically went away - the query returned in a few milliseconds.
A couple of hours later, the same problem occurred. So we again manually updated the statistics and, again, the problem went away. We've checked the database properties and, sure enough, auto update statistics is TRUE.
As a temporary measure, we've set up a task to update stats periodically, but clearly this isn't a good solution.
The developer who experienced this problem before is certain it's an environment problem - when it occurred for him previously, it went away of its own accord after a few days.
We have examined the SQL Server installation on their DB server and it's not what I would regard as normal. Although they have SQL Server 2005 installed (and not 2008), there's an empty "100" folder in the installation directory. There are also MSSQL.1, MSSQL.2, MSSQL.3 and MSSQL.4 folders (the last of which is where the executables and data are actually stored).
If anybody has any ideas we'd be very grateful - I'm of the opinion that, rather than the statistics failing to update, they are somehow becoming corrupt.
Many thanks
Tony
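For reference, a minimal sketch of the kind of manual statistics update described in the question; the table name is a placeholder, since the posts don't show the exact command used:

```sql
-- Update statistics for every table in the current database.
EXEC sp_updatestats;

-- Or target a single table with a full scan (placeholder table name).
UPDATE STATISTICS dbo.MyTable WITH FULLSCAN;

-- Check when statistics on that table were last updated.
SELECT s.name AS stats_name,
       STATS_DATE(s.object_id, s.stats_id) AS last_updated
FROM sys.stats AS s
WHERE s.object_id = OBJECT_ID('dbo.MyTable');
```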
Disagreeing with Remus...
Parameter sniffing allows SQL Server to guess the optimal plan for a wide range of input values. Sometimes it's wrong, and the plan is bad because of an atypical value or a poorly chosen default.
I used to be able to demonstrate this on demand by changing a default between 0 and NULL: the plan and performance changed dramatically.
A statistics update will invalidate the plan, so the query will be recompiled and cached the next time it is used.
The workarounds are one of the following:
parameter masking
use the OPTIMIZE FOR UNKNOWN hint
duplicate "default"
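A minimal sketch of the first two workarounds, using placeholder procedure, table, and column names (OPTIMIZE FOR UNKNOWN requires SQL Server 2008 or later):

```sql
-- Workaround 1: parameter masking. Copy the parameter into a local variable so
-- the optimizer cannot sniff the caller's value at compile time.
CREATE PROCEDURE dbo.GetOrders_Masked
    @CustomerId int
AS
BEGIN
    DECLARE @CustomerIdMasked int;
    SET @CustomerIdMasked = @CustomerId;

    SELECT OrderId, OrderDate
    FROM dbo.Orders
    WHERE CustomerId = @CustomerIdMasked;
END;
GO

-- Workaround 2: OPTIMIZE FOR UNKNOWN, which builds the plan from average
-- density rather than the sniffed value.
CREATE PROCEDURE dbo.GetOrders_Unknown
    @CustomerId int
AS
BEGIN
    SELECT OrderId, OrderDate
    FROM dbo.Orders
    WHERE CustomerId = @CustomerId
    OPTION (OPTIMIZE FOR (@CustomerId UNKNOWN));
END;
GO
```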
See these SO questions
Why does the SqlServer optimizer get so confused with parameters?
At some point in your career with SQL Server does parameter sniffing just jump out and attack?
SQL poor stored procedure execution plan performance - parameter sniffing
Known issue?: SQL Server 2005 stored procedure fails to complete with a parameter
...and a Google search on SO will turn up more.
Now, Remus works for the SQL Server development team. However, this phenomenon is well documented by Microsoft on their own website, so blaming developers is unfair:
How Data Access Code Affects Database Performance (MSDN mag)
Suboptimal index usage within stored procedure (MS Connect)
Batch Compilation, Recompilation, and Plan Caching Issues in SQL Server 2005 (an excellent white paper)
It's not that the statistics are outdated. What happens is that when you update statistics, all plans get invalidated and the bad cached plan gets evicted along with them. Things run smoothly until a bad plan gets cached again and causes slow execution.
The real question is why you get bad plans in the first place. We can get into lengthy technical and philosophical arguments about whether a query processor should ever create a bad plan, but the fact is that when applications are written in a certain way, bad plans can happen. The typical example is a WHERE clause like (@somevariable IS NULL) OR (somefield = @somevariable). Ultimately, 99% of bad plans can be traced back to developers writing queries with C-style procedural expectations instead of sound, set-based, relational processing.
What you need to do now is identify the bad queries. It's really easy: just check sys.dm_exec_query_stats; the bad queries will stand out in terms of total_elapsed_time and total_logical_reads. Once you've identified the bad plan, you can take corrective measures, which vary from query to query.
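A minimal sketch of that kind of check against sys.dm_exec_query_stats; the TOP 20 cut-off is arbitrary:

```sql
-- Top statements by cumulative elapsed time, with logical reads and the statement text.
SELECT TOP (20)
       qs.total_elapsed_time,
       qs.total_logical_reads,
       qs.execution_count,
       qs.total_elapsed_time / qs.execution_count AS avg_elapsed_time,
       SUBSTRING(st.text,
                 qs.statement_start_offset / 2 + 1,
                 CASE qs.statement_end_offset
                      WHEN -1 THEN DATALENGTH(st.text)
                      ELSE (qs.statement_end_offset - qs.statement_start_offset) / 2 + 1
                 END) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_elapsed_time DESC;
```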